# A reassessment of the Burns temperature and its relationship to the diffuse
scattering, lattice dynamics, and thermal expansion in the relaxor
Pb(Mg1/3Nb2/3)O3
P. M. Gehring,1 H. Hiraka,2,3 C. Stock,4 S.-H. Lee,1 W. Chen,5 Z.-G. Ye,5 S. B. Vakhrushev,6 and Z. Chowdhuri1

1NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland 20899-6100, USA
2Department of Physics, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
3Institute for Materials Research, Tohoku University, Sendai, 980-8577, Japan
4ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX, UK
5Department of Chemistry, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
6Ioffe Physical-Technical Institute, 26 Politekhnicheskaya, 194021 St. Petersburg, Russia
###### Abstract
We have used neutron scattering techniques that probe time scales from
$10^{-12}$ s to $10^{-9}$ s to characterize the diffuse scattering and low-
energy lattice dynamics in single crystals of the relaxor PbMg1/3Nb2/3O3 from
10 K to 900 K. Our study extends far below $T_{c}=213$ K, where long-range
ferroelectric correlations have been reported under field-cooled conditions,
and well above the nominal Burns temperature $T_{d}\approx 620$ K, where
optical measurements suggest the development of short-range polar correlations
known as “polar nanoregions” (PNR). We observed two distinct types of diffuse
scattering. The first is weak, relatively temperature independent, persists to
at least 900 K, and forms bow-tie-shaped patterns in reciprocal space centered
on $(h00)$ Bragg peaks. We associate this primarily with chemical short-range
order. The second is strong, temperature dependent, and forms butterfly-shaped
patterns centered on $(h00)$ Bragg peaks. This diffuse scattering has been
attributed to the PNR because it responds to an electric field and vanishes
near $T_{d}\approx 620$ K when measured with thermal neutrons. Surprisingly,
it vanishes at 420 K when measured with cold neutrons, which provide $\sim 4$ times better energy resolution. That this onset temperature depends so
strongly on the instrumental energy resolution indicates that the diffuse
scattering has a quasielastic character and demands a reassessment of the
Burns temperature $T_{d}$. Neutron backscattering measurements made with 300
times better energy resolution confirm the onset temperature of $420\pm 20$ K.
The energy width of the diffuse scattering is resolution limited, indicating
that the PNR are static on timescales of at least 2 ns between 420 K and 10 K.
Transverse acoustic (TA) phonon lifetimes, which are known to decrease
dramatically for wave vectors $q\approx 0.2$ Å-1 and $T_{c}<T<T_{d}$, are
temperature independent up to 900 K for $q$ close to the zone center. This
motivates a physical picture in which sufficiently long-wavelength TA phonons
average over the PNR; only those TA phonons having wavelengths comparable to
the size of the PNR are affected. Finally, the PMN lattice constant changes by
less than 0.001 Å below 300 K, but expands rapidly at a rate of $2.5\times
10^{-5}$ 1/K at high temperature. These disparate regimes of low and high
thermal expansion bracket the revised value of $T_{d}$, which suggests the
anomalous thermal expansion results from the condensation of static PNR.
###### pacs:
77.84.Dy, 77.65.-j, 77.80.Bh
## I Introduction
The concept of nanometer-scale regions of polarization, randomly embedded
within a non-polar cubic matrix, has become central to attempts to explain the
remarkable physical properties of relaxors such as PbMg1/3Nb2/3O3 (PMN) and
PbZn1/3Nb2/3O3 (PZN). Ye_review ; Park ; Bokov_rev ; Vug06:73 The existence
of these so-called “polar nanoregions” (PNR) was first inferred from the optical index of refraction studies of Burns and Dacol on PMN, PZN, and other related
systems, Burns and later confirmed using many different experimental
techniques including x-ray and neutron diffraction, Bonneau89:24 ; Mathan91:3
; Zhao ; Hirota06:75 207Pb NMR, Blinc and piezoresponse force microscopy.
Shvartsman04:69 Early small-angle x-ray scattering and neutron pair
distribution function (PDF) measurements on PMN by Egami et al. cast doubt on
the nano-domain model of relaxors. Egami However, the recent PDF analysis of
Jeong et al., which shows that polar ionic shifts form in PMN below $\approx 650$ K and occupy only one third of the total sample volume at low temperatures, provides convincing support for the existence of PNR.
Jeong05:94 Neutron inelastic scattering data published by Naberezhnov et al.
on PMN offered the first dynamical evidence of PNR in the form of a prominent
broadening of the transverse acoustic (TA) mode that coincides with the onset
of strong diffuse scattering at 650 K, Naberezhnov ; Koo roughly the same
temperature ($\approx 620$ K) as that reported by Burns and Dacol in their
optical study of PMN. This temperature, commonly known as the Burns
temperature, and denoted by $T_{d}$ in the original paper, is widely viewed as
that at which static PNR, likely condensing from a soft TO mode, first appear. Below $T_{d}$, distinctive butterfly-shaped and ellipsoidal diffuse scattering intensity contours centered on $(h00)$ and $(hh0)$ Bragg peaks, respectively, are seen in both PMN Vakhrushev_JPCS ; You ; Xu_TOF and PZN. Xu06:5
Similar diffuse scattering patterns were subsequently observed in solid
solutions of PMN and PZN with PbTiO3 (PMN-$x$PT and PZN-$x$PT); however these
patterns appear only on the Ti-poor (relaxor) side of the well-known
morphotropic phase boundary (MPB). Mat06:74 ; Xu04:70 The polar nature of the
strong diffuse scattering, and thus its association with the formation of PNR,
was unambiguously established by several electric field studies of PMN
Vakhrushev_efield ; Stock:unpub and PZN-8%PT, Gehring_efield ; Xu_memory ;
Xu_redistribution ; Xu06:5 all of which showed dramatic changes in the shape
and intensity of the diffuse scattering as a function of field strength and
field orientation. The Burns temperature $T_{d}$ thus represents what is arguably the most important temperature scale in relaxors; it lies several hundred kelvin above the critical temperature $T_{c}$ (which for PMN is $\approx 210$ K and is defined only in a non-zero electric field).
Figure 1: Top - temperature dependence of the TA phonon energy width
$\Gamma_{TA}$ measured with thermal neutrons by Wakimoto et al. Waki_sm
Middle - diffuse scattering intensity contours measured at (110) below (300 K)
and just above (650 K) the Burns temperature $T_{d}$ using cold neutrons by
Hiraka et al. Hiraka Bottom - diffuse scattering temperature dependence
measured with cold neutrons (open circles) and thermal neutrons (dashed line)
by Hiraka et al. Hiraka
Recent studies of the neutron diffuse scattering in PMN by Stock et al., Stock
and especially those by Hlinka et al. Hlinka_110 and Gvasaliya et al.,
Gvasaliya ; Gvasaliya2 have proven to be extremely important because they
demonstrated the utility of cold neutron spectroscopy to the study of
relaxors. Generally speaking, cold neutrons are ill-suited to lattice dynamical
studies because the longer wavelengths necessarily limit their use to the
study of comparatively fewer (lower-$Q$) Brillouin zones. On the other hand
cold neutron spectrometers provide significantly better energy ($\hbar\omega$)
and wave vector ($q$) resolution than do their thermal neutron counterparts.
In addition, both the (110) and the (100) Brillouin zones of PMN are
accessible using cold neutrons with wavelengths $\lambda\approx 4.26$ Å. The
combined cold and thermal neutron study of PMN by Hiraka et al. exploited this
fact and uncovered several major new findings, two of which are summarized in
Fig. 1. Hiraka The first finding is that the temperature at which the strong
diffuse scattering vanishes depends on whether the measurement is made with
cold or thermal neutrons, i.e., the value of $T_{d}$ depends on the energy
resolution of the spectrometer. This fact, which is illustrated in the bottom
panel of Fig. 1, indicates that the diffuse scattering in PMN, and most likely
other relaxors, contains a substantial quasielastic component. However, a
consensus on this issue is lacking; while the finding of Hiraka et al. is
consistent with the study of Gvasaliya et al., Gvasaliya ; Gvasaliya2 it
contradicts that of Hlinka et al. Hlinka_110 ; eres A second major finding is
displayed in the middle panel of Fig. 1 where intensity contours of the
diffuse scattering measured with cold neutrons around (110) are shown to
exhibit markedly different reciprocal space geometries at 300 K and 650 K.
These data unambiguously demonstrate the presence of two distinct types of
diffuse scattering in PMN. As the PNR are absent at 650 K ($\geq T_{d}$), the bow-tie-shaped diffuse scattering cross section is believed to originate primarily from the underlying chemical short-range order. We shall refer to this as CSRO, although another commonly-used term is compositionally/chemically ordered regions (COR). Burton
An intriguing perspective on the Burns temperature in PMN is provided in the
top panel of Fig. 1 where the TA phonon energy width $\Gamma_{TA}$, which is
inversely proportional to the phonon lifetime, is plotted as a function of
temperature for PMN at the scattering vector $\vec{Q}=(2,0.2,0)$. These data
were measured with thermal neutrons by Wakimoto et al. Waki_sm and are
consistent with those of Naberezhnov et al. in that the TA mode begins to
broaden at $T_{d}\approx 620$ K, the same temperature where the strong diffuse
scattering first appears. These data also show that the TA broadening achieves
a maximum value (minimum lifetime) near 420 K, which coincides with the value
of $T_{d}$ measured with cold neutrons. These data raise the question of how the Burns temperature $T_{d}$ should properly be interpreted: they paint a picture, markedly different from the currently held one, in which the PNR are dynamic below $\sim 650$ K and condense into static entities only at the much lower temperature of 420 K.
The goal of our study then is to determine the intrinsic value of the Burns
temperature $T_{d}$ and to clarify its relationship to the diffuse scattering,
lattice dynamics, and structure in PMN. To this end we have carried out
extensive measurements of the neutron diffuse scattering cross section from 10
K to 900 K, far below $T_{c}$ and well above the nominal value of $T_{d}$,
that probe timescales from $10^{-12}$ s to $10^{-9}$ s. We find that a 300-fold improvement in energy resolution over that used by Hiraka et al., Hiraka obtained using neutron backscattering techniques, reproduces the same onset temperature for the diffuse scattering; hence the intrinsic Burns temperature $T_{d}$ for PMN is 420 K. At the same time, the thermal expansion, which is indistinguishable from zero at low temperature, changes enormously near 420 K. Given the revised value of $T_{d}$, this result implies a direct influence of the PNR on the intrinsic structural properties of PMN.
We also present new data on the effects of the PNR on the lattice dynamics
through measurements of the temperature and wave vector dependence of the
long-wavelength TA phonon energy width $\Gamma_{TA}$ measured near (110) from
25 K to 900 K. We find that TA modes with reduced wave vectors $q\ll 0.2$ Å-1
exhibit the same energy width at all temperatures, whereas those with $q\approx
0.2$ Å-1 exhibit a strongly temperature-dependent broadening similar to that
shown in the top panel of Fig. 1. This behavior contrasts with that observed in
thermal neutron studies of the TO mode, which exhibits a broadening for all
$q\leq 0.2$ Å-1. Previous neutron scattering work on PMN-60%PT by Stock et
al., Stock06:73 a material in which there is no strong, temperature dependent
diffuse scattering, and thus no polar nanoregions, found no evidence of any TA
phonon broadening. In this context, our data lend extremely strong support to
the PNR model: the lifetimes of TA modes with wavelengths comparable in size
to the PNR are strongly diminished by the PNR, whereas long-wavelength (low
$q$) TA phonons simply average over the PNR and are unaffected.
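
A rough length-scale estimate makes this picture concrete; it assumes only that the PNR are nanometer-scale objects, as described above, since no PNR size is determined here. A TA phonon with reduced wave vector $q\approx 0.2$ Å$^{-1}$ has wavelength

$$\lambda_{\rm phonon}=\frac{2\pi}{q}\approx\frac{2\pi}{0.2\ \text{Å}^{-1}}\approx 30\ \text{Å}=3\ \text{nm},$$

which is comparable to nanometer-scale polar regions, whereas modes with $q\ll 0.2$ Å$^{-1}$ have correspondingly longer wavelengths and can be expected to average over them.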
## II Experimental Details
The neutron scattering data presented here were obtained using the BT9 thermal
neutron triple-axis spectrometer, the SPINS cold neutron triple-axis
spectrometer, and the cold neutron High Flux Backscattering Spectrometer
(HFBS), all of which are located at the NIST Center for Neutron Research
(NCNR). On BT9, measurements of the phonons and diffuse scattering were made
at a fixed final (thermal) neutron energy $E_{f}=$14.7 meV ($\lambda=2.36$ Å)
using the (002) Bragg reflection of highly-oriented pyrolytic graphite (HOPG)
crystals to monochromate and analyze the incident and scattered neutron beams,
respectively. Horizontal beam collimations were 40′-47′-S-40′-80′ (S =
sample). A special, non-standard high $q$-resolution configuration, in which the (004) Bragg reflection from a perfect Ge crystal was used as the analyzer and the horizontal beam collimations were tightened to 15′-47′-S-20′-40′, was employed to measure the thermal expansion. The choice of Ge was
motivated by the close matching between the PMN (022) (1.431 Å) and Ge (004)
(1.414 Å) $d$-spacings, which provides a significant improvement in the
instrumental $q$-resolution. Xu_Acta On SPINS, which sits on the cold neutron
guide NG5, the phonon and diffuse scattering measurements were made at a fixed
final neutron energy $E_{f}=4.5$ meV ($\lambda=4.264$ Å) also using the (002)
Bragg reflection of HOPG crystals as monochromator and analyzer. A liquid-
nitrogen cooled Be filter was located after the sample to remove higher order
neutron wavelengths from the scattered beam, and horizontal beam collimations
were set to guide-80′-S-80′-80′. The resultant elastic ($\hbar\omega=0$)
energy resolution for the SPINS measurements was $\delta E=0.12$ meV half-
width at half-maximum (HWHM).
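
For reference, the quoted wavelengths and $d$-spacings follow from standard relations not written out above: the neutron energy-wavelength relation $\lambda\,[\text{Å}]=\sqrt{81.81/E\,[\text{meV}]}$, which gives $\lambda(14.7\ \text{meV})\approx 2.36$ Å and $\lambda(4.5\ \text{meV})\approx 4.26$ Å, and the cubic $d$-spacing formula $d=a/\sqrt{h^{2}+k^{2}+l^{2}}$, which (taking the standard Ge lattice constant $a_{\rm Ge}\approx 5.658$ Å, a value not quoted in the text) gives

$$d_{\rm PMN}(022)=\frac{4.05}{\sqrt{8}}\approx 1.43\ \text{Å},\qquad d_{\rm Ge}(004)=\frac{5.658}{4}\approx 1.414\ \text{Å}.$$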
The High-Flux Backscattering Spectrometer was used to look for dynamics that
might be associated with the strong diffuse scattering below $T_{d}$. This
instrument uses a mechanically-driven Si(111) monochromator to Doppler shift
the energies of incident neutrons over a narrow range centered about 2.08 meV.
Neutrons are backscattered from the monochromator and proceed towards the
sample where they are scattered into a 12 m2 array of Si (111) crystals that
serve as analyzer. These neutrons are then backscattered a second time by the
analyzer, which selects the final neutron energy $E_{f}=2.08$ meV, into a
series of detectors positioned about the sample. The effective angular
acceptance of each detector is $\approx 15^{\circ}$. The HFBS instrument is
described in further detail elsewhere. Meyer03:74 The elastic energy
resolution for the HFBS measurements described here was $\delta E=0.4$ $\mu$eV
(HWHM).
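
As a consistency check on the quoted backscattering energy (using the standard Si lattice constant $a_{\rm Si}\approx 5.431$ Å, which is not given in the text), neutrons backscattered from Si(111) satisfy $\lambda=2d_{\rm Si(111)}$, so that

$$d_{\rm Si(111)}=\frac{5.431}{\sqrt{3}}\approx 3.14\ \text{Å},\qquad\lambda\approx 6.27\ \text{Å},\qquad E=\frac{81.81}{\lambda^{2}}\approx 2.08\ \text{meV}.$$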
Two high-quality single crystals of PMN, labeled PMN #4 and PMN #5, were used
in this study; both were grown using a top-seeded solution growth technique.
Ye The crystal growth conditions were determined from the pseudo-binary phase
diagram established for PMN and PbO. The PMN #4 and #5 crystals weigh 2.7 g
(0.33 cm3) and 4.8 g (0.59 cm3), respectively. At 300 K the mosaic of each
crystal measured at (220) is less than 0.04$^{\circ}$ full-width at half-maximum
(FWHM). Loss of PbO, the formation of a pyrochlore phase, and the reduction of
Nb5+ are known to occur in PMN single crystals when subjected to high
temperatures under vacuum for extended periods of time. This process results
in a dramatic blackening of the crystal, which is normally of a transparent
gold/amber color. While dielectric measurements on such darkened crystals
reportedly show little difference from those on unheated samples,
Vakhrushev_private our measurements reveal a diminishment of the diffuse
scattering intensity after sustained and repeated heating. Therefore
experiments on the larger PMN crystal #5 were limited to 600 K or less, while
PMN crystal #4 was used to obtain data above 600 K.
Both samples were mounted with an [001] axis oriented vertically, giving
access to reflections of the form $(hk0)$. For the high temperature
experiments, PMN crystal #4 was wrapped in quartz wool, mounted in a niobium
holder secured by tungsten wire, and then loaded into a water-cooled furnace
capable of reaching temperatures from 300 K to 1800 K. PMN crystal #5 was
mounted onto an aluminum sample holder assembly placed inside an aluminum
sample can, and then loaded inside the vacuum space of a closed-cycle 4He
refrigerator that provides access to temperatures from 10 K to 700 K. Each
sample has a cubic lattice spacing of $a=4.05$ Å at 300 K; thus 1 rlu
(reciprocal lattice unit) equals $2\pi/a=1.55$ Å-1.
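
For later convenience, reduced wave vectors quoted in rlu and in Å$^{-1}$ are therefore related by the simple conversion

$$q\,[\text{Å}^{-1}]=\frac{2\pi}{a}\,q\,[\text{rlu}]\approx 1.55\times q\,[\text{rlu}],\qquad\text{e.g.}\ q=0.14\ \text{rlu}\approx 0.22\ \text{Å}^{-1}.$$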
## III Origins of the Diffuse Scattering: PNR versus CSRO
Figure 2: Diffuse scattering intensity contours in PMN measured on SPINS near
(100) are shown using a linear gray scale at (a) 300 K, (b) 650 K, and (c) 900
K. Diffuse scattering intensity contours measured on BT9 at 900 K near (300)
using thermal neutrons are shown in (d). The data shown in panels (a) and (b)
are from Hiraka et al. Hiraka Schematic diagrams of the diffuse scattering
intensity contours below (Low-$T$) and at $T_{d}$ (High-$T$) are shown in the
upper right.

Figure 3: Contour plot of the difference in diffuse scattering
intensity near (110) measured at 550 K and 400 K. The result is the
characteristic figure “8” pattern, observed previously by Vakhrushev et al.
Vakhrushev
The diffuse scattering in PMN crystal #5 was studied by Hiraka et al. Hiraka
at and below the nominal Burns temperature $T_{d}$ with good energy resolution
($\delta E\approx 0.12$ meV HWHM) using cold neutrons. Data were measured in
the (100) and (110) Brillouin zones and are represented schematically in the
upper right portion of Fig. 2. The horizontal and vertical axes in this figure
correspond to the components of the scattering vector $\vec{Q}=(hk0)$, which
are measured in reciprocal lattice units (rlu). The “Low-$T$” ($T<T_{d}$)
regime is dominated by contributions from the PNR, where the diffuse
scattering intensity contours near (100) resemble a butterfly and those near
(110) resemble an ellipse for which the long axis is oriented perpendicular to
$\vec{Q}$. This is shown in Fig. 2 (a) where diffuse scattering data near
(100) at 300 K reveal an intense butterfly-shaped pattern. Hiraka We note
here that we see no evidence of any “transverse component” to the diffuse
scattering in any of our PMN crystals like that reported previously by
Vakhrushev et al. Sergey_NSE We assume that this component was due to the
crystal imperfections mentioned by these authors in their PMN sample, which
gave rise to a powder ring in their data at (100).
The same butterfly/ellipsoidal diffuse scattering geometry was shown to
persist in single crystals of PMN-$x$PT and PZN-$x$PT in studies by,
respectively, Matsuura et al. and Xu et al. for compositions spanning the Ti-
poor (relaxor) side of the morphotropic phase boundary (MPB). Mat06:74 ;
Xu04:70 These results also completely refute those of La-Orauttapong et al.
who reported that the orientation of the strong diffuse scattering varies with
Ti content in PZN-$x$PT and concluded that the PNR orientation changes with
doping. Gop ; comment For Ti-rich (tetragonal) PMN-$x$PT compositions just
beyond the MPB, Matsuura et al. found that the strong, temperature dependent
diffuse scattering vanishes and is replaced by critical scattering. Mat06:74
Matsuura et al. also found that the $q$-integrated diffuse scattering
intensity increases with Ti content on the Ti-poor side of the MPB, peaks near
the MPB, then drops dramatically on crossing the MPB. This finding is
significant because it suggests that an intriguing and direct correlation
exists between the PNR and the piezoelectric coefficient $d_{33}$, which
exhibits the same dependence on Ti content. Park A model based on pancake-
shaped, ferroelectric domains has been used successfully to fit the three-
dimensional diffuse scattering distributions measured in PZN-$x$PT with high-
energy x-rays. Xu04:70 ; Welberry74:06 ; Welberry38:05 A similar type of
real-space structure has been proposed to explain the diffuse scattering in
the relaxor K1-xLixTaO3. Waki74:06 On the other hand, alternative models
explaining the same diffuse scattering distributions have also been proposed.
Pasciak
In the “High-$T$” regime ($T\geq T_{d}$) there are no PNR, and the associated
butterfly-shaped diffuse scattering is no longer present. The weak diffuse
scattering that remains is thus argued to originate primarily from the
underlying chemical short range order (CSRO), which reflects weak correlations
between the Mg2+ and Nb5+ cations on the $B$-site of the perovskite $AB$O3
structure. In this regime the shapes of the diffuse scattering contours are
radically different, resembling a bow-tie in both $\vec{Q}=(h00)$ and
$\vec{Q}=(hh0)$ Brillouin zones in which the diffuse scattering extends mainly
parallel to $\vec{Q}$. The only difference between the contours near (100) and
(110) appears to be in the orientation of the triangular regions of diffuse
scattering, which point in towards (100), but away from (110). Data taken near
(100) at 650 K are displayed in Fig. 2 (b). Hiraka At this temperature the
diffuse scattering intensities, shown using a linear gray scale, are much
weaker than those of the butterfly pattern at 300 K. We further note that the
intensities increase (become darker) from left to right in panels (b) and (c),
which corresponds to increasing $Q$. Given the $(\vec{Q}\cdot\vec{u})^{2}$
dependence of the neutron diffuse scattering cross section, this intensity
signature implies the presence of some short-range, correlated displacements
$\vec{u}$ since otherwise, if $u=0$, there would be no $Q$-dependence. Thus
the weak diffuse scattering is not solely due to CSRO.
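
Schematically, and only in the standard small-displacement limit (this expression is illustrative and is not fit to the data), the diffuse intensity produced by correlated static displacements $\vec{u}_{j}$ of ions with scattering lengths $b_{j}$ at average positions $\vec{r}_{j}$ varies as

$$I_{\rm diff}(\vec{Q})\propto\Big|\sum_{j}b_{j}\,(\vec{Q}\cdot\vec{u}_{j})\,e^{i\vec{Q}\cdot\vec{r}_{j}}\Big|^{2}\sim(\vec{Q}\cdot\vec{u})^{2},$$

so an intensity that grows with $Q$ between equivalent zones signals $u\neq 0$, whereas scattering from chemical short-range order alone ($u=0$) carries no such $Q$ dependence for neutrons.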
In Fig. 3 we plot the difference between the diffuse scattering intensities
measured near $\vec{Q}=(110)$ at 400 K and 550 K. The resulting contours
approximately reproduce the “figure 8” pattern observed previously by Vakhrushev et al., Vakhrushev and indicate that the ellipsoidal diffuse scattering has a strong temperature dependence. By contrast, the bow-tie-
shaped diffuse scattering is effectively subtracted out in this analysis,
which confirms that it has little to no temperature dependence. Thus the high-
temperature diffuse scattering is not associated with the Burns temperature
$T_{d}$ or with the formation of long-range polar correlations below $T_{c}$.
We note that such a strictly incoherent treatment of the high-temperature
(CSRO) and low-temperature (PNR) components of the total diffuse scattering,
as described here, ignores the inevitable cross terms that must exist between
them. On the other hand, if the high-temperature scattering does arise
primarily from CSRO, then it should be largely independent of the ionic
displacements that give rise to the butterfly-shaped diffuse scattering (PNR)
below $T_{d}$. The relative weakness of the high-temperature diffuse
scattering compared to that at low temperatures also suggests such cross terms
should be weak, and this appears to be supported by the simple subtraction
analysis presented in Fig. 3 in that one effectively recovers the ellipsoidal
(not bow-tie) intensity contours. For this reason we believe it is a
reasonable first approximation to treat the two diffuse scattering components
as being nearly independent.
Figure 4: Diffuse scattering intensity measured on BT9 at 300 K as a function
of reduced momentum transfer relative to (100) (open circles) and (200) (solid
circles). The dotted lines isolate the diffuse scattering component from the
Bragg peak. The inset shows the orientation of the scan relative to the
familiar butterfly pattern measured near (200) at 100 K.

Figure 5: Diffuse
scattering contours measured on BT9 near (200) are shown at (a) 100 K, (b) 300
K, and (c) 600 K.
We extended the diffuse scattering measurements to temperatures well above
$T_{d}$ using PMN crystal #4, which we reserved for very high temperature
experiments. Previous data taken on PMN and PZN crystals heated to 1000 K
revealed significant evidence of sample decomposition, so we limited our
measurements to 900 K. Diffuse scattering intensity contours at 900 K for the
(100) and the (300) Brillouin zones are presented in Fig. 2 (c) and 2 (d),
respectively. Although the diffuse scattering near (100) is quite weak, it is
consistent with the bow-tie geometry observed in Fig. 2 (b) at 650 K. We
exploited the $Q^{2}$ dependence of the neutron diffuse scattering cross
section to obtain higher intensity by using thermal neutrons to access (300).
The (300) diffuse scattering intensity contours are shown in Fig. 2 (d), where
the bow-tie pattern observed at 650 K is still present at 900 K. The contours
are truncated for $h>3.02$ rlu because of a mechanical limit on the maximum
scattering angle available on BT9, but the triangular region on the low-$Q$
side of (300) is clearly evident. That the bow-tie-shaped diffuse scattering
persists to such high temperature provides what is perhaps the most convincing
evidence that it arises mainly from CSRO.
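
As a rough estimate based on the $(\vec{Q}\cdot\vec{u})^{2}$ scaling discussed above, and ignoring any differences in the displacement correlations between the two zones, the gain from working at (300) rather than (100) is

$$\frac{I_{\rm diff}(300)}{I_{\rm diff}(100)}\sim\frac{|\vec{Q}_{(300)}|^{2}}{|\vec{Q}_{(100)}|^{2}}=\frac{3^{2}}{1^{2}}=9,$$

i.e. nearly an order of magnitude in intensity.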
During the course of our measurements we noticed that the diffuse scattering
in PMN crystal #4 had diminished and was noticeably weaker than that in PMN
crystal #5, which we had never exposed to temperatures above 650 K. We also
observed a broad ring of scattering passing directly through $\vec{Q}=(300)$,
which is shown in Fig. 2 (d). This feature was never observed prior to heating
this crystal to 900 K. Given the length of time spent at high temperatures,
this feature most likely corresponds to a powder ring arising from partially
decomposed regions of PMN crystal #4, which had turned entirely black after
exposure to high temperatures. These regions do not affect any other data
presented in this paper because the 900 K measurements on PMN crystal #4 were
the last ones performed on this sample. Therefore the powder ring appears only
in Fig. 2 (d).
It is instructive to compare these results with those on PMN-60%PT, a
composition that lies well beyond the morphotropic phase boundary and
undergoes a first-order ferroelectric transition from a cubic to a tetragonal
phase near 540 K. An extensive study of this material using neutron and high-
energy x-ray scattering methods found no sign of the strong, butterfly-shaped
diffuse scattering at low temperatures. Stock06:73 This result lends further
support to our association of the strong, temperature dependent diffuse
scattering with the PNR, which are absent in PMN-60%PT. Neutron measurements
on PMN-60%PT do, however, reveal the presence of bow-tie shaped diffuse
scattering intensity contours at all temperatures studied, which supports the
identification of such diffuse scattering with chemical short-range order
between cations on the $B$ site of the PMN perovskite $AB$O3 structure. This
picture is supported by theoretical work Burton99:60 as well as 93Nb NMR,
Laguta03:67 electron microscopy, Boul94:108 and polarized Raman scattering
Svit03:68 measurements. All of these studies suggest that there is no
temperature dependence to the bow-tie shaped diffuse scattering below $\approx
1000$ K, which is consistent with our results on PMN over the extended
temperature range.
During our study of PMN we discovered that the diffuse scattering near (200)
is not as weak as previously believed. Vakhrushev ; You ; Hirota ; Gop To
confirm this finding, we made detailed measurements of the diffuse scattering
intensity near (200) at 300 K and 100 K along a trajectory in reciprocal space
that follows one wing of the butterfly intensity contour; this is shown by the
dashed line in the inset to Fig. 4. The results of the 300 K scan are compared
to an identical scan measured in the (100) zone, both of which are shown in
Fig. 4. These data demonstrate that the (100) diffuse scattering cross
section, represented by the dotted lines passing through the open circles, is
substantially larger than that at (200), designated by the solid circles. This
result supports the model of Hirota et al. in which the (unexpectedly) weak
(200) diffuse scattering cross section observed in PMN and other relaxors can
be explained by the presence of a uniform shift or displacement of the PNR
relative to the non-polar cubic matrix along the direction of the local PNR
polarization. Hirota Indirect evidence for the existence of this shift has
been obtained from neutron scattering measurements of the anisotropic response
of the diffuse scattering in PZN-8%PT to an electric field applied along the
[001] direction. Gehring_efield
Fig. 5 shows diffuse scattering intensity contours measured on BT9 at 100 K,
300 K, and 600 K near (200); these data illustrate that the (200) diffuse
scattering intensity follows the same temperature dependence as that measured
in other Brillouin zones, where the diffuse scattering is much stronger. As
the temperature is raised, the diffuse scattering intensity decreases in the same manner as that previously observed in the (100), (110), and (300) Brillouin zones. This proves that the diffuse scattering measured at (200) has the same origin as that in other zones, i.e., that it is associated
with the formation of PNR. At 600 K in panel (c) one can already see the
emergence of the bow-tie-shaped diffuse scattering that is otherwise obscured
by the stronger PNR-related diffuse scattering at lower temperatures. These
data are important because they support the mode-coupling analysis of Stock et
al. Stock , which assumes that the diffuse scattering in PMN in the (200) and
(220) Brillouin zones is much weaker than that in the (110) zone. Thus we
emphasize that while the neutron diffuse scattering cross-section near (200)
is not zero, it is small and consistent with previous structure factor
calculations.
## IV Diffuse Scattering Dynamics: A Reassessment of the Burns Temperature
Figure 6: Diffuse scattering intensity contours in PMN measured on the NCNR
Disk Chopper Spectrometer (DCS) near (100) are plotted on a logarithmic gray
scale at 300 K. Open circles represent the reciprocal space locations of the
detectors of the High Flux Backscattering Spectrometer relative to the
butterfly-shaped intensity contours. Each open circle is a separate detector,
and the detector numbers are indicated. These DCS data are taken from Xu et
al. Xu_TOF

Figure 7: (Color online) (a) Contour plot of the diffuse
scattering intensity measured on the HFBS as a function of energy transfer and
temperature. Contours are shown on a linear intensity scale; dashed lines
indicate the full-width at half-maximum (FWHM) of the peak linewidth at each
temperature. Data were summed over detectors 10-16 as illustrated in Fig. 6.
Panels (b), (c), and (d) show inelastic scans at 700 K, 300 K, and 100 K. The
horizontal bar in panel (c) represents the instrumental FWHM elastic energy
resolution ($2\delta E$).
The reciprocal space geometry of the strong diffuse scattering in PMN was
first characterized using x-ray diffraction and is consistent with the neutron
scattering data we have presented here. Vakhrushev_JPCS ; You The energy
resolution provided by x-ray diffraction ($\delta E\approx 1000$ meV) is
typically much broader than that of thermal neutrons ($\delta E\approx 1$
meV); thus it was assumed that the strong diffuse x-ray scattering originated
from low-energy, soft, transverse optic (TO) phonons that were captured by the
large energy resolution. You ; Comes70:26 However the cold neutron data of
Hiraka al. provide a much narrower elastic energy resolution of $\approx 0.12$
meV HWHM and show, unambiguously, that the diffuse scattering cross section
contains a component that is static on timescales of at least $\sim 6$ ps
below 420 K as illustrated in Fig. 1. This result was subsequently confirmed
on a separate PMN crystal by the neutron study of Gvasaliya et al. which
employed comparable energy resolution. Gvasaliya_JPhysC05 Hence the observed
strong diffuse scattering cannot simply be the result of a soft, low-lying TO
phonon. The TO mode must condense and/or broaden sufficiently to produce the
elastic diffuse scattering cross section observed by Hiraka et al. Such a
scenario is in fact suggested by the corresponding thermal neutron data taken
on BT9 using a somewhat coarser energy resolution of $\approx 0.50$ meV HWHM.
As shown in Fig. 1, an apparent elastic diffuse scattering cross section is
observed up to temperatures as high as 650 K.
Figure 8: (a) Temperature dependence of the diffuse scattering measured on
the HFBS; data are integrated in energy over $\pm 5$ $\mu$eV and in $Q$ over
detectors 10-16 as illustrated in Fig. 6. (b) Temperature dependence of the
diffuse scattering intensity measured on SPINS at (1.05,0.95,0) by Hiraka et
al. Hiraka The SPINS energy resolution of $\delta E$=120 $\mu$eV HWHM
provided the energy integration.
Motivated by these results, we looked for evidence of a dynamic component of
the strong diffuse scattering using the NCNR High-Flux Backscattering
Spectrometer (HFBS), which provides an elastic energy resolution $\delta
E=0.4$ $\mu$eV HWHM. We oriented PMN crystal #4 in the $(hk0)$ scattering
plane, which is the same geometry used for the triple-axis studies discussed
in the previous sections. Fig. 6 displays the resulting locations of each of
the 16 HFBS detectors relative to the butterfly-shaped diffuse scattering
pattern at (100) measured previously using the NCNR Disk Chopper Spectrometer
(DCS). Xu_TOF From this figure it can be seen that detectors 10 through 16
sample different parts of the wings of the strong diffuse scattering. In
particular, detectors 13 and 14 lie close to the (100) Bragg peak. Because the
instrumental $Q$-resolution of the HFBS is relatively poor, we checked for the
presence of Bragg contamination in the integration analysis by removing
contributions from detectors 13 and 14. This did not change any of the
results. Moreover the PMN study by Wakimoto et al. showed the (202) Bragg peak
intensity changes by less than 10% between 50 K and 300 K. Waki_coupling
Therefore, in the following analysis we have integrated the intensity from
detectors 10 through 16.
The energy dependence of the diffuse scattering as a function of temperature
is illustrated in Fig. 7. Panel (a) shows a color contour plot of the scattered intensity as a function of energy transfer and temperature after subtraction of a high-temperature background. This background was taken to be the average of the
intensity measured at 600 K and 700 K because no strong diffuse scattering is
observed with either cold neutron triple-axis or neutron backscattering
methods at these temperatures. The distance between the dashed lines
represents the measured energy width (full-width at half-maximum) as a
function of temperature. Typical energy scans are displayed in panels (b),
(c), and (d) at 700 K, 300 K, and 100 K, respectively. We see that the energy width of the peak centered at $\hbar\omega=0$ is equal to the instrumental energy resolution $\delta E=0.4$ $\mu$eV HWHM at all temperatures, i.e., the peak is resolution limited. Based on this analysis, we
conclude that the strong diffuse scattering is elastic on timescales of at
least $\tau\approx\hbar/\delta E\sim 2$ ns. Our results are thus consistent
with the neutron-spin echo study on PMN of Vakhrushev et al. Sergey_NSE
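
Explicitly, the quoted timescale follows from applying the energy-time uncertainty relation to the instrumental resolution:

$$\tau\approx\frac{\hbar}{\delta E}=\frac{6.58\times 10^{-16}\ \text{eV·s}}{0.4\times 10^{-6}\ \text{eV}}\approx 1.6\times 10^{-9}\ \text{s}\approx 2\ \text{ns}.$$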
Fig. 8 (a) shows the temperature dependence of the diffuse scattering
intensity integrated over $\pm 5$ $\mu$eV. These data are compared to those
measured on PMN crystal #5 using the cold neutron spectrometer SPINS, which
are plotted in panel (b). That the temperature dependences from the
backscattering and SPINS measurements agree within error in spite of a
300-fold (120 $\mu$eV/0.4 $\mu$eV) difference in energy resolution proves that
static, short-range polar order first appears at much lower temperatures than
has been understood from previous data measured with much coarser energy
resolution. These data thus demand a revised value for the Burns temperature:
$T_{d}=420\pm 20$ K. We mention, however, that if the diffuse scattering
energy width obeys an Arrhenius or some power law form, which has been
suggested for spin glasses in Ref. Som82:25, then a more detailed analysis
based on data taken closer to the onset of the diffuse scattering at 420 K
will be required to confirm (or reject) the existence of such alternative
dynamic contributions to the diffuse scattering energy width.
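
For concreteness, the relaxational forms alluded to here would correspond, for example, to a quasielastic energy width of the form

$$\Gamma(T)=\Gamma_{0}\,e^{-E_{a}/k_{B}T}\quad\text{(Arrhenius)}\qquad\text{or}\qquad\Gamma(T)\propto(T-T_{0})^{n}\quad\text{(power law)},$$

where $\Gamma_{0}$, $E_{a}$, $T_{0}$, and the exponent $n$ are free parameters that are not determined by the present data.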
To gain a better understanding of the apparent quasielastic nature of the
diffuse scattering, we examined the temperature dependence of the low-energy
transverse acoustic (TA) modes in greater detail. In particular we focused on
measurements of the temperature dependent broadening of the transverse
acoustic phonon over a range of reduced wave vectors $q$ that approach the
zone center ($q=0$). These data are discussed in the following section.
## V TA and TO Phonons: Effects of the PNR
An extensive series of inelastic measurements was made on both PMN crystals
#4 and #5 in the (110) Brillouin zone using the SPINS spectrometer in an
effort to map out the temperature dependence of the TA mode for reduced wave
vectors $q\ll q_{wf}$ and $q\approx q_{wf}$, where $q_{wf}=0.14$ rlu $\approx
0.2$ Å-1 is the wave vector associated with the so-called “waterfall” effect, below which the long-wavelength, soft TO modes in PMN, PZN, and PZN-8%PT
are observed to broaden markedly at temperatures below $T_{d}$. Gehring_pzn8pt
; Gehring_Aspen ; Waki_sm ; Gehring_sm ; Gehring_pzn This zone was chosen
because the TA phonon dynamical structure factor for (110) is much larger than
that for (100). The TA phonon energy lineshapes were studied at two very
different values of the reduced wave vector $q$, measured relative to the
(110) zone center, for temperatures between 100 K and 900 K. These data are
presented in Fig. 9. The data shown on the left-hand side of this figure
correspond to $\vec{Q}=(1.035,0.965,0)$ or $q=\sqrt{2}\cdot 0.035$ rlu = 0.05 rlu = 0.077 Å-1, which is less than half of $q_{wf}$. These data show that the
TA lineshape at this small wave vector remains sharp and well-defined at all
temperatures and has an intrinsic energy width that is larger than the
instrumental resolution (shown by the small, solid horizontal bar at 500 K).
By contrast, the data on the right-hand side of Fig. 9 correspond to
$\vec{Q}=(1.1,0.9,0)$ or $q=\sqrt{2}\cdot 0.10$ rlu = 0.14 rlu = 0.22 Å-1,
which is nearly equal to $q_{wf}$. In this case it is quite evident that the
TA lineshape broadens dramatically, becoming increasingly ill-defined below
$T_{d}$, especially at 300 K, but then sharpens at lower temperature, e.g., at
100 K. This behavior is the same as that observed for the soft, zone-center TO
mode in PMN by Wakimoto et al. measured at (200). Waki_sm
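
For clarity, the reduced wave vectors quoted here are measured from the (110) zone center along the transverse $[1\bar{1}0]$ direction; for example,

$$\vec{q}=(1.035,0.965,0)-(1,1,0)=(0.035,-0.035,0),\qquad|\vec{q}\,|=\sqrt{2}\times 0.035\ \text{rlu}\approx 0.05\ \text{rlu}\approx 0.077\ \text{Å}^{-1},$$

and similarly $(1.1,0.9,0)$ gives $|\vec{q}\,|=\sqrt{2}\times 0.10\approx 0.14$ rlu $\approx 0.22$ Å$^{-1}$.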
Figure 9: TA phonon lineshapes in PMN measured on SPINS with cold neutrons at
$\vec{Q}=(1.035,0.965,0)$ and (1.1,0.9,0) from 100 K to 900 K.
In addition to these cold neutron measurements of the TA mode using SPINS,
other measurements were made using thermal neutrons on BT9 to characterize the
TO mode lineshapes for reduced wave vectors $q\approx q_{wf}$ from 300 K to
900 K. In these experiments, data were taken using a water-cooled furnace with PMN crystal #4 in both the (300) and (100) Brillouin zones for $q=q_{wf}=0.14$ rlu and are shown in Fig. 10. As can be seen on the left-hand side of this
figure, both the TA and TO modes are well-defined at 900 K. The scattering
intensity below 1-2 meV increases sharply because of the coarser energy resolution intrinsic to BT9 ($\approx 0.5$ meV HWHM), which uses thermal neutrons, compared to that of SPINS. Even so, the TA mode is easily
seen at both 900 K and 700 K. However at 500 K the TA mode is less well-
defined. This occurs in part because the mode has broadened, but also because
the low-energy scattering has increased, which results from the onset of the
strong diffuse scattering due to the PNR. At 300 K the TA mode is nearly
indistinguishable from the sloping background, and the TO mode has become
significantly damped. One also sees a gradual softening of the TO mode from
900 K down to 300 K that is of order 1-2 meV. The same data were taken in the
(100) zone, which appear in the right-hand portion of Fig. 10. Essentially the
same trends are observed in this zone, with the TA mode again becoming almost
indistinguishable from background at 300 K. One difference is that the TA mode
is better separated at all temperatures from the steep increase in scattering
seen at low energies. These data are consistent with the conjecture of Stock
et al. Stock that the low-energy TA phonon is coupled to the diffuse
scattering centered around the elastic position. The inset on the top panel on
the right-hand side displays the full TA-TO phonon spectrum at 900 K out to 20
meV.
Figure 10: TA and TO phonon lineshapes in PMN measured on BT9 with thermal
neutrons near the waterfall wave vector at $\vec{Q}=(3,-0.14,0)$ and
(1,-0.14,0) from 300 K to 900 K.

Figure 11: The square of the phonon energy
is plotted versus temperature for the zone-center TO modes measured at (200)
(open diamonds) and (220) (solid diamonds), taken from Wakimoto et al. and
Stock et al. Waki_sm ; Stock , as well as for three TA modes measured at
(1.035,0.965,0) (open triangles), (1.1,0.9,0) (open circles), and (3,-0.14,0)
(solid circles). Linear behavior is consistent with that expected for a
conventional ferroelectric soft mode.
The squared energies of the TA and TO modes presented in the two previous figures are plotted versus temperature in Fig. 11. This is done to
draw attention to the similarity between the low-energy lattice dynamics of
PMN and the behavior expected for a conventional ferroelectric, for which the
soft mode frequency squared $\omega^{2}$ varies linearly with $(T-T_{c})$.
Data for the zone-center soft mode measured at (200) and (220) have been taken
from Wakimoto et al. and Stock et al. and added to the top of Fig. 11 for ease
of comparison. Stock ; Waki_sm The data taken by Wakimoto et al. are based on
energy scans measured at constant-$Q$ at the zone center, whereas Stock et al.
determined the zone-center TO energies by extrapolating from data obtained at
non-zero $q$. The extrapolation technique permits phonon energies to be
extracted in the temperature range where the zone-center TO mode is heavily
damped. These values for the zone-center TO phonon energy have been confirmed
by infrared spectroscopy. Bovtun04:298 What this figure immediately reveals,
then, is a surprising ferroelectric character of the zone center TO mode above
$T_{d}$ and below $T_{c}$, and a corresponding change in the behavior of the
TA mode energy that is bracketed by the same two temperature scales, which are
denoted by the two vertical dash-dotted lines. We note that no such change in
the acoustic phonons was observed in PMN-60%PT where PNR and the associated
strong diffuse scattering are absent; therefore the softening of the TA mode
is directly related to the development of PNR. Stock06:73 A further
interesting feature is the minimum that is present in all of these data near
420 K, which is the same temperature at which the strong diffuse scattering
first appears when measured with high energy resolution, namely the revised
value of the Burns temperature. Therefore, the onset of the diffuse scattering
is directly associated with the softening of the TO mode; this is yet further
evidence that it is associated with the formation of static, short-range polar
correlations.
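
The “conventional ferroelectric soft mode” behavior referred to here is the standard Cochran form; the Lyddane-Sachs-Teller relation, quoted for context only and not fit to the present data, then links the soft mode to a Curie-Weiss dielectric response:

$$\hbar^{2}\omega_{TO}^{2}\propto(T-\Theta)\qquad\Longrightarrow\qquad\epsilon(T)\propto\frac{1}{\omega_{TO}^{2}}\propto\frac{1}{T-\Theta},$$

where $\Theta$ is the Curie-Weiss temperature.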
It is very important to note that the square of the TA phonon energy measured
at $\vec{Q}$=(1.1,0.9,0), which corresponds to $q\sim 0.14$ rlu, shows a much
more pronounced minimum at 420 K than does that measured at
$\vec{Q}$=(1.035,0.965,0), which corresponds to $q\sim 0.05$ rlu. This shows
that long-wavelength TA phonons exhibit a much weaker response to the
formation of static short-range, polar correlations. This can be understood in
terms of a simple physical picture in which those phonons with wavelengths
comparable to the size of the PNR are strongly affected (damped) by the PNR
whereas longer wavelength phonons simply average over the PNR and are thus not
affected by the presence of static, short-range polar correlations. This idea
was previously proposed to explain the anomalous damping of the TO mode for
wave vectors $q\leq q_{wf}$ near the zone center. Gehring_pzn8pt However, no
strong diffuse scattering is seen in PMN-60%PT and thus no PNR are present,
even though the anomalous TO broadening is still observed; hence this TO
broadening, which gives rise to the waterfall effect, cannot be associated
with the presence of static, short-range polar correlations. On the other
hand, the idea that the acoustic phonons are affected by PNR is confirmed by
the absence of any such acoustic phonon broadening in PMN-60%PT. Thus PNR have
a significant effect on the low-energy acoustic phonons over a limited range
of reduced wave vectors that may be related to the size of the PNR. In light
of the diffuse and inelastic scattering data that have been analyzed so far,
we now turn to the detailed measurement of the thermal expansion in PMN.
## VI Thermal Expansion: Invar-like Effect Below $T_{d}$
A variety of experimental techniques have been used to measure the thermal
expansion in both single crystal and powder samples of PMN, including x-ray diffraction, laser dilatometry, and neutron diffraction. Shebanov ;
Arndt ; Dkhil Here we present extremely high $q$-resolution neutron
measurements ($\delta q\approx 0.0005$ rlu = 0.0008 Å-1 HWHM) of the cubic
unit cell lattice parameter $a$ of PMN crystal #5 using the special triple-
axis configuration described in the experimental section. This configuration,
which employs a perfect single crystal of Ge as analyzer, also provides an
exceptional energy resolution $\delta E\approx 5-10$ $\mu$eV HWHM. The
resulting data are plotted in Fig. 12 in terms of the strain $a/a_{0}-1$,
where $a_{0}$ is the lattice parameter at 200 K. Measurements were made on
heating from 30 K to 580 K using a closed-cycle 4He refrigerator after having
first cooled to 30 K. X-ray data measured by Dkhil et al. Dkhil on a single
crystal of PMN, shown as open squares, are provided for comparison.
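
To put the two regimes on a common footing, a simple estimate using only the numbers quoted above shows that the high-temperature expansion rate corresponds to a change in lattice parameter over a 100 K interval of

$$\Delta a\approx a\,\alpha\,\Delta T\approx(4.05\ \text{Å})(2.5\times 10^{-5}\ \text{K}^{-1})(100\ \text{K})\approx 0.010\ \text{Å},$$

roughly an order of magnitude larger than the total change of less than 0.001 Å observed over the entire invar-like region at low temperature.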
Figure 12: Lattice strain of single crystal PMN derived from the (220) Bragg
reflection measured from 30 K to 580 K on BT9. Data were obtained on heating
using a perfect Ge(004) analyzer, an incident neutron energy $E_{i}=14.7$ meV,
and horizontal beam collimations of 15′-47′-S-20′-40′ to obtain extremely high
$q$-resolution. X-ray data from Dkhil et al. are indicated by the open squares
for comparison. Dkhil
Several features are interesting to note. First, at low temperature the system
exhibits an invar-like effect in which the cubic lattice parameter changes by
less than 0.001 Å; indeed, the data below 320 K are consistent with a thermal
expansion of zero. At higher temperatures, however, the average thermal
expansion is $2.5\times 10^{-5}$ 1/K. That these disparate regions of null and
high rates of thermal expansion bracket the revised value for the Burns
temperature suggests that a direct connection exists between the onset of
static, short-range polar correlations and the structural properties of PMN.
This behavior seems to be consistent with that of PZN, which also exhibits an
increase in the thermal expansion at temperatures above that where the diffuse
scattering first appears. Agrawal87:34
There is ample evidence of similar behavior reported by other groups in
samples of PMN and PMN-$x$PT. A low-temperature invar-like effect was observed
in single crystal PMN-10%PT, where a transition to a high rate of thermal
expansion was found at 400 K; the thermal expansion for this sample at high
temperature is $1\times 10^{-5}$ 1/K, which is very close to that measured
here. Gehring_pmn10pt The x-ray study of ceramic samples of PMN by King et al. also shows a transition between low and high rates of thermal expansion, but with larger values for both. King X-ray and neutron work conducted by Bonneau
et al. on PMN powders yielded onset temperatures and values for the thermal
expansion consistent with our single crystal measurements. Bonneau91:91
Finally, an invar effect was also observed in the laser dilatometry study by
Arndt and Schmidt on a ceramic sample of PMN, which covered a range from 300 K
to 800 K. Arndt
Even though there is a general trend towards a larger thermal expansion at temperatures above the onset of the diffuse scattering, there is some sample dependence and there are some differences between powders and single crystals. As
noted by Ye et al., Ye10pt powder measurements of PMN yield a different slope
for the thermal expansion measurements than do those of Dkhil et al. Dkhil
Also, studies using neutron strain scanning techniques found different thermal
expansion coefficients as a function of depth in single crystal samples.
Conlon04:70 ; Xu05:74 Therefore, part of the discrepancy observed between
different samples may be associated with surface effects. In this regard, we
also note the presence of a change in the slope of the strain curve between
400 K and 500 K; since this change is not observed in other PMN studies, we
believe this feature to be sample-dependent and thus extrinsic.
The phase-shifted model of polar nanoregions proposed by Hirota et al.
provides a plausible starting point from which to understand the anomalous
invar-like behavior in PMN below the Burns temperature $T_{d}$. This model is
based on the observation that the Pb, Mg, Nb, and O ionic displacements
obtained from the diffuse scattering measurements of Vakhrushev et al.
Vakhrushev on PMN can be decomposed into a center-of-mass conserving
component, consistent with the condensation of a soft, transverse optic mode,
and a uniform scalar component, corresponding to an acoustic phase shift. In
so doing Hirota et al. were able to reconcile the discrepancies between the
structure factors of the diffuse scattering and the soft TO modes and, in
particular, to explain the weakness of the diffuse scattering intensity
observed in the vicinity of the (200) Bragg peak. Hirota The idea that the
PNR are uniformly displaced with respect to the underlying cubic lattice in a
direction parallel to the PNR polarization has already been used by Gehring et
al. to explain the persistence of the strong diffuse scattering in a single
crystal of PZN-8%PT against large electric fields applied along [001].
Gehring_efield In that study it was shown that the diffuse scattering near
(003), which measures the projections of the ionic displacements along [001],
decreases under an external [001] field as expected. By contrast, the diffuse
scattering near (300), which measures the projections of the ionic
displacements along [100], i.e., perpendicular to the field direction, remains
unaffected even for field strengths up to 10 kV/cm. Thus a uniform polar state
is not achieved. This surprising behavior can be understood if one assumes
that the electric field applied along [001] removes or reduces the PNR shifts
along [001] while preserving those along [100]. The diffuse scattering should
then decrease anisotropically with field, as is observed. In the context of
the invar-like behavior of PMN below $T_{d}$, we speculate that such uniform
shifts of the PNR could effectively stabilize the lattice against subsequent
thermal expansion at lower temperature. Such a scenario is consistent with the
fact that both PMN-10%PT and PMN-20%PT retain cubic structures down to low
temperature when examined with high $q$-resolution neutron diffraction
methods. Gehring_pmn10pt ; Xu68:03
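
Written out explicitly, and only as a schematic restatement of the decomposition described above, the phase-shifted model expresses the displacement of each ion $j$ within a PNR as

$$\vec{u}_{j}=\vec{u}^{\,\rm soft}_{j}+\vec{u}_{\rm shift},\qquad\sum_{j}m_{j}\,\vec{u}^{\,\rm soft}_{j}=0,$$

where the first, center-of-mass-conserving term has the character of a condensed soft TO mode and the second is a uniform displacement of the entire PNR, parallel to its local polarization, relative to the surrounding cubic matrix; it is this uniform shift that suppresses the diffuse intensity near (200).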
## VII Discussion and Conclusions
We have presented a comprehensive neutron study of the diffuse scattering in
the relaxor PMN. We have greatly extended the timescales of previous neutron
measurements by taking data on three different spectrometers, BT9, SPINS, and
the HFBS, which provide elastic energy resolutions of 500 $\mu$eV, 120
$\mu$eV, and 0.4 $\mu$eV HWHM, respectively. While the backscattering data
represent a 300-fold improvement in energy resolution over those obtained by
Hiraka et al. Hiraka on SPINS, both yield an onset temperature of $420\pm 20$
K for the diffuse scattering. This indicates that the PNR in PMN are static on
timescales of at least 2 ns below 420 K, but are apparently dynamic at higher
temperatures. Thus the true Burns temperature in PMN, which was originally
interpreted by Burns and Dacol as the condensation temperature of static
regions of local, randomly-oriented, nanometer-scale polarization, Burns is
$420\pm 20$ K, not 620 K. Independent evidence of the existence of purely
dynamic, short-range, polar correlations has been reported in PMN-55%PT, a
composition with no strong diffuse scattering and thus no static PNR, by Ko et
al. who observed phenomena that are typically related to the relaxation of
dynamic PNR. Ko These include significant Brillouin quasielastic scattering,
a softening of the longitudinal acoustic phonon mode, and a deviation of the
dielectric constant from Curie-Weiss behavior over an 80 K temperature
interval above $T_{c}$.
Previous measurements of the temperature dependence of the neutron diffuse
scattering have been extended in this study to 900 K, well above $T_{d}$. In
so doing we have unambiguously established the existence of two types of
diffuse scattering based on the observation of two markedly different
temperature dependencies, one of which vanishes at $T_{d}$ and one of which
does not. We associate the strong, temperature-dependent, diffuse scattering
with the formation of static, short-range, polar correlations (PNR) because of
its well documented response to an external electric field, and because it
first appears at the same temperature at which the soft (polar) TO mode
reaches a minimum in energy. We associate the weak, temperature-independent,
diffuse scattering, which shows no obvious change across either $T_{c}$ or
$T_{d}$, with chemical short-range order because it persists to extremely high
temperature. We further confirm the observations of Hiraka et al., Hiraka who
first characterized the distinctly different reciprocal space geometries of
both types of diffuse scattering, and we show that the bow-tie shape of the
weak diffuse scattering persists up to 900 K $\gg T_{d}$.
Key effects of the strong, temperature-dependent, diffuse scattering, and thus
of the PNR, on the low-energy lattice dynamics of PMN are also highlighted in
this study. The neutron inelastic measurements on PMN-60%PT by Stock et al.
prove conclusively that PNR cannot be the origin of the anomalous broadening
of long-wavelength TO modes observed in PMN, PZN, and other compounds, also
known as the waterfall effect, because PMN-60%PT exhibits the same effect but
no strong diffuse scattering (no PNR). Stock06:73 By contrast, many studies
have shown that PNR do broaden the TA modes in PMN Naberezhnov ; Koo ; Stock
and in PZN-$x$PT, Gop but that such effects are absent in compositions with
no PNR such as PMN-60%PT. Stock06:73 Our cold neutron data show that these
effects are $q$-dependent. Whereas long-wavelength TA modes with reduced wave
vectors $q\ll 0.2$ Å-1 remain well-defined and exhibit a nearly constant
energy width (lifetime) from 100 K to 900 K, shorter wavelength TA modes with
reduced wave vectors $q\approx 0.2$ Å-1 broaden substantially, with the
maximum broadening occurring at $T_{d}=420$ K. This result motivates a very
simple physical picture in which only those acoustic phonons having
wavelengths comparable to the size of the PNR are significantly scattered by
the PNR; acoustic modes with wavelengths much larger than the PNR are largely
unaffected because they simply average over the PNR. Models describing this
effect have been discussed elsewhere. Stock In particular, very recent work
by Xu et al. has revealed the presence of a phase instability in PZN-4.5%PT
that is directly induced by such a PNR-TA phonon interaction. This is shown to
produce a pronounced softening and broadening of TA modes in those zones where
the diffuse scattering is strong, and provides a natural explanation of the
enormous piezoelectric coupling in relaxor materials. Xu_Nature
In addition, we have performed neutron measurements of the thermal expansion
with extremely high $q$- and $\hbar\omega$-resolution over a broad
temperature range extending well below $T_{c}$ and far above $T_{d}\sim 420$
K. In agreement with many other studies, our single crystal samples of PMN
exhibit little or no thermal expansion below $T_{d}$, behavior that is
reminiscent of the invar effect, but an unusually large thermal expansion
coefficient of $2.5\times 10^{-5}$ 1/K above $T_{d}$, where the strong diffuse
scattering is absent. The crossover between null and large coefficients of
thermal expansion coincides closely with $T_{d}$, which suggests that the
appearance of static PNR strongly affects the thermal expansion in PMN and
thus provides a structural signature of the formation of short-range, polar
correlations in zero field. The model of uniformly displaced, or phase-
shifted, PNR proposed by Hirota et al., which successfully predicts the
anisotropic response of the strong diffuse scattering to an electric field,
offers a simplistic, yet plausible, framework in which to understand this
anomalous behavior.
Finally, it is satisfying to note that the revised value of $T_{d}=420\pm 20$
K is consistent with the dielectric susceptibility data of Viehland et al.,
from which a Curie-Weiss temperature $\Theta=398$ K was derived. Viehland
Such good agreement between $T_{d}$ and $\Theta$ solidifies our identification
of the strong diffuse scattering with the condensation of the soft TO mode,
which reaches a minimum frequency at $T_{d}=420$ K. However this begs the
question of how one should interpret the original Burns temperature of $\sim
620$ K. At present there are two broadly divergent opinions on this issue, one
of which considers $\sim 620$ K to be a meaningful temperature scale in PMN,
and one of which does not. As it turns out, this debate is closely tied to
another on how many temperature scales are needed to describe the physics of
relaxors. We offer no final resolution to this discussion. Instead, we close
our paper with a brief summary of the primary studies supporting these
contrasting points of view, which is by no means comprehensive, so that the
readers may draw their own conclusions.
A number of experimental studies report evidence of either structural or
dynamic changes in PMN in the temperature range 600 K – 650 K, starting with
the optical index of refraction measurements of Burns and Dacol. Burns Siny
and Smirnova were the first to observe strong, first-order Raman scattering in
PMN at high temperatures, which, being forbidden in centrosymmetric crystals,
implied the presence of some type of lattice distortion. Siny In 2002
Vakhrushev and Okuneva calculated the probability density function for the Pb
ion in PMN within the framework of the spherical layer model using x-ray
powder diffraction data. Vakhrushev_2002 It was shown that this probability
density evolves from a single Gaussian function centered on the perovskite
$A$-site to a double Gaussian form between 635 K and 573 K, and that the
positions of maximum density for the lead ion follow a power law
$(T_{d}-T)^{0.31}$ with $T_{d}=635$ K. This picture was developed further by
Prosandeev et al. within a model that ascribed the changes in the lead
probability density function and subtle variations in the thermal expansion
near 620 K to a crossover from soft-mode to order-disorder dynamics.
Prosandeev In 2004 Egami et al. first proposed that at very high temperatures
the vibrations of the oxygen octahedra are sufficiently faster than those of
the Pb ions that, on average, a high-symmetry, local environment is obtained,
whereas at temperatures below $\sim 680$ K the Pb and O ions become
dynamically correlated. Egami_2004 This led to the subsequent
reinterpretation of the Burns temperature ($\sim 600$ K) as being the point
below which dynamic PNR first form, which was based on the dynamic PDF
measurements of Dmowski et al. obtained with an elastic energy resolution of 4
meV at six temperatures (680 K, 590 K, 450 K, 300 K, 230 K, and 35 K). Dmowski
Interestingly, after integrating their data from -5 meV to +5 meV, Dmowski et
al. saw evidence of a third temperature scale in PMN of order 300 K, and thus
distinct from both $T_{c}\sim 210$ K and $T_{d}\sim 600$ K, which they
associated with the formation of static PNR. Similar ideas have very recently
been put forth by Dkhil et al. based on extremely interesting acoustic
emission and thermal expansion measurements, and also in a general summary
written by Toulouse. Dkhil_3T ; Toulouse
A different approach was taken by Stock et al. who proposed a unified
description of the lead-based relaxors on the basis of striking similarities
between PMN and the Zn-analogue PZN, both of which display strong,
temperature-dependent, diffuse scattering; identical soft mode dynamics; yet
no long-range, structural distortion at low temperature in zero field.
Stock_PZN ; Xu_PZN Arguing by analogy with magnetic systems, Stock et al.
considered a three-dimensional Heisenberg model with cubic anisotropy in the
presence of random fields in which the Heisenberg spin corresponds to the
local ferroelectric polarization, the cubic anisotropy represents the
preferential orientation of the polarization along the ¡111¿ axes, and the
isotropic random magnetic field corresponds to the randomly-oriented local
electric fields that originate from the varying charge of the $B$-site cation.
Following a suggestion by Aharony, Stock et al. considered the case where the
Heisenberg term dominates over the random field term, and the cubic anisotropy
term is the weakest of the three. In this picture there would be just two
distinct, static, temperature scales. For $T>T_{d}$, the cubic anisotropy is
irrelevant and therefore the system should behave like a Heisenberg system in
a random field. In this case the excitation spectrum is characterized by
Goldstone modes and therefore no long-range order is expected in the presence
of random fields. Aharony Instead the system forms polar nanoregions in a
paraelectric background. The second temperature scale $T_{c}$ appears at low
temperatures where the cubic anisotropy term becomes important, and in this
limit the system should resemble an Ising system in the presence of a random
field. This model thus explains the local ferroelectric distortion
characterized by the recovery of the soft-optic mode and, although a 3D Ising
system in equilibrium should display long-range order in the presence of small
random fields, as is observed in magnetic systems, nonequilibrium effects with
long time scales become dominant. The presence of such nonequilibrium effects
may explain the lack of long-range ferroelectric order in PMN and PZN as well
as the history dependence of physical properties such as the linear
birefringence. The phase-shifted nature of the polar nanoregions may also
create another energy barrier against the ordered phase at $T_{c}$. Stock et
al. are also able to explain the physics of compounds beyond the MPB, such as
PMN-60%PT, within this model, for which only one temperature scale ($T_{c}$)
exists. As the Ti concentration Ti increases, the relaxor phase diagram
crosses over to a ferroelectric regime, and this can be understood as an
increase in the strength of the cubic anisotropy term that is simultaneously
accompanied by a reduction of the random fields as must occur in the limit of
pure PbTiO3. Stock06:73 It should be mentioned that other studies have
invoked random fields to explain the static and dynamic properties of
relaxors. Westphal ; Pirc ; Fisch
Independent of the validity of either of the two pictures summarized above,
the seminal finding of our study of remains that the strong diffuse scattering
in PMN first appears at a temperature that depends sensitively on the energy
resolution of the measurement. This fact inevitably raises interesting
questions about the significance of the previous value of the Burns
temperature ($\sim 620$ K). If the strong diffuse scattering in PMN results
from the soft TO mode, then other techniques based on x-ray diffraction,
thermal neutron scattering, or neutron PDF, which provide comparatively much
coarser energy resolution, will be unable to discriminate between low-energy,
dynamic, short-range polar correlations and truly static ($\hbar\omega=0$) PNR
because any low-energy, polar correlations will be integrated into the elastic
channel by the broad energy resolution. Thus as the TO mode softens, the
associated low-energy, polar, spectral weight will fall within the energy
resolution at temperatures above $T_{d}=420$ K, and the net result will be an
artificially higher value of $T_{d}$ that increases with the size of the
energy resolution. This effect should be especially pronounced for phonon
modes that are broad in energy (i. e. that have short lifetimes), and this is
clearly the case for the soft TO mode in PMN, which exhibits a nearly
overdamped lineshape and a minimum frequency precisely at $T_{d}=420$ K. While
it is true that the structure factor of the strong diffuse scattering is
inconsistent with that of the soft TO mode, the phase-shifted model of Hirota
et al. provides a way to reconcile this discrepancy. In this respect, we
simply suggest that the previous value of $T_{d}\sim 620$ K might not
represent a physically meaningful temperature scale in PMN. Future studies
examining the values of $T_{d}$ in other relaxor compounds with improved
energy resolution are thus of interest.
Our reassessment of the Burns temperature $T_{d}$ immediately clarifies an
intimate relationship between the formation of static, short-range polar
correlations and the consequent effects on both the low-energy lattice
dynamics and structure of PMN. Cold neutron triple-axis and backscattering
spectroscopic methods conclusively show the existence of static, short-range
polar correlations, only below $T_{d}=420\pm 20$,K on timescales of at least 2
ns. Thermal neutron measurements of the lattice dynamics reflect the presence
of these static PNR through the presence of a distinct minimum in both the
soft TO and TA mode energies, both of which occur at 420 K. The effect of PNR
on the lattice dynamics is evident only for TA modes having wave vectors of
order 0.2 Å-1, a fact that could be exploited to determine the size of the
PNR. At the same time an enormous change in the coefficient of thermal
expansion is seen near $T_{d}$, below which the crystal lattice exhibits
invar-like behavior.
## VIII Acknowledgments
We would like to thank A. Bokov, Y. Fujii, K. Hirota, D. Phelan, S. Shapiro,
S. Wakimoto, and G. Xu for stimulating discussions. This study was supported
in part by the U. S. - Japan Cooperative Neutron-Scattering Program, the
Natural Sciences and Engineering Research Council (NSERC) of Canada, the
National Research Council (NRC) of Canada, the Japanese Ministry of Monbu-
Kagaku-shou, RFBR grants 08-02-00908 and 06-02-90088NSF, the U. S. Dept. of
Energy under contract No. DE-AC02-98CH10886, the U. S. Office of Naval
Research under grant No. N00014-06-1-0166, and by the NSF under grant No.
DMR-9986442. We also acknowledge the U. S. Dept. of Commerce, NIST Center for
Neutron Research, for providing the neutron scattering facilities used in this
study, some of which are supported in part by the National Science Foundation
under Agreement No. DMR-0454672.
## References
* (1) Z.-G. Ye, Key Engineering Materials Vols. 155-156, 81 (1998).
* (2) S.-E. Park and T. R. Shrout, J. Appl. Phys. 82, 1804 (1997).
* (3) A.A. Bokov and Z.-G. Ye, J. Mat. Science 41, 31 (2006).
* (4) B.E. Vugmeister, Phys. Rev. B 73, 174117 (2006).
* (5) G. Burns and F. H. Dacol, Solid State Commun. 48, 853 (1983); ibid, Phys. Rev. B 28, 2527 (1983).
* (6) P. Bonneau, P. Garnier, E. Husson, and A. Morell, Mat. Res. Bull, 24, 201 (1989).
* (7) N. de Mathan, E. Husson, G. Calvarin, J.R. Gavarri, A.W. Hewat, and A. Morell J. Phys: Condens Matter 3, 8159 (1991).
* (8) J. Zhao, A. E. Glazounov, Q. M. Zhang, and B. Toby, Appl. Phys. Lett. 72, 1048 (1998).
* (9) K. Hirota, S. Wakimoto, and D.E. Cox, J. Phys. Soc. Jpn. 75, 111006 (2006).
* (10) R. Blinc, V. Laguta, and B. Zalar, Phys. Rev. Lett. 91, 247601 (2003).
* (11) V. V. Shvartsman and A. L. Kholkin, Phys. Rev. B 69, 014102 (2004).
* (12) T. Egami, S. Teslic, W. Dmowski, P. K. Davies, and I.-W. Chen, J. Kor. Phys. Soc. 32, S935 (1998); T. Egami, W. Dmowski, S. Teslic, P. K. Davies, I. W. Chen, and H. Chen, Ferroelectrics 206, 231 (1998).
* (13) I.-K. Jeong, T. W. Darling, J. K. Lee, Th. Proffen, R. H. Heffner, J.S. Park, K.S. Hong, W. Dmowski, and T. Egami, Phys. Rev. Lett. 94, 147602 (2005).
* (14) A. Naberezhnov, S. B. Vakhrushev, B. Dorner, and H. Moudden, Eur. Phys. J. B 11, 13 (1999).
* (15) T. Y. Koo, P. M. Gehring, G. Shirane, V. Kiryukhin, S. G. Lee, and S. W. Cheong, Phys. Rev. B 65, 144113 (2002).
* (16) S. Vakhrushev, A. Naberezhnov, S. K. Sinha, Y. P. Feng, and T. Egami, J. Phys. Chem. Solids 57, 1517 (1996).
* (17) H. You and Q. M. Zhang, Phys. Rev. Lett. 79, 3950 (1997).
* (18) G. Xu, G. Shirane, J. R. D. Copley, and P. M. Gehring, Phys. Rev. B 69, 064112 (2004).
* (19) G. Xu, Z. Zhong, Y. Bing, Z.-G. Ye, and G. Shirane, Nature Materials 4, 887 (2005).
* (20) M. Matsuura, K. Hirota, P. M. Gehring, Z.-G. Ye, W. Chen, and G. Shirane, Phys. Rev. B 74, 144107 (2006).
* (21) G. Xu, Z. Zhong, H. Hiraka, and G. Shirane Phys. Rev. B 70, 174109 (2004).
* (22) S. B. Vakhrushev, A. A. Naberezhnov, N. M. Okuneva, and B. N. Savenko, Phys. Solid State 40, 1728 (1998).
* (23) C. Stock, G. Xu, P. M. Gehring, H. Luo, X. Zhao, H. Cao, J. F. Li, D. Viehland, G. Shirane, Phys. Rev. B 76, 064122 (2007).
* (24) P. M. Gehring, K. Ohwada, and G. Shirane, Phys. Rev. B 70, 014110 (2004).
* (25) G. Xu, P. M. Gehring, and G. Shirane, Phys. Rev. B 72, 214106 (2005).
* (26) G. Xu, P. M. Gehring, and G. Shirane, Phys. Rev. B 74, 104110 (2006).
* (27) S. Wakimoto, C. Stock, R. J. Birgeneau, Z.-G. Ye, W. Chen, W. J. L. Buyers, P. M. Gehring, and G. Shirane, Phys. Rev. B 65, 172105 (2002).
* (28) H. Hiraka, S.-H. Lee, P. M. Gehring, G. Xu, and G. Shirane, Phys. Rev. B 70, 184105 (2004).
* (29) C. Stock, H. Luo, D. Viehland, J. F. Li, I. Swainson, R. J. Birgeneau, and G. Shirane, J. Phys. Soc. Jpn., 74, 3002 (2005).
* (30) J. Hlinka, S. Kamba, J. Petzelt, J. Kulda, C. A. Randall, and S. J. Zhang, J. Phys. Condens. Matter 15, 4249 (2003).
* (31) S. N. Gvasaliya, S. G. Lushnikov, and B. Roessli, Cryst. Reports 49, 108 (2004).
* (32) S. N. Gvasaliya, S. G. Lushnikov, and B. Roessli, Phys. Rev. B 69, 092105 (2004).
* (33) The study by Hlinka et al. (Ref. Hlinka_110, ) employed a much narrower energy resolution of 30 $\mu$eV full-width at half-maximum (FWHM) than did the studies of Gvasaliya et al. and Hiraka et al. (200 $\mu$eV FWHM) (Ref.’s Gvasaliya2, and Hiraka, ).
* (34) B. P. Burton, E. Cockayne, S. Tinte, and U. V. Waghmare, Phase Transitions 79, 91 (2006).
* (35) C. Stock, D. Ellis, I.P. Swainson, Guangyong Xu, H. Hiraka, Z. Zhong, H. Luo, X. Zhao, D. Viehland, R. J. Birgeneau, and G. Shirane, Phys. Rev. B., 73, 064107 (2006).
* (36) G. Xu, P. M. Gehring, V. J. Ghosh, and G. Shirane, Acta. Cryst. A60, 598 (2004).
* (37) A. Meyer, R. M. Dimeo, P. M. Gehring, and D. A. Neumann, Rev. Sci. Instrum., 74, 27 59 (2003).
* (38) W. Chen and Z.-G. Ye, unpublished; Z.-G. Ye, P. Tissot, and H. Schmid, Mater. Res. Bull. 25, 739 (1990).
* (39) S. B. Vakhrushev, private communication.
* (40) S. B. Vakhrushev, A. A. Naberezhnov, N. M. Okuneva, and B. N. Savenko, Phys. Solid State 37, 1993 (1995).
* (41) S. Vakhrushev, A. Ivanov, and J. Kulda, Phys. Chem. Chem. Phys. 7, 2340 (2005).
* (42) D. La-Orauttapong, J. Toulouse, Z.-G. Ye, W. Chen, R. Erwin, and J. L. Robertson, Phys. Rev. B 67, 134110 (2003).
* (43) These authors show schematic diffuse scattering intensity contours in Fig. 4 of their paper (no data are shown) for PZN-4.5%PT and PZN-9%PT that are wrong because they do not possess the required crystal lattice symmetry.
* (44) T. R. Welberry, D. J. Goossens, and M. J. Gutmann, Phys. Rev. B 74, 224108 (2006).
* (45) T.R. Welberry, M.J Gutmann, Hyungje Woo, D. J. Goossens, Guangyong Xu, and C. Stock, J. Appl. Cryst. 38, 639 (2005).
* (46) S. Wakimoto, G.A. Samara, R.K. Grubbs, E.L. Venturini, L.A. Boatner, G. Xu, G. Shirane, S.-H. Lee, Phys. Rev. B 74, 054101 (2006).
* (47) M. Pasciak, M. Wolcyrz, and A. Pietraszko, Phys. Rev. B 76, 014117 (2007).
* (48) B.P Burton and E. Cockayne, Phys. Rev. B 60, R12542 (1999).
* (49) V.V. Laguta, M.D. Glinchuk, S.N. Nokhrin, I.P. Bykov, R. Blinc, A. Gregorovic, and B. Zalar, Phys. Rev. B 67, 104106 (2003).
* (50) C. Boulesteix, F. Varnier, A. Llebaria, and E. Husson, J. Solid State. Chem. 108, 141 (1994).
* (51) O. Svitelskiy, J. Toulouse, G. Yong, and Z.-G. Ye, Phys. Rev. B 68, 104107 (2003).
* (52) K. Hirota, Z.-G. Ye, S. Wakimoto, P. M. Gehring, and G. Shirane, Phys. Rev. B 65, 104105 (2002).
* (53) R. Comes, M. Lambert, and A. Guinier, Acta Crystallogr. Sect. A 26, 244 (1970).
* (54) S. N. Gvasaliya, B. Roessli, R. A. Cowley, P. Huber, and S. G. Lushnikov, J. Phys. Condens. Matter 17, 4343 (2005).
* (55) S. Wakimoto, C. Stock, Z.-G. Ye, W. Chen, P. M. Gehring, G. Shirane, Phys. Rev. B 66, 224102 (2002).
* (56) H. Sompolinsky and A. Zippelius, Phys. Rev B 25, 6860 (1982).
* (57) P. M. Gehring, S.-E. Park, and G. Shirane, Phys. Rev. Lett. 84, 5216 (2000).
* (58) P. M. Gehring, S. B. Vakhrushev, and G. Shirane, in Fundamental Physics of Ferroelectrics 2000: Aspen Center for Physics Winter Workshop, edited by R. E. Cohen, AIP Conf. Proc. No. 535 (AIP, New York, 2000), p. 314.
* (59) P. M. Gehring, S. Wakimoto, Z.-G. Ye, and G. Shirane, Phys. Rev. Lett. 87, 277601 (2001).
* (60) P. M. Gehring, S.-E. Park, and G. Shirane, Phys. Rev. B 63, 224109 (2001).
* (61) V. Bovtun, S. Kamba, A. Pashkin, M. Savinov, P. Samoukhina, J. Petzelt, I.P. Bykov, and M.D. Glinchuk, Ferroelectrics, 298, 23 (2004).
* (62) L. A. Shebanov, P. P. Kapostins, and J. A. Zvirgds, Ferroelectrics 56, 53 (1984).
* (63) H. Arndt and G. Schmidt, Ferroelectrics 79, 149 (1988).
* (64) B. Dkhil, J. M. Kiat, G. Calvarin, G. Baldinozzi, S.B. Vakhrushev, and E. Suard, Phys. Rev. B 65, 024104 (2001).
* (65) D.K. Agrawal, A. Halliyal, and J. Belsick, Mat. Res. Bull. 23, 159 (1988).
* (66) P. M. Gehring, W. Chen, Z.-G. Ye, and G. Shirane, J. Phys. Condens. Matter 16, 7113 (2004).
* (67) H. W. King, M. Yildiz, S. H. Ferguson, D. F. Waechter, and S. E. Prasad, Ferroelectric Letters , 55 (2004).
* (68) P. Bonneau, P. Garnier, G. Calvarin, E. Husson, J. R. Gavarri, A.W. Hewat, and A. Morell, J. Sol. State Chem. 91, (1991).
* (69) Z.-G. Ye, Y. Bing, J. Gao, A. A. Bokov, P. Stephens, B. Noheda, and G. Shirane, Phys. Rev. B 67, 104104 (2003).
* (70) K. H. Conlon, H. Luo, D. Viehland, J.F. Li, T. Whan, J.H. Fox, C. Stock, and G. Shirane, Phys. Rev. B 70 172204 (2004).
* (71) G. Xu, P. M. Gehring, C. Stock, K. Conlon, Phase Transitions 79, 135 (2006).
* (72) G. Xu, D. Viehland, J. F. Li, P. M. Gehring, and G. Shirane, Phys. Rev. B 68, 212410 (2003).
* (73) J.-H. Ko, S. Kojima, A. A. Bokov, and Z.-G. Ye, Appl. Phys. Lett. 91, 252909 (2007).
* (74) G. Xu, J. Wen, C. Stock, and P. M. Gehring, Nature Mat. 7, 562 (2008).
* (75) D. Viehland, S. J. Jang, L. E. Cross, and M. Wuttig, Phys. Rev. B 46, 8003 (1992).
* (76) I. G. Siny and T. A. Smirnova, Ferroelectrics 90, 191 (1989).
* (77) S. B. Vakhrushev and N. M. Okuneva, in Fundamental Physics of Ferroelectrics, edited by R. E. Cohen, AIP Conf. Proc. No. 626 (AIP, New York, 2002), p. 117.
* (78) http://arxiv.org/ftp/cond-mat/papers/0506/0506132.pdf
* (79) T. Egami, W. Dmowski, I.-K. Jeong, R. H. Heffner, J.-S. Park, K.-S. Hong, M. Hehlen, and F. Trouw, in Fundamental Physics of Ferroelectrics edited by R. E. Cohen and P. M. Gehring, Williamsburg Conf. Proc. (Washington, DC, 2004), p. 58. (http://people.gl.ciw.edu/cohen/meetings/ferro2004/Ferro2004Abstractbook.pdf)
* (80) W. Dmowski, S. B. Vakhrushev, I.-K. Jeong, M. P. Hehlen, F. Trouw, and T. Egami, Phys. Rev. Lett. 100, 137602 (2008).
* (81) M. Roth, E. Mojaev, E. Dul’kin, P. Gemeiner, B. Dkhil, Phys. Rev. Lett. 98, 265701 (2007).
* (82) J. Toulouse, Ferroelectrics 369, 203 (2008).
* (83) C. Stock, R. J. Birgeneau, S. Wakimoto, J. S. Gardner, W. Chen, Z.-G. Ye, and G. Shirane, Phys. Rev. B 69, 094104 (2004).
* (84) G. Xu, Z. Zhong, Y. Bing, Z.-G. Ye, C. Stock, and G. Shirane, Phys. Rev. B 67, 104102 (2003).
* (85) A. Aharony, private communication.
* (86) V. Westphal, W. Kleemann, and M. D. Glinchuk, Phys. Rev. Lett. 68, 847 (1992).
* (87) R. Pirc and R. Blinc, Phys. Rev. B 60, 13470 (1999).
* (88) R. Fisch, Phys. Rev. B 67, 094110 (2003).
|
arxiv-papers
| 2009-04-27T18:32:12 |
2024-09-04T02:49:02.195341
|
{
"license": "Public Domain",
"authors": "P. M. Gehring, H. Hiraka, C. Stock, S.-H. Lee, W. Chen, Z.-G. Ye, S.\n B. Vakhrushev, Z. Chowdhuri",
"submitter": "Peter M. Gehring",
"url": "https://arxiv.org/abs/0904.4234"
}
|
0904.4297
|
# Generalized thermo vacuum state derived by the partial trace method
††thanks: Worked supported by the National Natural Science Foundation of China
under Grant 10775097.
Li-yun Hu1,2 and Hong-yi Fan1 Corresponding author. E-mail: hlyun2008@126.com
1Department of Physics, Shanghai Jiao Tong University, Shanghai 200030, China
2College of Physics & Communication Electronics, Jiangxi Normal University,
Nanchang 330022, China
###### Abstract
By virtue of the technique of integration within an ordered product (IWOP) of
operators we present a new approach for deriving generalized thermo vacuum
state which is simpler in form that the result by using the Umezawa-Takahashi
approach, in this way the thermo field dynamics can be developed. Applications
of the new state are discussed.
Keywords: the partial trace method, generalized thermal vacuum state, thermo
field dynamics, the IWOP technique
PACS numbers: 03.65.-w, 42.50.-p
## I Introduction
In nature every system is immersed in an environment, the problem about the
system interacting with the environment is a hot topic in quantum information
and quantum optics. To describe system-environment evolution Takahashi-Umezawa
introduced thermo field dynamics (TFD) 1 ; 2 , with which one may convert the
calculations of ensemble averages at finite temperature
$\left\langle
A\right\rangle=\mathtt{tr}\left(A\rho\right)/Z\left(\beta\right),\text{
}\rho=e^{-\beta H},$ (1)
to equivalent expectation values with a pure state
$\left|0(\beta)\right\rangle$, i.e.,
$\left\langle A\right\rangle=\left\langle
0(\beta)\right|A\left|0(\beta)\right\rangle,$ (2)
where $\beta=1/kT$, $k$ is the Boltzmann constant, and
$Z\left(\beta\right)=\mathtt{tr}\rho=\mathtt{tr}e^{-\beta H}$ is the partition
function; $H$ is the system’s Hamiltonian. Then how to find the explicit
expression of $\left|0(\beta)\right\rangle?$ If one expands
$\left|0(\beta)\right\rangle$ in terms of the energy eigenvector set of $H$,
$\left|0(\beta)\right\rangle=\sum_{n}\left|n\right\rangle f_{n}(\beta),$ and
then substituting it into Eq.(2), which results in
$f_{n}^{\ast}(\beta)f_{m}(\beta)=Z^{-1}\left(\beta\right)e^{-\beta
E_{n}}\delta_{n,m}$ (after comparing with Eq.(1)). By introducing a fictitious
mode, $\left\langle\tilde{n}\right.\left|\tilde{m}\right\rangle=\delta_{n,m},$
then Takahashi-Umezawa obtained
$\left|0(\beta)\right\rangle=Z^{-1/2}\left(\beta\right)\sum_{n}e^{-\beta
E_{n}/2}\left|n,\tilde{n}\right\rangle.$ (3)
Thus the worthwhile convenience in Eq.(2) is at the expense of introducing a
fictitious field (or called a tilde-conjugate field, denoted as operator
$\tilde{a}^{\dagger}$) in the extended Hilbert space, i.e., the original
optical field state $\left|n\right\rangle$ in the Hilbert space $\mathcal{H}$
is accompanied by a tilde state $\left|\tilde{n}\right\rangle$ in
$\mathcal{\tilde{H}}$. A similar rule holds for operators: every Bose
annihilation operator $a$ acting on $\mathcal{H}$ has an image $\tilde{a}$
acting on $\mathcal{\tilde{H}}$,
$\left[\tilde{a},\tilde{a}^{\dagger}\right]=1$. These operators in
$\mathcal{H}$ are commutative with those in $\mathcal{\tilde{H}}$.
For a harmonic oscillator the Hamiltonian is $\hbar\omega a^{\dagger}a,$
$\left|n\right\rangle=a^{\dagger n}/\sqrt{n!}\left|0\right\rangle,$Takahashi-
Umezawa obtained the explicit expression of $\left|0(\beta)\right\rangle$ in
doubled Fock space,
$\left|0(\beta)\right\rangle=\text{sech}\theta\exp\left[a^{\dagger}\tilde{a}^{\dagger}\tanh\theta\right]\left|0\tilde{0}\right\rangle=S\left(\theta\right)\left|0\tilde{0}\right\rangle,$
(4)
which is named thermo vacuum state, and $S\left(\theta\right)$ thermo
operator,
$S\left(\theta\right)\equiv\exp\left[\theta\left(a^{\dagger}\tilde{a}^{\dagger}-a\tilde{a}\right)\right],$
(5)
which is similar in form to the a two-mode squeezing operator except for the
tilde mode. $\theta$ is a parameter related to the temperature by
$\tanh\theta=\exp\left(-\frac{\hbar\omega}{2kT}\right).$
An interesting question thus challenges us: For the Hamiltonian being
$H=\omega a^{\dagger}a+\kappa^{\ast}a^{\dagger 2}+\kappa a^{2},$ then what is
the corresponding thermo vacuum state? One may wonder if this question is
worth of paying attention since this $H$ can be diagonalized by the Bogoliubov
transformation as a new harmonic oscillator, correspondingly, the thermo
vacuum state for $H$ can be obtained by acting the same transformation on
$\left|0(\beta)\right\rangle$ in (4) (see Eq. (A9) in the Appendix). To make
this issue worthwhile, we emphasize that we shall adopt a completely new
approach to construct thermo vacuum state and our result is simpler in form
than that in Eq. (A10). Our work is arranged as follows. In Sec. 2 by re-
analyzing Eqs. (1)-(2) we shall introduce a new method (the partial trace
method) to find the explicit expression of $\left|0(\beta)\right\rangle$ in
(4). Then using this method, we obtain the expression of
$\left|0(\beta)\right\rangle$ in Eq. (4) in Sec. 3. For the degenerate
parametric amplifier, we derive a generalized thermal vacuum state
$\left|\phi(\beta)\right\rangle$ in Sec. 4. Section 5 is devoting to
presenting some applications of $\left|\phi(\beta)\right\rangle$.
## II The partial trace method
Following the spirit of TFD, for a density operator $\rho=e^{-\beta
H}/Z\left(\beta\right)$ with Hamiltonian $H$, we can suppose that the ensemble
averages of a system operator $A$ may be calculated as
$A=\mathtt{tr}\left(\rho
A\right)=\left\langle\psi(\beta)\right|A\left|\psi(\beta)\right\rangle,$ (6)
where $\left|\psi(\beta)\right\rangle$ corresponds to the pure state in the
extended Hilbert space.
Let $\mathtt{Tr}$ denote the trace operation over both the system freedom
(expressed by $\mathtt{tr}$) and the environment freedom by
$\widetilde{\mathtt{tr}}$, i.e.,
$\mathtt{Tr}=\mathtt{tr}\widetilde{\mathtt{tr}}$, then we have
$\displaystyle\left\langle\psi(\beta)\right|A\left|\psi(\beta)\right\rangle$
$\displaystyle=$
$\displaystyle\mathtt{Tr}\left[A\left|\psi(\beta)\right\rangle\left\langle\psi(\beta)\right|\right]$
(7) $\displaystyle=$
$\displaystyle\mathtt{tr}\left[A\widetilde{\mathtt{tr}}\left|\psi(\beta)\right\rangle\left\langle\psi(\beta)\right|\right].$
Note that
$\widetilde{\mathtt{tr}}\left|\psi(\beta)\right\rangle\left\langle\psi(\beta)\right|\neq\left\langle\psi(\beta)\right|\left.\psi(\beta)\right\rangle,$
(8)
since $\left|\psi(\beta)\right\rangle$ involves both real mode and fictitious
mode. Comparing Eq.(7) with Eq.(1) we see
$\widetilde{\mathtt{tr}}\left|\psi(\beta)\right\rangle\left\langle\psi(\beta)\right|=e^{-\beta
H}/Z\left(\beta\right).$ (9)
Eq.(9) indicates that, for a given Hamiltonian $H$, if we can find a density
operator of pure state $\left|\psi(\beta)\right\rangle$ in doubled Hilbert
space, whose partial trace over the tilde freedom may lead to density operator
$e^{-\beta H}/Z\left(\beta\right)$ of the system, then the average value of
operator $A$ can be calculated as an equivalent expectation value with a pure
state $\left|\psi(\beta)\right\rangle,$ i.e., $\left\langle
A\right\rangle=\mathtt{tr}\left(Ae^{-\beta
H}/Z\left(\beta\right)\right)=\left\langle\psi(\beta)\right|A\left|\psi(\beta)\right\rangle$.
In particular, when $H=\hbar\omega a^{\dagger}a,$ a free Bose system, Eq. (9)
becomes
$\widetilde{\mathtt{tr}}\left|0(\beta)\right\rangle\left\langle
0(\beta)\right|=\left(1-e^{-\beta\hbar\omega}\right)e^{-\beta\hbar\omega
a^{\dagger}a}\equiv\rho_{c},$ (10)
$\rho_{c}$ is the density operator of chaotic field. This equation enlightens
us to have a new approach for deriving $\left|0(\beta)\right\rangle\colon$
$\left|0(\beta)\right\rangle\left\langle 0(\beta)\right|$ in doubled Hilbert
space should be such constructed that its partial trace over the tilde freedom
may lead to density operator $\rho_{c}$ of the system. In the following we
shall employ the technique of integration within an ordered product (IWOP) of
operators 3 ; 4 ; 5 to realize this goal.
## III Derivation of $\left|0(\beta)\right\rangle$ in Eq.(4) via the new
approach
Using the normally ordered expansion formula 6
$e^{-\beta\hbar\omega
a^{\dagger}a}=\colon\exp\left\\{\left(e^{-\beta\hbar\omega}-1\right)a^{\dagger}a\right\\}\colon,$
(11)
(where the symbol $\colon\colon$ denotes the normal ordering form of
operator), and the IWOP technique we have
$\displaystyle\colon\exp\\{\left(e^{-\beta\hbar\omega}-1\right)a^{\dagger}a\\}\colon$
(12) $\displaystyle=$ $\displaystyle\int\frac{d^{2}z}{\pi}\colon
e^{-\left|z\right|^{2}+z^{\ast}a^{{\dagger}}e^{-\beta\hbar\omega/2}+zae^{-\beta\hbar\omega/2}-a^{{\dagger}}a}\colon.$
Remembering the ordering form of vacuum projector operator
$\left|0\right\rangle\left\langle 0\right|=\colon e^{-a^{{\dagger}}a}\colon$,
we can rewrite Eq.(12) as
$\displaystyle\colon\exp\\{\left(e^{-\beta\hbar\omega}-1\right)a^{\dagger}a\\}\colon$
(13) $\displaystyle=$
$\displaystyle\int\frac{d^{2}z}{\pi}e^{z^{\ast}a^{{\dagger}}e^{-\beta\hbar\omega/2}}\left|0\right\rangle\left\langle
0\right|e^{zae^{-\beta\hbar\omega/2}}\left\langle\tilde{z}\right.\left|\tilde{0}\right\rangle\left\langle\tilde{0}\right.\left|\tilde{z}\right\rangle,$
where $\left|\tilde{z}\right\rangle$ is the coherent state 7 ; 8 in fictitous
mode
$\text{ \ \
}\left|\tilde{z}\right\rangle=\exp\left(z\tilde{a}^{\dagger}-z^{\ast}\tilde{a}\right)\left|\tilde{0}\right\rangle,\tilde{a}\left|\tilde{z}\right\rangle=z\left|\tilde{z}\right\rangle,\left\langle\tilde{0}\right.\left|\tilde{z}\right\rangle=e^{-\left|z\right|^{2}/2\text{\
}}.$ (14)
Further, multipling the factor $\left(1-e^{-\beta\hbar\omega}\right)$ to both
sides of Eq.(13) and using the completeness of coherent state
$\int\frac{d^{2}z}{\pi}\left|\tilde{z}\right\rangle\left\langle\tilde{z}\right|=1$,
we have
$\displaystyle\left(1-e^{-\beta\hbar\omega}\right)\times\text{ Eq}.(\ref{13})$
(15) $\displaystyle=$
$\displaystyle\left(1-e^{-\beta\hbar\omega}\right)\int\frac{d^{2}z}{\pi}\text{
}\left\langle\tilde{z}\right|e^{z^{\ast}a^{{\dagger}}e^{-\beta\hbar\omega/2}}\left|0\tilde{0}\right\rangle\left\langle
0\tilde{0}\right|e^{zae^{-\beta\hbar\omega/2}}\left|\tilde{z}\right\rangle$
$\displaystyle=$
$\displaystyle\left(1-e^{-\beta\hbar\omega}\right)\int\frac{d^{2}z}{\pi}\text{
}\left\langle\tilde{z}\right|e^{a^{{\dagger}}\tilde{a}^{{\dagger}}e^{-\beta\hbar\omega/2}}\left|0\tilde{0}\right\rangle\left\langle
0\tilde{0}\right|e^{a\tilde{a}e^{-\beta\hbar\omega/2}}\left|\tilde{z}\right\rangle$
$\displaystyle=$
$\displaystyle\widetilde{\mathtt{tr}}\left[0(\beta)\left\langle
0(\beta)\right|\right],$
where
$\left|0(\beta)\right\rangle=\sqrt{1-e^{-\beta\hbar\omega}}\exp\left[a^{\dagger}\tilde{a}^{\dagger}e^{-\beta\hbar\omega/2}\right]\left|0\tilde{0}\right\rangle,$
(16)
which is the same as Eq.(4). Thus, according to Eq.(10), from the chaotic
field operator we have derived the thermo vacuum state, this is a new
approach, which has been overlooked in the literature before.
## IV Generalized thermo vacuum state
$\left|\phi\left(\beta\right)\right\rangle$
Now, we consider a degenerate parametric amplifier whose Hamiltonian is
$H=\omega a^{\dagger}a+\kappa^{\ast}a^{\dagger 2}+\kappa a^{2},$ (17)
whose normalized density operator $\rho$ is
$\rho\left(\mathtt{tr}e^{-\beta H}\right)=e^{-\beta H}=e^{-\beta\left(\omega
a^{\dagger}a+\kappa^{\ast}a^{\dagger 2}+\kappa a^{2}\right)}.$ (18)
Recalling that
$\frac{1}{2}\left(a^{\dagger}a+\frac{1}{2}\right),\frac{1}{2}a^{\dagger 2}$
and $\frac{1}{2}a^{2}$ obey the SU(1,1) Lie algebra, thus we can derive a
generalized identity of operator 9 ; 10 as follows:
$\displaystyle\exp\left[fa^{\dagger}a+ga^{\dagger 2}+ka^{2}\right]$ (19)
$\displaystyle=$ $\displaystyle e^{-f/2}e^{\frac{ga^{\dagger
2}}{\mathcal{D}\coth\mathcal{D}-f}}e^{\left(a^{\dagger}a+\frac{1}{2}\right)\ln\frac{\mathcal{D}\text{sech}\mathcal{D}}{\mathcal{D}-f\tanh\mathcal{D}}}e^{\frac{ka^{2}}{\mathcal{D}\coth\mathcal{D}-f}},$
where we have set $\mathcal{D}^{2}=f^{2}-4kg.$ Thus Comparing Eq.(18) with
Eq.(19) we can recast Eq.(18) into the following form
$\left(\mathtt{tr}e^{-\beta H}\right)\rho=\sqrt{\lambda
e^{\beta\omega}}\exp\left[E^{\ast}a^{\dagger
2}\right]\exp\left[a^{\dagger}a\ln\lambda\right]\exp\left[Ea^{2}\right],$ (20)
where we have set
$\displaystyle D^{2}$ $\displaystyle=$
$\displaystyle\omega^{2}-4\left|\kappa\right|^{2},$ $\displaystyle\lambda$
$\displaystyle=$ $\displaystyle\frac{D}{\omega\sinh\beta D+D\cosh\beta D},$
(21) $\displaystyle E$ $\displaystyle=$
$\displaystyle\frac{-\lambda}{D}\kappa\sinh\beta D.$
Further, using the formula in Eqs.(11) and (13), we have
$\displaystyle\left(\mathtt{tr}e^{-\beta H}\right)\rho$ (22) $\displaystyle=$
$\displaystyle\sqrt{\lambda e^{\beta\omega}}\exp\left[E^{\ast}a^{\dagger
2}\right]\colon\exp\left\\{\left(\lambda-1\right)a^{\dagger}a\right\\}\colon\exp\left[Ea^{2}\right]$
$\displaystyle=$ $\displaystyle\sqrt{\lambda
e^{\beta\omega}}\int\frac{d^{2}z}{\pi}e^{E^{\ast}a^{\dagger
2}+\sqrt{\lambda}z^{\ast}a^{{\dagger}}}\left|0\right\rangle\left\langle
0\right|e^{Ea^{2}+\sqrt{\lambda}za}\left\langle\tilde{z}\right.\left|\tilde{0}\right\rangle\left\langle\tilde{0}\right.\left|\tilde{z}\right\rangle$
$\displaystyle=$ $\displaystyle\sqrt{\lambda
e^{\beta\omega}}\int\frac{d^{2}z}{\pi}\left\langle\tilde{z}\right|e^{E^{\ast}a^{\dagger
2}+\sqrt{\lambda}z^{\ast}a^{{\dagger}}}\left|0\tilde{0}\right\rangle\left\langle
0\tilde{0}\right.e^{Ea^{2}+\sqrt{\lambda}za}\left|\tilde{z}\right\rangle$
$\displaystyle=$ $\displaystyle\sqrt{\lambda
e^{\beta\omega}}\widetilde{\mathtt{tr}}\left[e^{E^{\ast}a^{\dagger
2}+\sqrt{\lambda}a^{{\dagger}}\tilde{a}^{{\dagger}}}\left|0\tilde{0}\right\rangle\left\langle
0\tilde{0}\right.e^{Ea^{2}+\sqrt{\lambda}a\tilde{a}}\right]$
$\displaystyle\equiv$ $\displaystyle\left(\mathtt{tr}e^{-\beta
H}\right)\widetilde{\mathtt{tr}}\left[\left|\phi\left(\beta\right)\right\rangle\left\langle\phi\left(\beta\right)\right|\right],$
which indicates that the pure state in doubled Fock space for Hamiltonian in
(17) can be considered as
$\left|\phi\left(\beta\right)\right\rangle=\sqrt{\frac{\lambda^{1/2}e^{\beta\omega/2}}{Z\left(\beta\right)}}e^{E^{\ast}a^{\dagger
2}+\sqrt{\lambda}a^{{\dagger}}\tilde{a}^{{\dagger}}}\left|0\tilde{0}\right\rangle,$
(23)
where the partition function $Z\left(\beta\right)$ is determined by
$Z\left(\beta\right)=\mathtt{tr}e^{-\beta H}=\mathtt{tr}\left\\{\sqrt{\lambda
e^{\beta\omega}}e^{E^{\ast}a^{\dagger 2}}\colon
e^{\left(\lambda-1\right)a^{\dagger}a}\colon e^{Ea^{2}}\right\\}.$ (24)
Using $\int\frac{d^{2}z}{\pi}\left|z\right\rangle\left\langle z\right|=1$ and
the integral formula 11
$\displaystyle\int\frac{d^{2}z}{\pi}\exp\left(\zeta\left|z\right|^{2}+\xi
z+\eta z^{\ast}+fz^{2}+gz^{\ast 2}\right)$ (25) $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{\zeta^{2}-4fg}}\exp\left[\frac{-\zeta\xi\eta+\xi^{2}g+\eta^{2}f}{\zeta^{2}-4fg}\right],$
whose convergent condition is Re$\left(\zeta\pm f\pm g\right)<0,\
$Re$\left(\frac{\zeta^{2}-4fg}{\zeta\pm f\pm g}\right)<0,$ we can get
$Z\left(\beta\right)=\sqrt{\frac{\lambda
e^{\beta\omega}}{\left(1-\lambda\right)^{2}-4\left|E\right|^{2}}}=\frac{e^{\beta\omega/2}}{2\sinh\left(\beta
D/2\right)}.$ (26)
Thus the normalized state for Eq.(18) in doubled Fock space is given by
$\left|\phi\left(\beta\right)\right\rangle=\sqrt{2\lambda^{1/2}\sinh\left(\beta
D/2\right)}e^{E^{\ast}a^{\dagger
2}+\sqrt{\lambda}a^{{\dagger}}\tilde{a}^{{\dagger}}}\left|0\tilde{0}\right\rangle,$
(27)
and the internal energy of system is
$\left\langle H\right\rangle_{e}=-\frac{\partial}{\partial\beta}\ln
Z\left(\beta\right)=\frac{D\coth\left(\beta D/2\right)-\omega}{2},$ (28)
which leads to the distribution of entropy
$\displaystyle S$ $\displaystyle=$
$\displaystyle-k\mathtt{tr}\left(\rho\ln\rho\right)=\frac{1}{T}\left\langle
H\right\rangle_{e}+k\ln Z\left(\beta\right)$ (29) $\displaystyle=$
$\displaystyle\frac{D}{2T}\coth\left(\beta
D/2\right)-k\ln\left[2\sinh\left(\beta D/2\right)\right].$
In particular, when $\kappa=0,$ leading to $D=\omega,$ so Eq.(27) just reduces
to $\left|0(\beta)\right\rangle$ with $\omega\rightarrow\hbar\omega$, and
Eqs.(28) and (29) become
$\frac{\omega}{2}\left(\coth\left(\beta\omega/2\right)-1\right)$ and
$\frac{\omega}{2T}\coth\left(\beta\omega/2\right)-k\ln\left[2\sinh\left(\beta\omega/2\right)\right],$
respectively, as expected 12 . Thus by virtue of the technique of IWOP we can
display the partial trace method to deduce the pure state representation for
some new density operators of light field at finite temperature.
## V Applications of generalized thermo vacuum state
### V.1 Internal energy distribution of the system
As an application of Eq.(27), we can evaluate the each term’s contribution to
energy in Hamiltonian. Based on the idea from Eq.(1) to (2), the system
operator $A$ can be calculated as $\left\langle
A\right\rangle_{e}=\left\langle\phi\left(\beta\right)\right|A\left|\phi\left(\beta\right)\right\rangle.$
Thus uisng the completeness of coherent state and the integral formula Eq.(25)
as well as noticing
$\left\langle\phi\left(\beta\right)\right.\left|\phi\left(\beta\right)\right\rangle=1$,
$\left(1-\lambda\right)^{2}-4\left|E\right|^{2}=4\lambda\sinh^{2}\left(\beta
D/2\right),$ then we have
$\displaystyle\left\langle\omega a^{\dagger}a\right\rangle_{e}$
$\displaystyle=$
$\displaystyle\omega\left\langle\phi\left(\beta\right)\right|\left(aa^{\dagger}-1\right)\left|\phi\left(\beta\right)\right\rangle$
(30) $\displaystyle=$ $\displaystyle 2\omega\lambda^{1/2}\sinh\left(\beta
D/2\right)\frac{\partial}{\partial\lambda}$
$\displaystyle\times\int\frac{d^{2}z}{\pi}e^{-\left(1-\lambda\right)\left|z\right|^{2}+Ez^{2}+E^{\ast}z^{\ast
2}}-\omega$ $\displaystyle=$
$\displaystyle\lambda^{1/2}\frac{\partial}{\partial\lambda}\frac{2\omega\sinh\left(\beta
D/2\right)}{\sqrt{\left(1-\lambda\right)^{2}-4\left|E\right|^{2}}}-\omega$
$\displaystyle=$
$\displaystyle\frac{\omega}{2}\left(\frac{\omega}{D}\coth\beta D/2-1\right),$
and
$\displaystyle\left\langle\kappa^{\ast}a^{\dagger 2}\right\rangle_{e}$
$\displaystyle=$ $\displaystyle 2\kappa^{\ast}\lambda^{1/2}\sinh\left(\beta
D/2\right)\frac{\partial}{\partial
E^{\ast}}\frac{1}{\sqrt{\left(1-\lambda\right)^{2}-4EE^{\ast}}}$ (31)
$\displaystyle=$
$\displaystyle-\frac{\left|\kappa\right|^{2}}{D}\allowbreak\coth\beta D/2,$
as well as
$\left\langle\kappa
a^{2}\right\rangle_{e}=-\frac{\left|\kappa\right|^{2}}{D}\allowbreak\coth\beta
D/2.$ (32)
From Eqs.(31) and (32) we see that the two items ($\kappa^{\ast}a^{\dagger 2}$
and $\kappa a^{2})$ have the same energy contributions to the system, as
expected. Combing Eqs.(30)-(32) we can also check Eq.(28).
### V.2 Wigner function and quantum tomogram
The Wigner function plays an important role in studying quantum optics and
quantum statistics 13 ; 14 . It gives the most analogous description of
quantum mechanics in the phase space to classical statistical mechanics of
Hamilton systems and is also a useful measure for studying the nonclassical
features of quantum states. In addition, the Wigner function can be
reconstructed by measuring several quadratures
$P\left(\hat{x}_{\theta}=\hat{x}\cos\theta+\hat{p}\sin\theta\right)$ with a
homodyne detection and then applying an inverse Radon transform—quantum
homodyne tomography 15 . Using Eq.(26) one can calculate conveniently the
Wigner function and quantum tomogram. Recalling that the single-mode Wigner
operator $\Delta\left(z\right)$ in coherent state representation is given by
16 ; 17
$\Delta\left(\alpha\right)=e^{2\left|\alpha\right|^{2}}\int\frac{d^{2}z}{\pi^{2}}\left|z\right\rangle\left\langle-z\right|e^{-2\left(z\alpha^{\ast}-z^{\ast}\alpha\right)},$
(33)
thus the Wigner function is
$\displaystyle W\left(\alpha\right)$ $\displaystyle=$
$\displaystyle\left\langle\phi\left(\beta\right)\right|\Delta\left(\alpha\right)\left|\phi\left(\beta\right)\right\rangle$
(34) $\displaystyle=$ $\displaystyle
e^{2\left|\alpha\right|^{2}}\int\frac{d^{2}z}{\pi^{2}}\left\langle\phi\left(\beta\right)\right.\left|z\right\rangle\left\langle-z\right.\left|\phi\left(\beta\right)\right\rangle
e^{-2\left(z\alpha^{\ast}-z^{\ast}\alpha\right)}$ $\displaystyle=$
$\displaystyle\frac{\tanh\left(\beta
D/2\right)}{\pi}e^{-\frac{2}{D}\left[\omega\left|\alpha\right|^{2}+\left(\kappa\alpha^{2}+\kappa^{\ast}\alpha^{\ast
2}\right)\right]\tanh\left(\beta D/2\right)},$
where we have noticed
$\left(1+\lambda\right)^{2}-4\left|E\right|^{2}=\frac{4D\cosh^{2}\beta
D/2}{\omega\sinh\beta D+D\cosh\beta D}$ and used the integral formula (25). In
particular, when $\kappa=0,$ Eq.(34) reduces to
$W\left(\alpha\right)=\frac{\tanh\left(\beta
D/2\right)}{\pi}\exp\left\\{-2\left|\alpha\right|^{2}\tanh\left(\beta
D/2\right)\right\\},$ (35)
which is just the Wigner function of thermo vacuum state
$\left|0\left(\beta\right)\right\rangle$.
On the other hand, we can derive the tomography (Radon transform of Wigner
function) of the system by using Eq.(26). Recalling that, for single-mode
case, the Radon transform of the Wigner operator is just a pure state density
operator 18 ,
$\int\delta\left(q-fq^{\prime}-gp^{\prime}\right)\Delta\left(\alpha\right)\mathtt{d}q^{\prime}\mathtt{d}p^{\prime}=\left|q\right\rangle_{f,g\text{
}f,g}\left\langle q\right|,$ (36)
where $\alpha=(q+\mathtt{i}p)/\sqrt{2},$ and $\left(f,g\right)$ are real,
$\left|q\right\rangle_{f,g}=C\exp\left[\frac{\sqrt{2}}{A}qa^{{\dagger}}-\frac{e^{\mathtt{i}2\varphi}}{2}a^{{\dagger}2}\right]\left|0\right\rangle,$
(37)
and
$C=\left[\pi\left(f^{2}+g^{2}\right)\right]^{-1/4}\exp\\{-q^{2}/[2(f^{2}+g^{2})]\\},A=f-\mathtt{i}g=\sqrt{f^{2}+g^{2}}e^{-\mathtt{i}\varphi}.$
Eq.(37) is named as the intermediate coordinate-momentum representation 18 .
From Eq.(36) and Eq.(26) it then follows that the tomogram can be calculated
as
$\mathcal{R}\left(q\right)_{f,g}\equiv\left\langle\phi\left(\beta\right)\right.\left|q\right\rangle_{f,g\text{
}f,g}\left\langle
q\right.\left|\phi\left(\beta\right)\right\rangle=\int\frac{d^{2}z}{\pi}\left|{}_{f,g}\left\langle
q\right|\left\langle\tilde{z}\right.\left|\phi\left(\beta\right)\right\rangle\right|^{2}.$
(38)
Then submitting Eqs.(37) and (26) into Eq.(38), we obtain
$\displaystyle\mathcal{R}\left(q\right)_{f,g}$ $\displaystyle=$
$\displaystyle\frac{2\sinh\left(\beta
D/2\right)}{C^{-2}\sqrt{\lambda+\left|G\right|^{2}/\lambda}}$ (39)
$\displaystyle\times\exp\left\\{2q^{2}\left[\frac{1-\lambda\text{Re}G^{-1}}{\left|A\right|^{2}\left(\lambda+\left|G\right|^{2}/\lambda\right)}+\text{Re}\frac{2E}{A^{2}G}\right]\right\\},$
where we have used Eq.(25) and set $G=1+2e^{\mathtt{i}2\varphi}E$. Eq.(39) is
the positive-definite tomogram, as expected. As far as we are concerned, this
result has not been reported in the literature before.
In sum, by virtue of the technique of integration within an ordered product
(IWOP) of operators we have presented a new approach for deriving generalized
thermo vacuum state which is simpler in form that the result by using the
Umezawa-Takahashi approach, in this way the thermo field dynamics can be
developed.
Appendix:
As a comparison of our new approach with the usual way of deriving thermo
vacuum state in TFD theory, in this appendix, we shall derive the explicit
expression of $\left|\phi\left(\beta\right)\right\rangle$ by diagonalizing
Hamiltonian (17). For this purpose, we introduce two unitary operators: one is
a single mode squeezing operator,
$S=\exp\left(\frac{\nu}{\mu}\frac{a^{{\dagger}2}}{2}\right)\exp\left[\left(a^{{\dagger}}a+\frac{1}{2}\right)\ln\frac{1}{\mu}\right]\exp\left(-\frac{\nu}{\mu}\frac{a^{2}}{2}\right),$
(A1)
where $\mu$ and $\nu$ are squeezing parameters satisfying the unitary-modulate
condition $\mu^{2}-\nu^{2}=1$; and the other is a rotational operator,
$R=\exp\left(\frac{i\phi}{2}a^{\dagger}a\right),$ which lead to the following
transformations,
$\displaystyle SaS^{{\dagger}}$ $\displaystyle=\mu a-\nu a^{{\dagger}},\text{
}Sa^{{\dagger}}S^{{\dagger}}=\mu a^{{\dagger}}-\nu a,$ (A2) $\displaystyle
S^{{\dagger}}aS$ $\displaystyle=\mu a+\nu a^{{\dagger}},\text{
}S^{{\dagger}}a^{{\dagger}}S=\mu a^{{\dagger}}+\nu a,$ (A3)
and
$RaR^{{\dagger}}=ae^{-\frac{i\phi}{2}},\text{
}Ra^{{\dagger}}R^{{\dagger}}=a^{{\dagger}}e^{\frac{i\phi}{2}}.$ (A4)
Thus, under the unitary transform $SR$, we have (setting
$\kappa=\left|\kappa\right|e^{i\phi}$)
$\displaystyle H^{\prime}$ $\displaystyle=SRHR^{{\dagger}}S^{{\dagger}}=\omega
Sa^{\dagger}aS^{{\dagger}}+\left|\kappa\right|Sa^{\dagger
2}S^{{\dagger}}+\left|\kappa\right|Sa^{2}S^{{\dagger}}$
$\displaystyle=\left(\omega\mu^{2}+\omega\nu^{2}-4\left|\kappa\right|\mu\nu\right)a^{{\dagger}}a+\left(\omega\nu-2\left|\kappa\right|\mu\right)\nu$
$\displaystyle+\left(\left|\kappa\right|\mu^{2}+\left|\kappa\right|\nu^{2}-\omega\mu\nu\right)\left(a^{{\dagger}2}+a^{2}\right).$
(A5)
In order to diagonalize Eq.(A5), noticing $\mu^{2}-\nu^{2}=1$ and making
$\left|\kappa\right|\left(\mu^{2}+\nu^{2}\right)-\omega\mu\nu=0,$ whose
solution is given by
$\mu^{2}=\frac{\omega}{2\omega^{\prime}}+\frac{1}{2},\nu^{2}=\allowbreak\frac{\omega}{2\omega^{\prime}}-\frac{1}{2},\omega^{\prime}=\sqrt{\omega^{2}-4\left|\kappa\right|^{2}}.$
(A6)
then Eq.(A5) becomes
$H^{\prime}=\omega^{\prime}\left(a^{\dagger}a+\frac{1}{2}\right)-\frac{1}{2}\omega.$
(A7)
i.e., the diagonalization of Hamiltonian is completed.
According to Eq.(16), the thermal vacuum state corresponding to density
operator $\rho^{\prime}=e^{-\beta H^{\prime}}/\mathtt{tr}\left(e^{-\beta
H^{\prime}}\right)=e^{-\beta\omega^{\prime}a^{\dagger}a}/\mathtt{tr}\left(e^{-\beta\omega^{\prime}a^{\dagger}a}\right)$
is given by
$\left|0(\beta)\right\rangle=\sqrt{1-e^{-\beta\omega^{\prime}}}\exp\left[a^{\dagger}\tilde{a}^{\dagger}e^{-\beta\omega^{\prime}/2}\right]\left|0\tilde{0}\right\rangle.$
(A8)
Thus the generalized thermal vacuum state is
$\displaystyle\left|\phi^{\prime}(\beta)\right\rangle$
$\displaystyle=R^{{\dagger}}S^{{\dagger}}\left|0(\beta)\right\rangle$
$\displaystyle=\sqrt{1-e^{-\beta\omega^{\prime}}}R^{{\dagger}}S^{{\dagger}}\exp\left[a^{\dagger}\tilde{a}^{\dagger}e^{-\beta\omega^{\prime}/2}\right]\left|0\tilde{0}\right\rangle.$
(A9)
Using the transformation in (A3), (A4) and noticing Eq.(A1) as well as
$\frac{\nu}{\mu}=\sqrt{\frac{\omega-\omega^{\prime}}{\omega+\omega^{\prime}}}$,
we can finally put Eq.(A9) into the following form
$\displaystyle\left|\phi^{\prime}(\beta)\right\rangle$
$\displaystyle=\sqrt{\left(1-e^{-\beta\omega^{\prime}}\right)/\mu}\exp\left[\frac{1}{\mu}e^{-\left(\beta\omega^{\prime}+i\phi\right)/2}a^{{\dagger}}\tilde{a}^{\dagger}\right.$
$\displaystyle\left.-\frac{\nu e^{-i\phi}}{2\mu}a^{{\dagger}2}+\frac{\nu
e^{-\beta\omega^{\prime}}}{2\mu}\tilde{a}^{\dagger
2}\right]\left|0\tilde{0}\right\rangle.$ (A10)
Comparing Eq.(A10) with Eq.(27), we see that Eq.(27) is simpler in form than
that in Eq.(A10).
ACKNOWLEDGEMENT: Worked supported by the National Natural Science Foundation
of China under Grant 10775097 and 10874174. E-mail: hlyun2008@126.com.
## References
* (1) Y. Takahashi, H. Umezawa, Collective Phenomena 2 (1975) 55\.
* (2) Memorial Issue for H. Umezawa, Int. J. Mod. Phys. B 10 (1996) 1563.
* (3) Hong-yi Fan, Hai-liang Lu and Yue Fan, Ann. Phys. 321 (2006) 480.
* (4) A. Wünsche, J. Opt. B: Quant. Semiclass. Opt. 1 (1999) R11.
* (5) Hong-yi Fan, J. Opt. B: Quant. Semiclass. Opt. 5 (2003) R147.
* (6) W. H. Louisell, Quantum Statistical Properties of Radiation (New York: John Wiley 1973).
* (7) R. J. Glauber, Phys. Rev. 130 (1963) 2529; R. J. Glauber, Phys. Rev. 131 (1963) 2766.
* (8) J. R. Klauder and B. S. Skargerstam, Coherent States (World Scientific, Singapore 1985).
* (9) S. M. Barnett and P. M. Radmore, Methods in Theoretical Quantum Ooptics (Clarendon press, Oxford 1997).
* (10) Li-yun Hu and Hong-yi Fan, Commun. Theor. Phys. (Beijing, China) 51 (2009) 321.
* (11) R. R. Puri, Mathematical Methods of Quantum Optics (Springer-Verlag, Berlin, 2001), Appendix A.
* (12) R. K. Pathria, Statistical Mechannics, 2nd ed. Elsevier (World Scientific, Singapore 2001) Pte Ltd.
* (13) E. Wigner, Phys. Rev. 40 (1932) 749.
* (14) Wolfgang P. Schleich, Quantum Optics in Phase Space, (Wiley-VCH, Birlin 2001).
* (15) A. I. Lvovsky and M. G. Raymer, Rev. Mod. Phys. 81 (2009) 299 and references therein.
* (16) M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge: Cambidge University Press, 1997).
* (17) Li-yun Hu and Hong-yi Fan, J. Opt. Soc. Am. B 25 (2008) 1955.
* (18) Hong-yi Fan and Hai-ling Chen, Chin. Phys. Lett. 18 (2001) 850.
|
arxiv-papers
| 2009-04-28T02:43:53 |
2024-09-04T02:49:02.207239
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Li-yun Hu and Hong-yi Fan",
"submitter": "Liyun Hu",
"url": "https://arxiv.org/abs/0904.4297"
}
|
0904.4298
|
# A Dynamic Programming Implemented $2\times 2$ non-cooperative Game Theory
Model for ESS Analysis
Chen Shi ,Fang Yuan Department of Entomology and Program of Operations
Research, the Pennsylvania State University, PA, 16801, USADepartment of
Computer Sciences and Engineering and Program of Operations Research,the
Pennsylvania State University, PA, 16801, USA
###### Abstract
Game Theory has been frequently applied in biological research since 1970s.
While the key idea of Game Theory is Nash Equilibrium, it is critical to
understand and figure out the payoff matrix in order to calculate Nash
Equilibrium. In this paper we present a dynamic programming implemented method
to compute $2\times 2$ non-cooperative finite resource allocation game’s
payoff matrix. We assume in one population there exists two types of
individuals, aggressive and non-aggressive and each individual has equal and
finite resource. The strength of individual could be described by a function
of resource consumption in different development stages. Each individual
undergoes logistic growth hence we divide the development into three stages:
initialization, quasilinear growth and termination. We first discuss the
theoretical frame of how to dynamic programming to calculate payoff matrix
then give three numerical examples representing three different types of
aggressive individuals and calculate the payoff matrix for each of them
respectively. Based on the numerical payoff matrix we further investigate the
evolutionary stable strategies (ESS) of the games.
Keywords: Dynamic Programming, Finite Resource Allocation, Game Theory,
Evolutionary Stable Strategy (ESS)
## 1 Introduction
Game Theory is originally introduced by John von Neumann and Oskar Morgenstern
(Neumann J and Morgenstern O. 1944) in 1944. Later John F. Nash (Nash JF.
1950) has made significant contribution to Game Theory by introducing Nash
Equilibrium and proving every finite game has mixed Nash Equilibrium, which
becomes the core idea of Game Theory. Since then Game Theory has been widely
applied in various disciplines such as social sciences (most notably
economics), political science, international relations, computer science and
philosophy. So far eight game theorists have won Nobel prizes in economics.
John M. Smith in 1973 formalized another central concept in Game Theory called
the Evolutionary Stable Strategy (ESS, Smith JM. 1973). Various work has been
done using ESS to investigate the behavior and evolutionary path of animals (
Takada T and Kigami J. 1991, Crowley PH. 2000, Nakamaru M and Sasaki A. 2003,
Matsumura S and Hayden TJ. 2005, Wolf N and Mangel M. 2007, Krivan V, Cressman
R and Schneider C. 2008, Hamblin S and Hurd PL. 2009).
The simplest yet most common case of ESS game is $2\times 2$ non-cooperative
game. Multiple player game is more realistic but according to Poincare-
Bendixson theorem, multiplayer dynamic system would result in chaos hence we
only consider $2\times 2$ games (Yi T et al. 1997). While the key idea in Game
Theory is Nash Equilibrium, we must define the payoff matrix very carefully in
order to compute Nash Equilibrium. However, most of the research regarding ESS
has arbitrarily assigned payoff matrix. To overcome this problem, we want to
use some realistic quantities to define payoff matrix for the game. M-Gibbons
has shown a neighbor intervention model (Mesterton-Gibbons M, Sherratt TN.
2008) and Luther further discussed whether food is worth fighting for (Luther
RM, Broom M and Ruxton GD. 2007). Just has studied the aggressive loser in an
ESS game (Just W, Morris MR and Sun X. 2006). Inspired by their ideas, we will
use food source to compute the payoff of aggressive and non-aggressive players
in a $2\times 2$ game.
While we assume the food source is finite and equivalent to any member in the
population, it is natural to use dynamic programming (DP) to figure out the
optimal foraging strategy as a resource allocation problem. Animal growth rate
is a logistic curve and we divide the whole growth process into three distinct
stages: initialization, quasilinear growth and termination. In different
stages, the payoff is a linear function of food source with different slope
and our goal is to determine the maximum total payoff at the end of growth
using dynamic programming.
Al-Tamimi has suggested using dynamic programming to implement Game Theory
model for designing (Al-Tamimi A, Abu-Khalaf M and Lewis FL. 2007) but their
model is zero-sum. We will first present a more realistic general sum game
framework, then discuss three different types of aggressive players, calculate
the numerical payoff matrix for each case and determine the ESS for them. Our
work is the first of this kind to combine dynamic programming and Game Theory,
two different optimization tools together to solve real biological problem.
## 2 Defining the Model
A typical $2\times 2$ non-cooperative general sum game has the following form
where $P_{ji}$ defines the payoff of player _i_ in _j_ th strategy
combination.
Strategy | Non-aggressive | Aggressive
---|---|---
Non-aggressive | $(P_{111},P_{112})$ | $(P_{121},P_{122})$
Aggressive | $(P_{211},P_{212})$ | $(P_{221},P_{222})$
Table 1. Payoff Matrix of non-cooperative general sum game
$P_{ijk}$ denotes the payoff of player k when it uses strategy i and its
components uses j. Here we have two types of strategies: aggressive (2) and
non-aggressive (1). Aggressive players would fight their neighbor and try to
get their resources. Non-aggressive players only concentrate on their own food
source and never fight back even when they are attacked. However, if two
aggressive players meet, it would result in a severe fight and both players
are terribly hurt. This definition is similar to that of ”Chicken-Dare” or
”Hawk-Dove” game. The Nash Equilibrium is defined as:
$\textbf{Definition 1. }x\in\Theta$ is a Nash Equilibrium if
$x\in\tilde{\beta}(x)$, where $\Theta$ is the mixed strategy space and
$\tilde{\beta}$ is the mixed strategy best response correspondence.
Because this is a $2\times 2$ finite symmetric game,
$\Delta^{NE}\neq\emptyset$ by Kakutani’s Theorem. Next we switch to a
population perspective and define Evolutionary Stable Strategy (ESS) as
follows:
$\textbf{Definition 2. }x\in\Delta$ is an ESS if for every strategy $y\neq x$
there exists some $\bar{\epsilon}_{y}\in(0,1)$ such that $u[x,\epsilon
y+(1-\epsilon)x]>u[y,\epsilon y+(1-\epsilon)x]$ holds for all
$\epsilon\in(0,\bar{\epsilon}_{y})$ where $\epsilon$ is the proportion of
mutant strategy.
Basically, ESS is a subset of Nash Equilibrium. We use Maynard’s criterion to
test whether a Nash Equilibrium is an ESS:
$\textbf{Theorem 1. }\Delta^{ESS}=\\{x\in\Delta^{NE}:u(y,y)<y(x,x),\forall
y\in\beta^{*}(x),y\neq x\\}.$
To perform these analyses, we must first define the payoff matrix of our
original game. We use DP to determine the numerical payoff values for the
four strategy combinations. Assume each player has a total of _N_ food sources
for the entire development period and that in each stage at least _1_ resource
must be consumed in order to maintain basal metabolism. As discussed
before, the development period is divided into three stages: growth
initialization, quasilinear growth and growth termination, hence the player
can consume $1\cdots N-2$ resources in each stage. Although the growth is
logistic and nonlinear, we use a linear approximation in each stage as
follows, where $y$ is the payoff in the stage and $x$ is the resource consumed:
$y=\begin{cases}ax,&\text{Growth Initialization},\\\ bx,&\text{Quasilinear
Growth},\\\ cx,&\text{Growth Termination}.\end{cases}$ (1)
Because the logistic curve has a sigmoid shape and is usually symmetric, it is
reasonable to set $a=c$ to reduce computational intensity. The coefficients
$a$ and $b$ have the biological meaning of the efficiency of converting food
sources into the animal's own energy, and in our model $b>a$. The DP model is written as
follows:
$\text{Maximize }z=\sum^{3}_{i=1}r_{i}x_{i}$ $\text{Subject to
}\sum^{3}_{i=1}x_{i}=N,\;x_{i}\geq 1$
The backward DP Formulation for this model is:
OVF: $f_{k}(x)=$ optimal return for the allocation of $x$ units of resource to
stages $k\cdots 3$.
ARG: $(k,x)=$ (stage, units of resource consumed).
OPF: $P_{k}(x)=$ units of resource consumed at stage $k$.
RR: $f_{k}(x)=\max_{x_{k}=1,\cdots,x}(r_{k}x_{k}+f_{k+1}(x-x_{k}))$,
$x=1\cdots N-2$
BC: $f_{3}(x)=r_{3}x$
ANS: $f_{1}(N)$
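As an illustration, the following Python sketch implements the backward recursion above for the linear stage returns of Eq. (1); the coefficients $a=c=1$, $b=2$ and the budget $N=10$ are illustrative values (the text only requires $b>a$, and $N=10$ is consistent with the allocations used later).

```python
# A minimal sketch of the backward DP recursion above, assuming the linear
# stage returns of Eq. (1) with illustrative coefficients a = c = 1, b = 2,
# and a total budget of N = 10 units (at least 1 unit consumed per stage).

def solve_dp(returns, total):
    """returns: list of per-stage return functions r_k(x); total: N units."""
    n_stages = len(returns)
    f = [{} for _ in range(n_stages + 1)]     # f[k][x]: best value for stages k..last
    best = [{} for _ in range(n_stages)]      # best[k][x]: optimal x_k at that state
    for x in range(0, total + 1):
        f[n_stages][x] = 0.0                  # boundary: nothing after the last stage
    for k in range(n_stages - 1, -1, -1):     # backward recursion (RR)
        stages_left = n_stages - k
        for x in range(stages_left, total + 1):   # keep >= 1 unit per remaining stage
            choices = [(returns[k](xk) + f[k + 1][x - xk], xk)
                       for xk in range(1, x - (stages_left - 1) + 1)]
            f[k][x], best[k][x] = max(choices)
    alloc, x = [], total                      # recover the optimal allocation P_k(x)
    for k in range(n_stages):
        alloc.append(best[k][x])
        x -= best[k][x]
    return f[0][total], alloc

a, b, N = 1.0, 2.0, 10
value, alloc = solve_dp([lambda x: a * x, lambda x: b * x, lambda x: a * x], N)
print(value, alloc)   # (18.0, [1, 8, 1]): with b > a the quasilinear stage gets the most
```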
For the non-aggressive/non-aggressive strategy combination, we assume the two
players do not interfere with each other. In this case we only need to solve the DP
for one of them; by symmetry, the other player should adopt the same strategy
to maximize its total payoff. The cost in each stage and state is shown in
Table 2, and we can calculate the optimal value using DP.
For the non-aggressive/aggressive strategy combination, the cost table is
similar to Table 2; the difference is that we must define different $a$ and $b$
values for the two strategies. The same applies to the aggressive/aggressive
combination. Once we have determined the payoff values for each
combination, we can complete the payoff matrix and further investigate the
ESS.
State/Stage | 1 | 2 | 3
---|---|---|---
1 | a | b | a
2 | 2a | 2b | 2a
$\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
N-2 | (N-2)a | (N-2)b | (N-2)a
Table 2. Cost Table in Different Stages and States
## 3 Model Results
### 3.1 Type I Model: Final Battle
In this simplest case, we assume both the aggressive and the non-aggressive
player only fight after they have depleted all their resources. In real
ecosystems, some animals do not fight while they are young; in fact, they may
even help each other (Taborsky M. 2001)! They fight only when they are
sexually mature. So here we do not even need DP. We assume the
optimal payoff of the non-aggressive/non-aggressive combination is $(1,1)$,
that an aggressive player can take half of a non-aggressive
player's payoff, and that it loses 80% of its own payoff when it encounters another
aggressive player. The payoff matrix is:
Strategy | Non-aggressive | Aggressive
---|---|---
Non-aggressive | (1,1) | (0.5,1.5)
Aggressive | (1.5,0.5) | (0.2,0.2)
Table 3. Payoff Matrix of Type I Model
In this game there are two pure strategy Nash Equilibria: the nonaggressive-
aggressive and aggressive-nonaggressive combinations. There is also a mixed
strategy Nash Equilibrium in which both players use the aggressive strategy with
probability $\dfrac{5}{8}$ and the non-aggressive strategy with probability
$\dfrac{3}{8}$. In both pure equilibria the aggressive player receives a payoff of 1.5
and the non-aggressive player 0.5, while at the mixed equilibrium both strategies earn
the same expected payoff of $\dfrac{11}{16}$.
According to Maynard Smith's criterion, neither pure Nash Equilibrium is an
ESS and the mixed Nash Equilibrium is the only ESS in this game.
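For the symmetric $2\times 2$ games used throughout this paper, the mixed equilibrium and the Maynard Smith stability test can be computed directly; a short sketch using the Table 3 payoffs is given below. The same helper applied to the payoff matrices of Tables 7 and 10 reproduces the mixed equilibria quoted later in Sections 3.2 and 3.3.

```python
# Sketch: mixed Nash equilibrium and ESS check for a symmetric 2x2 game.
# u[i][j] = payoff of playing strategy i against strategy j
# (0 = non-aggressive, 1 = aggressive); values taken from Table 3.
from fractions import Fraction as F

u = [[F(1), F(1, 2)],      # non-aggressive vs (NA, A)
     [F(3, 2), F(1, 5)]]   # aggressive     vs (NA, A)

def payoff(p, q):
    """Expected payoff of mixed strategy p against q (p, q = P(aggressive))."""
    return sum(u[i][j] * (p if i else 1 - p) * (q if j else 1 - q)
               for i in (0, 1) for j in (0, 1))

# Indifference condition: at the mixed NE both pure strategies earn the same payoff.
q_star = (u[0][0] - u[1][0]) / (u[0][0] - u[1][0] + u[1][1] - u[0][1])
print("P(aggressive) at mixed NE:", q_star)           # 5/8 for Table 3
print("equilibrium payoff:", payoff(q_star, q_star))  # 11/16

# Maynard Smith stability test: x is an ESS if u(x, y) > u(y, y) for every
# alternative best reply y != x.  Here we probe a grid of mutant strategies.
stable = all(payoff(q_star, y) > payoff(y, y)
             for y in (F(k, 100) for k in range(101)) if y != q_star)
print("mixed NE passes the ESS test on this grid:", stable)
```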
### 3.2 Type II Model: Modified Final Battle
In this modified final battle model, we again assume the fight occurs at the end of
growth. Suppose $a=1$ and $b=2$ for the non-aggressive player, while the
aggressive player follows a different development path. Because most
of the development happens during the quasilinear stage, the simplest case is
to assign a smaller $b$ coefficient to the aggressive player, for instance
$b=1.5$. To make our model more realistic, we modify the model assumptions
by taking development lag, development saturation, and their combination into account,
so that the second development stage of the aggressive player is no longer linear.
Development lag describes very slow development when less than a
certain amount of resources is consumed (lower threshold). Development saturation
describes no further development when more than a certain amount of resources
is consumed (upper threshold). Below we present some typical returns for
different resource allocations under the different development conditions.
| Stage 1 | | | | Stage 2 | | | Stage 3
---|---|---|---|---|---|---|---|---
State/Case | Linear | Linear | Lag 1 | Lag 2 | Sat. 1 | Sat. 2 | Lag + Sat. | Linear
1 | 1 | 1.5 | 1 | 1 | 2 | 2 | 1 | 1
2 | 2 | 3 | 1 | 1 | 4 | 4 | 2 | 2
3 | 3 | 4.5 | 1 | 1 | 6 | 6 | 4 | 3
4 | 4 | 6 | 2 | 1 | 8 | 6 | 8 | 4
5 | 5 | 7.5 | 4 | 2 | 8 | 6 | 8 | 5
6 | 6 | 9 | 8 | 4 | 8 | 6 | 8 | 6
7 | 7 | 10.5 | 8 | 8 | 8 | 6 | 8 | 7
8 | 8 | 12 | 8 | 8 | 8 | 6 | 8 | 8
Table 4. Return Table in Different Development Conditions for Aggressive
Player
| Stage 1 | Stage 2 | Stage 3
---|---|---|---
State/Case | Linear | Linear | Linear
1 | 1 | 2 | 1
2 | 2 | 4 | 2
3 | 3 | 6 | 3
4 | 4 | 8 | 4
5 | 5 | 10 | 5
6 | 6 | 12 | 6
7 | 7 | 14 | 7
8 | 8 | 16 | 8
Table 5. Return Table for Non-Aggressive Player
Based on Table 4, there are 6 different types of return in the second stage for the
aggressive player, while the returns of the non-aggressive player in the
different stages are given in Table 5. We can therefore formulate a total of 7
DP problems for the two players. Solving these 7 DP problems gives the
optimal allocation strategy and the corresponding maximum return, shown in
Table 6. Note that in Table 6 the condition labels "Linear", "Lag" and
"Saturation" refer to the second stage of the entire process only; the notation is
different from that of Tables 4 and 5.
| Non-aggressive | | | Aggressive | | |
---|---|---|---|---|---|---|---
Condition | Linear | Linear | Lag 1 | Lag 2 | Sat. 1 | Sat. 2 | Lag + Sat.
$P_{1}(x)$ | 1 | 1 | 1 | 1 | 1 | 1 | 1
$P_{2}(x)$ | 8 | 8 | 6 | 7 | 4 | 3 | 4
$P_{3}(x)$ | 1 | 1 | 3 | 2 | 5 | 6 | 5
$f_{1}(x)$ | 18 | 14 | 12 | 11 | 14 | 13 | 14
Table 6. Optimal Allocation Strategy and Returns
Since the entire development is symmetric, the resources allocated in stage 1,
$P_{1}(x)$, and in stage 3, $P_{3}(x)$, are interchangeable. For instance,
in the Lag 1 condition the aggressive player could either consume 1 unit of
resource in stage 1 and 3 units of resource in stage 3, or 3 units in stage 1
and 1 unit in stage 3; the final total returns are identical.
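The optimal values in Table 6 can be checked with a small enumeration over the tabulated returns (equivalent to the DP for a problem this small). The sketch below uses the Lag 1 column of Table 4 for the aggressive player and assumes a total of $N=10$ units, which is consistent with the allocations in Table 6 and with the expression $g(10-x-y)$ used in the Discussion.

```python
# Sketch: the three-stage allocation of Section 2 with the tabulated returns of
# Table 4 (aggressive player, N = 10 units, at least 1 unit per stage).
# Stage-2 returns are the "Lag 1" column; stages 1 and 3 are the linear a = 1 columns.

stage1 = {x: 1.0 * x for x in range(1, 9)}                 # Table 4, stage 1 (linear)
stage2 = {1: 1, 2: 1, 3: 1, 4: 2, 5: 4, 6: 8, 7: 8, 8: 8}  # Table 4, stage 2, Lag 1
stage3 = {x: 1.0 * x for x in range(1, 9)}                 # Table 4, stage 3 (linear)

def solve(tables, total):
    best_val, best_alloc = None, None
    # full enumeration is cheap here and plays the role of the DP recursion
    for x1 in range(1, total - 1):
        for x2 in range(1, total - x1):
            x3 = total - x1 - x2
            if x3 < 1 or x1 > 8 or x2 > 8 or x3 > 8:
                continue
            val = tables[0][x1] + tables[1][x2] + tables[2][x3]
            if best_val is None or val > best_val:
                best_val, best_alloc = val, (x1, x2, x3)
    return best_val, best_alloc

print(solve([stage1, stage2, stage3], 10))   # (12.0, (1, 6, 3)): matches the Lag 1 column of Table 6
```

Ties between symmetric allocations such as $(1,6,3)$ and $(3,6,1)$ give the same total return, as noted above.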
Having obtained the optimal returns, we can calculate the payoff matrix
based on our previous definition. Depending on the condition, the total
return of the aggressive player varies from 11 to 14. Here we use 12 as an instance
and calculate the payoff matrix:
Strategy | Non-aggressive | Aggressive
---|---|---
Non-aggressive | (18,18) | (9,21)
Aggressive | (21,9) | (2.4,2.4)
Table 7. Payoff Matrix of One Condition of Type II Model
In this game there are three Nash Equilibria, almost the same as in the Type I
game, except that the mixed Nash Equilibrium requires the non-aggressive strategy
with probability $\dfrac{11}{16}$ and the aggressive strategy with probability
$\dfrac{5}{16}$. From a population point of view, by applying Maynard Smith's
criterion, the mixed Nash Equilibrium is the only ESS in the evolutionary game.
### 3.3 Type III Model: Battles in Every Stage
In this model the aggressive player fights in every stage to maximize its
payoff. Since it is difficult to model the interaction of fighting for food
sources directly, we instead give the aggressive player larger coefficients than in
Model I and the non-aggressive player smaller coefficients when they meet.
For the aggressive-aggressive strategy combination, we simply give both
players 0 because of the severity of the fight; since they fight in each stage, there
is no final battle in this circumstance. In this model we also consider two
different conditions: $b=0.5$ for the non-aggressive player and $b=1.5$ for the
aggressive player; and $b=1.5$ for the non-aggressive player and $b=2.5$ for the
aggressive player. The DP results are shown in the following table:
Condition | 1 | 1 | 2 | 2
---|---|---|---|---
Player | Non-aggressive | Aggressive | Non-aggressive | Aggressive
$P_{1}(x)$ | 8 | 1 | 1 | 1
$P_{2}(x)$ | 1 | 8 | 8 | 8
$P_{3}(x)$ | 1 | 1 | 1 | 1
$f_{1}(x)$ | 9.5 | 14 | 14 | 22
Table 8. Optimal Allocation Strategy and Returns
So the payoff matrix for condition one is:
Strategy | Non-aggressive | Aggressive
---|---|---
Non-aggressive | (18,18) | (9.5,14)
Aggressive | (14,9.5) | (0,0)
Table 9. Payoff Matrix of One Condition of Type III Model
The non-aggressive/non-aggressive strategy combination is the only Nash
Equilibrium in this game, because the non-aggressive strategy strictly dominates;
consequently there is no mixed strategy Nash Equilibrium. This is
also the ESS by Maynard Smith's criterion.
For condition two:
Strategy | Non-aggressive | Aggressive
---|---|---
Non-aggressive | (18,18) | (14,22)
Aggressive | (22,14) | (0,0)
Table 10. Payoff Matrix of One Condition of Type III Model
In this game there are three Nash Equilibria, almost the same as in the Type I
game, except that the mixed Nash Equilibrium requires the non-aggressive strategy
with probability $\dfrac{7}{9}$ and the aggressive strategy with probability
$\dfrac{2}{9}$. From a population point of view, by applying Maynard Smith's
criterion, the mixed Nash Equilibrium is the only ESS in the evolutionary game.
## 4 Discussion
Although we use DP to find the optimal allocation strategy, we have already
seen that under certain circumstances, for instance if $b>a$, most of the
resources should be allocated to the second stage, the quasilinear growth. In
the Type II Model we have also seen that the growth need not be a
linear function. Here we present a criterion to test whether we can use the
growth function directly to allocate resources optimally:
Assume symmetric growth still holds and define $y=f(x)$ for both growth
initialization and growth termination and $y=g(x)$ for quasilinear growth.
Note that the term "quasilinear growth" here does not mean the growth function is
linear; it may well be nonlinear. If the following condition holds, then we should
allocate as much resource as possible to the quasilinear growth stage and the minimum to
growth initialization and termination: $g(x)$ is not concave and
$g^{\prime}(x)\geq f^{\prime}(x),\forall x$.
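A quick numerical check of this criterion can be done by enumeration; the sketch below compares the "minimum in stages 1 and 3, everything else in stage 2" allocation against full enumeration for example choices of $f$ and $g$. The specific functions and the budget $N=10$ are illustrative assumptions, not taken from the data above.

```python
# Sketch: numerically test the sufficient criterion above.  f is the return of
# stages 1 and 3, g the return of the quasilinear stage; the budget is illustrative.
import itertools

def best_allocation(f, g, total):
    """Enumerate integer allocations (x1, x2, x3), xi >= 1, summing to total."""
    candidates = ((x1, x2, total - x1 - x2)
                  for x1, x2 in itertools.product(range(1, total), repeat=2)
                  if total - x1 - x2 >= 1)
    return max(candidates, key=lambda t: f(t[0]) + g(t[1]) + f(t[2]))

N = 10
# Case where the criterion holds: g is convex (not concave) and g' >= f' everywhere.
f = lambda x: x
g = lambda x: 1.5 * x + 0.05 * x ** 2
print(best_allocation(f, g, N))    # -> (1, 8, 1): the extreme allocation, as predicted

# Case where g' >= f' fails (concave g with small marginal returns): the extreme
# allocation is no longer optimal and stage 2 receives the minimum instead.
g2 = lambda x: 2 * x ** 0.5
print(best_allocation(f, g2, N))   # -> e.g. (1, 1, 8)
```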
However, this criterion is only sufficient, not necessary. It is possible
to investigate a necessary and sufficient condition, but the computational
intensity is almost the same as using DP, because we must compute the first-
order partial derivatives (gradient) of $f(x)+f(y)+g(10-x-y)$ with respect to
$x$ and $y$, the resources allocated in stages 1 and 3, and determine the
structure of the gradient.
In this research project we focus on the finite and equal resource allocation
problem for both players. However, our approach can be extended to unequal
resources: because we use DP to determine the optimal strategy for each player
separately, it does not matter whether the resources of the two players are equal. In
other words, a player need not worry about the total amount of its
resource (indeed it cannot determine this amount, because it
is pre-specified) but should concentrate on how to optimize the return from
that resource (the optimal strategy). It is also possible to assume infinite
resources; however, the consumption of the player is bounded, so the infinite
resource allocation problem can be transformed into a finite resource allocation
problem. As we discussed before, saturation is a reasonable assumption for
dealing with infinite resources. Therefore, we can use DP to solve almost all
types of resource allocation problems for two-player games.
Another possible improvement of our approach is to introduce a stochastic
component into the model. Instead of assigning a specific amount of resources,
we could assume the food resource is drawn from a probability distribution,
say a normal distribution. In effect this is an extension of the unequal resource
allocation problem for two players. It is also reasonable to assign a
minimum development threshold: if a player fails to reach that threshold it dies,
and its remaining resources are transferred to its neighbor (its competitor).
In this circumstance DP can still be applied, but we expect the formulation to be
much more complicated. Once we obtain the optimal resource allocation
strategy, we can still apply Game Theory to determine the Nash Equilibrium for a
given game, but because of the stochasticity it is difficult to give a closed-form
representation of what the ESS looks like in this scenario. We could use
simulation to determine the evolutionary path, and this approach is more
realistic and useful.
## References
* [1] Al-Tamimi A, Abu-Khalaf M and Lewis FL. Adaptive critic designs for discrete-time zero-sum games with application to H(infinity) control. IEEE Trans Syst Man Cybern B Cybern, 37(1):240–7, 2007.
* [2] Morris MR Just W and Sun X. The evolution of aggressive losers. Behav Processes, 74(3):342–50, 2007.
* [3] Broom M Luther RM and Ruxton GD. Is food worth fighting for? ess’s in mixed populations of kleptoparasites and foragers. Bull Math Biol., 69(4):1121–46, 2007.
* [4] Mesterton-Gibbons M and Sherratt TN. Neighbor intervention: a game-theoretic model. Theor Popul Biol., 256(2):263–75, 2009.
* [5] Nakamaru M and Sasaki A. Can transitive inference evolve in animals playing the hawk-dove game? J Theor Biol., 222(4):461–70, 2003.
* [6] Taborsky M. The evolution of bourgeois, parasitic, and cooperative reproductive behaviors in fishes. J Hered., 92(2):100–10, 2001.
* [7] J Nash. Equilibrium points in n-person games. PNAS, 36(1):48–49, 1950.
* [8] Crowley PH. Hawks, doves, and mixed-symmetry games. J Theor Biol., 204(4):543–63, 2000.
* [9] Hardling R. Arms races, conflict costs and evolutionary dynamics. J Theor Biol., 196(2):163–67, 1999.
* [10] Hamblin S and Hurd PL. When will evolution lead to deceptive signaling in the sir philip sidney game? Theor Popul Biol., 2009.
* [11] Matsumura S and Hayden TJ. When should signals of submission be given?-a game theory model. J Theor Biol., 240(3):425–33, 2006.
* [12] Maynard Smith J and Price GR. The logic of animal conflict. Nature, 246:15–18, 1973.
* [13] Takada T and Kigami J. The dynamical attainability of ess in evolutionary games. Math Biol., 29(6):513–29, 1991.
* [14] Krivan V, Cressman R, and Schneider C. The ideal free distribution: a review and synthesis of the game-theoretic perspective. Theor Popul Biol., 73(3):403–425, 2008.
* [15] John von Neumann and Oskar Morgenstern. Theory of games and economic behavior. Princeton University Press, 1944.
* [16] Zhi-gang J Yi T, Qi-sen Y and Zu wang W. Evolutionarily stable strategy, stable state, periodic cycle and chaos in a simple discrete time two-phenotype model. J Theor Biol., 188(1):21–7, 1997.
|
arxiv-papers
| 2009-04-28T02:52:08 |
2024-09-04T02:49:02.213282
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Chen Shi and Fang Yuan",
"submitter": "Shi Chen",
"url": "https://arxiv.org/abs/0904.4298"
}
|
0904.4349
|
# Transmission of doughnut light through a bull’s eye structure
Lu-Lu Wang, Xi-Feng Ren (renxf@ustc.edu.cn), Rui Yang, Guang-Can Guo, Guo-Ping Guo (gpguo@ustc.edu.cn)
Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, People's Republic of China
###### Abstract
We experimentally investigate the extraordinary optical transmission of
doughnut light through a bull's eye structure. Since the intensity vanishes
at the center of the beam, almost all the energy reaches the circular
corrugations (not the hole) and excites surface plasmons, which propagate
through the hole and reradiate photons. The transmitted energy is about 32
times the energy incident on the hole area. It is also interesting that the
transmitted light has a spatial shape similar to that of the input light, even though
the diameter of the hole is much smaller than the wavelength of the light.
###### pacs:
78.66.Bz,73.20.MF, 71.36.+c
The phenomenon of extraordinary optical transmission (EOT) through metallic
films perforated by nanohole arrays was first observed a decade
ago [Ebbesen98]. It is generally believed that surface plasmons (SPs) on the metal
surface play a crucial role in this process, during which photons first
transform into SPs and then back into photons again [Moreno, liu]. Such SPs are
involved in a wide range of applications [Barnes03, Ozbay06]. The report of the EOT
phenomenon attracted considerable attention because it showed that more light
than Bethe's prediction could be transmitted through the holes [Bethe]. This
stimulated much fundamental research and promoted subwavelength apertures as a
core element of new optical devices. For EOT in periodic hole arrays, not only
the polarization properties [Elli04, Koer04, RenAPL] but also the spatial mode
properties [ren06, ren062] have been widely discussed. Even for a single aperture
surrounded by circular corrugations, one can also obtain high transmission
efficiencies and a well-defined spectrum, since the periodic corrugations act
as an antenna to couple the incident light into SPs [thio, Lezec].
Usually, the light transmitted through subwavelength holes can be divided
into two parts: the directly transmitted light and the light that comes from
the surface plasmon assisted transmission process. Here we present a new
method to eliminate the influence of the first part in the EOT phenomenon by using
a doughnut input light and a bull's eye structure. Since the intensity is null
at the center of the beam, there is no light illuminating the single hole
directly. Almost all the energy reaches the circular corrugations and excites
SPs, which propagate through the hole and reradiate photons (as shown in Fig. 1).
It is also interesting that the transmitted light has a spatial
shape similar to that of the input light, even though the diameter of the hole is much
smaller than the wavelength of the light.
Figure 1: Sketch map of our protocol. The typical doughnut light has an
intensity null on the beam axis. Almost all the energy reaches the circular
corrugations, exciting surface plasmons which propagate through the hole and
reradiate photons. Figure 2: (Color online) Transmission efficiency as a
function of wavelength for the bull's eye structure (black square dots) and a similar
structure without a hole in the center (red round dots). The inset is a scanning electron
microscope picture of our bull's eye structure (groove periodicity, 500 nm;
groove depth, 60 nm; hole diameter, 250 nm; film thickness, 135 nm).
The inset of Fig. 2 is a scanning electron microscope picture of our bull's eye
structure. The thickness of the gold layer is $135$ $nm$. The cylindrical
hole ($250$ $nm$ diameter) and the grooves are produced by a Focused Ion Beam
Etching system (FIB, DB235 of FEB Co.). The grooves have a period of $500nm$
with depth $60nm$ and width $250nm$. Transmission spectra of the sample
are recorded by a silicon avalanche photodiode (APD) single photon
detector coupled to a monochromator through a fiber. White light from a
stabilized tungsten-halogen source passes through a single mode fiber and a
polarizer (only vertically polarized light can pass), then illuminates the
sample. The sample is set between two lenses with a focal length of $35mm$. The
light exiting the sample is launched into the monochromator. The
transmission spectra are shown in Fig. 2 (black square dots), in which the
transmission efficiency is determined by normalizing the intensity of the
transmitted light to the intensity before the sample. At the resonant
frequency (632.8 nm in the experiment), the transmission efficiency is about
$2.55\%$, much higher than in the non-resonant case. To verify that the
phenomenon does not come from directly transmitted light, we use another
sample as a comparison. The new sample also has a bull's eye geometry, but
without a hole in the center. Its transmission efficiencies are all about $1.0\%$
and there is no transmission peak, as shown in Fig. 2 (red round dots), which
verifies that the transmission peak of the bull's eye structure comes from the
surface-plasmon-assisted transmission process. In the following experiments,
we use the bull's eye structure with a hole in the center to investigate the
extraordinary optical transmission of doughnut light.
The typical doughnut light is produced by changing its orbital angular
momentum (OAM), which is associated with the transverse phase front of a light
beam. The light field of photons with OAM can be described by means of Laguerre-
Gaussian ($LG_{p}^{l}$) modes with two mode indices $p$ and $l$ [Allen92]. The
$p$ index gives the number of radial nodes and the $l$ index represents the
number of $2\pi$-phase shifts along a closed path around the beam center.
Light with an azimuthal phase dependence $e^{-il\varphi}$ carries a well-
defined OAM of $l\hbar$ per photon [Allen92]. When $l=0$, the light is in the
ordinary Gaussian mode, while when $l\neq 0$, the associated phase
discontinuity produces an intensity null on the beam axis. If the mode
function is not a pure LG mode, each photon of this light is in a
superposition state, with weights dictated by the contributions of the
different angular harmonics $l$ it comprises. For simplicity,
we consider only LG modes with index $p=0$. Computer generated
holograms (CGHs) [ArltJMO, VaziriJOB], a kind of transmission hologram, are
used to change the winding number $l$ of the LG mode light. The inset of Fig. 3 shows
part of a typical CGH ($n=1$) with a fork in the center. Corresponding to the
diffraction order $m$, the $n$-fork hologram changes the winding number $l$
of the input beam by $\Delta l_{m}=m\cdot n$. In our experiment, we use the first
order diffraction light ($m=1$), and the efficiency of our CGHs is about
$30\%$. The superposition mode is produced using a displaced hologram [VaziriJOB],
which is particularly suitable for producing superposition states of the
$LG_{0}^{l}$ mode with the Gaussian mode beam.
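To make the "intensity null on the beam axis" quantitative, the sketch below evaluates the radial intensity of $LG_{0}^{0}$ and $LG_{0}^{1}$ modes and the fraction of beam power falling within a hole-sized on-axis disc. The beam waist used is an illustrative assumption, not a measured value (the focal spot is only quoted as "about 3.8 µm" in the text), so the numerical fractions are indicative only.

```python
# Sketch: radial intensity of LG_0^l modes (p = 0) and the fraction of beam
# power enclosed within a small on-axis disc of radius R (e.g. the 250 nm hole).
# The waist w is an assumed, illustrative value.
import numpy as np

def lg_intensity(r, w, l):
    """Unnormalized intensity of an LG_0^l mode at radius r (waist w)."""
    return (2 * r ** 2 / w ** 2) ** abs(l) * np.exp(-2 * r ** 2 / w ** 2)

def enclosed_fraction(R, w, l, n=200000):
    """Fraction of total power inside radius R, by numerical integration."""
    r = np.linspace(0.0, 20 * w, n)
    dr = r[1] - r[0]
    power = lg_intensity(r, w, l) * r * dr      # the 2*pi factor cancels in the ratio
    return power[r <= R].sum() / power.sum()

w = 1.5e-6       # assumed beam waist [m]
R = 0.125e-6     # hole radius: 250 nm diameter
print(lg_intensity(0.0, w, 1))        # 0.0: the doughnut mode is dark on axis
print(enclosed_fraction(R, w, 0))     # Gaussian mode: of order 1e-2 for this waist
print(enclosed_fraction(R, w, 1))     # l = 1 mode: orders of magnitude smaller (~1e-4)
```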
Figure 3: Experimental setup. A computer generated hologram(CGH) is used to
change the OAM of the laser beam. The polarized laser beam is directed into
the microscope and focused on the metal plate using a 100X objective lens
(Nikon, NA=0.90). Transmitted light is collected by another 100X objective
lens (Nikon, NA=0.80). Inset, pictures of part of a typical CGH($n=1$) and
produced light with the first order mode.
The experimental setup is shown in Fig. 3. The OAM of the laser light (632.8 nm
wavelength) is changed by a CGH, while the polarization is controlled by a
polarization beam splitter (PBS, working wavelength 632.8 nm) followed by a
half wave plate (HWP, working wavelength 632.8 nm). The polarized laser beam
is directed into the microscope and focused on the metal plate using a 100X
objective lens (Nikon, NA=0.90), giving a spot diameter of about $3.8\mu m$. The CCD
camera before the objective lens is used to adjust the position of the hole
structure. Transmitted light is collected by another 100X objective lens
(Nikon, NA=0.80) and recorded by another CCD camera. The relative position of
the beam center and the hole is estimated as follows: we detect the
transmission of the Gaussian beam while the sample is moved by a three-dimensional
stage (Suruga Seiki Co., Ltd. B71-80A); when the center of the beam coincides
with the center of the hole, maximum transmission is achieved. A CCD camera is
also used to observe the picture directly. Since the doughnut
light is produced by moving the hologram, which does not affect the
optical path, we can ensure that the position of zero electric
field coincides with the center of the hole.
Figure 4: CCD pictures of light beam before (upper) and after (lower) the
bull’s eye structure. The light power is decreased to give clear pictures. A,
B, C are the cases for light with Gaussian mode ($l=0$), the first order
mode($l=1$) and a typical superposition mode
$(a\left|0\right\rangle+b\left|1\right\rangle)/\sqrt{a^{2}+b^{2}}$ (where $a$
and $b$ are real numbers) respectively.
Transmission efficiencies are measured for light with the Gaussian mode ($l=0$),
the first order mode ($l=1$) and a typical superposition mode
$(a\left|0\right\rangle+b\left|1\right\rangle)/\sqrt{a^{2}+b^{2}}$, where $a$
and $b$ are real numbers. When the hologram is placed in the beam center, the
OAM of the first diffraction order light is $1$, while for the hologram at the
beam edge, the OAM is $0$. In the middle part, the output light is in a
superposition of the $0$ and $1$ modes. The results for the $0$ and $1$ order mode
light are $2.55\%$ and $2.28\%$, respectively. The transmission efficiency for
the superposition mode light lies between these two cases and can be changed
with the ratio of $a$ and $b$ as we move the hologram. In all cases, the
transmission efficiency is much larger than the value obtained from the
classical theory [Bethe]. The reason is that the interaction of the incident
light with the surface plasmon is allowed by coupling through the grating
momentum and obeys conservation of momentum
$\overrightarrow{k}_{sp}=\overrightarrow{k}_{0}\pm i\overrightarrow{G}_{x}\pm
j\overrightarrow{G}_{y},$ (1)
where $\overrightarrow{k}_{sp}$ is the surface plasmon wave vector,
$\overrightarrow{k}_{0}$ is the component of the incident wave vector that
lies in the plane of the sample, $\overrightarrow{G}_{x}$ and
$\overrightarrow{G}_{y}$ are the reciprocal lattice vectors with magnitudes
$|\overrightarrow{G}_{x,y}|=2\pi/d_{x,y}$, where $d_{x,y}$ are the
periodicities of the structure in the $x$ and $y$ directions, and $i$, $j$ are
integers. In practical experiments, however, Eq. 1 cannot be applied so simply,
because many parameters influence the resonant frequency, for example the
thickness of the metal film and the width of the grooves, as mentioned in [Degi]. Due to
the symmetry of the bull's eye structure, the polarization of the light has no
influence on the whole process. We can see that the transmission efficiency for
Gaussian mode light is larger than that of the first order mode light. Although
it is hard to give a precise explanation, possible factors include the additional
transmission of Gaussian mode light from directly passing light, SPs excited
at the hole edge by scattering, and lower propagation loss in the hole. This
lower loss comes from the waveguide property of the hole, in which the Gaussian
mode has a higher transmission efficiency than other modes, as
shown in [Moreno, ruan].
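As a rough illustration of the momentum-matching condition Eq. (1), the sketch below estimates the first-order SP resonance wavelength at normal incidence, treating the circular corrugations as an effective grating of period 500 nm. The gold permittivity and the substrate permittivity used here are assumed textbook values, not taken from the paper, and the simple estimate ignores the film thickness and groove profile; consistent with the caveat above, it should not be expected to reproduce the measured 632.8 nm peak exactly.

```python
# Sketch: first-order grating coupling of light to surface plasmons at normal
# incidence, |k_sp| = |G| = 2*pi/d (Eq. 1 with the in-plane k_0 = 0, i = 1, j = 0).
# Combined with the SP dispersion k_sp = (2*pi/lambda)*sqrt(eps_m*eps_d/(eps_m+eps_d)),
# the resonance condition becomes lambda_res = d * Re[sqrt(eps_m*eps_d/(eps_m+eps_d))].
import numpy as np

d = 500e-9                   # groove period from the text
eps_gold = -11.7 + 1.26j     # assumed gold permittivity near 630 nm (Johnson & Christy)

def lambda_res(period, eps_metal, eps_dielectric):
    n_sp = np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
    return period * n_sp.real

print(lambda_res(d, eps_gold, 1.0))    # air side: roughly 0.52 um
print(lambda_res(d, eps_gold, 2.25))   # assumed glass side (eps ~ 2.25): roughly 0.83 um
```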
Calculation shows that the energy in the beam center (250 nm diameter) is only
about $0.04\%$ of the whole doughnut beam. Comparing this with the SP-assisted
transmission efficiency of $1.28\%$ (the measured $2.28\%$ for the doughnut beam
minus the $\approx 1.0\%$ background of the structure without a hole), we find that
the transmitted energy is about 32 times the light directly illuminating the hole
area. This is evidence that the transmitted light in the case of the doughnut mode
results from the surface plasmon assisted transmission process.
CCD pictures are also taken for the three cases, as shown in Fig. 4. The light
power is decreased to give clear pictures. It is interesting that the spatial
shape of the light is still preserved after the plasmon assisted transmission
process, even though the hole diameter (250 nm) is much smaller than the light
wavelength (632.8 nm). Since the spatial shape of the light is determined by
its OAM, which is associated with the transverse phase front of a light beam,
we can conclude that the OAM of the photons is not affected in this
process. It has been shown in many works that the phase of the photons can be
preserved in the surface plasmon assisted transmission process; here we show
that the helical wavefront of photons can also be transferred to SPs and
carried by them [ren06].
In conclusion, we investigate the extraordinary optical transmission
phenomenon through a subwavelength aperture surrounded by circular
corrugations when the light has a doughnut shape. Since all the energy
reaches the circular corrugations and not the hole, the directly
transmitted light can be ignored. The present experiment could provide
intriguing prospects both for the development of surface plasmon based
devices and for the study of fundamental physics issues.
This work was funded by the National Basic Research Programme of China (Grants
No.2009CB929600 and No. 2006CB921900), the Innovation funds from Chinese
Academy of Sciences, and the National Natural Science Foundation of China
(Grants No. 10604052 and No.10874163).
## References
* (1) T.W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio, and P. A. Wolff, Nature(London) 391, 667 (1998).
* (2) L. Martin-Moreno, F. J. Garcia-Vidal, H. J. Lezec, K. M. Pellerin, T. Thio, J. B. Pendry, and T. W. Ebbesen, Phys. Rev. Lett. 86, 1114 (2001).
* (3) Haitao Liu, Philippe Lalanne, Nature(London) 452, 728 (2008).
* (4) W. L. Barnes, A. Dereux, T. W. Ebbesen, Nature 424, 824 (2003).
* (5) E. Ozbay, Science 311, 189 (2006).
* (6) H. A. Bethe, Phys. Rev. 66, 163 (1944).
* (7) J. Elliott, I. I. Smolyaninov, N. I. Zheludev, and A. V. Zayats, Opt. Lett. 29, 1414 (2004).
* (8) K. J. K. Koerkamp, S. Enoch, F. B. Segerink, N. F. van Hulst, and L. Kuipers, Phys. Rev. Lett. 92, 183901 (2004).
* (9) X. F. Ren, G. P. Guo, Y. F. Huang, Z. W. Wang, and G. C. Guo, Appl. Phys. Lett. 90, 161112 (2007).
* (10) X. F. Ren, G. P. Guo, Y. F. Huang, Z. W. Wang, and G. C. Guo, Opt. Lett. 31, 2792, (2006).
* (11) X. F. Ren, G. P. Guo, Y. F. Huang, C. F. Li, and G. C. Guo, Europhys. Lett. 76, 753 (2006).
* (12) T. Thio, K. M. Pellerin, R. A. Linke, H. J. Lezec, and T. W. Ebbesen, Opt. Lett. 26, 1972 (2001).
* (13) H. J. Lezec, A. Degiron, E. Devaux, R. A. Linke, L. Martin-Moreno, F. J. Garcia-Vidal, T. W. Ebbesen, Science 297, 820 (2002).
* (14) L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, Phys.Rev.A. 45, 8185, (1992).
* (15) J. Arlt, K. Dholokia, L. Allen, and M. Padgett. J. Mod. Opt. 45, 1231 (1998).
* (16) A. Vaziri, G. Weihs, and A. Zeilinger. J. Opt. B: Quantum Semiclass. Opt 4, s47 (2002).
* (17) A. Degiron and T.W. Ebbesen, Opt. Express 12, 3694 (2004).
* (18) Zhichao Ruan, and Min Qiu, Phys. Rev. Lett. 96, 233901 (2006).
|
arxiv-papers
| 2009-04-28T09:34:54 |
2024-09-04T02:49:02.221378
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Lu-Lu Wang, Xi-Feng Ren, Rui Yang, Guang-Can Guo, Guo-Ping Guo",
"submitter": "Xifeng Ren",
"url": "https://arxiv.org/abs/0904.4349"
}
|
0904.4369
|
# Scalar dark matter-Higgs coupling in the case of electroweak symmetry
breaking driven by unparticle
E. O. Iltan
Physics Department, Middle East Technical University
Ankara, Turkey
E-mail address: eiltan@newton.physics.metu.edu.tr
###### Abstract
We study the possible annihilation cross section of scalar dark matter and its
coupling $\lambda_{D}$ to the standard model Higgs in the case of
electroweak symmetry breaking driven by an unparticle. Here the annihilation
process occurs with the help of three intermediate scalars which appear after
the mixing. By requiring an annihilation rate compatible with the
current relic density, we predict the tree level coupling $\lambda_{D}$. We
observe that the unparticle scaling $d_{u}$ plays a considerable role in the
annihilation process and, therefore, in the coupling $\lambda_{D}$.
Research aiming to understand the nature of dark matter has attracted great
interest, since dark matter contributes almost $23\%$ of the present Universe [1, 2, 3]. The
existence of dark matter cannot be explained in the framework of the standard
model (SM) and, therefore, one needs to go beyond it. There exist a number of
dark matter candidates in various scenarios such as Supersymmetry, minimal
Universal Extra Dimensions and Little Higgs with T-parity. It is believed that
a large amount of dark matter is made of cold relics belonging to the class of
nonrelativistic cold dark matter. Weakly Interacting Massive Particles (WIMPs)
are among such dark matter candidates, with masses in the range $10$ GeV to a few
TeV. They are stable, interact only through weak and gravitational
interactions, and disappear by pair annihilation (see for example [4, 5] for
further discussion). From the theoretical point of view, a chosen discrete
symmetry drives the stability in many of the scenarios mentioned above.
In the present work we introduce an additional scalar SM singlet field
$\phi_{D}$, called the darkon, which was first considered by Silveira and Zee [6] and
studied by several authors [7, 8, 9, 10, 11, 12], and take the Lagrangian
obeying the $Z_{2}$ symmetry $\phi_{D}\rightarrow-\phi_{D}$
$\displaystyle
L_{D}=\frac{1}{2}\,\partial_{\mu}\,\phi_{D}\,\,\partial^{\mu}\,\phi_{D}-\frac{\lambda}{4}\,\phi_{D}^{4}-\frac{1}{2}\,m_{0}^{2}\,\phi_{D}^{2}-\lambda_{D}\,\phi_{D}^{2}\,(\Phi_{1}^{\dagger}\,\Phi_{1})\,,$
(1)
where $\Phi_{1}$ is the SM Higgs field and $\phi_{D}$ has no vacuum
expectation value. The $Z_{2}$ symmetry ensures that darkon fields appear only in
pairs and are stable in the sense that they do not decay into any other SM particles.
They can disappear by annihilating in pairs into SM particles with the help of
exchange particle(s). We will now introduce an electroweak symmetry breaking mechanism
and study its effect on the darkon annihilation cross section.
A possible hidden sector beyond the SM is among the candidates to explain
this breaking. Such a hidden sector has been proposed by Georgi [13, 14] as a
hypothetical scale invariant sector with a non-trivial infrared fixed point. It lies
beyond the SM at high energies and manifests itself at low energies as new degrees of
freedom, called unparticles, which are massless and have a non-integral scaling dimension
$d_{u}$. The interaction(s) of the unparticle with the SM field(s) at low
energies is defined by an effective Lagrangian (see for example [15]).
The possibility of electroweak symmetry breaking due to the mixing between
the unparticle and the Higgs boson was introduced in [16] (see also
[17]). In [16] the idea was based on the interaction of the SM scalar sector
with the unparticle operator in the form
$\lambda\,(\Phi^{\dagger}\,\Phi)\,O_{U}$, where $\Phi$ is the SM scalar and
$O_{U}$ is the unparticle operator with mass dimension $d_{u}$ (see [18, 19,
20, 21, 22, 23]). Using the fact that unparticles look like a non-integral number
$d_{u}$ of massless particles of mass dimension one, the
operator $O_{U}$ can be considered in the form
$(\phi^{*}\,\phi)^{\frac{d_{u}}{2}}$, so that the interaction term
$\displaystyle
V\sim\lambda\,(\Phi^{\dagger}\,\Phi)\,(\phi^{*}\,\phi)^{\frac{d_{u}}{2}}\,,$
(2)
is induced and drives the electroweak symmetry breaking at tree level
[16]; see [24] for the necessity of radiative corrections for
electroweak symmetry breaking from a hidden sector with an interaction of the
form $\lambda\,(\Phi^{\dagger}\,\Phi)\,\phi^{*}\,\phi$. Recently, in [25],
this idea has been applied to an extended scalar sector obtained by
introducing a shadow Higgs sector, the complex scalar $\phi_{2}$, in addition
to the SM Higgs; the $U(1)_{s}$ invariant Lagrangian including the shadow
sector and the SM one reads
$\displaystyle
L=L_{SM}-\frac{1}{4}\,X^{\mu\nu}\,X_{\mu\nu}+|\Big{(}\partial_{\mu}-\frac{1}{2}\,g_{s}\,X_{\mu}\Big{)}\,\phi_{2}|^{2}-V(\Phi_{1},\phi_{2},\phi)$
where $g_{s}$ is the gauge coupling of $U(1)_{s}$ (see [26]). This choice
leads to a richer scalar spectrum consisting of three scalars after the mixing
mechanism: the SM Higgs $h_{I}$, $h_{II}$, and the heavy $h_{III}$ (see the
Appendix for a brief explanation of the toy model used). These three scalars
are the exchange particles in our analysis and we have the annihilation
process $\phi_{D}\,\phi_{D}\rightarrow h_{I}\,(h_{II},\,h_{III})\rightarrow
X_{SM}$. The total averaged annihilation rate of $\phi_{D}\,\phi_{D}$ reads
$\displaystyle<\sigma\,v_{r}>$ $\displaystyle=$
$\displaystyle\frac{8\,\lambda_{D}^{2}\,(n_{0}^{2}\,\rho_{0}^{2})}{2\,m_{D}}\,\Bigg{|}\,\frac{c_{\alpha}^{2}}{(4\,m_{D}^{2}-m_{I}^{2})+i\,m_{I}\,\Gamma_{I}}+\frac{s_{\alpha}^{2}\,c_{\eta}^{2}}{(4\,m_{D}^{2}-m_{II}^{2})+i\,m_{II}\,\Gamma_{II}}$
(3) $\displaystyle+$
$\displaystyle\frac{s_{\alpha}^{2}\,s_{\eta}^{2}}{(4\,m_{D}^{2}-m_{III}^{2})+i\,m_{III}\,\Gamma_{III}}\,\Bigg{|}^{2}\,\Gamma(\tilde{h}\rightarrow
X_{SM})\,,$
where $\Gamma(\tilde{h}\rightarrow
X_{SM})=\sum_{i}\,\Gamma(\tilde{h}\rightarrow X_{i\,SM})$ with the virtual Higgs
$\tilde{h}$ having mass $2\,m_{D}$ (see [27, 28]). Here $v_{r}$ is the average
relative speed of the two darkons; we assume that the speed of the dark matter
$\phi_{D}$ is small enough to use the approximation (see for example [12])
$v_{r}=\frac{2\,p_{CM}}{m_{D}}$, with center of mass momentum $p_{CM}$. The total
annihilation rate can be constrained by using the present dark matter (DM) abundance.
The WMAP collaboration [29] provides a precise determination of the present DM
abundance (at the two sigma level) as
Finally, by using the expression which connects the annihilation cross section
to the relic density
$\displaystyle\Omega\,h^{2}=\frac{x_{f}\,10^{-11}\,GeV^{-2}}{<\sigma\,v_{r}>}\,,$
(5)
with $x_{f}\sim 25$ (see for example [2, 12, 30, 31, 32]), one gets the bound
$\displaystyle<\sigma\,v_{r}>=0.8\pm 0.1\,pb\,,$
which is of the order of $(1-2)\times 10^{-9}\,GeV^{-2}$. This is the case in which
s-wave annihilation is dominant (see [33] for details).
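The numerical bound quoted above follows directly from Eq. (5); a short sketch of the arithmetic, including the standard conversion from GeV$^{-2}$ to picobarns, is given below.

```python
# Sketch: convert the relic abundance of Eq. (4) into the annihilation cross
# section via Eq. (5), <sigma v_r> = x_f * 1e-11 GeV^-2 / (Omega h^2).
GEV2_TO_PB = 3.894e8          # 1 GeV^-2 = 0.3894 mb = 3.894e8 pb (standard conversion)

x_f = 25.0
for omega_h2 in (0.111 - 0.018, 0.111, 0.111 + 0.018):
    sigma_v = x_f * 1e-11 / omega_h2          # in GeV^-2
    print(f"Omega h^2 = {omega_h2:.3f}: <sigma v> = {sigma_v:.2e} GeV^-2"
          f" = {sigma_v * GEV2_TO_PB:.2f} pb")
# central value: ~2.3e-9 GeV^-2 ~ 0.9 pb, i.e. the 0.8 +/- 0.1 pb quoted above
```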
Discussion
In the present work we extend the scalar sector by considering a shadow Higgs
sector with a complex scalar, and in order to achieve electroweak symmetry
breaking at tree level we assume that the unparticle sector, proposed by
Georgi [13], couples to both scalars. Furthermore we introduce the so-called
darkon field, an SM singlet with vanishing vacuum expectation value
which couples to the SM Higgs doublet with coupling $\lambda_{D}$. After the
symmetry breaking considered in our toy model, the tree level interaction $DDh$
appears with strength $v_{0}\lambda_{D}$, and this coupling is responsible for the
annihilation cross section that agrees with the presently observed dark
matter relic density (Eq. (4)).
Here we study the dependence of the coupling $\lambda_{D}$ on the parameters of
the model used: the darkon mass, the scale dimension $d_{u}$ and the parameter
$s_{0}$. In our calculations we take the darkon mass in the range $10\leq
m_{D}\leq 80$ GeV and use the central value of the annihilation cross section,
namely $<\sigma\,v_{r}>=0.8\,pb$. Note that in the toy model we consider
there are three intermediate scalars, which appear after the mixing and
drive the annihilation process with different couplings.
In Fig. 1 we plot the $m_{D}$ dependence of $\lambda_{D}$ for $d_{u}=1.1$. Here the
solid-long dashed-dashed-dotted line represents $\lambda_{D}$ for
$m_{I}=110\,GeV$, $s_{0}=0.1$-$m_{I}=110\,GeV$, $s_{0}=0.5$-$m_{I}=120\,GeV$,
$s_{0}=0.1$-$m_{I}=120\,GeV$, $s_{0}=0.5$. $\lambda_{D}$ is of the order of
$0.1$ for small values of $m_{D}$, $m_{D}\leq 30\,GeV$, and for the range
$m_{D}\geq 70\,GeV$, when the mass $m_{I}$ is restricted to
$m_{I}=110\,GeV$ and $120\,GeV$. In the intermediate region $\lambda_{D}$
drops and rises drastically. Since there are three intermediate
particles, at the values of $m_{D}$ which satisfy the equalities
$m_{D}=\frac{m_{i}}{2}$ ($i=I,II,III$), $\lambda_{D}$ decreases in order to reproduce the
annihilation cross section compatible with the current
relic density. In the figure we see two dips in $\lambda_{D}$ for
each set of $m_{D},\,d_{u}$ and $s_{0}$, due to resonant annihilations; the
third dip lies outside the chosen $m_{D}$ range, which does not include mass
values of the order of $\frac{m_{III}}{2}$. On the other hand, $\lambda_{D}$
increases up to values of about $0.5$ due to possible interference effects among the
intermediate scalar propagators. The figure shows that an increasing mass
$m_{I}$ results in a shift of the $\lambda_{D}$ curve and, for increasing $m_{I}$,
$\lambda_{D}$ increases (decreases) for light (heavy) darkons in order
to satisfy the observed relic abundance. Fig. 2 is the same as Fig. 1 but for
$d_{u}=1.5$. There is a considerable enhancement of the
coupling $\lambda_{D}$ for some intermediate values of $m_{D}$, and the first
suppressed value of $\lambda_{D}$ appears for a lighter $m_{D}$ compared to the
case $d_{u}=1.1$. This is because the mass $m_{II}$ becomes
lighter with increasing $d_{u}$, so the resonant annihilation
occurs for a lighter darkon mass.
Now we study the $d_{u}$ and $s_{0}$ dependence of the coupling $\lambda_{D}$ in order to
understand their effect on the annihilation rate more clearly.
In Fig.3 we present $d_{u}$ dependence of $\lambda_{D}$ for $s_{0}=0.1$. Here
the solid-long dashed-dashed-dotted-dash dotted line represents $\lambda_{D}$
for $m_{I}=110\,GeV$, $m_{D}=20\,GeV$-$m_{I}=120\,GeV$,
$m_{D}=20\,GeV$-$m_{I}=110\,GeV$, $m_{D}=30\,GeV$-$m_{I}=120\,GeV$,
$m_{D}=30\,GeV$-$m_{I}=110\,GeV$, $m_{D}=60\,GeV$. It is observed that for a
heavy darkon $\lambda_{D}$ is not sensitive to $d_{u}$. For a light darkon,
$\lambda_{D}$ is sensitive to $d_{u}$ around the values at which the threshold
$m_{D}\sim\frac{m_{II}}{2}$ is reached.
Fig. 4 shows the $s_{0}$ dependence of $\lambda_{D}$ for $d_{u}=1.5$. Here the
solid-long dashed-dashed-dotted-dash dotted line represents $\lambda_{D}$ for
$m_{I}=110\,GeV$, $m_{D}=20\,GeV$-$m_{I}=120\,GeV$,
$m_{D}=20\,GeV$-$m_{I}=110\,GeV$, $m_{D}=70\,GeV$-$m_{I}=120\,GeV$,
$m_{D}=70\,GeV$-$m_{I}=110\,GeV$, $m_{D}=60\,GeV$. This figure shows that for a
light (heavy) darkon $\lambda_{D}$ decreases (increases) with increasing
values of $s_{0}$, since the increase in $s_{0}$ causes the masses
$m_{II}$ and $m_{III}$ to become lighter. With the decrease in the mass $m_{II}$,
$m_{D}$ approaches $\frac{m_{II}}{2}$ and the resonant annihilation occurs for a
light darkon. For the heavy one, $m_{D}$ moves away from $\frac{m_{II}}{2}$
as $s_{0}$ increases and the annihilation rate becomes small.
In summary, we consider electroweak symmetry breaking occurring at tree level
through the interaction of the SM Higgs doublet, the hidden scalar and the
hidden unparticle sector. Furthermore we introduce an additional stable scalar SM
singlet field $\phi_{D}$, which is a dark matter candidate. This scalar
disappears by annihilating in pairs into SM particles with the help of the
exchange scalars that appear after the electroweak symmetry breaking and the
mixing. By requiring an annihilation rate that does not contradict the
current relic density, we predict the tree level coupling $\lambda_{D}$ which
drives the annihilation process. We see that the unparticle scaling $d_{u}$
and the parameter $s_{0}$ play a considerable role in the annihilation and,
therefore, in the coupling $\lambda_{D}$. Once the dark matter mass $m_{D}$ is
fixed by dark matter search experiments, it would be possible to
understand the mechanism behind the possible annihilation process and one
could obtain considerable information about the electroweak symmetry breaking.
Appendix
Here we would like to present briefly (see [25] for details) the possible
mechanism of electroweak symmetry breaking coming from the coupling of the
unparticle to the scalar sector of the toy model used. The scalar potential
responsible for the unparticle-neutral scalar mixing reads:
$\displaystyle V(\Phi_{1},\phi_{2},\phi)$ $\displaystyle=$
$\displaystyle\lambda_{0}(\Phi_{1}^{\dagger}\,\Phi_{1})^{2}+\lambda^{\prime}_{0}(\phi_{2}^{*}\,\phi_{2})^{2}+\lambda_{1}(\phi^{*}\,\phi)^{2}$
(6) $\displaystyle+$ $\displaystyle
2\lambda_{2}\,\mu^{2-d_{u}}\,(\Phi_{1}^{\dagger}\,\Phi_{1})\,(\phi^{*}\,\phi)^{\frac{d_{u}}{2}}+2\lambda^{\prime}_{2}\,\mu^{2-d_{u}}\,(\phi_{2}^{*}\,\phi_{2})\,(\phi^{*}\,\phi)^{\frac{d_{u}}{2}}\,,$
where $\mu$ is the parameter inserted in order to make the couplings
$\lambda_{2}$ and $\lambda^{\prime}_{2}$ dimensionless. In order to find the
minimum of the potential V along the ray $\Phi_{i}=\rho\,N_{i}$ with
$\Phi_{i}=(\Phi_{1},\phi_{2},\phi)$ (see [24])
$\displaystyle\Phi_{1}=\frac{\rho}{\sqrt{2}}\left(\begin{array}[]{c c}0\\\
N_{0}\end{array}\right)\,\,;\phi_{2}=\frac{\rho}{\sqrt{2}}\,N^{\prime}_{0}\,\,;\phi=\frac{\rho}{\sqrt{2}}\,N_{1}\,,$
(9)
in unitary gauge, and the potential V
$\displaystyle V(\rho,N_{i})$ $\displaystyle=$
$\displaystyle\frac{\rho^{4}}{4}\,\Bigg{(}\lambda_{0}\,N_{0}^{4}+\lambda^{\prime}_{0}\,N^{\prime\,4}_{0}+2\,\Big{(}\frac{\hat{\rho}^{2}}{2}\Big{)}^{-\epsilon}\,(\lambda_{2}\,N_{0}^{2}+\lambda^{\prime}_{2}\,N^{\prime\,2}_{0})\,N_{1}^{d_{u}}+\lambda_{1}\,N_{1}^{4}\Bigg{)}\,,$
(10)
the stationary condition $\frac{\partial V}{\partial N_{i}}|_{\vec{n}}$ along
a special $\vec{n}$ direction should be calculated. Here
$\epsilon=\frac{2-d_{u}}{2}$ and $\vec{N}$ is taken as the unit vector in
field space, $N_{0}^{2}+N_{0}^{\prime 2}+N_{1}^{2}=1\,.$ Finally, one gets
the minimum values of $N_{i}$, namely $n_{i}$, as
$\displaystyle
n^{2}_{0}=\frac{\chi}{1+\chi+\kappa}\,\,,\,\,\,n^{\prime\,^{2}}_{0}=\frac{1}{1+\chi+\kappa}\,\,,\,\,\,n^{2}_{1}=\frac{\kappa}{1+\chi+\kappa}\,\,,\,\,\,$
(11)
where
$\displaystyle\chi=\frac{\lambda^{\prime}_{0}}{\lambda_{0}}\,\,,\,\,\,\kappa=\sqrt{\frac{d_{u}}{2}}\,\sqrt{\frac{\lambda^{\prime}_{0}\,(\lambda_{0}+\lambda^{\prime}_{0})}{\lambda_{0}\,\lambda_{1}}}\,,$
(12)
for $\lambda_{2}=\lambda^{\prime}_{2}$ which we consider in our calculations.
By using eq.(11), the nontrivial minimum value of the potential is obtained as
$\displaystyle V(\rho,n_{i})$ $\displaystyle=$
$\displaystyle-\frac{\rho^{4}}{4}\,\Bigg{(}\lambda_{0}\,n_{0}^{4}+\lambda^{\prime}_{0}\,n^{\prime\,4}_{0}\Bigg{)}\,\epsilon\,.$
(13)
In this case the minimum of the potential is nontrivial, namely
$V(\rho,n_{i})\neq 0$ for $1<d_{u}<2$, without the need for the Coleman-Weinberg (CW)
mechanism (see [24] for details of the CW mechanism). The stationary condition fixes the
parameter $\rho$ as,
$\displaystyle\rho=\rho_{0}=\Bigg{(}\frac{-2^{\epsilon}\,\lambda_{2}\,n_{1}^{d}}{\lambda_{0}\,n_{0}^{2}}\Bigg{)}^{\frac{1}{2\,\epsilon}}\,\mu\,,$
(14)
and one gets
$\displaystyle\hat{\rho_{0}}^{2}=(\frac{\rho_{0}}{\mu})^{2}=2\,\Big{(}\frac{d}{2}\Big{)}^{\frac{d}{4-2\,d}}\,\Big{(}\frac{-\lambda_{2}}{\lambda^{\prime}_{0}}\Big{)}\,\Big{(}1-\sqrt{\frac{d}{2}}\,\frac{\lambda^{\prime}_{0}}{\lambda_{2}}+\frac{\lambda^{\prime}_{0}}{\lambda_{0}}\Big{)}\,,$
(15)
by using the eq.(11) with the help of the restriction
$\displaystyle\lambda_{2}=-\sqrt{\frac{\lambda_{0}\,\lambda^{\prime}_{0}\,\lambda_{1}}{\lambda_{0}+\lambda^{\prime}_{0}}}\,.$
(16)
Here the restriction in eq.(16) arises when one chooses $d_{u}=2$ in the
stationary conditions.
Now, we study the mixing matrix of the scalars under consideration. The
expansion of the fields $\Phi_{1}$, $\phi_{2}$ and $\phi$ around the vacuum
$\displaystyle\Phi_{1}=\frac{1}{\sqrt{2}}\left(\begin{array}[]{c c}0\\\
\rho_{0}\,n_{0}+h\end{array}\right)\,\,;\phi_{2}=\frac{1}{\sqrt{2}}\,(\rho_{0}\,n^{\prime}_{0}+h^{\prime})\,\,;\phi=\frac{1}{\sqrt{2}}\,(\rho_{0}\,n_{1}+s),$
(19)
results in the potential (eq. (6))
$\displaystyle V(h,h^{\prime},s)\\!\\!\\!\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\\!\\!\\!\frac{\lambda_{0}}{4}\,(\rho_{0}\,n_{0}+h)^{4}+\frac{\lambda^{\prime}_{0}}{4}\,(\rho_{0}\,n^{\prime}_{0}+h^{\prime})^{4}+\frac{\lambda_{1}}{4}\,(\rho_{0}\,n_{1}+s)^{4}+2^{-\frac{d}{2}}\,\lambda_{2}\,\mu^{2\,\epsilon}\,(\rho_{0}\,n_{0}+h)^{2}\,(\rho_{0}\,n_{1}+s)^{d_{u}}$
(20) $\displaystyle+$ $\displaystyle
2^{-\frac{d}{2}}\,\lambda^{\prime}_{2}\,\mu^{2\,\epsilon}\,(\rho_{0}\,n^{\prime}_{0}+h^{\prime})^{2}\,(\rho_{0}\,n_{1}+s)^{d_{u}}\,,$
and the mass matrix
$(M^{2})_{ij}=\frac{\partial^{2}\,V}{\partial\,\phi_{i}\,\partial\,\phi_{j}}|_{\phi_{i}=0}$
with $\phi_{i}=(h,h^{\prime},s)$ as
$\displaystyle(M^{2})_{ij}=2\,\rho_{0}^{2}\,n_{0}^{2}\,\left(\begin{array}[]{ccc}\lambda_{0}&0&-\Big{(}\frac{d_{u}\,\lambda_{0}}{2}\Big{)}^{\frac{3}{4}}\,\Big{(}\frac{\lambda^{\prime}_{0}\,\lambda_{1}}{\lambda_{0}+\lambda^{\prime}_{0}}\Big{)}^{\frac{1}{4}}\\\
\\\
0&\lambda_{0}&-\Big{(}\frac{d_{u}\,\lambda_{0}}{2}\Big{)}^{\frac{3}{4}}\,\Big{(}\frac{\lambda_{0}^{2}\,\lambda_{1}}{\lambda^{\prime}_{0}\,(\lambda_{0}+\lambda^{\prime}_{0})}\Big{)}^{\frac{1}{4}}\\\
\\\
-\Big{(}\frac{d_{u}\,\lambda_{0}}{2}\Big{)}^{\frac{3}{4}}\,\Big{(}\frac{\lambda^{\prime}_{0}\,\lambda_{1}}{\lambda_{0}+\lambda^{\prime}_{0}}\Big{)}^{\frac{1}{4}}&-\Big{(}\frac{d_{u}\,\lambda_{0}}{2}\Big{)}^{\frac{3}{4}}\,\Big{(}\frac{\lambda_{0}^{2}\,\lambda_{1}}{\lambda^{\prime}_{0}\,(\lambda_{0}+\lambda^{\prime}_{0})}\Big{)}^{\frac{1}{4}}&(2-\frac{d_{u}}{2})\,\sqrt{\frac{d_{u}}{2}}\,\sqrt{\frac{\lambda_{0}\,\lambda_{1}\,(\lambda_{0}+\lambda^{\prime}_{0})}{\lambda^{\prime}_{0}}}\,,\\\
\end{array}\right)$ (26)
with eigenvalues
$\displaystyle m_{I}^{2}$ $\displaystyle=$ $\displaystyle
2\,\lambda_{0}\,n_{0}^{2}\,\rho_{0}^{2}\,,$ $\displaystyle m_{II}^{2}$
$\displaystyle=$
$\displaystyle\lambda_{0}\,n_{0}^{2}\,\rho_{0}^{2}\,\Bigg{(}1+(2-\frac{d_{u}}{2})\,\sqrt{\frac{d_{u}\,s_{10}\,(1+s_{0})}{2\,s_{0}}}-\sqrt{\Delta}\Bigg{)}\,,$
$\displaystyle m_{III}^{2}$ $\displaystyle=$
$\displaystyle\lambda_{0}\,n_{0}^{2}\,\rho_{0}^{2}\,\Bigg{(}1+(2-\frac{d_{u}}{2})\,\sqrt{\frac{d_{u}\,s_{10}\,(1+s_{0})}{2\,s_{0}}}+\sqrt{\Delta}\Bigg{)}\,,$
(27)
where
$\displaystyle\Delta=d_{u}\,\sqrt{\frac{2d_{u}\,s_{10}\,(1+s_{0})}{s_{0}}}+\Bigg{(}1+(\frac{d_{u}}{2}-2)\,\sqrt{\frac{d_{u}\,s_{10}\,(1+s_{0})}{2\,s_{0}}}\Bigg{)}^{2}\,.$
(28)
Here we used the parametrization
$\displaystyle\lambda^{\prime}_{0}=s_{0}\,\lambda_{0}\,,\,\,\,\lambda_{1}=s_{10}\,\lambda_{0}\
\,.$ (29)
The physical states $h_{I},\,h_{II},\,h_{III}$ are connected to the original
states $h,\,h^{\prime},\,s$ as
$\displaystyle\left(\begin{array}[]{c}h\\\ h^{\prime}\\\ s\\\
\end{array}\right)=\left(\begin{array}[]{ccc}c_{\alpha}&-c_{\eta}\,s_{\alpha}&s_{\eta}\,s_{\alpha}\\\
s_{\alpha}&c_{\eta}\,c_{\alpha}&-s_{\eta}\,c_{\alpha}\\\
0&s_{\eta}&c_{\eta}\\\ \end{array}\right)\left(\begin{array}[]{c}h_{I}\\\
h_{II}\\\ h_{III}\\\ \end{array}\right)\,,$ (39)
where $c_{\alpha\,(\eta)}=cos\,\alpha\,(\eta)$,
$s_{\alpha\,(\eta)}=sin\,\alpha\,(\eta)$ and
$\displaystyle tan\,2\,\alpha$ $\displaystyle=$
$\displaystyle\frac{2\,\sqrt{s_{0}}}{s_{0}-1}\,,$ $\displaystyle tan\,2\,\eta$
$\displaystyle=$
$\displaystyle\Big{(}\frac{d_{u}}{2}\Big{)}^{\frac{3}{4}}\,\frac{2\Big{(}s_{0}\,s_{10}\,(1+s_{0})\,\Big{)}^{\frac{1}{4}}}{(1-\frac{d_{u}}{4})\,\sqrt{2\,d_{u}\,s_{10}\,(1+s_{0})}-\sqrt{s_{0}}}\,.\,$
(40)
When $d_{u}\rightarrow 2$, the state $h_{II}$ is massless at tree level,
and it has the lightest mass for $1<d_{u}<2$. $h_{I}$ and $h_{III}$ can be
identified as the SM Higgs boson and the heavy scalar coming from the shadow
sector, respectively.
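For orientation, the sketch below evaluates the tree-level masses of Eq. (27), with $s_{10}$ fixed by Eq. (43) and the overall scale set by requiring $m_{I}=110$ GeV as in the numerical analysis; it illustrates, for instance, that $m_{II}$ decreases toward zero as $d_{u}\rightarrow 2$ while $m_{III}$ stays heavy. The choice $s_{0}=0.1$ is one of the parameter values used in the figures.

```python
# Sketch: tree-level scalar masses of Eq. (27), with s10 fixed by Eq. (43) and
# the overall scale set by m_I = 110 GeV (m_I^2 = 2*lambda0*n0^2*rho0^2).
import numpy as np

def scalar_masses(d_u, s0, m_I=110.0):
    c0 = 2.0 * (d_u / 2.0) ** (d_u / (2.0 * (2.0 - d_u)))
    s10 = (1.0 + s0) / (c0 ** 2 * s0)                        # Eq. (43)
    A = np.sqrt(d_u * s10 * (1.0 + s0) / (2.0 * s0))
    delta = (d_u * np.sqrt(2.0 * d_u * s10 * (1.0 + s0) / s0)
             + (1.0 + (d_u / 2.0 - 2.0) * A) ** 2)           # Eq. (28)
    scale = m_I ** 2 / 2.0                                    # = lambda0*n0^2*rho0^2
    m_II = np.sqrt(scale * (1.0 + (2.0 - d_u / 2.0) * A - np.sqrt(delta)))
    m_III = np.sqrt(scale * (1.0 + (2.0 - d_u / 2.0) * A + np.sqrt(delta)))
    return m_II, m_III

for d_u in (1.1, 1.5, 1.9):
    m2, m3 = scalar_masses(d_u, s0=0.1)
    print(f"d_u = {d_u}: m_II ~ {m2:5.1f} GeV, m_III ~ {m3:6.1f} GeV")
# m_II decreases toward zero as d_u -> 2, so the resonance at m_D ~ m_II/2 moves
# to lighter darkon masses, consistent with the discussion of Figs. 1-4.
```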
The final restriction is constructed by fixing the vacuum expectation value
$v_{0}=n_{0}\,\rho_{0}$, by the gauge boson mass $m_{W}$ as
$\displaystyle
v_{0}^{2}=\frac{4\,m_{W}^{2}}{g_{W}^{2}}=\frac{1}{\sqrt{2}\,G_{F}}\,,$ (41)
where $G_{F}$ is the Fermi constant. By using eqs. (11) and (15) we get
$\displaystyle\hat{v}_{0}^{2}=c_{0}\,\frac{s_{10}\,\sqrt{2\,s_{0}\,(1+s_{0})}+s_{0}\,\sqrt{d_{u}\,s_{10}}}{\sqrt{d\,s_{0}\,(1+s_{0})}+(1+s_{0})\,\sqrt{2\,s_{10}}}\,,$
(42)
with $c_{0}=2\,\Big{(}\frac{d_{u}}{2}\Big{)}^{\frac{d_{u}}{2\,(2-\,d_{u})}}$.
The choice of the parameter $\mu$ around weak scale as $\mu=v_{0}$ results in
the additional restriction which connects parameters $s_{0}$ and $s_{10}$ (see
eq. (42) by considering $\hat{v}_{0}^{2}=1$) as
$\displaystyle s_{10}=\frac{1+s_{0}}{c_{0}^{2}\,s_{0}}\,.$ (43)
When $d_{u}\rightarrow 2$,
$s_{10}\rightarrow\frac{e}{4}\,\frac{1+s_{0}}{s_{0}}$, and when
$d_{u}\rightarrow 1$, $s_{10}\rightarrow\frac{1+s_{0}}{2\,s_{0}}$. This shows
that the ratios are of order one, so the choice $\mu=v_{0}$ is
reasonable (see [16] for a similar discussion).
## References
* [1] G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. 267, 195 (1996).
* [2] G. Bertone, D. Hooper and J. Silk, Phys. Rept. 405, 279 (2005).
* [3] E. Komatsu et al., Astrophys. J. Suppl. Ser. 180, 330 (2009).
* [4] Francesco D Eramo Phys. Rev. D76, 083522 (2007).
* [5] Wan-Lei Guo, Xin Zhang, hep-ph/0904.2451
* [6] V. Silveira and A. Zee, Phys. Lett. B161, 136 (1985).
* [7] D. E. Holz and A. Zee, Phys. Lett. B517, 239 (2001).
* [8] J. McDonald, Phys. Rev. D50, 3637 (1994).
* [9] B. Patt and F. Wilczek (2006), hep-ph/0605188.
* [10] O. Bertolami, R. Rosenfeld, Int. J. Mod. Phys. A23, 4817 (2008).
* [11] H. Davoudiasl, R. Kitano, T. Li, and H. Murayama, Phys. Lett. B609, 117 (2005).
* [12] X.-G. He, T. Li, X.-Q. Li, and H.-C. Tsai, Mod. Phys. Lett. A22, 117 (2005).
* [13] H. Georgi, Phys. Rev. Lett. 98, 221601 (2007).
* [14] H. Georgi, Phys. Lett. B650, 275 (2007).
* [15] S. L. Chen and X. G. He, Phys. Rev. D76, 091702 (2007).
* [16] J.P. Lee, hep-ph/0803.0833 (2008).
* [17] T. Kikuchi, hep-ph/0812.4179 (2008).
* [18] P. J. Fox, A. Rajaraman and Y. Shirman, Phys. Rev. D76, 075004 (2007).
* [19] A. Delgado, J. R. Espinosa and M. Quiros, JHEP 0710, 094 (2007).
* [20] A. Delgado, J. R. Espinosa, J. M. No, M. Quiros, Phys. Rev. D79, 055011 (2009).
* [21] M. Bander, J. L. Feng, A. Rajaraman and Y. Shirman, Phys. Rev. D76, 115002 (2007).
* [22] T. Kikuchi and N. Okada, Phys. Lett. B661, 360 (2008).
* [23] F. Sannino, R. Zwicky, Phys. Rev. D79, 015016 (2009).
* [24] E. Gildener, S. Weinberg, Phys. Rev. D13, 3333 (1976).
* [25] E. Iltan, hep-ph/0901.0544.
* [26] W-F Chang, J. N. Ng, J. M. S. Wu, Phys. Rev. D74, 095005 (2006).
* [27] C. Bird, P. Jackson, R. Kowalewski and M. Pospelov, Phys. Rev. Lett. 93, 201803 (2004).
* [28] C. Bird, R. Kowalewski and M. Pospelov, Mod. Phys. Lett. A21, 457 (2006).
* [29] D. N. Spergel et al. (WMAP), Astrophys. J. Suppl. Ser. 148, 175 (2003).
* [30] G. Servant and T. M. P. Tait, Nucl. Phys. B650, 391 (2003).
* [31] S. Gopalakrishna, A. de Gouvea, W. Porod, JCAP 0605, 005 (2006).
* [32] S. Gopalakrishna, S. J. Lee, J. D. Wells, hep-ph/0904.2007
* [33] E. W. Kolb and M. S. Turner, The Early Universe (Addison- Wesley, Reading, MA, 1990).
Figure 1: $\lambda_{D}$ as a function of $m_{D}$ for $d_{u}=1.1$. Here the
solid-long dashed-dashed-dotted line represents $\lambda_{D}$ for
$m_{I}=110\,GeV$, $s_{0}=0.1$-$m_{I}=110\,GeV$, $s_{0}=0.5$-$m_{I}=120\,GeV$,
$s_{0}=0.1$-$m_{I}=120\,GeV$, $s_{0}=0.5$. Figure 2: The same as Fig. 1 but
for $d_{u}=1.5$. Figure 3: $\lambda_{D}$ as a function of $d_{u}$ for
$s_{0}=0.1$. Here the solid-long dashed-dashed-dotted-dash dotted line
represents $\lambda_{D}$ for $m_{I}=110\,GeV$,
$m_{D}=20\,GeV$-$m_{I}=120\,GeV$, $m_{D}=20\,GeV$-$m_{I}=110\,GeV$,
$m_{D}=30\,GeV$-$m_{I}=120\,GeV$, $m_{D}=30\,GeV$-$m_{I}=110\,GeV$,
$m_{D}=60\,GeV$. Figure 4: $\lambda_{D}$ as a function of $s_{0}$ for
$d_{u}=1.5$. Here the solid-long dashed-dashed-dotted-dash dotted line
represents $\lambda_{D}$ for $m_{I}=110\,GeV$,
$m_{D}=20\,GeV$-$m_{I}=120\,GeV$, $m_{D}=20\,GeV$-$m_{I}=110\,GeV$,
$m_{D}=70\,GeV$-$m_{I}=120\,GeV$, $m_{D}=70\,GeV$-$m_{I}=110\,GeV$,
$m_{D}=60\,GeV$.
|
arxiv-papers
| 2009-04-28T11:37:16 |
2024-09-04T02:49:02.226947
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "E. Iltan",
"submitter": "Erhan Iltan",
"url": "https://arxiv.org/abs/0904.4369"
}
|
0904.4479
|
# IRS Spectroscopy and Multi-wavelength Study of Luminous Star-forming
Galaxies at $z\simeq 1.9$
J.-S. Huang,11affiliation: Harvard-Smithsonian Center for Astrophysics, 60
Garden Street, Cambridge, MA 02138 S.M. Faber,22affiliation: University of
California Observatories/Lick Observatory, University of California, Santa
Cruz, CA 95064 E. Daddi,33affiliation: Laboratoire AIM, CEA/DSM-CNRS-
Université Paris Diderot, DAPNIA/Service d’Astrophysique, CEA Saclay, Orme des
Merisiers, 91191 Gif-sur-Yvette Cedex, France E. S. Laird,44affiliation:
Astrophysics Group, Imperial College London, Blackett Laboratory, Prince
Consort Road, London SW7 2AZ K. Lai,11affiliation: Harvard-Smithsonian Center
for Astrophysics, 60 Garden Street, Cambridge, MA 02138 A.
Omont,55affiliation: Institut d’Astrophysique de Paris-CNRS, 98bis Boulevard
Arago, F-75014 Paris, France Y. Wu,66affiliation: IPAC, California Institute
of Technology, 1200 E. California, Pasadena, CA 91125 J. D.
Younger11affiliation: Harvard-Smithsonian Center for Astrophysics, 60 Garden
Street, Cambridge, MA 02138 K. Bundy,77affiliation: Reinhardt Fellow,
Department of Astronomy and Astrophysics, University of Toronto, Toronto, ON
M5S 3H8, Canada A. Cattaneo, 88affiliation: Astrophysikalisches Institut
Potsdam, an der Sternwarte 16, 14482 Potsdam, Germany S. C.
Chapman,99affiliation: Institute of Astronomy, Cambridge, CB3 0HA, UK.;
University of Victoria, Victoria, BC V8P 1A1 Canada C.J.
Conselice,1010affiliation: University of Nottingham, School of Physics &
Astronomy, Nottingham NG7 2RD M. Dickinson,1111affiliation: NOAO, 950 North
Cherry Avenue, Tucson, AZ 85719 E. Egami,1212affiliation: Steward Observatory,
University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 G. G.
Fazio,11affiliation: Harvard-Smithsonian Center for Astrophysics, 60 Garden
Street, Cambridge, MA 02138 M. Im, 1313affiliation: Department of Physics and
Astronomy, FPRD, Seoul National University, Seoul 151-747, Korea D.
Koo,22affiliation: University of California Observatories/Lick Observatory,
University of California, Santa Cruz, CA 95064 E. Le Floc’h,1414affiliation:
IfA, University of Hawaii, Honolulu, HI 96822 C. Papovich,1212affiliation:
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson,
AZ 85721 D. Rigopoulou,1515affiliation: Department of Astrophysics, Oxford
University, Keble Road, Oxford, OX1 3RH, UK I. Smail,1616affiliation:
Institute for Computational Cosmology, Durham University, Durham, UK M. Song,
1313affiliation: Department of Physics and Astronomy, FPRD, Seoul National
University, Seoul 151-747, Korea P. P. Van de Werf,1717affiliation: Leiden
Observatory, Leiden University, P.O. Box 9513, NL-2300 RA Leiden, Netherlands
T. M. A. Webb,1818affiliation: Department of Physics, McGill University,
Montr?al, QC, Canada C. N. A. Willmer,1212affiliation: Steward Observatory,
University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 S. P.
Willner,11affiliation: Harvard-Smithsonian Center for Astrophysics, 60 Garden
Street, Cambridge, MA 02138 & L. Yan,66affiliation: IPAC, California Institute
of Technology, 1200 E. California, Pasadena, CA 91125
###### Abstract
We analyze a sample of galaxies chosen to have $F_{24\micron}>0.5$ mJy and to
satisfy a certain IRAC color criterion. IRS spectra yield redshifts, spectral
types, and PAH luminosities, to which we add broadband photometry from optical
through IRAC wavelengths, MIPS photometry from 24 to 160 µm, 1.1 mm photometry,
and radio at 1.4 GHz. Stellar population modeling and IRS spectra together demonstrate that
the double criteria used to select this sample have efficiently isolated
massive star-forming galaxies at $z\sim 1.9$. This is the first starburst-
dominated ULIRG sample at high redshift with total infrared luminosity
measured directly from FIR and millimeter photometry, and as such gives us the
first accurate view of broadband SEDs for starburst galaxies at extremely high
luminosity and at all wavelengths. Similar broadband data are assembled for
three other galaxy samples – local starburst galaxies, local AGN/ULIRGS, and a
second 24µm-luminous $z\sim 2$ sample dominated by AGN. $L_{PAH}/L_{IR}$ for
the new $z\sim 2$ starburst sample is the highest ever seen, some three times
higher than in local starbursts, whereas in AGNs this ratio is depressed below
the starburst trend, often severely. Several pieces of evidence imply that weak
AGNs exist in this starburst-dominated sample; two objects even host very
strong AGNs, yet they still show very strong PAH emission. The ACS images show
that most objects have very extended morphologies in the rest-frame UV band,
and thus an extended distribution of PAH molecules. Such an extended
distribution prevents further destruction of PAH molecules by central AGNs. We
conclude that objects in this sample are ULIRGs powered mainly by starbursts,
and the total infrared luminosity density contributed by this type of object
is $0.9-2.6\times 10^{7}L_{\odot}/Mpc^{3}$.
cosmology: observations — galaxies: dust emission — galaxies: mid-infrared
## 1 INTRODUCTION
Formation of massive galaxies provides a critical test of theories of galaxy
formation and evolution. Before modern deep observations, the most massive
galaxies known were local elliptical galaxies with no ongoing star formation.
The classical model for these objects (e.g., Eggen et al., 1962) was
monolithic formation at high redshifts, followed by passive evolution. A more
recent galaxy formation theory in the Cold Dark Matter (CDM) paradigm
(Blumenthal et al., 1984; White & Frenk, 1991; Cole et al., 2000) predicts
quite the opposite scenario: a galaxy-galaxy merging-tree model. In this
scenario, small galaxies formed early in cosmic time, and massive galaxies
were assembled later at much lower redshifts by a series of mergers.
Observations of local Ultra-Luminous InfraRed Galaxies (ULIRGs,
$L_{IR}>10^{12}$
$L_{\sun}$)111$L_{IR}\equiv\int_{8~{}\micron}^{1000~{}\micron}L_{\nu}d\nu$
(Sanders & Mirabel, 1996) detected by IRAS are consistent with the merger
theory. Most local ULIRGs have disturbed morphologies, consistent with being
merging systems (Kim et al., 2002). ULIRGs in later stages of merging have
$r^{-1/4}$ light profiles (James et al., 1999; Rothberg & Joseph, 2004).
Genzel et al. (2001) and Tacconi et al. (2002) measured local ULIRG dynamical
masses and found an average of $10^{11}$ $M_{\sun}$. These features are
consistent with numerical simulation studies of galaxy mergers, indicating
that local ULIRGs are merging systems transforming gas-rich galaxies into
$L_{*}$ elliptical galaxies (Kormendy & Sanders, 1992; Mihos & Hernquist,
1996; Barnes et al., 1996; Tacconi et al., 2002).
The story is different at $z\gtrsim 2$. Deep near-infrared surveys (Franx et
al., 2003; Glazebrook et al., 2004; McCarthy et al., 2004; Daddi et al., 2005;
Labbe et al., 2005) have identified apparently luminous passive galaxies
already in place at $z\sim 2$, implying that they formed at even higher
redshifts. The existence of galaxies with $M_{*}>10^{11}$ $M_{\sun}$ at high
redshifts may challenge the merger theory of forming such objects at lower
redshifts. However, Cattaneo et al. (2008) used a semi-analytic model to show
that significant numbers of $M_{*}>10^{11}M_{\odot}$ galaxies were in place
by $z\sim 2$ but many also formed at lower redshifts; that is, there was an
overall “downsizing” trend for massive galaxies to form their stars early, but
it is merely statistical, not absolute. Possibly consistent with this is the
fact that the contribution of LIRGs and ULIRGs to the total IR luminosity
density is more than 70% at $z=1$ (Le Floc’h et al., 2005) compared to a
negligible percentage locally (Sanders & Mirabel, 1996; Huang et al., 2007b).
Moreover, the redshift surveys for Sub-Millimeter Galaxies (SMGs) by Chapman
et al. (2003, 2005) reveal a rapidly evolving ULIRG population at $1.7<z<2.8$.
Such strong evolution is also seen in ULIRGs selected with BzK colors and MIPS
24 µm flux at $z\sim 2$, whose number density is apparently 3 orders of
magnitude higher than the local number density. Thus local ULIRGs may well
be the tail end of earlier intense activity.
The Spitzer MIPS 24 µm band has been very effective in probing infrared
emission from galaxies at redshifts up to $z\sim 3$ (Huang et al., 2005; Webb
et al., 2006; Papovich et al., 2006; Rigopoulou et al., 2006; Daddi et al.,
2005, 2007a, 2007b). Papovich et al. (2006), Webb et al. (2006), Daddi et al.
(2007a, b), and Dey et al. (2008) argued that 24 µm emission from galaxies at
$2<z<3$ is powered by both active galactic nuclei (AGN) and star formation.
Spectroscopic observations of a few 24 µm luminous SMGs and Lyman break
galaxies (LBGs) at $1<z<3$ (Lutz et al., 2005; Huang et al., 2007a; Le Floc’h
et al., 2007; Valiante et al., 2006; Pope et al., 2008) with the Infrared
Spectrograph (IRS) on Spitzer support this view, showing both strong continua
and emission features of Polycyclic Aromatic Hydrocarbons (PAH) in the rest-
frame $6<\lambda<10$ µm. Systematic infrared spectroscopic surveys of 24 µm
luminous but optically faint sources (Houck et al., 2005; Yan et al., 2005)
reveal a dusty, $z\sim 2$ AGN population not detected in optical surveys. Most
of these AGNs are ULIRGs with power-law spectral energy distributions (SEDs)
in the mid-infrared and deep silicate absorption at 9.7 µm (Sajina et al.,
2007a). Weedman et al. (2006b) observed a sample of X-ray AGN with similar
properties though generally less silicate absorption. Optically-faint radio
sources are a mix of AGN and starbursts (Weedman et al., 2006a) but are
predominantly AGN. In general, optically-faint objects have weak infrared
spectral emission features, and most objects are likely to be AGN (Weedman et
al., 2006c). However, not all 24 µm luminous objects at $z\sim 2-3$ are AGN-
dominated. For example, Weedman et al. (2006b) and Farrah et al. (2008) also
identified samples with an apparent 1.6 µm stellar peak in the IRAC 4.5 or 5.8
µm bands. Both samples show a very narrow redshift distribution due to a
selection of the MIPS 24 µm band toward strong 7.7µm PAH emission at $z\sim
1.9$. IRS spectroscopy of 24 µm-luminous SMGs (Lutz et al., 2005; Menendez-
Delmestre et al., 2007; Valiante et al., 2006; Pope et al., 2008; Menendez-
Delmestre et al., 2008) shows similar spectral features, namely strong PAH
emission in objects with a 1.6µm stellar emission bump (Weedman et al., 2006b;
Farrah et al., 2008), indicating intensive star formation in both types of
objects.
This paper presents an IRS spectroscopic and multi-wavelength study of a ULIRG
sample at $z\sim 2$. The sample comes from the All-wavelength Extended Groth-
strip International Survey (AEGIS, Davis et al., 2007), which consists of deep
surveys ranging from X-ray to radio wavelengths. Selection of our sample
catches a starburst-dominated phase of ULIRG with $L_{IR}>5\times 10^{12}$L⊙,
which is very rare among local ULIRGs. In this paper, we will study their
properties including star formation, stellar masses, AGN fractions, and
contribution to the universe’s star formation history. §2 describes the sample
selection. The IRS spectroscopic results are presented in §3, and §4 contains
an analysis of stellar populations, star formation rate, and AGN fraction. §5
summarizes our results. All magnitudes are in the AB magnitude system unless
stated otherwise, and notation such as “[3.6]” means the AB magnitude at
wavelength 3.6 µm. The adopted cosmology parameters are $H_{0}=70$ km s-1
Mpc-1, $\Omega_{M}=0.3$, $\Omega_{\Lambda}=0.7$.
## 2 SAMPLE SELECTION
We wish to study the multi-wavelength properties of star-forming galaxies at
$z\sim 2$. There are many ways of using optical and NIR colors to select such
a sample. The samples with the best spectroscopic confirmation are the $UgR$
color-selected BM/BX sources (Steidel et al., 2004), which have estimated
typical stellar masses of about $10^{9}\sim 10^{10}$ $M_{\sun}$ (Shapley et
al., 2005; Reddy et al., 2006, 2008). The average 24 µm flux density for these
sources is $42\pm 6$ $\mu$Jy (Reddy et al., 2006), which suggests modest rest
frame mid-IR luminosities, consistent with LIRGs ($L_{IR}<10^{12}$
$L_{\sun}$). A different sample, based on near-infrared color selection, is
the Distant Red Galaxies (DRG, $(J-K)_{vega}>2.3$) (Franx et al., 2003; Labbe
et al., 2005). These galaxies are redder and dustier than the UV-selected
BM/BX sources and are believed to be more massive than $10^{11}$$M_{\sun}$
(Labbe et al., 2005; Papovich et al., 2006). Dusty DRGs have estimated total
infrared luminosity in the range $10^{11}<L_{IR}<10^{12}\mbox{$L_{\sun}$}$
(Webb et al., 2006; Papovich et al., 2006). A third sample (Daddi et al.,
2005) uses BzK colors to select galaxies at $z\sim 2$; massive BzK galaxies
are mid-IR luminous. Reddy et al. (2005) compared BM/BX, DRGs, and BzK
galaxies and found that BzK galaxies include most DRGs and BM/BX galaxies.
This comparison is nicely shown in Fig. 9 of Reddy et al. (2005).
An independent way to select galaxies at $z>1.5$ is to use IRAC colors. In
$1.5<z<3.0$, the four IRAC bands probe the rest-frame NIR bands where galaxy
SEDs have similar shape, thus the IRAC colors are very robust in determining
redshift in this range (Huang et al., 2004; Papovich et al., 2008). The 1.6 µm
stellar emission bump can be used to separate galaxies above and below $z\approx 1.5$.
At $z<1.5$, the IRAC 3.6 and 4.5 µm bands sample the galaxy SED on the Rayleigh-Jeans
tail of cool stars, thus $[3.6]-[4.5]<0$. At $z\gtrsim 1.5$, the 1.6 µm stellar
emission bump begins to move into the IRAC 4.5 µm band, making
$[3.6]-[4.5]>0$. The color criteria are set based on the M82 SED model (Huang
et al., 2004)
$0.05<[3.6]-[4.5]<0.4,\quad{\rm and}$ (1)
$-0.7<[3.6]-[8.0]<0.5$ (2)
Both color ranges correspond to the redshift range $1.5\lesssim
z\lesssim 3.3$. The red color cut in both equations rejects power-law AGNs and
star forming galaxies at $z>3.3$ whose 7.7µm PAH shifts out of the IRS
wavelength range. The selection is based on the rest-frame NIR colors, and is
thus less affected by dust extinction, stellar ages, and metallicities. Figure
1 compares IRAC two-color plots for the DEEP2 galaxies with $z<1.5$ to LBGs,
DRGs, and BzK galaxies. The color criteria of equations 1 and 2 include most
galaxies in the $1.5\lesssim z\lesssim 3.0$ range. Although 8 µm detection is
required for this selection method, selecting at this wavelength has some
additional advantages. Chosen at roughly 2–3 µm rest-frame, the sample
selection is immune to dust reddening and is roughly equivalent to a stellar-
mass-selected sample. It is thus ideal for studying luminous, potentially
massive galaxies (Huang et al., 2004, 2005; Conselice et al., 2007).
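For readers who wish to apply the same cut to catalog photometry, a minimal sketch in Python of the color criteria in Eqs. 1 and 2 follows, assuming AB magnitudes in the IRAC 3.6, 4.5, and 8.0 µm bands; the function name and the example values are hypothetical.

```python
def satisfies_irac_color_cut(m36, m45, m80):
    """Apply the IRAC color criteria of Eqs. 1 and 2 (AB magnitudes).

    Returns True for sources with 0.05 < [3.6]-[4.5] < 0.4 and
    -0.7 < [3.6]-[8.0] < 0.5, i.e. the nominal 1.5 <~ z <~ 3.3 window.
    """
    c1 = m36 - m45   # [3.6] - [4.5]
    c2 = m36 - m80   # [3.6] - [8.0]
    return (0.05 < c1 < 0.4) and (-0.7 < c2 < 0.5)

# Hypothetical source with [3.6]=21.9, [4.5]=21.7, [8.0]=21.8 (AB)
print(satisfies_irac_color_cut(21.9, 21.7, 21.8))   # True
```

The actual IRS target list additionally requires $F_{24\micron}>0.5$ mJy, as described in the next paragraph.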
The specific IRS targets for this program were selected from a 24 µm sample
(Papovich et al., 2004) in the EGS region. These sources show a clump at the
predicted colors for $1.5<z<3$ in Figure 2, but redshifts were not known in
advance. Individual targets were selected to have IRAC colors satisfying Eqs.
1 and 2 and also $F_{24\micron}>0.5$ mJy. In addition, each candidate was
visually examined in the deep Subaru R-band image (Ashby et al., 2008) to
avoid confused or blended targets. With these criteria, 12 targets were
selected in the 2°$\times$10′ EGS region for IRS observation. Table 1 lists
the sample galaxies. In this redshift range, most sources will be either
ULIRGs with total infrared luminosity $L_{IR}>10^{12}$ $L_{\sun}$ 222 The
total infrared luminosities ($L_{IR}$) for our sample are calculated with MIPS
24, 70, 160 µm and 1.1mm flux densities and Chary-Elbaz (Chary & Elbaz, 2001;
Daddi et al., 2007a, b) SED models. Details are given in §4.2. or AGNs with
high mid-IR luminosities. For convenience, the galaxy nicknames used in the
Spitzer database are used in this paper, but these do not follow proper naming
conventions and should not be used as sole object identifiers. Proper names
are also given in Table 1.
Most of the previous IRS surveys of IR luminous sources at $z\sim 2$ have used
rather different selection criteria. Table 2 summarizes the sample criteria
for various other IRS surveys. Houck et al. (2005) and Yan et al. (2005) used
extreme optical-to-24 µm color to select dusty objects. Objects in these
samples have much redder [3.6]-[8.0] IRAC colors than the majority of 24 µm
sources (Fig. 2) and are mostly AGNs as shown by their strong power-law
continua, but weak or absent PAH emission features (Houck et al., 2005; Yan et
al., 2005). Weedman et al. (2006b) selected AGN using similar criteria. They
also selected a separate starburst-dominated sample at $z\sim 2$ based on the
stellar 1.6 µm emission bump. The exact criterion required the peak flux
density to be at either 4.5 µm or 5.8 µm, thus rejects low-redshift galaxies
and AGN with strong power-law SEDs . The resulting sample is very similar to
ours though overall a bit redder (Fig. 2). All objects in the Weedman et al.
starburst sample show strong PAH emission features.
## 3 IRS Spectroscopy
### 3.1 Observations and Data Reduction
IRS observations of this sample are part of the GTO program for the
Spitzer/IRAC instrument team (program ID: 30327). Objects were observed only
with the IRS Long-Slit Low-Resolution first order (LL1) mode, giving
wavelength coverage $20<\lambda<38$ µm with spectral resolution
$60\lesssim\lambda/\Delta\lambda\lesssim 120$. The wavelength coverage
corresponds to $6\lesssim\lambda\lesssim 13$ µm in the rest-frame for galaxies
at $z\approx 2$. This wavelength range includes strong PAH emission features
at 7.7, 8.6, and 11.3 µm and silicate absorption from 8 to 13 µm (peaking near
9.7 µm). Detecting these features permits redshift measurement and study of
dust properties. Total exposure time for each target was based on its 24 µm
flux density. Mapping mode (Teplitz et al., 2007) was used to place each
object at 6 positions spaced 24″ apart along the 168″ IRS slit. This not only
gives more uniform spectra for the target objects, rejecting cosmic rays and
bad pixels, but also increases sky coverage for possible serendipitous objects
around each target. Table 1 gives the target list and other parameters for the
observations. All data were processed with the Spitzer Science Center
pipeline, version 13.0. Extraction of source spectra was done with both the
SMART analysis package (Higdon et al., 2004) and our customized software. The lack
of IRS coverage at $\lambda\lesssim 20$ µm for this sample is compensated for by
deep AKARI 15 µm imaging (Im et al., 2008). All objects are detected at 15 µm
except two that lie outside the AKARI area, providing a measurement of the
continua at rest-frame $\sim 6$ µm for galaxies at $z\approx 2$.
Figure 3 presents the IRS spectra. PAH emission features at 7.7 and 11.3 µm
and silicate absorption peaking at 9.7 µm are detected from 10 sources,
indicating a narrow redshift range of $1.6<z<2.1$. The PAH emission features
at 7.7 and 11.3 µm show pronounced variations in their profiles and peak
wavelengths. Both 7.7 and 11.3 µm PAH features have at least two components
(Peeters et al., 2002). For example, the 7.7 µm PAH feature has a blue
component at 7.6 µm and a red component at wavelength longwards of 7.7 µm.
Thus different types of PAH spectral templates potentially yield different
redshift measurements. To check this, we use two local MIR spectral templates
with different PAH profiles, an average local starburst spectrum and an
average local ULIRG spectrum to determine redshifts. Both templates yield very
similar redshifts (Table 3). The starburst template fits all objects better
with a typical 2% redshift uncertainty. EGS_b2 is identified at $z=1.59$ with
PAH emission features at 8.6 and 11.3 µm and the [Ne II] emission line at
12.81 µm. Redshift $z=2.03$ for EGS12 is confirmed by detecting $H{\alpha}$ at
1.992 µm (Figure 4) in a NIR spectrum taken with the MOIRCS spectrograph on the
Subaru telescope (Egami et al., 2008). The spectrum of EGS_b6, however, shows
two emission lines at 27.7 and 31.1 µm that we are not able to identify
consistently with any redshift. EGS_b6 is resolved into two objects 0$\farcs$7
apart in the HST ACS image (Davis et al., 2007), and an optical spectrum of
this system shows two galaxies at $z=1.02$ and $z=2.001$. We therefore omit
EGS_b6 from the sample for further analysis. The 24 µm images show several
serendipitous objects in slits of all 12 targets, most of which are too faint
to permit redshift identification. Only one source, EGS24a, has
$F_{24~{}\micron}\sim 1$ mJy. This object, found in the slit of EGS24, shows
the silicate absorption feature at $z=2.12$ (Fig. 3).
The redshift distribution of the sample (Fig. 5) is very similar to that of
the starburst-dominated ULIRGs studied by Weedman et al. (2006b), even though
our limiting flux density at 24µm is a factor of two fainter than theirs.
Recently, Farrah et al. (2007) used the same criteria to select a larger sample
in the Lockman Hole region for IRS observations and found a very similar
redshift distribution. The narrow distribution for starburst-dominated ULIRGs
is due to the selection of strong 7.7 µm PAH emission by the MIPS 24 µm band
at $z\sim 1.9$. The peak of the redshift distributions for Weedman et al.
(2006b), Farrah et al. (2007), and our sample is at this redshift, confirming
the selection effect. On the other hand, luminous 24 µm sources with power-law
SED have a much wider redshift range up to $z\sim 3$ (Houck et al., 2005; Yan
et al., 2005; Weedman et al., 2006b), but they will not pass our IRAC color
criteria or the “bump” SED criterion in Weedman et al. (2006b) and Farrah et
al. (2007).
### 3.2 PAH Emission Features in ULIRGs
The PAH features visible in the individual spectra of the sample galaxies are
even more prominent in the average spectrum for the sample, as shown in Fig.
6, which also includes stacked spectra of local starburst (Brandl et al., 2006)
and ULIRG samples for comparison. The local ULIRG sample is divided into Seyfert,
LINER, and HII sub-samples according to their optical spectral classification
(Veilleux et al., 1999). PAH emission features are found to have different profiles.
Peeters et al. (2002) classified the profiles of each PAH emission feature,
according to the peak wavelength, into 3 main classes: Class A, B, and C. PAH
emission features are known to have more than one component in each feature.
For example, the 7.7 µm PAH emission feature has two major components at 7.6
and 7.8 µm: Class A is dominated by the 7.6 µm component; Class B is dominated
by the 7.8 µm component; and Class C is dominated by a red component with the
peak shifting beyond 7.8 µm. The 7.7 µm PAH in the local starburst spectrum appears
to be more consistent with class A with the peak at wavelength shorter than
7.7 µm. All local ULIRG spectra have a typical class B PAH profile, with a red
wing extending beyond 8 µm. In §3.1, we already found that the starburst
template fits each IRS spectrum of our sample better than the ULIRG template.
It is not surprising then that the average 7.7 µm PAH profile of our sample is
more similar to the average starburst spectrum, thus consistent with the class
A. Another significant difference is that our ULIRG sample has an average
$L_{11.3\micron}/L_{7.7\micron}$333$L_{11.3\micron}$ and $L_{7.7\micron}$ are
the 11.3 and 7.7 µm PAH emission luminosities defined as $L_{PAH}=4\pi
d_{L}^{2}\int F_{PAH}(\nu)d\nu$ ratio about twice as high as local ULIRGs but
similar to local starbursts (Brandl et al., 2006). We also plot the average
spectra of Yan et al. (2005) in Figure 6 for comparison. The average spectrum
for strong PAH objects in Yan et al. (2005) is more similar to the local
Seyfert type ULIRG, implying a dominant AGN contribution in the spectra of
their sample. We conclude from IRS stacking that the 7.7 µm PAH profiles and
$L_{11.3\micron}/L_{7.7\micron}$ ratios for the present sample are more
consistent with those of local starburst galaxies rather than local ULIRGs.
PAH emission features are a tracer of star formation, one of the energy
sources powering ULIRGs (Genzel et al., 1998; Rigopoulou et al., 1999; Sajina
et al., 2007a). In order to subtract the local continuum, we adopted the
method used by Sajina et al. (2007a), fitting the $5<\lambda<15$ µm spectrum
with three components: the PAH emission features, a power-law continuum, and
the silicate absorption. An iterative fit determined the continuum for each
object. The initial input PAH template was from the NGC 7714 IRS spectrum
after subtracting its power-law continuum. The silicate absorption profile was
from Chiar & Tielens (2006) with central optical depth $\tau_{9.7}$ a free
parameter. The 7.7 and 11.3 µm PAH line luminosities and equivalent widths for
the local starburst sample (Brandl et al., 2006), the local ULIRG sample
(Armus et al., 2007), and the present sample were derived the same way. Brandl
et al. (2006) used a different method to derive the same parameters; their
method would give lower 7.7µm PAH flux densities and luminosities. This is due
to the complicated continuum at $\sim 8$µm. Our 11.3µm PAH flux densities are
consistent with theirs. Table 3 gives the results.
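The decomposition described above can be pictured as a three-component model: a PAH emission template, a power-law continuum, and a silicate absorption term. The fragment below is only an illustrative sketch of such a fit, not the authors' fitting code; `lam`, `pah_template`, and `tau_sil` are crude placeholder arrays standing in for the NGC 7714 PAH spectrum and the Chiar & Tielens (2006) opacity profile, and the model form is one common convention.

```python
import numpy as np
from scipy.optimize import curve_fit

# Rest-frame wavelength grid (um) and placeholder template shapes so the
# sketch runs end to end; real fits would load measured templates instead.
lam = np.linspace(5.0, 15.0, 200)
pah_template = (np.exp(-0.5 * ((lam - 7.7) / 0.3) ** 2)
                + 0.4 * np.exp(-0.5 * ((lam - 11.3) / 0.2) ** 2))
tau_sil = np.exp(-0.5 * ((lam - 9.7) / 1.0) ** 2)   # normalized silicate opacity shape

def mir_model(lam, a_pah, a_cont, alpha, tau97):
    """Scaled PAH template plus a power-law continuum attenuated by silicates."""
    pah = a_pah * pah_template
    cont = a_cont * (lam / 7.7) ** alpha * np.exp(-tau97 * tau_sil)
    return pah + cont

# With an observed spectrum `flux` (and errors `err`) on the same grid one would fit:
# popt, pcov = curve_fit(mir_model, lam, flux, p0=[1.0, 1.0, 2.0, 0.5], sigma=err)
```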
## 4 Multi-Wavelength Studies of ULIRGs at $z\sim 1.9$
AEGIS (Davis et al., 2007) and FIDEL (Dickinson et al., 2007) provide a rich
X-ray to radio data set to study the ULIRG SEDs. Objects in our sample are
measured at many key bands: all are detected in all four IRAC bands (Barmby et
al., 2008), all but two by AKARI at 15 µm (Im et al., 2008), and all at 1.4
GHz (Ivison et al., 2007). Most are also detected at 70 and 160 µm in the
FIDEL survey (Dickinson et al., 2007). Only two objects, EGS14 and EGS_b2, are
detected in the Chandra 200 ks X-ray imaging (Laird et al., 2008). The flux
densities in these key bands trace stellar mass, star formation rate, and AGN
activity. Objects in the present sample were also observed with MAMBO on IRAM,
and most were detected at 1.2 mm (Younger et al., 2008). Table 4 gives the
photometry, and the UV-to-radio SEDs are shown in Figure 7.
The multi-wavelength photometry permits us to compare the present sample with
the sub-millimeter galaxy population. There is a small region covered by SCUBA
in EGS by Webb et al. (2003), but no galaxies in the present sample are in the
SCUBA region. We fit FIR SEDs for the sample and predict their 850 µm flux
densities $F_{850}$ to be in the range $2.2<F_{850}<8.4$mJy (Table 4). These
values are similar to the flux densities for SMGs at the same redshifts
(Chapman et al., 2005; Pope et al., 2006, 2008). The median $F_{850}$ for this
sample is 4.5 mJy, compared with the median $F_{850}$ of 5.5 mJy for SMGs at
$1.5<z<2.2$ found by Chapman et al. (2005) and 7.5 mJy by Pope et al. (2006,
2008). In more detail, 7 out of 12 objects in the present sample have $F_{850}$
fainter than 5 mJy, while the flux densities for most SMGs in Chapman et al.
(2005) and Pope et al. (2006, 2008) are brighter than 5 mJy. We therefore
argue that this sample is part of a slightly fainter SMG population.
Optical and radio morphologies of the galaxies provide important information
on their assembly histories. HST ACS F814W imaging (Lotz et al., 2008) covers
the central 1°$\times$10′ region of the EGS. EGS 1/4/b2 are outside the ACS
coverage, but rough optical morphologies are available from Subaru $R$-band
images. Optical images of each object are presented with their SEDs in Figure
7. Most objects have irregular or clumpy morphologies in the rest-frame
$2000~{}\hbox{\AA}<\lambda<3000~{}\hbox{\AA}$ bands with a typical size of
1$\farcs$5, suggesting extended star formation in a region with a size of
about 13 kpc. The 1.4 GHz radio imaging of EGS has a mean circular beam width
of $\sim$3$\farcs$8 FWHM (Ivison et al., 2007) and is unable to resolve
morphologies except in a few cases. EGS 23 and 24 show elongated radio
morphologies aligned with their optical extent, indicating that the radio and
rest-frame UV light are from the same extended star formation regions in both
cases.
The spatial distribution of the stellar population is traced by the rest-frame
optical imaging. Windhorst et al. (2002), Papovich et al. (2005), and
Conselice et al. (2005) have argued that UV-dominated star-forming galaxies at
high redshifts have similar morphologies in the rest-frame UV and optical
bands. One outstanding property of galaxies in the present sample is their
extremely red optical-NIR color. Seven objects in the sample have observed
$(R-K)_{\rm Vega}>5$, qualifying them as Extremely Red Objects (ERO). EGS4 is
the reddest with $(R-K)_{\rm Vega}=6.8$. Red colors like these are common
among distant ULIRGs; examples include ERO J164502+4626.4 (=[HR94] 10 or
sometimes “HR 10”) at $z=1.44$ (Elbaz et al., 2002) and CFRS 14.1157 at
$z=1.15$ (Le Floc’h et al., 2007). EROs are commonly seen as counterparts to
SMGs (Smail et al., 1999; Frayer et al., 2004). The red optical-NIR colors,
corresponding to rest $NUV-R$ for our sample, indicate either dust extinction
in these objects or high stellar mass. The stellar population modeling in the
next paragraph suggests objects in our sample have both heavy dust extinction
and high stellar masses. The heavy dust extinction seems hard to reconcile
with the objects being detected in the ACS F606W and F814W bands, which probe the
rest-frame 1800-2600 Å for galaxies at $z\sim 2$. The irregular and clumpy
morphologies in Figure 7 suggest highly non-uniform dust extinction in the objects in
our sample. Only two objects are undetected in the deep ACS F814W image,
probably due to a higher column density of dust in a more compact stellar and gas
distribution.
### 4.1 Stellar Population and Mass in ULIRGs
Stellar population modeling provides a way of determining physical parameters
from the observational data, but it is very difficult to model stellar
populations in ULIRGs. Tacconi et al. (2002) measured dynamical masses for a
sample of local ULIRGs with NIR spectroscopy and found stellar masses in the
range of $3\times 10^{10}M_{\odot}<M_{*}<2.4\times 10^{11}M_{\odot}$ with a
mean of $1.1\times 10^{11}M_{\odot}$. Their local sample has a mean absolute
K-band magnitude of $M_{K}=-25.8$ after adopting a dust correction of
$A_{K}=0.7$ mag. The IRAC 8 µm flux densities of this sample correspond to a
similar K-band magnitude range with $<M_{K}>=-25.7$ if the same dust
correction is used. This suggests a similar mass range for our sample,
$M_{*}\sim 10^{11}\mbox{$M_{\sun}$}$.
ULIRGs have a burst star formation history, very young stellar populations,
and non-uniform dust distribution, all of which can introduce large
uncertainties in modeling their stellar populations. On the other hand,
stellar masses are the most robust property against variations in star
formation history, metallicities, and extinction law in modeling stellar
population (Förster Schreiber et al., 2004). We perform a stellar population
analysis on the present sample, mainly to measure their stellar masses. We fit
galaxy SEDs with Bruzual & Charlot (2003, hereafter BC03) stellar
population models with a Salpeter IMF and a constant star formation rate.
Several groups (Shapley et al., 2001; van Dokkum et al., 2004; Rigopoulou et
al., 2006; Lai et al., 2007) have argued that a constant star formation rate
provides a reasonable description of stellar population evolution for galaxies
with ongoing star formation at high redshifts, such as LBGs, Lyman-alpha
emitters (LAEs), and DRGs. The stellar population age, dust reddening
$E(B-V)$, stellar mass, and derived star formation rate from the model fitting
are listed in Table 5, and the model SED fits are shown in Figure 7. Objects
in this sample have estimated stellar masses with $M_{*}>10^{11}$ $M_{\sun}$,
similar to values found for local ULIRGs (Tacconi et al., 2002), DRGs, and BzK
galaxies (Labbe et al., 2005; Daddi et al., 2007a).
### 4.2 Total Infrared Luminosity and Star Formation Rate
Two of the most popular methods of estimating star formation rates of local
galaxies use total infrared luminosity $L_{IR}$ and radio luminosity
$L_{1.4GHz}$ (Condon, 1992; Kennicutt, 1998). The validity of these methods
needs to be established at high redshift. Most objects in the present sample
are detected at 70 µm, 160 µm, and 1.2 mm, permitting a direct measurement of
total infrared luminosity $L_{IR}$ (Papovich et al., 2007). In practice, we
derived $L_{IR}$ by fitting SED templates (Chary & Elbaz, 2001) to the
observed 70 µm, 160 µm, and 1.2 mm flux densities (Fig. 7). All galaxies in
the sample have $L_{IR}>10^{12}$ $L_{\sun}$ (Table 4), qualifying them as
ULIRGs. EGS14 and EGS21 have $L_{IR}>10^{13}$ $L_{\sun}$ and are thus
HyperLIRGs. All sample galaxies are also detected at 1.4 GHz (Ivison et al.,
2007). We will be able to verify: 1) whether $L_{1.4GHz}$ is correlated with
$L_{IR}$ for ULIRGs at $z\sim 2$; and 2) whether such a correlation at high
redshifts is consistent with the local one (Condon, 1992). Figure 8 plots the
radio luminosity $L_{1.4GHz}$ vs. $L_{IR}$ for this sample and a variety of local
starburst and ULIRG samples. The FIR-radio ratios q for this sample444 $q={\rm
log}\left(\frac{F_{FIR}}{3.75\times 10^{12}\,{\rm W\,m^{-2}}}\right)-{\rm
log}\left(\frac{F_{1.4GHz}}{{\rm W\,m^{-2}\,Hz^{-1}}}\right)$, as defined by Condon (1992),
are given in Table 4, with a mean $<q>=2.19\pm 0.20$. Kovacs et al. (2006) measured
$L_{IR}$ using 350 µm, 850 µm, and 1.2 mm flux densities and obtained a mean
$<q>=2.07\pm 0.3$ for SMGs at $1<z<3$. Both measurements yield q for ULIRGs at
$z\sim 2$ close to, but smaller than, the local value q=2.36. Sajina et al. (2008) showed more
clearly a trend in their AGN-dominated sample at $z\sim 2$: sources with
strong PAH emission have q in the range $1.6<q<2.36$, while all power-law sources
have $q<1.6$. Normally, a radio excess is due to non-thermal emission from AGNs, but
galaxy merging can also enhance the non-thermal synchrotron radiation (Condon,
1992). Merging processes are evident in our sample. We will argue in the following
paragraphs that AGN activity may exist in most objects in the sample. In
fact, two X-ray sources, EGS14 and EGS_b2, and the serendipitous power-law
source EGS24a show a higher radio excess (lower q) than the rest of the objects in the
sample. The two scenarios could be differentiated by their radio morphologies: AGNs
are point sources, while mergers, in most cases, are extended sources. Currently
we cannot determine which scenario is responsible for the radio excess due to
the low resolution of the 1.4 GHz radio images (Figure 7).
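For reference, the q parameter of footnote 4 can be evaluated directly from the integrated FIR flux and the 1.4 GHz flux density in SI units. A minimal sketch; the numerical values in the example are hypothetical but representative of a $z\sim 2$ ULIRG in this sample.

```python
import math

def q_fir_radio(f_fir_w_m2, f_14ghz_w_m2_hz):
    """Condon (1992) FIR-radio ratio:
    q = log10[(F_FIR / 3.75e12 W m^-2) / (F_1.4GHz / W m^-2 Hz^-1)]."""
    return math.log10(f_fir_w_m2 / 3.75e12) - math.log10(f_14ghz_w_m2_hz)

# Hypothetical example: F_FIR = 5e-16 W m^-2 and F_1.4GHz = 80 uJy = 8e-31 W m^-2 Hz^-1
print(round(q_fir_radio(5.0e-16, 8.0e-31), 2))   # ~2.2, close to the sample mean <q> = 2.19
```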
Another measure used to estimate $L_{IR}$ for local galaxies is the IRAC 8 µm
luminosity, $L_{8\micron}$, though there is considerable debate about how
reliable this method is. $L_{8\micron}$ is defined as $L_{8\micron}=4\pi
d_{L}^{2}(\nu F_{\nu})_{8\micron}$ where $F_{\nu}$ is the rest frame IRAC 8 µm
flux density (Huang et al., 2007b). $L_{8\micron}$ is found to be correlated
with $L_{IR}$ for local galaxies (Wu et al., 2005). The MIPS 24 µm band
directly measures the rest IRAC 8 µm flux densities for our sample. A galaxy’s
8 µm flux density actually has two components (aside from starlight, which can
be subtracted if necessary): the 7.7 µm PAH emission feature complex and a
featureless continuum, coming from an AGN or warm dust in the interstellar
medium. There are several models for the IR emission from galaxies, which
convert $L_{8\micron}$ to $L_{IR}$ (Chary & Elbaz, 2001; Dale & Helou, 2002,
hereafter CE01 and DH02). Empirically, Wu et al. (2005) and Bavouzet et al.
(2008) found a correlation between $L_{8\micron}$ and both $L_{1.4GHz}$ and
$L_{IR}$ for star-forming galaxies. At the high-luminosity end, local ULIRGs
deviate from this correlation with higher $L_{8\micron}$/$L_{IR}$ ratios; such
a trend was also seen by Rigby et al. (2008).
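As a concrete illustration of the $L_{8\micron}$ definition above, the rest-frame 8 µm luminosity can be computed from the observed MIPS 24 µm flux density (which samples rest-frame 8 µm at $z\approx 2$) and the luminosity distance. Below is a minimal sketch using astropy with the cosmology adopted in this paper; the flux value and function name are hypothetical.

```python
import numpy as np
import astropy.units as u
from astropy.constants import c, L_sun
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this paper

def l8_from_f24(f24_mjy, z):
    """Rest-frame 8 um luminosity nu*L_nu, in L_sun, from the observed MIPS 24 um
    flux density. At z ~ 2 the 24 um band samples rest-frame ~8 um, and
    nu L_nu(rest 8 um) = 4 pi d_L^2 (nu F_nu)(obs 24 um); the (1+z) factors cancel."""
    d_l = cosmo.luminosity_distance(z)
    nu_f_nu = (c / (24.0 * u.micron)) * (f24_mjy * u.mJy)
    l8 = 4.0 * np.pi * d_l ** 2 * nu_f_nu
    return (l8 / L_sun).decompose().value

print(f"{l8_from_f24(0.5, 1.9):.1e}")   # roughly 4e11 L_sun for a 0.5 mJy source at z = 1.9
```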
Figure 9 shows a correlation between $L_{8\micron}$ and $L_{IR}$ for all
populations. However, the $L_{8\micron}-L_{IR}$ relation for objects in our
sample and the local ULIRGs with high $L_{IR}$ has a higher offset than that
for the local starburst galaxies and the model prediction (Chary & Elbaz,
2001; Dale & Helou, 2002). This indicates that, for a given $L_{IR}$,
$L_{8\micron}$ for objects in our sample and some of local ULIRGs is higher
than the model prediction. Thus objects in our sample have an 8 µm excess
compared with the CE01 and DH02 model predictions. The empirical
$L_{8\micron}-L_{IR}$ relation of Bavouzet et al. (2008), derived with samples
at various redshifts, matches local starburst galaxies but does not match
ULIRGs and HyperLIRGs. The $L_{8\micron}-L_{IR}$ relation
for our sample permits us to estimate $L_{IR}$ for the same type of objects with only
24 µm flux densities.
Our IRS spectra can be used to separate the PAH from continuum in the (rest) 8
µm band, and each component’s contribution to $L_{8\micron}$ can be measured.
PAH luminosity is thought to be a generally good tracer of star formation
rate, but the $L_{7.7\micron}/L_{IR}$ ratio is known to be luminosity-
dependent, decreasing at high luminosity (Rigopoulou et al., 1999; Desai et
al., 2007; Shi et al., 2007). Figure 10 shows $L_{7.7\micron}/L_{IR}$ versus
$L_{IR}$. In this diagram, each population is well separated from the others.
The average $L_{7.7\micron}/L_{IR}$ ratio for local ULIRGs is seen to be lower
than for local starburst galaxies. The HyperLIRGs in Yan et al. (2005) and
Sajina et al. (2007a) have the lowest $L_{7.7\micron}/L_{IR}$ ratio. In
contrast, the present sample has the highest $L_{PAH}/L_{IR}$ ratio, and the
trend is the same for the 11.3 µm PAH feature (Fig. 11). Objects with such a
high PAH luminosity have neither been found locally nor in the MIPS 24µm
luminous sample at $z\sim 2$ (Yan et al., 2005; Houck et al., 2005).
Starburst galaxies were expected to have the highest $L_{PAH}/L_{IR}$, and
$L_{PAH}/L_{IR}$ was seen to decrease with increasing $L_{IR}$. Our sample
shows a new ULIRG population with much higher PAH emissions at 7.7 and 11.3
µm. We argue that the high $L_{PAH}/L_{IR}$ ratio for our sample is generally
compatible to extrapolation from the $L_{PAH}/L_{IR}-L_{IR}$ relation for
starburst galaxies. Both $L_{7.7\micron}$ and $L_{11.3\micron}$ for local
starburst galaxies are strongly correlated with $L_{IR}$ in Fig. 10 and Fig.
11. We fit both data sets and obtain the following relations:
$L_{IR}\propto(L_{7.7\micron})^{0.69}$ and
$L_{IR}\propto(L_{11.3\micron})^{0.75}$. Both relations convert to
$L_{7.7\micron}/L_{IR}\propto(L_{IR})^{0.45}$ and
$L_{11.3\micron}/L_{IR}\propto(L_{IR})^{0.33}$ respectively, as plotted in
Fig. 10 and Fig. 11. The $L_{PAH}$-$L_{IR}$ relation for local starbursts
predicts a higher $L_{PAH}/L_{IR}$ ratio in the $L_{IR}$ range of our
sample. Compared with other ULIRG populations, our sample has a high
$L_{PAH}/L_{IR}$ ratio that is close to this extrapolation, indicating starburst
domination. The remaining PAH deficit most likely implies the existence of AGNs
in our sample, though strong UV radiation from intense star-forming regions can
also destroy PAH.
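The conversion from the fitted relations to the plotted ratio trends is simple algebra: if $L_{IR}\propto L_{PAH}^{\,b}$, then $L_{PAH}\propto L_{IR}^{1/b}$ and $L_{PAH}/L_{IR}\propto L_{IR}^{1/b-1}$. A quick check of the exponents quoted above:

```python
for band, b in [("7.7 um", 0.69), ("11.3 um", 0.75)]:
    # L_IR ~ L_PAH^b  =>  L_PAH ~ L_IR^(1/b)  =>  L_PAH/L_IR ~ L_IR^(1/b - 1)
    print(f"{band}: L_PAH/L_IR ~ L_IR^{1.0 / b - 1.0:.2f}")
# prints exponents 0.45 and 0.33, matching the relations quoted in the text
```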
The MIR spectral properties and $L_{PAH}/L_{IR}$ of our sample are closer to
local starburst galaxies, even though their $L_{IR}$ differs by 2 orders of
magnitude. Farrah et al. (2007) reached the same conclusion by comparing
silicate absorption strength for their sample with those for local ULIRG and
starburst galaxies, and they propose six possible scenarios to explain the
similarity between high redshift ULIRGs and local starburst galaxies. Our
multi-wavelength data set provides further constraints on the physical properties of
our sample. The ACS I-band images (Figure 7) show multi-clumpy morphologies
extended to $>10$ kpc in size for most objects in our sample. At $z\sim 2$, the
observed I band probes the rest-frame NUV band and thus is sensitive to star formation.
Local ULIRGs, however, have much more compact morphologies in the GALEX NUV
images555The GALEX UV morphologies for local starburst galaxies and local
ULIRGs are from http://galex.stsci.edu/GalexView/. The extended morphologies of
our sample support both the gas-rich merging and the starburst geometry scenarios
proposed by Farrah et al. (2007). In this scenario, the silicate dust column
density is reduced after the star formation region is stretched to a large scale
during merging. The extended morphologies in the rest-frame NUV indicate extended
star formation in our sample, and thus an extended distribution of PAH emission. In
such an extended distribution, more PAH can survive the strong UV radiation
field from a central AGN than in a compact distribution. This scenario
thus explains the higher $L_{PAH}/L_{IR}$ in our sample compared to local ULIRGs.
Star forming galaxies at $z\sim 2$ are found to generally have much less dust
extinction than their local counterparts. Reddy et al. (2006) found that there
is a correlation between $L_{IR}/L_{1600}$ and $L_{1600}+L_{IR}$ for star
forming galaxies at $z\sim 2$, where $L_{1600}$ is the monochromatic luminosity
at 1600 Å. This correlation has a higher offset than the local relation,
indicating less dust extinction along the line of sight for galaxies at $z\sim
2$. Most objects in our sample lie on the
$L_{IR}/L_{1600}$-$L_{1600}+L_{IR}$ relation for galaxies at $z\sim 2$ (Figure
12). Reddy et al. (2006) argued that the dust distribution and star formation
regions are more compact in local galaxies. We argue that the lower surface
density of dust and the extended star formation regions with high SFR
permit us to detect both UV and PAH emission from most objects in our sample.
The star formation rate for a galaxy can be estimated from its FIR and
ultraviolet emission. Specifically, SFR is given as (Kennicutt, 1998; Bell et
al., 2005)
$SFR/(M_{\sun}\,{\rm yr}^{-1})=C\times(L_{IR}+3.3L_{280})/L_{\sun},$ (3)
where $L_{280}$ is the monochromatic luminosity (uncorrected for dust
extinction) at rest frame 280 nm (Wolf et al., 2005). The constant C is
$1.8\times 10^{-10}$ for the Salpeter IMF (Kennicutt, 1998), and $9.8\times
10^{-11}$ for the Kroupa IMF (Bell et al., 2005). In the following, we
adopt the Salpeter IMF for all $L_{IR}$-SFR conversions in this paper; the SFR
would be reduced by a factor of $\sim$2 if we switched to the Kroupa IMF (Bell et
al., 2005). The 280 nm band shifts to the observed $I$ band at $z\approx 2$.
$L_{280}$ was calculated from the ACS F814W magnitude if available or
otherwise the CFHT $I$ magnitude. All objects in our sample have $L_{280}$ in
the range $5\times 10^{8}<L_{280}<3\times 10^{10}\mbox{$L_{\sun}$}$, less than
1% of their $L_{IR}$. The star formation rate seen at rest-frame 280 nm is at
most 20 $M_{\sun}$ yr-1, and most UV light produced by newborn stars is
absorbed by dust and re-emitted in the infrared. Thus we omit the $L_{280}$
contribution in our SFR calculation.
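A minimal numerical sketch of Eq. 3 with the Salpeter calibration illustrates why the $L_{280}$ term is negligible here; the luminosities used in the example are hypothetical but within the ranges quoted above.

```python
def sfr_salpeter(l_ir_lsun, l_280_lsun=0.0, C=1.8e-10):
    """Eq. 3: SFR [Msun/yr] = C * (L_IR + 3.3 * L_280) / L_sun (Kennicutt 1998)."""
    return C * (l_ir_lsun + 3.3 * l_280_lsun)

# e.g. L_IR = 5e12 L_sun and L_280 = 1e10 L_sun
print(sfr_salpeter(5e12, 1e10))   # ~906 Msun/yr; the L_280 term adds only ~6
print(sfr_salpeter(5e12))         # ~900 Msun/yr, essentially unchanged
```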
Total infrared luminosity, $L_{IR}$, of ULIRGs may be partly powered by AGNs
(Nardini et al., 2008), thus using $L_{IR}$ may over-estimate their SFR. The
PAH emission only traces star formation, and is free of AGN contamination. We
calculate SFR for our sample with their $L_{PAH}$ using the $L_{PAH}-{\rm
SFR}$ relation, established from local starburst galaxies shown in Figure 10
and Figure 11. Results are given in Table 5. Star formation rates for our
sample converted from $L_{IR}$ using Equation 3 are much higher, with an
average ${\rm SFR}\sim 1000$ $M_{\sun}$ yr-1. $L_{7.7\micron}$ and
$L_{11.3\micron}$ (Table 5) give smaller star formation rates, in the range
$150<{\rm SFR}<600$ $M_{\sun}$ yr-1 for most objects, which are quite
consistent with the stellar population modeling results. The discrepancy
between the two star formation estimates may be due to: (1) part of the star
formation occurring in regions with no PAH, so that $L_{PAH}$ underestimates the SFR;
or (2) $L_{IR}$ containing an AGN contribution, so that it overestimates the SFR. It is very
possible that both happen in one object simultaneously, namely its AGN
destroys PAH in the surrounding area where star formation occurs. This would
further increase the discrepancy, so the real SFR should lie between the two
estimates.
Our sample has both high star formation rates and high stellar masses,
supporting galaxy formation in the “downsizing” mode. The star formation
rates and stellar masses for our sample are consistent with the SFR-stellar
mass relation obtained from BzK galaxies at $z\sim 2$ (Figure 13). Daddi et
al. (2007a) showed that simulated galaxy populations taken from the
Millennium Simulation lightcones of Kitzbichler & White (2007) and Cattaneo
et al. (2008) fail to reproduce the SFR-stellar mass relation at $z=2$ and thus
underestimate the number of ULIRGs at $z\sim 2$.
It has long been anticipated that ULIRGs make a dominant contribution to the
total infrared luminosity density, and thus the star formation rate density, at $z\sim
2$ (Le Floc’h et al., 2005). We use the $V_{max}$ method to calculate the
total infrared luminosity density for our sample to be $2.6\times
10^{7}L_{\odot}/Mpc^{3}$. The sample of Farrah et al. (2007) with the same
limiting flux yields a density of $8.8\times 10^{6}L_{\odot}/Mpc^{3}$. We
argue that the difference is due to cosmic variance, because these objects
are massive galaxies and thus have a much stronger spatial correlation. Both
densities are lower than the ULIRG $L_{IR}$ density at $z\sim 1$, $\sim
10^{8}L_{\odot}/Mpc^{3}$ for all ULIRGs (Le Floc’h et al., 2005). Most objects
in our sample and those of Farrah et al. (2007) have $L_{IR}>5\times
10^{12}L_{\odot}$; however, the major contribution to the $L_{IR}$ density at $z\sim 2$
comes from ULIRGs with $10^{12}<L_{IR}<5\times 10^{12}L_{\odot}$ (Huang et
al., 2009).
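The luminosity-density estimate uses the standard $1/V_{max}$ weighting: each galaxy contributes $L_{IR}/V_{max}$, where $V_{max}$ is the comoving volume over which it would still satisfy the survey limits (the 24 µm flux cut and the redshift window imposed by the color selection). Below is a schematic sketch only, assuming $z_{max}$ for each object has already been determined from its SED and the flux limit; the example numbers are hypothetical.

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
FULL_SKY_DEG2 = 41253.0

def lir_density(objects, area_deg2, z_min=1.5):
    """1/V_max estimate of the L_IR density, in L_sun / Mpc^3.
    `objects` is a list of (L_IR [L_sun], z_max) pairs, where z_max is the
    smaller of the redshift at which the source would drop below the flux
    limit and the upper edge of the selection window."""
    sky_frac = area_deg2 / FULL_SKY_DEG2
    rho = 0.0
    for l_ir, z_max in objects:
        v_max = (cosmo.comoving_volume(z_max)
                 - cosmo.comoving_volume(z_min)).to(u.Mpc ** 3).value
        rho += l_ir / (sky_frac * v_max)
    return rho

# Hypothetical example: two galaxies of 5e12 L_sun with z_max = 2.5 in a 0.33 deg^2 field
print(f"{lir_density([(5e12, 2.5), (5e12, 2.5)], area_deg2=0.33):.1e}")
```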
### 4.3 AGN in the $z\sim 1.9$ ULIRG sample
One direct way of identifying an object as an AGN is to measure its X-ray
luminosity. Two objects in the sample, EGS14 and EGS_b2, are in the main
AEGIS-X catalog from the Chandra 200 ks images (Nandra et al., 2007; Laird et
al., 2008). Their X-ray fluxes $F_{0.5-10~{}keV}$ are $1.2\times 10^{-15}$ and
$6.4\times 10^{-15}$ erg cm-2 s-1 respectively. Calculated X-ray luminosities
$L_{X}$ (Nandra et al., 2007; Georgakakis et al., 2007) are $1.0\times
10^{43}$ erg s-1 for EGS14 and $9.4\times 10^{43}$ erg s-1 for EGS_b2.
Hardness ratios are 0.45 and -0.30, respectively. Therefore EGS14 is a type 2
(obscured) AGN, and EGS_b2 is very close to a type 1 (unobscured) QSO
according to the X-ray luminosity and hardness ratios (Szokoly et al., 2004).
In addition to EGS14 and EGS_b2, EGS1 has a low-significance X-ray
counterpart. At the location of this source there were 6.5 net soft band
counts (10 counts total with an estimated 3.5 count background). This gives a
Poisson probability of a false detection of $3.5\times 10^{-3}$. The source
was not detected in the hard band. If the detection is real, EGS1 has
$F_{0.5-2~{}keV}$ $=3.0\times 10^{-16}$ erg cm-2 s-1 with $L_{2-10~{}keV}$
$=1.1\times 10^{43}$ erg s-1, and thus qualifies as an AGN.
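The false-detection probability quoted for EGS1 follows from Poisson statistics on the aperture counts: the chance that a background expectation of 3.5 counts fluctuates up to the 10 total counts observed. A quick check (the exact value depends on the adopted background):

```python
from scipy.stats import poisson

# P(N >= 10) when the background expectation in the aperture is 3.5 counts
p_false = poisson.sf(9, mu=3.5)
print(f"{p_false:.1e}")   # ~3e-3, comparable to the 3.5e-3 quoted in the text
```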
The remaining 10 ULIRGs are not detected in the current Chandra observation.
Stacking in the soft band gives 19.5 counts above an average background of
9.85, corresponding to $F_{0.5-2~{}keV}=3.8\times 10^{-17}$ erg cm-2 s-1 or
$L_{X}=1.1\times 10^{42}$ erg s-1 at 2$\sigma$ significance. There was no
detection in the hard band. Even if EGS1 is added to the stacking, nothing
shows up in the hard band, but the soft band detection significance rises to
3.2$\sigma$. The mean flux is $F_{0.5-2~{}keV}=4.7\times 10^{-17}$ erg cm-2
s-1 or $L_{X}=1.3\times 10^{42}$ erg s-1. This average X-ray luminosity
represents either a very weak AGN or strong star formation. Using the relation
of Ranalli et al. (2003), this average X-ray luminosity corresponds to a star
formation rate of 220 $M_{\sun}$/yr, consistent with the SED and PAH
estimation. However, we argue that the stacked X-ray signal comes from central
point sources. These objects have very extended and elongated morphologies in
the rest-frame NUV band. If the X-ray photons are from these star formation
regions, stacking would not yield any signal unless they are aligned.
Emission in the rest 3–6 µm wavelength range is another indicator of AGN
activity (Carleton et al., 1987; Shi et al., 2007; Alonso-Herrero et al.,
2006; Hines et al., 2006; Jiang et al., 2006; Shi et al., 2007). The longer
end of that range, which has minimal stellar and PAH emission contamination,
is ideal for detecting what is nowadays thought to be hot dust emission
closely related to the AGN accretion disk. Luminosity at these wavelengths
($L_{5}$) can be converted to $L_{IR}$ for QSOs with the QSO SED templates
(Elvis et al., 1994). AKARI 15 µm photometry (Im et al., 2008) provides the
best measurement of $L_{5}$ for our sample. All galaxies within the AKARI
coverage are detected except EGS26, for which the 3$\sigma$ limiting flux
density is $F_{15}<58$ $\mu$Jy (Im et al., 2008). The AKARI 15 µm band is wide
enough to include the 6.2 µm PAH feature for objects with $1.5<z<2.2$, but
this feature is much weaker than the 7.7 µm feature. Thus the AKARI 15 µm band
is a better measure of AGN emission than the MIPS 24 µm band.
In fact, the $F_{15}/F_{24}$ ratio for our sample measures the continuum-to-
PAH ratio, and thus the AGN fraction. Figure 14 shows this ratio versus
redshift. The ratios for the two known AGNs with AKARI coverage, EGS14 and
EGS24a, are very close to the expected values for Seyfert 2’s. EGS11 and EGS12
are similar to expectations for H II-type ULIRGs. The flux ratios for the
remaining objects in our sample show even more PAH than starbursts, indicating
starburst domination in these objects. SMGs (Pope et al., 2008) have very
similar $F_{15}/F_{24}$ ratios to objects in the present sample, implying that
the two samples share the same properties. The SMGs also show very strong PAH
features in their IRS spectra. This supports the argument that most objects in
the present sample are part of a SMG population, and are starburst dominated
ULIRGs.
A starburst dominated ULIRG can still have a deeply dust-obscured AGN. Many
current theoretical models (e.g., Mihos & Hernquist, 1994, 1996; Dubinski et
al., 1999; Cox, Jonsson, Primack, & Somerville, 2006; Hopkins et al., 2006)
suggest that such a dust-obscured AGN can have a significant contribution to
$L_{IR}$ of a ULIRG. A study of local ULIRG IRS spectra shows that on average 15%
of $L_{IR}$ comes from central dust-obscured AGNs (Nardini et al., 2008). Nardini
et al. (2008) argued that ULIRG luminosity at $5<\lambda<6$ µm is dominated by
hot dust emission from AGNs. Most objects in the present sample are detected
at 15 µm, permitting measurement of their rest-frame 5 µm luminosities, $L_{5\micron}$,
which trace AGN activity. $L_{5\micron}$ for the present sample is in the range
$9.9<\log(L_{5\micron}/L_{\odot})<12.6$ (Table 4). Using
$(L_{IR}/L_{5\micron})_{QSO}=22.8$ from the Elvis et al. (1994) QSO SED, we
calculate that such a QSO contribution is about 14% of $L_{IR}$ for objects in
our sample, consistent with that for local ULIRGs.
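The AGN contribution estimate at the end of this subsection is a one-line scaling: the rest-frame 5 µm luminosity is multiplied by the Elvis et al. (1994) QSO ratio $(L_{IR}/L_{5\micron})_{QSO}=22.8$ and compared with the measured $L_{IR}$. A sketch with hypothetical values chosen within the Table 4 ranges:

```python
def agn_fraction(l5_lsun, l_ir_lsun, qso_ratio=22.8):
    """Fraction of L_IR attributable to an AGN, assuming all of L_5um is
    AGN-heated hot dust with an Elvis et al. (1994) QSO SED."""
    return qso_ratio * l5_lsun / l_ir_lsun

# e.g. L_5um = 3e10 L_sun and L_IR = 5e12 L_sun (hypothetical)
print(f"{agn_fraction(3e10, 5e12):.0%}")   # ~14%
```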
## 5 Summary
The results for the present sample combined with others in Table 1 show that
high-redshift ULIRGs have a diverse range of properties, and different
selection criteria pick out different populations. The combination of IRAC
colors and MIPS 24 µm flux used here selects ULIRGs with strong 7.7 µm PAH in
a rather narrow redshift range around $z\simeq 1.9$. This sample shows a
starburst-dominated stage in gas-rich, merger-powered ULIRGs at $z\sim 2$. In
this stage, intense star formation occurs in a much more extended region, with a
typical scale of $\sim 15$ kpc indicated by their ACS morphologies. Objects in
this sample have higher total infrared luminosities than local ULIRGs, yet the
$L_{PAH}/L_{IR}$ ratios for the sample are also higher than those of local ULIRGs.
We argue that the high $L_{PAH}/L_{IR}$ ratio is due to the extended PAH
distribution, which is less affected by strong UV emission from central AGNs.
Most objects follow the same $L_{IR}/L_{1600}$-$L_{bol}$ relation as
BM/BX, DRG, and BzK galaxies, though they lie at the higher-luminosity end.
Stellar masses in this sample already exceed $10^{11}$ $M_{\sun}$. Most stars
must have formed prior to this stage. The SFR-stellar-mass relation for this
sample is also consistent with that for the other populations at $z\sim 2$,
which is much higher than the theoretical model prediction.
Only a few of the ULIRGs in our sample show direct evidence for AGNs, with
either high X-ray luminosities or hot dust emission in the mid-infrared.
Several pieces of evidence suggest that weak AGNs exist in this starburst-dominated
ULIRG sample: a systematically higher $L_{1.4GHz}/L_{IR}$ ratio than the local
radio-FIR relation, and an average X-ray emission of $L_{X}=1.3\times 10^{42}$
erg s-1 from point sources. AGNs contribute on average $\sim$15% of the total infrared
luminosity for our sample.
This sample presents an early stage with very intense star formation but
weak or heavily obscured AGNs. ULIRGs in other samples at similar redshifts but
with different selection methods (Yan et al., 2005; Sajina et al., 2007a) have
higher total infrared luminosities and lower PAH luminosities, indicating
increasing AGN and decreasing star formation at higher $L_{IR}$.
This work is based in part on observations made with the Spitzer Space
Telescope, which is operated by the Jet Propulsion Laboratory, California
Institute of Technology under a contract with NASA. Support for this work was
provided by NASA through an award issued by JPL/Caltech. Facilities: Spitzer
## References
* Alexander et al. (2005) Alexander, D., et al. 2005, ApJ, 632, 736
* Alonso-Herrero et al. (2006) Alonso-Herrero, A., et al. 2006, ApJ, 640, 167
* Armus et al. (2007) Armus, L., et al. 2007, ApJ, 656 , 148
* Ashby et al. (2008) Ashby, M., et al. 2008, in preparation
* Barmby et al. (2008) Barmby, P., et al. 2008, ApJ, in press
* Barger et al. (1998) Barger, A., et al. 1998, Nature, 394, 248
* Barnes et al. (1996) Barnes, J., & Hernquist, L. 1996, ApJ, 471, 115
* Bavouzet et al. (2008) Bavouzet, N., et al. 2008, A&A, 479, 83
* Bell et al. (2005) Bell, E., et. al. 2005, ApJ, 625, 23
* Beirao et al. (2006) Beirao, P., et al. 2006, ApJ, 643, 1
* Blumenthal et al. (1984) Blumenthal, G., et al. 1984, Nature, 311, 517
* Brandl et al. (2006) Brandl, B., et al. 2006, ApJ, 653, 1129
* Bruzual & Charlot et al. (2003) Bruzual, G. & Charlot, S. 2003, MNRAS, 344, 1000
* Carilli & Yun (2000) Carilli, C. L., & Yun, M. S. 2000, ApJ, 539, 1024
* Carleton et al. (1987) Carleton, N. P., Elvis, M., Fabbiano, G., Willner, S. P., Lawrence, A., & Ward, M. 1987, ApJ, 318, 595
* Cattaneo et al. (2008) Cattaneo, A., et al. 2008, MNRAS, 389, 567
* Chapman et al. (2002) Chapman, S., et al. 2002, ApJ, 570, 557
* Chapman et al. (2003) Chapman, S., et al. 2003, Nature,422,695
* Chapman et al. (2005) Chapman, S., et al. 2005, ApJ, 622,772
* Chary & Elbaz (2001) Chary, R. & Elbaz, D. 2001, ApJ, 556, 562
* Chiar & Tielens (2006) Chiar, J. E., & Tielens, A. G. G. M. 2006, ApJ, 637, 774
* Cole et al. (2000) Cole, S., et al. 2000, MNRAS, 319, 168
* Condon (1992) Condon, J. J. 1992, ARA&A, 30, 575
* Conselice et al. (2005) Conselice, C., et al. 2005, ApJ, 620, 564
* Conselice et al. (2006) Conselice, C., et al. 2006, ApJ, 660, 55
* Conselice et al. (2007) Conselice, C., et al. 2007a, ApJ, 660, 55
* Conselice (2007) Conselice, C. 2007b, ApJ, 638, 686
* Cox, Jonsson, Primack, & Somerville (2006) Cox, T. J., Jonsson, P., Primack, J. R., & Somerville, R. S. 2006, MNRAS, 373, 1013
* Croton et al. (2006) Croton, D. J., et al. 2006, MNRAS, 365, 11
* Daddi et al. (2005) Daddi, E., et al. 2005, ApJ, 631, L13
* Daddi et al. (2007a) Daddi, E., et al. 2007a, ApJ, 670, 156
* Daddi et al. (2007b) Daddi, E., et al. 2007b, ApJ, 670, 173
* Dale & Helou (2002) Dale, D. & Helou, G. 2002, ApJ, 576, 159
* Davis et al. (2007) Davis, M., et al. 2007, ApJ, 660, 1
* Desai et al. (2007) Desai, V., et al. 2007, ApJ, 669, 810
* Dey et al. (2008) Dey, A., et al. 2008, ApJ, 677, 943
* Dickinson et al. (2007) Dickinson, M., et al. 2007, BAAS, 211, 5216
* van Dokkum et al. (2004) van Dokkum, P., et al. 2004, ApJ, 611,703
* Dubinski et al. (1999) Dubinski, J., Mihos, C., & Henquest, L. 1999, ApJ, 526, 607
* Egami et al. (2008) Egami, E., et al. 2008, in preparation
* Elvis et al. (1994) Elvis, M., et al. 1994, ApJS, 95, 1
* Eggen et al. (1962) Eggen, O. J., et al. 1962, ApJ, 136, 748
* Elbaz et al. (2002) Elbaz, D., et al. 2002, A&A, 381, 1
* Elbaz et al. (2007) Elbaz, D., et al. 2007, A&A, 468, 33
* Farrah et al. (2007) Farrah, D., et al. 2007, ApJ, 667, 149
* Farrah et al. (2008) Farrah, D., et al. 2008, ApJ, 677, 957
* Förster Schreiber et al. (2004) Förster Schreiber, N. M., et al. 2004, ApJ, 616, 40
* Franx et al. (2003) Franx, M., et al. 2003, ApJ, 587,79
* Frayer et al. (2004) Frayer, D., et al. 2004, AJ, 127, 728
* Genzel et al. (1998) Genzel, R., et al. 1998, ApJ, 498, 579
* Genzel et al. (2001) Genzel, R., et al. 2001, ApJ, 563, 527
* Georgakakis et al. (2007) Georgakakis, A., et al. 2007, ApJ, 660, L15
* Glazebrook et al. (2004) Glazebrook, K., et al. 2004, Nature, 430, 181
* Greve et al. (2005) Greve, T., et al. 2005, MNRAS, 359, 1165
* Gu et al. (2006) Gu, Q., et al. 2006, MNRAS, 366, 480
* Higdon et al. (2004) Higdon, S., et al. 2004, PASP, 116, 975
* Hines et al. (2006) Hines, D., et al. 2006, ApJ, 641, 85
* Hopkins et al. (2006) Hopkins, P., et al. 2006, ApJ, 652, 864
* Houck et al. (2005) Houck, J., et al. 2005, ApJ, 622, L105
* Huang et al. (2004) Huang, J., et al. 2004, ApJS, 154, 44
* Huang et al. (2005) Huang, J., et al. 2005, ApJ, 634,136
* Huang et al. (2007a) Huang, J., et al. 2007a, ApJ, 660, 69
* Huang et al. (2007b) Huang, J., et al. 2007b, ApJ, 664, 840
* Huang et al. (2009) Huang, J., et al. 2009, in preparation
* Hughes et al. (1998) Hughes, D., et al. 1998, Nature, 394, 241
* Im et al. (2008) Im, M., et al. 2008, in preparation
* Ivison et al. (2007) Ivison, R., et al. 2007, ApJ, 660, 77
* Hony et al. (2001) Hony, S., Van Kerckhoven, C., Peeters, E., Tielens, A. G. G. M., Hudgins, D. M., & Allamandola, L. J. 2001, A&A, 370, 1030
* Hudgins & Allamandola (1999) Hudgins, D. M., & Allamandola, L. J. 1999, ApJ, 516, L41
* James et al. (1999) James, P. B., et al. 1999, MNRAS, 309, 585
* Jiang et al. (2006) Jiang, L., et al. 2006, AJ, 132, 2127
* Kovacs et al. (2006) Kovacs, A., et al. 2006, ApJ, 650, 592
* Kennicutt (1998) Kennicutt, R. C. 1998, ARA&A, 36,189
* Kim & Sanders. (1998) Kim, D-C. & Sanders, D. B. 1998, ApJS, 119, 41
* Kim et al. (2002) Kim, D-C., et al. 2002, ApJS, 143, 277
* Kitzbichler & White (2007) Kitzbichler, M. G. & White, S. D. M. 2007, MNRAS, 376, 2
* Kormendy & Sanders (1992) Kormendy, J. & Sanders, D. 1992, ApJ, 390, 73
* Labbe et al. (2005) Labbe, I., et al. 2005, ApJ, 624, 81
* Lai et al. (2007) Lai, K., et al. 2007, ApJ, 655, 704
* Laird et al. (2008) Laird, E., et al. 2008, ApJ, submitted
* Le Floc’h et al. (2005) Le Floc’h, E., et al. 2005, ApJ, 632, 169
* Le Floc’h et al. (2007) Le Floc’h, E., et al. 2007, ApJ, 660, L65
* Lotz et al. (2008) Lotz, J., et al. 2008, 672, 177
* Lu et al. (2003) Lu, N., et al. 2003, ApJ, 588, 199
* Lutz et al. (2005) Lutz, D., et al. 2005, ApJ, 625, 83
* Magdis et al. (2008) Magids, G., et al. 2008, MNRAS, in press
* Menendez-Delmestre et al. (2007) Menendez-Delmestre, K., et al. 2007, ApJ, 655, L65
* Menendez-Delmestre et al. (2008) Menendez-Delmestre, K., et al. 2008, in prep
* Mihos & Hernquist (1994) Mihos, C. & Herquist, L. 1994, 437, 611
* Mihos & Hernquist (1996) Mihos, C. & Herquist, L. 1996, 464, 641
* McCarthy et al. (2004) McCarthy, P., et al. 2004, ApJ, 614, 9
* Nandra et al. (2007) Nandra, K., et al. 2008, ApJ, 660, 11
* Nardini et al. (2008) Nardini, E., et al. 2008, MNRAS, submitted.
* Papovich et al. (2004) Papovich, C., et al. 2004, ApJS, 154, 70
* Papovich et al. (2005) Papovich, C., Dickinson, M., Giavalisco, M., Conselice, C. J., & Ferguson, H. C. 2005, ApJ, 631, 101
* Papovich et al. (2006) Papovich, C., et al. 2006, ApJ, 640, 92
* Papovich et al. (2007) Papovich, C., et al. 2007, ApJ, 668, 45
* Papovich et al. (2008) Papovich, C., et al. 2008, ApJ, 676, 206
* Peeters et al. (2002) Peeters, E., Hony, S., Van Kerckhoven, C., Tielens, A. G. G. M., Allamandola, L. J., Hudgins, D. M., & Bauschlicher, C. W. 2002, A&A, 390, 1089
* Pope et al. (2006) Pope, A., et al. 2006, MNRAS, 370, 1185
* Pope et al. (2008) Pope, A., et al. 2008, ApJ, 675, 1171
* Ranalli et al. (2003) Ranalli, P. et al. 2003, å, 399, 39
* Reddy et al. (2005) Reddy, N., et al. 2005, ApJ, 633, 748
* Reddy et al. (2006) Reddy, N., et al. 2006, ApJ, 653, 1004
* Reddy et al. (2008) Reddy, N., et al. 2008, ApJS, 675, 48
* Rigby et al. (2008) Rigby, J., et al. 2008, ApJ, 675, 262
* Rigopoulou et al. (1999) Rigopoulou, et al. 1999, AJ, 118, 2625
* Rigopoulou et al. (2006) Rigopoulou, et al. 2006, ApJ, 648, 81
* Rothberg & Joseph (2004) Rothberg, B. & Joseph, R. D. 2004, AJ, 128, 2098
* Sajina et al. (2007a) Sajina, A., et al. 2007, ApJ, 664, 713
* Sajina et al. (2008) Sajina, A., et al. 2008, ApJ, 683,659
* Sanders et al. (1988) Sanders, D., et al. 1988, ApJ, 328, L35
* Sanders & Mirabel (1996) Sanders, D. & Mirabel, I. F. 1996, ARA&A, 34, 749
* Shapley et al. (2001) Shapley, A., et al. 2001, ApJ, 562, 95
* Shapley et al. (2005) Shapley, A., et al. 2005, ApJ, 626, 698
* Shi et al. (2007) Shi, Y., et al. 2005, ApJ, 629, 88
* Shi et al. (2007) Shi, Y., et al. 2007, ApJ, 669, 841
* Smail et al. (1999) Smail, I., et al. 1999, MNRAS, 308, 1061
* Smith et al. (2007) Smith, J. D., et al. 2007, ApJ, 656, 770
* Steidel et al. (2004) Steidel, C., et al. 2004, ApJ, 604, 534
* Storchi-Bergmann et al. (2005) Storchi-Bergmann, T., et al. 2005, ApJ, 624, 13
* Szokoly et al. (2004) Szokoly, G. P., et al. 2004, ApJS, 155, 271
* Tacconi et al. (2002) Tacconi, L., et al. 2002, ApJ, 580, 73
* Teplitz et al. (2007) Teplitz, H., et al. 2007, ApJ, 659, 941
* Valiante et al. (2006) Valiante, E., et al. 2007, ApJ, 660, 1060
* van Diedenhoven et al. (2004) van Diedenhoven, B., Peeters, E., Van Kerckhoven, C., Hony, S., Hudgins, D. M., Allamandola, L. J., & Tielens, A. G. G. M. 2004, ApJ, 611, 928
* Veilleux et al. (1995) Veilleux, S., et al. 1995, ApJS, 98, 171
* Veilleux et al. (1999) Veilleux, S., et al. 1999, ApJ, 522, 113
* Watabe et al. (2005) Yasuyuki, W., & Masayuki, U. 2005, ApJ, 618, 649
* Watabe et al. (2008) Yasuyuki, W., et al. 2008, ApJ, 677, 895
* Webb et al. (2003) Webb, T. M. A., et al. 2003, ApJ, 597,680
* Webb et al. (2006) Webb, T. M. A., et al. 2006, ApJ, 636, L17
* Weedman et al. (2006a) Weedman, D. W., Le Floc’h, E., Higdon, S. J. U., Higdon, J. L., & Houck, J. R. 2006, ApJ, 638, 613
* Weedman et al. (2006b) Weedman, D., et al. 2006, ApJ, 653, 101
* Weedman et al. (2006c) Weedman, D. W., et al. 2006, ApJ, 651, 101
* White & Frenk (1991) White, S. D. M., & Frenk, C. S. 1991, ApJ, 379, 52
* Windhorst et al. (2002) Windhorst, R., et al. 2002, ApJS, 143, 113
* Wolf et al. (2005) Wolf, C., et al. 2005, ApJ, 630, 771
* Wu et al. (2005) Wu, H., et al. 2005, ApJ, 632, 79
* Yan et al. (2005) Yan, L., et al. 2005, ApJ, 628, 604
* Younger et al. (2008) Younger, J., et al. 2008, in preparation.
Figure 1: IRAC color-color diagram for several samples. The upper left panel
is for the entire AEGIS spectroscopic redshift sample with $0<z<1.5$; the
upper right panel is for the combined BM/BX and LBG samples with confirmed
spectroscopic redshifts (Steidel et al., 2004; Reddy et al., 2008); the lower
left panel is for the DRGs (Franx et al., 2003); and the lower right panel is
for BzK galaxies (Daddi et al., 2005). The boxes in each panel denote the IRAC
color selection for the present sample. The track for the M82 template is also
plotted in each panel. The rest-frame UV-selected BM/BX and LBG galaxies have
generally faint IRAC flux densities and thus larger photometric uncertainties
(Huang et al., 2005; Rigopoulou et al., 2006), increasing the apparent scatter
in the upper right panel.
Figure 2: IRAC color-color diagram for EGS galaxies with $F(24~{}\micron)>80$
$\mu$Jy. Small dots show all such galaxies; large red dots show galaxies in
the current IRS spectroscopic sample, which also requires
$F(24~{}\micron)>500$ $\mu$Jy. The black box shows the IRAC color criteria,
which should select objects at $z>1.5$. The one red dot outside the selection
box is the serendipitous source EGS24a. Objects from other IRS spectroscopic
samples (Table 1) at $z\sim 2$ are plotted for comparison: blue triangles and
green diamonds denote “optically invisible” sources (Houck et al., 2005; Yan
et al., 2005). Blue squares denote the luminous starbursts of Weedman et al.
(2006b).
Figure 3: Observed IRS spectra. The vertical scale is linear but different for
each panel. The gray-scale images are the two-dimensional IRS spectral images
after wavelength and position calibration. Each image shows 5 pixels or
25$\farcs$5 along the slit. The dashed lines indicate the central wavelengths
of the PAH emission features, rest-frame 7.7, 8.6, and 11.3 µm left to right.
EGS24a, the serendipitous object in the slit of EGS24, shows a power-law SED
with strong silicate absorption. Cross-correlation of the template (red dashed
line) of the local ULIRG IRAS F08572+3915 to the spectrum of EGS24a gives
$z=2.12$. For EGS_b2 at $z=1.59$, the 7.7 µm feature is off scale to the left,
and the peak observed at 33.5 µm is the [Ne II] emission line at rest
wavelength 12.81 µm. EGS_b6 is a confused case with combining spectra of two
galaxies at z=1.02 and z=2.0.
Figure 4: Near infrared spectrum of EGS12 taken with the MOIRC spectragraph on
Subaru. There is nothing detected except one emission line at 1.992 µm. We
identify this line as H${\alpha}$ at $z=2.033$, and corresponding rest-frame
wavelengths are marked above the plot. The redshift is consistent with
$z=2.03$ derived from the PAH features in the IRS spectrum (Fig. 3).
Figure 5: Redshift distributions for spectroscopic samples in Table 1. In the
third panel, the red line shows the distribution for the starburst (SB) sample
and the black line for the AGN sample.
Figure 6: Stacked spectrum for ULIRGs in the present sample (black line). The
short wavelength limit for the stacked spectrum is 7 µm. The large dot at rest
wavelength 5.3 µm represents the stacked AKARI 15 µm flux density. The
vertical scale is linear flux density per unit frequency in arbitrary units.
Other lines show stacked spectra of comparison samples: local starburst
galaxies (grey line; Brandl et al., 2006), local Seyfert-type ULIRGs (red
line), local LINER-type ULIRGs (blue line), and local h ii/starburst ULIRGs
(green line). The local ULIRG samples are from the IRAS 1 Jy sample (Kim &
Sanders., 1998) with IRS observations in the IRS GTO program (PID 105; Farrah
et al., 2007; Armus et al., 2007). Types were assigned according to optical
spectroscopy (Veilleux et al., 1995, 1999). The average SED of the present
sample is very similar to those of local LINER and h ii-type ULIRGs, while
average SEDs for objects in Sajina et al. (2007a) are close to the local
Seyfert type ULIRGs with much higher continuum emission.
Figure 7: Spectral energy distributions and morphologies for sample galaxies.
Inset images are in negative grey scale and are 12″ square. The red images
come from HST ACS data (filter F814W) if available; otherwise Subaru $R$.
Black dots represent photometric data, and the blue line is the stellar
population model (BC03) that best fits each source. The red lines are the CE01
dust templates chosen to match the FIR luminosity of each source.
Figure 8: Correlation between $L_{IR}$ and $L_{1.4GHz}$. $L_{IR}$ was
calculated by fitting SED templates (CE01) to the MIPS 70, 160 µm, and MAMBO
1.2 mm photometry. The two points with red circles are the X-ray sources EGS14
and EGS_b2, and the one with a blue circle is the serendipitous object EGS24a.
The sample is plotted together with local starburst galaxies and ULIRGs
against the the local FIR-Radio relation (the thick line, Condon, 1992). The
color coding: Seyferts in the local ULIRG and starburst samples are shown in
red; LINERs in the local ULIRG and starburst samples are shown in green; and
Starburst/HII-type ULIRGs in blue, the same as in Figure 6. This plot shows
that the local starburst galaxies and ULIRGs, and objects in our sample are
all consistent with the local FIR-Radio relation (Condon, 1992). Objects in
Sajina et al. (2007a) show strong radio excesses indicating AGNs in their
sample.
Figure 9: Rest-frame 8 µm luminosity, $L_{8~{}\micron}$ versus total infrared
luminosity, $L_{IR}$. ULIRGs in this sample are shown as filled black dots;
the two points with red circles are the X-ray sources EGS14 and EGS_b2, and
the one with a blue circle is the serendipitous object EGS24a; local starburst
galaxies as triangles, and local ULIRGs as diamonds. Local starbursts and
ULIRGs are color-coded based their spectral classification (Veilleux et al.,
1995, 1999; Brandl et al., 2006): red for Seyfert, green for LINER, and blue
for h ii/starburst. Two template models (Daddi et al., 2007a) and one
empirical (Bavouzet et al., 2008) are also plotted: DH02 models as the dashed
line, CE01 models as the solid line, and the empirical model as blue solid
line. The inset shows the same data for the high-luminosity galaxies but
plotted as the ratio of $L_{8~{}\micron}/L_{IR}$. $L_{8~{}\micron}$ was
measured for each galaxy by convolving its spectrum with the redshifted
bandpass of the IRAC 8 µm filter. In this plot, Rest-frame 8 µm luminosities
for the present sample are correlated with their total infrared luminosities,
but the observed $L_{8\micron}-L_{IR}$ relation is higher than both CE01 and
DH02 model predictions. This implies that both models will over-estimate the
$L_{IR}$ for the present sample based on their $L_{8\micron}$.
Figure 10: The 7.7 µm PAH to total infrared luminosity ratio versus total
infrared luminosity. The $L_{7.7}/L_{IR}$ ratio measure the star formation
contribution in the total infrared luminosity for objects in the sample. The
present sample has the highest $L_{7.7}/L_{IR}$ ratio, indicating that they
are starburst dominated ULIRGs. The $L_{7.7}/L_{IR}$ ratio for the present
sample is still compatible to the empirical relation of $L_{7.7}/L_{IR}\sim
L_{IR}^{0.45}$ from the local starburst galaxies. Objects in the present
sample sample are shown as filled black dots (the X-ray source EGS14 is a
filled dots with red circle, the 7.7µm PAH in the other X-ray sources EGS_b2
is not in our observation band, thus it is not in the diagram); local
starburst galaxies as triangles, and local ULIRGs as diamonds. Local
starbursts and ULIRGs are color-coded based on their spectral classification
(Veilleux et al., 1995, 1999; Brandl et al., 2006): red for Seyfert, green for
LINER, and blue for h ii/starburst. Open circles show ULIRGs and HyperLIRGs
from the Yan et al. (2005) sample with data from Sajina et al. (2007a). The
inserted plot shows a strong correlation between $L_{7.7}$ and $L_{IR}$ for
the local starburst galaxies. Thick lines show linear fits to the correlations
for local starburst galaxies as $L_{IR}\sim L_{7.7}^{0.69}$, which transfers
to $L_{7.7}/L_{IR}\sim L_{IR}^{0.45}$ plotted with the thick line.
Figure 11: The 11.3 µm PAH to total infrared luminosity ratio versus total
infrared luminosity. The plot shows the same pattern as in Figure 10. Objects
in the present sample sample are shown as filled black dots; the two filled
dots with red circles are two X-ray sources, EGS14 and EGS_b2; local starburst
galaxies as triangles, and local ULIRGs as diamonds. Local starbursts and
ULIRGs are color-coded based their spectral classification (Veilleux et al.,
1995, 1999; Brandl et al., 2006): red for Seyfert, green for LINER, and blue
for h ii/starburst. The inserted plot shows a strong correlation between
$L_{11.3}$ and $L_{IR}$ for the local starburst galaxies. Thick lines show
linear fits to the correlations for local starburst galaxies as $L_{IR}\sim
L_{11.3}^{0.75}$, which transfers to $L_{11.5}/L_{IR}\sim L_{IR}^{0.33}$
plotted with the thick line.
Figure 12: The IR-to-UV luminosity ration for galaxies at $z\sim 2$ and $z\sim
0$. The left panel is for galaxies at $z\sim 2$. Our sample is plotted against
galaxies at $z\sim 2$ selected in various bands. The solid lines in both
panels are the $L_{IR}/L_{1600}$-$L_{IR}+L_{1600}$ relation for BM/BX sources,
BzK galaxies, DRGs, and SMGs (Reddy et al., 2006). The right panel is for
local galaxies including normal galaxies (Bell et al., 2005), starburst
galaxies (Brandl et al., 2006), and ULIRGs. Most objects in our sample have
the same relation as the rest of galaxies population at $z\sim 2$. Three
objects in our sample with extreme red colors, together with some DRGs in
Reddy et al. (2006), are off the relation. They locate in the region where
local ULIRGs are, indicating a compact dust distribution in those objects. See
detailed discussion in the text.
Figure 13: The SFR-stellar mass relation for galaxies at z=0, 1, and 2
suggests the ”downsizing” scenario for galaxy formation. The mean
$L_{IR}$-$M_{*}$ relation for BzK at $z\sim 2$ is from Daddi et al. (2007a),
the relations for GOODS galaxies at $z\sim 1$ and SDSS galaxies at $z\sim 0.1$
are from Elbaz et al. (2007). Again, objects in our sample are consistent with
the relation for BM/BX sources and BzK, but at high mass end. Both simulated
galaxy population models of Kitzbichler & White (2007) and Cattaneo et al.
(2008) predicts much lower star formation rate for galaxies with a given
stellar mass.
Figure 14: $F(15~{}\micron)/F(24~{}\micron)$ versus redshifts for the ULIRG
sample (filled circles). The SMGs from Pope et al. (2008) are also plotted
(red open circles). This ratio should measure the continuum (hot dust) to PAH
ratio for objects in this redshift ranger and hence the AGN contribution to
the mid-IR luminosity. Lines show the relations defined by local templates
(Fig. 6): starburst galaxies (grey), Seyfert-type ULIRGs (red), LINER-type
ULIRGs (green), h ii/starburst ULIRGs (blue), PG QSO (dashed line). One X-ray
source, EGS_b2, is outside the AKARI 15 µm coverage and not plotted. The other
X-ray source, EGS14 (red diamond), and the serendipitous object, EGS24a (blue
triangle), have colors consistent with AGN. The 16 µm flux densities for SMGs
are measured from the IRS peak-up imaging. Since the IRS 16 µm peak-up filter
profile is very similar to the AKARI 15 µm filter profile, no correction is
applied to the IRS 16 µm flux densities for SMGs.
Table 1
IRS Observation Parameters
NicknameaaNicknames are the target names in the Spitzer archive and are used for convenience in this paper, but they are not official names and should not be used as standalone source identifications. | EGSIRACbbSource name from Barmby et al. (2008). | RA | Dec | $F(24\micron)$ | Cycles | Exp time
---|---|---|---|---|---|---
| | J2000 | mJy | | s
EGS1 | J142301.49+533222.4 | 14:23:01.50 | +53:32:22.6 | 0.55 | 10 | 7314
EGS4 | J142148.49+531534.5 | 14:21:48.49 | +53:15:34.5 | 0.56 | 10 | 7314
EGS10 | J141928.10+524342.1 | 14:19:28.09 | +52:43:42.2 | 0.62 | 08 | 5851
EGS11 | J141920.44+525037.7 | 14:19:17.44 | +52:49:21.5 | 0.59 | 08 | 5851
EGS12 | J141917.45+524921.5 | 14:19:20.45 | +52:50:37.9 | 0.74 | 05 | 3657
EGS14 | J141900.24+524948.3 | 14:19:00.27 | +52:49:48.1 | 1.05 | 03 | 2194
EGS23 | J141822.47+523937.7 | 14:18:22.48 | +52:39:37.9 | 0.67 | 07 | 5120
EGS24 | J141834.58+524505.9 | 14:18:34.55 | +52:45:06.3 | 0.66 | 07 | 5120
EGS24accSerendipitous source found in the slit while observing EGS24. | J141836.77+524603.9 | 14:18:36.77 | +52:46:03.9 | 0.66 | 07 | 5120
EGS26 | J141746.22+523322.2 | 14:17:46.22 | +52:33:22.4 | 0.49 | 11 | 8045
EGS_b2 | J142219.81+531950.3 | 14:22:19.80 | +53:19:50.4 | 0.62 | 08 | 5851
EGS_b6 | J142102.68+530224.5 | 14:21:02.67 | +53:02:24.8 | 0.72 | 06 | 4388
Table 2
IRS Sample Selection Criteria
Sample | 24 µm flux density | Color criteria
---|---|---
Houck et al. (2005) | $>$0.75 mJy | $\nu F_{\nu}(24\micron)/\nu F_{\nu}(I)>60$
Yan et al. (2005) | $>$0.90 mJy | $\nu F_{\nu}(24\micron)/\nu F_{\nu}(I)>10$ and
| | $\nu F_{\nu}(24\micron)/\nu F_{\nu}(8\micron)>3.16$
Weedman et al. (2006b)(AGN) | $>$1.00 mJy | $F(\hbox{X-ray})\tablenotemark{a}\gtrsim 10^{-15}$ erg cm-2 s-1
Weedman et al. (2006b)(SB) | $>$1.00 mJy | IRAC flux density peak at either 4.5 or 5.8µm
This paper | $>$0.50 mJy | $0<[3.6]-[4.5]<0.4$ and
| | $-0.7<[3.6]-[8.0]<0.5$
Table 3
PAH properties for the Sample
Object | redshifta | redshiftb | $\log L(7.7)$ | $EW(7.7)$ | $\log L(11.3)$ | $EW(11.3)$
---|---|---|---|---|---|---
| | | $L_{\sun}$ | µm | $L_{\sun}$ | µm
EGS1 | 1.95$\pm$0.03 | 1.90$\pm$0.02 | 11.23$\pm$0.03 | 2.38$\pm$0.22 | 10.18$\pm$0.15 | 1.68$\pm$0.26
EGS4 | 1.94$\pm$0.03 | 1.88$\pm$0.02 | 10.89$\pm$0.06 | 0.57$\mp$0.07 | 9.82$\pm$0.37 | 0.17$\pm$0.07
EGS10 | 1.94$\pm$0.02 | 1.94$\pm$0.01 | 11.33$\pm$0.02 | 2.39$\pm$0.12 | 10.04$\pm$0.31 | 0.26$\pm$0.10
EGS11 | 1.80$\pm$0.02 | 1.80$\pm$0.01 | 11.02$\pm$0.05 | 0.79$\pm$0.10 | 10.25$\pm$0.12 | 1.19$\pm$0.16
EGS12 | 2.01$\pm$0.03 | 2.02$\pm$0.03 | 11.37$\pm$0.02 | 1.46$\pm$0.08 | 10.61$\pm$0.11 | 1.28$\pm$0.55
EGS14 | 1.87$\pm$0.06 | 1.86$\pm$0.03 | 11.33$\pm$0.04 | 1.13$\pm$0.09 | 10.63$\pm$0.10 | 2.98$\pm$0.35
EGS21 | 3.01$\pm$0.03 | 3.00$\pm$0.03 | 11.73$\pm$0.06 | 1.59$\pm$0.10 | $\cdots$ | $\cdots$
EGS23 | 1.77$\pm$0.02 | 1.77$\pm$0.01 | 11.15$\pm$0.04 | 1.45$\pm$0.12 | 10.54$\pm$0.05 | 1.08$\pm$0.08
EGS24 | 1.85$\pm$0.03 | 1.85$\pm$0.01 | 11.25$\pm$0.03 | 2.24$\pm$0.18 | 10.56$\pm$0.07 | 0.36$\pm$0.08
EGS26 | 1.77$\pm$0.03 | 1.78$\pm$0.02 | 11.16$\pm$0.03 | 2.61$\pm$0.20 | 10.42$\pm$0.06 | 1.12$\pm$0.18
EGS_b2 | 1.59$\pm$0.01 | 1.60$\pm$0.01 | $\cdots$ | $\cdots$ | 10.45$\pm$0.04 | 0.30$\pm$0.04
Table 4
IR/Radio flux and luminosity of the Sample
Object | $F(3.6~{}\micron)$ | $F(4.5~{}\micron)$ | $F(5.8~{}\micron)$ | $F(8.0~{}\micron)$ | $F(15~{}\micron)$ | $F(24~{}\micron)$ | $F(70~{}\micron)$ | $F(160~{}\micron)$ | $F(850~{}\micron)$bbredshifts obtained with a starburst template. | $F(1.1~{}mm)$ | $F(1.4~{}GHz)$ | $L_{IR}$ccFIR luminosity of the best-fit CE03 template. | q
---|---|---|---|---|---|---|---|---|---|---|---|---|---
| $\mu$Jy | $\mu$Jy | $\mu$Jy | $\mu$Jy | $\mu$Jy | $\mu$Jy | mJy | mJy | mJy | mJy | mJy | $L_{\sun}$ |
EGS1 | 45.0$\pm$0.3 | 55.4$\pm$0.4 | 63.6$\pm$1.5 | 56.3$\pm$1.6 | $\cdots$ ddnot observed | 554$\pm$35 | $<$1.50 | 12.1$\pm$8.9 | 3.3 | 1.86$\pm$0.50 | 0.069$\pm$0.010 | 12.72$\pm$0.15 | 2.25
EGS4 | 32.1$\pm$0.3 | 44.9$\pm$0.4 | 52.1$\pm$1.5 | 40.1$\pm$1.5 | 125$\pm$24 | 557$\pm$22 | 2.4$\pm$0.5 | $<$21.00 | 3.9 | 1.87$\pm$0.48 | 0.062$\pm$0.010 | 12.62$\pm$0.12 | 2.19
EGS10 | 21.8$\pm$0.3 | 23.0$\pm$0.3 | 34.0$\pm$1.4 | 28.9$\pm$1.5 | 77$\pm$28 | 623$\pm$35 | 4.2$\pm$0.7 | 45.5$\pm$8.7 | 5.2 | 1.65$\pm$0.69 | 0.085$\pm$0.014 | 12.83$\pm$0.10 | 2.33
EGS11 | 27.8$\pm$0.3 | 36.9$\pm$0.4 | 38.2$\pm$1.4 | 30.6$\pm$1.5 | 192$\pm$31 | 591$\pm$20 | 5.0$\pm$0.6 | $<$21.00 | 3.3 | 0.85$\pm$0.44 | 0.067$\pm$0.017 | 12.58$\pm$0.09 | 2.24
EGS12 | 27.7$\pm$0.3 | 31.0$\pm$0.3 | 38.3$\pm$1.4 | 33.2$\pm$1.5 | 196$\pm$25 | 743$\pm$23 | 3.9$\pm$0.6 | a | 5.4 | 1.58$\pm$0.47 | 0.036$\pm$0.010 | 12.77$\pm$0.07 | 2.59
EGS14 | 66.1$\pm$0.2 | 89.6$\pm$0.4 | 101.7$\pm$1.5 | 88.4$\pm$1.6 | 457$\pm$39 | 1053$\pm$41 | 3.8$\pm$0.6 | 76.7$\pm$9.6 | 6.4 | 4.54$\pm$0.68 | 0.316$\pm$0.023 | 13.18$\pm$0.06 | 1.95
EGS21 | 39.5$\pm$0.3 | 45.3$\pm$0.4 | 50.7$\pm$1.5 | 35.0$\pm$1.5 | 59$\pm$14 | 605$\pm$23 | 2.8$\pm$0.5 | 34.2$\pm$9.4 | 8.4 | 1.31$\pm$0.35 | 0.070$\pm$0.014 | 13.15$\pm$0.07 | 2.26
EGS23 | 47.8$\pm$0.3 | 60.6$\pm$0.4 | 69.1$\pm$1.5 | 51.3$\pm$1.5 | 132$\pm$29 | 665$\pm$18 | 3.7$\pm$0.4 | 62.4$\pm$8.7 | 4.5 | 1.81$\pm$0.40 | 0.119$\pm$0.015 | 12.79$\pm$0.08 | 2.08
EGS24 | 36.4$\pm$0.3 | 44.0$\pm$0.4 | 46.8$\pm$1.4 | 37.1$\pm$1.5 | 65$\pm$25 | 663$\pm$29 | 3.4$\pm$0.6 | 9.7$\pm$9.0 | 2.7 | 1.49$\pm$0.74 | 0.047$\pm$0.012 | 12.51$\pm$0.18 | 2.16
EGS26 | 31.7$\pm$0.3 | 43.3$\pm$0.4 | 47.1$\pm$1.4 | 34.0$\pm$1.5 | 58$\pm$20 | 492$\pm$16 | 1.5$\pm$0.5 | 21.6$\pm$8.4 | 4.5 | 1.14$\pm$0.36 | 0.097$\pm$0.017 | 12.49$\pm$0.15 | 2.15
EGS24a | 22.3$\pm$0.3 | 32.3$\pm$0.3 | 46.7$\pm$1.5 | 575$\pm$1.6 | 223$\pm$36 | 997$\pm$30 | 2.5$\pm$0.5 | 15.1$\pm$8.4 | 6.4 | 2.87$\pm$0.54 | 0.112$\pm$0.013 | 12.91$\pm$0.10 | 1.91
EGS_b2 | 94.0$\pm$0.2 | 124.8$\pm$0.4 | 115.0$\pm$1.5 | 117.1$\pm$1.6 | $\cdots$ | 616$\pm$30 | 3.4$\pm$0.5 | 21.7$\pm$7.0 | 2.2 | $\cdots$ | 0.151$\pm$0.009 | 12.34$\pm$0.14 | 1.80
Table 5
Stellar Population Fitting Parameters
Name | Age | $E(B-V)$ | $M_{*}$ | SFR
---|---|---|---|---
| Gyr | | $10^{11}$ $M_{\sun}$ | $M_{\sun}$ yr-1
EGS1 | 1.9 | 0.3 | 5 | 240
EGS4 | 1.4 | 0.7 | 5 | 320
EGS10 | 1.1 | 0.4 | 2 | 182
EGS11 | 2.0 | 0.7 | 4 | 196
EGS12 | 00.29 | 0.4 | 1 | 480
EGS14aaredshifts obtained with a ULIRG template.Confused | 00.26 | 0.6 | 3 | 13200
EGS23 | 1.1 | 0.6 | 5 | 400
EGS24 | 00.29 | 0.5 | 5 | 580
EGS26 | 1.8 | 0.6 | 4 | 220
EGS_b2aaEGS14 and EGS_b2 are X-ray sources, and their SEDs may be contaminated by AGNs. | 00.03 | 0.6 | 00.9 | 38000
Table 6
Comparison of Star Formation Rates
Name | SFR(BC03)aaStar formation rate calculated from stellar population model (BC03) fitting | SFR(7.7)bbPrediction from the SED fitting. | SFR(11.3)bbStar Formation Rate calculated from PAH feature luminosity. | SFR($L_{IR}$)ccStar Formation Rate calculated from far infrared luminosity $L_{IR}$
---|---|---|---|---
| $M_{\odot}/yr$ | | |
EGS1 | 240 | 386$\pm$18 | 263$\pm$068 | 0945$\pm$326
EGS4 | 320 | 226$\pm$21 | 142$\pm$090 | 0750$\pm$207
EGS10 | 182 | 452$\pm$14 | 207$\pm$110 | 1217$\pm$280
EGS11 | 196 | 277$\pm$21 | 297$\pm$061 | 0 684$\pm$142
EGS12 | 480 | 481$\pm$15 | 549$\pm$103 | 1060$\pm$171
EGS14 | 13200 | 451$\pm$28 | 568$\pm$097 | 2724$\pm$376
EGS23 | 400 | 340$\pm$21 | 487$\pm$042 | 1110$\pm$204
EGS24 | 580 | 398$\pm$19 | 504$\pm$060 | 0582$\pm$241
EGS26 | 220 | 346$\pm$16 | 390$\pm$041 | 0556$\pm$192
EGS_b2 | 38000 | $\cdots$ | 417$\pm$029 | 0394$\pm$127
|
arxiv-papers
| 2009-04-28T20:11:19 |
2024-09-04T02:49:02.235872
|
{
"license": "Public Domain",
"authors": "J.-S. Huang (SAO), S.M. Faber (UCSC), E. Daddi (Saclay), E.S. Laird\n (Impiral College), K. Lai (UCSC), A. Omont (IAP), Y. Wu (SSC), J.D. Younger\n (IAS), K. Bundy (Berkelay), A. Cattaneo (AIP), S.C. Chapman (IoA), C.J.\n Conselice (Norttinham), M. Dickinson (NOAO), E. Egami (Arizona), G.G. Fazio\n (SAO), M. Im (Seoul National), D. Koo (UCSC), E. Le Floc'h (IfA), C. Papovich\n (Texas AM), D. Rigopoulou (Oxford), I. Smail (Durham), M. Song (Seoul\n National), P.P. Van de Werf (Leiden), T.M.A. Webb (McGill), C.N.A. Willmer\n (Arizona), S.P. Willner (SAO), L. Yan (SSC)",
"submitter": "Jiasheng Huang",
"url": "https://arxiv.org/abs/0904.4479"
}
|
0904.4536
|
# Towards a Dynamical Collision Model of Highly Porous Dust Aggregates
Carsten Güttler Maya Krause Ralf Geretshauser Roland Speith Jürgen Blum
###### Abstract
In the recent years we have performed various experiments on the collision
dynamics of highly porous dust aggregates and although we now have a
comprehensive picture of the micromechanics of those aggregates, the
macroscopic understanding is still lacking. We are therefore developing a
mechanical model to describe dust aggregate collisions with macroscopic
parameters like tensile strength, compressive strength and shear strength. For
one well defined dust sample material, the tensile and compressive strength
were measured in a static experiment and implemented in a Smoothed Particle
Hydrodynamics (SPH) code. A laboratory experiment was designed to compare the
laboratory results with the results of the SPH simulation. In this experiment,
a mm-sized glass bead is dropped into a cm-sized dust aggregate with the
previously measured strength parameters. We determine the deceleration of the
glass bead by high-speed imaging and the compression of the dust aggregate by
x-ray micro-tomography. The measured penetration depth, stopping time and
compaction under the glass bead are utilized to calibrate and test the SPH
code. We find that the statically measured compressive strength curve is only
applicable if we adjust it to the dynamic situation with a “softness”
parameter. After determining this parameter, the SPH code is capable of
reproducing experimental results, which have not been used for the calibration
before.
###### Keywords:
dust collisions, cohesive powder, modelling, planet formation
###### :
96.10.+i
## 1 Introduction
Planets form from micrometer-sized dust grains, colliding at low velocities
and sticking to each other due to attracting van-der-Waals forces Blum and
Wurm (2008). From the interplay between laboratory experiments Blum and Wurm
(2000) and molecular dynamic simulations Paszun and Dominik (2008); Dominik
and Tielens (1997) we have a good picture of the microphysics of those
processes, but as aggregates grow larger (e.g. 1 mm), the understanding of the
collision mechanics is severely lacking. For these sizes (and the
corresponding velocities), collisions between equal sized aggregates rather
lead to bouncing or fragmentation than to sticking.
Millimeter-sized aggregates – as precursors of planets – are expected to be
highly porous, having a volume filling factor (the volume fraction of
material) of only few percent up to few ten percent. This strongly cohesive
material is comparable to millimeter-sized dust clumps present in conventional
dry powders (e.g. cocoa, confectioners’ sugar) but probably being more porous.
One well analyzed analog material are dust aggregates formed by the random
ballistic deposition (RBD) method, experimentally introduced in Ref. Blum and
Schräpler (2004). These dust aggregates have a volume filling factor of
$\phi=0.15$ (for monospheric dust) and are produced in our laboratory in
macroscopic 2.5 cm samples. Using these dust samples, we performed various
collision experiments to study the further growth and evolution of planetary
bodies (e.g. Blum and Wurm, 2008; Langkowski et al., 2008). However, as
experiments cannot be performed for any relevant set of parameters (e.g.
collisions of meter-sized bodies), a collision model is required to cover the
wide parameter space occurring for protoplanetary dust-aggregate collisions,
i.e. dust aggregate sizes of up to 1 km and collision velocities in the range
of $10^{-3}-10^{2}\;{\rm m\;s^{-1}}$. The approach is therefore to measure
macroscopic parameters of the laboratory dust samples and implement them in a
numeric simulation of a dust aggregate collision, which is concurrently
performed in a laboratory experiment. Comparing experiment and simulation and
defining and determining the free parameters, yields a calibrated collision
model to perform collision simulations with parameters unaccessible to
laboratory experiments.
## 2 Smoothed Particle Hydrodynamics for Dust Collisions
For the simulation of dust aggregate collisions we use the Smoothed Particle
Hydrodynamics (SPH) method with extensions for the treatment of solid and
porous media. A comprehensive description of the meshless Lagrangian particle
method SPH can for example be found in Monaghan (2005). In this scheme the
continuous solid objects are discretized into interacting mass packages
(“particles”) carrying all relevant continuous quantities. Time evolution is
computed according to the Lagrangian equations of continuum mechanics:
$\displaystyle\frac{\mathrm{d}\varrho}{\mathrm{d}t}+\varrho\sum_{\alpha=1}^{D}\frac{\partial
v_{\alpha}}{\partial x_{\alpha}}=0$ (1)
$\displaystyle\frac{\mathrm{d}v_{\alpha}}{\mathrm{d}t}=\frac{1}{\varrho}\sum_{\beta=1}^{D}\frac{\partial\sigma_{\alpha\beta}}{\partial
x_{\beta}}$ (2)
Here, $\varrho$ denotes the density, $v$ the velocity, $D$ the dimension and
$\sigma_{\alpha\beta}$ the stress tensor, defined as
$\sigma_{\alpha\beta}=-p\delta_{\alpha\beta}+S_{\alpha\beta}.$ (3)
It consists of a pressure part with pressure $p$ and and a shear part given by
the traceless deviatoric stress tensor $S_{\alpha\beta}$. Its time evolution
is modelled according to Ref. Benz and Asphaug (1994). This set of equations
is closed by a suitable equation of state and describes the elastic behavior
of a solid body. Together with a suitable damage model, which we do not adopt,
the authors of Ref. Benz and Asphaug (1994) have modelled collisions between
brittle basaltic rocks using this scheme.
In contrast to this, we simulate the plastic behavior of porous bodies.
Therefore we adopt a modified version of the porosity model by Sirono Sirono
(2004). According to this approach, plasticity is modelled within the equation
of state and porosity is given by $1-\varrho/\varrho_{0}$, where $\varrho$
denotes the actual and $\varrho_{0}$ the bulk density of the material. The
pressure is limited by the compressive strength $\Sigma(\varrho)$ as upper
bound and the (negative) tensile strength $T(\varrho)$ as lower bound. In
between, the solid body experiences elastic deformation, whereas outside this
regime it is deformed plastically. Thus, the full equation of state reads
$p(\varrho)=\left\\{\begin{array}[]{lc}\Sigma(\varrho)&\varrho>\varrho_{\rm
c}^{+}\\\
K({\varrho^{\prime}}_{0})(\varrho/{\varrho^{\prime}}_{0}-1)&\varrho_{\rm
c}^{-}\leq\varrho\leq\varrho_{\rm c}^{+}\\\ T(\varrho)&\varrho<\varrho_{\rm
c}^{-}\\\ \end{array}\right.$ (4)
The quantity ${\varrho^{\prime}}_{0}$ denotes the density of the material at
zero external stress. $\rho_{\rm c}^{+}$ and $\rho_{\rm c}^{-}$ are limiting
quantities, where the transition between the elastic and plastic regime for
compression and tension, respectively, takes place. Once a limit is exceeded,
the material leaves the elastic path where energy is conserved, and loses
internal energy by following the paths of the compressive and tensile
strength.
In a previous work, Sirono Sirono (2004) adopted power laws from measurements
with toner particles for $\Sigma(\varrho)$, $T(\varrho)$ and the bulk modulus
$K(\varrho)$ in order to simulate porous ice, which is a crude extrapolation.
For our approach, we used the material properties, measured for well defined
RBD dust aggregates Blum and Schräpler (2004).
Figure 1: Compressive strength curve for unidirectional (dashed line),
omnidirectional (solid line), and dynamic (dotted line) compression. The inset
illustrates the setup for the omnidirectional compression measurement.
For the tensile strength we adopted the measurement of Ref. Blum and Schräpler
(2004) who measured the tensile strength for porous ($\phi=0.15$) and compact
($\phi=0.54$) aggregates and found an agreement with a linear dependence
between tensile strength and numbers of contact per cross-sectional area. This
yields a tensile strength of
$T(\phi)=-\left(10^{2.8+1.48\phi}\right)\;{\rm Pa.}$ (5)
For the compressive strength we started with the compression curve
$\phi(\Sigma)$ measured in Ref. Blum and Schräpler (2004) (unidirectional, 1D
compression) and also made a new compression measurement (omnidirectional, 3D,
see inset in Fig. 1, Güttler et al. (2009)). Both compressive strength curves
are displayed in Fig. 1. They resemble in shape and can be described by
$\Sigma(\phi)=p_{\rm
m}\cdot\left(\frac{\phi_{2}-\phi_{1}}{\phi_{2}-\phi}-1\right)^{\Delta\cdot\ln
10}\;,$ (6)
with four free parameters $\phi_{1}$, $\phi_{2}$, $\Delta$, and $p_{\rm m}$.
For unidirectional compression we found $p_{\rm m}=5.6$ kPa, $\phi_{1}=0.15$,
$\phi_{2}=0.33$, and $\Delta=0.33$, while for ominidirectional compression
measurements performed in this work the parameters are identified as $p_{\rm
m}=13$ kPa, $\phi_{1}=0.12$, $\phi_{2}=0.58$, and $\Delta=0.58$. For
unidirectional, static compression, the material can creep sideways, releasing
pressure. As we did not observe this in the dynamic experiments (next section,
Fig. 2), we expect omnidirectional compression, defining the parameters
$\phi_{1}$ and $\phi_{2}$. For low pressures, $\Delta\cdot\ln 10$ is the slope
of a power law found by divers authors Blum and Schräpler (2004); Valverde et
al. (2004) and does not leave very much margin. Thus, we take $p_{\rm m}$ as
the only free parameter with the most influence on material softness, shifting
the compression curve towards lower pressures and determine $p_{\rm m}$ within
the calibration procedure. As the the shear strength was not measured so far,
we follow Sirono Sirono (2004) and take $Y=\sqrt{\Sigma|T|}$.
## 3 Calibration Experiment
Figure 2: The density plot reveals the compaction of the dust sample under the
glass bead (red, saturated) with a volume filling factor of 0.20 to 0.25
(green) in a well confined volume of approximately one sphere volume. The
original dust material has a volume filling factor of 0.15 (light blue).
In the experiments, a solid projectile was dropped into a 2.5 cm diameter,
highly porous dust aggregate consisting of $1.5\;\mu$m SiO2 monomers (RBD dust
aggregate from Blum and Schräpler (2004)). The experiments were performed in
vacuum ($0.1$ mbar) such that gas effects did not play a role. The projectile
was either a glass bead of 1 mm diameter or a cylindrical plastic tube with a
1 or 3 mm diameter epoxy droplet at the bottom (representing a 1 or 3 mm glass
bead). The epoxy projectile had a mass corresponding to a glass bead and
therefore was longer and could be observed for a penetration deeper than its
diameter. For 15 experiments with elongated projectiles, the deceleration
curve and, thus, the penetration depth and the stopping time were measured
with a high-speed camera. The stopping time for one projectile diameter was
found to be rather constant, ($3.0\pm 0.1$) ms for 1 mm projectiles and
($6.2\pm 0.1$) ms for 3 mm projectiles, while the penetration depth depended
on the projectile size and the impact velocity (drop height). Details on the
full deceleration curve can be found in Güttler et al. (2009). The penetration
depth could well be approximated by
$D=\left(8\cdot 10^{-4}\;\frac{\rm m^{2}\;s}{\rm
kg}\right)\cdot\frac{mv}{A}\;,$ (7)
where $v$, $m$, and $A=\pi R^{2}$ are the projectile velocity at the time of
first contact, the projectile mass, and its maximum cross-sectional area (see
also Fig. 4 in the next section). The stopping time and the relation for the
penetration depth will be used for the calibration later on.
Two experiments with a 1 mm glass bead impacting with $(0.8\pm 0.1)$ m s-1
were analyzed using an x-ray micro-tomograph (Micro-CT SkyScan 1074). In this
method, the dust sample with the embedded glass bead was positioned between an
x-ray source and a detector and rotated stepwise. Based on the 400 resulting
transmission images, a 3-dimensional density reconstruction was calculated
using the SkyScan Cone-Beam Reconstruction Software. Assuming cylindrical
symmetry in the axis of penetration, the density was averaged to one vertical
section and displayed in the contour plot in Figure 2. The compaction of the
dust is clearly visible in a confined volume under the sphere. The green color
marking this volume denotes a volume filling factor of 0.20 to 0.25, while the
material around this compaction zone is virtually unaffected with an original
volume filling factor of $\phi\approx 0.15$. The distribution of compacted
material will be used for the verification of the SPH code.
## 4 Calibration and Verification of the SPH Code
In the 2D SPH simulation, an infinite cylinder with 1.1 mm diameter
($\rho=2540$ kg m-3) impacts into a dust sample with a cross section of
$8\times 5\;{\rm mm}^{2}$ at a velocity of 0.65 m s-1. For comparison with the
3D experiments a correction factor of $\frac{8}{3\pi}$ for the penetration
depth is required (see Güttler et al. (2009)). For the calibration we will use
the penetration depth, which has to be 0.82 mm (Eq. 7 and correction factor),
the stopping time of 3 ms, and the compaction under the glass bead from the
x-ray experiments.
Figure 3: Varying the parameter $p_{\rm m}$ in the simulation changes the
softness of the material and yields different penetration depths and stopping
times. The best agreement with the experiments was found for $p_{\rm m}=1.3$
kPa.
Using the shear strength formalism introduced by Sirono Sirono (2004), e.g.
$Y=\sqrt{\Sigma|T|}$, we have one free parameter with influence on the outcome
of the simulations, i.e. the adjustment of the compressive strength curve for
dynamic compression by the parameter $p_{\rm m}$ (see Eq. 6). A detailed study
on the shear model can be found in Güttler et al. (2009).
The parameter $p_{\rm m}$ defines the softness of the material and decreasing
this parameter yields deeper penetrations. Figure 3 describes the calibration
of this parameter. The horizontal line denotes the expected penetration depth
from Eq. 7, while the vertical line represents a mean stopping time of 3 ms
for 1 mm spheres. A value of $p_{\rm m}=1.3$ kPa yields the best agreement
with the experiments and thus we will use this value for further tests.
One test for the validation of the SPH code is the penetration depth relation
for different sizes and different velocities, reproducing Eq. 7.
Figure 4: The penetration depth as a function of momentum per cross-sectional
area can well be reproduced with the SPH code.
The penetration depths of the experiments and the penetration depths of the
simulations (with correction factor) are plotted in Fig. 4. For
$mvA^{-1}\mathchar 13350\relax 1$ kg m-1 s-1 experiment and simulation are in
very good agreement and the SPH code succeeds in the scaling of radius and
velocity. A second validation test is the comparison of the compressed volume
with the results of the x-ray measurement (Fig. 2).
Figure 5: Experimental results (symbols) and simulation (solid line) of the
compressed volume.
From the 3-dimensional density data, one can determine the volume fraction
that is compressed to a given volume filling factor, which is plotted in Fig.
5. The values for $\phi\mathchar 13358\relax 0.2$ represent the original
material being distributed around the mean filling factor of 0.15. Also for
this test, we find a good agreement between experiment and simulation.
## 5 Conclusion
We have developed an SPH code for the description of collisional interaction
between high-porosity dust aggregates. The code was calibrated by static and
dynamic laboratory experiments using macroscopic RBD aggregates with an
uncompressed filling factor of $\phi=0.15$. The calibrated SPH code correctly
predicts the size and velocity dependence of the penetration depth for an
impacting solid projectile as well as the compressed volume.
We thank M.-B. Kallenrode and the University of Osnabrück for providing access
to the XRT setup. Simulations were performed on clusters of the computing
center (ZDV) of the University of Tübingen. This project is funded by the
Deutsche Forschungsgemeinschaft within the Forschergruppe 759 “The Formation
of Planets: The Critical First Growth Phase” under grants Bl 298/7-1, Bl
298/8-1, and Kl 650/8-1.
## References
* Blum and Wurm (2008) J. Blum, and G. Wurm, _Annual Review of Astronomy and Astrophysics_ 46, 21–56 (2008).
* Blum and Wurm (2000) J. Blum, and G. Wurm, _Icarus_ 143, 138–146 (2000).
* Paszun and Dominik (2008) D. Paszun, and C. Dominik, _Astronomy and Astrophysics_ 484, 859–868 (2008), 0802.1832.
* Dominik and Tielens (1997) C. Dominik, and A. G. G. M. Tielens, _The Astrophysical Journal_ 480, 647 (1997).
* Blum and Schräpler (2004) J. Blum, and R. Schräpler, _Physical Review Letters_ 93, 115503 (2004).
* Langkowski et al. (2008) D. Langkowski, J. Teiser, and J. Blum, _The Astrophysical Journal_ 675, 764–776 (2008).
* Monaghan (2005) J. J. Monaghan, _Reports of Progress in Physics_ 68, 1703–1759 (2005).
* Benz and Asphaug (1994) W. Benz, and E. Asphaug, _Icarus_ 107, 98 (1994).
* Sirono (2004) S.-I. Sirono, _Icarus_ 167, 431–452 (2004).
* Güttler et al. (2009) C. Güttler, M. Krause, R. Geretshauser, R. Speith, and J. Blum, The Physics of Protoplanetesimal Dust Agglomerates. IV. Towards a Dynamical Collision Model (2009), submitted.
* Valverde et al. (2004) J. M. Valverde, M. A. S. Quintanilla, and A. Castellanos, _Physical Review Letters_ 92, 258303 (2004).
|
arxiv-papers
| 2009-04-29T05:45:06 |
2024-09-04T02:49:02.247381
|
{
"license": "Public Domain",
"authors": "Carsten G\\\"uttler, Maya Krause, Ralf J. Geretshauser, Roland Speith,\n J\\\"urgen Blum",
"submitter": "Carsten G\\\"uttler",
"url": "https://arxiv.org/abs/0904.4536"
}
|
0904.4730
|
# Extended JC-Dicke model for a two-component atomic BEC inside a cavity
Yong Li Department of Physics and Center of Theoretical and Computational
Physics, The University of Hong Kong, Pokfulam Road, Hong Kong, China Peng
Zhang ERATO, JST, Macroscopic Quantum Control Project, Hongo, Bunkyo-Ku,
Tokyo 113-8656, Japan Z. D. Wang Department of Physics and Center of
Theoretical and Computational Physics, The University of Hong Kong, Pokfulam
Road, Hong Kong, China
###### Abstract
We consider a trapped two-component atomic Bose-Einstein condensate (BEC),
where each atom with three energy-levels is coupled to an optical cavity field
and an external classical optical field as well as a microwave field to form
the so-called $\Delta$-type configuration. After adiabatically eliminating the
atomic excited state, an extended JC-Dicke model is derived under the
rotating-wave approximation. The scaled ground-state energy and the phase
diagram of this model Hamiltonian are investigated in the framework of mean-
field approach. A new phase transition is revealed when the amplitude of
microwave field changes its sign.
###### pacs:
73.43.Nq, 03.75.Kk, 42.50.Pq
## I Introduction
The so-called Dicke model Dicke:1954 describes the interaction of a large
number of two-level systems (e.g., atoms) with a single optical mode. Since
the effective light-matter coupling strength is dependent on the number of
atoms $N$ ($\propto\sqrt{N}$), a sufficiently large $N$ would lead to a
classical phase transition Hepp:1973 ; Wang:1973 (at finite temperatures)
from the normal state, which corresponds to the atomic ground state associated
with the vacuum state of the optical mode, to the superradiant-phase state,
where the phenomenon of superradiance occurs with the finite scaled mean
numbers of both photons and excited-state atoms. Recently, exploration of
quantum phase transitions in the Dicke model at zero temperature has attracted
significant attentions Emary:2003 ; Lambert ; Hou:2004 ; Emary:2004 ; Lee:2004
; Li:2006 ; Vidal:2006 ; Alcalde:2007 ; Dimer:2007 ; Tolkunov:2007 ;
Chen:2007a ; Chen:2007b ; Chen:2008 ; Goto:2008 ; Tsyplyatyev:2008 ;
QHChen:2008 ; Alcalde:2009 ; Larson:2009 ; Huang:2009 . The drastic change
across the critical point due to a qualitative change of the ground state of
the Dicke model has been investigated, in the frameworks of the scaled ground
state energy, macroscopic atomic excited-state population, quantum
entanglement, Berry phase, quantum chaos and so on.
Apart from the standard Dicke model, several more generalized Dicke models
have also been proposed and studied Emary:2004 ; Lee:2004 ; Li:2006 ;
Tolkunov:2007 ; Chen:2007b ; Chen:2008 ; Goto:2008 ; Tsyplyatyev:2008 ;
Alcalde:2009 ; Larson:2009 . In some of them Chen:2007b ; Chen:2008 , the free
atoms in the standard Dicke models are replaced by the atomic Bose-Einstein
condensate (BEC). A BEC describes a collective quantum state of a large number
of atoms and may be used to generate a macroscopic quantum object with a
longer lifetime compared to the free atoms. It has attracted much interest to
combine the BEC with optical cavity in the strong coupling regime Ottl:2006 ;
Brennecke:2007 ; Brennecke:2008 ; Murch:2008 . On the other hand, comparing
with the case of free atoms in the standard Dicke model, there is an
additional atom-atom collision interaction in the BEC, which could lead to the
change of critical phenomenon Chen:2007b . More recently, under some
approximations, Chen et al. Chen:2008 proposed an extended Dicke model based
on a two-component atomic BEC in an optical cavity with the atoms coupled to
both the quantized optical cavity filed and an external classical driving
optical field, where an interesting phase diagram covering phenomena from
quantum optics to atomic BEC is addressed.
In this paper, we consider a trapped two-component atomic BEC where the two
condensated states and an ancillary excited state of each atom form a three-
level $\Delta$-type configuration via the couplings to the optical cavity and
an externally controlled classical optical field as well as an external
microwave field. Such a three-level configuration can be reduced to a two-
level configuration (i.e. the two condensated states) after adiabatically
eliminating the atomic excited state under the large sigle-photon detuning
condition. Under the so-called rotating-wave approximation (RWA), we derive an
effective, extended JC-type JC Dicke model for such a two-component BEC. The
phase diagram for the derived extended JC-Dicke model is also investigated in
detail.
## II Model and Hamiltonian
The setup of the JC-Dicke model under consideration is depicted in Fig. 1. An
optically-trapped Rb atomic BEC under the two-mode approximation with the
atomic states $5^{2}S_{1/2}$ $\left|F=1,m_{f}=-1\right\rangle$ (ground state
$\left|1\right\rangle$) and $\left|F=1,m_{f}=0\right\rangle$ (metastable state
$\left|2\right\rangle$) is placed in a single-mode quantized optical cavity.
These two states of the atomic BEC are coupled via both an external microwave
field and a two-photon process two-photon BEC mediated by an ancillary
excited state $\left|3\right\rangle$ (from $5^{2}P_{3/2}$), where the single-
photon transition between $\left|3\right\rangle$ and the ground state
$\left|1\right\rangle$ (or the metastable state $\left|2\right\rangle$) is
coupled to the quantized optical cavity field (or the external classical
optical field). Here we assume that both the corresponding single-photon
detunings are large and the corresponding two-photon detuning is very small.
For such a case of two-photon Raman process Raman , the ancillary excited
state can be adiabatically eliminated and the effective Hamiltonian of the
two-component BEC reads ($\hbar=1$ hereafter)
$H_{\mathrm{eff}}=H_{at}+H_{ph}+H_{at-ph}+H_{at-cl}+H_{at-at}$ (1)
with
$\displaystyle H_{at}$
$\displaystyle=\nu_{1}c_{1}^{\dagger}c_{1}+(\nu_{2}+\omega_{2}-\omega_{1})c_{2}^{\dagger}c_{2},$
(2a) $\displaystyle H_{ph}$ $\displaystyle=\omega a^{\dagger}a,$ (2b)
$\displaystyle H_{at-op}$
$\displaystyle=\lambda_{\mathrm{eff}}e^{i\omega_{\mathrm{cl}}t}c_{2}^{\dagger}c_{1}a+h.c.,$
(2c) $\displaystyle H_{at-mw}$ $\displaystyle=\Omega
e^{-i\omega_{\mathrm{mw}}t}c_{2}^{\dagger}c_{1}+h.c.,$ (2d) $\displaystyle
H_{at-at}$
$\displaystyle=\frac{\eta_{1}}{2}c_{1}^{\dagger}c_{1}^{\dagger}c_{1}c_{1}+\frac{\eta_{2}}{2}c_{2}^{\dagger}c_{2}^{\dagger}c_{2}c_{2}+\eta_{12}c_{1}^{\dagger}c_{1}c_{2}^{\dagger}c_{2}$
(2e) denoting the energies of the free atoms in the BEC, the free cavity
field, the reduced effective interaction between the BEC with the optical
fields, the interaction of the BEC with the microwave field, and the atom-atom
collision interaction of the BEC, respectively. Here, $c_{1,2}$
($c_{1,2}^{\dagger}$) are the annihilation (creation) bosonic operators for
$\left|1\right\rangle$ and $\left|2\right\rangle$, respectively; $\omega_{i}$
($i=1,2,3$) is the corresponding internal level-energy for atomic state
$\left|i\right\rangle$; $\nu_{l}=\int
d^{3}\mathbf{r}\phi_{l}^{\ast}(\mathbf{r})[-\nabla^{2}/2m+V(\mathbf{r})]\phi_{l}(\mathbf{r})$
($l=1,2$) is the trapped frequency for the states $\left|1\right\rangle$ and
$\left|2\right\rangle$ with $V(\mathbf{r})$ being the trapped potential, $m$
the atomic mass, and $\phi_{l}(\mathbf{r})$ the corresponding condensate
wavefunction; $a$ ($a^{\dagger}$) is the annihilation (creation) operator of
the cavity mode with the frequency $\omega$;
$\lambda_{\mathrm{eff}}=g_{13}\Omega_{23}/\Delta$ is the reduced effective
coupling strength for two-photon Raman process, where $g_{13}$ ($\Omega_{23}$)
is the corresponding coupling strength between the BEC and the quantized
cavity field (classical optical field), $\Delta$ is the large single-photon
detuning: $\Delta\equiv\omega_{3}-$
$(\nu_{2}+\omega_{2})-\omega_{\mathrm{cl}}\gg\\{g_{13},\Omega_{23}\\}$ with
$\omega_{\mathrm{{cl}}}$ the frequency of the classical optical field;
$\Omega$ is the corresponding coupling strength between the BEC and the
microwave field (with the frequency $\omega_{\mathrm{{mw}}}$);
$\eta_{l}=(4\pi\rho_{l}/m)\int
d^{3}\mathbf{r}\left|\phi_{l}(\mathbf{r})\right|^{4}$ and
$\eta_{12}=(4\pi\rho_{12}/m)\int
d^{3}\mathbf{r}\left|\phi_{1}^{\ast}(\mathbf{r})\phi_{2}(\mathbf{r})\right|^{2}$
with $\rho_{l}$ and $\rho_{12}$ ($=\rho_{21}$) the intraspecies and the
interspecies $s$-wave scattering lengths, respectively. It is remarked that
the RWA has been used for all optical/microwave fields coupling to the atomic
BEC.
Figure 1: (Color online) (a) Schematic diagram of the experimental setup for a
trapped BEC of 87Rb atoms. (b) The internal atomic level configuration of BEC:
The $\left|1\right\rangle$$\leftrightarrow$$\left|2\right\rangle$ transition
of the BEC atoms is coupled to the external microwave field with the coupling
strength $\Omega$; The
$\left|1\right\rangle$$\leftrightarrow$$\left|3\right\rangle$ and
$\left|2\right\rangle$$\leftrightarrow$$\left|3\right\rangle$ transitions are
coupled largely-detuned to the optical cavity and the external classical
optical field with the corresponding coupling strengths $g_{13}$ and
$\Omega_{23}$, respectively. (c) When the single-photon detuning $\Delta$ is
large (the corresponding two-photon detuning is assumed to be small), a
reduced two-level configuration can be obtained after eliminating the
ancillary excited state $\left|3\right\rangle$. $\lambda_{\mathrm{eff}}$ is
the effective coupling strength for the two-photon Raman process.
By using the Schwinger relations
$\displaystyle J_{+}$ $\displaystyle=c_{2}^{\dagger}c_{1},\text{ \ \
}J_{-}=c_{1}^{\dagger}c_{2},$ (3a) $\displaystyle J_{z}$
$\displaystyle=\frac{c_{2}^{\dagger}c_{2}-c_{1}^{\dagger}c_{1}}{2},$ (3b)
which fulfill $\left[J_{+},J_{-}\right]=2J_{z},\text{ \
}\left[J_{z},J_{\pm}\right]=\pm J_{\pm},$ (4)
the Hamiltonian (1) can be written as
$\displaystyle H_{\mathrm{eff}}$ $\displaystyle=\omega
a^{\dagger}a+\omega_{0}J_{z}+\frac{\eta}{N}J_{z}^{2}$ $\displaystyle+[(\Omega
e^{-i\omega_{\mathrm{mw}}t}+\frac{\lambda}{\sqrt{N}}e^{i\omega_{\mathrm{cl}}t}a)J_{+}+h.c.]+\text{const},$
(5)
where
$N=c_{2}^{\dagger}c_{2}+c_{1}^{\dagger}c_{1}$ (6)
is the number of the atoms,
$\displaystyle\omega_{0}$
$\displaystyle=\nu_{2}+\omega_{2}-\nu_{1}-\omega_{1}+\frac{N-1}{2}\left(\eta_{2}-\eta_{1}\right),$
(7a) $\displaystyle\eta$
$\displaystyle=(\frac{\eta_{1}+\eta_{2}}{2}-\eta_{12})N,\text{ }$ (7b)
$\displaystyle\lambda$ $\displaystyle=\lambda_{\mathrm{eff}}\sqrt{N},$ (7c)
and the constant term
$\text{const}=\frac{N}{2}[(\nu_{2}+\omega_{2}-\omega_{1}-\frac{\eta_{2}}{2}+\gamma_{2}N)+(\nu_{1}-\frac{\eta_{1}}{2}+\gamma_{1}N)]$
can be neglected in the following consideration.
For $\Omega\neq 0$, to eliminate the time-dependence in Hamiltonian (5), we
perform a unitary transformation
$U=\exp[-i\omega_{\mathrm{{mw}}}J_{z}t-i(\omega_{\mathrm{{mw}}}+\omega_{\mathrm{{cl}}})a^{\dagger}at]$
and obtain an effective Hamiltonian
$\displaystyle H$
$\displaystyle=\omega_{a}a^{\dagger}a+\omega_{b}J_{z}+\frac{\eta}{N}J_{z}^{2}$
$\displaystyle+(\frac{\lambda}{\sqrt{N}}aJ_{+}+\Omega J_{+}+h.c.),$ (8)
where
$\omega_{a}=\left(\omega-\omega_{\mathrm{mw}}-\omega_{\mathrm{cl}}\right)$ and
$\omega_{b}=\left(\omega_{0}-\omega_{\mathrm{mw}}\right)$ are the effective
frequencies in the rotating frame, in which $H$ is independent of time. To the
best of our knowledge, Hamiltonian (8) appears to be a new one in literatures,
and thus we call it as the extended JC-Dicke model.
## III Mean-field ground state energy
We now look into the ground-state properties of Hamiltonian (8) and the
corresponding quantum phases as well as their transitions. We here consider
the case of positive $\omega_{a}$, where the stable ground state is
anticipated for Hamiltonian (8). By using the Holstein-Primakoff
transformation HP
$\displaystyle J_{+}$ $\displaystyle=b^{\dagger}\sqrt{N-b^{\dagger}b},\text{\
}J_{-}=\sqrt{N-b^{\dagger}b}b,$ $\displaystyle J_{z}$
$\displaystyle=b^{\dagger}b-\frac{N}{2}$ (9)
with $[b,b^{\dagger}]=1$, the effective Hamiltonian reads
$\displaystyle H$
$\displaystyle=\omega_{a}a^{\dagger}a+\omega_{b}(b^{\dagger}b-\frac{N}{2})+\frac{\eta}{N}(b^{\dagger}b-\frac{N}{2})^{2}$
$\displaystyle+[(\lambda
a+\Omega\sqrt{N})b^{\dagger}\sqrt{1-\frac{b^{\dagger}b}{N}}+h.c.].$ (10)
Similar to that used in the standard Dicke model Emary:2003 , we here
introduce the displacements for the two shifting boson operators as
$c^{\dagger}=a^{\dagger}-\sqrt{N}\alpha^{\ast}$ and
$d^{\dagger}=b^{\dagger}+\sqrt{N}\beta^{\ast}$ with the complex displacement
parameters $\alpha$ and $\beta$ describing the scaled collective behaviors of
both the atoms and the photons Emary:2003 ; Li:2006 ; Chen:2007b ; Chen:2008 .
In fact, the current method of introducing the displacements is equivalent to
the mean field approach. In this framework, it is clear that
$0\leq\left|\beta\right|\leq 1$.
After expanding the terms in the square in (10), the scaled Hamiltonian can be
written up to the order of $N^{-1}$ as
$H/N=H_{0}+N^{-1/2}H_{1}+N^{-1}H_{2},$ (11)
where
$\displaystyle H_{0}$
$\displaystyle=\omega_{a}\alpha^{\ast}\alpha+\omega_{b}(\beta^{\ast}\beta-\frac{1}{2})+\eta(\beta^{\ast}\beta-\frac{1}{2})^{2}$
$\displaystyle-[\left(\lambda\alpha+\Omega\right)\beta^{\ast}\sqrt{1-\beta^{\ast}\beta}+c.c.]$
(12)
denotes the scaled constant energy, and $H_{1,2}$ denote the linear and
bilinear terms, respectively. It is noted that $H_{0,1,2}$ are independent of
the number of atoms $N$.
The scaled ground-state energy is just given by the scaled constant energy in
the Hamiltonian
$E_{g}^{N}(\alpha,\beta)\equiv\frac{E_{g}(\alpha,\beta)}{N}=H_{0},$ (13)
where the displacements $\alpha$ and $\beta$ should be determined from the
equilibrium condition
$\displaystyle\partial[E_{g}(\alpha,\beta)/N]/\partial\alpha^{\ast}$
$\displaystyle=0,$ (14a)
$\displaystyle\partial[E_{g}(\alpha,\beta)/N]/\partial\beta^{\ast}$
$\displaystyle=0.$ (14b) After some derivations, we find that $\alpha$ is
given by
$\alpha=\frac{\lambda^{\ast}}{\omega_{a}}\beta\sqrt{1-\beta^{\ast}\beta}.$
(15)
and $\beta$ satisfies
$\displaystyle\Omega\sqrt{1-\beta^{\ast}\beta}=\beta[$
$\displaystyle\omega_{b}+w\left(2\beta^{\ast}\beta-1\right)$
$\displaystyle+(\frac{\Omega\beta^{\ast}}{2\sqrt{1-\beta^{\ast}\beta}}+c.c.)],$
(16)
where $w=\eta+\left|\lambda\right|^{2}/\omega_{a}$.
From the above equation (16), it is obvious that $\Omega/\beta$ should be real
since all of the parameters, except for the coupling strengths $\lambda$ and
$\Omega$, are real. Here, we assume the case of real $\Omega$ for simplicity
Note:Rabi frequency . That means, $\beta$ should also be real:
$-1\leq\beta\leq 1$ and satisfy
$0=\omega_{b}\beta\sqrt{1-\beta^{2}}+\Omega\left(2\beta^{2}-1\right)+w\beta\left(2\beta^{2}-1\right)\sqrt{1-\beta^{2}}.$
(17)
For a general real $\Omega$, using Eq. (15) to eliminate the displacement
$\alpha$, the scaled ground-state energy in Eq. (13) can be expressed in terms
of the displacement $\beta$ as
$E_{g}^{N}(\beta)=\omega_{b}(\beta^{2}-\frac{1}{2})-2\Omega\beta\sqrt{1-\beta^{2}}+w(\beta^{2}-\frac{1}{2})^{2}.$
(18)
The displacement $\beta$ is a nontrivial real solution of Eq. (17). In
general, Eq. (17) has more than one real solution, and only the one that
yields the minimal scaled ground-state energy should be chosen.
We remark that the displacements determined from the equilibrium equations
make the linear term $H_{1}$ vanish, while the bilinear term $H_{2}$ (which
enters $H/N$ only at order $N^{-1}$) makes no contribution to the scaled
ground-state energy. Therefore, the exact forms of $H_{1,2}$ are not needed in
our analysis, and the constant term $H_{0}$ alone fully determines the scaled
ground-state energy of the present system.
Figure 2: (Color online) (a) The atomic displacement $\beta$ of the ground
state, (b) the square of the atomic displacement, $\beta^{2}$, for the ground
state (equivalently, the scaled magnetization plus $1/2$: $m+1/2$), and (c) the
scaled ground-state energy $E_{g}^{N}$ versus $\omega_{b}$ for different
coupling strengths $\Omega$. The energies and frequencies are in units of $w$
($w>0$).
## IV Numerical results and analysis
In this section, we focus on the numerical calculation of the scaled
ground-state energy $E_{g}^{N}(\beta)$ in Eq. (18) and of the corresponding
displacement $\beta$ that minimizes it. The minimal $E_{g}^{N}(\beta)$ (as
well as the corresponding $\beta$) is determined by three parameters:
$\omega_{b}$, $\Omega$, and $w$.
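As a rough illustration of this procedure (our own sketch, not part of the original analysis; the parameter values below are arbitrary and given in units of $w$), the scaled energy of Eq. (18) can be minimized numerically over $\beta\in[-1,1]$, after which $\alpha$ follows from Eq. (15):

```python
import numpy as np

def scaled_energy(beta, omega_b, Omega, w):
    """Scaled ground-state energy E_g^N(beta) of Eq. (18)."""
    return (omega_b * (beta**2 - 0.5)
            - 2.0 * Omega * beta * np.sqrt(1.0 - beta**2)
            + w * (beta**2 - 0.5)**2)

def ground_state(omega_b, Omega, w, n_grid=20001):
    """Global minimum of Eq. (18) on a dense grid over beta in [-1, 1].

    A grid search is used rather than a local optimizer because Eq. (17)
    may have several real roots and we need the minimal-energy one.
    """
    beta = np.linspace(-1.0, 1.0, n_grid)
    energy = scaled_energy(beta, omega_b, Omega, w)
    i = np.argmin(energy)
    return beta[i], energy[i]

# Illustrative example (units of w, so w = 1; values are assumptions, not from the text)
beta0, e0 = ground_state(omega_b=0.5, Omega=0.2, w=1.0)
magnetization = beta0**2 - 0.5   # M = <J_z>/N, as defined below
# alpha then follows from Eq. (15): alpha = (lambda*/omega_a) * beta * sqrt(1 - beta^2)
print(beta0, e0, magnetization)
```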
Figure 2 plots $\beta$, $\beta^{2}$, and $E_{g}^{N}(\beta)$ for the ground
state of the extended JC-Dicke model as functions of $\omega_{b}$ for several
values of $\Omega$. All energies and frequencies in Fig. 2 are in units of
$w$, taken to be positive; we emphasize "positive $w$" because $w$ may be
either positive or negative, and a negative $w$ leads to different results. As
seen from Fig. 2, in the absence of the microwave field ($\Omega=0$) a
second-order normal-superradiant phase transition is expected at
$\omega_{b}/w=1$, as in the standard Dicke model Emary:2003 ; in the presence
of the microwave field ($\Omega\neq 0$), the normal phase and the
corresponding transition disappear.
Figure 3: (Color online) The atomic displacement $\beta$ for the ground state
versus the coupling strength $\Omega$ for different $\omega_{b}$. The
frequencies are in units of $w$ ($w>0$). Figure 4: (Color online) (a) The
atomic displacement $\beta$ for the ground state and (b) the scaled ground
state energy $E_{g}^{N}$ versus $\Omega$ and $w$. The frequencies/energies are
in units of $\omega_{b}$ (here we keep $\omega_{b}>0$).
The ground-state displacement $\beta$ is plotted as a function of $\Omega$ (in
units of positive $w$) for several values of $\omega_{b}$ in Fig. 3. For
positive (negative) $\Omega$, the corresponding $\beta$ is also positive
(negative). At the point $\Omega\rightarrow 0$, the ground-state $\beta$ may
exhibit a jump. To examine the possible transition at $\Omega\rightarrow 0$,
we plot the ground-state $\beta$ and $E_{g}^{N}(\beta)$ as functions of the
parameters $\Omega$ and $w$ (in units of positive $\omega_{b}$) in the
three-dimensional (3D) plots of Fig. 4. In experiments, the parameters
$\Omega$ and $w$ are controllable: the former can be tuned simply by changing
the strength of the microwave field, while the latter can be controlled by
adjusting the atom-atom interactions via magnetic and/or optical Feshbach
resonance techniques m-Feshbach ; o-Feshbach ; Zhang:2009 . From Fig. 4(a), it
is clear that the displacement $\beta$ jumps at $\Omega\rightarrow 0$ when
$w>\omega_{b}$. Notably, the scaled ground-state energy is always continuous
at the jump point, but its first derivative with respect to $\Omega$ is not,
which implies a new kind of first-order phase transition at this parameter
point whenever $\Omega$ changes its sign. Under the replacement
$\Omega\rightarrow-\Omega$, we find the corresponding replacements
$\beta\rightarrow-\beta$ and $E_{g}^{N}(\beta)\rightarrow
E_{g}^{N}(-\beta)=E_{g}^{N}(\beta)$ according to Eqs. (17) and (18), as seen
in Fig. 4.
Figure 5: (Color online) The atomic scaled magnetization $M$ versus $\Omega$
and $w$. The parameters are in units of $|\omega_{b}|$ for positive (a) and
negative (b) $\omega_{b}$. Figure 6: (Color online) The $\Omega$-$w$ phase
diagram for (a) the positive $\omega_{b}$ and (b) negative $\omega_{b}$.
We may define the scaled ’magnetization’ of the ground state as $M\equiv\langle
J_{z}\rangle/N=\beta^{2}-1/2$. A positive magnetization $M$ indicates
population inversion of the atoms: more atoms occupy the upper state
$\left|2\right\rangle$ than the lower state $\left|1\right\rangle$, while a
negative $M$ means the opposite. In particular, $M=-1/2$ corresponds to the
normal phase, in which all atoms occupy the lower state. Figure 5 plots the
magnetization against the parameters $\Omega$ and $w$, where Figs. 5(a) and
(b) correspond to positive and negative $\omega_{b}$, respectively. From Fig.
5, we may define the phases $P_{1,2,3,4}$, as denoted in Fig. 6 and Table 1.
The line $L_{0}$ of the normal phase and the line $L_{12}$ of the superradiant
phase, which appear only in the absence of the microwave field, are the
boundary lines of the phases $P_{1,2}$ in Fig. 6(a); likewise, $L_{0}^{\prime}$
and $L_{34}$ are the boundary lines of $P_{3,4}$ in Fig. 6(b). Table 1 lists
the relevant quantities that distinguish the phases in Fig. 6 note .
We now address the points and lines of quantum phase transition in the phase
diagram. Point $A$ in Fig. 6(a) denotes the well-known normal-superradiant
phase transition (along the transverse arrow) in the standard JC Dicke model,
while point $D$ represents a similar transition (corresponding to point $A$)
for the case of near-inverted atomic population when $\omega_{b}$ is negative.
The line $L_{12}$ corresponds to the phase-transition line segment from
$P_{1}$ to $P_{2}$, which is of first order, as indicated above. Intuitively,
the phases $P_{1}$ and $P_{2}$ may be viewed as the para- and dia- "magnetic"
phases, because $\frac{\partial M}{\partial\Omega}>0$ and $<0$ for $P_{1}$ and
$P_{2}$, respectively. Correspondingly, the line segment $L_{34}$ is the
first-order phase-transition line that separates $P_{3}$ and $P_{4}$, which
may be viewed respectively as the dia- and para- "magnetic" phases.
Table 1: Relevant quantities for different phases $P_{1,2,3,4}$ in the ground state. quantity | P1 | P2 | P3 | P4
---|---|---|---|---
$\omega_{b}$ | $>0$ | $>0$ | $<0$ | $<0$
$\Omega$ | $>0$ | $<0$ | $>0$ | $<0$
$\beta$ | $(0,\frac{\sqrt{2}}{2})$ | $(-\frac{\sqrt{2}}{2},0)$ | $(\frac{\sqrt{2}}{2},1)$ | $(-1,-\frac{\sqrt{2}}{2})$
$M$ | $(-\frac{1}{2},0)$ | $(-\frac{1}{2},0)$ | $(0,\frac{1}{2})$ | $(0,\frac{1}{2})$
$\frac{\partial E_{g}^{N}}{\partial\Omega}$ | $<0$ | $>0$ | $<0$ | $>0$
$\frac{\partial M}{\partial w}$ | $>0$ | $>0$ | $<0$ | $<0$
$\frac{\partial M}{\partial\Omega}$ | $>0$ | $<0$ | $<0$ | $>0$
It is worth pointing out that we have neglected the anti-resonant terms and the
corresponding $\hat{A}^{2}$ terms ($\hat{A}$ is the vector potential of the
optical field) in the original Hamiltonian. In the standard Dicke model, it
was pointed out that the anti-resonant terms have a non-negligible influence
near the critical point and modify the character of the quantum phase
transition Duncan:1974 ; Emary:2003 . It was also pointed out that the quantum
phase transition in the standard Dicke model occurs in the effective
ultra-strong matter-light coupling regime, where the $\hat{A}^{2}$ term can
also become very strong and may not be omitted. Notably, if the effect of the
$\hat{A}^{2}$ term is not neglected, the quantum phase transition cannot occur
in the standard Dicke model Rzazewski:1975 . To obtain a quantum phase
transition in the Dicke model, it was proposed in Ref. Dimer:2007 to realize
an effective Dicke model that does not include the $\hat{A}^{2}$ term. In the
current scheme, similarly to Ref. Dimer:2007 , we have obtained an effective
extended JC-Dicke model that contains neither the anti-resonant terms nor the
$\hat{A}^{2}$ terms. In our original Hamiltonian, the matter-light couplings
are assumed to be much smaller than the corresponding atomic transition and
optical carrier frequencies, so it is safe to neglect the anti-resonant and
$\hat{A}^{2}$ terms. After the unitary transformation, the time-independent
effective Hamiltonian may then exhibit a quantum phase transition when the
matter-light coupling becomes comparable to the effective carrier frequencies.
## V Conclusion
In conclusion, we have derived an extended JC-Dicke model for a two-component
BEC coupled to a quantized optical cavity field, an external classical optical
field, and a microwave field. The scaled ground-state energy and the phase
diagram of this model Hamiltonian have been investigated within a mean-field
approach. A new first-order phase transition is revealed when the amplitude of
the microwave field changes its sign.
###### Acknowledgements.
We thank Ming-Yong Ye, Zi-Jian Yao, Gang Chen, and Zheng-Yuan Xue for helpful
discussions. This work was supported by the RGC of Hong Kong under Grant No.
HKU7051/06P, the URC fund of HKU, and the State Key Program for Basic Research
of China (No. 2006CB921800).
## References
* (1) R. H. Dicke, Phys. Rev. 93, 99 (1954).
* (2) K. Hepp and E. H. Lieb, Ann. Phys. (N. Y.) 76, 360 (1973).
* (3) Y. K. Wang and F. T. Hioe, Phys. Rev. A 7, 831 (1973).
* (4) C. Emary and T. Brandes, Phys. Rev. Lett. 90, 044101 (2003); Phys. Rev. E 67, 066203 (2003).
* (5) N. Lambert, C. Emary, and T. Brandes, Phys. Rev. Lett. 92, 073602 (2004); Phys. Rev. A 71, 053804 (2005).
* (6) X.-W. Hou and B. Hu, Phys. Rev. A 69, 042110 (2004).
* (7) C. Emary and T. Brandes, Phys. Rev. A 69, 053804 (2004).
* (8) C. F. Lee and N. F. Johnson, Phys. Rev. Lett. 93, 083001 (2004).
* (9) Y. Li, Z. D. Wang, and C. P. Sun, Phys. Rev. A 74, 023815 (2006).
* (10) J. Vidal and S. Dusuel, Europhys. Lett. 74, 817 (2006).
* (11) M. A. Alcalde, A. L. L. de Lemos, and N. F. Svaiter, J. Phys. A: Math. Theor. 40, 11961 (2007).
* (12) F. Dimer, B. Estienne, A. S. Parkins, and H. J. Carmichael, Phys. Rev. A 75, 013804 (2007).
* (13) G. Chen, Z. Chen, and J. Liang, Phys. Rev. A 76, 055803 (2007).
* (14) D. Tolkunov and D. Solenov, Phys. Rev. B 75, 024402 (2007).
* (15) G. Chen, Z. Chen, and J. Liang, Phys. Rev. A 76, 045801 (2007).
* (16) G. Chen, X. Wang, J.-Q. Liang, and Z. D. Wang, Phys. Rev. A 78, 023634 (2008).
* (17) H. Goto and K. Ichimura, Phys. Rev. A 77, 053811 (2008).
* (18) O. Tsyplyatyev and D. Loss, arXiv:0811.2386.
* (19) M. A. Alcalde, R. Kullock, and N. F. Svaiter, J. Math. Phys. 50, 013511 (2009).
* (20) J. Larson and M. Lewenstein, arXiv:0902.1069.
* (21) Q.-H. Chen, Y.-Y. Zhang, T. Liu, and K.-L. Wang, Phys. Rev. A 78, 051801(R) (2008).
* (22) J.-F. Huang, Y. Li, J.-Q. Liao, L.-M. Kuang, and C. P. Sun, arXiv:0902.1575.
* (23) A. Öttl, S. Ritter, M. Köhl, T. Esslinger, Rev. Sci. Instrum. 77, 063118 (2006).
* (24) F. Brennecke, T. Donner, S. Ritter, T. Bourdel, M. Köhl, and T. Esslinger, Nature 450, 268 (2007).
* (25) F. Brennecke, S. Ritter, T. Donner, and T. Esslinger, Science 322, 235 (2008).
* (26) K. W. Murch, K. L. Moore, S. Gupta, and D. M. Stamper-Kurn, Nature Phys. 4, 561 (2008).
* (27) M. R. Matthews, D. S. Hall, D. S. Jin, J. R. Ensher, C. E. Wieman, E. A. Cornell, F. Dalfovo, C. Minniti, and S. Stringari, Phys. Rev. Lett. 81, 243 (1998) ; D. S. Hall, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. 81, 1543 (1998) ; D. S. Hall, M. R. Matthews, J. R. Ensher, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. 81, 1539 (1998).
* (28) J. Metz, M. Trupke, and A. Beige, Phys. Rev. Lett. 97, 040503 (2006); S.-L. Zhu, H. Fu, C.-J. Wu, S.-C. Zhang, and L.-M. Duan, Phys. Rev. Lett. 97, 240401 (2006); Y. Li, C. Bruder, and C. P. Sun, Phys. Rev. Lett. 99, 130403 (2007).
* (29) The so-called “JC” model is the abbreviation for Jaynes-Cummings model (see E. T. Jaynes and F. W. Cummings, Proc. IEEE 51, 89 (1963)), which describes the coherent coupling of a single two-level atom with one mode of the quantized light field by neglecting the anti-resonant terms in the Hamiltonian. Strictly speaking, our current model including $N$ atoms is more related to the Tavis-Cummings model (see M. Tavis and F. W. Cummings, Phys. Rev. 170, 379 (1968)). For convenience, we keep the well-known name “JC model” here.
* (30) T. Holstein and H. Primakoff, Phys. Rev. 58, 1098 (1940).
* (31) Usually, when an atomic transition is coupled to only one optical/microwave field, the phase factor of the coupling strength (i.e., $\Omega$ here) can be absorbed into the atomic state by redefining the state and thus the coupling strength can be taken to be always “positive”. However, it is not the case if a given atomic transition is coupled to two independent (optical/microwave) fields (or identically coupled to two fields as considered here). In this case, one should not take the both coupling strengths to be always “positive”. In this paper, we consider the coupling strength $\Omega$ to be a real value (either positive or negative) for the sake of simplicity (here an opposite sign of $\Omega$ may be viewed to denote the opposite direction of the “external field-$\Omega$”, as seen from Eq. (8)).
* (32) E. Tiesinga, B. J. Verhaar, and H. T. C. Stoof, Phys. Rev. A 47, 4114 (1993); S. Inouye, M. R. Andrews, J. Stenger, H.-J. Miesner, D. M. Stamper-Kurn, and W. Ketterle, Nature 392, 151, (1998); S. L. Cornish, N. R. Claussen, J. L. Roberts, E. A. Cornell, and C. E. Wieman, Phys. Rev. Lett. 85, 1795 (2000).
* (33) P. O. Fedichev, Yu. Kagan, G. V. Shlyapnikov, and J. T. M. Walraven, Phys. Rev. Lett. 77, 2913 (1996); M. Theis, G. Thalhammer, K. Winkler, M. Hellwig, G. Ruff, R. Grimm, and J. H. Denschlag, Phys. Rev. Lett. 93, 123001 (2004).
* (34) P. Zhang, P. Naidon, and M. Ueda, Phys. Rev. Lett. 103, 133202 (2009).
* (35) We here do not adopt the scenario used in Ref. Chen:2008 to address the Mott-superfluid phase transition. It is because such a phase transition may be well defined in the mean-field framework only for $\omega_{b}=0$.
* (36) G. C. Duncan, Phys. Rev. A 9, 418 (1974).
* (37) K. Rzazewski, K. Wódkiewicz, and W. Zakowicz, Phys. Rev. Lett. 35, 432 (1975).
0905.0129
Physica A 389 (2010) 3193–3217
# Correlations, Risk and Crisis:
From Physiology to Finance
Alexander N. Gorban (ag153@le.ac.uk, corresponding author), Centre for Mathematical Modelling, University of Leicester, University Road, Leicester, LE1 7RH, UK
Elena V. Smirnova (seleval2008@yandex.ru), Siberian Federal University, Krasnoyarsk, 660041, Russia
Tatiana A. Tyukina (tt51@le.ac.uk), University of Leicester, Leicester, LE1 7RH, UK
###### Abstract
We study the dynamics of correlation and variance in systems under the load of
environmental factors. A universal effect in ensembles of similar systems
under the load of similar factors is described: in crisis, typically, even
before obvious symptoms of crisis appear, correlation increases, and, at the
same time, variance (and volatility) increases too. This effect is supported
by many experiments and observations on groups of humans, mice, trees, and
grassy plants, and by financial time series.
A general approach to the explanation of the effect through dynamics of
individual adaptation of similar non-interactive individuals to a similar
system of external factors is developed. Qualitatively, this approach follows
Selye’s idea about adaptation energy.
###### keywords:
Correlations, Factor, Liebig's Law, Synergy, Adaptation, Selection, Crisis
Indicator
## Introduction: Sources of Ideas and Data
In many areas of practice, from physiology to economics, psychology, and
engineering, we have to analyze the behavior of groups of many similar systems
that are adapting to the same or a similar environment. Groups of humans in
harsh living conditions (a Far North city, a polar expedition, or a hospital,
for example), trees under the influence of anthropogenic air pollution, rats
under poisoning, banks in a financial crisis, enterprises in recession, and
many other situations of this type provide us with plenty of important
problems of diagnostics and prediction.
For many such situations it was found that the correlations between individual
systems are better indicators than the values of the attributes themselves. More specifically,
in thousands of experiments it was shown that in crisis, typically, even
before obvious symptoms of crisis appear, the correlations increase, and, at
the same time, the variance (volatility) increases too (Fig. 1).
On the other hand, situations with inverse behavior were predicted
theoretically and found experimentally [3]. For some systems it was
demonstrated that after the crisis achieves its bottom, it can develop into
two directions: recovering (both the correlations and the variance decrease)
or fatal catastrophe (the correlations decrease, but the variance continues to
increase) (Fig. 1). This makes the problem more intriguing.
Figure 1: Correlations and variance in crisis. The typical picture:
$Cor\uparrow;\,Var\uparrow$ – stress; $Cor\downarrow;\,Var\downarrow$ –
recovering; $Cor\downarrow;\,Var\uparrow$ – approaching the disadaptation
catastrophe after the bottom of the crisis. In this schematic picture, axes
correspond to attributes, normalized to unit variance in the comfort state.
If we look only at the state and not at the history, then the only difference
between comfort and disadaptation in this scheme is the value of the variance:
in the disadaptation state the variance is larger, and the correlations are
low in both cases. Qualitatively, the typical behavior of an ensemble of
similar systems which are adapting to the same or a similar environment looks
as follows:
* •
In a well-adapted state the deviations of the systems’ state from the average
value have relatively low correlations;
* •
As the load of environmental factors increases, some of the systems leave the
weakly correlated comfort cloud and form a low-dimensional, highly correlated
group (an order parameter appears). With a further increase of the load, more
systems join this highly correlated group. A simple model based on Selye's
ideas about adaptation explains this effect (see Sec. 4.1.2);
* •
After the load exceeds some critical value, the order parameter disappears:
the correlations decrease, but the variance continues to increase.
There is no proof that this is the only scenario of the changes; perhaps it is
not, and it may depend, for example, on the choice of parameters.
Nevertheless, the first part (the appearance of an order parameter) is
supported by plenty of experiments, and the second part (the destruction of
the order parameter) is also supported by observations of systems near death.
Now, after 21 years of studying this effect [1, 2], we maintain that it is
universal for groups of similar systems that are under stress and have the
ability to adapt. Hence, a theory at an adequate level of universality is
needed.
In this paper we review some data for different kinds of systems: from humans
to plants [2, 5, 6, 7, 8, 9], and perform also a case study of the thirty
largest companies from the British stock market for the period 2006–2008.
In economics, we also use published results of data analysis for the equity
markets of seven major countries over the period 1960–1990 [11], for the
twelve largest European equity markets after the 1987 international equity
market crash [12], and for thirty companies from the Deutsche Aktienindex
(DAX) over the period 1988–1999 [13]. The analysis of correlations is very
important for portfolio optimization, and an increase of correlations in a
crisis decreases the possibility of risk diversification ([15], Chs. 12, 13). In
1999, it was proposed [16] to use the distance $d_{ij}=\sqrt{2(1-\rho_{ij})}$,
where $\rho_{ij}$ is the correlation coefficient, for the analysis of the
hierarchical structure of a market. (This distance for multidimensional time
series analysis was analyzed previously in Ref. [14].) The performance of this
approach was demonstrated on the stocks used to compute the Dow Jones
Industrial Average and on the portfolio of stocks used to compute the S&P 500
index. This approach was further developed and applied (together with more
standard correlation analysis) for analysis of anatomy of the Black Monday
crisis (October 19, 1987) [17]. In this analysis, hundreds of companies were
used.
Stock price changes of the largest 1000 U.S. companies were analyzed for the
2-year period 1994–1995 [18], and the statistics of several of the largest
eigenvalues of the correlation matrix were shown to deviate strongly from the
random-matrix prediction. This kind of analysis was continued for the three major US
stock exchanges, namely, the New York Stock Exchange (NYSE), the American
Stock Exchange (AMEX), and the National Association of Securities Dealers
Automated Quotation (NASDAQ) [19]. Cleaning the correlation matrix by removing
the part of the spectrum explainable by random matrix ensembles was proposed
[20]. Spectral properties of the correlation matrix were analyzed also for 206
stocks traded in Istanbul Stock Exchange Market during the 5-year period
2000–2005 [23].
Linear and nonlinear co-movements present in the Real Exchange Rate (RER) of a
group of 28 developed and developing countries were studied to clarify the
important question about “crisis contagion” [24]: do strong correlations
appear before a crisis and cause contagion, or do they grow stronger because
of the crisis? The spread of the credit crisis (2007–2008) was studied by
means of a correlation network of stocks in the S&P 500 and NASDAQ-100
indices. Current trends demonstrate that the losses in certain markets follow
a cascade or epidemic flow along the correlations of various stocks. But
whether this idea of an epidemic or cascade is a metaphor or a causal model
for this crisis is not so obvious [25].
Most of the data, which we collected by ourselves or found in publications,
support the hypothesis presented in Fig. 1. In all situations, the definitions
of stress and crisis were constructed by experts in specific disciplines on
the basis of specific knowledge. What do “better” and “worse” mean? This is a
nontrivial special question and from the point of view of very practically
oriented researchers the main outcome of modeling may be in the definition of
crisis rather than in the explanation of details [26]. In many situations we
can detect that one man’s crisis is another man’s road to prosperity.
Nevertheless, all the experiments are unbiased in the following sense: the
definitions of the “better–worse” scale were done before the correlation
analysis and did not depend on the results of that analysis. Hence, one can
state that the expert evaluation of stress and crisis can (typically) be
reproduced by the formal analysis of correlations and variance.
A basic model of such generality should include little detail, and we try to
make it as simple as possible. We represent the systems adapting to stress as
systems that optimize the distribution of an available amount of resource for
the neutralization of different harmful factors (a deficit of anything needful
is also considered a harmful factor).
The crucial question for these factor–resource models is: what is the resource
of adaptation? This question arose for the first time when Selye published the
concept of adaptation energy and experimental evidence supporting this idea
[28, 29]. After that, this notion was significantly improved [30], plenty of
indirect evidence supporting this concept was found, but this elusive
adaptation energy is still a theoretical concept, and in the modern
“Encyclopedia of Stress” we read: “As for adaptation energy, Selye was never
able to measure it…” [31]. Nevertheless, the notion of adaptation energy is
very useful in the analysis of adaptation and is now in wide use (see, for
example, [32, 33]).
The question about the nature of the adaptation resource remains important for
the economic situation too. The idea of exchange helps here: any resource can
be exchanged for another one, and the only questions are what the “exchange
rate” is, how fast this exchange can be done, what the margin is, how the
margin depends on the exchange time, and what the limit of that exchange is.
In the zeroth approximation we can simply postulate a universal adaptation
resource and hide all the exchange and recovery processes. For biophysics,
this exchange idea also seems attractive, but of course there exist some
limits on the possible exchange of different resources. Nevertheless, we can
follow Selye's arguments and postulate the adaptation energy under the
assumption that it is not an “energy” but just a pool of various exchangeable
resources. When an organism reaches the limits of resource exchangeability,
the universal non-specific stress and adaptation syndrome transforms
(disintegrates) into specific diseases. Near this limit we have to expect a
critical retardation of the exchange processes.
In biophysics, the idea of optimization requires additional explanation. The
main source of the optimality idea in biology is the formalization of natural
selection and adaptive dynamics. After works of Haldane (1932) [34] and Gause
(1934) [35] this direction, with various concepts of fitness optimization, was
further developed (see, for example, review papers [36, 37, 38]). To transfer
the evolutionary optimality principles to short and long term adaptation we
need the idea of genocopy-phenocopy interchangeability ([40], p. 117). The
phenotype modifications simulate the optimal genotype, but in a narrower
interval of change. We can expect that adaptation also yields the optimal
phenotype, but in an interval of possible changes even narrower than that for
modifications. The idea of convergence of genetic and environmental effects
was supported by an analysis of genome regulation [39] (the principle of
concentration-affinity equivalence). This gives a basis for the optimality
assumption in adaptation modeling. For ensembles of man-made systems in
economics, the idea of optimality can also be motivated by
selection-of-strategies arguments.
To analyze resource redistribution for the compensation of different
environmental factors we have to answer one more question: how is the system
of factors organized? Ecology already has a very attractive version for an
answer. This is Liebig s Law of the Minimum. The principle behind this law is
quite simple. Originally, it meant that the scarcest necessity an organism
requires will be the limiting factor to its performance. A bit more generally,
the worst factor determines the situation for an organism, and free resource
should, perhaps, be assigned for neutralization of that factor (until it loses
its leadership).
The opposite principle of factor organization is synergy: the superlinear
mutual amplification of factors. Adaptation to Liebig’s system of factors, or
to any synergistic system, leads to two paradoxes of adaptation:
* •
Law of the Minimum paradox (Sec. 4.2): If for a randomly selected pair, (
State of environment – State of organism ), the Law of the Minimum is valid
(everything is limited by the factor with the worst value) then, after
adaptation, many factors (the maximum possible number of them) are equally
important.
* •
Law of the Minimum inverse paradox (Sec. 4.3): If for a randomly selected
pair, ( State of environment – State of organism ), many factors are equally
important and superlinearly amplify each other then, after adaptation, a
smaller amount of factors is important (everything is limited by the factors
with the worst non-compensated values, the system approaches the Law of the
Minimum).
After introduction of the main ideas and data sources, we are in a position to
start more formal consideration.
## 1 Indicators
How can we measure correlations between various attributes in a population? If
we have two variables, $x$ and $y$, the answer is simple: we measure
$(x_{i},y_{i})$ for different individuals ($i=1,...n$, $n>1$ is the number of
measurements). The sample correlation coefficient (the Pearson coefficient) is
$r=\frac{\langle xy\rangle-\langle x\rangle\langle
y\rangle}{\sqrt{\langle(x_{i}-\langle
x\rangle)^{2}\rangle}\sqrt{\langle(y_{i}-\langle y\rangle)^{2}\rangle}}$ (1)
where $\langle...\rangle$ stands for the sample average value: $\langle
x\rangle=\frac{1}{n}\sum_{i}x_{i}$.
If individuals are characterized by more than two attributes
$\\{x^{l}|l=1,...m\\}$ then we have $m(m-1)/2$ correlation coefficients
between them, $r_{jk}$. In biophysics, we usually analyze correlations between
attributes, and each individual organism is represented as a vector of
attribute values.
In analysis of financial time series, the standard situation may be considered
as a “transposed” one. Each object (stock, enterprise, …) is represented by a
vector of values of a variable (asset return, for example) in a window of time
and we study correlations between objects. This is, essentially, just a
difference between $X$ and $X^{T}$, where $X$ is the matrix of data. In
correlation analysis, this difference appears in two operations:
centralization (when we subtract means in the computation of covariance) and
normalization (when we transform the covariance into the correlation
coefficient). In one case, we centralize and normalize the columns of $X$: we
subtract the average values in columns and divide the columns by their
standard deviations. In the other case, we apply these operations to the rows
of $X$. For financial time series, the synchronous averages and variances
(“varieties”) and the time averages and variances (“volatilities”) have
different statistical properties. This was clearly demonstrated in a special
case study [41]. Nevertheless, this difference does not appear very important
for the analysis of the total level of correlations in crisis: only the
magnitude of the correlation changes, and correlations in time are uniformly
smaller than the synchronous ones, in agreement with the observations of Ref.
[41]. More details are presented in the special case study below.
In our case study we demonstrated that in the analysis of financial time
series it may also be convenient to study correlations between parameters, not
between individuals. This means that we can study the correlation between any
two time moments and consider the data from different individuals as values of
a random 2D vector. It is necessary to stress that these correlations between
two time moments are very different from the standard autocorrelations for
stationary time series (which characterize the sample of all pairs of time
moments with a given lag in time).
For example, let $X_{it}$ be the log-return value of the $i$th stock at time moment
$t$ ($i=1,...n$, $t=\tau+1,...\tau+T$). Each row of the data matrix $X_{it}$
corresponds to an individual stock and each column corresponds to a time
moment. If we normalize and centralize data in rows and calculate the
correlation coefficients between rows ($r_{ij}=\sum_{t}X_{it}X_{jt}$ for
centralized and normalized data) then we find the correlations between stocks.
If we normalize and centralize data in columns and calculate the correlation
coefficients between them ($r_{t_{1}t_{2}}=\sum_{i}X_{it_{1}}X_{it_{2}}$ for
centralized and normalized data) then we find the correlations between time
moments. In a crisis, the dynamics of the correlations between stocks is
similar to the behavior of the correlations between time moments. One benefit
of using the correlations between time moments is the absence of averaging
over time (locality): this correlation coefficient depends only on the data at
two time moments. This allows one to analyze the anatomy of a crisis in time.
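As a minimal sketch of these two “transposed” correlation computations (our own illustration; the random data below are purely illustrative and not from the paper), in Python one may write:

```python
import numpy as np

def centre_normalize(X, axis):
    """Subtract the mean and divide by the standard deviation along `axis`."""
    Xc = X - X.mean(axis=axis, keepdims=True)
    return Xc / Xc.std(axis=axis, keepdims=True)

# X[i, t]: log-return of stock i at time moment t (rows = stocks, columns = moments)
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))              # illustrative random data only

# Correlations between stocks: centre/normalize each row, r_ij = (1/T) sum_t X_it X_jt
Xrow = centre_normalize(X, axis=1)
r_stocks = Xrow @ Xrow.T / X.shape[1]      # 30 x 30 matrix

# Correlations between time moments: centre/normalize each column,
# r_{t1 t2} = (1/n) sum_i X_{i t1} X_{i t2}
Xcol = centre_normalize(X, axis=0)
r_times = Xcol.T @ Xcol / X.shape[0]       # 20 x 20 matrix
```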
To collect information about correlations between many attributes in one
indicator, it is possible to use various approaches. First of all, we can
evaluate the non-diagonal part of the correlation matrix in any norm, for
example, in $L_{p}$ norm
$\|r\|_{p}=\left(\sum_{j>k}|r_{jk}|^{p}\right)^{\frac{1}{p}}.$ (2)
If one would like to study strong correlations, then it may be better to
delete terms with values below a threshold $\alpha>0$ from this sum:
$G_{p,\alpha}=\left(\sum_{j>k,\
|r_{jk}|>\alpha}|r_{jk}|^{p}\right)^{\frac{1}{p}}.$ (3)
This quantity $G_{p,\alpha}$ is the $p$-weight of the $\alpha$-correlation
graph. The vertices of this graph correspond to variables, and two vertices
are connected by an edge if the absolute value of the corresponding sample
correlation coefficient exceeds $\alpha$: $|r_{jk}|>\alpha$. In practice, the
most used indicator is the weight $G=G_{1,0.5}$, which corresponds to $p=1$
and $\alpha=0.5$.
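A minimal Python sketch of this indicator (ours, not from the paper; the name `data` and its layout, one variable per row, are assumptions):

```python
import numpy as np

def correlation_graph_weight(r, p=1.0, alpha=0.5):
    """p-weight G_{p,alpha} of the alpha-correlation graph, Eq. (3).

    r is an m x m sample correlation matrix; only the strictly upper triangle
    (j > k) enters the sum, and terms with |r_jk| <= alpha are dropped.
    """
    off_diag = np.abs(r[np.triu_indices_from(r, k=1)])
    strong = off_diag[off_diag > alpha]
    return float((strong ** p).sum() ** (1.0 / p))

# The most used indicator in the text is G = G_{1, 0.5}:
# r = np.corrcoef(data)   # data: rows = variables, columns = observations
# G = correlation_graph_weight(r, p=1, alpha=0.5)
```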
Correlation graphs have been used for decades for the visualization and
analysis of correlations (see, for example, [1, 42, 43]). Recently,
applications of this approach have been intensively developing in data mining
[44, 45, 46] and econophysics [47, 48].
Another group of indicators is produced from the principal components of the
data. The principal components are eigenvectors of the covariance matrix and
depend on the scales. Under normalization of scales to unit variance, we deal
with the correlation matrix. Let
$\lambda_{1}\geq\lambda_{2}\geq...\lambda_{m}\geq 0$ be eigenvalues of the
correlation matrix. In this paper, we use the eigenvalues and eigenvectors of
the correlation matrix. It is obvious that $\langle\lambda\rangle=1$ and
$m^{p-1}\geq\langle\lambda^{p}\rangle\geq 1$ for $p>1$,
$\langle\lambda^{p}\rangle=1$ if all non-diagonal correlation coefficients are
zero and $\langle\lambda^{p}\rangle=m^{p-1}$ if all correlation coefficients
are $\pm 1$. To select the dominant part of principal components, it is
necessary to separate the “random” part from the “non-random” part of them.
This separation was widely discussed (see, for example, the expository review
[49]).
The simplest decision is given by Kaiser's significance rule: the significant
eigenvalues are those which are greater than the average value,
$\lambda_{i}>\langle\lambda\rangle$. For the eigenvalues of the correlation
matrix studied here, this means $\lambda_{i}>1$. This rule works reasonably
well when there are several eigenvalues significantly greater than one and the
others are smaller, but for a matrix which is close to a random matrix the
performance may not be so good. In such cases this method overestimates the
number of principal components.
In econophysics, another simple criterion for selection of dominant
eigenvalues has become popular [50, 19, 20, 23]. Let us imagine that the
dimension of the data vector $m$ is large and the amount of data points $n$ is
also large, but their ratio $q=n/m$ is not. This is the typical situation when
we analyze data about thousands of stocks: in this case the time window cannot
be much larger than the dimension of the data vector. Let us compare our
analysis of real correlations to the fictitious correlations, which appear in
$m\times n$ data matrices with independent, centralized, normalized and
Gaussian matrix elements. The distribution of the sample covariance matrix is
the Wishart distribution [21]. If $n\to\infty$ for given $m$ then those
fictitious correlations disappear, but if both $m,n\to\infty$ for constant
$q>1$ then there exists the limit distribution of eigenvalues $\lambda$ with
density
$\rho(\lambda)=\frac{q}{2\pi}\sqrt{\left(\frac{\lambda_{\max}}{\lambda}-1\right)\left(1-\frac{\lambda_{\min}}{\lambda}\right)}\,,\quad \lambda_{\min}\leq\lambda\leq\lambda_{\max};\qquad \lambda_{\max/\min}=1+\frac{1}{q}\pm 2\sqrt{\frac{1}{q}}\,.$ (4)
If the number of points is less than the dimension of the data ($q<1$), the
same formula with $q$ replaced by $1/q$ is valid for the distribution of the
non-zero eigenvalues.
Instead of Kaiser’s rule for dominant eigenvalues of the correlation matrix we
get $\lambda_{i}>\lambda_{\max}$ with $\lambda_{\max}$ given by Eq. (4). If
$q$ grows to $\infty$, this new rule turns back into Kaiser’s rule. If $q$ is
minimal ($q=1$), then the proposed change of Kaiser’s rule is maximal,
$\lambda_{\max}=4$ and for dominant eigenvalues of the correlation matrix it
should be $\lambda_{i}>4$. This new estimate is just an analogue of Kaiser's
rule for the case when the number of data vectors is comparable with the
dimension of the data space and, therefore, the data set is far from the
conditions of the law of large numbers.
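A short sketch (ours, not from the paper) of this random-matrix selection rule, using the upper edge $\lambda_{\max}$ of Eq. (4):

```python
import numpy as np

def mp_lambda_max(n, m):
    """Upper edge lambda_max of the limiting eigenvalue density, Eq. (4), with q = n/m.

    For q < 1 (fewer data points than variables) q is replaced by 1/q,
    as stated in the text for the non-zero eigenvalues."""
    q = n / m
    if q < 1.0:
        q = 1.0 / q
    return 1.0 + 1.0 / q + 2.0 * np.sqrt(1.0 / q)

def dominant_eigenvalues(r, n):
    """Eigenvalues of the correlation matrix r above the random-matrix edge."""
    m = r.shape[0]
    lam = np.linalg.eigvalsh(r)[::-1]   # eigenvalues in descending order
    return lam[lam > mp_lambda_max(n, m)]
```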
Another popular criterion for the selection of dominant eigenvalues gives the
so-called broken stick model. Consider the closed interval $J=[0,1]$. Suppose
$J$ is partitioned into $m$ subintervals by randomly selecting $m-1$ points
from a uniform distribution in the same interval. Denote by $l_{k}$ the length
of the $k$-th subinterval in the descending order. Then the expected value of
$l_{k}$ is
$\mathbf{E}(l_{k})=\frac{1}{m}\sum_{j=k}^{m}\frac{1}{j}\ .$ (5)
Following the broken stick model, we have to include into the dominant part
those eigenvalues $\lambda_{k}$ (principal components), for which
$\frac{\lambda_{1}}{\sum_{i}\lambda_{i}}>\mathbf{E}(l_{1})\ \&\
\frac{\lambda_{2}}{\sum_{i}\lambda_{i}}>\mathbf{E}(l_{2})\ \&\ ...\ \&\
\frac{\lambda_{k}}{\sum_{i}\lambda_{i}}>\mathbf{E}(l_{k})\ .$ (6)
If the amount of data vectors $n$ is less than the data dimension $m$, then
$m-n$ eigenvalues are zeros, and in Eqs. (5), (6) one should take $n$
subintervals instead of $m$ ones.
It is worth mentioning that the trace of the correlation matrix is $m$, so the
broken stick rule transforms (for $n\geq m$) into
$\lambda_{i}>\sum_{j=i}^{m}\frac{1}{j}$ ($i=1,...k$). From the practical point
of view, this method slightly underestimates the number of dominant
eigenvalues. There are other methods based on random matrix ensembles, but
nobody knows the exact dimension of the empirical data, and the broken stick
model works satisfactorily and remains “the method of choice”.
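A minimal sketch (ours) of the broken stick selection rule of Eqs. (5)-(6):

```python
import numpy as np

def broken_stick_dimension(r, n_samples=None):
    """Number of dominant principal components by the broken stick rule, Eqs. (5)-(6).

    r: m x m sample correlation matrix. If the number of data vectors n is
    smaller than m, only n subintervals are used, as noted in the text."""
    m = r.shape[0]
    lam = np.linalg.eigvalsh(r)[::-1]                    # descending eigenvalues
    k_max = m if n_samples is None else min(m, n_samples)
    # Expected broken-stick lengths E(l_k) = (1/k_max) * sum_{j=k}^{k_max} 1/j, Eq. (5)
    expected = np.array([np.sum(1.0 / np.arange(k, k_max + 1)) / k_max
                         for k in range(1, k_max + 1)])
    shares = lam[:k_max] / lam.sum()
    dim = 0
    for share, e in zip(shares, expected):               # Eq. (6): all conditions up to k
        if share > e:
            dim += 1
        else:
            break
    return dim
```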
To compare the broken stick model with Kaiser's rule, let us mention that the
first principal component is always significant according to Kaiser's rule
(provided there exists at least one nonzero non-diagonal correlation
coefficient), but in the broken stick model it needs to be sufficiently large:
the inequality $\lambda_{1}>\sum_{j=1}^{m}\frac{1}{j}$ should hold. In high
dimension $m$ we can approximate the sum by an integral: $\lambda_{1}\gtrsim\ln m$.
If we have the dominant eigenvalues,
$\lambda_{1}\geq\lambda_{2}\geq...\lambda_{l}>0$, $l<m$, then we can produce
some other measures of the sample correlation:
$\frac{\lambda_{1}}{\lambda_{l}};\;\;\sum_{j=1}^{l-1}\frac{\lambda_{j}}{\lambda_{j+1}};\;\;\frac{1}{m}\sum_{j=1}^{l}\lambda_{j}\
.$ (7)
Together with $\langle\lambda^{p}\rangle$ ($p>1$, the usual choice is $p=2$)
this system of indicators can be used for an analysis of empirical
correlations.
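For completeness, a small sketch (ours) of the indicators in Eq. (7), given the dominant eigenvalues:

```python
import numpy as np

def eigenvalue_indicators(lam_dominant, m):
    """Indicators built from the dominant eigenvalues lambda_1 >= ... >= lambda_l, Eq. (7)."""
    lam = np.sort(np.asarray(lam_dominant, dtype=float))[::-1]
    ratio_extreme = float(lam[0] / lam[-1])             # lambda_1 / lambda_l
    ratio_chain = float(np.sum(lam[:-1] / lam[1:]))     # sum_j lambda_j / lambda_{j+1}
    trace_share = float(lam.sum() / m)                  # (1/m) * sum of dominant eigenvalues
    return ratio_extreme, ratio_chain, trace_share
```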
Recently [22], the eigenvalues and eigenvectors of the matrix of absolute
values of the correlation coefficients were used for the analysis of stocks
traded on the New York Stock Exchange (NYSE). The transformation from the
correlation matrix to the matrix of absolute values was justified by
interpreting the absolute values as measures of interaction strength, without
considering whether the interaction is positive or negative. This approach
makes it possible to apply the classical theory of positive matrices, as well
as their graphical representation.
The correlation matrix for financial time series is often positive. Therefore,
it is often possible to apply the theory of positive matrices to analysis of
correlations in financial time series.
The choice of possible indicators is very rich, but happily, many case studies
have shown that in the analysis of adaptation the simplest indicator, the
weight $G$ of the correlation graph, performs well (as well as or better than
all the other indicators; see the case study below). Similar behavior of the
various indicators, from the simple weight of the correlation graph to more
sophisticated characteristics based on principal component analysis, is to be
expected. Nevertheless, it is desirable to supplement the case studies by
comparisons of the behavior of different indicators (for example, by scatter
plots, correlation analysis, or other statistical tools). In our case study
(Sec. 3) we demonstrate that the indicators do indeed behave similarly.
A similar observation was made in Ref. [17], where the “asset tree”, the
recently introduced minimum-spanning-tree description of correlations between
stocks, was studied. The mean of the normalized dynamic asset tree lengths was
considered a promising indicator of market dynamics. It appeared that a simple
average correlation coefficient gives the same signal in time as this more
sophisticated indicator (compare Fig. 1 and Fig. 2 of Ref. [17]). In Fig. 12
of that paper, very similar behavior of the mean correlation coefficient, the
normalized tree length, and the risk of the minimum-risk portfolio, as
functions of time, was demonstrated.
In many publications in econophysics the average correlation coefficient is
used instead of the sums of absolute values in Eq. (3). This is possible
because in many financial applications the orientation of the scales is fixed
and the difference between positive and negative correlations is very
important, for example, for portfolio optimization. In a more general
situation we have to use absolute values because we cannot coordinate a priori
the direction of different axes.
## 2 Correlation and Risk in Physiology
The effect of a simultaneous increase of correlations and variance under
stress is supported by a series of data from human physiology and medicine. In
this section, we briefly describe several typical examples. This is a review
of already published experimental work; more details are presented in an
extended e-print [27] and in the original works.
### 2.1 Data from Human Physiology
The first physiological system we studied, in the 1980s, was the lipid
metabolism of healthy newborn babies born in the temperate belt of Siberia
(the comfort zone) and in migrant families of the same ethnic origin in a Far
North city (the parents lived there in standard city conditions). Blood
samples were taken in the morning, on an empty stomach, at the same time each
day. All the data were collected during the summer. Eight lipid fractions were
analyzed [1]. The resulting correlation graphs are presented in Fig. 2a. Here
solid lines represent correlation coefficients $|r_{ij}|\geq 0.5$ and dashed
lines represent correlation coefficients $0.5>|r_{ij}|\geq 0.25$. The variance
monotonically increases with the weight of the correlation graph (Fig. 2b).
Figure 2: a) Correlation graphs of lipid metabolism for newborn babies.
Vertices correspond to different fractions of lipids, solid lines correspond
to correlation coefficient between fractions $|r_{ij}|\geq 0.5$, dashed lines
correspond to $0.5>|r_{ij}|\geq 0.25$. Upper row – Far North (FN), lower row –
the temperate belt of Siberia (TBS). From the left to the right: 1st-3rd days
(TBS – 123 and FN – 100 babies), 4th-6th days (TBS – 98 and FN – 99 babies),
7th-10th days (TBS – 35 and FN – 29 babies). b) The weight of the correlation
graphs (solid lines) and the variance (dashed lines) for these groups.
Many other systems were studied. We analyzed the activity of enzymes in human
leukocytes during the short-term adaptation (20 days) of groups of healthy
20-30 year old men who changed their climate zone [51, 52]:
* •
From the temperate belt of Siberia (Krasnoyarsk, comfort zone) to Far North in
summer and in winter;
* •
From Far North to the South resort (Sochi, Black Sea) in summer;
* •
From the temperate belt of Russia to the South resort (Sochi, Black Sea) in
summer.
This analysis supports the basic hypothesis and, on the other hand, could be
used for prediction of the most dangerous periods in adaptation, which need
special care.
We selected a group of 54 people who moved to the Far North and had some
illness during the period of short-term adaptation. After 6 months in the Far
North, this test group demonstrated much higher correlations between the
activities of enzymes than the control group (98 people without illness during
the adaptation period). We analyzed the activity of enzymes (alkaline
phosphatase, acid phosphatase, succinate dehydrogenase,
glyceraldehyde-3-phosphate dehydrogenase, glycerol-3-phosphate dehydrogenase,
and glucose-6-phosphate dehydrogenase) in leukocytes: $G=5.81$ in the test
group versus $G=1.36$ in the control group. To compare the dimensionless
variance for these groups, we normalize the activity of enzymes to unit sample
means (it is senseless to use the trace of the covariance matrix without
normalization because the normal activities of the enzymes differ by orders of
magnitude). For the test group, the sum of the enzyme variances is 1.204, and
for the control group it is 0.388.
Obesity is a serious problem of contemporary medicine in developed countries.
The study was conducted on patients (more than 70 people) with different
levels of obesity [7]. The patients were divided into three groups by the
level of disease. A database with 50 attributes was studied (blood morphology,
cholesterol level including fractions, creatinine, urea).
During 30 days the patients received a standard treatment consisting of a
diet, physical activity, pharmacological treatment, physical therapy, and
hydrotherapy. It was shown that the weight of the correlation graph $G$ of the
more informative parameters was initially high and depended monotonically on
the level of sickness. It decreased during therapy.
### 2.2 Data from Ecological Physiology of Plants
The effect (Fig. 1) exists for plants too. It was demonstrated, for example,
by analysis of the impact of emissions from a heat power station on Scots pine
[10]. For diagnostic purposes the secondary metabolites of phenolic nature
were used. They are much more stable than the primary products and hold the
information about the past impact of environment on the plant organism for a
longer time.
The test group consisted of 10 Scots pines (Pinus sylvestris L.) in a 40-year-
old stand of the II class in the emission tongue 10 km from the power station.
The station had been operating on brown coal for 45 years. The control group
of 10 Scots pines was from a stand of the same age and forest type, growing
outside the industrial emission area. The needles used for analysis were one
year old, taken from shoots in the middle part of the crown. The samples were
taken in spring, during the bud swelling period. The individual composition of
the alcohol extract of the needles was studied by high-efficiency liquid
chromatography. Twenty-six individual phenolic compounds were identified in
all samples and used in the analysis.
No reliable difference was found between the average compositions of the test
and control groups. For example, the results for proanthocyanidin content
(mg/g dry weight) were as follows:
* •
Total 37.4$\pm$3.2 (test) versus 36.8$\pm$2.0 (control);
Nevertheless, the sample variance in the test group was 2.56 times higher, and
the difference in the correlations was huge: $G=17.29$ for the test group
versus $G=3.79$ in the control group.
Grassy plants under trampling load also demonstrate a similar effect [9].
Grassy plants in oak forests were studied. For the analysis, forest fragments
were selected in which the densities of trees and bushes were the same; the
difference between these fragments was the damage to the soil surface by
trampling. The studied physiological attributes were the height of sprouts,
the length, diameter, and number of roots, the area of leaves, and the area of
roots. Again, the weight of the correlation graph and the variance increased
monotonically with the trampling load.
### 2.3 The Problem of “No Return” Points
It is practically important to understand where the system is going: (i) to
the bottom of the crisis with the possibility to recover after that bottom,
(ii) to the normal state, from the bottom, or (iii) to the “no return” point,
after which it cannot recover.
Situations between the comfort zone and the crisis have been studied for
dozens of systems, and the main effect is supported by much empirical
evidence. The situation near the “no return” point is much less studied.
Nevertheless, some observations support the hypothesis presented for this case
in Fig. 1: when approaching the fatal situation, correlations decrease and
variance increases.
This problem was studied with the analysis of fatal outcomes in oncological
[53] and cardiological [4] clinics, and also in special experiments with acute
hemolytic anemia caused by phenylhydrazine in mice [8]. The main result is
that when approaching the no-return point, the correlations disappear ($G$
decreases), while the variance typically continues to increase.
For example, the dynamics of correlations between physiological parameters
after myocardial infarction was studied in Ref. [4]. For each patient (more
than 100 people), three groups of parameters were measured: echocardiography-
derived variables (end-systolic and end-diastolic indexes, stroke index, and
ejection fraction), parameters of central hemodynamics (systolic and diastolic
arterial pressure, stroke volume, heart rate, the minute circulation volume,
and specific peripheral resistance), biochemical parameters (lactate
dehydrogenase, the heart isoenzyme of lactate dehydrogenase LDH1, aspartate
transaminase, and alanine transaminase), and also leucocytes. Two groups were
analyzed after 10 days of monitoring: the patients with a lethal outcome, and
the patients with a survival outcome (with comparable numbers of group
members). These groups do not differ significantly in the average values of
the parameters and are not separable in the space of measured attributes.
Nevertheless, the dynamics of the correlations in the two groups are
essentially different. For the fatal outcome, the correlations were stably low
(with a short pulse on the 7th day); for the survival outcome, the
correlations were higher and grew monotonically. This growth can be
interpreted as a return to the “normal crisis” (the central position in Fig. 1).
Figure 3: Dynamics of weight of the correlation graphs of echocardiography-
derived variables, parameters of central hemodynamics, biochemical parameters,
and also leucocytes during 10 days after myocardial infarction for two groups
of patients: for the survival outcome and for the fatal outcome. Here $G$ is
the sum of the strongest correlations $|r_{ij}|>0.4$, $i\neq j$ [4].
Topologically, the correlation graph for the survival outcome included two
persistent triangles with strong correlations: the central hemodynamics
triangle, minute circulation volume – stroke volume – specific peripheral
resistance, and the heart hemodynamics triangle, specific peripheral
resistance – stroke index – end-diastolic indexes. The group with a fatal
outcome had no such persistent triangles in the correlation graph.
In the analysis of fatal outcomes for oncological patients and in special
experiments with acute hemolytic anemia caused by phenylhydrazine in mice one
more effect was observed: for a short time before death the correlations
increased, and then fell down (see also the pulse in Fig. 3). This short pulse
of the correlations (in our observations, usually for one day, a day which
precedes the fatal outcome) is opposite to the major trend of the systems in
their approach to death. We cannot claim universality of this effect and it
requires additional empirical study.
## 3 Correlations and Risk in Economics. Empirical Data
### 3.1 Thirty Companies from the FTSE 100 Index. A Case Study
#### 3.1.1 Data and Indicators
For the analysis of correlations in financial systems we used the daily
closing values over the time period 03.01.2006 – 20.11.2008 for companies that
are registered in the FTSE 100 index (Financial Times Stock Exchange Index).
The FTSE 100 is a market-capitalization-weighted index representing the
performance of the 100 largest UK-domiciled blue-chip companies which pass
screening for size and liquidity. The index represents approximately 88.03% of
the UK's market capitalization. FTSE 100 constituents are all traded on the
London Stock Exchange's SETS trading system. We selected 30 companies that had
the highest market capitalization (on the 1st of January 2007) and that also
represent different types of business. The list of the companies and business
types is displayed in Table 1.
Table 1: Thirty largest companies for analysis from the FTSE 100 index Number | Business type | Company | Abbreviation
---|---|---|---
1 | Mining | Anglo American plc | AAL
2 | | BHP Billiton | BHP
3 | Energy (oil/gas) | BG Group | BG
4 | | BP | BP
5 | | Royal Dutch Shell | RDSB
6 | Energy (distribution) | Centrica | CNA
7 | | National Grid | NG
8 | Finance (bank) | Barclays plc | BARC
9 | | HBOS | HBOS
10 | | HSBC HLDG | HSBC
11 | | Lloyds | LLOY
12 | Finance (insurance) | Admiral | ADM
13 | | Aviva | AV
14 | | LandSecurities | LAND
15 | | Prudential | PRU
16 | | Standard Chartered | STAN
17 | Food production | Unilever | ULVR
18 | Consumer | Diageo | DGE
19 | goods/food/drinks | SABMiller | SAB
20 | | TESCO | TSCO
21 | Tobacco | British American Tobacco | BATS
22 | | Imperial Tobacco | IMT
23 | Pharmaceuticals | AstraZeneca | AZN
24 | (inc. research) | GlaxoSmithKline | GSK
25 | Telecommunications | BT Group | BTA
26 | | Vodafone | VOD
27 | Travel/leisure | Compass Group | CPG
28 | Media (broadcasting) | British Sky Broadcasting | BSY
29 | Aerospace/ | BAE System | BA
30 | defence | Rolls-Royce | RR
Data for these companies are available from the Yahoo!Finance web-site. For
data cleaning we also use information for the selected period available at the
London Stock Exchange web-site. Let $x_{i}(t)$ denote the closing stock price
for the $i$th company at the moment $t$, where $i=\overline{1,30}$,
$t=\overline{1,732}$. We analyze the correlations of the logarithmic returns
$x^{l}_{i}(t)=\ln\frac{x_{i}(t)}{x_{i}(t-1)}$, $t=\overline{2,732}$, in sliding
time windows of length $p=20$; this corresponds approximately to 4 weeks of 5
trading days each, $t=\overline{p+1,732}$. The correlation coefficients
$r_{ij}(t)$ and all indicators for time moment $t$ are calculated in the time
window $[t-p,t-1]$, which precedes $t$. This is important if we would like to
consider changes in these indicators as precursors of crisis.
The information about the level of correlations could be represented in
several ways. Here we compare 4 indicators:
* •
The non-diagonal part of the correlation matrix in $L_{2}$ norm - $\|r\|_{2}$;
* •
The non-diagonal part of the correlation matrix in $L_{1}$ norm - $\|r\|_{1}$;
* •
The sum of the strongest elements $G=\sum_{j>i,|r_{ij}|>0.5}|r_{ij}|$;
* •
The number Dim of principal components estimated by the broken stick model.
The dynamics of the first three indicators are quite similar. Scatter diagrams
(Fig. 4) demonstrate a strong connection between the indicators. We used the
weight of the correlation graph $G$ (the sum of the strongest correlations
$|r_{ij}|>0.5$, $i\neq j$) for our further analysis.
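As a sketch of how these indicators can be computed in a sliding window of log-returns (our own illustration; the array layout and the use of the biased variance are assumptions, not specified in the paper):

```python
import numpy as np

def log_returns(prices):
    """Logarithmic returns x^l_i(t) = ln[x_i(t)/x_i(t-1)]; prices has shape (n_stocks, n_days)."""
    return np.diff(np.log(prices), axis=1)

def window_indicators(returns, t, p=20, alpha=0.5):
    """Indicators at time t, computed in the preceding window [t-p, t-1] of log-returns."""
    window = returns[:, t - p:t]                     # shape (n_stocks, p)
    r = np.corrcoef(window)                          # correlations between stocks
    off = np.abs(r[np.triu_indices_from(r, k=1)])    # non-diagonal part, |r_ij|, j > i
    G = off[off > alpha].sum()                       # weight of the correlation graph
    l1 = off.sum()                                   # ||r||_1
    l2 = np.sqrt((off ** 2).sum())                   # ||r||_2
    total_var = window.var(axis=1).sum()             # summed variance of log-returns
    return G, l1, l2, total_var
```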
Figure 4: Scatter diagrams for three pairs of indicators: $G-\|r\|_{1}$,
$G-\|r\|_{2}$, and $G-$Dim, where Dim is the number of principal components
estimated by the broken stick model. Figure 5: Dynamics of the FTSE index, $G$,
variance, and dimension estimated by the broken stick model.
Fig. 5 allows us to compare dynamics of correlation, dimension and variance to
the value of FTSE100. Correlations increase when the market goes down and
decrease when it recovers. Dynamics of variance of log-returns has the same
tendency. To analyze the critical periods in more detail, let us select
several time intervals and several positions of the sliding window inside
these intervals.
#### 3.1.2 Correlation Graphs for Companies
Figure 6: Correlation graphs for six positions of the sliding time window on
the interval 10/04/2006 - 21/07/2006. a) Dynamics of FTSE100 (dashed line) and
of $G$ (solid line) over the interval; vertical lines correspond to the points
that were used for the correlation graphs. b) The thirty companies selected
for analysis and their distribution over various sectors of the economy. c)
The correlation graphs for the first three points; FTSE100 decreases and the
correlation graph becomes more connected. d) The correlation graphs for the
last three points; FTSE100 increases and the correlation graph becomes less
connected.
Figure 7: Correlation graphs for six positions of the sliding time window on
the interval 02/06/2008 - 04/11/2008. a) Dynamics of FTSE100 (dashed line) and of
$G$ (solid line) over the interval; vertical lines correspond to the points
that were used for the correlation graphs. b) Thirty companies for analysis
and their distribution over various sectors of the economy. c) The correlation
graphs for the first three points: FTSE100 decreases, and the correlation graph
becomes more connected. Between the third and the 4th points FTSE100 slightly
increased, correlation decreased, and the first graph in the next row is more
rarefied than at the third point. d) The correlation graphs for the last three
points: FTSE100 decreases, and the correlation graph becomes more connected.
We extracted two intervals for more detailed analysis. The first interval,
10/04/2006 - 21/07/2006, represents the FTSE index decrease and restoration in
spring and summer 2006. The second interval, 02/06/2008 - 04/11/2008, is a
part of the financial crisis. In each interval we selected six points and
analyzed the structure of correlations for each of these points (for the time
window which precedes that point). For each selected point we create a
correlation graph (Figs. 6c,d, 7c,d): solid lines represent correlation
coefficients $|r_{ij}|\geq\sqrt{0.5}$ ($\sqrt{0.5}=\cos(\pi/4)\approx 0.707$),
dashed lines represent correlation coefficients $\sqrt{0.5}>|r_{ij}|\geq 0.5$.
On these correlation graphs it is easy to observe how critical correlations
appear, how they are distributed between different sectors of the economy,
and how the crisis moves from one sector to another.
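A minimal sketch of how such a thresholded correlation graph can be assembled from one window of log-returns is given below; plain edge lists are used instead of a drawing routine, and the tickers and data are purely illustrative.

```python
import numpy as np

def correlation_graph(r, labels, strong=np.sqrt(0.5), weak=0.5):
    """Split pairs (i, j), i < j, into 'solid' edges (|r| >= strong) and
    'dashed' edges (weak <= |r| < strong) of the correlation graph."""
    solid, dashed = [], []
    n = r.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            c = abs(r[i, j])
            if c >= strong:
                solid.append((labels[i], labels[j], round(r[i, j], 2)))
            elif c >= weak:
                dashed.append((labels[i], labels[j], round(r[i, j], 2)))
    return solid, dashed

# example with a random window of log-returns for five tickers
rng = np.random.default_rng(0)
labels = ["AAL", "BHP", "BP", "HSBC", "VOD"]
window = rng.standard_normal((20, 5))
r = np.corrcoef(window, rowvar=False)
solid, dashed = correlation_graph(r, labels)
print("solid edges:", solid)
print("dashed edges:", dashed)
```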
There is no symmetry between the periods of the FTSE index decrease and
recovery. For example, in Fig. 6c we see that at the beginning (during the
fall) the correlations inside the financial sector are important and some
correlations inside industry are also high, but in the corresponding
recovery period (Fig. 6d) the correlations between industry and financial
institutions become more important.
All the indicators demonstrate the most interesting behavior at the end of
2008 (Fig. 5). The growth of variance in the last peak is extremely high, but
the increase of correlations is rather modest. If we follow the logic of the
basic hypothesis (Fig. 1), then we should suspect that the system is going to
“the other side of crisis”, not to recovery but to disadaptation; this may be
the most dangerous symptom.
#### 3.1.3 Graphs for Correlations in Time
The vector of attributes that represents a company is a 20-day fragment of the
time series. In standard biophysical research we study correlations between
attributes of an individual and only rarely the correlations between individuals
for different attributes. In econophysics the standard situation is the opposite.
Correlation in time is known to be smaller than the correlation between companies
[41]. Nevertheless, correlation between days in a given time window may be a
good indicator of crisis.
Let us use here $G_{T}$ for the weight of the correlation graph in time.
Because correlation in time is smaller than between stocks, we select here
another threshold: $G_{T}$ is the sum of the correlation coefficients with
absolute value greater than $0.25$. The FTSE dynamics together with the values of
$G_{T}$ are presented in Fig. 8. Solid lines represent a correlation
coefficient $|r_{ij}|\geq 0.5$, dashed lines represent a correlation
coefficient $0.5>|r_{ij}|\geq 0.25$.
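Computationally, the only difference from $G$ is that the correlation matrix is taken between days rather than between companies, i.e. the window matrix is transposed. A minimal sketch follows; the data are synthetic, the variable names are ours, and we read $G_{T}$ as the sum of the absolute values of the above-threshold coefficients, by analogy with $G$.

```python
import numpy as np

def g_time(logret, t, p=20, threshold=0.25):
    """Weight of the correlation graph between days: correlations are computed
    across companies for each pair of days in the window [t-p, t-1]."""
    window = logret[t - p:t, :]              # p days x n companies
    r_days = np.corrcoef(window)             # p x p correlations between days
    off = r_days[np.triu_indices_from(r_days, k=1)]
    return np.sum(np.abs(off)[np.abs(off) > threshold])

# toy usage with a synthetic log-return matrix (732 days x 30 companies)
rng = np.random.default_rng(0)
logret = 0.01 * rng.standard_normal((732, 30))
print(g_time(logret, t=100))
```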
Figure 8: Dynamics of the market $X_{FTSE}$, weight of correlation $G_{T}$ (the
sum of the correlation coefficients with absolute value greater than $0.25$),
Variance (volatility), and dimension of the correlation matrix estimated by
the broken stick model.
In Figs. 9 and 10 we combine graphs of correlations between days, computed for
the 20 trading days prior to the selected days.
Figure 9: Graphs for correlation in time for six positions of the sliding time
window on the interval 10/04/2006 - 21/07/2006. a) Dynamics of FTSE100 (dashed
line), $G$ (solid line) and $G_{T}$ (dash-and-dot line) over the interval;
vertical lines correspond to the points that were used for the correlation
graphs. b) The correlation graphs for the first three points: FTSE100
decreases and the correlation graph becomes more connected. c) The
correlation graphs for the last three points: FTSE100 increases and the
correlation graph becomes less connected.
Figure 10: Correlation graphs for six positions of the sliding time window on
the interval 02/06/2008 - 04/11/2008. a) Dynamics of FTSE100 (dashed line), $G$
(solid line) and $G_{T}$ (dash-and-dot line) over the interval; vertical lines
correspond to the points that were used for the correlation graphs. b) Thirty
companies for analysis and their distribution over various sectors of the
economy. c) The correlation graphs for the first three points: FTSE100
decreases and the correlation graph becomes more connected. Between the third
and the 4th points FTSE100 slightly increases, correlation decreases, and the
first graph in the next row is more rarefied than at the third point. d) The
correlation graphs for the last three points: FTSE100 decreases and the
correlation graph becomes more connected.
An analysis of the dynamics of $G_{T}$ allows us to formulate a hypothesis:
typically, an increase of $G_{T}$ is followed by a decrease of the FTSE100 index
(and a decrease of $G_{T}$ precedes an increase of FTSE100). The time
delay is approximately two working weeks. In that sense, the correlation in
time seems to be a better indicator of the future change than the correlation
between stocks, which has no such time gap. On the other hand, the amplitude of
change of $G_{T}$ is much smaller, and some of the decreases of the FTSE100
index cannot be predicted from increases of $G_{T}$ (Fig. 8).
These observations are still preliminary and need further analysis for
different financial time series.
A strong correlation between days also appears with some time gap: the links
emerge mostly not between the nearest days, but with an interval of 4-15
days (see Figs. 9, 10).
### 3.2 Correlations and Crisis in Financial Time Series
In economics and finance, the correlation matrix is very important for the
practical problem of portfolio optimization and minimization of risk. Hence,
an important problem arises: are correlations constant or not? The hypothesis
about constant correlations was tested for monthly excess returns for seven
countries (Germany, France, UK, Sweden, Japan, Canada, and US) over the period
1960-90 [11]. Correlation matrices were calculated over a sliding window of
five years. The inclusion of October 1987 in the window led to an increase of
correlation in that window. After an analysis of correlations in six periods
of five years the null hypothesis of a constant correlation matrix was
rejected. In addition, the conditional correlation matrix was studied. The
multivariate process for asset return was presented as
$R_{t}=m_{t-1}+e_{t};\;\;m_{t-1}=\mathbf{E}(R_{t}|F_{t-1}),$ (8)
where $R_{t}$ is a vector of asset returns and $m_{t-1}$ is the vector of
expected returns at time $t$ conditioned on the information set $F_{t-1}$ from
the previous step. Vector $e_{t}$ is the unexpected (unpredicted) component in
the asset returns. Correlations between its components are called conditional
correlations. It was demonstrated that these conditional correlations are also
not constant. Two types of change were found. Firstly, the correlations have a
statistically significant time trend and grow in time. The average increase in
correlation over 30 years is 0.36. Secondly, correlations in periods of high
volatility (high variance) are higher. To obtain this result, the following
model for the correlation coefficient was identified:
$r^{i,{\rm us}}_{t}=r^{i,{\rm us}}_{0}+r^{i,{\rm us}}_{1}S_{t}^{\rm us},$ (9)
where $r^{i,{\rm us}}_{t}$ is the correlation coefficient between the
unexpected (unpredicted) components in the asset returns for the $i$th country
and the US, $S_{t}$ is a dummy variable that takes the value 1 if the
estimated conditional variance of the US market for time $t$ is greater than
its unconditional (mean) value and 0 otherwise. The estimated coefficient
$r_{1}$ is positive for all countries. The average over all countries for
$r_{0}$ is equal to 0.430, while the average turbulence effect $r_{1}$ is
0.117 [11]. Finally, it was demonstrated that other informational variables
can explain more changes in correlations than just the “high volatility – low
volatility” binning.
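For the reader's convenience, a small sketch of how the coefficients in a model like (9) can be estimated by ordinary least squares is given below; the rolling-correlation series and the volatility dummy are synthetic, generated near the averages quoted above, since the data of Ref. [11] are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic rolling correlations r_t and a volatility dummy S_t
# (values generated around the averages quoted from [11])
s = rng.integers(0, 2, 360)                              # S_t = 1 in high-volatility periods
r_t = 0.43 + 0.12 * s + 0.05 * rng.standard_normal(360)  # noisy r_t = r_0 + r_1 S_t

# ordinary least squares for the model r_t = r_0 + r_1 * S_t
X = np.column_stack([np.ones_like(r_t), s])
r0, r1 = np.linalg.lstsq(X, r_t, rcond=None)[0]
print(f"estimated r_0 = {r0:.3f}, r_1 = {r1:.3f}")
```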
To analyze correlations between European equity markets before and after
October 1987, three 76-month periods were compared: February 1975–May 1981,
June 1981–September 1987, and November 1987–February 1994 [12]. The average
correlation coefficient for 13 equity markets (Europe + US) increased from
0.37 in June 1981–September 1987 to 0.5 in November 1987–February 1994. The
number of significant principal components selected by Kaiser’s rule decreases
from 3 (in both periods before October 1987) to 2 (in the period after October
1987) for all markets, and even from 3 to 1 for 12 European markets [12]. Of
course, from average values over such long periods it is impossible to
distinguish the consequences of the October 1987 catastrophe from a
(presumably nonlinear) trend of the correlation coefficients.
Non-stationarity of the correlation matrix was demonstrated in a detailed
study of the financial empirical correlation matrix of the 30 companies which
the Deutsche Aktienindex (DAX) comprised during the period 1988–1999 [13]. The
time interval (time window) is set to 30 and is moved continuously over the whole
period. It was shown that the drawups and the drawdowns of the global
index (DAX) are governed by dynamics of significantly different natures. The
drawdowns are dominated by one strongly collective eigenstate with a large
eigenvalue. The opposite applies to drawups: the largest eigenvalue moves down,
which is compensated by a simultaneous elevation of the lower eigenvalues. The
distribution of correlation coefficients for these data has a distinctive
bell-like shape, both for one time window (inside one correlation matrix) and
for the ensemble of such sliding windows over a long time period.
This observation supports the idea of applying the theory of the Gaussian
matrix ensembles to the analysis of financial time series. Random matrix
theory gives a framework for the analysis of the cross-correlation matrix of a
multidimensional time series. In that framework, stock price changes of the
largest 1000 U.S. companies were analyzed for the 2-year period 1994–1995
[18], and the statistics of several of the largest eigenvalues were shown to be
far from the random matrix prediction, while the distribution of “the rest” of
the eigenvalues and eigenvectors satisfies the random matrix ensemble. The
crucial question is: where is the border between the random and the non-random
parts of spectra? Formula (4) gives in this case $\lambda_{\max}\approx 2$.
The random matrix theory predicts for the Gaussian orthogonal ensembles that
the components of the normalized eigenvectors are distributed according to a
Gaussian probability distribution with mean zero and variance one.
Eigenvectors corresponding to most eigenvalues in the “bulk” ($\lambda<2$)
have the Gaussian distribution, but eigenvectors with larger eigenvalues
deviate significantly from this [18].
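Formula (4) is not reproduced in this section; assuming it is the usual Marchenko-Pastur edge $\lambda_{\max}=(1+\sqrt{N/T})^{2}$ for $N$ series of $T$ observations with unit variance, the following sketch shows how the empirical eigenvalues can be compared with the random-matrix bulk. For independent synthetic returns essentially no eigenvalue should leave the bulk.

```python
import numpy as np

def mp_lambda_max(n_assets, n_obs):
    """Upper edge of the Marchenko-Pastur bulk for a correlation matrix of
    n_assets unit-variance series estimated from n_obs observations."""
    return (1.0 + np.sqrt(n_assets / n_obs)) ** 2

# synthetic example: independent returns, 30 assets, 500 observations
rng = np.random.default_rng(1)
n_assets, n_obs = 30, 500
returns = rng.standard_normal((n_obs, n_assets))
eigvals = np.linalg.eigvalsh(np.corrcoef(returns, rowvar=False))
edge = mp_lambda_max(n_assets, n_obs)
print(f"lambda_max (bulk edge) = {edge:.2f}")
print("eigenvalues above the edge:", eigvals[eigvals > edge])
```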
This kind of analysis was continued for the three major US stock exchanges,
namely the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX),
and the National Association of Securities Dealers Automated Quotation
(NASDAQ) [19]. The concept of “deviating eigenvectors” was developed; these
vectors correspond to the eigenvalues which lie systematically outside the
predictions of the random matrix ensembles. Analysis of the “deviating
eigenvectors” which are outside the random matrix ensemble predictions (4) gives
information about major factors common to all stocks or to large business
sectors. The largest eigenvalue was identified as the “market mode”. During
periods of high market volatility the values of the largest eigenvalue are large.
This fact was interpreted as strong collective behavior in regimes of high
volatility. For the largest eigenvalue, the distribution of the coordinates of the
eigenvector has very remarkable properties:
* •
It is much more uniform than the prediction of the random matrix theory
(authors of Ref. [19] described this vector as “approximately uniform”,
suggesting that all stocks participate in this “market mode”);
* •
Almost all components of that eigenvector have the same sign;
* •
A large degree of cross correlations between stocks can be attributed to the
influence of the largest eigenvalue and its corresponding eigenvector.
Two interpretations of this eigenvector were proposed: it corresponds either
to the common strong factor that affects all stocks, or it represents the
“collective response” of the entire market to stimuli.
Spectral properties of the correlation matrix were also analyzed for 206
stocks traded on the Istanbul Stock Exchange during the 5-year period
2000–2005 [23]. One of the main results of this research is the observation
that the correlations among stocks are mostly positive and tend to increase
during crises. The number of significant eigenvalues (outside the random
matrix interval) is smaller than was found in the previous study of the well-
developed international market in the US. A possible interpretation is that the
emerging market is ruled by a smaller number of factors.
An increase of correlations in a time of crisis was demonstrated by the
analysis of 150 years of market dynamics [54]. As a result, in the year 2004
it was mentioned very optimistically: “Our tests suggest that the structure of
global correlations shifts considerably through time. It is currently near an
historical high - approaching levels of correlation last experienced during
the Great Depression”. Nevertheless, it remains unclear whether the correlation
causes the transmission chain of collapse or is inextricably tied to it
[25].
There are several types of explanation of these correlation effects. One can
look for the specific reasons in the balance between specialization and
globalization, in specific fiscal, monetary, legal, cultural or even language
conditions, in dynamics of fundamental economic variables such as interest
rates and dividend yields, in the portfolio optimization by investors, and in
many similar more or less important processes. These specific explanations
could work, but for such a general effect it is desirable to find a theory of
compatible generality. Now we can mention three sources for such a theory:
1. 1.
Theory of individual adaptation of similar individuals to a similar system of
factors;
2. 2.
Theory of interaction: information interaction, co-ordination, or deeper
integration;
3. 3.
Theory of collective effects in market dynamics.
The first approach (supported by biological data) is a sort of mean-field
theory: everybody adapts to a field of common factors, and all together they
change the state of that system. There are two types of argumentation here:
similarity of factors, or similarity of adaptation mechanisms (or both):
* •
In the period of crisis the same challenges appear for most of the market
participants, and correlation increases because they have to answer the same
challenge and struggle with the same factors.
* •
In the period of crisis all participants are under pressure. The nature of
that pressure may be different, but the mobilization mechanisms are mostly
universal. Similar attempts at adaptation produce correlation as a consequence
of crisis.
This theory is focused on the adaptation process, but it may be included in any
theory of economic dynamics as an adaptation feedback. We study the adaptation
of individuals in the “mean field”, and consider the dynamics of this field as
external conditions.
The interaction theory may be much richer (and more complicated). For example,
it can consider the following effect of behavior in crisis: there is a lack of
information and of known optimal solutions, therefore different agents try to
find clues to rational behavior in the behavior of other agents, and the
correlation increases. Coordination in management and in financial policy is
an obvious effect of interaction too, and we can also observe a deeper
integration, which causes flows of money and goods.
Collective effects in market dynamics may also generate correlations and, on
the other hand, can interact with correlations which appear for any specific or
nonspecific reason. For example, high levels of correlation often lead to the
loss of dissipation in dynamics and may cause instability.
Further in this work, we focus on the theory of individual adaptation of
similar individuals to a similar system of factors.
## 4 Theoretical approaches
### 4.1 The “Energy of Adaptation” and Factors-Resources Models
#### 4.1.1 Factors and Systems
Let us consider several systems that are under the influence of several
factors $F_{1},...F_{q}$. Each factor has its intensity $f_{i}$ ($i=1,...q$).
For convenience, we consider all these factors as harmful (later, after we
introduce the fitness $W$, this will mean that all partial derivatives are non-
positive, $\partial W/\partial f_{i}\leq 0$; this is a formal definition of
“harm”). This is just a convention about the choice of axes directions: a
wholesome factor is just a “minus harmful” factor.
Each system has its own adaptation mechanisms, a “shield” that can decrease the
influence of these factors. In the simplest case, this means that each system
has an available adaptation resource, $R$, which can be distributed for
neutralization of factors: instead of factor intensities $f_{i}$ the system is
under pressure from factor values $f_{i}-a_{i}r_{i}$ (where $a_{i}>0$ is the
coefficient of efficiency of factor $F_{i}$ neutralization by the adaptation
system and $r_{i}$ is the share of the adaptation resource assigned for the
neutralization of factor $F_{i}$, $\sum_{i}r_{i}\leq R$). The zero value
$f_{i}-a_{i}r_{i}=0$ is optimal (the fully compensated factor), and further
compensation is impossible and senseless.
Interaction of each system with a factor $F_{i}$ is described by two
quantities: the uncompensated pressure of factor $F_{i}$,
$\psi_{i}=f_{i}-a_{i}r_{i}$, and the resource $r_{i}$ assigned to the
neutralization of factor $F_{i}$. The question about the interaction of various factors is very
important, but, first of all, let us study a one-factor model.
#### 4.1.2 Selye Model
Already simple one–factor models support the observed effect of the
correlation increase. In these models, observable properties of interest
$x_{k}$ $(k=1,...m)$ can be modeled as functions of factor pressure $\psi$
plus some noise $\epsilon_{k}$.
Let us consider one-factor systems and linear functions (the simplest case):
$x_{k}=\mu_{k}+l_{k}\psi+\epsilon_{k}\ ,$ (10)
where $\mu_{k}$ is the mean value of $x_{k}$ for the fully compensated factor,
$l_{k}$ is a coefficient, $\psi=f-ar_{f}\geq 0$, and $r_{f}\leq R$ is the amount
of available resource assigned to the factor neutralization. The values of
$\mu_{k}$ could be considered as “normal” (in the sense opposite to
“pathology”), and the noise $\epsilon_{k}$ reflects the variability of the norm. This is
not a dynamic equation; it describes just one action of resource assignment.
If we add time $t$, then a two-dimensional array $x_{kt}$ appears.
We can call these models the “tension–driven models” or even the “Selye
models” because these models may be extracted from the Selye concept of
adaptation energy [28, 29] (Selye did not use equations, but qualitatively
these models were present in his reasoning).
If systems compensate as much of the factor value as possible, then
$r_{f}=\min\\{R,f/a\\}$, and we can write:
$\psi=\left\\{\begin{array}[]{ll}&f-aR\ ,\ \ {\rm if}\ \ f>aR\ ;\\\ &0,\ \ \
{\rm else.}\end{array}\right.$ (11)
The nonlinearity of the Selye model is in the dependence of $\psi$ on the
factor pressure $f$. Even the simple dependence (11) gives a phase
transition picture. Individual systems may differ in the value of the factor
intensity (the local intensity variability), in the amount of available
resource $R$ and, of course, in the random values $\epsilon_{k}$. For small
$f$, $\psi=0$ for all systems; all systems are in the comfort zone and all the difference
between them is in the noise variables $\epsilon_{k}$. In this state, the
correlations are defined by the correlations in the noise values and are,
presumably, low.
With increasing $f$ a separation appears: some systems remain in the comfort
“condensate” ($\psi=0$), while others no longer have enough resource for a
full compensation of the factor load and vary in the value of $\psi$. Two
fractions appear: a weakly correlated condensate with $\psi=0$ and a highly
correlated fraction with different values of $\psi>0$. If $f$ continues to
increase, all individuals move to the highly correlated fraction and the
comfort condensate vanishes.
If the noise of the norm $\epsilon_{k}$ is independent of $\psi$ then the
correlation between different $x_{k}$ increases monotonically with $f$. With
an increase of the factor intensity $f$ the dominant eigenvector of the
correlation matrix between $x_{k}$ becomes more uniform in the coordinates,
which tend asymptotically to $\pm\frac{1}{\sqrt{m}}$.
The correlation between systems also increases (just transpose the data
matrix), and the coordinates of the dominant eigenvector similarly tend to
values $\frac{1}{\sqrt{n}}$ (which are positive), but this tendency has the
character of a “resource exhausting wave” which spreads through the systems
following the rule (11).
The observation of Ref. [19] partially supports the uniformity of the
eigenvector that corresponds to the largest eigenvalue which “represents the
influence of the entire market that is common to all stocks.” Fig. 8d from
Ref. [19] shows that the components of this eigenvector are positive and
“almost all stocks participate in the largest eigenvector.” Also, in Ref. [13]
it was demonstrated that in the periods of drawdowns of the global index (DAX)
there appears one strongly dominant eigenvalue for synchronous correlations
between 30 companies from DAX. Similar results for 30 British companies are
presented in Figs. 5 and 8. In physiology, we also found these “maximum
integration” effects for various loads on organisms [6]. When the pressure is
lower, then instead of one dominant eigenvector which represents all
functional systems of an organism, there appears a group of eigenvectors with
relatively high eigenvalues. Each of these vectors has significant components
for attributes of a specific group of functional systems, and the intersection
of those groups for different eigenvectors is not large. In addition, the
effect of factor “disintegration” because of overload was also observed.
The Selye model describes the first part of the effect (from comfort to
stress), but tells us nothing about the other side of crisis, near death.
#### 4.1.3 Mean Field Realization of Selye’s Model
In this section we present a simple toy model that is the mean field realization
of the Selye model. As a harmful factor for this model we use minus the log-return
of the FTSE index: the instant value of the factor $f(t)$ at time moment $t$ is
$f(t)=-\log({\rm FTSE}(t+1)/{\rm FTSE}(t))$ (12)
This factor could be considered as the mean field produced by all the objects
together with some outer sources.
The instant values of the stock log-returns of the $i$th object, $x_{i}(t)$, are
modeled by the Selye model (11):
$x_{i}(t)=-l(f(t)-ar_{i})H(f(t)-ar_{i})+\epsilon_{i}(t)\,,$ (13)
where $H$ is the Heaviside step function.
We compare real data and data for two distributions of resource,
Exponential(30) (subscript “exp”) and Uniform(0,2) (subscript “u”). The random
variables $\epsilon_{i}(t)$ for various $i$ and $t$ are i.i.d., uniformly distributed
with zero mean and variance ${\rm var}\,\epsilon=0.0035$. This is the
minimum of the average variance of the log-return values for the thirty companies.
The minimum corresponds to the most “quiet” state of the market (in the sense of the
value of variance) in the time period. We calculated the total variance of the 30
companies during the time interval used for the analysis (04/07/2007 -
25/10/2007), found the minimal value of the variance, and divided it by 30. To
compare the results for the exponential and uniform distributions we use the same
realization of noise.
The efficiency coefficient $a$ is different for the different distributions: we
calibrate it in such a way that for 75% of the objects the value $ar_{i}$ is
expected to be below $f$ and for 25% it is expected to be above $f$, for the same
value of the factor $f$: $a_{\rm exp}/a_{\rm u}\approx 1.88$. The ratio of the
coefficients $l_{\rm exp}/l_{\rm u}$ should have (approximately) inverse value
to keep the expected distances the same for the pairs of objects with
$ar_{i}<f$. For qualitative reproduction of the crisis we selected $a_{\rm
exp}=0.032$, $a_{\rm u}=0.017$, $l_{\rm exp}=7.3$, $l_{\rm u}=15.5$.
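A minimal sketch of this toy model is given below. The FTSE series is replaced here by a synthetic factor with a “crisis” episode, the parameterization of the Exponential(30) resource distribution (read here as rate 30) and the noise realization are our assumptions, and only the parameter values quoted above are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic stand-in for the harmful factor f(t) = -log(FTSE(t+1)/FTSE(t))
T, n = 300, 30
f = 0.01 * rng.standard_normal(T)
f[120:180] += 0.03                       # a "crisis" episode with larger factor load

# parameter values quoted in the text
a_exp, l_exp = 0.032, 7.3
a_u, l_u = 0.017, 15.5
var_eps = 0.0035

r_exp = rng.exponential(1.0 / 30.0, n)   # Exponential(30) resources (assumed rate 30)
r_u = rng.uniform(0.0, 2.0, n)           # Uniform(0, 2) resources
eps = rng.uniform(-1, 1, (T, n)) * np.sqrt(3 * var_eps)   # uniform noise, variance var_eps

def selye_returns(f, r, a, l, eps):
    """Eq. (13): x_i(t) = -l (f(t) - a r_i) H(f(t) - a r_i) + eps_i(t)."""
    over = np.maximum(f[:, None] - a * r[None, :], 0.0)
    return -l * over + eps

def g_indicator(x, t, p=20, threshold=0.5):
    """Weight G of the correlation graph in the window [t-p, t-1]."""
    r = np.corrcoef(x[t - p:t, :], rowvar=False)
    off = np.abs(r[np.triu_indices_from(r, k=1)])
    return off[off > threshold].sum()

x_exp = selye_returns(f, r_exp, a_exp, l_exp, eps)
x_u = selye_returns(f, r_u, a_u, l_u, eps)
print("G_exp in calm / crisis window:", g_indicator(x_exp, 100), g_indicator(x_exp, 160))
print("G_u   in calm / crisis window:", g_indicator(x_u, 100), g_indicator(x_u, 160))
```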
For each system we calculated the correlation coefficients over periods of
20 days (similar to the analysis made for the real data): $G_{\rm exp}$, $G_{\rm
u}$. The right-hand side of Fig. 11 represents the dynamics of changes in
correlations between objects. Plots in Fig. 11.1a show the number of objects
in the real data that have more than 1, 2, 4, 8, 16 or 20 values of correlations
greater than $0.7$; plots in Fig. 11.1b represent the number of companies that
have more than 1, 2, 4, 8, 16 or 20 correlations greater than $0.5$.
Similarly, Figs. 11.2a,b and 11.3a,b represent the model results for the
exponential (2) and uniform (3) distributions.
Figure 11: The dynamics of indicators of correlation matrices for 1) real
data, 2) system with exponentially distributed resources, 3) system with
uniformly distributed resources. The left-hand part represents the general
dynamics of $G$, $G_{\rm exp}$, $G_{\rm u}$ in comparison to the dynamics of
FTSE over the time period 04/07/2007 - 25/10/2007. The right-hand part shows
the dynamics of changes in correlations between objects over the interval: a)
number of objects that have more than 1, 2, 4, 8, 16 or 20 values of
correlations greater than $0.7$, b) number of objects that have more than 1,
2, 4, 8, 16 or 20 values of correlations greater than $0.5$.
The qualitative character of the crisis is reproduced, but the difference from the
empirical data is also obvious: the plots for the real data are also bell-shaped
with fluctuations, but they are wider than the model curves, and in reality the
fluctuations do not go to zero outside the crisis period. The simplest improvement of
the situation may be achieved by the introduction of correlated noise and fitting.
In the simplest Selye model we assume zero correlations in the comfort zone,
but in reality the correlations do not decrease to zero.
The amplitude of the noise differs for different companies, and we could take its
distribution from the empirical data. The coefficient $l$ in the basic Selye model
(10) also depends on the company, but in the toy model we take it constant.
One problem exists for all these improvements: they introduce too many
parameters for fitting. Of course, more degrees of freedom available for
fitting give more flexibility in quantitative approximation of the empirical
data. The simplest toy model has two parameters only.
Another way to improve the model is to select a better mean-field factor. Here
we made just a first choice and selected the negative log-return of the FTSE
index as the mean-field harmful factor. A more serious modification of the model
could also take into account the pressure of several factors.
#### 4.1.4 How to Merge Factors?
Usually, there are many factors. Formally, for $q$ factors one can generalize
the one–factor tension–driven model (10) in the form
$x_{k}=x_{k}(\psi_{1},\psi_{2},...\psi_{q})+\epsilon_{k}\ .$ (14)
In this equation, the compensated values of factors,
$\psi_{i}=f_{i}-a_{i}r_{i}$, are used and $\sum_{i=1}^{q}r_{i}\leq R$.
Two questions appear immediately: (i) how to find the distribution of the
resource assigned for the neutralization of different factors, and (ii) how to
represent the functions $x_{k}(\psi_{1},...\psi_{q})$. Usually, both in factor
analysis and in physics, we start from the assumption of linearity (“in
the first approximation”), but this approximation does not work properly here.
In the simplest reasonable approximation, max-min operations appear instead of
linear operations. This sounds very modern [55] and even a bit extravagant,
but it was discovered many years ago by Justus von Liebig (1840). His “law of
the minimum” states that growth is controlled by the scarcest resource
(limiting factor) [56]. This concept was originally applied to plant or crop
growth. Many times it was criticized and rejected, and then it returned and
demonstrated quantitative agreement with experiments [56, 57, 58].
Liebig’s Law of the minimum was extended to a more general conception of
factors, beyond the elementary physical description of available chemical
substances and energy. Any environmental factor essential for life that is
below the critical minimum, or that exceeds the maximum tolerable level, could
be considered as a limiting one.
The biological generalizations of Liebig’s Law were supported by the
biochemical idea of limiting reaction steps (the modern theory of limiting
steps and dominant systems for multiscale reaction networks is presented in
the recent review [59]). Some of the generalizations went quite far from
agriculture and ecology. The law of the minimum was applied to economics [60]
and to education, for example [61].
According to Liebig’s Law, the tension–driven model is
$x_{k}=\mu_{k}+l_{k}\max_{1\leq i\leq q}\\{\psi_{i}\\}+\epsilon_{k}\ .$ (15)
This model seems to be linear, but its nonlinearity is hidden in the dependence of
$\psi_{i}$ on the distribution of factors and on the amount of the resource
available.
#### 4.1.5 Optimality and Fitness
Adaptation optimizes the state of the system for a given amount of the
resource available. It may be difficult to find the objective function that is
hidden behind the adaptation process. Nevertheless, even an assumption about
the existence of an objective function and about its general properties helps
in the analysis of the adaptation process. Assume that adaptation should
maximize an objective function $W$ which depends on the compensated values of
factors, $\psi_{i}=f_{i}-a_{i}r_{i}$ for the given amount of available
resource:
$\left\\{\begin{array}[]{l}W(f_{1}-a_{1}r_{1},f_{2}-a_{2}r_{2},...f_{q}-a_{q}r_{q})\ \to\ \max\ ;\\\ r_{i}\geq 0\ ,\ \ f_{i}-a_{i}r_{i}\geq 0\ ,\ \ \sum_{i=1}^{q}r_{i}\leq R\ .\end{array}\right.$ (16)
The only question is: why can we be sure that adaptation follows any
optimality principle? Existence of optimality is proven for microevolution
processes and ecological succession. The mathematical background for the
notion of “natural selection” in these situations has been well established
since the works of Haldane (1932) [34] and Gause (1934) [35]. Now this direction,
with various concepts of fitness (or “generalized fitness”) optimization, is
elaborated in great detail (see, for example, the review papers [36, 37, 38]).
The foundation of optimization is not so clear for such processes as
modifications of phenotype, and for adaptation in various time scales. The
idea of genocopy-phenocopy interchangeability was formulated long ago by
biologists to explain many experimental effects: the phenotype modifications
simulate the optimal genotype ([40], p. 117). The idea of convergence of
genetic and environmental effects was supported by analysis of genome
regulation [39] (the principle of concentration-affinity equivalence). The
phenotype modifications produce the same change as evolution of the genotype
does, but faster and in a smaller range of conditions (the proper evolution
can go further, but more slowly). It is natural to assume that adaptation on
different time scales also follows the same direction as evolution and
phenotype modifications, but faster and for smaller changes. This hypothesis
could be supported by many biological data and plausible reasoning. For social
and economic systems the idea of optimization of individual behavior seems
very natural. The selection arguments may also be valid for such systems.
It seems productive to accept the idea of optimality and to use it as long as
it does not contradict the data.
### 4.2 Law of the Minimum Paradox
Liebig used the image of a barrel – now called Liebig’s barrel – to explain
his law. Just as the capacity of a barrel with staves of unequal length is
limited by the shortest stave, so a plant’s growth is limited by the nutrient
in shortest supply. An adaptation system acts as a cooper and repairs the
shortest stave to improve the barrel capacity. Indeed, in well-adapted systems
the limiting factor should be compensated as far as this is possible. It seems
obvious because of the very natural idea of optimality, but arguments of this
type in biology should be considered with care.
Assume that adaptation should maximize an objective function $W$ (16) which
satisfies Liebig’s Law:
$W=W\left(\max_{1\leq i\leq q}\\{f_{i}-a_{i}r_{i}\\}\right)\ ;\ \frac{\partial
W(x)}{\partial x}\leq 0$ (17)
under conditions $r_{i}\geq 0$, $f_{i}-a_{i}r_{i}\geq 0$,
$\sum_{i=1}^{q}r_{i}\leq R$. (Let us recall that $f_{i}\geq 0$ for all $i$.)
Description of the maximizers of $W$ gives the following theorem (the proof is
a straightforward consequence of Liebig’s Law and monotonicity of $W$).
Theorem 1. For any objective function $W$ that satisfies conditions (17) the
optimizers $r_{i}$ are defined by the following algorithm.
1. 1.
Order the intensities of the factors: $f_{i_{1}}\geq f_{i_{2}}\geq...\geq f_{i_{q}}$.
2. 2.
Calculate differences $\Delta_{j}=f_{i_{j}}-f_{i_{j+1}}$ (take formally
$\Delta_{0}=\Delta_{q+1}=0$).
3. 3.
Find such $k$ ($0\leq k\leq q$) that
$\sum_{j=1}^{k}\left(\sum_{p=1}^{j}\frac{1}{a_{i_{p}}}\right)\Delta_{j}\leq
R\leq\sum_{j=1}^{k+1}\left(\sum_{p=1}^{j}\frac{1}{a_{i_{p}}}\right)\Delta_{j}\
.$
For $R<\Delta_{1}$ we put $k=0$ and if
$R>\sum_{j=1}^{q}\left(\sum_{p=1}^{j}\frac{1}{a_{i_{p}}}\right)\Delta_{j}$
then we take $k=q$.
4. 4.
If $k<q$ then the optimal amount of resource $r_{i_{l}}$ is
$r_{i_{l}}=\left\\{\begin{array}[]{ll}&\frac{f_{i_{l}}}{a_{i_{l}}}-\frac{1}{a_{i_{l}}}\left(\sum_{p=1}^{k+1}\frac{1}{a_{i_{p}}}\right)^{-1}\left(\sum_{j=1}^{k+1}\frac{f_{i_{j}}}{a_{i_{j}}}-R\right)\
,\ \ {\rm if}\ \ l\leq k+1\ ;\\\ &0\ ,\ \ \ \ \ {\rm if}\ \ l>k+1\
.\end{array}\right.$ (18)
If $k=q$ then $r_{i}=f_{i}/a_{i}$ for all $i$.
Figure 12: Optimal distribution of resource for neutralization of factors
under Liebig’s Law. (a) histogram of factor intensities (the compensated parts
of the factors are highlighted, $k=3$), (b) distribution of tensions $\psi_{i}$
after adaptation becomes more uniform, (c) the sum of distributed resources.
For simplicity of the picture, we take here all $a_{i}=1$.
Proof. This optimization is illustrated in Fig. 12. If
$R\geq\sum_{i}f_{i}/a_{i}$ then the pressure of all the factors can
be compensated and we can take $r_{i}=f_{i}/a_{i}$. Now, let us assume that
$R<\sum_{i}f_{i}/a_{i}$. In this case, the pressure of some of the
factors is not fully compensated. The adaptation resource is spent on partial
compensation of the $k+1$ worst factors, and the remaining pressure on them is
higher than (or equal to) the pressure of the $(k+2)$th worst factor:
$\begin{split}&f_{i_{1}}-a_{i_{1}}r_{i_{1}}=\ldots=f_{i_{k+1}}-a_{i_{k+1}}r_{i_{k+1}}=\psi\geq
f_{i_{k+2}}\,,\;\sum_{j=1}^{k+1}r_{i_{j}}=R\,,\mbox{or }\\\
&\sum_{i=1}^{k+1}\Delta_{i}-a_{i_{1}}r_{i_{1}}=\ldots=\Delta_{k+1}-a_{i_{k+1}}r_{i_{k+1}}=\psi-
f_{i_{k+2}}=\theta_{k+1}\geq 0\,,\;\sum_{j=1}^{k+1}r_{i_{j}}=R\,.\end{split}$
(19)
Therefore, for $j=1,\ldots,k+1$ in the optimal distribution of the resource,
$r_{i_{j}}=\frac{1}{a_{i_{j}}}\left(\sum_{i=j}^{k+1}\Delta_{i}-\theta_{k+1}\right)\,,R=\sum_{j=1}^{k+1}r_{i_{j}}=\sum_{j=1}^{k+1}\left(\sum_{p=1}^{j}\frac{1}{a_{i_{p}}}\right)\Delta_{j}-\left(\sum_{j=1}^{k+1}\frac{1}{a_{i_{j}}}\right)\theta_{k+1}\,,\theta_{k+1}\geq
0\,.$ (20)
This gives us the definition of $k$ in Theorem 1. Formula
(18) for $r_{i_{j}}$ also follows from (19): for $j=1,\ldots,k+1$
$r_{i_{j}}=\frac{f_{i_{j}}-\psi}{a_{i_{j}}}\,,\psi=\left(\sum_{p=1}^{k+1}\frac{1}{a_{i_{p}}}\right)^{-1}\left(\sum_{p=1}^{k+1}\frac{f_{i_{p}}}{a_{i_{p}}}-R\right)\,.\;\;\;\square$
(21)
Hence, if the system satisfies the law of the minimum, then the adaptation
process makes the tensions produced by different factors,
$\psi_{i}=f_{i}-a_{i}r_{i}$ (Fig. 12), more uniform. Thus adaptation decreases the
effect of the limiting factor and hides manifestations of Liebig’s Law.
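A minimal sketch of the allocation described by Theorem 1 follows (our own illustration, with unit efficiency coefficients in the example). It uses the water-filling characterization from the proof, compensating the worst factors down to a common residual tension, and shows that the resulting tensions are more uniform than the initial intensities.

```python
import numpy as np

def liebig_allocation(f, a, R):
    """Optimal resource distribution under Liebig's law (Theorem 1): the worst
    factors are compensated down to a common residual tension psi."""
    f, a = np.asarray(f, float), np.asarray(a, float)
    order = np.argsort(-f)                       # factors sorted by decreasing intensity
    r = np.zeros_like(f)
    if R >= np.sum(f / a):                       # enough resource: full compensation
        return f / a
    for m in range(1, len(f) + 1):
        idx = order[:m]
        psi = (np.sum(f[idx] / a[idx]) - R) / np.sum(1.0 / a[idx])
        nxt = f[order[m]] if m < len(f) else 0.0
        if psi >= nxt:                           # common tension not below the next factor
            r[idx] = (f[idx] - psi) / a[idx]
            return r
    return r

# example: tensions psi_i = f_i - a_i r_i become more uniform after adaptation
f = np.array([5.0, 4.0, 2.5, 1.0])
a = np.ones(4)
r = liebig_allocation(f, a, R=4.0)
print("resource:", r)
print("tensions before:", f)
print("tensions after :", f - a * r)
```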
Under the assumption of optimality (16) the law of the minimum paradox becomes
a theorem: if Liebig’s Law is true then microevolution, ecological succession,
phenotype modifications and adaptation decrease the role of the limiting
factors and bring the tension produced by different factors together.
The cooper starts to repair Liebig’s barrel from the shortest stave, and after
the repair the staves are more uniform than they were before. This cooper may
be microevolution, ecological succession, phenotype modifications, or
adaptation. For the ecological succession this effect (Liebig’s Law leads to
its violation by succession) was described in Ref. [62]. For adaptation (and
in general settings too) it was demonstrated in Ref. [1].
The law of the minimum together with the idea of optimality (even without an
explicit form of the objective function) gives us answers to both questions:
(i) we now know the optimal distribution of the resource (18) assigned for the
neutralization of different factors, and (ii) we can choose the function
$x_{k}(\psi_{1},...\psi_{q})$ from various model forms, the simplest of which
gives the tension–driven model (15).
### 4.3 Law of the Minimum Inverse Paradox
The simplest formal example of “anti–Liebig’s” organization of interaction
between factors is given by the following dependence of fitness on two factors:
$W=-f_{1}f_{2}$. Each of the factors is neutral in the absence of the other
factor, but together they are harmful. This is an example of synergy: the
whole is greater than the sum of its parts. (For our selection of axes
direction, “greater” means “more harm”.) Let us give the formal definition of
the synergistic system of factors for the given fitness function $W$.
Definition. The system of factors $F_{1},...F_{q}$ is synergistic, if for any
two different vectors of their admissible values $\mathbf{f}=(f_{1},...f_{q})$
and $\mathbf{g}=(g_{1},...g_{q})$ ($\mathbf{f}\neq\mathbf{g}$) the value of
fitness at the average point $(\mathbf{f}+\mathbf{g})/2$ is less than at the
best of the points $\mathbf{f}$, $\mathbf{g}$:
$W\left(\frac{\mathbf{f}+\mathbf{g}}{2}\right)<\max\\{W(\mathbf{f}),W(\mathbf{g})\\}\
.$ (22)
Liebig’s systems of factors violate the synergy inequality (22): if at points
$\mathbf{f}$, $\mathbf{g}$ with the same values of fitness,
$W(\mathbf{f})=W(\mathbf{g})$, different factors are limiting, then at the
average point the values of both these factors are smaller, the harm of the
limiting factor at that point is less than at both points $\mathbf{f}$ and
$\mathbf{g}$, and hence the fitness at the average point is larger.
The fitness function $W$ for synergistic systems has a property that makes the
solution of optimization problems much simpler. The following proposition
follows from the definition of convexity and standard facts about convex sets
(see, for example, [63]).
Proposition 1. The synergy inequality (22) holds if and only if all the
sublevel sets $\\{\mathbf{f}\ |\ W(\mathbf{f})\leq\alpha\\}$ are strictly
convex.$\square$
(The fitness itself may be a non-convex function.)
This proposition immediately implies that the synergy inequality is invariant
with respect to increasing monotonic transformations of $W$. This invariance
with respect to a nonlinear change of scale is very important, because usually
we do not know the values of the function $W$.
Proposition 2. If the synergy inequality (22) holds for a function $W$, then
it holds for a function $W_{\theta}=\theta(W)$, where $\theta(x)$ is an
arbitrary strictly monotonically increasing function of one variable.$\square$
This property alone allows us to study the problem of the optimal
distribution of the adaptation resource without further knowledge about the
fitness function.
Assume that adaptation should maximize an objective function
$W(f_{1}-a_{1}r_{1},...f_{q}-a_{q}r_{q})$ (16) which satisfies the synergy inequality
(22) under the conditions $r_{i}\geq 0$, $f_{i}-a_{i}r_{i}\geq 0$,
$\sum_{i=1}^{q}r_{i}\leq R$. (Let us recall that $f_{i}\geq 0$ for all $i$.)
Following our previous convention about the axes directions, all factors are
harmful and $W$ is a monotonically decreasing function:
$\frac{\partial W(f_{1},...f_{q})}{\partial f_{i}}<0\ .$
We also need a technical assumption that $W$ is defined on a convex set in
$\mathbb{R}^{q}_{+}$ and if it is defined for a nonnegative point
$\mathbf{f}$, then it is also defined at any nonnegative point
$\mathbf{g}\leq\mathbf{f}$ (this inequality means that $g_{i}\leq f_{i}$ for
all $i=1,...q$).
The set of possible maximizers is finite. For every group of factors
$F_{i_{1}},...F_{i_{j+1}}$, ($1\leq j+1<q$) with the property
$\sum_{k=1}^{j}\frac{f_{i_{k}}}{a_{i_{k}}}<R\leq\sum_{k=1}^{j+1}\frac{f_{i_{k}}}{a_{i_{k}}}$
(23)
we find a distribution of resource
$\mathbf{r}_{\\{{i_{1}},...{i_{j+1}}\\}}=(r_{i_{1}},...r_{i_{j+1}})$:
$r_{i_{k}}=\frac{f_{i_{k}}}{a_{i_{k}}}\ \ (k=1,...j)\ ,\ \
r_{i_{j+1}}=R-\sum_{k=1}^{j}\frac{f_{i_{k}}}{a_{i_{k}}}\ ,\ \ r_{i}=0\ \ {\rm
for}\ \ i\notin\\{{i_{1}},...{i_{j+1}}\\}\ .$ (24)
For $j=0$, Eq. (23) gives $0<R\leq f_{i_{1}}/a_{i_{1}}$ and there exists only
one nonzero component in the distribution (24), $r_{i_{1}}=R$.
We get the following theorem as an application of standard results about
extreme points of convex sets [63].
Theorem 2. Any maximizer for $W(f_{1}-a_{1}r_{1},...f_{q}-a_{q}r_{q})$ under the
given conditions has the form $\mathbf{r}_{\\{{i_{1}},...{i_{j+1}}\\}}$
(24).$\square$
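For small $q$ the finite candidate set (24) can simply be enumerated and the best candidate selected, whatever the (unknown) fitness is. The sketch below is our own illustration: it uses the synergistic example $W=-f_{1}f_{2}$ from above and assumes that $R$ is smaller than the total cost of full neutralization.

```python
import numpy as np
from itertools import combinations

def synergy_candidates(f, a, R):
    """Candidate maximizers (24): a group of fully neutralized factors plus at
    most one partially neutralized factor, exhausting the resource R.
    Assumes R is smaller than the total cost of full neutralization."""
    q = len(f)
    cost = np.asarray(f, float) / np.asarray(a, float)   # resource needed for full neutralization
    for j in range(q):
        for group in combinations(range(q), j):
            spent = cost[list(group)].sum()
            if spent >= R:
                continue
            for extra in set(range(q)) - set(group):
                if spent + cost[extra] >= R:             # condition (23)
                    r = np.zeros(q)
                    r[list(group)] = cost[list(group)]
                    r[extra] = R - spent
                    yield r

def best_allocation(f, a, R, W):
    """Pick the candidate of form (24) with the largest fitness W(psi_1, ..., psi_q)."""
    f, a = np.asarray(f, float), np.asarray(a, float)
    return max(synergy_candidates(f, a, R), key=lambda r: W(f - a * r))

# synergistic fitness W = -f1*f2 from the text: factors are harmful only together
W = lambda psi: -psi[0] * psi[1]
f, a = [3.0, 2.0], [1.0, 1.0]
r = best_allocation(f, a, R=1.5, W=W)
print("resource:", r, "tensions:", np.array(f) - np.array(a) * r)
```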
If the initial distribution of factor intensities,
$\mathbf{f}=(f_{1},...f_{q})$, is almost uniform and all factors are
significant then, after adaptation, the distribution of effective tensions,
$\mathbf{\psi}=(\psi_{1},...\psi_{q})$ ($\psi_{i}=f_{i}-a_{i}r_{i}$), is less
uniform. Following Theorem 2, some of the factors may be completely
neutralized and one additional factor may be neutralized partially. This
situation is opposite to adaptation in Liebig’s system of factors, where
the number of significant factors increases and the distribution of tensions
becomes more uniform because of adaptation. For Liebig’s systems, adaptation
transforms a low-dimensional picture (one limiting factor) into a high-
dimensional one, and we expect the well-adapted systems to have lower correlations
than systems under stress. For synergistic systems, adaptation transforms a high-
dimensional picture into a low-dimensional one (fewer factors), and our
expectations are the inverse: we expect the well-adapted systems to have higher
correlations than systems under stress (this situation is illustrated in Fig. 13; compare
to Fig. 12). We call this property of adaptation to a synergistic system of
factors the law of the minimum inverse paradox.
Figure 13: Typical optimal distribution of resource for neutralization of
synergistic factors. (a) Factor intensities (the compensated parts of the factors
are highlighted, $j=2$), (b) distribution of tensions $\psi_{i}$ after
adaptation becomes less uniform (compare to Fig. 12), (c) the sum of
distributed resources. For simplicity of the picture, we take here all
$a_{i}=1$.
Fitness by itself is a theoretical construction based on the average
reproduction coefficient (instant fitness). It is impossible to measure this
quantity in time intervals that are much shorter than life length. Hence, to
understand which system of factors we deal with, Liebig’s or a synergistic
one, we have to compare the theoretical consequences of their properties.
First of all, we can measure the results of adaptation, and use properties of
the optimal adaptation in ensembles of systems for analysis (Fig. 12, Fig.
13).
There is some evidence about the existence of synergistic systems of factors.
For example, the postsurgical rehabilitation of people suffering from lung cancer
of the III and IV clinical groups was studied [3]. The dynamics of variance and
correlations for them have directions which are unusual for Liebig’s systems:
an increase of the correlation corresponds to a decrease of the variance. Moreover,
analysis of the maxima and minima of correlations and mortality demonstrates
that in this case an increase of correlations corresponds to a decrease of
stress. Hence, in Ref. [3] the hypothesis was suggested that in this case some
factors superlinearly increase the harmfulness of other factors, and this is
an example of a synergistic system of factors. Thus, the law of the minimum
inverse paradox may give us a clue to the effect (Fig. 1) near the fatal
outcomes.
## 5 Discussion
### 5.1 Dynamics of the Correlations in Crisis
We study a universal effect in ensembles of similar systems under load of
similar factors: in crisis, typically, correlation increases, and, at the same
time, variance (and volatility) increases too. This effect is demonstrated for
humans, mice, trees, grassy plants, and financial time series. It is
represented as the left transition in Fig. 1, the transition from comfort to
crisis. Already a system of simple models of adaptation to one factor (we call
it the Selye model) gives a qualitative explanation of the effect.
For interaction of several factors two basic types of organization are
considered: Liebig’s systems and synergistic systems of factors. The
adaptation process (as well as phenomodification, ecological succession, or
microevolution) acts differently on these systems of factors: it makes the
tensions in Liebig’s systems more uniform (instead of a system with one limiting
factor) and those in synergistic systems less uniform. These theorems give us two paradoxes which
explain differences observed between artificial (less adapted) systems and
natural (well-adapted) systems.
Empirically, we expect the appearance of synergistic systems in extremely
difficult conditions, when factors appear that superlinearly amplify the harm
from other factors. This means that after the crisis reaches its bottom, it
can develop in two directions: recovery (both correlations and variance
decrease) or fatal catastrophe (correlations decrease, but variance does not). The
transition to fatal outcome is represented as the right transition in Fig. 1.
Some clinical data support these expectations.
### 5.2 Correlations Between the Thirty Largest FTSE Companies
The case study of the thirty largest companies from the British stock market for
the period 2006–2008 supports the hypothesis about the increase of
correlations in crisis. It is also demonstrated that the correlation in time
(between daily data) has diagnostic power as well (as does the correlation
between companies), and connections between days (Figs. 9, 10) may clearly
indicate and, sometimes, predict the chronology of the crisis. This approach
(use of two time moments instead of the time window) allows one to overcome the
smearing effect caused by the use of time windows (on this problem see [47,
48]).
The principal component analysis demonstrates that the largest eigenvalues of
the correlation matrices increase in crisis and under environmental pressure
(before the inverse effect “on the other side of crisis” appears). Different
methods for the selection of significant principal components (Kaiser’s rule,
the random matrix approach, and the broken stick model) give similar results in the
case study. Kaiser’s rule gives more principal components than the two other
methods, and the higher sensitivity of the indicator Dim${}_{K}$ causes some
difficulties in interpretation. The random matrix estimates select too small a
number of components, and the indicator Dim${}_{MP}$ seems not sensitive
enough. In our case study the best balance between sensitivity and stability
is given by the dimension estimated by the broken stick model, Dim${}_{BS}$.
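For comparison, the sketch below counts the components selected by Kaiser's rule and by the random-matrix rule for a toy data set with one strong common factor; the forms of the rules used here (eigenvalues above their mean, i.e. above 1 for a correlation matrix, and eigenvalues above the Marchenko-Pastur edge, respectively) are the commonly used ones and are our assumption, and the broken-stick dimension can be computed as in the earlier sketch above.

```python
import numpy as np

def kaiser_dim(eigvals):
    """Kaiser's rule: keep eigenvalues above the average (1 for a correlation matrix)."""
    return int(np.sum(eigvals > eigvals.mean()))

def rmt_dim(eigvals, n_assets, n_obs):
    """Random-matrix rule: keep eigenvalues above the Marchenko-Pastur edge."""
    return int(np.sum(eigvals > (1 + np.sqrt(n_assets / n_obs)) ** 2))

# correlated toy data: one strong "market mode" plus noise, 30 series, 20 observations
rng = np.random.default_rng(3)
market = rng.standard_normal((20, 1))
x = 0.8 * market + 0.6 * rng.standard_normal((20, 30))
eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))
print("Kaiser:", kaiser_dim(eigvals), " RMT:", rmt_dim(eigvals, 30, 20))
```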
### 5.3 Choice of Coordinates and the Problem of Invariance
All indicators of the level of correlations are non-invariant with respect to
transformations of coordinates. For example, rotation to the principal axes
annuls all the correlations. The dynamics of variance also depends on nonlinear
transformations of scales. The dimensionless variance of logarithms (or “relative
variance”) often demonstrates more stable behavior, especially when the changes of
the mean values are large.
The observed effect depends on the choice of attributes. Nevertheless, many
researchers observed it without a special choice of coordinate system. What
does this mean? We can propose a hypothesis: the effect may be so strong that it
is almost improbable to select a coordinate system in which it vanishes. For
example, if one accepts the Selye model (10), (11) then observability of the
effect means that for typical nonzero values of $\psi$ in crisis
$l_{k}^{2}\psi^{2}>{\rm var}(\epsilon_{k})$ (25)
for more than one value of $k$, where var stands for variance of the noise
component (this is sufficient for increase of the correlations). If
$\psi^{2}\sum_{k}l_{k}^{2}\gg\sum_{k}{\rm var}(\epsilon_{k})$
and the set of allowable transformations of coordinates is bounded (together
with the set of inverse transformations), then the probability of randomly
selecting a coordinate system which violates condition (25) is small (for
reasonable definitions of this probability and of the relation $\gg$). On
the other hand, the choice of attributes is never random, and one can look for
the reason for such wide observability of the effect in our (human) ways of
constructing attribute systems.
### 5.4 Two Programs for Further Research
First of all, the system of simple models of adaptation should be fitted to
various data, both economic and biophysical. Classical econometrics [65]
already deals with hidden factors; now we just have to fit a special nonlinear
model of adaptation to these factors.
Another possible direction is the development of dynamical models of
adaptation. In its present form the model of adaptation describes a single
action, the distribution of the adaptation resource. We avoid any kinetic modeling.
Nevertheless, adaptation is a process in time. We have to create a system of
models with a minimal number of parameters.
Models of individual adaptation could explain effects caused by external
factors or individual internal factors. They can also be used with the mean-
field models when the interaction between systems is presented as an
additional factor. The models of interaction need additional hypotheses and
data. In this paper we do not discuss such models, but in principle they may
be necessary, because a crisis may be caused not by purely external factors but
by a combination of external factors, individual internal dynamics, and
interaction between systems.
Acknowledgements. We are very grateful to many people for 21 years of
collaboration, to our first co-author [1] V.T. Manchuk, and to A.G. Abanov,
G.F. Bulygin, R.A. Belousova, R.G. Khlebopros, G.B. Kofman, A.S. Mansurov,
T.P. Mansurova, L.S. Mikitin, A.V. Pershin, L.I. Pokidysheva, M.G Polonskaya,
L.D. Ponomarenko, V.N. Razzhevaikin, K.R. Sedov, S.M. Semenov, E.N. Shalamova,
S.Y. Skobeleva, and G.N. Svetlichnaia. Many physiological data were collected
at the Institute for Medical Problems of Northern Regions (State Research
Institute for Medical Problems of Northern Regions, Siberian Branch of the Russian
(USSR) Academy of Medical Sciences, Krasnoyarsk). We also thank the editor
and the anonymous referees of Physica A for careful reading and the fruitful
criticism.
## References
* [1] A.N. Gorban, V.T. Manchuk, E.V. Petushkova (Smirnova), Dynamics of physiological parameters correlations and the ecological-evolutionary principle of polyfactoriality, Problemy Ekologicheskogo Monitoringa i Modelirovaniya Ekosistem [The Problems of Ecological Monitoring and Ecosystem Modelling], Vol. 10, Gidrometeoizdat, Leningrad, 1987, pp. 187–198.
* [2] K.R. Sedov, A.N. Gorban’, E.V. Petushkova (Smirnova), V.T. Manchuk, E.N. Shalamova, Correlation adaptometry as a method of screening of the population, Vestn. Akad. Med. Nauk SSSR. 1988;(10):69–75. PMID: 3223045
* [3] A.S. Mansurov, T.P. Mansurova, E.V. Smirnova, L.S. Mikitin, A.V. Pershin, How do correlations between physiological parameters depend on the influence of different systems of stress factors? Global & Regional Ecological Problems, R.G. Khlebopros (Ed.), Krasnoyarsk State Technical University Publ., 1994, ISBN 5-230-08348-4, pp. 499–516.
* [4] Strygina S.O., Dement’ev S.N., Uskov V.M., Chernyshova G.I., Dynamics of the system of correlations between physiological parameters in patients after myocardial infarction, in: Mathematics, Computer, Education, Proceedings of conference, Issue 7, Moscow, 2000, pp. 685–689.
* [5] L.I. Pokidysheva, R.A. Belousova, E.V. Smirnova, Method of adaptometry in the evaluation of gastric secretory function in children under conditions of the North, Vestn. Ross. Akad. Med. Nauk. 1996;(5):42–5. PMID: 8924826
* [6] G.N. Svetlichnaia, E.V. Smirnova, L.I. Pokidysheva, Correlational adaptometry as a method for evaluating cardiovascular and respiratory interaction. Fiziol. Cheloveka 23(3) (1997) 58–62. PMID: 9264951
* [7] A.V. Vasil’ev, G.Iu. Mal’tsev, Iu.V. Khrushcheva, V.N. Razzhevaikin, M.I. Shpitonkov. Applying method of correlation adaptometry for evaluating of treatment efficiency of obese patients, Vopr. Pitan. 76(2) (2007), 36–38. PMID: 17561653
* [8] L.D. Ponomarenko, E.V. Smirnova, Dynamical characteristics of the blood system in mice with phenylhydrazine anemia, Proceedings of the 9th International Symposium “Reconstruction of homeostasis”, Krasnoyarsk, Russia, March 15-20, 1998, vol. 1, 42–45.
* [9] I.V. Karmanova, V.N. Razzhevaikin, M.I. Shpitonkov, Application of correlation adaptometry for estimating a response of herbaceous species to stress loadings, Doklady Botanical Sciences, Vols. 346–348, 1996 January June, 4–7. [Translated from Doklady Akademii Nauk SSSR, 346, 1996.]
* [10] P.G. Shumeiko, V.I. Osipov, G.B. Kofman, Early detection of industrial emission impact on Scots Pine needles by composition of phenolic compounds, Global & Regional Ecological Problems, R.G. Khlebopros (Ed.), Krasnoyarsk State Technical University Publ., 1994, ISBN 5-230-08348-4, 536–543.
* [11] F. Longin, B. Solnik, Is the correlation in international equity returns constant: 1960-1990? J. International Money and Finance 14 (1) (1995), 3–26.
* [12] I. Meric, G. Meric, Co-Movements of European Equity Markets before and after the 1987 Crash, Multinational Finance J., 1 (2) (1997), 137–152.
* [13] S. Drożdż, F. Grümmer , A.Z. Górski, F. Ruf, J. Speth, Dynamics of competition between collectivity and noise in the stock market, Physica A 287 (2000) 440–449.
* [14] J.C. Gower, (1966) Some distance properties of latent root and vector methods used in multivariate analysis, Biometrika, 53, 325–338.
* [15] R.N. Mantegna, H.E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge: Cambridge University Press, (1999).
* [16] R.N. Mantegna, Hierarchical structure in financial markets, Eur. Phys. J. B 11 (1) (1999), 193–197.
* [17] J.-P. Onnela, A. Chakraborti, K. Kaski, J. Kertész, A. Kanto, Dynamics of market correlations: Taxonomy and portfolio analysis, Phys. Rev. E 68 (2003), 056110.
* [18] P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, H.E. Stanley Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series, Phys. Rev. Lett. 83 (1999), 1471–1474.
* [19] V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, T. Guhr, H.E. Stanley, Random matrix approach to cross correlations in financial data, Phys. Rev. E 65 (2002), 066126.
* [20] M. Potters, J.P. Bouchaud, L. Laloux, Financial applications of random matrix theory: old laces and new pieces, Acta Phys. Pol. B 36 (9) (2005), 2767–2784.
* [21] J. Wishart, The generalised product moment distribution in samples from a normal multivariate population, Biometrika 20A (1-2) (1928), 32- 52.
* [22] T. Heimo, G. Tibely, J. Saramaki, K. Kaski, J. Kertesz, Spectral methods and cluster structure in correlation-based networks, Physica A 387 (23) (2008), 5930–5945.
* [23] S. Çukur, M. Eryig̃it, R. Eryig̃t, Cross correlations in an emerging market financial data, Physica A 376 (2007) 555–564.
* [24] D. Matesanz, G.J. Ortega, Network analysis of exchange data: Interdependence drives crisis contagion, MPRA Paper No. 7720, posted 12 March 2008; e-print: http://mpra.ub.uni-muenchen.de/7720/
* [25] R. Smith, The Spread of the Credit Crisis: View from a Stock Correlation Network (February 23, 2009); e-print http://ssrn.com/abstract=1325803
* [26] A.C. Eliasson, C. Kreuter, On currency crisis: A continuous crisis definition (Deutsche Bank Research Quantitative Analysis Report), Conference paper, X International ”Tor Vergata” Conference on Banking and Finance, December 2001.
* [27] A.N. Gorban, E.V. Smirnova, T.A. Tyukina, e-print: arXiv:0905.0129v2, 2009.
* [28] H. Selye, Adaptation Energy, Nature 141 (3577) (21 May 1938), 926.
* [29] H. Selye, Experimental evidence supporting the conception of “adaptation energy”, Am. J. Physiol. 123 (1938), 758–765.
* [30] B. Goldstone, The general practitioner and the general adaptation syndrome, S. Afr. Med. J. 26 (1952), 88–92, 106–109 PMID: 14901129, 14913266.
* [31] R. McCarty, K. Pasak, Alarm phase and general adaptation syndrome, in: Encyclopedia of Stress, George Fink (ed.), Vol. 1, Academic Press, 2000, 126–130.
* [32] S. Breznitz (Ed.), The Denial of Stress, New York: International Universities Press, Inc., 1983.
* [33] J.K. Schkade, S. Schultz, Occupational Adaptation in Perspectives, Ch. 7 in: Perspectives in Human Occupation: Participation in Life, By Paula Kramer, Jim Hinojosa, Charlotte Brasic Royeen (eds), Lippincott Williams & Wilkins, Baltimore, MD, 2003, 181–221.
* [34] J.B.S. Haldane, The Causes of Evolution, Princeton Science Library, Princeton University Press, 1990.
* [35] G.F. Gause, The struggle for existence, Williams and Wilkins, Baltimore, 1934. Online: http://www.ggause.com/Contgau.htm.
* [36] I.M. Bomze, Regularity vs. degeneracy in dynamics, games, and optimization: a unified approach to different aspects. SIAM Review, 2002, Vol. 44, 394-414.
* [37] J. Oechssler, F. Riedel, On the Dynamic Foundation of Evolutionary Stability in Continuous Models. J. Economic Theory, 2002, Vol. 107, 223-252.
* [38] A.N. Gorban, Selection Theorem for Systems with Inheritance, Math. Model. Nat. Phenom., 2 (4) (2007), 1–45; e-print: cond-mat/0405451
* [39] E. Zuckerkandl, R. Villet, Concentration-affinity equivalence in gene regulation: Convergence of genetic and environmental effects, PNAS U.S.A., 85 (1988), 4784–4788.
* [40] M.J. West-Eberhard, Developmental Plasticity and Evolution, Oxford University Press US, 2003.
* [41] F. Lillo, R.N. Mantegna, Variety and volatility in financial markets, Phys. Rev. E 62 (2000), 6126; e-print cond-mat/0002438.
* [42] J. Whittaker, Graphical Models in Applied Multivariate Statistics. Wiley, Chichester, 1990.
* [43] D.R. Brillinger, Remarks concerning graphical models for time series and point processes. Revista de Econometria, 16 (1996), 1–23.
* [44] R. Fried, V. Didelez, V. Lanius, Partial correlation graphs and dynamic latent variables for physiological time series, in: Baier, Daniel (ed.) et al., Innovations in classification, data science, and information systems. Proceedings of the 27th annual conference of the Gesellschaft fur Klassifikation e. V., Cottbus, Germany, March 12?14, 2003. Springer, Berlin, 2005, 259–266.
* [45] V. Verma, N. Gagvani, Visualizing Intelligence Information Using Correlation Graphs, Proc. SPIE, Vol. 5812(2005), 271–282.
* [46] X.-H. Huynh, F. Guillet and H. Briand, Evaluating Interestingness Measures with Linear Correlation Graph, in: Advances in Applied Artificial Intelligence, Lecture Notes in Computer Science, 4031, Springer Berlin - Heidelberg, 2006, p. 312–321
* [47] J.-P. Onnela, A. Chakraborti, K. Kaski and J. Kertész, Dynamic asset trees and the Black Monday, Physica A 324 (2003), 247–252.
* [48] J.-P. Onnela, K. Kaski and J. Kertész, Clustering and information in correlation based financial networks, Eur. Phys. J. B 38 (2004), 353–362.
* [49] R. Cangelosi, A. Goriely, Component retention in principal component analysis with application to cDNA microarray data, Biology Direct (2007), 2:2. Online: http://www.biology-direct.com/content/2/1/2
* [50] A.M. Sengupta, P.P. Mitra, Distributions of singular values for some random matrices, Phys. Rev. E 60 (1999), 3389–3392.
* [51] G.V. Bulygin, A.S. Mansurov, T.P. Mansurova, E.V. Smirnova, Dynamics of parameters of human metabolic system during the short-term adaptation, Institute of Biophysics, Russian Academy of Sciences, Preprint 180B, 1992.
* [52] G.V. Bulygin, A.S. Mansurov, T.P. Mansurova, A.A. Mashanov, E.V. Smirnova, Impact of health on the ecological stress dynamics. Institute of Biophysics, Russian Academy of Sciences, Preprint 185B, Krasnoyarsk, 1992.
* [53] A.S. Mansurov, T.P. Mansurova, E.V. Smirnova, L.S. Mikitin, A.V. Pershin, Human adaptation under influence of synergic system of factors (treatment of oncological patients after operation), Institute of Biophysics Russian Academy of Sciences, Preprint 212B Krasnoyarsk, 1995.
* [54] W.N. Goetzmann, L. Li, K.G. Rouwenhorst, Long–Term Global Market Correlations (October 7, 2004). Yale ICF Working Paper No. 08-04; e-print: http://ssrn.com/abstract=288421
* [55] G.L. Litvinov, V.P. Maslov (Eds.), Idempotent mathematics and mathematical physics, Contemporary Mathematics, AMS, Providence, RI, 2005\.
* [56] F. Salisbury, Plant Physiology, 1992\. Plant physiology (4th ed.). Wadsworth Belmont, CA
* [57] Q. Paris. The Return of von Liebig’s “Law of the Minimum”, Agron. J., 84 (1992), 1040–1046
* [58] B.S. Cade, J.W. Terrell, R.L. Schroeder, Estimating effects of limiting factors with regression quantiles, Ecology 80 (1) (1999), 311–323.
* [59] A.N. Gorban, O. Radulescu, Dynamic and Static Limitation in Multiscale Reaction Networks, Revisited, Advances in Chemical Engineering 34 (2008), 103–173.
* [60] H.E. Daly, Population and Economics – A Bioeconomic Analysis, Population and Environment 12 (3) (1991), 257–263.
* [61] M.Y. Ozden, Law of the Minimum in Learning. Educational Technology & Society 7 (3) (2004), 5–8.
* [62] F.N. Semevsky, S.M. Semenov. Mathematical modeling of ecological processes. Gidrometeoizdat, Leningrad, 1982 [in Russian].
* [63] R.T. Rockafellar, Convex analysis, Princeton University Press, Princeton, NJ, 1970. Reprint: 1997.
* [64] S.M. Markose, Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems (CAS). Economic J. 115 (504) (2005), F159–F192; e-print: http://ssrn.com/abstract=745578
* [65] G.G. Judge, W.E. Griffiths, R.C. Hill, H. Lütkepohl, T.-C. Lee, The Theory and Practice of Econometrics, Wiley Series in Probability and Statistics, # 49 (2nd ed.), Wiley, New York 1985.
|
arxiv-papers
| 2009-05-01T17:53:34 |
2024-09-04T02:49:02.272946
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "A. N. Gorban, E. V. Smirnova, T. A. Tyukina",
"submitter": "Alexander Gorban",
"url": "https://arxiv.org/abs/0905.0129"
}
|
0905.0246
|
# Entropy-variation with respect to the resistance in quantized RLC circuit
derived by generalized Hellmann-Feynman theorem
Hong-yi Fan1, Xue-xiang Xu1,2and Li-yun Hu${}^{1,2\text{*}}$ 1Department of
Physics, Shanghai Jiao Tong University, Shanghai 200030, China
2College of Physics & Communication Electronics, Jiangxi Normal University,
Nanchang 330022, China
Emails: hlyun@sjtu.edu.cn; hlyun2008@126.com.
###### Abstract
By virtue of the generalized Hellmann-Feynman theorem for ensemble average, we
obtain internal energy and average energy consumed by the resistance $R$ in a
quantized RLC electric circuit. We also calculate entropy-variation with
respect to $R$ and derive the relation between the entropy and $R$; the resulting plots show that the entropy increases monotonically with $R$.
PACS numbers: 05.30.-d, 42.50.-p, 03.65.-w
## I Introduction
In the field of mesoscopic physics, Louisell was the first to quantize a
mesoscopic L-C (inductance $L$ and capacitance $C$) circuit as a quantum
harmonic oscillator 1 . He did so by quantizing the electric charge as the
coordinate operator $q$ and the electric current $I$ multiplied by $L$ as the
momentum operator $p$. Louisell's work has become increasingly popular because
mesoscopic L-C circuits may find wide application in quantum computing.
However, Louisell only calculated the quantum fluctuations of the L-C circuit
at zero temperature. In Ref. 2 Fan and Liang pointed out that, since the
electric current generates Joule heat, thermal effects should be taken into
account, so every physical observable should be evaluated as an ensemble
average. Moreover, since the entropy increases with the generation of Joule
heat, one should ask how the resistance $R$ in an RLC circuit affects the
entropy. We shall use the generalized Hellmann-Feynman theorem (GHFT) for
ensemble averages to discuss this topic. The usual Hellmann-Feynman (H-F)
theorem states 3 ; 4
$\frac{\partial E_{n}}{\partial\chi}=\left\langle\psi_{n}\right|\frac{\partial
H}{\partial\chi}\left|\psi_{n}\right\rangle,$ (1)
where $H$ (a Hamiltonian that depends on a parameter $\chi$) possesses the
eigenvector $\left|\psi_{n}\right\rangle,$
$H\left|\psi_{n}\right\rangle=E_{n}\left|\psi_{n}\right\rangle.$ For many
problems in which the energy levels are difficult to obtain directly, one can
resort to the H-F theorem to carry out analytical calculations. However, this
formula applies only to pure states, whereas quantum statistical mechanics
deals with statistical ensembles. A statistical ensemble is described by a
density matrix $\rho$, which is a non-negative, self-adjoint, trace-class
operator of trace 1 on the Hilbert space describing the quantum system.
Extending Eq. (1) to ensemble averages is therefore necessary and has been
done in Refs. 5 ; 6 ; 7 .
Our paper is arranged as follows: In Sec. 2 we briefly introduce the GHFT for
ensemble average $\left\langle H\left(\chi\right)\right\rangle_{e},$ where the
subscript $e$ denotes ensemble average. In Sec. 3, based on von Neumann's
quantum entropy definition $S=-k\mathtt{tr}\left(\rho\ln\rho\right)$ and using
the GHFT we derive the entropy-variation formula for $\frac{\partial
S}{\partial\chi}$ and its relation to
$\frac{\partial}{\partial\chi}\left\langle
H\left(\chi\right)\right\rangle_{e}.$ In Sec. 4 we use the GHFT to calculate
internal energy of the quantized RLC circuit and its fluctuation, as well as
the average energy consumed by resistance $R$. In Sec. 5 we employ the GHFT to
find the relation between the entropy and $R$. The resulting figures show
that the entropy increases monotonically with the resistance $R.$
## II Brief review of the generalized Hellmann-Feynman theorem
For the mixed states in thermal equilibrium described by density operators
$\rho=\frac{1}{Z}e^{-\beta H},\text{ \ \ }\beta=\left(kT\right)^{-1},$ (2)
where $Z=$tr$\left(e^{-\beta H}\right)$ is the partition function ($k$ is the
Boltzmann constant and $T$ is the temperature), we have proposed the GHFT 5 .
The ensemble average of the Hamiltonian $H$ (which depends on the parameter
$\chi$) is
$\left\langle H\left(\chi\right)\right\rangle_{e}=\text{tr}\left[\rho
H\left(\chi\right)\right]=\frac{1}{Z\left(\chi\right)}\sum_{j}e^{-\beta
E_{j}\left(\chi\right)}E_{j}\left(\chi\right)\equiv\bar{E}\left(\chi\right),$
(3)
and $\left\langle A\right\rangle_{e}\equiv\mathtt{tr}\left(\rho A\right)$ for
an arbitrary operator $A$ of the system. Performing the partial
differentiation with respect to $\chi,$ we have 5
$\displaystyle\frac{\partial\left\langle H\right\rangle_{e}}{\partial\chi}$
$\displaystyle=$
$\displaystyle\frac{1}{Z\left(\chi\right)}\left\\{\sum_{j}e^{-\beta
E_{j}\left(\chi\right)}\right.$ (4) $\displaystyle\left.\times\left[-\beta
E_{j}\left(\chi\right)+\beta\left\langle
H\right\rangle_{e}+1\right]\frac{\partial
E_{j}\left(\chi\right)}{\partial\chi}\right\\}.$
Then using Eq.(1) we can further write Eq.(4) as
$\frac{\partial}{\partial\chi}\left\langle
H\right\rangle_{e}=\left\langle\left(1+\beta\left\langle
H\right\rangle_{e}-\beta H\right)\frac{\partial
H}{\partial\chi}\right\rangle_{e}.$ (5)
Noting the relation
$\left\langle H\frac{\partial
H}{\partial\chi}\right\rangle_{e}=-\frac{\partial}{\partial\beta}\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e}+\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e}\left\langle H\right\rangle_{e},$ (6)
when $H$ is independent of $\beta,$ we can recast Eq. (5) as
$\frac{\partial}{\partial\chi}\left\langle
H\right\rangle_{e}=\frac{\partial}{\partial\beta}\left[\beta\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e}\right]=\left(1+\beta\frac{\partial}{\partial\beta}\right)\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e}.$ (7)
The integration of Eq.(7) yields two forms. One is
$\beta\left\langle\frac{\partial
H\left(\chi\right)}{\partial\chi}\right\rangle_{e}=\int
d\beta\frac{\partial}{\partial\chi}\left\langle H\right\rangle_{e}+K,$ (8)
which involves integration over $\beta$ ($K$ is an integration constant);
and the other is
$\left\langle
H\right\rangle_{e}=\int_{0}^{\chi}\left(1+\beta\frac{\partial}{\partial\beta}\right)\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e}d\chi+\left\langle
H\left(0\right)\right\rangle_{e},$ (9)
which involves integration over $\chi.$ Note that the fluctuation of $H$ can
be obtained from
$\left(\Delta H\right)^{2}=\left\langle
H^{2}\right\rangle_{e}-\bar{E}^{2}=-\frac{\partial\left\langle
H\right\rangle_{e}}{\partial\beta}.$ (10)
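Eq. (7) is straightforward to check numerically. The following is a minimal sketch (an illustration added here, not part of the original derivation) that verifies it for a truncated harmonic oscillator with $H(\chi)=\chi\,(a^{\dagger}a+1/2)$, using central finite differences in $\chi$ and $\beta$; the truncation dimension and parameter values are arbitrary choices.

```python
import numpy as np

def ensemble_avg(op_diag, H_diag, beta):
    """tr(rho * op) for a Hamiltonian diagonal in the chosen basis."""
    w = np.exp(-beta * (H_diag - H_diag.min()))   # shifted for numerical stability
    w = w / w.sum()
    return np.sum(w * op_diag)

N = 200                          # truncated Fock-space dimension (hbar = 1)
n = np.arange(N)
chi, beta = 1.3, 0.7             # illustrative parameter values
dchi, dbeta = 1e-5, 1e-5

H = lambda c: c * (n + 0.5)      # H(chi) = chi (a^dagger a + 1/2)
dH_dchi = n + 0.5

# left-hand side of Eq. (7): d<H>_e / d chi by central differences
lhs = (ensemble_avg(H(chi + dchi), H(chi + dchi), beta)
       - ensemble_avg(H(chi - dchi), H(chi - dchi), beta)) / (2 * dchi)

# right-hand side of Eq. (7): (1 + beta d/dbeta) <dH/dchi>_e
avg = lambda b: ensemble_avg(dH_dchi, H(chi), b)
rhs = avg(beta) + beta * (avg(beta + dbeta) - avg(beta - dbeta)) / (2 * dbeta)

print(lhs, rhs)                  # the two values agree to ~1e-6
```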
## III Deriving entropy-variation $\frac{\partial S}{\partial\chi}$ and its
relation to $\frac{\partial\left\langle H\right\rangle_{e}}{\partial\chi}$
from $S=-k\mathtt{tr}\left(\rho\ln\rho\right)$
In classical statistical mechanics the entropy $S$ is defined through
$F=U-TS,$ (11)
where $U$ is the system's internal energy, i.e., the ensemble average of the
Hamiltonian $\left\langle H\right\rangle_{e}$, and $F$ is the Helmholtz free
energy,
$F=-\frac{1}{\beta}\ln\sum_{n}e^{-\beta E_{n}}.$ (12)
According to Eq. (12) the entropy cannot be calculated until the system's
energy levels $E_{n}$ are known. In this work we consider how to derive the
entropy without knowing $E_{n}$ in advance, i.e., we will not diagonalize the
Hamiltonian before calculating the entropy; instead, our starting point is the
quantum-mechanical definition of entropy,
$S=-k\mathtt{tr}\left(\rho\ln\rho\right).$ (13)
It was von Neumann who extended the classical concept of entropy (put forth by
Gibbs) into the quantum domain. Note that, since a pure state satisfies
$\rho^{2}=\rho$, Eq. (13) assigns zero entropy to any pure state. However, in
many cases $\ln\rho$ is unknown until $\rho$ is diagonalized, so we explore
how to use the GHFT to calculate the entropy of some complicated systems,
which, to our knowledge, has not been done in the literature before. Rewriting
Eq. (13) as
$S=\beta k\mathtt{tr}(\rho H)+k\mathtt{tr}\left(\rho\ln
Z\right)=\frac{1}{T}\left\langle H\right\rangle_{e}+k\ln Z,$ (14)
where $\left\langle H\right\rangle_{e}$ corresponds to $U$ in Eq. (11), it
then follows that
$\frac{\partial
S}{\partial\chi}=\frac{1}{T}\left(\frac{\partial}{\partial\chi}\left\langle
H\right\rangle_{e}-\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e}\right),$ (15)
which indicates that the entropy-variation is proportional to the difference
between internal energy’s variation and the ensemble average of
$\frac{\partial H}{\partial\chi}.$ In particular, when $\rho$ is a pure state,
then $\frac{\partial}{\partial\chi}\left\langle
H\right\rangle_{e}=\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e},$ $\frac{\partial S}{\partial\chi}=0,$ $S$
is a constant (zero). Next, suppose that
$H=\sum_{i}\chi_{i}H_{i},\text{ }\left\langle
H\right\rangle_{e}=\sum_{i}\chi_{i}\left\langle H_{i}\right\rangle_{e},$ (16)
then due to $\left\langle\frac{\partial
H}{\partial\chi_{i}}\right\rangle_{e}=\left\langle H_{i}\right\rangle_{e},$ we
also have
$\frac{\partial S}{\partial\chi_{i}}=0.$ (17)
Eq. (15) also appears in Ref. 6 , but there the von Neumann entropy
$S=-k\mathtt{tr}\left(\rho\ln\rho\right)$ is not mentioned. Substituting Eq. (15)
into Eq. (7) yields
$T\frac{\partial
S}{\partial\chi}=\beta\frac{\partial}{\partial\beta}\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e},$ (18)
which is another form of the entropy-variation formula. It then follows that
$TS=\left\langle H\right\rangle_{e}-\int\left\langle\frac{\partial
H}{\partial\chi}\right\rangle_{e}d\chi+C,$ (19)
where $C$ is an integration constant of parameters involved in $H$ other than
$\chi.$
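As a consistency check (added here for illustration, not part of the original argument), Eq. (14) can be compared directly with the von Neumann definition (13) for any Hamiltonian whose spectrum is known; a minimal sketch for a truncated harmonic oscillator, with $k=\hbar=1$ and arbitrary parameter values, follows.

```python
import numpy as np

N = 200                              # truncated Fock space (k = hbar = 1)
E = 1.0 * (np.arange(N) + 0.5)       # oscillator spectrum with omega = 1
beta = 0.8

p = np.exp(-beta * E)
Z = p.sum()
p = p / Z                            # Boltzmann weights = eigenvalues of rho

S_von_neumann = -np.sum(p * np.log(p))            # Eq. (13) with k = 1
S_from_eq14 = beta * np.sum(p * E) + np.log(Z)    # Eq. (14): <H>/T + k ln Z

print(S_von_neumann, S_from_eq14)    # agree to machine precision
```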
## IV Internal energy and average energy consumed by resistance in the RLC
circuit
In terms of $q-p$ quantum variables $\left(\left[q,p\right]=i\hbar\right)$,
Louisell’s Hamiltonian for the quantized RLC circuit is
$H=\frac{1}{2L}p^{2}+\frac{1}{2C}q^{2}+\frac{R}{2L}(pq+qp).$ (20)
We now use the GHFT to calculate the internal energy $\left\langle
H\right\rangle_{e}$. Substituting Eq.(20) into Eq.(5) and letting $\chi$ be
$L$, $C$, and $R,$ respectively, we obtain
$\displaystyle-2L^{2}\frac{\partial\left\langle H\right\rangle_{e}}{\partial
L}$ $\displaystyle=\left\langle\left(1+\beta\left\langle
H\right\rangle_{e}-\beta
H\right)\left(p^{2}+R(pq+qp)\right)\right\rangle_{e},$ (21)
$\displaystyle-2C^{2}\frac{\partial\left\langle H\right\rangle_{e}}{\partial
C}$ $\displaystyle=\left\langle\left(1+\beta\left\langle
H\right\rangle_{e}-\beta H\right)\left(q^{2}\right)\right\rangle_{e},$ (22)
$\displaystyle 2L\frac{\partial\left\langle H\right\rangle_{e}}{\partial R}$
$\displaystyle=\left\langle\left(1+\beta\left\langle H\right\rangle_{e}-\beta
H\right)\left(pq+qp\right)\right\rangle_{e}.$ (23)
Let $\left|\Psi_{n}\right\rangle$ be an eigenvector of the Hamiltonian,
$H\left|\Psi_{n}\right\rangle=E_{n}\left|\Psi_{n}\right\rangle,$ with energy
eigenvalue $E_{n}$. From
$\left\langle\Psi_{n}\right|\left[q^{2}-p^{2},H\right]\left|\Psi_{n}\right\rangle=0,$
(24)
and
$\left[q^{2}-p^{2},H\right]=\left(\frac{i}{L}+\frac{i}{C}\right)(pq+qp)+2i\frac{R}{L}\left(p^{2}+q^{2}\right),$
(25)
we obtain the relation
$\left\langle\Psi_{n}\right|\left[\left(\frac{i}{L}+\frac{i}{C}\right)(pq+qp)+2i\frac{R}{L}\left(p^{2}+q^{2}\right)\right]\left|\Psi_{n}\right\rangle=0,$
(26)
Noticing that $\left\langle\beta\left\langle H\right\rangle_{e}-\beta
H\right\rangle_{e}=0,$ we then have the ensemble average
$\displaystyle\left\langle\left(1+\beta\left\langle H\right\rangle_{e}-\beta
H\right)\right.$
$\displaystyle\left.\times\left[\left(\frac{i}{L}+\frac{i}{C}\right)(pq+qp)+2i\frac{R}{L}\left(p^{2}+q^{2}\right)\right]\right\rangle_{e}\left.=\right.0.$
(27)
Substituting Eqs.(21)-(23) into Eq.(27), we obtain a partial differential
equation
$L^{2}\frac{\partial\left\langle H\right\rangle_{e}}{\partial
L}+C^{2}\frac{\partial\left\langle H\right\rangle_{e}}{\partial
C}+\left(LR-\frac{L^{2}}{2RC}-\frac{L}{2R}\right)\frac{\partial\left\langle
H\right\rangle_{e}}{\partial R}=0,$ (28)
which can be solved by the method of characteristics 8 ; 9 . The
characteristic equations are
$\frac{dL}{L^{2}}=\frac{dC}{C^{2}}=\frac{dR}{\left(LR-\frac{L^{2}}{2RC}-\frac{L}{2R}\right)},$
(29)
it then follows that
$\frac{1}{L}-\frac{1}{C}=c_{1},\text{
}\frac{R^{2}}{L^{2}}-\frac{1}{LC}=c_{2},$ (30)
where $c_{1}$ and $c_{2}$ are two arbitrary constants. We can now apply the
method above, in which the general solution of the partial differential equation
(28) is found by writing $\left\langle
H\right\rangle_{e}=f\left[c_{1},c_{2}\right],$ i.e.,
$\left\langle
H\right\rangle_{e}=f\left[\frac{1}{L}-\frac{1}{C},\frac{R^{2}}{L^{2}}-\frac{1}{LC}\right],$
(31)
where $f\left[x,y\right]$ is some function of $x,y$. In order to determine the
form of this function, we examine the special case $R=0$, i.e.,
$H_{0}=\frac{1}{2L}p^{2}+\frac{1}{2C}q^{2}=\hbar\omega_{0}\left(a^{\dagger}a+\frac{1}{2}\right),$
(32)
where $a=\sqrt{\frac{L\omega_{0}}{2\hbar}}q+i\sqrt{\frac{1}{2\hbar
L\omega_{0}}}p$ with $\omega_{0}=1/\sqrt{LC}$. According to the well-known Bose
statistics formula $\left\langle
H_{0}\right\rangle_{e}=\frac{\hbar\omega_{0}}{2}\coth\frac{\hbar\omega_{0}\beta}{2}$,
we then have
$\left\langle
H\left|{}_{R=0}\right.\right\rangle_{e}=f\left[\frac{1}{L}-\frac{1}{C},-\frac{1}{LC}\right]=\frac{\hbar\omega_{0}}{2}\coth\frac{\hbar\omega_{0}\beta}{2}.$
(33)
To determine the form of function $f\left[x,y\right],$ we let
$x=\frac{1}{L}-\frac{1}{C},$ $y=\frac{-1}{LC};$ the inverse relations are
$L=\frac{x-\sqrt{x^{2}-4y}}{2y},$ $C=-\frac{x+\sqrt{x^{2}-4y}}{2y},$ and
$\omega_{0}=\sqrt{-y}$. This implies that the form of function
$f\left[x,y\right]$ is
$f\left[x,y\right]=\frac{\hbar\sqrt{-y}}{2}\coth\frac{\hbar\beta\sqrt{-y}}{2},$
(34)
and we obtain the internal energy
$\left\langle
H\right\rangle_{e}=f\left[\frac{1}{L}-\frac{1}{C},\frac{R^{2}}{L^{2}}-\frac{1}{LC}\right]=\frac{\hbar\omega}{2}\coth\frac{\hbar\omega\beta}{2},$
(35)
where $\omega=\sqrt{\frac{1}{LC}-\frac{R^{2}}{L^{2}}}.$
Then according to Eq.(10) the fluctuation of $H$ is
$\left(\Delta
H\right)^{2}=\frac{\hbar^{2}\omega^{2}}{4}\frac{1}{\sinh^{2}\frac{\beta\hbar\omega}{2}}.$
(36)
Using Eq.(8) and the following integration formula,
$\int\frac{1}{e^{ax}-1}dx\allowbreak=\frac{1}{a}\left(\ln\left(e^{ax}-1\right)-ax\right),$
(37)
we have
$\displaystyle\left\langle\frac{\partial H}{\partial R}\right\rangle_{e}$
$\displaystyle=\frac{1}{2L}\left\langle\left(pq+qp\right)\right\rangle_{e}$
$\displaystyle=\frac{1}{\beta}\int\frac{\partial\left\langle
H\right\rangle_{e}}{\partial R}d\beta=-\frac{\hbar\allowbreak R}{2\omega
L^{2}}\coth\frac{\hbar\omega\beta}{2},$ (38)
so the average energy consumed by the resistance is
$\frac{R}{2L}\left\langle\left(pq+qp\right)\right\rangle_{e}=-\frac{\hbar
R^{2}}{2\omega L^{2}}\coth\frac{\hbar\omega\beta}{2},\text{
}\omega=\omega_{0}\sqrt{1-R^{2}C/L},$ (39)
where the minus sign implies that the resistance acts as an energy-consuming
element.
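The closed-form internal energy (35) can also be verified by brute-force diagonalization of the Hamiltonian (20) in a truncated Fock basis. The sketch below is an illustration added here (not from the original paper); it assumes $\hbar=1$ and arbitrary circuit parameters with $R^{2}<L/C$, and compares the numerical thermal average of $H$ with $\frac{\hbar\omega}{2}\coth\frac{\hbar\omega\beta}{2}$.

```python
import numpy as np

hbar = 1.0
L, C, R, beta = 1.0, 1.0, 0.3, 0.5         # illustrative values with R^2 < L/C
N = 400                                     # truncated Fock-space dimension

# ladder operators in the Fock basis of the bare LC oscillator
w0 = 1.0 / np.sqrt(L * C)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)          # annihilation operator
q = np.sqrt(hbar / (2 * L * w0)) * (a + a.T)         # [q, p] = i hbar
p = 1j * np.sqrt(hbar * L * w0 / 2) * (a.T - a)

# Louisell Hamiltonian, Eq. (20)
H = p @ p / (2 * L) + q @ q / (2 * C) + (R / (2 * L)) * (p @ q + q @ p)

E = np.linalg.eigvalsh(H)[: N // 2]         # keep low levels, where truncation is accurate
w = np.exp(-beta * E)
U_numeric = np.sum(E * w) / np.sum(w)

omega = np.sqrt(1.0 / (L * C) - R**2 / L**2)
U_exact = 0.5 * hbar * omega / np.tanh(0.5 * hbar * omega * beta)   # Eq. (35)

print(U_numeric, U_exact)                   # agree to several decimal places
```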
## V Entropy-variation with respect to the resistance
In this section, based on the above results we investigate the influence of
the resistance on the entropy of the RLC circuit. Substituting Eqs. (31) and
(38) into Eq. (15), we obtain
$\frac{\partial S}{\partial R}=\allowbreak\frac{\beta
R\hbar^{2}}{TL^{2}}\frac{\exp\left(\hbar\beta\omega\right)}{\left(\exp\left(\hbar\beta\omega\right)-1\right)^{2}}=\frac{\beta
R\hbar^{2}}{4TL^{2}}\frac{1}{\sinh^{2}\frac{\beta\hbar\omega}{2}}.$ (40)
Further, making use of the integral formula
$\int\frac{\ln y}{\left(y-1\right)^{2}}dy=\ln\left(y-1\right)-\frac{y\ln
y}{y-1},$ (41)
we derive the relation between the entropy and the resistance as follows
$S=-k\ln\left[\exp\left(\hbar\beta\omega\right)-1\right]+\frac{1}{T}\frac{\hbar\omega\exp\left(\hbar\beta\omega\right)}{\exp\left(\hbar\beta\omega\right)-1}.$
(42)
Obviously, when $R=0$ the entropy in Eq. (42) reduces to that of the LC circuit.
Based on the relation in Eq. (42), Figure 1 depicts the entropy as a function
of the resistance in the range $\left[0,\sqrt{L/C}\right]$. The figure shows
that the entropy increases monotonically with the resistance $R.$ As $R$
approaches the limit $\sqrt{L/C},$ the entropy tends to infinity.
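A short numerical evaluation of Eq. (42) (added here for illustration, with $\hbar=k=1$ and arbitrary values of $L$, $C$, and $T$) reproduces this behavior: the entropy grows monotonically with $R$ and diverges as $R\to\sqrt{L/C}$.

```python
import numpy as np

hbar = k = 1.0
L, C, T = 1.0, 1.0, 0.5
beta = 1.0 / (k * T)

def entropy(R):
    """Entropy of the quantized RLC circuit, Eq. (42)."""
    omega = np.sqrt(1.0 / (L * C) - R**2 / L**2)
    x = np.exp(hbar * beta * omega)
    return -k * np.log(x - 1.0) + hbar * omega * x / (T * (x - 1.0))

for R in [0.0, 0.5, 0.9, 0.99, 0.999]:      # R in [0, sqrt(L/C)) with L = C = 1
    print(R, entropy(R))                    # S increases and blows up as R -> 1
```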
In summary, by virtue of the generalized Hellmann-Feynman theorem for ensemble
averages, we have obtained the internal energy and the average energy consumed
by the resistance, and we have calculated the entropy-variation with respect
to the resistance in the quantized RLC circuit. The relation between entropy
and resistance has also been derived; the corresponding figure shows that the
entropy increases monotonically with $R.$
ACKNOWLEDGMENTS
The work was supported by the National Natural Science Foundation of China
under Grant Nos.10775097 and 10874174.
## References
* (1) W. H. Louisell, _Quantum Statistical Properties of Radiation_ , (John Wiley, New York, 1973).
* (2) H. Y. Fan, and X. T. Liang, Chin. Phys. Lett. 17, 174 (2000).
* (3) H. Hellmann, _Einfuehrung in die Quantenchemie_ (Deuticke, Leipzig, 1937).
* (4) R. P. Feynman, Phys. Rev. 56, 340 (1939).
* (5) H. Y. Fan and B. Z. Chen, Phys. Lett. A. 203, 95 (1995).
* (6) D. Popov, Int. J. Quant. Chem. 69, 159 (1998).
* (7) R. Dhurba, Phys. Rev. A. 75, 032514 (2007).
* (8) S. M. Barnett and P. M. Radmore, _Methods in Theoretical Quantum Optics_ , (Clarendon Press, Oxford, 1997).
* (9) M. Orszag, Quantum Optics, (Springer-Verlag, Berlin, 2000).
Figure 1: Entropy S as a function of resistance.
|
arxiv-papers
| 2009-05-03T06:52:38 |
2024-09-04T02:49:02.290040
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Hong-yi Fan, Xue-xiang Xu and Li-yun Hu",
"submitter": "Liyun Hu",
"url": "https://arxiv.org/abs/0905.0246"
}
|
0905.0288
|
# Contact process with mobile disorder
Ronald Dickman111dickman@fisica.ufmg.br Departamento de Física, Instituto de
Ciências Exatas,
Universidade Federal de Minas Gerais
C. P. 702, 30123-970, Belo Horizonte, Minas Gerais - Brazil
###### Abstract
I study the absorbing-state phase transition in the one-dimensional contact
process with mobile disorder. In this model the dilution sites, though
permanently inactive, diffuse freely, exchanging positions with the other
sites, which host a basic contact process. Even though the disorder variables
are not quenched, the critical behavior is affected: the critical exponents
$\delta$ and $z$, the ratio $\beta/\nu_{\perp}$ and the moment ratio
$m=\langle\rho^{2}\rangle/\rho^{2}$ take values different from those of
directed percolation, and appear to vary with the vacancy diffusion rate.
While the survival probability starting from a single active seed follows the
usual scaling, $P(t)\sim t^{-\delta}$, at the critical point, the mean number
of active sites and mean-square spread grow more slowly than power laws. The
critical creation rate increases with the vacancy density $v$ and diverges at
a value $v_{c}<1$. The scaling behavior at this point appears to be simpler
than for smaller vacancy densities.
###### pacs:
05.70.Jk, 02.50.Ga, 05.40.-a, 05.70.Ln
## I Introduction
Phase transitions to an absorbing state continue to attract much attention in
nonequilibrium statistical physics marro ; hinrichsen ; granada ; lubeck04 ;
odor04 , and have stimulated efforts to characterize the associated
universality classes. Absorbing-state transitions arise in models of
epidemics, population dynamics and autocatalytic chemical reactions, and
underlie self-organized criticality in sandpile models. The simplest models
exhibiting absorbing-state phase transitions are the contact process (CP)
harris ; marro and its discrete-time version, directed percolation (DP).
Scaling at absorbing state phase transitions has been studied extensively,
both theoretically and numerically, and more recently, in experiments takeuchi
. A central conclusion is that critical behavior of the DP type is generic for
models exhibiting a continuous phase transition to an absorbing state, in the
absence of any additional symmetries or conserved quantities janssen ;
grassberger . In particular, allowing particles to diffuse at a finite rate
does not affect the critical behavior of the contact process iwandcp .
The effect of quenched disorder on the CP, in the form of random dilution, as
well as of randomly varying creation rates associated with each site, has been
investigated in detail, beginning with the studies of Noest noest , and in
subsequent analyses of its unusual scaling properties agrd ; janssen97 ;
bramson ; webman ; cafiero ; hooyberghs ; vojta ; hoyos ; deoliveira08 .
Little attention has been given to the effect of mobile disorder, for example
in the form of diffusing passive sites. An exception is the recent study by
Evron et al. of a CP in which certain sites have a higher creation rate than
others, with the “good” and “bad” sites diffusing on the lattice evron . These
authors report anomalies in the off-critical behavior of the model.
Lack of interest in the effect of mobile disorder at an absorbing-state phase
transition presumably reflects a belief that it should not lead to any
significant changes in scaling properties, and might in fact be irrelevant. In
equilibrium, annealed disorder at fixed concentration leads to “Fisher
renormalization” of the critical exponents if the specific heat exponent
$\alpha>0$ in the pure model, and does not alter the exponents if $\alpha<0$
fisherren . The condition for Fisher renormalization (FR), like that of
Harris’ criterion harris74 , can be extended to models for which (as in the
present case) $\alpha$ is not defined berker , by using the equivalent
relation, $d\nu<2$. (In the present context $\nu$ corresponds to the
correlation length exponent $\nu_{\perp}$.) A subtlety nevertheless arises in
extending the notion of FR to nonequilibrium systems, because this phenomenon
requires equilibrium between the degrees of freedom associated with the
disorder (such as vacancy positions) and the other elements comprising the
system (e.g., spin variables). In the present case there is no energy function
that could favor (via a Boltzmann factor $e^{-\beta H}$) one disorder
configuration over another; all disorder configurations are equally probable,
since the CP degrees of freedom have no influence over the vacancy dynamics.
We therefore speak of “mobile” rather than “annealed” disorder. (Of course,
those disorder configurations that are more favorable to the maintenance of
activity in the CP will survive longer and so might dominate long-time
averages over surviving realizations.)
In the FR scenario, the exponent $\alpha$ is replaced by $-\alpha/(1-\alpha)$,
whilst the other static critical exponents (with the exception of $\eta$) are
multiplied by a factor of $1/(1-\alpha)$. Annealed disorder is expected to be
relevant for critical dynamics if $\nu(d+z)<2$, where $z$ is the dynamic
exponent alonso-munoz . In the case of the contact process, under the FR
scenario, the critical exponents $\beta$ and $\nu_{\perp}$ would be
renormalized by the same factor. (If we rely on the hyperscaling relation
$\alpha=2-d\nu_{\perp}$, we would expect that
$\beta\to\beta/(d\nu_{\perp}-1)$, and similarly for the other static
exponents.) It is less clear what would happen to the exponents governing
critical dynamics, such as $z$ or $\delta$, or to an order-parameter moment
ratio such as $m\equiv\langle\rho^{2}\rangle/\langle\rho\rangle^{2}$.
In this work I study the CP with mobile vacancies. The vacancies diffuse
freely, at rate $D$, exchanging positions with nondiluted sites, be they
active or inactive, which host the usual CP. Aside from its intrinsic
interest, processes of this sort should find application in epidemic modelling
(wandering of immune individuals in a susceptible population), and in
population dynamics (for example, a diffusing population of a third species in
a predator-prey system). The presence of diffusing impurities is in principle
relevant to experimental realizations of directed percolation in fluid media
takeuchi . It transpires that the critical creation rate $\lambda_{c}$
diverges at a certain vacancy density $v_{c}(D)<1$, so that an active state is
impossible for $v>v_{c}$. It is thus of interest to study the limiting case of
$\lambda\to\infty$, which corresponds to an epidemic that propagates
extremely rapidly, and can be impeded only by fragmenting the population.
The remainder of this paper is organized as follows. In Sec. II I define the
model and discuss some of its basic properties as well as the results of the
pair approximation. Sec. III presents the results of various simulation
approaches, and in Sec. IV the implications for scaling behavior are
discussed.
## II Model
In the basic contact process harris each site of a lattice is in one of two
states, “active” or “inactive”. Transitions of an active site to the inactive
state occur spontaneously (i.e., independent of the other states of the sites)
at unit rate, while an inactive site becomes active at rate $\lambda n/q$,
where $n$ is the number of active nearest neighbors and $q$ the total number
of neighbors. The configuration with all sites inactive is absorbing. The
model is known to exhibit a continuous phase transition between the active and
absorbing states at a critical creation rate $\lambda_{c}$; in the one-
dimensional CP $\lambda_{c}=3.29785(2)$. (Figures in parentheses denote
uncertainties.) The transition falls in the universality class of directed
percolation. In the CP with mobile disorder (CPMV), a fraction $v$ of the
sites are vacant, and do not participate in the usual CP dynamics. The
vacancies hop on the lattice at rate $D$. In a hopping transition, a vacancy
trades its position with its right or left neighbor. Denoting vacancies, and
active and inactive sites by v, 1 and 0, respectively, we have the transitions
v1 $\to$ 1v and v0 $\to$ 0v at rate $D/2$, and similarly for vacancy hopping
to the left. (From here on, an “inactive” site denotes one that is not a
vacancy, but that happens not to be active.) The typical evolution shown in
Fig. 1 illustrates that regions of higher than average vacancy density tend to
be devoid of activity, and vice-versa.
Figure 1: Typical space-time evolution, $L=200$, $v=0.1$, $D=1$, and
$\lambda=4.095$. Red points: active sites; black points: vacancies. This image
corresponds to 200 time steps, with time increasing downward.
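The dynamics defined above are simple to simulate directly. The following is a minimal sketch of one event of the one-dimensional CPMV on a ring (an illustration added here, not the author's code): active sites become inactive at rate 1, each active site attempts creation on a randomly chosen neighbor at rate $\lambda$ (so an inactive site is activated at rate $\lambda n/2$), and vacancies exchange positions with a random neighbor at rate $D$.

```python
import numpy as np
rng = np.random.default_rng(1)

EMPTY, ACTIVE, VACANCY = 0, 1, 2           # "empty" = non-vacant, inactive

def mc_step(state, lam, D):
    """One event of the 1D CPMV on a ring; returns the time increment."""
    L = state.size
    active = np.flatnonzero(state == ACTIVE)
    vac = np.flatnonzero(state == VACANCY)
    Na, Nv = active.size, vac.size
    total_rate = (1.0 + lam) * Na + D * Nv
    dt = 1.0 / total_rate                  # time increment used in the paper

    r = rng.random() * total_rate
    if r < Na:                             # deactivation, rate 1 per active site
        state[rng.choice(active)] = EMPTY
    elif r < (1.0 + lam) * Na:             # creation attempt, rate lam per active site
        i = rng.choice(active)
        j = (i + rng.choice([-1, 1])) % L  # random nearest neighbor
        if state[j] == EMPTY:              # vacancies can never be activated
            state[j] = ACTIVE
    else:                                  # vacancy hop, rate D per vacancy
        i = rng.choice(vac)
        j = (i + rng.choice([-1, 1])) % L
        state[i], state[j] = state[j], state[i]   # exchange positions
    return dt

# example: v = 0.1, D = 1, L = 200, all non-vacant sites initially active
L, v = 200, 0.1
state = np.full(L, ACTIVE)
state[rng.choice(L, int(v * L), replace=False)] = VACANCY
t = 0.0
while t < 100.0 and np.any(state == ACTIVE):
    t += mc_step(state, lam=4.099, D=1.0)
print("active fraction:", np.mean(state == ACTIVE))
```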
The CPMV is related to a model studied recently by Evron, Kessler and Shnerb
(EKS) evron , in which half the sites are “good”, meaning they have a creation
rate $\lambda_{1}$, while the creation rate at the other, “bad” sites is
$\lambda_{2}<\lambda_{1}$. The good and bad sites diffuse on the lattice at
rate $D$, independent of their state (active or inactive). The CPMV resembles
the limit $\lambda_{2}\to 0$ of the EKS model, but the correspondence is not
exact, since bad sites can in fact become active in the latter system, even if
$\lambda_{2}=0$. (For $\lambda_{2}=0$, a bad site can never produce offspring,
but it can always be activated by an active neighbor. In the CPMV by contrast,
the vacancies can never be activated.)
While the EKS model with $\lambda_{2}>0$ represents a weaker form of disorder
than the CPMV, one might expect the two models to fall in the same
universality class. In principle, both models should have the same continuum
limit, given by the following pair of stochastic partial differential
equations:
$\frac{\partial\rho}{\partial t}={\cal
D}\nabla^{2}\rho+(a+\gamma\phi)\rho-b\rho^{2}+\eta({\bf x},t)$ (1)
and
$\frac{\partial\phi}{\partial t}=\overline{\cal
D}\nabla^{2}\phi+\nabla\cdot{\boldmath\xi}({\bf x},t)$ (2)
where $\rho({\bf x},t)$ is the activity density and $\phi({\bf x},t)$ the
density of nonvacant (or “good”) sites, whose evolution is completely
independent of $\rho$. Without the term $\propto\phi\rho$, Eq. (1) is the
usual continuum description of directed percolation janssen ; grassberger ;
cardy , with $\eta({\bf x},t)$ a zero-mean, Gaussian noise with
autocorrelation $\langle\eta({\bf x},t)\eta({\bf y},s)\rangle\propto\rho({\bf
x},t)\delta({\bf x}-{\bf y})\delta(t-s)$. The second term on the
r.h.s. of Eq. (2) similarly represents a conserved delta-correlated noise. It
is not my intention to analyze these equations here. They merely serve to
highlight the observation that if, as seems plausible, they represent the
continuum limit of both models, then the models should have a common scaling
behavior.
Suppose that, as seems reasonable, the CPMV exhibits a continuous phase
transition to the absorbing state at a critical creation rate
$\lambda_{c}(v,D)$. For any nonzero vacancy concentration, $\lambda_{c}$ must
diverge as $D\to 0$, since we are then dealing with a CP on the line with
frozen dilution, which has no active phase for finite $\lambda$. The limiting
value of $\lambda_{c}$ as $D\to\infty$ is a subtler issue. If all three
species were to diffuse infinitely rapidly, the system would be well mixed,
and would exhibit mean-field like behavior. Note however that vacancy
diffusion does not change the ordering of the active and inactive sites. Thus
infinitely rapid vacancy diffusion does not correspond to a mean-field limit,
but rather to a system in which a fraction $v$ of sites are momentarily
replaced, independently at each instant and at each lattice site, by inert
elements. Hence we should expect to observe the behavior of the pure model, at
an effective reproduction rate $\lambda^{\prime}=(1-v)\lambda$, so that the
critical point is renormalized to
$\lambda_{c}^{\prime}=\lambda_{c,pure}/(1-v)$ in this limit.
While disorder that diffuses infinitely rapidly is irrelevant to critical
behavior, one may argue that it is relevant for finite $D$. Consider a
correlated region in the CP, with characteristic size $\xi$ and duration
$\tau$. If fluctuations in the vacancy density on this spatial scale relax on
a time scale $\tau_{\phi}\ll\tau$, then the CP will be subject, effectively,
to a disorder that is uncorrelated in time, which is irrelevanthinrichsenbjp .
But since the fluctuations relax via diffusion, $\tau_{\phi}\sim\xi^{2}$. In
the neighborhood of the critical point,
$\xi\sim|\lambda-\lambda_{c}|^{-\nu_{\perp}}$ and $\tau\sim\xi^{z}$, with
$z=\nu_{||}/\nu_{\perp}$, so that $\tau_{\phi}\sim\tau^{2/z}$, suggesting that
diffusing disorder is relevant for $z<2$, provided of course that quenched
disorder is relevant. (The two conditions are conveniently written as
$d\nu_{\perp}<2$ and $d\nu_{||}<4$.) In directed percolation this inequality
is satisfied in $d<4$ space dimensions.
It is perhaps worth mentioning that the present model is not equivalent to the
diffusive epidemic process (DEP) dep . In the latter model, there are two
kinds of particle (healthy and infected, say), which diffuse on the lattice.
Infected particles recover spontaneously while a healthy particle may become
infected if it occupies the same site as an infected particle. (There is no
limit on the occupation number at a given site in the DEP.) While it is
tempting to identify the active and inactive sites of the CPMV with
(respectively) the infected and healthy particles of the DEP, the analogy is
not valid since, in the CPMV, vacancy diffusion does not cause the nondiluted
sites to change their relative positions, as noted above.
A simple and often qualitatively reliable approach to estimating the phase
diagram is via $n$-site approximations, where $n=1$ corresponds to simple
mean-field theory. At order $n$, the $m$-site joint probability distributions
(for $m>n$) are approximated on the basis of the $n$-site distribution benav ;
mancam . Since simple mean-field theory is insensitive to diffusion, the
lowest order of interest here is $n=2$ (the pair approximation), which is
derived in the Appendix. The resulting transition line, $\lambda_{c}(D)$, is
compared with simulation results (for $v=0.1$) in Fig. 2. The pair
approximation turns out to be in rather poor agreement with simulation. In
particular, it fails to show that $\lambda_{c}$ diverges as $D\to 0$. Some
improvement might be expected using larger clusters, but it is not clear if
the correct asymptotic behavior would be captured even for large $n$.
Figure 2: Critical reproduction rate $\lambda_{c}$ versus diffusion rate $D$,
for dilution $v=0.1$. Points: simulation; solid line: pair approximation;
dashed line: limiting ($D\to\infty$) value, $\lambda_{c}=\lambda_{c,pure}/(1-v)$.
For a fixed diffusion rate $D>0$, we expect $\lambda_{c}$ to increase
monotonically with vacancy concentration $v$. One may ask whether
$\lambda_{c}$ remains finite for any $v<1$, or whether it diverges at some
vacancy concentration that is strictly less than unity. Now, for very large
$\lambda$, a string of two or more active sites is essentially immortal, for
each time a site becomes inactive (which occurs at rate 1), it is immediately
reactivated by its neighbor. Thus only isolated (nondiluted) sites can remain
inactive. For $v$ sufficiently large, we expect an isolated active site to
become inactive before it can make contact with another nondiluted site, so
that $\lambda_{c}\to\infty$ at some critical vacancy concentration
$v_{c}(D)<1$. The following simple argument furnishes an estimate of $v_{c}$.
The rate of loss of activity due to isolated active sites becoming inactive is
$\simeq\rho v^{2}$ (that is, the mean-field estimate for the density of
isolated active sites). On the other hand, the rate of reactivation of
inactive sites may be estimated as $D\rho v(1-\rho-v)$, i.e., the vacancy
diffusion rate times the mean-field estimate for the density of active-vacant-
inactive triplets. The resulting equation of motion,
$d\rho/dt=-v^{2}\rho+D\rho v(1-\rho-v)$, has the stationary solution
$\rho=1-v-v/D$, so that the activity density vanishes at $v=D/(1+D)\equiv
v_{c}$. While this approximation is too crude to yield a quantitative
prediction, the expression for $v_{c}$ does at least attain the correct
limiting values for $D\to 0$ and $D\to\infty$. (The pair approximation does
not yield a useful prediction for $v_{c}$.) Simulation results for $v_{c}$ are
reported in Sec. III.4.
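For completeness, the crude rate equation above can be integrated numerically. The sketch below (an illustration added here, not the author's calculation) uses forward Euler to confirm that the stationary activity approaches $\rho=1-v-v/D$ and vanishes at $v=D/(1+D)$.

```python
import numpy as np

def stationary_rho(v, D, rho0=0.5, dt=0.01, steps=200_000):
    """Integrate drho/dt = -v^2 rho + D rho v (1 - rho - v) by forward Euler."""
    rho = rho0
    for _ in range(steps):
        rho += dt * (-v**2 * rho + D * rho * v * (1.0 - rho - v))
        rho = max(rho, 0.0)
    return rho

D = 1.0
vc = D / (1.0 + D)                          # predicted critical vacancy density (= 0.5)
for v in [0.2, 0.4, 0.45, 0.49, 0.55]:
    print(v, stationary_rho(v, D), "  prediction:", max(1.0 - v - v / D, 0.0))
```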
## III Simulations
I performed three series of simulations of the model. The first series follows
the evolution starting from a maximally occupied configuration (all nonvacant
sites occupied) from the initial transient through the quasi-stationary (QS)
state, in which mean properties of surviving realizations are time-
independent. These simulations will be referred to as “conventional”, to
distinguish them from the second series, which employ the QS simulation method
qssim . The third series studies the spread of activity starting from an
initial seed. In all cases, the initial positions of the $vL$ vacancies are
selected at random from the $L$ sites of the ring. A new selection is
performed at each realization.
### III.1 Conventional simulations
In conventional simulations (CS), the fraction $\rho$ of active sites and the
moment ratio $m=\langle\rho^{2}\rangle/\rho^{2}$ are calculated as functions
of time. The time increment associated with a given event is $\Delta
t=[(1+\lambda)N_{a}+DN_{v}]^{-1}$, where $N_{a}$ and $N_{v}$ denote the
numbers of active sites and of vacancies, respectively. $\rho(t)$ and $m(t)$
are calculated as averages over surviving realizations in time intervals that
for large $t$ represent uniform intervals of $\ln t$, a procedure sometimes
called “logarithmic binning”. In addition to the QS values of $\rho$ and $m$,
conventional simulations furnish the mean lifetime $\tau$ from the asymptotic
decay of the survival probability, $P_{s}\sim e^{-t/\tau}$. Estimates for the
critical exponents $\delta$ and $z$ are obtained using the relations
$\rho(t)\sim t^{-\delta}$ and $m(t)-1\sim t^{1/z}$ silva . The latter
relations describe the approach to the QS regime in a large system at its
critical point. Finite-size scaling (FSS) theory implies that, at the critical
point, $\rho_{QS}\sim L^{-\beta/\nu_{\perp}}$, while $\tau\sim L^{z}$, and
that $m$ converges to a finite limiting value as $L\to\infty$ dic-jaff . Note
that we have, in principle, two independent ways to estimate the exponent $z$,
one involving the scaling of the lifetime with system size, the other using
the approach of $m$ to its QS value. (These scaling behaviors have been
verified at absorbing-state transitions in various models without disorder;
scaling in the presence of quenched disorder is generally more complicated.
Whether simple FSS applies for mobile disorder is an open question; the
results shown below provide partial confirmation.)
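For concreteness, a minimal sketch of the logarithmic binning mentioned above (an illustration added here, not the author's code): measured times and instantaneous values accumulated during a run are averaged within bins that are uniform in $\ln t$.

```python
import numpy as np

def log_binned_average(times, values, n_bins=50):
    """Average `values` in bins uniform in ln(t); returns bin centers and means."""
    times, values = np.asarray(times), np.asarray(values)
    mask = times > 0
    logt = np.log(times[mask])
    edges = np.linspace(logt.min(), logt.max(), n_bins + 1)
    idx = np.clip(np.digitize(logt, edges) - 1, 0, n_bins - 1)
    centers = np.exp(0.5 * (edges[:-1] + edges[1:]))
    sums = np.bincount(idx, weights=values[mask], minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    means = np.divide(sums, counts, out=np.full(n_bins, np.nan), where=counts > 0)
    return centers, means

# example with synthetic input: a decay rho(t) ~ t^(-0.0863) is recovered on log scales
t = np.cumsum(np.full(100_000, 1e-2))
centers, rho_binned = log_binned_average(t, t**-0.0863)
```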
I performed CS for system sizes $L=100,200,...,1600$, averaging over $N_{r}$
independent realizations, where $N_{r}=100\,000$ for $L=100$, and decreases
gradually with system size, to a value of 2 000 - 5 000 for $L=1600$. The run
times are such that all realizations eventually fall into the absorbing state.
In analyzing the CS data, I use two criteria for determining the critical
reproduction rate $\lambda_{c}$: (1) power-law scaling of the QS density,
$\rho_{QS}$, with system size, and (2) approach of the moment ratio $m_{QS}$
to a well defined limiting value with increasing system size. Verification of
(1) is facilitated by studying the curvature of $\ln\rho_{QS}$ versus $\ln L$,
by calculating a quadratic fit to the data, and determining the range of
$\lambda$ for which the quadratic term is zero to within uncertainty. It is
also useful to plot $L^{\beta/\nu_{\perp}}\rho_{QS}$ versus $L$, on log
scales, to check for curvature visually, as illustrated in Fig. 3. (Here
$\beta/\nu_{\perp}$ is estimated from the data for the value of $\lambda$
showing the least curvature.) Verification of the second criterion is
facilitated by plotting $m$ versus $1/L$; for subcritical values this graph
veers upward as $1/L\to 0$ and vice-versa, as shown in Fig. 4.
Figure 3: Scaled order parameter $\rho^{*}=L^{\beta/\nu_{\perp}}\rho$ versus
system size in conventional simulations for $v=0.1$, $D=2$, and (lower to
upper) $\lambda$ = 3.905, 3.910, 3.915, and 3.920. Error bars are smaller than
symbols.
Figure 4: Moment ratio $m=\langle\rho^{2}\rangle/\rho^{2}$ versus inverse
system size in conventional simulations for $v=0.1$, $D=2$, and (upper to
lower) $\lambda$ = 3.905, 3.910, 3.915, and 3.920.
The behavior of the activity density as a function of time, at criticality, is
shown in Fig. 5, for the case $v=0.1$ and $D=1$. In the pure contact process
the activity follows a power law, with very weak corrections to scaling, for
comparable times and system sizes marro . Here by contrast, there is a
significant positive curvature to the plots of $\rho(t)$ on log scales. It is
nevertheless possible to estimate the exponent $\delta$ from the data at
longer times and larger system sizes. Since the graphs of $\rho(t)$ are not
linear (on log scales) it is not possible to collapse the data onto a unique
master curve describing both the stationary and transient regimes, as is in
fact possible in the contact process without dilution. The quantity $m(t)-1$
shows cleaner power-law scaling over a somewhat larger period, as shown in
Fig. 6, allowing a rather precise estimate for the exponent $z$.
Figure 5: Order parameter $\rho(t)$ for $v=0.1$, $D=1$, and $\lambda=4.099$.
System sizes (upper to lower) $L$ = 200, 400, 800, and 1600. The slope of the
straight line is -0.0863.
Figure 6: Evolution of $m(t)-1$ for $v=0.1$, $D=1$, and $\lambda=4.099$.
System sizes as in Fig. 5. The slope of the straight line is 0.40.
Table I summarizes the results of the conventional simulations. The critical
parameters such as $\beta/\nu_{\perp}$ and $m$ are calculated for each value
of $\lambda$ simulated, and then evaluated at $\lambda_{c}$ using linear
interpolation. Each fit to the data involves an error bar, to which we must
add the (typically larger) uncertainty induced by the uncertainty in
$\lambda_{c}$ itself. In Table I, the values cited for the dynamic exponent
$z$ are obtained from the growth of the moment ratio (using $m-1\sim
t^{1/z}$), and not from the FSS relation for the lifetime, $\tau\sim L^{z}$.
In fact, the growth of the lifetime appears to be slower than a power law at
the critical point, as shown in Fig. 7 for $v=0.1$ and $D=1$. In all the cases
shown (including $\lambda=4.101$, above the critical value), the plots of
$\tau^{*}=L^{-z}\tau$ curve downward. (In this plot, for purposes of
visualization I use $z=2.4$, derived from a linear fit to the data for
$\tau(L)$ at $\lambda=4.099$.)
Figure 7: Scaled lifetime $\tau^{*}=L^{-z}\tau$ versus system size for
$v=0.1$, $D=1$, and (lower to upper) $\lambda=4.097$, 4.099, and 4.101.
As noted in the Introduction, averages restricted to surviving realizations
may yield statistical properties of the vacancies that differ from those of a
random mixture. I studied the density of nearest-neighbor vacancy pairs; for a
random mixture $\rho_{vv}=N(N-1)/L^{2}$, where $N=vL$ is the number of
vacancies. At the critical point ($\lambda=4.099$, $D=1$, and $v=0.1$), I find
that $\rho_{vv}$ grows with time, reaching a value about 2% greater than the
random mixture value in a system with $L=100$. For system sizes $L=200$ and
800, the increase over the random mixture value is 1% and 0.2%, respectively.
Thus selection effects on the vacancy configuration appear to be quite weak.
### III.2 Quasistationary simulations
This simulation method is designed to sample directly the quasistationary (QS)
regime of a system with an absorbing state, as discussed in detail in qssim .
The method involves maintaining a list of active configurations during the
evolution of a given realization. When a transition to the absorbing state is
imminent, the system is instead placed in one of the active configurations,
chosen randomly from the list. This procedure has been shown to reproduce
faithfully the properties obtained (at much greater computational expense) in
conventional simulations. The theoretical basis for the QS simulation method
remains valid in the presence of variables that continue to evolve even in the
absence of activity, such as the vacancy positions in the CPMV, or the
positions of healthy particles in the diffusive epidemic process epidif2 .
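In outline, the scheme can be sketched as follows (an illustrative reading of the method described above, not the author's code, and with the list-update rule assumed to be a per-event replacement with probability $p_{rep}$): a list of previously visited active configurations is maintained and occasionally refreshed, and whenever the dynamics would enter the absorbing state the system is instead placed in a randomly chosen saved configuration.

```python
import random
import numpy as np

def qs_update(state, saved, p_rep, step_fn):
    """One step of a quasistationary simulation.

    state   : current configuration (contains activity on entry)
    saved   : list of stored active configurations
    p_rep   : probability of overwriting a random saved entry with `state`
    step_fn : function applying one ordinary dynamical event in place
    """
    # occasionally refresh the list of saved configurations
    if random.random() < p_rep:
        saved[random.randrange(len(saved))] = state.copy()

    step_fn(state)

    # if the last event emptied the system, jump to a saved active configuration
    # (equivalently, the transition to the absorbing state is not executed)
    if not np.any(state == 1):               # 1 labels active sites
        state[:] = random.choice(saved)
    return state
```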
I performed QS simulations on rings of $L$ sites, as the conventional
simulations. The simulation yields a histogram of the accumulated time during
which the system has exactly 1, 2,…n,…, active sites, which is used to
calculate the QS order parameter $\rho_{QS}$ and moment ratio $m_{QS}$. The
lifetime $\tau$ is given by the mean time between attempted visits to the
absorbing state, in the QS regime. Results are obtained from averages over 10
independent realizations, each lasting $t_{max}=10^{8}$ time units, with an
initial portion ($10^{4}$ to $10^{5}$ time steps, depending on the system size),
discarded to ensure the system has attained the QS state. One thousand saved
configurations are used to implement the QS simulation procedure qssim .
Values of the replacement probability $p_{rep}$ range from $10^{-3}$ to
$6\times 10^{-5}$ (smaller values for larger $L$); during the initial
relaxation period, $p_{rep}=0.01$.
Comparing the results of QS and conventional simulations near the critical
point, I find no significant differences for $v=0.1$ and $D=0.5$, in systems
of up to 1600 sites. For $D=1$, the results for $\rho$ and $m$ again agree to
within uncertainty, but the QS simulations yield slightly smaller values for
the lifetime than conventional simulations. (For $L=800$, the difference is
about 5%.) The difference can be understood if we suppose that certain initial
vacancy configurations are more favorable than others for the survival of
activity. In the conventional simulations each initial configuration makes a
single contribution to the average survival time, whereas in QS simulations,
realizations starting with less favorable configurations (with shorter
lifetimes) visit the absorbing state more frequently than those starting from
favorable ones, and so tend to dominate the average for the lifetime. This
presumably reflects the slow relaxation of the vacancy density fluctuations.
An advantage of conventional simulations in this respect is that many more
initial configurations are sampled than in the QS simulations. (The former
also provide information on the relaxation process, that is not available in
QS simulations.) Despite the minor difference in lifetime estimates, the
otherwise excellent agreement between the two simulation methods lends
confidence to the results obtained.
### III.3 Spreading simulations
In light of the surprising results of the conventional simulations, it is of
interest to study the model using a complementary approach, which follows the
spread of activity, starting from a localized seed. Such spreading or “time-
dependent” simulations were pioneered in the context of the contact process by
Grassberger and de la Torre torre . Starting with a single active site at the
origin, one determines (as an average of a set of many realizations) the
survival probability $P(t)$, mean number of active sites $n(t)$, and mean-
square spread, $R^{2}(t)=\langle\sum_{j}x_{j}(t)^{2}\rangle/n(t)$. (Here the
sum is over all active sites, with $x_{j}$ the position of the $j$-th active
site, and the brackets denote an average over all realizations.) The spreading
process is followed up to a certain maximum time $t_{m}$, with the system
taken large enough that activity does not reach the boundaries up to this
time. The expected scaling behaviors at the critical point are conventionally
denoted $P(t)\sim t^{-\delta}$, $n(t)\sim t^{\eta}$ and $R^{2}(t)\sim
t^{z_{s}}$. Away from the critical point, these quantities show deviations
from power laws. $P(t)$, for example, decays exponentially for
$\lambda<\lambda_{c}$, and approaches a nonzero asymptotic value,
$P_{\infty}$, for $\lambda>\lambda_{c}$.
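These observables are straightforward averages over independent realizations. The sketch below (an illustration added here, not the author's code) assumes each realization has recorded, at a common set of measurement times, the positions of its active sites relative to the seed.

```python
import numpy as np

def spreading_observables(runs):
    """Compute P(t), n(t), and R^2(t) from a list of realizations.

    runs[r][k] is the (possibly empty) list of active-site positions of
    realization r at measurement time index k, measured from the origin.
    """
    n_runs, n_times = len(runs), len(runs[0])
    P = np.zeros(n_times)        # survival probability
    n = np.zeros(n_times)        # mean number of active sites (over all runs)
    sumx2 = np.zeros(n_times)    # mean of sum_j x_j^2 (over all runs)
    for run in runs:
        for k, xs in enumerate(run):
            if xs:
                P[k] += 1.0
            n[k] += len(xs)
            sumx2[k] += sum(x * x for x in xs)
    P /= n_runs
    n /= n_runs
    R2 = np.divide(sumx2 / n_runs, n, out=np.zeros(n_times), where=n > 0)
    return P, n, R2
```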
In the CP with mobile disorder, this scaling picture appears to hold for the
survival probability but not for $n(t)$ and $R^{2}(t)$. For example, Fig. 8
shows the decay of the survival probability for $v=0.1$ and $D=2$; $P(t)$ is
well fit by an asymptotic power law with $\delta=0.0971(5)$. (Note however
that at shorter times ($t<100$) the decay is governed by a larger exponent,
with a value of about 0.147.) The small differences between estimates for
$\delta$ characterizing the decay of the order parameter in CS, and that of
the survival probability in spreading simulations ($\delta_{c}$ and
$\delta_{s}$, respectively, in Table I), can be attributed to the deviations
from a pure power law noted above in the CS. (Note also that the uncertainties
reported for $\delta_{s}$ do not include any contribution due to the
uncertainty in $\lambda_{c}$, and so are smaller than the uncertainties in
$\delta_{c}$.) The behaviors of $n$ and $R^{2}$ at the critical point are
illustrated in Fig. 9. While I have not found a simple functional form capable
of fitting these data, it is clear that the growth is slower than a power law.
At short times $n$ and $R^{2}$ follow apparent power laws, with exponents of
0.35 and 1.22, respectively, which are comparable to the DP spreading
exponents $\eta\simeq 0.314$ and $z_{s}\simeq 1.27$.
Figure 8: Lower curve: survival probability $P$ versus time for $v=0.1$ and
$D=2$, $\lambda=3.915$, average over $10^{4}$ realizations. Upper curve:
Scaled survival probability $t^{\delta}P(t)$ using $\delta=0.0971$.
Figure 9: Mean number $n$ of active sites (lower curve) and mean-square
distance from origin $R^{2}$ (upper curve) versus time; parameters as in Fig.
8.
### III.4 Critical vacancy concentration
At a fixed diffusion rate, one expects $\lambda_{c}$ to be an increasing
function of $v$, as noted in Sec. II; simulation results for $D=1$ confirm
this. (For $v\geq 0.2$ I determine $\lambda_{c}$ in conventional simulations,
using the criteria discussed above, in studies of systems with $L\leq 800$.)
To determine the critical vacancy concentration $v_{c}(D)$, at which
$\lambda_{c}$ diverges, I perform conventional simulations at an effectively
infinite creation rate. This is done by (1) allowing only isolated active
sites to become inactive, at a rate of unity, and (2) activating any
nondiluted site the instant it gains an active neighbor. (Thus a string of
inactive sites are activated simultaneously when the right- or leftmost site
acquires an active neighbor via vacancy motion.) These studies yield
$v_{c}=0.517(1)$ for $D=1$, and $v_{c}\simeq 0.4182(5)$ for $D=0.2$. Fig. 10
shows $\lambda_{c}(v)$ for $D=1$; the data suggest that
$\lambda_{c}\sim(v_{c}-v)^{-1.16}$ (see inset). The available results are
consistent with $\beta/\nu_{\perp}$, and the critical moment ratio $m_{c}$,
being constant along the critical line for fixed $D=1$.
I turn now to the scaling behavior at the critical vacancy density $v=v_{c}$,
$\lambda=\infty$. For $D=1$, finite-size scaling analysis of the order
parameter yields $\beta/\nu_{\perp}=0.184(20)$. Unlike in the case $v=0.1$
discussed above, here the survival time follows a power-law with $z=1.98(3)$,
close to the value for $z$ obtained via analysis of the growth of $m(t)$. The
short-time scaling behavior $\rho\sim t^{-\delta}$ is followed quite clearly
(see Fig. 11), without the strong crossover effects observed for $v=0.1$ (Fig.
5). Fig. 12 is a scaling plot of $m-1$, using $t^{*}=t/L^{z}$, showing a near-
perfect data collapse for three system sizes, and a clean power law, $m-1\sim
t^{1/z}$. There is also good evidence for power-law scaling of the survival
probability and of $n$ and $R^{2}$ at $v_{c}$, as shown in Fig. 13. Thus the
scaling behavior at the critical vacancy density appears to be simpler than
that observed for $v=0.1$. Results for critical parameters at the critical
vacancy density are summarized in Table II. For the two diffusion rates
studied, the exponents and $m_{c}$ agree to within uncertainty. The spreading
exponents satisfy the hyperscaling relation torre $4\delta+2\eta=dz$ to
within uncertainty. The principal inconsistency in these data is in regard to
the scaling relation $z=2/z_{s}$. In fact $2/z_{s}$ is about 8% (5%) greater
than $z_{m}$ for $D=0.2$ ($D=1$). Studies of larger systems and a more precise
determination of $v_{c}$ should help to resolve this discrepancy.
Figure 10: Critical creation rate $\lambda_{c}$ versus vacancy concentration
$v$ for $D=1$. The dashed vertical line marks $v_{c}$. Inset: the same data
plotted versus $v_{c}-v$; the slope of the straight line is -1.16.
Figure 11: Order parameter versus time for $v=0.5176$, $\lambda=\infty$, with
$D=1$. System sizes $L=796$ (upper) and 1592 (lower). The slope of the
straight line is -0.093.
Figure 12: Scaling plot of $m-1$ versus $t^{*}=t/L^{z}$ using $z=1.98$.
Parameters $v=0.5176$, $\lambda=\infty$, and $D=1$. System sizes $L=398$ (+),
$L=796$ ($\times$), and 1592 (circles). The slope of the straight line is
0.51.
Figure 13: Spreading simulations at the critical vacancy density. Solid lines
(lower to upper): $P(t)$, $n(t)$ and $R^{2}(t)$. The corresponding dotted
lines show the scaled quantities $P^{*}=t^{\delta}P$, $n^{*}=t^{-\eta}n$, and
$R^{*2}=t^{-z_{s}}R^{2}$, using $\delta=0.086$, $\eta=0.307$, and
$z_{s}=0.965$. Parameters $v=0.516$, $\lambda=\infty$, $D=1$, system size
$L=6000$.
## IV Discussion
I study the one-dimensional contact process with mobile vacancies, verifying
that the model exhibits a continuous phase transition between the active and
the absorbing phases, as in the undiluted model. The scaling behavior is,
however, considerably more complex than the Fisher renormalization scenario
that might be expected by analogy with equilibrium systems with annealed
disorder. (The analogy, as noted, is somewhat flawed since the disorder
variables are not in equilibrium with the other degrees of freedom.) Cluster
approximations appear to be incapable of providing an accurate phase diagram.
Quantitative results on critical behavior are obtained using conventional
simulations (CS) and spreading studies. Quasistationary simulations yield
results consistent with CS, except for a slightly smaller survival time.
The scaling behavior is anomalous in several aspects. First, the growth of the
lifetime $\tau$ at the critical point appears to be slower than a power law,
as is also the case for mean number of active sites and mean-square spread in
spreading simulations at the critical point. The latter finding is reminiscent
of the one-dimensional CP with quenched disorder in the form of an
annihilation rate randomly taking one of two values, independently at each
site bramson ; webman . In this case an intermediate phase arises, in which
the survival probability, starting from a single active site, attains a
nonzero value as $t\to\infty$, but the active region grows in a sublinear
manner with time. The spreading simulations also show a crossover from scaling
with exponent values similar to those of DP at short times, to a different
behavior at longer times.
In Sec. II it was argued that $\lambda_{c}\to\infty$ as $D\to 0$, and that
$\lambda_{c}\to\lambda_{c,pure}/(1-v)$ in the opposite limit. The available
data are consistent with this (see Fig. 2); extrapolation of the data for
$v=0.1$ yields a limiting ($D\to\infty$) value of $\lambda_{c}=3.65$, quite
near $\lambda_{c,pure}/(1-v)\simeq 3.664$.
For vacancy density $v=0.1$, the critical exponents $\delta$ and $z$, the
ratio $\beta/\nu_{\perp}$, and moment ratio $m_{c}$ vary systematically with
the vacancy diffusion rate (Table I). With increasing $D$ these parameters
tend to their directed percolation values, as might be expected, since the
limit $D\to\infty$ corresponds to the usual contact process, with a
renormalized creation rate. There is preliminary evidence that some critical
properties are insensitive to varying the vacancy concentration $v$ at a fixed
diffusion rate. Preliminary results for critical behavior at the critical
vacancy density $v_{c}$ suggest a simpler scaling behavior, without strong
crossover effects, at this point. The static, transient, and spreading
behavior is quite analogous to that observed in the basic contact process, but
with a different set of critical exponents. This motivates the conjecture that
for $v>0$ all points along the critical line $\lambda_{c}(v)$ flow, in the
renormalization group sense, to the same fixed point. For smaller values of
$v$ one would then expect to observe a crossover between DP scaling and that
of the new fixed point, associated with mobile disorder. More extensive
studies are needed to verify and fill out this picture.
The effects of mobile disorder reported here are more extensive than those found by
Evron et al. in a related model (EKS) evron . These authors do not find, for
example, changes in the static critical exponents. In comparing the two
models, it should be noted that the disorder in the CPMV is stronger than that
involved in the EKS model. The difference nevertheless raises an interesting
puzzle regarding the universality of disorder effects.
Intuitively, the anomalous scaling properties of the CP with mobile disorder
are associated with fluctuations in the local, time-dependent concentration of
vacancies, which relax slowly in the long-wavelength limit. Developing a
quantitative theory of how such fluctuations change the critical behavior is a
challenge for future work, with potential applications in epidemiology and the
dynamics of spatially distributed populations. Investigation of the two-
dimensional case is also relevant for applications, and of intrinsic interest
for developing a more complete understanding of scaling.
Acknowledgments
I thank Thomas Vojta, José A. Hoyos, and Miguel A. Muñoz for helpful comments.
This work was supported by CNPq and Fapemig, Brazil.
## References
* (1) J. Marro and R. Dickman, Nonequilibrium Phase Transitions in Lattice Models (Cambridge University Press, Cambridge, 1999).
* (2) H. Hinrichsen, Adv. Phys. 49, 815 (2000).
* (3) M. A. Muñoz, R. Dickman, R. Pastor-Satorras, A. Vespignani, and S. Zapperi, in Proceedings of the Sixth Granada Seminar on Computational Physics, edited by J. Marro and P. L. Garrido (AIP, New York, 2001).
* (4) S. Lübeck, Int. J. Mod. Phys. B 18, 3977 (2004).
* (5) G. Ódor, Rev. Mod. Phys. 76, 663 (2004).
* (6) T. E. Harris, Ann. Probab. 2, 969 (1974).
* (7) K. A. Takeuchi, M. Kuroda, H. Chaté, and M. Sano, Phys. Rev. Lett. 99, 234503 (2007).
* (8) H.-K. Janssen, Z. Phys. B 42, 151 (1981).
* (9) P. Grassberger, Z. Phys. B 47, 465 (1982).
* (10) I. Jensen and R. Dickman, J. Phys. A 26, L151 (1993).
* (11) A. J. Noest, Phys. Rev. Lett. 57, 90 (1986); Phys. Rev. B 38, 2715 (1988).
* (12) A. G. Moreira and R. Dickman, Phys. Rev. E 54, R3090 (1996); R. Dickman and A. G. Moreira, Phys. Rev. E 57, 1263 (1998).
* (13) H.-K. Janssen, Phys. Rev. E 55, 6253 (1997).
* (14) M. Bramson, R. Durrett, and R. H. Schonmann, Ann. Prob. 19, 960 (1991).
* (15) I. Webman, D. ben-Avraham, A. Cohen, and S. Havlin, Phil. Mag. B 77, 1401 (1998).
* (16) R. Cafiero, A. Gabrielli, and M. A. Muñoz, Phys. Rev. E 57, 5060 (1998).
* (17) J. Hooyberghs, F. Igloi, and C. Vanderzande, Phys. Rev. Lett. 90, 100601 (2003); Phys. Rev. E 69, 066140 (2004).
* (18) T. Vojta and M. Dickison, Phys. Rev. E 72, 036126 (2005).
* (19) J. A. Hoyos, Phys. Rev. E 78, 032101 (2008).
* (20) M. M. de Oliveira and S. C. Ferreira, J. Stat. Mech. (2008) P11001.
* (21) G. Evron, D. A. Kessler, and N. M. Shnerb, eprint: arXiv:0808.0592.
* (22) M. E. Fisher, Phys. Rev. 176, 257 (1968).
* (23) A. B. Harris, J. Phys. C 7, 1671 (1974).
* (24) A. N. Berker, Physica A 194, 72 (1993).
* (25) J. J. Alonso and M. A. Muñoz, Europhys. Lett. 56, 485 (2001).
* (26) J. L. Cardy and R. L. Sugar, J. Phys. A 13, L423 (1980).
* (27) H. Hinrichsen, Braz. J. Phys. 30, 69 (2000).
* (28) R. Kree, B. Schaub, and B. Schmittmann, Phys. Rev. A 39, 2214 (1989).
* (29) D. ben-Avraham and J. Köhler, Phys. Rev. A 45, 8358 (1992).
* (30) R. Dickman, Phys. Rev. E 66, 036122 (2002).
* (31) R. da Silva, R. Dickman, and J. R. Drugowich de Felício, Phys. Rev. E 70, 067701 (2004).
* (32) R. Dickman and J. K. Leal da Silva, Phys. Rev. E 58, 4266 (1998).
* (33) M. M. de Oliveira and R. Dickman, Phys. Rev. E 71, 016129 (2005).
* (34) R. Dickman and D. S. Maia, J. Phys. A 41, 405002 (2008).
* (35) P. Grassberger and A. de la Torre, Ann. Phys. (N.Y.) 122, 373 (1979).
* (36) I. Jensen, J. Phys. A 29, 7013 (1996).
Table I. Critical parameters of the CPMV with vacancy density $v=0.1$.
$\delta_{c}$ and $\delta_{s}$ denote, respectively, the exponent as determined
in conventional and in spreading simulations. DP values from jensen96 .
$D$ | $\lambda_{c}$ | $\beta/\nu_{\perp}$ | $m_{c}$ | $z$ | $\delta_{c}$ | $\delta_{s}$
---|---|---|---|---|---|---
0.5 | 4.375(2) | 0.175(3) | 1.076(2) | 2.65(4) | 0.0745(15) | 0.0765(2)
1 | 4.099(1) | 0.191(3) | 1.085(2) | 2.49(1) | 0.086(2) | 0.0837(5)
2 | 3.915(1) | 0.205(3) | 1.096(3) | 2.36(5) | 0.101(4) | 0.0971(5)
5 | 3.7746(10) | 0.235(4) | 1.123(4) | 1.92(2) | 0.137(3) | 0.1293(5)
DP | 3.29785(2) | 0.2521(1) | 1.1736(1) | 1.58074(4) | 0.15947(3) | (=$\delta_{c}$)
Table II. Critical parameters of the CPMV at the critical vacancy density
$v_{c}$. $\delta_{c}$ and $\delta_{s}$ as in Table I. $z_{m}$ is the dynamic
exponent as determined from the growth of $m$ in conventional simulations. DP
values from jensen96 .
$D$ | $v_{c}$ | $\beta/\nu_{\perp}$ | $m_{c}$ | $z_{m}$ | $\delta_{c}$ | $\delta_{s}$ | $\eta$ | $z_{s}$
---|---|---|---|---|---|---|---|---
0.2 | 0.4182(5) | 0.174(6) | 1.083(3) | 1.95(4) | 0.087(2) | 0.086(2) | 0.303(3) | 0.95(1)
1 | 0.517(1) | 0.184(20) | 1.084(11) | 1.98(3) | 0.091(4) | 0.086(2) | 0.307(1) | 0.965(10)
DP | - | 0.2521(1) | 1.1736(1) | 1.58074(4) | 0.15947(3) | (=$\delta_{c}$) | 0.31368(4) | 1.26523(3)
Appendix: Pair approximation
It is straightforward to construct the pair approximation to the CPMV. For
convenience I denote vacant, inactive, and active sites by $v$, 0, and 1,
respectively. In this approximation the dynamical variables are the
probabilities $p(00)\equiv(00)$, $(01)$, etc. Note that since vacancies hop
independently of the states (0 or 1) of the nonvacant sites, $(vv)=v^{2}$ in
the stationary state, and during the entire evolution, if this equality holds
initially. Since $v=(vv)+(0v)+(1v)$, we then have $(1v)=v-v^{2}-(0v)$, leaving
four independent pair probabilities. (The normalization condition,
$(00)+(11)+(vv)+2[(01)+(0v)+(1v)]=1$, permits one to eliminate one further
variable.)
Consider, for example, the transition $00\to 01$. It occurs at rate
$\lambda/2$ provided the site to the right of the pair bears a particle, so
the contribution to $d(00)/dt$ due to this event is $-(\lambda/2)(001)$. (The
mirror event makes an identical contribution.) In the pair approximation,
three-site probabilities are written in terms of the one- and two-site
probabilities, for example, $(001)\simeq(00)(01)/(0)$. Proceeding in this
manner we obtain
$\frac{d(00)}{dt}=2(01)+D\frac{(0v)^{2}}{v}-\frac{(00)}{(0)}[\lambda(01)+D(0v)]$
(3)
$\frac{d(01)}{dt}=(11)+\frac{\lambda}{2}\frac{(00)(01)}{(0)}+D\frac{(0v)(1v)}{v}-(01)\left[1+\frac{D}{2}\left(\frac{(1v)}{(1)}+\frac{(0v)}{(0)}\right)+\frac{\lambda}{2}\left(1+\frac{(10)}{(0)}\right)\right]$
(4)
$\frac{d(0v)}{dt}=(1v)+\frac{D}{2}\left[\frac{(00)(0v)}{(0)}+v(0v)+\frac{(01)(1v)}{(1)}\right]-\frac{D}{2}(0v)\left[\frac{(0v)}{v}+\frac{(1v)}{v}+\frac{(0v)}{(0)}\right]-\frac{\lambda}{2}(0v)\frac{(10)}{(0)}$
(5)
and
$\frac{d(11)}{dt}=\lambda(01)\left(1+\frac{(10)}{(0)}\right)+D\frac{(1v)^{2}}{v}-(11)\left(2+D\frac{(1v)}{(1)}\right)$
(6)
Integrating these equations numerically, one can determine the critical
creation rate $\lambda_{c}$ as a function of $v$ and $D$.
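A minimal Python sketch of such an integration is given below. The right-hand side transcribes Eqs. (3)-(6); the closure relations, the uncorrelated initial condition, and the use of the long-time active-site density to bracket $\lambda_{c}$ are illustrative choices not prescribed in the text.
```python
import numpy as np
from scipy.integrate import solve_ivp

def pair_rhs(t, y, lam, D, v):
    """Right-hand side of the pair-approximation equations (3)-(6)."""
    p00, p01, p0v, p11 = y
    p1v = v - v**2 - p0v                   # (1v) = v - (vv) - (0v), with (vv) = v^2
    p10 = p01                              # left-right symmetry
    p0 = p00 + p01 + p0v                   # one-site probability of an inactive site
    p1 = max(p11 + p10 + p1v, 1e-15)       # active-site probability; floor guards the absorbing state
    d00 = 2*p01 + D*p0v**2/v - (p00/p0)*(lam*p01 + D*p0v)
    d01 = (p11 + 0.5*lam*p00*p01/p0 + D*p0v*p1v/v
           - p01*(1 + 0.5*D*(p1v/p1 + p0v/p0) + 0.5*lam*(1 + p10/p0)))
    d0v = (p1v + 0.5*D*(p00*p0v/p0 + v*p0v + p01*p1v/p1)
           - 0.5*D*p0v*(p0v/v + p1v/v + p0v/p0) - 0.5*lam*p0v*p10/p0)
    d11 = lam*p01*(1 + p10/p0) + D*p1v**2/v - p11*(2 + D*p1v/p1)
    return [d00, d01, d0v, d11]

def stationary_density(lam, D, v, rho0=0.5, tmax=2000.0):
    """Long-time active-site density (1) = (11)+(10)+(1v) in the pair approximation.

    Initial condition: an uncorrelated mixture in which a fraction rho0 of the
    non-vacant sites is active (an illustrative choice).
    """
    a, i = (1 - v)*rho0, (1 - v)*(1 - rho0)
    y0 = [i*i, i*a, i*v, a*a]              # (00), (01), (0v), (11)
    sol = solve_ivp(pair_rhs, (0.0, tmax), y0, args=(lam, D, v),
                    method="LSODA", rtol=1e-8, atol=1e-12)
    p00, p01, p0v, p11 = sol.y[:, -1]
    return p11 + p01 + (v - v**2 - p0v)

# Scanning lam at fixed (v, D) and locating where this density falls to zero
# brackets the pair-approximation estimate of lambda_c(v, D).
```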
|
arxiv-papers
| 2009-05-03T21:21:27 |
2024-09-04T02:49:02.295872
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ronald Dickman",
"submitter": "Ronald Dickman",
"url": "https://arxiv.org/abs/0905.0288"
}
|
0905.0334
|
# Dynamics near the $p:-q$ Resonance
Sven Schmidt1,3 and Holger R. Dullin4
1School of Mathematics Loughborough University LE11 3TU, Loughborough, UK
3Schlumberger Oilfield UK Plc Abingdon Technology Centre OX14 1UJ, Abingdon,
UK sschmidt@abingdon.oilfield.slb.com 4School of Mathematics and Statistics
The University of Sydney Sydney NSW 2006, Australia hdullin@usyd.edu.au
###### Abstract
We study the dynamics near the truncated $p:\pm q$ resonant Hamiltonian
equilibrium for $p$, $q$ coprime. The critical values of the momentum map of
the Liouville integrable system are found. The three basic objects reduced
period, rotation number, and non-trivial action for the leading order dynamics
are computed in terms of complete hyperelliptic integrals. A relation between
the three functions that can be interpreted as a decomposition of the rotation
number into geometric and dynamic phase is found. Using this relation we show
that the $p:-q$ resonance has fractional monodromy. Finally we prove that near
the origin of the $1:-q$ resonance the twist vanishes.
Key words: fractional monodromy, resonant oscillator, vanishing twist,
singular reduction
MSC2000 numbers: 37J15, 37J20, 37J35, 37J40, 70K30
## 1 Introduction
The phase space of an integrable system is foliated into invariant tori almost
everywhere. Exceptions occur when the integrals are not independent, i.e. when
the energy-momentum map has critical values. Monodromy is a manifestation of
the singular nature of the fibre over a critical value of the energy–momentum
mapping. The term was introduced in Duistermaat’s 1980 paper [7] as the
simplest obstruction to the existence of global and smooth action–angle
coordinates. Consider a closed, non–contractible loop $\Gamma$ in the image of
the energy–momentum mapping consisting entirely of regular values. Assume the
loop is traversed from $\Gamma(0)$ to $\Gamma(1)$ where $\Gamma(0)=\Gamma(1)$.
For each regular value $\Gamma(s)$ the period lattice $P_{\Gamma(s)}$ gives
the periods of the flows generated by energy and momentum, and thus encodes
the transformation to action-angle variables. The period lattice
$P_{\Gamma(s)}$ at $\Gamma(0)$ and at $\Gamma(1)$ are related by a unimodular
transformation. If this transformation is non-trivial the system is said to
have monodromy. Cushman and Bates [6] use the ratios of periods, i.e. the
rotation number, to compute monodromy. Geometrically monodromy means that the
torus bundle over the loop $\Gamma$ is non-trivial.
If the loop of regular values is contractible through regular values there is
no monodromy. Thus there must be at least one critical value inside the loop
$\Gamma$ for monodromy to occur, and the nature of the critical fibre in
principle determines the monodromy. For non–degenerate focus–focus points the
critical fibre is a pinched torus with $p$ pinches, and the monodromy matrix
of a loop around the critical value of the focus-focus point is
$\begin{pmatrix}1&0\\\ p&1\end{pmatrix}$ [12, 20].
Recently Nekhoroshev et al. [15, 16] found fractional monodromy in certain
resonant oscillators. In fractional monodromy the change of basis of the
period lattice is not an element of $SL(2,\mathbb{Z})$ but of
$SL(2,\mathbb{Q})$ instead. This would be incompatible with the Liouville-
Arnold theorem which gives the actions uniquely up to transformations in
$SL(2,\mathbb{Z})$. Thus for fractional monodromy the loop $\Gamma$
necessarily has to cross critical values. The critical fibre has to be such
that it is possible to continuously pass a sub-lattice of the period lattice
through it [16, 11], also see [10]. The first example was the $1:-2$ resonant
Hamiltonian, with an appropriate compactification [15, 16, 11]. In [14] and
[19] this has been extended to the compactified $p:-q$ resonance.
The present paper is based on the thesis [18]. It is a generalization from the
$1:-2$ resonance as treated in [5] to the general $p:-q$ resonance. We
independently obtained results similar to those presented in [19], but our
approach is complementary. There are two main differences. The first main
difference is that we base our approach on the computation of the action
variables. We derive the remarkable formula relating the action variables
$I_{1}$ and $I_{2}$ by
$I_{2}=\frac{p+q}{4\pi}\Delta hT-I_{1}W$ (1.1)
where $\Delta h$ is the value of the non-linear part of the Hamiltonian, $T$
is the period, and $W$ is the corresponding rotation number. This formula can
be interpreted as decomposing the rotation number into a dynamical phase
proportional to $T$ and a geometric phase proportional to the action $I_{2}$
[13]. Then we show that $W$ changes by $1/(pq)$ upon traversing a loop around
the equilibrium, and thus by (1.1) the action $I_{2}$ changes by $I_{1}/(pq)$
and the system has fractional monodromy. The second main difference is that we
analyze the dynamics near the equilibrium point in detail. The period,
rotation number, and action are dominated by certain singular contributions,
and we show that sufficiently close to the equilibrium point these are the
leading order contributions. This allows us to perform the computation without
any compactification, thereby proving the conjecture made in [16], that
fractional monodromy is independent of the details of the compactification.
Finally we obtain the new result that the isoenergetic non-degeneracy
condition of the KAM theorem is violated near the $1:-q$ resonance, i.e. the
system has vanishing twist near the equilibrium point.
The paper is structured as follows. First we derive the resonant Hamiltonian
normal form near an elliptic–elliptic equilibrium point in $p:-q$ resonance
which is then studied in the reduced phase space after applying singular
reduction. After discussing the structure of the reduced phase space we derive
the set of critical values of the energy–momentum mapping. The period of the
reduced flow, the rotation number, and the non-trivial action of the system
are computed in the next section. We then prove fractional monodromy for the
$p:-q$ resonance. Finally we prove that the twist vanishes in $1:-q$ resonant
systems close to the equilibrium point.
## 2 Resonant Hamiltonian Equilibria
Near a non-resonant elliptic equilibrium point a Hamiltonian can be
transformed to Birkhoff normal form up to arbitrary high order. The truncated
Birkhoff normal form depends on the actions only, and to lowest order these
actions are those of the two harmonic oscillators of the diagonalized
quadratic part of the Hamiltonian. When the frequencies are $p:\pm q$
resonant, the normal form contains additional so called resonant terms that
depend on resonant linear combinations of the angles. Even though the non-
linear non-resonant terms are generically present, we will assume that all the
non-linear non-resonant terms up to the order $p+q$ are vanishing. Without this
assumption the dynamics near the equilibrium would be dominated by the non-
resonant terms when $p+q\geq 5$. For the low order resonant cases no such
assumption is necessary, and therefore the results presented in [5] for the
$1:-2$ resonance are relevant for generic Hamiltonian systems. By contrast,
the integrable systems studied in this paper have higher codimension
(increasing with $p+q$) in the class of Hamiltonian systems.
Denote a point in phase space $T^{*}\mathbb{R}^{2}$ with coordinates
$(p_{1},q_{1},p_{2},q_{2})$ and let the symplectic structure be
$\omega=dq_{1}\wedge dp_{1}+dq_{2}\wedge dp_{2}$. Assume the origin
$(p_{1},q_{1},p_{2},q_{2})=(0,0,0,0)\in T^{*}\mathbb{R}^{2}$ is an elliptic
equilibrium point of the system whose eigenvalues
$\pm\operatorname{i}\omega_{1}$, $\pm\operatorname{i}\omega_{2}$ have ratio
$p/q$, where $p$ and $q$ are coprime positive integers. Then the quadratic part of
the Hamiltonian $H$ near the equilibrium point can be brought into the form
$H_{2}=\frac{p}{2}\left(p_{1}^{2}+q_{1}^{2}\right)+\sigma\frac{q}{2}\left(p_{2}^{2}+q_{2}^{2}\right)\,,$
(2.2)
by scaling $\omega_{1}$ to $p$ by a linear change of time. For $\sigma=+1$ the
quadratic Hamiltonian $H_{2}$ is definite, while for $\sigma=-1$ it is
indefinite. The system is said to be in $p:\pm q$ resonance, and mostly we are
interested in the case $\sigma=-1$.
The classical treatment of resonant equilibria, see e.g. [2], is as follows.
Denote $(A_{i},\phi_{i})$, $i=1,2$, the canonical polar coordinates
corresponding to $(p_{i},q_{i})$. The resonant Birkhoff normal form depends on
$A_{1}$, $A_{2}$ and the resonant combination (the so called secular term)
$-\sigma q\phi_{1}+p\phi_{2}$. The normal form Hamiltonian truncated at order
$p+q$ is
$H=pA_{1}+\sigma
qA_{2}+\sum\mu_{ij}A_{1}^{i}A_{2}^{j}+\mu\sqrt{A_{1}^{q}A_{2}^{p}}\cos(-\sigma
q\phi_{1}+p\phi_{2}+\varphi)\,,$
where the phase $\varphi$ can be set to zero by a shift of the angles. In
order to reduce the number of variables a linear symplectic transformation to
$(J_{i},\psi_{i})$ is performed. The resonant combination is introduced as
a new angle $\psi_{2}$, while the cyclic angle $\psi_{1}$ is conjugate
to $J_{1}=pA_{1}+\sigma qA_{2}=H_{2}$. The complete transformation is given by
$J=M^{-t}A$, $\psi=M\phi$, where $M$ contains two arbitrary integers $a$
and $b$ restricted by $\det M=bp-\sigma aq=1$. The new Hamiltonian depends on
$J_{1},J_{2},\psi_{2}$ only, and is therefore integrable. The lowest order
term in $\psi_{2}$ has coefficients proportional to
$\sqrt{A_{1}^{q}A_{2}^{p}}$. Setting the non-linear non-resonant terms
$\mu_{ij}$ equal to zero the Hamiltonian becomes
$H=J_{1}+\mu\sqrt{(bJ_{1}-\sigma qJ_{2})^{q}(-aJ_{1}+pJ_{2})^{p}}\cos\psi_{2}\,.$
Reverting back to the original Euclidean coordinates the resonant term reads
$\Delta
H=\Re\left[(p_{1}+\operatorname{i}q_{1})^{q}(p_{2}-\sigma\operatorname{i}q_{2})^{p}\right]\,,$
and the Hamiltonian simply is $H=H_{2}+\mu\Delta H$. The functions $H_{2}$ and
$\Delta H$ are in involution and independent almost everywhere, and thus the
system is Liouville integrable. This is the integrable system we are going to
analyze in this paper. Note that unlike previous work [16, 11, 19], we do not
add higher order terms to $\Delta H$ to compactify the system. Moreover, we
will study the integrable system defined by the Hamiltonian $H$, instead of
the Hamiltonian $\Delta H$. For the discussion of monodromy the two are
equivalent, since monodromy is a feature of the Liouville foliation, that does
not depend on the dynamics on the individual tori. Also, they share the same
singularly reduced system. However, we are also interested in the physically
relevant dynamics of the Birkhoff normal form in the context of KAM theory,
and therefore we analyse the Hamiltonian $H$ and not the Hamiltonian $\Delta
H$. In particular, when considering the rotation number of the full system the
difference is crucial. Nevertheless, we will study the momentum map $F=(\Delta
H,H_{2})$ (which is not the energy-momentum map for the Hamiltonian $H$), and
the Hamiltonian $H=H_{2}+\mu\Delta H$ is a (linear) function of the momenta.
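The involution of $H_{2}$ and $\Delta H$ is easy to confirm with a computer algebra system; the sketch below (for the illustrative choice of the $2:-3$ resonance, $\sigma=-1$) computes the canonical Poisson bracket symbolically and should return zero. The bracket's sign convention does not affect the conclusion.
```python
import sympy as sp

p1, q1, p2, q2 = sp.symbols('p1 q1 p2 q2', real=True)

def poisson(f, g):
    """Canonical Poisson bracket {f, g} = f_q1 g_p1 - f_p1 g_q1 + f_q2 g_p2 - f_p2 g_q2."""
    return (sp.diff(f, q1)*sp.diff(g, p1) - sp.diff(f, p1)*sp.diff(g, q1)
            + sp.diff(f, q2)*sp.diff(g, p2) - sp.diff(f, p2)*sp.diff(g, q2))

p, q, sigma = 2, 3, -1                      # illustrative choice: the 2:-3 resonance
H2 = sp.Rational(p, 2)*(p1**2 + q1**2) + sigma*sp.Rational(q, 2)*(p2**2 + q2**2)
DH = sp.re(sp.expand((p1 + sp.I*q1)**q * (p2 - sigma*sp.I*q2)**p))

print(sp.expand(poisson(H2, DH)))           # prints 0: H2 and Delta H commute
```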
## 3 Reduction
In this section we review the steps necessary to reduce to a single degree of
freedom using the symmetry $H_{2}$. We state only the results and refer the
interested reader to the standard literature, for example Cushman [6], Abraham
and Marsden [1] and Broer et. al. [4], for the $p:-q$ resonance in particular
also see [10]. These steps retrace the derivation reviewed in the previous
section in a more geometric way.
The flow of the resonant quadratic part $H_{2}$ is the $S^{1}$–group action
$\Phi^{H_{2}}:S^{1}\times\mathbb{C}^{2}\longrightarrow\mathbb{C}^{2},\qquad\left(t,z_{1},z_{2}\right)\mapsto\left(z_{1}\exp\left(p\operatorname{i}t\right),z_{2}\exp\left(\sigma q\operatorname{i}t\right)\right),\quad z_{1},\,z_{2}\in\mathbb{C}$
where $z_{i}=\frac{1}{\sqrt{2}}\left(p_{i}+\operatorname{i}q_{i}\right)$,
$i=1$, $2$. This action is non–degenerate except on the axis $z_{1}=0$ and
$z_{2}=0$ on which points have isotropy subgroup $\mathbb{Z}_{q}\subset S^{1}$
and $\mathbb{Z}_{p}\subset S^{1}$, respectively (if $q,p>1$). The invariants
of this group action are
$\pi_{1}=z_{1}\bar{z}_{1},\quad\pi_{2}=z_{2}\bar{z}_{2},\quad\pi_{3}=\Re(z_{1}^{q}z_{2}^{p}),\quad\pi_{4}=\Im(z_{1}^{q}z_{2}^{p})$
(3.3)
for $\sigma=-1$. For $\sigma=+1$ instead
$\pi_{3}=\Re(z_{1}^{q}\bar{z}_{2}^{p}),\pi_{4}=\Im(z_{1}^{q}\bar{z}_{2}^{p})$
are the invariants.
Because of the non-trivial isotropy we employ singular reduction in order to
reduce the system to one degree of freedom. The reduced phase space
$P_{h_{2}}=H_{2}^{-1}(h_{2})/S^{1}$
is given by the relation
$\pi_{3}^{2}+\pi_{4}^{2}=\pi_{1}^{q}\pi_{2}^{p},\quad\pi_{1}\geq
0,\;\pi_{2}\geq 0,$ (3.4)
hence it is a semi–algebraic variety in the invariants. The ambient Poisson
space is endowed with the Poisson structure
$\left\\{\pi_{1},\pi_{2}\right\\}=0,\quad\left\\{\pi_{1},\pi_{3}\right\\}=q\mspace{2.0mu}\pi_{4},\quad\left\\{\pi_{1},\pi_{4}\right\\}=-q\mspace{2.0mu}\pi_{3},\quad\left\\{\pi_{2},\pi_{3}\right\\}=-\sigma p\mspace{2.0mu}\pi_{4},\quad\left\\{\pi_{2},\pi_{4}\right\\}=\sigma p\mspace{2.0mu}\pi_{3},\quad\left\\{\pi_{3},\pi_{4}\right\\}={\textstyle\frac{1}{2}}{\pi_{1}^{q-1}\pi_{2}^{p-1}}\left(\sigma p^{2}\pi_{1}-q^{2}\pi_{2}\right)$
with Casimir $\mathcal{H}_{2}=p\pi_{1}+q\sigma\pi_{2}$ and symplectic leaves
the reduced phase spaces $P_{h_{2}}$.
Fixing the Casimir $\mathcal{H}_{2}=h_{2}$ we choose to eliminate $\pi_{2}$
from (3.4) and thus get an equation for the reduced phase space in the form
$\pi_{3}^{2}+\pi_{4}^{2}=\pi_{1}^{q}\left(\frac{h_{2}-p\pi_{1}}{\sigma
q}\right)^{p}.$ (3.5)
The interval of valid $\pi_{1}$ is determined by the requirements $\pi_{1}\geq
0$ and $\pi_{2}\geq 0$, thus
$\sigma=+1:\ \pi_{1}\in[0,h_{2}/p];\qquad\sigma=-1,\;h_{2}>0:\ \pi_{1}\in[h_{2}/p,\infty);\qquad\sigma=-1,\;h_{2}\leq 0:\ \pi_{1}\in[0,\infty).$ (3.6)
Figure 1: The section $\pi_{4}=0$ of the reduced phase space $P_{h_{2}}$ for
the $2:\pm 3$ resonance. Upper left: $\sigma=+1$ and $h_{2}=1.1$. Upper right:
$\sigma=-1$, $h_{2}=0.5$. Lower left: $\sigma=-1$, $h_{2}=-0.5$. Lower right:
$\sigma=-1$, $h_{2}=0$. For $h_{2}\not=0$ points $\pi_{1}=0$ are cusp
singularities, and points $\pi_{1}=h_{2}/2$ are conical singularities, both
due to the non-trivial isotropy of $\Phi^{H_{2}}$.
Fig. 1 shows sections $\pi_{4}=0$ of the rotationally symmetric reduced phase
space $P_{h_{2}}$ for the $2:\pm 3$ resonance for all four relevant cases
$\sigma=+1$, $\sigma=-1$ and $h_{2}>0$, $h_{2}<0$ and $h_{2}=0$. For
$\sigma=+1$ the reduced phase space is compact with a cusp–singularity ($q=3$)
at the origin $\pi_{1}=0$ and a conical singularity ($p=2$) at
$\pi_{1}=h_{2}/2$ (where $\pi_{2}=0$). These singularities in the reduced
phase space are a result of the non–trivial isotropy of the group action
$\Phi^{H_{2}}$ at $z_{1}=\pi_{1}=0$ and $z_{2}=\pi_{2}=0$.
For $\sigma=-1$ the reduced phase space is non-compact. The singular points
are of the same type as in the compact case, but they exist separately for
positive or negative $h_{2}$. For $h_{2}>0$ there is a conical singularity
($p=2$) at $\pi_{2}=0$ and for $h_{2}<0$ there is a cusp singularity ($q=3$)
at $\pi_{1}=0$. For $h_{2}=0$ there is a singularity of order $p+q=5$ at
$\pi_{1}=0$ (in the compact case $\sigma=1$ the reduced phase space is merely
a point for $h_{2}=0$).
It follows from equation (3.5) that in the general $p:\pm q$ resonance there
is a singularity of order $q$ at $\pi_{1}=0$ and of order $p$ at $\pi_{2}=0$,
assuming $h_{2}\not=0$. If $h_{2}=0$ the order of the singularity is $p+q$.
Note that in the above statement a singularity of order 1 means no
singularity.
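Sections such as those of Fig. 1 follow directly from equation (3.5); a small numpy sketch (resonance and parameter values chosen only for illustration) is:
```python
import numpy as np

def section_pi3(pi1, h2, p=2, q=3, sigma=-1):
    """Boundary curve |pi3|(pi1) of the pi4 = 0 section of P_{h2}, from eq. (3.5).

    Returns sqrt(pi1^q * pi2^p) with pi2 = (h2 - p*pi1)/(sigma*q) wherever
    pi1 >= 0 and pi2 >= 0, and NaN outside the reduced phase space.
    """
    pi1 = np.asarray(pi1, dtype=float)
    pi2 = (h2 - p*pi1) / (sigma*q)
    radicand = np.where((pi1 >= 0) & (pi2 >= 0), pi1**q * pi2**p, np.nan)
    return np.sqrt(radicand)

# Example: the non-compact 2:-3 case with h2 = 0.5 (Fig. 1, upper right);
# pi1 ranges over [h2/p, infinity) and is truncated here for plotting.
pi1 = np.linspace(0.0, 2.0, 400)
pi3_boundary = section_pi3(pi1, h2=0.5)
```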
Expressed in the invariants, the integral $\Delta H$ simply becomes $\pi_{3}$,
so that the reduced Hamiltonian is
$\mathcal{H}(\pi_{1},\pi_{2},\pi_{3})=p\pi_{1}+\sigma
q\pi_{2}+\mu\pi_{3}=h_{2}+\mu\pi_{3}\,.$ (3.7)
As mentioned earlier, the truncated resonant Birkhoff normal form of a generic
resonant Hamiltonian system would also contain terms $\pi_{1}^{i}\pi_{2}^{j}$,
with $i+j\leq(p+q)/2$, but in order to maximize the effect of the resonant
term these are assumed to be zero.
## 4 The Energy–Momentum Mapping
Let
$F:(p_{1},q_{1},p_{2},q_{2})\mapsto\left(H_{2}(p_{1},q_{1},p_{2},q_{2}),\Delta H(p_{1},q_{1},p_{2},q_{2})\right)$
be the momentum map of the integrable system, and denote its value by
$(h_{2},\Delta h)$. The elliptic equilibrium at the origin is a critical point
of the momentum map $F$ since both integrals are of order $\geq 2$ in
$(p_{i},q_{i})$, so that $\operatorname{rank}DF(0)=0$. Furthermore, this point
is degenerate in the sense of the momentum map [3]. For this we need to show
that the Hessians of $H_{2}$ and $\Delta H$ are not linearly independent at
$x=0$. This follows immediately from the fact that $\Delta H$ is of order
$p+q\geq 3$ in $(p_{i},q_{i})$ so that its Hessian vanishes identically at the
origin.
Figure 2: The critical values $(h_{2},\Delta h)$ of the energy–momentum
mapping are shown as red branches. The top left picture corresponds to the
$1:-q$ resonance, the top right one to the $p:-q$ resonance with $p>1$. The
degenerate equilibrium point is marked by a disk at the origin for
$\sigma=-1$. The lower left picture corresponds to the $2:3$ resonance with
$\sigma=+1$.
The critical values $(h_{2},\Delta h)$ of the momentum mapping for the $p:\pm
q$ resonance (i.e. the bifurcation diagram) are shown in fig. 2, and are
described in the following lemma.
###### Lemma 1.
The entire line $\Delta h=0$ is critical for $p>1$, $q>2$ and $\sigma=-1$. For
$p=1$, the critical values are given by $\Delta h=0$ and $h_{2}\leq 0$ and for
$\sigma=+1$ they are given by $\Delta h=0$ and $h_{2}\geq 0$. Moreover, there
are two additional branches emanating from the origin of the bifurcation
diagram for $\sigma=+1$.
###### Proof.
A point in reduced phase space is critical if 1) it is a singular point of the
reduced phase space or 2) if it is a point of tangency of the surface
$\Delta\mathcal{H}=\Delta h$ (which is simply a horizontal plane) and the
reduced phase space. The first condition together with (3.6) gives that
$\pi_{1}=\pi_{3}=\pi_{4}=0$ is critical for $\sigma h_{2}\geq 0$ and that
$\pi_{1}=h_{2}/p,\pi_{3}=\pi_{4}=0$ is critical for $h_{2}\geq 0$. The
corresponding critical values of the momentum map are $(h_{2},0)$. The second
condition requires the gradients of the two integrals to be parallel, thus
$\pi_{4}=0$, $\pi_{3}\not=0$, and the derivative of the right hand side of
(3.5), $\pi_{1}^{q}\pi_{2}^{p}$, with respect to $\pi_{1}$ vanishes. This
gives $p^{2}\pi_{1}=\sigma q^{2}\pi_{2}$. Since $\pi_{1},\pi_{2}$ are non-
negative there is no tangency for $\sigma=-1$ (apart from the singular point
with $h_{2}=0$). For $\sigma=1$ there are two additional families of critical
values emerging from the origin in a cusp of order $p+q$. ∎
## 5 Dynamics near the Degenerate Equilibrium
In this section we derive equations for the period $T$ of the reduced flow,
the rotation number $W$ of the full system, and the second action $I_{2}$. A
prominent feature of the reduced period $T$ is its algebraic (rather than
logarithmic) divergence when approaching the degenerate equilibrium point.
This is especially easy to see once we introduce weighted polar coordinates
$(\rho,\theta)$ in the bifurcation diagram. In these coordinates, $T$ and $W$
separate into $\rho$– and $\theta$–dependent contributions which considerably
simplify the computations.
### 5.1 The Reduced Period
An equation for $T$ is derived by separation of variables from
$\dot{\pi}_{1}=\left\\{\pi_{1},\mathcal{H}\right\\}=\mu\left\\{\pi_{1},\pi_{3}\right\\}=\mu
q\pi_{4}.$ (5.8)
Using the equation for the reduced phase space (3.5) together with $\Delta
h=\mu\pi_{3}$, $\pi_{4}^{2}$ can be written as a polynomial in $\pi_{1}$. It
follows that the reduced period is defined on the hyperelliptic curve
$\Gamma=\left\\{(\pi_{1},w)\in\mathbb{C}^{2}\mid
w^{2}=Q(\pi_{1})\right\\}\quad\text{where}\quad
Q(z)=\mu^{2}q^{2-p}z^{q}\left(\sigma(h_{2}-pz)\right)^{p}-(q\Delta h)^{2}$
Separating the variables in equation (5.8) and integrating yields
$T(h_{2},\Delta h)=\oint\frac{d\pi_{1}}{w}.$ (5.9)
Our main focus is the $p:-q$ resonance with non–compact fibration. Thus the
integral along a closed loop as it stands makes no sense. We define the
reduced period by dividing the dynamics into two parts: Dynamics close to the
equilibrium point, and dynamics far away from it. If the system is
compactified by an appropriate higher order term (as in [16, 11]) we may
assume that the dynamics far away from the equilibrium will eventually return
to the neighbourhood of the equilibrium. The time spent on this return loop is
a smooth function of initial conditions if we assume that there are no
additional critical points in it. Specifically we consider a Poincaré section
at some small but finite value of $\pi_{1}$ intersecting stable and unstable
invariant manifolds, similarly to the analysis done for symplectic invariants
in [17, 9] and the $1:-2$ resonance in [5]. The contribution of the near-
dynamics (from the section of the stable manifold to the section of the
unstable manifold) is divergent when approaching the equilibrium point, while
the contribution of the far-dynamics (from the section of the unstable
manifold to the section of the stable manifold) remains a smooth and bounded
function.
For convenience we modify the truncated period thus defined one more time.
Notice that the integral of $d\pi_{1}/w$ from any finite positive $\pi_{1}$ to
$\infty$ is finite, and smoothly depends on the parameters. Thus changing the
truncated period integral to an integral over the non-compact domain only
changes it by a smooth function, and the same argument applies. As a result we
can treat the closed loop integral of the non-compact system as our leading
order period. In particular it correctly describes the leading order divergent
terms when approaching the equilibrium point.
An alternative point of view that combines the last two steps (first
restriction to the near-dynamics, then the extension to $\infty$) is to
consider the integral in a compactification of the complex plane into a
projective space. In this space the integral for $T$ is compact, and this also
explains why it is bounded in the first place. This approach was first used in
[5].
In [19] two approaches are followed. The first one generalises the treatment
of the $1:-2$ resonance [11, 10] to the $p:-q$ resonance. There is a
privileged compactification which prevents the rotation number from diverging
when approaching the critical values. Using this compactification the period
lattice, i.e. reduced period and rotation number, is computed. Then the
authors use the Newton polygon to find the leading order terms of these
integrals in the limit approaching the origin of the bifurcation diagram. The
analogue of our polynomial $Q$ appears in their work as the Newton polygon
approximation. What our approach shows is that the resulting hyperelliptic
integrals have direct dynamical meaning as explained above. In the second
approach in [19] the problem is complexified and monodromy is found as Gauss-
Manin monodromy of a loop with complex $(h,\Delta h)$ that avoids critical
values. As the authors point out the singularity at the origin is not of Morse
type, and this is related to the fact that the singularity is degenerate in the
sense of the momentum map. As a result the period diverges algebraically
(instead of as usually logarithmically).
###### Lemma 2.
The reduced period $T$ diverges algebraically with exponent $\left|\Delta
h\right|^{\frac{2}{p+q}-1}$ upon approaching the degenerate equilibrium point
on a curve with non–vanishing derivative at the origin. For $p=1$, the period
diverges like $\left|h_{2}\right|^{-\frac{p+q}{2}+1}$ on the line $\Delta h=0$
when $h_{2}\rightarrow 0$.
###### Proof.
Consider the polynomial $Q$ as a polynomial in $\pi_{1}$, $h_{2}$, and $\Delta
h$. It is weighted homogeneous, where $h_{2}$ and $\pi_{1}$ must have the same
weight, so that their weight is $2$ while that of $\Delta h$ is $p+q$.
Therefore we introduce weighted polar coordinates $(\rho,\theta)$ in the image
of the momentum map by
$\Delta h=\rho^{p+q}\sin\theta$ (5.10a)
$h_{2}=\rho^{2}\cos\theta$ (5.10b)
Together with $\pi_{1}=\rho^{2}x$ it follows that
$T(h_{2},\Delta
h)=\rho^{-(p+q)+2}\oint\frac{dx}{\tilde{w}}=:\rho^{-(p+q)+2}A(\theta)$ (5.11)
where
$\tilde{w}^{2}=\mu^{2}q^{2-p}x^{q}\left(\sigma(\cos\theta-
px)\right)^{p}-(q\sin\theta)^{2}$
is independent of $\rho$. Thus $T$ factors into a radial and an angular
contribution $A(\theta)$. The transformation for $(h_{2},\Delta h)$ to
$(\rho,\theta)$ is $C^{0}$ at the origin but $C^{\infty}$ everywhere else.
The period $T$ diverges with $\rho^{2-p-q}$. When approaching the origin on a
line with non-vanishing slope (in the original variables $(h_{2},\Delta h)$
the contribution in $\Delta h\sim\rho^{p+q}$ is of leading order. Thus
$T\sim|\Delta h|^{2/(p+q)-1}$. When $p=1$ the line segment $\Delta h=0$ with
$h_{2}>0$ is not critical. Approaching the origin along this line there is no
contribution from $\Delta h$, and thus only $h_{2}\sim\rho^{2}$ is relevant,
and the result follows. ∎
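The weighted homogeneity underlying this factorisation can be checked symbolically; the following sketch substitutes (5.10) and $\pi_{1}=\rho^{2}x$ into $Q$ for the illustrative case of the $2:-3$ resonance and confirms $Q=\rho^{2(p+q)}\tilde{w}^{2}$, from which $T=\rho^{2-(p+q)}A(\theta)$ follows.
```python
import sympy as sp

rho, x, theta, mu = sp.symbols('rho x theta mu', positive=True)
p, q, sigma = 2, 3, -1                                   # illustrative: the 2:-3 resonance

h2 = rho**2 * sp.cos(theta)                              # eq. (5.10b)
dh = rho**(p + q) * sp.sin(theta)                        # eq. (5.10a)
pi1 = rho**2 * x                                         # rescaled integration variable

Q = mu**2 * q**(2 - p) * pi1**q * (sigma*(h2 - p*pi1))**p - (q*dh)**2
w_tilde_sq = (mu**2 * q**(2 - p) * x**q * (sigma*(sp.cos(theta) - p*x))**p
              - (q*sp.sin(theta))**2)

print(sp.expand(Q - rho**(2*(p + q)) * w_tilde_sq))      # prints 0
```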
### 5.2 The Rotation Number
Recall from the introduction the symplectic coordinate system with
$\\{\psi_{i},J_{i}\\}=1$. The reduced period gives the period of
$\psi_{2}$, while the rotation number gives the advance of
$\psi_{1}/(2\pi)$ during that period. The ODE for $\psi_{1}$ is
$\dot{\psi}_{1}=\left\\{\psi_{1},H\right\\}\,.$ (5.12)
Integration of this ODE for time $T$ gives the rotation number. Note that the
angle $\psi_{1}$ is not globally defined, but all we need is its
derivative; see the comments in [5].
The interpretation of the rotation number is similar to the interpretation of
the period: The true (compactified) rotation number will differ from $R$ by a
smooth function. The leading order singular part of the rotation number is
contained in $R$.
###### Proposition 3.
The rotation number $R$ of the $p:\pm q$ resonance is given by
$R(h_{2},\Delta
h)=\frac{-\sigma}{2\pi}\oint\left(1+\mu\frac{\pi_{3}}{2}\left(\frac{qb}{\pi_{1}}-\frac{ap}{\pi_{2}}\right)\right)\frac{d\pi_{1}}{w}.$
###### Proof.
The angle $\psi_{1}$ satisfies the following Poisson brackets:
$\left\\{\psi_{1},\pi_{1}\right\\}=b,\quad\left\\{\psi_{1},\pi_{2}\right\\}=-a,\quad\left\\{\psi_{1},\pi_{3}\right\\}=\frac{\pi_{3}}{2}\left(\frac{qb}{\pi_{1}}-\frac{ap}{\pi_{2}}\right).$
These brackets follow from expressing $\pi_{i}$ in terms of the canonical
variables $(J_{i},\psi_{i})$, noting $\pi_{1}=A_{1}$, $\pi_{2}=A_{2}$,
$\pi_{3}=A_{1}^{q/2}A_{2}^{p/2}\cos\psi_{2}$, and then computing the
canonical brackets, using
$\frac{\partial\pi_{1}}{\partial J_{1}}=b,\quad\frac{\partial\pi_{2}}{\partial J_{1}}=-a\,.$
Thus, the differential equation for $\psi_{1}$ (5.12) becomes
$\dot{\psi}_{1}=1+\mu\frac{\pi_{3}}{2}\left(\frac{qb}{\pi_{1}}-\frac{ap}{\pi_{2}}\right).$
Changing the integration variable from $t$ to $\pi_{1}$ using (5.8) gives
$\psi_{1}$ as an Abelian integral. Comparison of the limiting behaviour of
the rotation number and insisting on the relations $R=\omega_{1}/\omega_{2}$
and $\partial H/\partial I_{i}=\omega_{i}$ gives the overall sign $-\sigma$ in
$R$. ∎
We refer to the three integrals $R$ is composed of as
$R(h_{2},\Delta h)=-\sigma\left(\frac{1}{2\pi}T(h_{2},\Delta h)+W(h_{2},\Delta h)\right),\qquad W(h_{2},\Delta h)=W_{1}(h_{2},\Delta h)+W_{2}(h_{2},\Delta h)$
where
$W_{1}(h_{2},\Delta h)=\phantom{-}\frac{qb}{4\pi}\Delta h\oint\frac{d\pi_{1}}{\pi_{1}w},\qquad W_{2}(h_{2},\Delta h)=-\frac{ap}{4\pi}\Delta h\oint\frac{d\pi_{1}}{\pi_{2}w}.$
Expressing these functions in the weighted polar coordinates $(\rho,\theta)$
gives
$B_{1}(\theta)=\frac{qb}{4\pi}\sin\theta\oint\frac{dx}{x\tilde{w}}$ (5.13)
$B_{2}(\theta)=-\frac{apq}{4\pi}\sin\theta\oint\frac{dx}{(px-\cos\theta)\tilde{w}}.$ (5.14)
Note that $B_{1}$ and $B_{2}$ do not depend on $\rho$, which is the main
virtue of the weighted polar coordinates. Nevertheless, the original functions
$W_{1}$ and $W_{2}$ are not continuous at $(h_{2},\Delta h)=(0,0)$. The reason
is that they take different values when approaching the origin along different
lines $\theta=const$.
Figure 3 shows a plot of $B_{1}(\theta)$ on the left and $B_{2}(\theta)$ on
the right on a loop $\Gamma(\theta)$ around the origin of the bifurcation
diagram fig. 2 top right with constant $\rho$ for the $2:-3$ resonance.
Figure 3: Plot of $B_{1}(\theta)$ (left) and $B_{2}(\theta)$ (right) for the
$2:-3$ resonance. $a=-1$, $b=2$. The corresponding bifurcation diagram is
shown in fig. 2 top right. $B_{1}$ is discontinuous on the branch of critical
values with $\Delta h=0$ and $h_{2}<0$, i.e. $\theta=\pm\pi$, with total jump
equal to $-b/q$. $B_{2}$ is discontinuous on the branch of critical values
with $\Delta h=0$ and $h_{2}>0$, i.e. $\theta=0$, with total jump equal to
$-a/p$, see Lemma 5.
The functions $B_{1}$ and $B_{2}$ are periodic, but discontinuous. The
discontinuity occurs where the critical points are crossed. The functions can
be made continuous by an appropriate shift, but then they will not be periodic
any more. They cannot be made periodic and continuous at the same time. It
turns out that this behaviour is the reason fractional monodromy occurs in the
$p:-q$ resonance, see section 6. From the derivation of $B_{1}$ and $B_{2}$ it
is clear that they are not really independent functions, but can be obtained
from each other by exchanging $p$ and $q$, and shifting $\theta$ by $\pi$.
Note that had we considered $\Delta H$ as our Hamiltonian instead of $H$, the
rotation number would not have the diverging contribution from $T$; it
would be $W=W_{1}+W_{2}$ instead of $R=T/2\pi+W$. For the vanishing twist
described later on, this contribution is crucial.
### 5.3 The Non–Trivial Action
The flow of $H_{2}$ is periodic with period $2\pi$, and therefore
$I_{1}=H_{2}$ is one action of the system. The other action $I_{2}$ is a non-
trivial function of $\Delta h$ (or $h$) and $I_{1}$. From the local canonical
coordinates mentioned in the introduction we know that $J_{2}$ and
$\psi_{2}$ are conjugate variables. Translating this to the reduced system
gives
###### Proposition 4.
The non–trivial action $I_{2}$ of the $p:\pm q$ resonance is
$I_{2}(I_{1},\Delta h)=-\sigma\left(\frac{p+q}{4\pi}\Delta
h\mspace{2.0mu}T(I_{1},\Delta h)-I_{1}W(I_{1},\Delta h)\right).$ (5.15)
###### Proof.
Using the reduced Poisson bracket we find
$\\{\cos^{-1}\frac{\pi_{3}}{\sqrt{\pi_{1}^{q}\pi_{2}^{p}}},a\pi_{1}+b\pi_{2}\\}=-\sigma\,.$
The action $I_{2}$ is therefore given by
$I_{2}(h_{2},\Delta
h)=\frac{\sigma}{2\pi}\oint\arccos\frac{\pi_{3}}{\sqrt{\pi_{1}^{q}\pi_{2}^{p}}}\mspace{2.0mu}(a\mspace{1.0mu}d\mspace{1.0mu}\pi_{1}+b\mspace{1.0mu}d\mspace{1.0mu}\pi_{2}).$
Integration by parts gives
$I_{2}=\frac{-\sigma}{2\pi}\oint\frac{\pi_{3}}{\sqrt{\pi_{1}^{q}\pi_{2}^{p}-\pi_{3}^{2}}}\left(\left(\frac{aq}{2}+\frac{ap^{2}}{2q}\frac{\pi_{1}}{\pi_{2}}\right)d\pi_{1}+\left(\frac{bp}{2}+\frac{bq^{2}}{2p}\frac{\pi_{2}}{\pi_{1}}\right)d\pi_{2}\right).$
Now $\pi_{1}$ and $\pi_{2}$ are related by the Casimir
$I_{1}=h_{2}=p\pi_{1}+\sigma q\pi_{2}$. Thus
$d\pi_{2}=\sigma\frac{p}{q}d\pi_{1}$ and with $\Delta h=\mu\pi_{3}$ this
equation becomes
$I_{2}=\frac{-\sigma}{2\pi}\left(\frac{\Delta h}{2}(p+q)\left(bp-\sigma
aq\right)\oint\frac{d\pi_{1}}{w}+\frac{\Delta
h\mspace{2.0mu}h_{2}\mspace{2.0mu}a\mspace{2.0mu}p}{2}\oint\frac{d\pi_{1}}{\pi_{2}w}-\frac{\Delta
h\mspace{2.0mu}h_{2}\mspace{2.0mu}b\mspace{2.0mu}q}{2}\oint\frac{d\pi_{1}}{\pi_{1}w}\right).$
Using $\det M=bp-\sigma aq=1$, $h_{2}=I_{1}$, and recalling that
$W=W_{1}+W_{2}$ gives the result. Notice that the final answer is independent
of the choice of integers $a$ and $b$ in $M$, as long as $\det M=1$. ∎
This expression for $I_{2}$ reduces to the one found for the $1:-2$ resonance
[5], where $p+q=3$, $a=0$ and $b=1$. In the next section on fractional
monodromy we shall prove that the terms $W_{1}$ and $W_{2}$ are discontinuous
(see fig. 3) at $\theta=\pm\pi$ and $\theta=\pm 0$ respectively. They can be
made non-periodic and continuous, thus causing the action $I_{2}$ to be
globally multivalued.
Even though $T$ diverges algebraically like $\Delta h^{\frac{2}{p+q}-1}$
(Lemma 2), the action $I_{2}$ goes to zero like $\rho^{2}\sim\Delta
h^{\frac{2}{p+q}}$ when approaching the equilibrium point. The action $I_{2}$
does have the interpretation of a phase space volume. Even though the system
is non-compact the action does not diverge when approaching the equilibrium
point. The geometric reason is that $I_{2}$ measures the volume relative to
the (unbounded) separatrix, which is finite. The boundary terms from the
partial integration cancel, see [18] for the details.
Another interpretation of this formula is obtained by solving it for the
rotation number. This gives a decomposition of the rotation number into a
dynamical phase proportional to $T$ and a geometric phase proportional to the
action $I_{2}$, compare e.g. [13].
## 6 Fractional Monodromy
In this section we establish the fact that the $p:-q$ resonance has
fractional monodromy. We explicitly calculate the monodromy matrix $M$ that
gives the transformation of the actions $I_{1}$ and $I_{2}$ after one full
anticlockwise cycle around a loop $\Gamma$ enclosing the degenerate
equilibrium point at the origin of the bifurcation diagram fig. 2 top row. We
then describe the singular fibres corresponding to the critical values the
loop $\Gamma$ crosses.
###### Lemma 5.
The function $B_{1}(\theta)$ satisfies
$\lim_{\theta\rightarrow\pm\pi^{\mp}}B_{1}(\theta)=\pm\frac{b}{2q}.$ (6.16)
The function $B_{2}(\theta)$ satisfies
$\lim_{\theta\rightarrow 0^{\pm}}B_{2}(\theta)=\mp\frac{a}{2p}.$ (6.17)
This is the main technical result, but instead of duplicating the proof here,
we refer to [19]. The main addition to their work at this point is the
interpretation of these integrals. In [19] they appeared as the leading order
Newton-polygon approximation of the compactified rotation number integrals. In
our approach they appear directly as the integrals of the rotation number of
the non-compact system interpreted as explained before.
Using this technical result we are now giving another proof of fractional
monodromy in the $p:-q$ resonance which is based on the explicit expression of
the action obtained earlier. Recall that $I_{1}$ is the globally smooth action
$H_{2}$ of the system as introduced in the introduction.
###### Theorem 6.
The $p:-q$ resonance has fractional monodromy near the degenerate elliptic
equilibrium point. The actions change according to
$\begin{pmatrix}I_{1}^{\prime}\\\\[5.0pt]
I_{2}^{\prime}\end{pmatrix}=\begin{pmatrix}1&0\\\\[5.0pt]
-\frac{1}{pq}&1\end{pmatrix}\begin{pmatrix}I_{1}\\\\[5.0pt]
I_{2}\end{pmatrix}$
after one full anticlockwise cycle on a loop $\Gamma$ around the degenerate
equilibrium point in the bifurcation diagram.
###### Proof.
Assume the loop $\Gamma$ with fixed $\rho=\rho_{0}$ is traversed in the
mathematical positive sense. When crossing the line $\Delta h=0$, $h_{2}<0$,
$\frac{B_{1}(\pi^{-})}{2\pi}=\frac{b}{2q}$, and
$\frac{B_{1}(-\pi^{+})}{2\pi}=-\frac{b}{2q}$. Hence, the effective jump of
$W_{1}$ becomes $-\frac{b}{q}$. A similar argument holds for $W_{2}$ at
$\Delta h=0$, $h_{2}>0$. Although the loop is traversed such that $\theta$
crosses the line $\Delta h=0$, $h_{2}>0$ from below, $W_{2}$ has the opposite
sign of $W_{1}$. Thus, the effective jump of $W_{2}$ is $-\frac{a}{p}$.
Using the form of the action variable from proposition 4, it follows that the
action $I_{2}$ changes like
$I_{2}\rightarrow I_{2}-I_{1}\left(\frac{a}{p}+\frac{b}{q}\right)$
upon completing a full cycle. Due to
$\frac{a}{p}+\frac{b}{q}=\frac{aq+bp}{pq}=\frac{1}{pq},$
the monodromy is (independently of the integers $a$ and $b$) given by
$I_{2}\rightarrow I_{2}-\frac{1}{pq}I_{1}.$
∎
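The final arithmetic step, and its independence of the admissible pair $(a,b)$, can be illustrated with a few lines of Python that construct one such pair via the extended Euclidean algorithm (for $\sigma=-1$, $\det M=1$ reads $bp+aq=1$) and evaluate the jump $a/p+b/q$.
```python
from fractions import Fraction
from math import gcd

def monodromy_jump(p, q):
    """For coprime p, q find integers a, b with b*p + a*q = 1 (the sigma = -1
    form of det M = 1) and return the jump a/p + b/q of I2 in units of I1."""
    assert gcd(p, q) == 1
    # extended Euclidean algorithm; old_b, old_a are the coefficients of p, q
    old_r, r = p, q
    old_b, b = 1, 0
    old_a, a = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient*r
        old_b, b = b, old_b - quotient*b
        old_a, a = a, old_a - quotient*a
    # now old_b*p + old_a*q == 1
    return Fraction(old_a, p) + Fraction(old_b, q)

print(monodromy_jump(2, 3))   # 1/6 = 1/(p*q); the choice a = -1, b = 2 of Fig. 3 gives the same value
```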
The singular fibre of the critical values with $h_{2}<0$ depends on the value
of $q$. Consider the Poincaré section $p_{2}=0$ and $\dot{p}_{2}<0$. The
critical level has $\Delta h=\pi_{3}=0$. Thus we need to study the level set
determined by the three equations $p|z_{1}|^{2}+\sigma q|z_{2}|^{2}=h_{2}<0$,
$\Re(z_{1}^{q}z_{2}^{p})=0$, and $p_{2}=0$ near the critical point $z_{1}=0$.
From the first and last equation we obtain
$q_{2}^{2}=(-h_{2}+p|z_{1}|^{2})/(-\sigma q)$. For small $|z_{1}|$ therefore
$q_{2}$, and hence $z_{2}$, is approximately constant. The remaining equation
thus becomes $\Re(z_{1}^{q})=0$. Writing $z_{1}=r\exp\operatorname{i}\varphi$
gives $r_{1}^{q}\cos q\varphi=0$, so that the level sets of the intersection
of the critical fibre with the Poincaré section are given by the $2q$ rays
with $q\varphi=\pi(n+1/2)$, $n=0,1,\dots,2q-1$. Together they form $q$
straight lines which are the stable and unstable manifolds in alternation, all
passing through the origin.
If the level set is compactified, e.g. by considering
$\Re(z_{1}^{q})+|z_{1}|^{q+1}$, it becomes a flower with $q$ petals. The
action of the flow of $H_{2}$ on the level set is as follows. Start with a
point in the section $p_{2}=0$, hence $z_{2}=q_{2}$. It returns to the section
when the imaginary part of its image vanishes again with positive derivative,
hence for $\Im(q_{2}\exp(\sigma qit))=q_{2}\sin(\sigma qt)=0$, with $\sigma
q\,q_{2}\cos(\sigma qt)<0$. The smallest positive $t$ that solves this is
$t=2\pi/q$. The action of the flow after the return time therefore is
$\Phi^{H_{2}}(2\pi/q,z_{1},z_{2})=(z_{1}\exp(2\operatorname{i}\pi
p/q),z_{2})\,.$
This is simply a rotation by $2\pi p/q$ in the $z_{1}$ plane. As a result the
petals of the flower are mapped into each other, and $p$ is the number of
petals that the rotation advances by. Since $p$ and $q$ are coprime, all
petals are visited before the orbit returns back to the initial one. The
action is the same for either sign of $\sigma$, the difference is that for
$\sigma=-1$ instead of petals we have sectors delineated by stable and
unstable manifolds.
Considering the level set as a whole, and not only in the Poincaré section,
gives a curled torus whose transversal cross section is the petal, and whose
tubes rotate by $2\pi p/q$ when they complete one longitudinal cycle. The fact
that we discussed the flow of $H_{2}$ instead of the flow of $H$ (or $\Delta
H$) is immaterial for the topology of the level set since the flows commute.
For $h_{2}>0$ in a similar way a flower with $p$ petals appears in the section
$p_{1}=0$.
All critical values are degenerate when $q>2$. This is a crucial difference
between the $1:-2$ resonance and the higher order resonances. In particular
the terms $\mu_{ij}$ that were set to zero in the normal form could completely
change the bifurcation diagram. How to envisage the singularity at the
equilibrium point with $h_{2}=\Delta h=0$ is unclear. Both curled tori limit
to this unstable degenerate equilibrium point, but the above argument breaks
down since $|h_{2}|$ cannot be assumed large compared to $|z_{i}|$ any more.
## 7 Vanishing Twist in the $1:-q$ Resonance
For the $p:-q$ resonance with $p>1$ the bifurcation diagram is divided into
two halves by the critical line $\Delta h=0$. We suspect that in this case the
twist does not vanish for regular values, but we have not been able to find a
proof of this. However, when $p=1$ the line of critical values stops at the
origin, see fig. 2. This is therefore a typical case where one can expect
vanishing twist to occur, see [8] for a general topological proof. In this
particular case using the weighted polar coordinates allows for a simple
analytical proof of vanishing twist.
###### Theorem 7.
For the $1:-q$ resonance, $q\geq 2$, the twist vanishes near the degenerate
equilibrium point on the curve $\Delta h=0$, $h_{2}>0$.
###### Proof.
By definition the twist vanishes if the rotation number $R$ has a critical
point on the energy surface, i.e. when $R$ does not change from torus to
torus. On the energy surface this is written as $\partial R/\partial
I_{1}|_{H=const}=0$. In the image of the energy momentum map the condition is
satisfied when the lines of constant rotation number and the lines of constant
energy are tangent to each other. This implies that $\nabla R$ and $\nabla H$
are parallel. The gradients can be computed in any coordinate system, and we
choose the weighted polar coordinates. Thus the condition for vanishing twist
is
$\frac{\partial R}{\partial\rho}\frac{\partial
H}{\partial\theta}-\frac{\partial R}{\partial\theta}\frac{\partial
H}{\partial\rho}=0\,.$
Now $H=H_{2}+\mu\Delta H$ where $H_{2}$ is order 2 in $\rho$ and $\Delta H$ is
order $p+q$ in $\rho$. For small $\rho$ we can therefore neglect $\Delta H$,
and find $\partial H/\partial\rho\approx 2\rho\cos\theta$, $\partial
H/\partial\theta\approx-\rho^{2}\sin\theta$. Recalling the factorisation
$T=\rho^{-(p+q)+2}A(\theta)$ from (5.11) we find
$-\sigma 2\pi\frac{\partial R}{\partial\rho}=\frac{\partial T}{\partial\rho}=\left(-(p+q)+2\right)\rho^{-(p+q)+1}A(\theta),$
$-\sigma 2\pi\frac{\partial R}{\partial\theta}=\frac{\partial T}{\partial\theta}+{B_{1}^{\prime}}(\theta)+{B_{2}^{\prime}}(\theta)\approx\rho^{-(p+q)+2}A^{\prime}(\theta)\,.$
The leading order contribution comes from $T$ alone. Altogether this gives
$-\sigma 2\pi\left(\frac{\partial R}{\partial\rho}\frac{\partial
H}{\partial\theta}-\frac{\partial R}{\partial\theta}\frac{\partial
H}{\partial\rho}\right)=\rho^{-(p+q)+3}\left((p+q-2)A(\theta)\sin\theta-2A^{\prime}(\theta)\cos\theta+O(\rho)\right)$
up to lowest order in $\rho$. The solution $\theta=0$ follows from symmetry:
$A(\theta)$ is even in $\theta$ thus $A^{\prime}(\theta)$ is odd in $\theta$,
and thus both terms are odd in $\theta$. For $p=1$ the line $\theta=0$ is a
line of regular values, and therefore we have shown that the twist vanishes
along $\theta=0$ asymptotically near the origin. ∎
### Acknowledgements
The authors would like to thank the Department of Applied Mathematics at the
University of Colorado, Boulder for its hospitality in 2006/7 where this work
was completed. HRD was supported in part by a Leverhulme Research Fellowship.
# A liquid helium target system for a measurement of parity violation in
neutron spin rotation
C. D. Bass (christopher.bass@nist.gov), T. D. Bass, B. R. Heckel, C. R. Huffer, D. Luo, D. M. Markoff, A. M. Micherdzinska, W. M. Snow, H. E. Swanson, S. C. Walbridge
Indiana University / IUCF, Bloomington, IN 47408, USA; National Institute of Standards and Technology, Gaithersburg, MD 20899, USA; University of Washington / CENPA, Seattle, WA 98195, USA; North Carolina Central University, Durham, NC 27707, USA
###### Abstract
A liquid helium target system was designed and built to perform a precision
measurement of the parity-violating neutron spin rotation in helium due to the
nucleon-nucleon weak interaction. The measurement employed a beam of low
energy neutrons that passed through a crossed neutron polarizer–analyzer pair
with the liquid helium target system located between them. Changes between the
target states generated differences in the beam transmission through the
polarizer–analyzer pair. The amount of parity-violating spin rotation was
determined from the measured beam transmission asymmetries. The expected
parity-violating spin rotation of order $10^{-6}$ rad placed severe
constraints on the target design. In particular, isolation of the parity-odd
component of the spin rotation from a much larger background rotation caused
by magnetic fields required that a nonmagnetic cryostat and target system be
supported inside the magnetic shielding, while allowing nonmagnetic motion of
liquid helium between separated target chambers. This paper provides a
detailed description of the design, function, and performance of the liquid
helium target system.
###### keywords:
Cold neutrons , Cryogenic targets , Liquid helium , Neutron physics , Nucleon-
nucleon weak interaction , Parity-violation , Polarized neutrons
###### PACS:
21.30.-x , 24.80.+y , 25.10.+s , 29.25.Pj
## 1 Introduction
The Neutron Spin Rotation experiment was conducted to measure, to a precision of $3\times 10^{-7}$ rad/m, the parity-violating spin rotation angle $\phi_{\rm{PV}}$ of cold neutrons that propagate through liquid helium. The liquid
helium target system for this experiment was assembled and tested at the
Indiana University Cyclotron Facility (IUCF) in Bloomington, Indiana, and the
experiment was conducted at the National Institute of Standards and Technology
(NIST) in Gaithersburg, Maryland.
This paper describes the design, commissioning and performance of the liquid
helium target system used in the Neutron Spin Rotation experiment. A more detailed description of the polarimeter and of the neutron beam characterization will be presented in future papers. Section 1 provides a
description of the phenomenon of parity violation in neutron spin rotation
with a brief discussion of the scientific interest in its measurement;
explains the design of the neutron polarimeter and the measurement strategy
for isolating the parity-odd component of the neutron spin rotation; and
describes the overall experimental apparatus, listing the requirements and
constraints on the design of the liquid helium target. Section 2 describes the design and construction of the liquid helium target system. Section 3 outlines the motion control system that was used to move liquid within the target region. Section 4 describes the nonmagnetic cryostat and the cryogenic performance of the target system. Section 5 describes the integration of the liquid helium target system into the data acquisition system. Section 6
provides details of the performance of the target, and Section 7 offers
conclusions.
### 1.1 Physics Overview
From an optical viewpoint [1], neutron spin rotation is caused by the presence
of a helicity-dependent neutron index of refraction $n$ of a medium, which can
be given in terms of the coherent forward scattering amplitude $f(0)$ for a
low-energy neutron:
$n=1-\frac{2\pi\rho f(0)}{k^{2}},$ (1)
where $\rho$ is the number density of scatterers in the medium, and $\vec{k}$
is the incident neutron wave vector. For low-energy (meV) neutrons propagating
through an unpolarized medium, $f(0)$ is the sum of an isotropic, parity-even
term $f_{\rm{PC}}$ that is dominated by the strong interaction and a parity-
odd term $f_{\rm{PV}}$ that contains only weak interactions and is dominated
by p-wave contributions. The $f_{\rm{PV}}$ term is proportional to
$\vec{\sigma_{\rm{n}}}\cdot\vec{k}$, where $\vec{\sigma_{\rm{n}}}$ is the
neutron spin vector, so $f_{\rm{PV}}$ has opposite signs for the positive and
negative helicity neutron spin states.
As a neutron propagates through a medium, the two helicity states accumulate
different phases: $\phi_{\pm}=\phi_{\rm{PC}}\pm\phi_{\rm{PV}}$. The parity-odd
component causes a relative phase shift of the two neutron helicity
components, and so induces a rotation of the neutron polarization about its
momentum. Because the parity-odd amplitude is proportional to the wave vector
$k$, the rotary power (rotation per unit length) tends to a constant for low
energy neutrons [2]:
$\lim_{E_{n}\to 0}\frac{d\phi_{PV}}{dz}=\frac{4\pi\rho f_{PV}}{k}.$ (2)
An order-of-magnitude estimate leads one to expect weak rotary powers in the
$10^{-6}-10^{-7}$ rad/m range.
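For orientation, the snippet below evaluates Equation 2 numerically. The liquid helium number density and, in particular, the parity-odd amplitude are assumed, illustrative values only, the latter chosen simply to land in the quoted range; neither should be read as a measured input.
```python
import math

# Illustrative inputs (assumptions, not measured values)
rho = 2.2e28          # number density of liquid helium at 4.2 K, atoms per m^3 (approximate)
wavelength = 0.5e-9   # m, representative cold-neutron wavelength
k = 2 * math.pi / wavelength   # incident wave vector, 1/m
f_pv = 5e-26          # m, illustrative parity-odd forward amplitude

# Equation 2: low-energy limit of the parity-violating rotary power
dphi_dz = 4 * math.pi * rho * f_pv / k
print(f"rotary power ~ {dphi_dz:.1e} rad/m")   # ~1e-6 rad/m for these inputs
```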
The parity-violating neutron spin rotation is due to the nucleon-nucleon (NN)
weak interaction and can be described in terms of the Desplanques, Donoghue,
and Holstein nucleon-meson weak coupling amplitudes [3][4] as well as pionless
effective field theory coupling parameters [5][6][7]. The values of these
couplings are not well constrained by either theory or experiment, so a
measurement of the parity-violating neutron spin rotation through liquid
helium can constrain the poorly-understood properties of the NN weak
interaction.
### 1.2 Measurement Technique
An overview of the neutron polarimeter is shown in Figure 1. The measurement
technique – analogous to an arrangement of an orthogonally-crossed
polarizer–analyzer pair in light optics – focuses on the orientation of the
neutron polarization, which emerges along the $+\hat{\rm{y}}$-direction from
the supermirror polarizer [8].
Figure 1: Schematic diagram of the neutron polarimeter apparatus for the
Neutron Spin Rotation experiment.
In the absence of spin rotation – including both background magnetic rotations
and the signal rotations – this orientation would remain unchanged during
passage along the spin transport and into the target region. After leaving the
target region, neutrons would enter the output coil, which transversely and
adiabatically rotates the neutron polarization vector by $\pm\pi/2$ rad (see
Figure 2). Neutrons would then pass through the polarization analyzer. Because
the transmitted beam intensity for both $+\pi/2$ and $-\pi/2$ rotational
states is the same, the difference in count rates measured by the 3He
ionization chamber would be zero.
Figure 2: Diagram of the transverse rotation of neutron polarization by the
output coil. The amount of spin rotation experienced by the neutrons along the
target region is proportional to the count rate asymmetry measured in the 3He
ionization chamber between the two output coil rotational states.
However, if the neutron polarization rotates during beam passage through the
target region, there would be a component of neutron polarization along the
$\hat{\rm{x}}$-direction (horizontal) when the beam reaches the output coil.
This component would flip between the $+\hat{\rm{y}}$ and
$-\hat{\rm{y}}$-directions as the output coil alternates between $+\pi/2$ and
$-\pi/2$ rotational states. The transmission of neutrons polarized parallel to
the axis of the polarization analyzer would differ from that of neutrons polarized
antiparallel, and this would produce an asymmetry in the count rates for the
two output coil rotational states. The neutron spin-rotation angle
$\phi_{\rm{PV}}$ is proportional to this count rate asymmetry.
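In the idealization of a perfect polarizer–analyzer pair and an exact $\pm\pi/2$ output-coil rotation, the transmitted rates for the two coil states behave as $N_{\pm}\propto 1\pm P\sin\phi$, so for small angles the count-rate asymmetry reduces to the rotation angle itself. The sketch below only illustrates that relation; the analyzing power $P$ and the rate $N_{0}$ are assumed numbers, not apparatus parameters.
```python
import math

def count_rates(phi, P=1.0, N0=1.0e6):
    """Idealized transmitted rates for the +pi/2 and -pi/2 output-coil states.

    phi : net spin-rotation angle accumulated in the target region (rad)
    P   : assumed polarizer/analyzer analyzing power (1.0 = ideal)
    N0  : assumed mean detected rate (arbitrary units)
    """
    n_plus = N0 * (1 + P * math.sin(phi)) / 2
    n_minus = N0 * (1 - P * math.sin(phi)) / 2
    return n_plus, n_minus

phi = 1e-6                      # rad, scale of the expected parity-violating rotation
n_plus, n_minus = count_rates(phi)
asym = (n_plus - n_minus) / (n_plus + n_minus)
print(asym)                     # ~1e-6: the asymmetry reproduces phi for small angles
```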
Because longitudinal magnetic fields generate neutron spin rotation, it was
necessary to separate the parity-violating component of the signal from this
parity-conserving background. This separation was accomplished by oscillating
the parity-violating signal at a known frequency.
However, since the magnitude and direction of spin rotation along the target
are independent of the initial direction of the neutron transverse
polarization, flipping the direction of the transverse polarization cannot be
used to modulate the parity-violating signal. Instead, the oscillation was
created by a combination of target motion and a precession of the neutron
polarization about a vertical axis. Liquid helium was moved between a pair of
target chambers located upstream and downstream of a vertical solenoid called
a “pi-coil”.
The integrated spin rotation in the target region due to magnetic fields was
unaffected by the presence of liquid helium in either the upstream or
downstream target chambers, provided that (1) the target was nonmagnetic and
(2) the trajectories and energies of neutrons accepted by the polarization
analyzer and 3He ionization chamber were unchanged when liquid helium was
moved between the target chambers.
The pi-coil generated an internal magnetic field whose magnitude was chosen to
precess the neutron polarization direction by $\pi$ radians about the
$\hat{\rm{y}}$-axis for neutrons of a given energy. This precession
effectively reversed the sign of the x-component of neutron spin for rotations
that occurred upstream of the pi-coil. With the pi-coil energized, the parity-
violating contribution to the neutron spin rotation for liquid helium was
negative for the target chamber located in the upstream position relative to
the rotation in the downstream target position, and the difference in the
total spin rotation angle between the two target states was $2\phi_{\rm{PV}}$
plus a contribution due to non-ideal magnetic backgrounds within the target
region. A schematic of the spin rotation angles along the polarimeter is shown
in Figure 3.
Figure 3: Diagram of the spin rotation angles along the polarimeter. As
neutrons pass through the upstream section of the target, they undergo a
parity-conserving Larmor precession ${}^{up}\phi_{PC}$, which is due to
background magnetic fields in that region. Passage through the pi-coil reverses
the sign of that rotation. Neutrons then pass through the downstream target
region and undergo another parity-conserving Larmor precession
${}^{dn}\phi_{PC}$ of magnitude similar to ${}^{up}\phi_{PC}$ due to fields in
that region. The residual background rotation $\phi_{bkg}$ is the small
difference between ${}^{up}\phi_{PC}$ and ${}^{dn}\phi_{PC}$, and is
independent of the presence of helium in the target chambers. However,
neutrons passing through the target when the upstream chamber is filled with
liquid helium (I) undergo an additional parity-violating spin rotation
$\phi_{PV}$ that is due to the NN weak interaction. The sign of $\phi_{PV}$ is
reversed upon passage through the pi-coil. Similarly, neutrons passing through
the target when the downstream chamber is filled with liquid helium (II) also
undergo a parity-violating spin rotation $\phi_{PV}$. However, these neutrons
have already passed through the pi-coil, so the sign of $\phi_{PV}$ is
unchanged. Thus, moving liquid helium between upstream and downstream target
chambers can modulate the sign of the $\phi_{PV}$ signal against a constant
background $\phi_{bkg}$.
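The bookkeeping in Figure 3 can be written out directly: with the pi-coil energized, the net angle in target state I is approximately $\phi_{bkg}-\phi_{PV}$ and in state II it is $\phi_{bkg}+\phi_{PV}$, so half the difference of the two measured angles returns $\phi_{PV}$ while the common background drops out. The sketch below, using invented angle values, only illustrates that cancellation in the ideal case.
```python
# Invented angles (rad) to show how the target-state difference isolates phi_PV
phi_pv = 1.0e-6      # parity-violating rotation per target passage (assumed scale)
phi_bkg = 5.0e-4     # residual background rotation from magnetic fields (assumed, common to both states)

phi_state_I = phi_bkg - phi_pv    # upstream chamber full: pi-coil reverses the sign of phi_PV
phi_state_II = phi_bkg + phi_pv   # downstream chamber full: sign of phi_PV unchanged

print((phi_state_II - phi_state_I) / 2)   # recovers phi_pv; phi_bkg cancels exactly in this ideal case
```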
### 1.3 Physics Driven Parameters
In order to reach a measurement sensitivity of $3\times 10^{-7}$ rad/m, the
neutron polarimeter and liquid helium target system needed to satisfy certain
requirements. Because the parity-violating rotary power approaches a constant in the low-energy limit (Equation 2), the parity-odd spin-rotation angle
$\phi_{PV}$ is proportional to the number density of scatterers $\rho$ for the
target and the target length $z$. The statistical error is given by
$\sqrt{N}=\sqrt{N_{o}e^{-\rho\sigma_{tot}z}},$ (3)
with total neutron cross section $\sigma_{tot}$. Maximizing the ratio of signal to statistical error yields a target length of $z=2/(\rho\sigma_{tot})$, or twice
the mean free path. Because $\sigma_{tot}$ strongly depends on energy for
liquid helium in the cold neutron regime, the optimal target length depends on
the details of the neutron energy spectrum. In liquid helium at 4.2 K, the
mean free path for 0.5 nm neutrons is about 1 meter, and this path length
increases to about 2 m for 0.7 nm neutrons [10].
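The quoted optimum follows from maximizing the figure of merit $\phi_{PV}\sqrt{N}\propto z\,e^{-\rho\sigma_{tot}z/2}$. A quick numerical check is sketched below, with the mean free path treated as an assumed input.
```python
import numpy as np

mfp = 1.0                            # m, assumed neutron mean free path 1/(rho*sigma_tot)
z = np.linspace(0.01, 6.0, 10000)    # candidate target lengths, m

# Figure of merit: signal (proportional to z) times sqrt of transmitted counts
fom = z * np.exp(-z / (2.0 * mfp))

z_opt = z[np.argmax(fom)]
print(z_opt)                         # ~2.0 m, i.e. twice the mean free path, as in z = 2/(rho*sigma_tot)
```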
The polarimeter needed to effectively isolate the parity-violating spin
rotation signal from noise and a much larger parity-conserving background. The
primary sources of noise above $\sqrt{N}$ counting statistics were (1)
fluctuations in the neutron beam intensity that were transmitted through the
polarization analyzer and into the ion chamber, which were due to noise in the
intensity and/or beam spectrum from the reactor, or from transmission through
density fluctuations in the liquid helium target; (2) fluctuations of the
neutron spin rotation angle that were analyzed by the polarimeter, which were
caused by magnetic field fluctuations in the target region; and (3) extra
noise in the ion chamber due to the current-mode measurement technique. Noise
from (1) was relevant to the target design and is discussed here, while the other sources will be covered in a later paper on the apparatus as a whole.
The two relevant frequency bands for the Neutron Spin Rotation experiment are
the modulation frequency of the output coil (which determined the measurement
frequency of the neutron spin rotation angle from count-rate asymmetry
measurements) and the frequency of the target motion (which determined the
measurement frequency of the parity-violating component of the spin rotation
angle). In general, these frequencies should be as high as possible. The
modulation frequency of 1 Hz for the output coil was set by how quickly the
currents in the coil could be reversed and stabilized. The target motion
frequency of about 1 mHz was set by the performance of the liquid helium pump
and drain system, and the volume of liquid helium to be moved.
Although no precise measurements of reactor intensity or spectrum fluctuations
had been performed for the NIST NG-6 cold neutron beam in the 1 Hz frequency
range before this measurement, a time-series analysis of neutron flux monitor
data from a neutron lifetime experiment (sampled at 60 second intervals) [9], combined with an assumed $1/f$ dependence of the source noise that has been observed in other reactors, implied that the beam intensity noise in a 1 Hz
frequency band could be $5-10$ times larger than the $\sqrt{N}$ noise from the
integrated number of neutrons in the same band. This reactor noise could be
suppressed by segmenting both of the target chambers into separate left and
right halves and then splitting the neutron beam into two parallel sub-beams,
which effectively created two separate simultaneous experiments. Operation of
the left and right side targets with opposite target states allowed a
comparison of left and right side asymmetry measurements that suppressed
common mode noise.
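The benefit of the split beam can be illustrated with a toy simulation: both sub-beams share the same slow intensity drift (standing in for the $1/f$-like reactor noise) plus independent counting noise, and because the two sides run in opposite target states, half the difference of their asymmetries keeps the signal while the shared drift cancels. All rates and noise amplitudes below are invented for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
n_cycles = 10000

signal = 1e-6                                        # assumed PV asymmetry; opposite sign on the two sides
common = 5e-6 * np.sin(np.arange(n_cycles) / 50.0)   # shared slow intensity drift (invented)
noise_L = 2e-6 * rng.standard_normal(n_cycles)       # independent counting noise, left sub-beam
noise_R = 2e-6 * rng.standard_normal(n_cycles)       # independent counting noise, right sub-beam

asym_L = +signal + common + noise_L                  # left side, one target state
asym_R = -signal + common + noise_R                  # right side, opposite target state

diff = (asym_L - asym_R) / 2                         # shared drift cancels; signal and independent noise remain
print(f"single-side scatter: {np.std(asym_L):.1e}")  # includes the common drift
print(f"differenced scatter: {np.std(diff):.1e}")    # only independent noise remains
print(f"recovered signal:    {np.mean(diff):.1e}")   # ~1e-6
```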
Constraining the effects associated with common-mode noise to be smaller than neutron counting statistics required that the two parallel sub-beams possess
the same intensity noise to 1% accuracy. Since the phase space of both halves
of the beam was filled from the same neutron source as viewed through a long,
uniform neutron guide, the common-mode intensity fluctuations for each half of
the beam were expected to possess similar power spectra.
The energy spectrum for neutrons entering the polarization analyzer was
primarily determined by the cold neutron moderator, the phase space acceptance
of the guides, and the beam transmission through the liquid target. None of
these elements was likely to shift the energy spectrum by more than 0.1% at 1
Hz, so this contribution to the noise was expected to be negligible.
Fluctuations in the attenuation of the beam through the target also increased
the noise, but stability of the liquid helium density at the 1% level in the
frequency band of interest near 1 Hz was not difficult to achieve.
Finally, since most sources of systematic uncertainties scaled with the
strength of the longitudinal magnetic field in the target region, suppression
of these effects to below the $5\times 10^{-8}$ rad/m level required a
longitudinal magnetic field of less than 10 nT in the target region.
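To see why the field budget is so demanding, the parity-conserving Larmor rotary power for a residual longitudinal field follows from $d\phi_{PC}/dz=\gamma_{n}B/v$. The sketch below evaluates it at the 10 nT bound for an assumed 0.5 nm wavelength; even at this level the raw background rotation is far above the parity-violating signal, which is why the modulation and differencing scheme of Section 1.2 is needed and why, consistent with that scheme, only target-correlated residuals ultimately enter the systematic error.
```python
# Larmor rotary power for a residual longitudinal field (illustrative scale estimate)
gamma_n = 1.832e8      # rad s^-1 T^-1, magnitude of the neutron gyromagnetic ratio
B = 10e-9              # T, the field bound quoted above
wavelength = 0.5e-9    # m, assumed representative neutron wavelength
h = 6.626e-34          # J s, Planck constant
m_n = 1.675e-27        # kg, neutron mass

v = h / (m_n * wavelength)           # neutron velocity, ~790 m/s for 0.5 nm
dphi_pc_dz = gamma_n * B / v         # parity-conserving rotation per unit length
print(f"{dphi_pc_dz:.1e} rad/m")     # ~2e-3 rad/m, orders of magnitude above the PV signal
```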
### 1.4 Overall Design Considerations and Constraints
The neutron polarimeter employed a target system that moved liquid helium
between a pair of target chambers located upstream and downstream of a
vertical solenoid in order to generate beam count-rate asymmetries, which were
used to determine spin rotation angles. In the ideal case of no target-
dependent magnetic rotations, the location of liquid helium in the target should set the sign of the parity-violating signal without affecting the parity-conserving background signal. The transfer of liquid helium between the target
chambers should change only the sign of the parity-violating signal but not
its magnitude and must be carried out non-magnetically.
The presence of liquid helium in either the upstream or downstream target
chamber defined a “target state” of the polarimeter. The length of time that
the liquid helium target remained in a given target state was determined by
beam count-rate statistics and dead-time during target state change-overs.
These time parameters were related to the data acquisition dead-time and to
possible systematic effects caused by time-dependent magnetic fields in the
target region [11].
In order to reduce the cost of the polarimeter and liquid helium target
system, the collaboration chose to reuse several components from a previous
experimental apparatus used to search for spin rotation in helium [12]. These
components included a nonmagnetic, horizontal bore cryostat; a pair of coaxial
room-temperature magnetic shields with endcaps; input and output guide coils
for the polarimeter; the pi-coil; and the liquid helium centrifugal pump.
The 100 cm long bore of the cryostat limited the overall length of the liquid
helium target assembly, which included the targets and pi-coil, mechanical
supports, a cylindrical vacuum vessel that housed the target assembly, and
necessary vacuum hardware and electrical instrumentation feedthroughs. In
order to optimize available space, target chamber lengths were set to 42 cm.
The transverse dimensions of the target were determined by the cross-sectional
area of the neutron beam at the exit window of the supermirror polarizer,
which was 4.5 cm tall by 5.5 cm wide. The dimensions inside the target
chambers allowed maximum acceptance of the beam with sufficient space for
internal collimation. Combined with the target length, these dimensions
determined the target volume.
The upstream and downstream target chambers were split vertically by a 3 mm
thick septum, which created left and right-side chambers. The collimation
within the target chambers split the neutron beam into separate but parallel
sub-beams, which effectively created two separate simultaneous experiments.
The left and right-side targets were operated with opposite target states, so
that the parity-violating signal was opposite in sign for the left and right
sides while the parity-conserving background for both sides was the same.
Comparison of left and right-side measurements allowed a suppression of
systematic effects and common-mode noise as described in Section 1.2.
The need to suppress the magnetic field inside the target region severely
affected the design of the liquid helium target system and the spin transport
of the polarimeter. Within the target region, only nonmagnetic and low-permeability materials were used – all hardware was checked explicitly for
magnetic inclusions or impurities. Any item that produced changes in the
ambient magnetic field of more than 1 nT when moved past a fluxgate
magnetometer sensor at a distance of 1 cm was rejected.
The current-carrying wires of the pi-coil and the instrumentation potentially
generated undesired magnetic fields; therefore, twisted-pair wires were used
for all wiring within the target region in order to suppress associated stray
fields. Furthermore, all instrumentation was powered off during data-taking
and energized during target changes as required.
### 1.5 Experimental Layout
Low energy ($\sim 10^{-3}$ eV) neutrons from the NIST Center for Neutron Research
(NCNR) cold source were transported to the end station of the NG-6
polychromatic beam line and passed through cryogenic blocks of bismuth, to filter out gamma rays and fast neutrons, and beryllium, to filter out short-wavelength cold neutrons. Transmitted neutrons passed into the neutron
polarimeter apparatus as shown in Figure 1.
At the upstream end of the apparatus, neutrons were vertically polarized in
the “up” direction ($+\hat{\rm{y}}$-axis) by a supermirror polarizer. The
neutrons traveled along a 1.25 m long guide tube that was filled with helium
gas and passed through an input coil, whose vertical field preserved the
alignment of the beam polarization. Neutrons passed through a current sheet at
the end of the input coil so that the neutron spin was non-adiabatically
transported into the magnetically shielded target region.
Reducing the ambient 50 $\mu$T field to less than 10 nT in the target region was
accomplished using a combination of room-temperature and cryogenic mu-metal
shielding. Transient fields in the NCNR guide hall generated by equipment and
other experiments were suppressed by compensation coils that were mounted
outside the room-temperature mu-metal shielding. Magnetometry located in the
target region provided control feedback to the compensation coils.
After passing into the target region, neutrons were collimated into separate
left and right sub-beams and allowed to enter the liquid helium target.
Neutrons propagated through the target and then entered the output coil, which
non-adiabatically guided the neutron spin out of the target region. The output
coil also transversely rotated the neutron polarization vector by $\pm\pi/2$
rad. The sub-beams then passed through a supermirror polarization analyzer
whose polarization axis was aligned with that of the supermirror polarizer.
Transmitted neutrons then entered a longitudinally-segmented 3He ionization
chamber based on the design by Penn et al. [13], which operated in current mode
and generated a signal proportional to the neutron count rate. The asymmetry
in the count rates measured after changing the helicity of the magnetic
transport field in the output coil was proportional to the neutron spin
rotation angle. The segmentation of the 3He ionization chamber provided some
energy discrimination of the neutron beam. The $1/v$ dependence of the neutron
absorption cross-section in 3He allows higher energy neutrons to penetrate
deeper on average into the ion chamber. The parity-conserving spin rotation
due to magnetic fields is proportional to the time neutrons spend in the
fields and thus is dependent on neutron velocity. The parity-conserving
background should generate non-uniform asymmetries across the ion chamber
segmentation that scale with the strength of the integrated residual magnetic
field. However, the parity-violating signal is independent of energy for cold
neutrons and should generate the same asymmetry for all ion chamber segments.
## 2 Liquid Helium Target
The liquid helium target consisted of a pair of vessels located upstream and
downstream of the pi-coil. Each vessel was partitioned into separate left and
right-side chamber pairs, which created four identical target chambers that
could each hold liquid helium. This partitioning effectively created upstream and downstream chamber pairs along the left and right-side sub-beams and, combined with the polarimeter design, formed two parallel experiments in which whenever an upstream target chamber was filled, the corresponding downstream chamber was nominally empty and vice versa. Alternately filling and
draining diagonal pairs of target chambers with liquid helium created the two
target states that were necessary to extract the parity-violating spin
rotation angle $\phi_{\rm{PV}}$.
Each target chamber possessed an inlet and outlet for transferring liquid
helium. Inlets from all four chambers were connected to a centrifugal pump
that was immersed in a 13 L liquid helium bath located in the bottom of a
cylindrical vessel called the vac-canister. By operating the centrifugal pump,
all four target chambers could be filled with liquid helium.
Figure 4: Photo of the liquid helium target showing the pump and drain system,
the pi-coil, and instrumentation.
Each outlet was connected to a flexible drainpipe that could be moved above or
below the height of nominally full or empty (respectively) liquid helium
levels inside the target chambers. By lowering a drain, a target chamber could
be emptied of liquid helium and its contents returned to the bath. The volume
of the bath in the bottom of the vac-canister was maintained by periodic
transfer of liquid helium from an external dewar.
### 2.1 Vac-canister
The vac-canister was a 95.3 cm long by 28.8 cm diameter cylindrical aluminum
vacuum chamber that housed the liquid helium target. Upstream and downstream
targets – as well as the pi-coil – were bolted to an aluminum support rail
that mated into a set of matching rails in the bottom of the vac-canister.
This rail system provided alignment of the target within the vac-canister.
Main flanges with indium o-ring seals were located on the upstream and
downstream ends of the vac-canister. Both of the main flanges had a 0.8 mm
thick aluminum flange with an indium o-ring seal covering a beam window.
During leak checks and data taking operation, the vac-canister and beam
windows were shown to withstand an internal pressure difference of 270 kPa and
an external pressure difference of 101 kPa without leak or rupture of the
windows at both room temperature and cryogenic temperatures. In addition, the
downstream main flange was built to accept two custom-built nonmagnetic
electrical feedthroughs [14], a motion-control feedthrough tube, and a liquid
helium transfer tube, along with spare access ports. All seals used aluminum ConFlat (CF) flanges and gaskets or indium joints, and all metals were brass, aluminum or titanium. (Certain commercial equipment, instruments, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.) All components were thermally
cycled several times in preliminary tests and later were shown to be
superfluid leak tight in low temperature tests at IUCF.
Around the outside of the vac-canister, ten equally spaced coils connected to
individual current supplies provided magnetic field compensation inside a
layer of cryogenic magnetic shielding that lined the cold bore.
Figure 5: Layout of the neutron beam collimation within a target. 6LiF-plastic
was glued onto aluminum backing frames and then glued into the target
chambers. The size and spacing of the collimation was chosen to prevent
neutrons with wavelengths larger than 2 nm from reflecting off a chamber wall
and then being accepted into the 3He ionization chamber. Additional 6LiF-
plastic was glued onto the upstream face of the targets.
The vac-canister rested within this cryogenic magnetic shielding on four
ceramic balls, which thermally isolated it from the rest of the cryostat. The
vac-canister was able to slide along the cold bore to compensate for
differential thermal contraction of the motion-control feedthrough and liquid
helium transfer tubes, which connected the vac-canister to fixed connections
on the motion-control box located outside the target region. A pair of
titanium bolts protruded through the cryostat upstream 4-K thermal shield,
which provided a low thermal conductivity mechanical stop for the upstream
movement of the vac-canister.
### 2.2 Target Chambers
The 420 mm long by 80 mm wide upstream and downstream targets were each
machined from monolithic pieces of 6061 aluminum. Aluminum was chosen as the
target chamber material since it is nonmagnetic, has a high thermal
conductivity, and has a low neutron scattering and absorption cross section
[15].
A wire electrical discharge machine (EDM) at the University of Washington was
used to create left and right-side chambers in each target that were 416 mm
long by 33.5 mm wide by 60 mm deep; the chambers were separated by a 3 mm
thick septum that isolated the left and right sides (see Figure 4). Special
care was taken to ensure that all surfaces exposed to the neutron beam were
flat and normal to the mean beam direction to minimize possible systematic
effects from neutron refraction. In addition, special care was taken to make
the target dimensions, especially the target lengths, as identical as possible
to minimize target-dependent systematic effects. The measured length
difference at room temperature between all four target chambers was less than
0.01 mm.
### 2.3 Neutron Beam Collimation
Because of beam divergence or small angle scattering, some neutrons could
reflect from a target chamber wall, be transmitted through the polarimeter,
and counted in the 3He ionization chamber. The critical angle for neutron
reflection between helium and aluminum depends on the difference in the
neutron index of refraction of the two materials, which is proportional to
density and therefore changes with the liquid or gas state of the helium.
These differences can cause systematic effects through target-dependent
neutron beam intensity and phase space changes coupled to residual magnetic
fields in the target region [11]. This subclass of neutron trajectories was
prevented from reaching the 3He ionization chamber by collimation of the beam.
Within each target chamber, a set of three 6LiF-plastic collimators prevented
neutrons from reflecting off the chamber walls and reaching the detector as
shown in Figure 5. Collimators were positioned at 1/4, 1/2, and 3/4 of the
length of the target chambers and extended into the target chamber 5 mm along
the top, bottom, and outer chamber walls, and 2 mm along the chamber septum.
This collimation defined the sub-beam within a target chamber as 26.5 mm wide
by 50 mm tall.
The incident neutron beam possesses a broad energy spectrum, which begins at
the cold source as a Maxwellian distribution corresponding to a temperature of
40 K. Because the critical angle of the guides that transport the neutrons to
the apparatus increases with the neutron wavelength, any particular choice of
collimation suppresses reflected neutrons only below some cutoff wavelength.
The geometry and spacing of the collimators sufficed to prevent neutrons with
wavelengths less than 2 nm from reflecting off target walls and being accepted
by the ionization chamber. The neutron beam intensity for wavelengths above 2
nm in a long ($\sim$60 m) guide like that used at NCNR is typically over three
orders of magnitude smaller than the higher energy portion of the beam. This
fraction of the beam is too small to make a significant contribution to the
systematic uncertainty in the measurement.
The collimators were built from 6LiF-plastic that was glued to an aluminum
backing with Stycast 2850FT epoxy resin and then glued into the target
chambers. Additionally, 6LiF-plastic was glued to the upstream face of each
target to define the left and right-side neutron sub-beams with the same
collimation dimensions as those inside the target chambers.
### 2.4 Pi-Coil
The pi-coil generated an internal magnetic field that precessed the transverse
component of neutron spin about the $\hat{\rm{y}}$-direction. The amount of
spin precession was determined by the strength of the magnetic field and the
neutron velocity. The pi-coil was tuned to precess neutrons at the mean wavelength of the beam (approximately 0.5 nm) by $\pi$ rad.
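As a rough cross-check of this tuning, the field needed to precess a 0.5 nm neutron by $\pi$ rad follows from $\phi=\gamma_{n}BL/v$. In the sketch below the effective field-region length is an assumption (taken as the 40 mm solenoid cross-section described below), so the resulting field is only indicative.
```python
import math

# Field required for a pi precession of the mean-wavelength neutrons (illustrative)
gamma_n = 1.832e8      # rad s^-1 T^-1, magnitude of the neutron gyromagnetic ratio
wavelength = 0.5e-9    # m, mean beam wavelength quoted above
h = 6.626e-34          # J s, Planck constant
m_n = 1.675e-27        # kg, neutron mass
L = 0.040              # m, assumed effective field length (the 40 mm solenoid cross-section)

v = h / (m_n * wavelength)            # ~790 m/s
B_pi = math.pi * v / (gamma_n * L)    # field giving a pi precession over length L
print(f"{B_pi * 1e3:.2f} mT")         # ~0.3 mT for these assumptions
```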
Designed and constructed by the University of Washington, the pi-coil
consisted of a pair of side-by-side, 40 mm square cross-section, 160 mm tall
solenoids (see Figure 6). To minimize magnetic flux leakage, the current in
the two rectangular coils flowed in opposite directions with a set of three
curved solenoids providing magnetic flux return at the ends. The leakage field
was measured at less than 50 nT at a position of 1 cm from the center of the
coil. Each solenoid in the pi-coil was wound around an aluminum core with
three layers of 28 gauge copper magnet wire at a winding density of 10 wires
per cm per layer.
Figure 6: Diagram of the pi-coil. The coil is wound so that the left and right
solenoids produce vertical magnetic fields through the region of the passing
neutron beam. These fields are oriented opposite to each other, and the curved
solenoid sections on top and bottom provide flux return for both coils.
The pi-coil was fixed in place between the upstream and downstream target
chambers by nylon set screws that were bolted into a surrounding aluminum
mount. The screws pressed onto aluminum relief plates that were glued to the
outside of the pi-coil in order to prevent windings from being damaged. The
mount was attached to the upstream and downstream target chamber lids. A brass
pin was fitted into the bottom flux return of the pi-coil and coincided with its
geometric vertical axis. The pin rested in a transverse groove in the target
support rail, which also secured the target chambers.
The height of the support pin and the positions of the set screws were chosen
to center the pi-coil in the beam as defined by the target chamber
collimation. The geometric axis of the pi-coil was positioned to coincide with
the vertical axis of the target as defined by the partition between target
chambers. The pi-coil was positioned equidistant between the targets
approximately 20 mm from the inside surface of the target chambers.
### 2.5 Centrifugal pump and drain system
Isolating the parity-violating component of the neutron spin rotation through
target motion required a method of changing target states that filled and
drained diagonal pairs of target chambers without changing the magnetic fields
inside the target region. This was accomplished with a centrifugal pump that
was immersed in a 13 L liquid helium bath located in the bottom of the vac-
canister and a set of flexible drainpipes that were connected to outlets
located at the bottom of each target chamber.
The centrifugal pump was similar in size and design to positive-displacement
pumps that have previously been used to move both normal and superfluid helium
[16]. The pump was tested thoroughly in liquid nitrogen at IUCF before
operation at liquid helium temperature. The torque needed to spin the impeller
was transferred from a stepper motor located outside of the target region
through a carbon fiber driveshaft within the motion-control feedthrough tube
and through a driveline inside the vac-canister that was built from brass rod
and flexible copper braided-wire rope (Figure 7).
Figure 7: Schematic of the centrifugal pump system. A stepper motor positioned
outside the target region provided torque to the centrifugal pump along a
driveshaft in the motion-control feedthrough tube and a driveline in the vac-
canister. The centrifugal pump moved liquid helium from a bath in the bottom
of the vac-canister into plumbing that connected to the four target chamber
inlets located in the target lids.
In the first version of this experiment, the centrifugal pump was mounted in a
low and horizontal position within the liquid helium bath, such that the
impeller spun around a vertical axis. This arrangement allowed the pump to
fill the target chambers in about 30 s, but required a 6:1 gear ratio and a
90$^{\circ}$ miter gear pair to transfer torque from the horizontal driveshaft to the vertical impeller shaft. However, the pump suffered mechanical failures due to
ice impurities within the liquid helium jamming the gears.
Based on this experience, the pump was mounted in a vertical position within the liquid helium bath, so that the impeller spun around a horizontal axis. This allowed the removal of the 90$^{\circ}$ miter gear pair. In addition, gears with larger teeth and a 4:1 ratio were chosen to be more resistant to seizing due to ice crystals.
The volume throughput of the centrifugal pump was a function of impeller speed
and the depth of the liquid helium bath. The maximum rotation frequency of the
stepper motor feedthrough was 300 rev/min. The centrifugal pump had a 4:1 gear
ratio, which set the maximum impeller rotational speed at 120 rev/min. In
practice, operating the stepper motor above 120 rev/min caused the pump and
driveline to seize. This rotation speed could fill all four target chambers
with liquid helium in 250 s to 300 s, depending on the depth of the bath.
Target chambers were emptied of liquid helium by drainpipes that could be
raised or lowered by braided polyester strings, which were routed through the
target and motion-control feedthrough tube through Teflon and carbon fiber
sleeves to a pair of pneumatic linear actuators located outside of the target
region. Drainpipes from diagonal pairs of target chambers (e.g. upstream-right
and downstream-left) were operated together from a single actuator and could
empty a target chamber of liquid helium in 50 s when lowered (Figure 8).
Figure 8: Schematic diagram of the drain system. Actuators located outside the
target region move strings connected to the ends of drainpipes that are
attached to the outlets of the target chambers via flexible metal bellows. The
raised or lowered position for each drain was chosen to ensure that the tip of
the drainpipe was above the full liquid level or below the empty level
(respectively) for a target chamber.
Inside each target chamber, square grooves were machined into the floor and
lid to ensure that liquid helium or helium gas did not collect along the
bottom or top (respectively) of the chambers between collimators. This
situation could have created a helium liquid/gas interface, which might have
reflected some neutrons that would otherwise have been stopped by the
collimation and could have been counted in the 3He ionization chamber.
### 2.6 Pyroelectric Ice Getters
The centrifugal pump had gears and other moving parts that could in principle
be jammed by solid impurities mixed within the liquid helium. Possible
impurities include ice crystals from water or liquid air accidentally
introduced into the target system during a liquid helium transfer or from the
ice slurry found in the bottom of a typical liquid helium research dewar.
These impurities caused the gears of the pump to seize in the first version of
the experiment. We therefore decided to protect the centrifugal pump with pyroelectric getters.
Cesium nitrate is a ferroelectric that spontaneously polarizes at cryogenic
temperatures [17] and remains polarized at constant temperature in a cryogenic
environment. Cesium nitrate powder was mixed with urethane resin and cast into
discs 75 mm in diameter by 5 mm thick. The discs were bolted onto the lower
section of each target, so they would be immersed in the liquid helium bath.
Although we made no attempt to measure the impurities that may have been
present in the liquid helium inside the target, based on our experience it is
quite unrealistic to assume that they were absent. In our judgement, the most
likely explanation for the fact that the same centrifugal pump used in the
previous experiment did not seize over several months of nearly continuous
operation is the presence of these getters along with gear modifications
described in Section 2.5.
## 3 Motion Control System
In order to change target states, liquid helium needed to be non-magnetically
moved between target chambers. The liquid helium target system employed a pump
and drain system that was operated outside of the target region by the motion-
control system.
The motion-control system consisted of a stepper motor and driveshaft, which
turned the centrifugal pump, and a pair of pneumatic linear actuators that
were attached to the drains by strings. The stepper motor and actuators were
connected to a vacuum chamber called the motion control box (MCB), which
shared the same helium environment as the target system inside of the vac-
canister. A motion-control feedthrough tube guided the driveshaft and actuator
control strings from the MCB into the vac-canister through penetrations in the
magnetic shielding and cryostat.
The motion-control feedthrough tube was constructed from thin-walled G10-grade
glass epoxy laminate tube, which was coated with epoxy resin and surrounded by
a layer of reflective aluminum tape. The epoxy resin provided additional
mechanical stiffness to the laminate tube and suppressed helium diffusion and
light transmission through the wall of the laminate. The aluminum tape
provided a highly reflective surface that suppressed radiative heat transfer
by thermal radiation from the aluminum vacuum jacket that surrounded the
motion-control feedthrough tube. An aluminum CF flange was glued into the cold
end of the tube, and a 316 stainless steel bellows assembly was glued into the
room temperature end, which was located outside of target region (Figure 9).
A guide assembly was housed within the length of the motion-control
feedthrough tube and was built from four small diameter carbon fiber tubes
that were glued into a set of baffles. The carbon fiber tubes sheathed the
braided polyester control strings that connected the target chamber drains to
actuators within the MCB. The baffles provided mechanical support to the
carbon fiber tubes and allowed the guide assembly to slide within the motion-
control feedthrough tube under differential thermal contraction. The baffles
also prevented light and thermal radiation from shining onto the interior of
the vac-canister and segmented the helium gas column within the motion-control
feedthrough tube to suppress potential heat loads due to gas convection. A
drilled hole in each baffle supported and aligned the carbon fiber driveshaft,
which connected the centrifugal pump driveline in the vac-canister to the
stepper motor located in the MCB. Teflon caps were inserted into each end of
the motion-control feedthrough tube and provided a smooth bearing surface for
the strings and driveshaft.
Figure 9: Schematic diagram of the motion control feedthrough tube. Four
carbon fiber tubes that sheathed the drainpipe strings were glued into a set
of thermal radiation baffles. A hole in each baffle guided the pump
driveshaft. The bundle fit within a laminate tube and was capped by Teflon
guides on each end. The laminate tube was externally coated with a layer of
epoxy resin and reflective aluminum tape. A set of baffles were glued on the
outside of the laminate tube to suppress radiative heat loads on the vac-
canister. An aluminum CF flange and a stainless steel bellows assembly were
glued into the ends of the feedthrough tube.
The driveshaft coupled to the stepper motor inside the MCB via a stainless
steel double universal joint. The control strings were attached to the ends of the actuators through brass tension springs. The lengths of the strings were
chosen to ensure that each drain would travel through its entire range of
movement, and the springs provided tension relief for the strings. The springs
were chosen so that they would mechanically fail before a string broke in case
of a stuck string. The MCB possessed a large access port with a Buna-N o-ring
seal and an acrylic window for both visual inspection and (if needed)
mechanical repair of the springs, strings and actuators.
The aluminum vacuum jacket that surrounded the motion-control feedthrough tube
coupled into the MCB through a fluorosilicone compression o-ring seal. This
type of material remains plastic through a wider temperature range than
typical silicone or fluorocarbon o-rings [18], which was important since the
MCB became cold during target operation. During liquid helium transfers or
other periods of rapid cryogenic liquid boil-off in the target, heater tape
was used to guard against the development of vacuum leaks due to o-ring
embrittlement caused by excessive cold.
## 4 Cryogenics
### 4.1 Cryostat
A horizontal bore, nonmagnetic cryostat was originally built by Oxford
Instruments for a previous experiment to measure the parity-violating neutron
spin rotation in liquid helium. All of the vacuum joints that were originally
glued had failed, so these joints were redesigned and replaced with either soldered or indium-sealed joints as appropriate.
The cryostat consisted of two coaxial annular aluminum vessels housed within
an aluminum cylindrical main vacuum vessel. The outer 77-K vessel could hold
50 L of liquid nitrogen, and the inner 4-K vessel could hold 30 L of liquid
helium. The measured hold time of the cryostat during nominal data runs was 50
h for liquid nitrogen and 30 h for liquid helium, which allowed a convenient
daily refill schedule.
The cylindrical interior surface of the 4-K vessel formed the cold bore, which
was 305 mm in diameter by 1000 mm long. A cryogenic magnetic shield built from
Amuneal Cryoperm 10 lined the cold bore. Cryoperm 10 was chosen because its
permeability at cryogenic temperatures is comparable to that of normal mu-
metal at room temperature.
All materials of the cryostat that were located inside the room-temperature
magnetic shielding were nonmagnetic. The two annular vessels were
independently suspended within the main vacuum vessel by adjustable G-10 straps and braces, which provided thermal isolation and allowed alignment. The cryostat was
supported within the magnetic shielding by a set of four aluminum posts, which
passed through small openings in the shielding. The posts fit inside machined
inserts and supported the cryostat from its ends.
### 4.2 Liquid Helium Transfer Tube
The liquid helium target was subject to a heat load that boiled away the
liquid helium in the vac-canister, thereby decreasing the depth of the liquid
helium bath over time. A minimum bath depth was required for changing target
states, so the vac-canister was periodically refilled from an external dewar
using a liquid helium transfer tube that entered the vac-canister through an
external valve and compression o-ring fitting assembly located outside of the
target region and magnetic shielding.
The liquid helium transfer tube was of similar design to the motion-control feedthrough tube (see Section 3), except that there was no internal guide
assembly. An aluminum vacuum jacket surrounded the liquid helium transfer
tube.
### 4.3 Heat Load
The boil-off rate of liquid helium in the vac-canister determined the upper
bound on the length of a data run. Suppressing this heat load allowed longer
data runs and therefore greater statistical accuracy for the spin rotation
measurement. Considerable effort was therefore devoted to the reduction of
possible heat leaks in the system. The challenge was to achieve low heat leaks
in a liquid helium target system which necessarily possesses direct mechanical
linkages between room temperature and the inside regions of the vac-canister.
Electrical instrumentation in the target was turned off when not required in
order to reduce the heat load generated by current-carrying wires and sensors.
This included turning off the pi-coil during target changes.
All cryogenic surfaces were either polished or layered with reflective
aluminum tape or superinsulation to suppress radiative heat transfer. Aluminum
heat shields were bolted onto the upstream and downstream annular faces of the
cryostat liquid helium vessel, which provided nearly 4$\pi$ coverage of the
vac-canister surface with a 4 K surface. A second set of heat shields anchored
to the cryostat liquid nitrogen vessel provided a 77 K surface exterior to the
4 K surface. Beam windows on these heat shields were covered with aluminum
foil, and the inner face of each shield was covered with cryogenic super-
insulation.
Both the motion-control feedthrough tube and the liquid helium transfer tube
penetrated the 4-K and 77-K heat shields and were sheathed by a pair of
aluminum vacuum jackets outside of the target region. The outer surface of
these tubes was wrapped with super-insulation, and each of them employed a set of exterior baffles that prevented 300 K thermal radiation from being incident on the vac-canister through openings in the heat shields.
All components anchored to a surface warmer than 4 K were constructed using
materials that possessed low thermal conductivity and were sufficiently long
to suppress conductive heat transfer. The wiring harness for the target
instrumentation was thermally lagged to the heat shields. Similarly, the
motion-control feedthrough tube and the liquid helium transfer tube were
thermally lagged to the downstream 4-K and 77-K heat shields with braided
copper straps.
The design estimate for the total heat load on the liquid helium target was
100 mW to 150 mW. However, this estimate was too small to explain the observed
bath boil-off rate of about 10 L of liquid helium every 6 h to 8 h. A
numerical simulation of the thermal transport along the motion-control feedthrough tube and the liquid helium transfer tube was conducted, which revealed
that the layer of reflective aluminum tape wrapped around the outer surface of
each tube conducted 140 mW to 225 mW from the MCB and liquid helium valve
transfer port at room-temperature to the vac-canister at 4 K. Including the
effects of the radiative heat load along the length of each tube from the
surrounding room-temperature vacuum jackets – which was what the aluminum tape
was introduced to suppress – added another 20 mW.
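The conducted power scales as $\dot Q=(A/L)\int k\,dT$, so even a thin, high-conductivity layer spanning 300 K to 4 K can carry hundreds of milliwatts. The back-of-the-envelope sketch below is not the simulation itself; the tape cross-section, path length, and conductivity integral are all assumed values chosen only to show that this mechanism naturally produces a heat load of the stated order.
```python
import math

# Conduction estimate for a thin aluminum layer spanning 300 K to 4 K (all inputs assumed)
tube_diameter = 0.05    # m, assumed outer diameter of the feedthrough tube
tape_thickness = 50e-6  # m, assumed thickness of the aluminum tape layer
length = 1.5            # m, assumed conduction path from room temperature to the vac-canister
k_integral = 3.0e4      # W/m, assumed integral of aluminum-alloy conductivity from 4 K to 300 K

area = math.pi * tube_diameter * tape_thickness   # conducting cross-section of the wrapped layer
q_dot = area / length * k_integral                # conducted power, W
print(f"{q_dot * 1e3:.0f} mW")                    # ~160 mW for these assumed values
```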
Figure 10: Diagram of the target control sequence. Once data-taking for one
target state has completed, the _target-change routine_ initiates, which moves
liquid helium between target chambers and configures the target in the next
target state. Data-taking continues by running the target through a _target
sequence_ , which involves stepping the pi-coil state through five series of
current configurations (both directions of current within the pi-coil and a
null current setting). During each pi-coil state, the helicity of the output
coil field modulates 20 times at a rate of 1 Hz. A complete _target cycle_
consists of two consecutive _target sequences_ (one for each target state)
with the two intervening _target-change routines_.
In addition, the simulation indicated that the heat load along the tubes due
to the 77-K thermal lagging straps was substantial, because the copper braids
were mechanically coupled to each tube about 20 cm from the face of the vac-
canister. The straps were anchored to the 77-K heat shield, which could be
tens of degrees warmer than the cryostat 77-K vessel to which it was coupled.
The 4-K thermal lagging straps, which connected onto the tubes about 10 cm
from the vac-canister, partially offset the heat load due to the 77-K straps.
But, the 4-K straps were anchored to the 4-K shield, which was several degrees
warmer than the cryostat 4-K vessel to which it was attached.
The simulation suggested that the motion-control feedthrough tube with its
internal guide assembly, the pump driveshaft and control strings, and the LHe
transfer tube allowed 450 mW to 750 mW of heat into the vac-canister. This
heat load is close to the amount of heat needed to account for the helium
boil-off observed during data runs. Any future operation of the apparatus would include design changes to decrease this heat load to a value closer to the original design estimate.
## 5 Experimental Control and Data Acquisition
### 5.1 Instrumentation
The liquid helium levels for each target chamber and the bath in the vac-
canister were individually monitored with resistive-wire liquid level sensors.
The temperatures of the upstream and downstream targets, as well as of the cryostat 4-K and 77-K vessels, were monitored with silicon diode temperature
sensors. The longitudinal magnetic field within the target region was measured
by a fluxgate magnetometer with a stated sensitivity of 0.5 nT. The
magnetometer was multiplexed to four single-axis low-temperature probes that
were mounted above the targets.
### 5.2 Experimental Control
The control sequence for the liquid helium target system, the data
acquisition, and data storage were managed by the Neutron Spin-Rotation
Acquisition and Control (NSAC) program. The NSAC collected neutron beam count
rates from the ionization chamber while it switched through the combinations
of the two output coil rotation states ($+\pi/2$, $-\pi/2$) with the three pi-
coil current states (current[-], off[0], current[+]) in a predetermined
sequence. This sequence was iterated several times to build up a _target
sequence_ (see Figure 10).
Figure 11: Plot of the liquid helium level in a target chamber as measured by
a resistive wire liquid helium meter during fill and drain testing for various
depths of the liquid helium bath in the vac-canister. The deepest bath depth
produced the fastest fill curve, while shallower baths corresponded to slower fill times.
After a target sequence was completed, the NSAC stopped collecting beam rate
data and initiated a _target-change routine_. The temperature sensors and one
of the fluxgate magnetometer sensors were energized and allowed to warm up to
operational temperature. At the same time, a motion control actuator lowered
the pair of drains of the target chambers that were full of liquid helium
during the preceding target sequence and allowed them to empty their liquid
helium into the bath. Then, the stepper motor began spinning the centrifugal
pump, which began filling all four target chambers with liquid helium from the
bath.
The NSAC turned the stepper motor off after a set length of time, which allowed the pump to completely fill with liquid helium the pair of target chambers whose drains were raised. Calibration testing of the centrifugal pump using varying depths of liquid helium in the bath indicated that a stepper motor run time of 300 s was sufficient for target switch-overs that occurred in the first half of an eight-hour data run, and 350 s for the latter half. The
difference in pump times was related to the decreasing bath depth over the
duration of a run from liquid helium boil-off.
Once the centrifugal pump had stopped, data from the temperature and fluxgate
sensors were recorded by the NSAC. The liquid helium was allowed to completely
empty from the target chambers with lowered drains for 50 s, a time which was
determined during calibration measurements. This drain time also allowed any
bubbles within a full target chamber to settle out so that the liquid helium
density was homogeneous during neutron data sequences. The drain time also
allowed the temperature in the targets to equilibrate and any turbulence in
the target chambers to subside.
After the emptying target chambers had drained, the actuators raised the drains and
all instrumentation in the target region was powered off. With the possible
exception of the angular orientation of the centrifugal pump vanes and other
rotating elements along the driveline, all of the moveable mechanical systems
inside the target chamber were in the same positions for all data runs after
the liquid motion sequence. The target was now in the new
_target state_ , and data taking could resume for the next _target sequence_.
Two consecutive target sequences – one for each target state – constituted a
_target cycle_. Upon completion of each target cycle, the NSAC calculated
various count rate asymmetries for the target, total spin rotation angles, and
parity-violating spin rotation angles $\phi_{\rm{PV}}$ for real time analysis.
Because the target-change routine was constrained by hardware performance, the
target cycle duration was determined by the number of modulations of the
output coil rotation states for each pi-coil state, the frequency of the
modulation, and the number of pi-coil sequences within a target sequence. The
frequency choice for the output coil was discussed in Section 1.3, and the
number of modulations determined the statistical precision of the asymmetry
measurement for a single pi-coil state.
The concern for drifting magnetic fields in the target region placed a bound
on the number of pi-coil sequences in a target sequence, because the asymmetry
measurements for a given pi-coil state could include background rotation
signals due to different background magnetic fields, which would reduce the
precision of the measurement. Similarly, because the parity-violating spin
rotation angle $\phi_{PV}$ is calculated from the asymmetries from the same
pi-coil states for each target state in consecutive target cycles, a drifting
magnetic field could reduce the statistical precision of the measurement or
introduce a systematic error.
However, the duration of the target-change routine introduced a significant
dead time during data-taking. The length of the target sequence was chosen to
limit dead time and increase statistics while suppressing possible systematic
errors and increasing the precision of the measurement. Therefore, each target
sequence consisted of five iterations of each pi-coil current state, each of
which contained 20 modulations of the output coil rotation state with a
frequency of 1 Hz. A complete target cycle of two target sequences and two
target-change routines lasted 1300 s.
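For orientation, the timing budget implied by these numbers can be written out explicitly. The following Python sketch is illustrative only; the state labels and helper functions are ours, not the actual NSAC control code, but the durations are those quoted above.

```python
# Illustrative timing budget for one target cycle, based only on the numbers
# quoted in the text; labels and helpers are hypothetical, not the NSAC code.

PI_COIL_STATES = ("current[-]", "off[0]", "current[+]")   # three pi-coil states
OUTPUT_COIL_STATES = ("+pi/2", "-pi/2")                    # output coil rotation states

MODULATIONS_PER_STATE = 20   # output-coil modulations per pi-coil state
MODULATION_FREQ_HZ = 1.0     # modulation frequency
PI_COIL_ITERATIONS = 5       # iterations of the pi-coil pattern per target sequence

PUMP_TIME_S = 300.0          # stepper-motor run time (first half of a run)
DRAIN_TIME_S = 50.0          # drain/settling time after the pump stops

def target_sequence_time():
    """Beam-on time for one target sequence."""
    per_pi_coil_state = MODULATIONS_PER_STATE / MODULATION_FREQ_HZ
    return PI_COIL_ITERATIONS * len(PI_COIL_STATES) * per_pi_coil_state

def target_cycle_time():
    """Two target sequences plus two target-change routines."""
    change_routine = PUMP_TIME_S + DRAIN_TIME_S
    return 2 * target_sequence_time() + 2 * change_routine

print(target_sequence_time())   # 300.0 s of beam data per target sequence
print(target_cycle_time())      # 1300.0 s per complete target cycle
```

With the 300 s pump time the sketch reproduces the quoted 1300 s target cycle; with the 350 s pump time used later in a run, the cycle lengthens accordingly.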
Figure 12: Plot of the neutron count-rate asymmetries for the left and right
sides of the target during a run in cycle 2. The pattern at the end of the run
indicates partial filling of target chambers due to decreasing bath depth in
the vac-canister. The pattern’s step-size of 300 s corresponds to the
modulation of the target states.
## 6 Liquid Helium Target System Performance
The liquid helium target system was assembled and tested at IUCF. Basic
testing of the target was conducted to ensure that instrumentation and motion
control components functioned correctly at low temperatures. The apparatus was
then shipped to the NCNR, and after initial beamline and polarimeter
characterization studies were complete, the liquid helium target system was
installed and commissioned on the NG-6 beamline.
During the commissioning, neutron rates were measured at various points along
the apparatus while the liquid helium target system was run in an operational
target state. In addition, rates were measured for an empty target
configuration, where the target chambers were drained, but liquid helium was
present in the bath. The neutron rates for the target are summarized in Table
1.
Location | Peak fluence (neutrons cm$^{-2}$ s$^{-1}$)
---|---
After supermirror polarizer | $3.1\times 10^{8}$
Before polarization analyzer (target empty) | $2.8\times 10^{7}$
Before polarization analyzer (target full) | $2.3\times 10^{7}$
Table 1: Typical neutron rates for the liquid helium target.
The liquid helium target operated nearly continuously during the
experiment for about six months in early 2008, executing 5406 target motion
sequences with liquid helium. The target was warmed once to room temperature
during a reactor shutdown to fix a small intermittent leak at the
low-temperature end of the G-10 tube used for repeated liquid helium transfers
into the vac-canister. Except for this warm-up, the target was always held at
a temperature no greater than 77 K to minimize the development of internal
stresses on seals from differential thermal contraction.
The performance of the target fill and drain system in moving liquid between
target chambers was tested prior to the experiment. Figure 11 shows the liquid
helium levels as measured by the liquid level meters in the course of a fill
and drain sequence similar to that used in the experiment. The data
demonstrated that the centrifugal pump worked and that the drain pipes
operated as designed. The testing also revealed that the downstream target
chambers filled about 25% faster than the upstream
chambers. This was likely due to the downstream location of the centrifugal
pump and the additional length of plumbing needed to transport liquid helium
to the upstream chambers.
During the experiment, the liquid helium levels within the target chambers
could be monitored indirectly using the neutron beam transmission. The count-
rate asymmetry used to calculate the neutron spin-rotation angle was separated
into left and right-side count rate asymmetries. Any failure of the target to
either fill or drain properly was indicated in the neutron data as a large
deviation of these left–right count rate asymmetries from zero.
Figure 12 shows a plot of count rate asymmetries during a typical data run.
The neutron data indicate that initially, the liquid helium bath was
overfilled and that the drains did not fully empty all of the chambers between
target sequences. After several target sequences, normal fill and drain
performance was indicated. Towards the end of the run, the neutron data
indicate that the target chambers were not filling or draining properly, which
coincided with the depth of the liquid helium bath decreasing below nominal
operating levels due to boil-off.
The target could also be operated with no liquid helium in the target
chambers, or with no liquid helium in either the target chambers or the
vac-canister. This was done before
and after the liquid helium phase of data taking to place constraints on
possible systematic effects.
The left-right segmentation design of the target allowed a suppression of
common-mode noise due to the intensity noise from the reactor described in
Section 1.3. As shown in Figure 13, this suppression increased the measurement
precision by roughly an order of magnitude, with statistical uncertainties
approaching $\sqrt{N}$.
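The effect of the left-right segmentation can be illustrated with a toy calculation. The sketch below is not the experiment's analysis code; it simply assumes that reactor intensity fluctuations enter the left- and right-side asymmetries identically while a hypothetical parity-violating signal enters with opposite sign, and shows how the common-mode term cancels in the difference.

```python
import numpy as np

# Toy illustration of common-mode suppression, not the actual analysis code.
# Assumptions: reactor intensity noise is common to both sides; the PV signal
# enters with opposite sign; magnitudes below are made up for illustration.
rng = np.random.default_rng(0)
n = 10000
signal = 1e-6                              # hypothetical PV asymmetry
common = rng.normal(0.0, 1e-4, n)          # reactor intensity noise (common mode)
stat_l = rng.normal(0.0, 1e-5, n)          # uncorrelated counting noise, left
stat_r = rng.normal(0.0, 1e-5, n)          # uncorrelated counting noise, right

asym_left = +signal + common + stat_l
asym_right = -signal + common + stat_r

single_side = asym_left                    # one-sided estimate keeps the common mode
combined = 0.5 * (asym_left - asym_right)  # common mode cancels in the difference

print(single_side.std(), combined.std())   # spread shrinks by roughly an order of magnitude
```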
Figure 13: Histogram of the measured parity-violating spin rotation angles
with and without common-mode noise suppression.
The temperature of the target was measured periodically throughout the run
during target motion. Figure 14 shows the temperatures as measured by
thermometers located at various locations on the aluminum target chamber. No
evidence was observed for excursions of the target temperature away from that
expected for a liquid helium bath at a pressure close to atmospheric.
Figure 14: Plot of measured temperatures during cycle 2 of the liquid nitrogen
jacket, the liquid helium jacket, and the downstream and upstream target
chambers (shown in order top to bottom). The liquid nitrogen jacket displays
daily warming and cooling that coincide with its daily fill schedule. The
liquid helium jacket shows good temperature stability, with the noted
exception of temperature spikes during some of the daily fills. The target
temperatures similarly display temperature spikes during liquid helium fills,
but at a frequency of three times per day that coincided with the scheduled 8-hour
runs; otherwise, the target temperatures displayed good stability during data
runs.
## 7 Conclusion
The Neutron Spin Rotation experiment acquired data from January 2008 to June
2008. During the experiment, the liquid helium target system met or exceeded
most design goals. In particular, the motion control and centrifugal pump
systems performed reliably in the cryogenic environment. The drain system
worked especially well, with drain times as expected or better.
Further experiments to perform precision measurements of neutron spin rotation
in liquid helium are possible using the same polarimeter and target system
design. Removal of the reflective aluminum tape on the motion control
feedthrough tube and the liquid helium transfer tube and a redesign of the
thermal anchoring for these tubes should greatly reduce the liquid helium
consumption rate of the target.
Another improvement in target performance could come from increasing the
centrifugal pump speed, thereby reducing the time needed to fill target
chambers with liquid helium. This could be accomplished by returning the pump
to its original low, horizontal position within the liquid helium bath,
choosing appropriately large-toothed gears, continuing the use of the
ice-getters, and developing a “cage” that would shield the gear system from any
remaining ice crystals. This pump modification could decrease the fill times
to the 50 s to 100 s range, which could decrease the time needed for a target
change routine by a factor of three. This reduction of dead time would
increase the available neutron count rate statistics accordingly.
In addition to measurements done with liquid helium, other targets are
possible, including liquid parahydrogen [19][20][21] and liquid orthodeuterium
[22][23]. These targets would require modifications of the basic liquid helium
target system for the warmer operational temperatures (20 K) and the inclusion
of hydrogen safety systems.
## 8 Acknowledgements
The authors acknowledge the support of the National Institute of Standards and
Technology, U.S. Department of Commerce, in providing the neutron research
facilities used in this work, and the support of the U.S. Department of
Energy, Office of Nuclear Science and the National Science Foundation.
We thank the University of Washington for funding and use of hardware from the
previous Neutron Spin Rotation experiment. We thank the Indiana University
Physics Department machine shop and the University of Washington machine shop
for their work on fabrication of the liquid helium target system components,
and the Indiana University Cyclotron Facility for infrastructure used in the
extensive target testing conducted before the experiment. We thank David Haase
for his work in refurbishing the cryostat and room-temperature magnetic
shielding. Christopher Bass acknowledges the support of the Ronald E. McNair
Graduate Fellowship Program and the National Research Council Postdoctoral
Associates Program. This work was supported in part by NSF PHY-0457219, NSF
PHY-0758018, NSF PHY-9804038, and DOE DE-FG02-87ER41020.
## References
* [1] F. C. Michel, Phys. Rev. 133, B329 (1964).
* [2] L. Stodolsky, Nucl. Phys. B 197, 213 (1982).
* [3] B. Desplanques, J. F. Donoghue, and B. R. Holstein, Ann. Phys. 124, 449 (1980).
* [4] B. R. Holstein, Fizika B 14, 165 (2005).
* [5] M. J. Ramsey-Musolf and S. A. Page, Ann. Rev. Nucl. Part. Sci. 56, 1 (2006).
* [6] C. -P. Liu, Phys. Rev. C 75, 065501 (2007).
* [7] S. L. Zhu et al., Nucl. Phys. A 748, 435 (2005).
* [8] M. Forte et al., Phys. Rev. Lett. 45, 2088 (1980).
* [9] J. S. Nico, private communication (2006).
* [10] H. S. Sommers Jr., J. G. Dash, and L. Goldstein, Phys. Rev. 97, 855 (1955).
* [11] C. D. Bass, Measurement of the parity-violating spin-rotation of polarized neutrons propagating through liquid helium, Ph.D. diss., Indiana University, Bloomington, 2008 (available from http://www.proquest.com, publication number AAT 3297940).
* [12] D. M. Markoff, Measurement of the parity nonconserving spin-rotation of transmitted cold neutrons through a liquid helium target, PhD Thesis, University of Washington, Seattle, 1997.
* [13] S. D. Penn et al., Nucl. Instrum. Meth. A 457, 332 (2001).
* [14] C. D. Bass, Rev. Sci. Instrum. 79, 055101 (2008).
* [15] V. F. Sears, Neutron News 3, No. 3, 26-37 (1992).
* [16] P. M. McConnell, in NBSIR 73-316 (National Bureau of Standards, 1973).
* [17] J. D. Brownridge, Cryogenics 29, 70 (1989).
* [18] Parker o-ring handbook - ORD 5700, Parker Hannifin Corp, Cleveland, OH (2007).
* [19] Y. Avishai and P. Grange, J. Phys.G: Nucl. Phys. 10, L263-L270 (1984).
* [20] D. M. Markoff, J. Res. Natl. Inst. Stand. Technol. 110, 209-213 (2005).
* [21] R. Schiavilla, J. Carlson, and M. Paris, Phys. Rev. C 70, 044007 (2004).
* [22] Y. Avishai, Phys. Rev. Lett. 52, 1389 - 1392 (1984).
* [23] R. Schiavilla et al., Phys. Rev. C 78, 014002 (2008).
|
arxiv-papers
| 2009-05-04T16:13:14 |
2024-09-04T02:49:02.314598
|
{
"license": "Public Domain",
"authors": "C. D. Bass, T. D. Bass, B. R. Heckel, C. R. Huffer, D. Luo, D. M.\n Markoff, A. M. Micherdzinska, W. M. Snow, H. E. Swanson, S. C. Walbridge",
"submitter": "Christopher Bass",
"url": "https://arxiv.org/abs/0905.0395"
}
|
0905.0596
|
# Sequential product on standard effect algebra ${\cal E}(H)$††thanks: This
project is supported by the Natural Science Foundation of China (10771191 and
10471124).
Shen Jun1,2, Wu Junde1 E-mail: wjd@zju.edu.cn
###### Abstract
A quantum effect is an operator $A$ on a complex Hilbert space $H$ that
satisfies $0\leq A\leq I$; ${\cal E}(H)$ is the set of all quantum effects on
$H$. In 2001, Professor Gudder and Nagy studied the sequential product $A\circ
B=A^{\frac{1}{2}}BA^{\frac{1}{2}}$ of $A,B\in{\cal E}(H)$. In 2005, Professor
Gudder asked: Is $A\circ B=A^{\frac{1}{2}}BA^{\frac{1}{2}}$ the only
sequential product on ${\cal E}(H)$? Recently, Liu and Wu presented an example
to show that the answer is negative. In this paper, firstly, we characterize
some algebraic properties of the abstract sequential product on ${\cal E}(H)$;
secondly, we present a general method for constructing sequential products on
${\cal E}(H)$; finally, we study some properties of the sequential products
constructed by the method.
1Department of Mathematics, Zhejiang University, Hangzhou 310027, P. R. China
2Department of Mathematics, Anhui Normal University, Wuhu 241003, P. R. China
Key Words. Quantum effect, standard effect algebra, sequential product.
1\. Introduction
The sequential effect algebra is an important model for studying quantum
measurement theory ([1-7]). A sequential effect algebra is an effect algebra
which has a sequential product operation. Firstly, we recall some elementary
notations and results.
An effect algebra is a system $(E,0,1,\oplus)$, where 0 and 1 are distinct
elements of $E$ and $\oplus$ is a partial binary operation on $E$ satisfying
that [8]:
(EA1) If $a\oplus b$ is defined, then $b\oplus a$ is defined and $b\oplus
a=a\oplus b$.
(EA2) If $a\oplus(b\oplus c)$ is defined, then $(a\oplus b)\oplus c$ is
defined and
$(a\oplus b)\oplus c=a\oplus(b\oplus c).$
(EA3) For each $a\in E$, there exists a unique element $b\in E$ such that
$a\oplus b=1$.
(EA4) If $a\oplus 1$ is defined, then $a=0$.
In an effect algebra $(E,0,1,\oplus)$, if $a\oplus b$ is defined, we write
$a\bot b$. For each $a\in(E,0,1,\oplus)$, it follows from (EA3) that there
exists a unique element $b\in E$ such that $a\oplus b=1$; we denote this $b$ by
$a^{\prime}$. Let $a,b\in(E,0,1,\oplus)$. If there exists a $c\in E$ such that
$a\bot c$ and $a\oplus c=b$, then we say that $a\leq b$. It follows from [8]
that $\leq$ is a partial order on $(E,0,1,\oplus)$, that $0\leq a\leq 1$ for each
$a\in E$, and that $a\bot b$ if and only if $a\leq b^{\prime}$.
Let $(E,0,1,\oplus)$ be an effect algebra and $a\in E$. If $a\wedge
a^{\prime}=0$, then $a$ is said to be a sharp element of $E$. We denote by
$E_{S}$ the set of all sharp elements of $E$ ([9-10]).
A sequential effect algebra is an effect algebra $(E,0,1,\oplus)$ with another
binary operation $\circ$ defined on it satisfying [2]:
(SEA1) The map $b\mapsto a\circ b$ is additive for each $a\in E$, that is, if
$b\bot c$, then $a\circ b\bot a\circ c$ and $a\circ(b\oplus c)=a\circ b\oplus
a\circ c$.
(SEA2) $1\circ a=a$ for each $a\in E$.
(SEA3) If $a\circ b=0$, then $a\circ b=b\circ a$.
(SEA4) If $a\circ b=b\circ a$, then $a\circ b^{\prime}=b^{\prime}\circ a$ and
$a\circ(b\circ c)=(a\circ b)\circ c$ for each $c\in E$.
(SEA5) If $c\circ a=a\circ c$ and $c\circ b=b\circ c$, then $c\circ(a\circ
b)=(a\circ b)\circ c$ and $c\circ(a\oplus b)=(a\oplus b)\circ c$ whenever
$a\bot b$.
If $(E,0,1,\oplus,\circ)$ is a sequential effect algebra, then the operation
$\circ$ is said to be a sequential product on $(E,0,1,\oplus)$. If
$a,b\in(E,0,1,\oplus,\circ)$ and $a\circ b=b\circ a$, then $a$ and $b$ are said
to be sequentially independent, which is denoted by $a|b$ ([1-2]).
Let $H$ be a complex Hilbert space, ${\cal B}(H)$ be the set of all bounded
linear operators on $H$, ${\cal P}(H)$ be the set of all projections on $H$,
${\cal E}(H)$ be the set of all self-adjoint operators on $H$ satisfying that
$0\leq A\leq I$. For $A,B\in{\cal E}(H)$, we say that $A\oplus B$ is defined
if $A+B\in{\cal E}(H)$, and in this case we define $A\oplus B=A+B$. It is easy to
see that $({\cal E}(H),0,I,\oplus)$ is an effect algebra; we call it the standard
effect algebra ([8]). Each element $A$ in ${\cal E}(H)$ is said to be a
quantum effect, the set ${\cal E}(H)_{S}$ of all sharp elements of $({\cal
E}(H),0,I,\oplus)$ is just ${\cal P}(H)$ ([2, 9]).
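As a concrete illustration, the partial operation $\oplus$ on ${\cal E}(H)$ can be checked numerically for small matrices. The following Python sketch is ours (not part of the paper) and simply tests whether $A+B\leq I$.

```python
import numpy as np

# Minimal sketch (ours) of the standard effect algebra E(H) on C^2:
# A ⊕ B is defined exactly when A + B is still an effect, i.e. A + B <= I.
def is_effect(a, tol=1e-10):
    w = np.linalg.eigvalsh((a + a.conj().T) / 2)
    return w.min() >= -tol and w.max() <= 1 + tol

def oplus(a, b):
    s = a + b
    return s if is_effect(s) else None     # undefined outside E(H)

A = np.diag([0.4, 0.9]).astype(complex)
B = np.diag([0.5, 0.05]).astype(complex)
I2 = np.eye(2, dtype=complex)

print(oplus(A, B) is not None)             # True: A + B <= I, so A ⊕ B is defined
print(oplus(A, A) is None)                 # True: 2A has an eigenvalue 1.8 > 1
print(np.allclose(oplus(A, I2 - A), I2))   # (EA3): the complement A' = I - A gives A ⊕ A' = I
```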
Let $A\in{\cal B}(H)$, we denote $Ker(A)=\\{x\in H\mid Ax=0\\}$,
$Ran(A)=\\{Ax\mid x\in H\\}$, $P_{Ker(A)}$ denotes the projection onto
$Ker(A)$. Let $x\in H$ be a unit vector, $P_{x}$ denotes the projection onto
the one-dimensional subspace spanned by $x$.
In 2001 and 2002, Professor Gudder, Nagy and Greechie showed that for any two
quantum effects $A$ and $B$, if we define $A\circ
B=A^{\frac{1}{2}}BA^{\frac{1}{2}}$, then the operation $\circ$ is a sequential
product on the standard effect algebra $({\cal E}(H),0,I,\oplus)$, moreover,
they studied some properties of this special sequential product on $({\cal
E}(H),0,I,\oplus)$ ([1,2]).
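A quick numerical check (ours, not taken from [1,2]) that the Gudder-Nagy product indeed maps a pair of effects to another effect:

```python
import numpy as np

# Numerical sketch (ours): the product A∘B = A^{1/2} B A^{1/2} maps effects to effects.
rng = np.random.default_rng(1)

def random_effect(dim):
    """Random operator with 0 <= A <= I on H = C^dim."""
    x = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    h = x @ x.conj().T                              # positive semidefinite
    return h / np.linalg.eigvalsh(h).max()          # rescale so that A <= I

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def seq_product(a, b):
    ra = psd_sqrt(a)
    return ra @ b @ ra

A, B = random_effect(4), random_effect(4)
C = seq_product(A, B)
w = np.linalg.eigvalsh((C + C.conj().T) / 2)
print(w.min() >= -1e-10, w.max() <= 1 + 1e-10)      # 0 <= A∘B <= I
```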
In 2005, Professor Gudder asked ([4]): Is $A\circ
B=A^{\frac{1}{2}}BA^{\frac{1}{2}}$ the only sequential product on standard
effect algebra $({\cal E}(H),0,I,\oplus)$?
In 2009, Liu and Wu constructed a new sequential product on $({\cal
E}(H),0,I,\oplus)$, thus answering Gudder’s problem negatively ([7]). This new
sequential product on $({\cal E}(H),0,I,\oplus)$ motivated us to study the
following topics: (1) Characterize the algebraic properties of the abstract
sequential product on $({\cal E}(H),0,I,\oplus)$. (2) Present a general method
for constructing sequential products on $({\cal E}(H),0,I,\oplus)$. (3)
Characterize some elementary properties of the sequential product constructed
by the method. Our results generalize many conclusions in [1,3,7,14].
2\. Abstract sequential product on $({\cal E}(H),0,I,\oplus)$
In this section, we study some elementary properties of the abstract
sequential product on the standard effect algebra $({\cal E}(H),0,I,\oplus)$.
Lemma 2.1 ([2]). Let $(E,0,1,\oplus,\circ)$ be a sequential effect algebra,
$a\in E$. Then the following conditions are all equivalent:
(1) $a\in E_{S}$;
(2) $a\circ a^{\prime}=0$;
(3) $a\circ a=a$.
Lemma 2.2 ([2]). Let $(E,0,1,\oplus,\circ)$ be a sequential effect algebra,
$a\in E$, $b\in E_{S}$. Then the following conditions are all equivalent:
(1) $a\leq b$;
(2) $a\circ b=b\circ a=a$.
Lemma 2.3 ([2, 8]). Let $(E,0,1,\oplus,\circ)$ be a sequential effect algebra,
$a,b,c\in E$.
(1) If $a\perp b$, $a\perp c$ and $a\oplus b=a\oplus c$, then $b=c$.
(2) $a\circ b\leq a$.
(3) If $a\leq b$, then $c\circ a\leq c\circ b$.
Lemma 2.4 ([7]). Let $\circ$ be a sequential product on the standard effect
algebra $({\cal E}(H),0,I,\oplus)$. Then for any $A,B\in{\cal E}(H)$ and real
number $t$, $0\leq t\leq 1$, we have $(tA)\circ B=A\circ(tB)=t(A\circ B)$.
Lemma 2.5 ([1]). Let $A,B,C\in{\cal B}(H)$ and $A,B,C$ be self-adjoint
operators. If for every unit vector $x\in H$, $\langle Cx,x\rangle=\langle
Ax,x\rangle\langle Bx,x\rangle$, then $A=tI$ or $B=tI$ for some real number
$t$.
Lemma 2.6 ([11]). Let $A\in{\cal B}(H)$ have the following operator matrix
form
$A=\left(\begin{array}[]{cc}A_{11}&A_{12}\\\ A_{21}&A_{22}\\\
\end{array}\right)$
with respect to the space decomposition $H=H_{1}\oplus H_{2}$. Then $A\geq 0$
iff
(1) $A_{ii}\in{\cal B}(H_{i})$ and $A_{ii}\geq 0$, $i=1,2$;
(2) $A_{21}=A_{12}^{*}$;
(3) there exists a linear operator $D$ from $H_{2}$ into $H_{1}$ such that
$||D||\leq 1$ and $A_{12}=A_{11}^{\frac{1}{2}}DA_{22}^{\frac{1}{2}}$.
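Lemma 2.6 can also be verified numerically: for a positive block matrix, one admissible contraction is $D=A_{11}^{-1/2}A_{12}A_{22}^{-1/2}$ (pseudo-inverses in the singular case). The sketch below is an illustration of ours, not part of [11].

```python
import numpy as np

# Numerical illustration (ours) of Lemma 2.6: for a positive block matrix,
# A12 = A11^{1/2} D A22^{1/2} with ||D|| <= 1, D recovered via pseudo-inverses.
rng = np.random.default_rng(3)
x = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = x @ x.conj().T                              # A >= 0 on H = H1 ⊕ H2, dim H1 = dim H2 = 2
A11, A12, A22 = A[:2, :2], A[:2, 2:], A[2:, 2:]

def psd_sqrt(m):
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

D = np.linalg.pinv(psd_sqrt(A11)) @ A12 @ np.linalg.pinv(psd_sqrt(A22))
print(np.linalg.norm(D, 2) <= 1 + 1e-6)                       # condition (3): ||D|| <= 1
print(np.allclose(psd_sqrt(A11) @ D @ psd_sqrt(A22), A12))    # the factorization of A12
print(np.allclose(A[2:, :2], A12.conj().T))                   # condition (2): A21 = A12^*
```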
Theorem 2.1. Let $\circ$ be a sequential product on $({\cal
E}(H),0,I,\oplus)$, $B\in{\cal E}(H)$, $E\in{\cal P}(H)$. Then $E\circ B=EBE$.
Proof. For $A\in{\cal E}(H)$, let $\Phi_{A}:{\cal E}(H)\longrightarrow{\cal
E}(H)$ be defined by $\Phi_{A}(C)=A\circ C$ for each $C\in{\cal E}(H)$. It
follows from Lemma 2.4 and (SEA1) that $\Phi_{A}$ is affine on the convex set
${\cal E}(H)$. Note that ${\cal E}(H)$ generates algebraically the vector
space ${\cal B}(H)$, so $\Phi_{A}$ has a unique linear extension to ${\cal
B}(H)$, which we also denote by $\Phi_{A}$. Then $\Phi_{A}$ is a positive
linear operator on ${\cal B}(H)$ and $\Phi_{A}(I)=A$. Thus $\Phi_{A}$ is
continuous.
Since $E\in{\cal P}(H)={\cal E}(H)_{S}$, it follows from Lemma 2.1 that
$E\circ(I-E)=0$ and so $\Phi_{E}(I-E)=0$. By composing $\Phi_{E}$ with all
states on ${\cal B}(H)$ and using Schwarz’s inequality, we conclude that
$\Phi_{E}(B)=\Phi_{E}(EBE)$. Since $EBE\in{\cal E}(H)$, $E\in{\cal E}(H)_{S}$
and $EBE\leq E$, by Lemma 2.2 we have $E\circ(EBE)=EBE$. Thus $E\circ
B=\Phi_{E}(B)=\Phi_{E}(EBE)=E\circ(EBE)=EBE$.
Theorem 2.2. Let $\circ$ be a sequential product on $({\cal
E}(H),0,I,\oplus)$, $A,B\in{\cal E}(H)$ and $AB=BA$. Then $A\circ B=B\circ
A=AB$.
Proof. We use the notations as in the proof of Theorem 2.1.
Suppose $E\in{\cal P}(H)$ and $E\in\\{A\\}^{\prime}$, i.e., $EA=AE$. Note that
$EAE,(I-E)A(I-E)\in{\cal E}(H)$, $EAE\leq E$ and $(I-E)A(I-E)\leq I-E$, by
Lemma 2.2, it follows that $EAE|E$ and $(I-E)A(I-E)|(I-E)$. Since
$A=EAE+(I-E)A(I-E)$, by (SEA4) and (SEA5) we have $A|E$. By Theorem 2.1 we
conclude that $A\circ E=E\circ A=EAE=AE$. Thus, $\Phi_{A}(E)=AE$. Since
$\Phi_{A}$ is a continuous linear operator and $\\{A\\}^{\prime}$ is a von
Neumann algebra, we conclude that $\Phi_{A}(B)=AB$. That is, $A\circ B=AB$.
Similarly, we have $B\circ A=BA$. Thus $A\circ B=B\circ A=AB$.
Theorem 2.3. Let $\circ$ be a sequential product on $({\cal
E}(H),0,I,\oplus)$, $A,B\in{\cal E}(H)$. Then the following conditions are all
equivalent:
(1) $AB=BA=B$;
(2) $A\circ B\geq B$;
(3) $A\circ B=B$;
(4) $B\circ A=B$;
(5) $B\leq P_{Ker(I-A)}$;
(6) $B\leq A^{n}$ for each positive integer $n$.
Proof. (1)$\Rightarrow$(3) and (1)$\Rightarrow$(4): By Theorem 2.2.
(3)$\Rightarrow$(2) is obvious.
(4)$\Rightarrow$(3): By Theorem 2.2, $B\circ A=B=B\circ I$. Thus, it follows
from Lemma 2.3 that $B\circ(I-A)=0$. By (SEA3), $B|(I-A)$. By (SEA4), $B|A$.
So $A\circ B=B\circ A=B$.
(2)$\Rightarrow$(6): By using Theorem 2.2 and Lemma 2.3 repeatedly, we have:
$B\leq A\circ B\leq A\circ I=A$;
$A\circ B\leq A\circ(A\circ B)\leq A\circ A=A^{2}$;
$A\circ(A\circ B)\leq A\circ\Big{(}A\circ(A\circ B)\Big{)}\leq A\circ
A^{2}=A^{3}$;
$\vdots$
$A\circ\cdots\circ(A\circ B)\leq A\circ\Big{(}A\circ\cdots\circ(A\circ
B)\Big{)}\leq A\circ A^{n-1}=A^{n}$.
The above showed that $B\leq A^{n}$ for each positive integer $n$.
(6)$\Rightarrow$(5): Let $\chi_{\\{1\\}}$ be the characteristic function of
$\\{1\\}$. Since $0\leq A\leq I$, it is easy to see that $\\{A^{n}\\}$
converges to $\chi_{\\{1\\}}(A)=P_{Ker(I-A)}$ in the strong operator topology.
Thus $B\leq P_{Ker(I-A)}$.
(5)$\Rightarrow$(1): Since $0\leq B\leq P_{Ker(I-A)}$, we have
$Ker(P_{Ker(I-A)})\subseteq Ker(B)$. So $Ran(B)\subseteq
Ran(P_{Ker(I-A)})=Ker(I-A)$. Thus $(I-A)B=0$. That is, $AB=B$. Taking adjoint,
we get $AB=BA=B$.
Theorem 2.4. Let $\circ$ be a sequential product on $({\cal
E}(H),0,I,\oplus)$, $A,B\in{\cal E}(H)$. Then the following conditions are all
equivalent:
(1) $C\circ(A\circ B)=(C\circ A)\circ B$ for every $C\in{\cal E}(H)$;
(2) $\langle(A\circ B)x,x\rangle=\langle Ax,x\rangle\langle Bx,x\rangle$ for
every $x\in H$ with $\|x\|=1$;
(3) $A=tI$ or $B=tI$ for some real number $0\leq t\leq 1$.
Proof. By Lemma 2.5, we conclude that (2)$\Rightarrow$(3). By Theorem 2.2 and
Lemma 2.4, (3)$\Rightarrow$(1) is trivial.
(1)$\Rightarrow$(2): If (1) holds, then $P_{x}\circ(A\circ B)=(P_{x}\circ
A)\circ B$ for every $x\in H$ with $\|x\|=1$. By Theorem 2.1,
$P_{x}\circ(A\circ B)=P_{x}(A\circ B)P_{x}=\langle(A\circ B)x,x\rangle P_{x}$.
By Theorem 2.1 and Lemma 2.4, $(P_{x}\circ A)\circ B=(P_{x}AP_{x})\circ
B=(\langle Ax,x\rangle P_{x})\circ B=\langle Ax,x\rangle(P_{x}\circ B)=\langle
Ax,x\rangle P_{x}BP_{x}=\langle Ax,x\rangle\langle Bx,x\rangle P_{x}$. Thus
(2) holds.
Theorem 2.5. Let $\circ$ be a sequential product on $({\cal
E}(H),0,I,\oplus)$, $B\in{\cal E}(H)$, $E\in{\cal P}(H)$. Then the following
conditions are all equivalent:
(1) $E\circ B\leq B$;
(2) $EB=BE$;
(3) $E\circ B=B\circ E$.
Proof. (2)$\Rightarrow$(3): By Theorem 2.2.
(3)$\Rightarrow$(1): By Lemma 2.3.
(1)$\Rightarrow$(2): Since $E\in{\cal P}(H)$, by Theorem 2.1, $E\circ B=EBE$.
Thus, $B-EBE\geq 0$. Note that
$B-EBE=\left(\begin{array}[]{cc}0&EB(I-E)\\\ (I-E)BE&(I-E)B(I-E)\\\
\end{array}\right)$
with respect to the space decomposition $H=Ran(E)\oplus Ker(E)$, so by Lemma
2.6 we have $EB(I-E)=(I-E)BE=0$. Thus $B=EBE+(I-E)B(I-E)$. So $EB=BE$.
Theorem 2.6. Let $\circ$ be a sequential product on $({\cal
E}(H),0,I,\oplus)$, $A,B,C\in{\cal E}(H)$. If $A$ is invertible, then the
following conditions are all equivalent:
(1) $B\leq C$;
(2) $A\circ B\leq A\circ C$.
Proof. (1)$\Rightarrow$(2): By Lemma 2.3.
(2)$\Rightarrow$(1): It is easy to see that $\|A^{-1}\|^{-1}A^{-1}\in{\cal
E}(H)$.
By Lemma 2.3, $(\|A^{-1}\|^{-1}A^{-1})\circ(A\circ
B)\leq(\|A^{-1}\|^{-1}A^{-1})\circ(A\circ C)$.
By Theorem 2.2, $(\|A^{-1}\|^{-1}A^{-1})|A$ and $(\|A^{-1}\|^{-1}A^{-1})\circ
A=\|A^{-1}\|^{-1}I$.
By (SEA4) and Theorem 2.2, we have
$(\|A^{-1}\|^{-1}A^{-1})\circ(A\circ B)=\Big{(}(\|A^{-1}\|^{-1}A^{-1})\circ
A\Big{)}\circ B=(\|A^{-1}\|^{-1}I)\circ B=\|A^{-1}\|^{-1}B$,
$(\|A^{-1}\|^{-1}A^{-1})\circ(A\circ C)=\Big{(}(\|A^{-1}\|^{-1}A^{-1})\circ
A\Big{)}\circ C=(\|A^{-1}\|^{-1}I)\circ C=\|A^{-1}\|^{-1}C$.
So $B\leq C$.
Corollary 2.1. Let $\circ$ be a sequential product on $({\cal
E}(H),0,I,\oplus)$, $A,B,C\in{\cal E}(H)$. If $A$ is invertible, then the
following conditions are all equivalent:
(1) $B=C$;
(2) $A\circ B=A\circ C$.
3\. General method for constructing sequential products on ${\cal E}(H)$
In the sequel, unless otherwise specified, we suppose that $H$ is a finite-dimensional
complex Hilbert space, ${\mathbf{C}}$ is the set of complex numbers, ${\mathbf{R}}$ is
the set of real numbers, and, for each $A\in{\cal E}(H)$, $sp(A)$ is the spectrum of
$A$ and $B(sp(A))$ is the set of all bounded complex Borel functions on
$sp(A)$.
Let $A,B\in{\cal B}(H)$. If there exists a complex constant $\xi$ such that
$|\xi|=1$ and $A=\xi B$, then we denote $A\approx B$.
In [7], Liu and Wu showed that if we define $A\circ
B=A^{\frac{1}{2}}f_{i}(A)Bf_{-i}(A)A^{\frac{1}{2}}$ for $A,B\in{\cal E}(H)$,
where $f_{z}(t)=\exp(z\ln t)$ if $t\in(0,1]$ and $f_{z}(0)=0$, then $\circ$
is a sequential product on $({\cal E}(H),0,I,\oplus)$; this result answered
Gudder’s problem negatively.
Now, we present a general method for constructing sequential products on
${\cal E}(H)$.
For each $A\in{\cal E}(H)$, take an $f_{A}\in B(sp(A))$.
Define $A\diamond B=f_{A}(A)B\overline{f_{A}}(A)$ for $A,B\in{\cal E}(H)$.
We say that the set $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies the sequential product
condition if the following two conditions hold:
(i) For every $A\in{\cal E}(H)$ and $t\in sp(A)$, $|f_{A}(t)|=\sqrt{t}$;
(ii) For any $A,B\in{\cal E}(H)$, if $AB=BA$, then $f_{A}(A)f_{B}(B)\approx
f_{AB}(AB)$.
If $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product condition,
then it is easy to see that
(1) $f_{A}(A)\overline{f_{A}}(A)=\overline{f_{A}}(A)f_{A}(A)=A$,
$(f_{A}(A))^{*}=\overline{f_{A}}(A)$.
(2) If $0\in sp(A)$, then $f_{A}(0)=0$.
(3) If $A=\sum\limits^{n}_{k=1}\lambda_{k}E_{k}$, where
$\\{E_{k}\\}^{n}_{k=1}$ are pairwise orthogonal projections, then
$f_{A}(A)=\sum\limits^{n}_{k=1}f_{A}(\lambda_{k})E_{k}$.
(4) For each $E\in{\cal P}(H)$, $f_{E}(E)=f_{E}(0)(I-E)+f_{E}(1)E=f_{E}(1)E$.
(5) For any $A,B\in{\cal E}(H)$, $A\diamond B\in{\cal E}(H)$.
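For finite-dimensional $H$, the functional calculus $f_{A}(A)$ is just a spectral sum, and properties (1) and (5) are easy to check numerically. The sketch below is ours, not code from the paper; it uses the Liu-Wu-type choice $f(t)=t^{1/2+i}$ on $(0,1]$ with $f(0)=0$, which satisfies condition (i).

```python
import numpy as np

# Sketch (ours): functional calculus f_A(A) for a Hermitian effect A, using the
# Liu-Wu-type choice f(t) = t^(1/2 + i) on (0,1] and f(0) = 0, so |f(t)| = sqrt(t).
def apply_f(a, func):
    w, v = np.linalg.eigh(a)
    return (v * func(np.clip(w, 0.0, 1.0))) @ v.conj().T

def f(t):
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t, dtype=complex)
    pos = t > 0
    out[pos] = np.exp((0.5 + 1.0j) * np.log(t[pos]))
    return out

rng = np.random.default_rng(2)
def random_effect(dim=3):
    x = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    h = x @ x.conj().T
    return h / np.linalg.eigvalsh(h).max()

A, B = random_effect(), random_effect()
fA = apply_f(A, f)
print(np.allclose(fA @ fA.conj().T, A))            # property (1): f_A(A) times its adjoint gives A
D = fA @ B @ fA.conj().T                           # A ◇ B
w = np.linalg.eigvalsh((D + D.conj().T) / 2)
print(w.min() >= -1e-10, w.max() <= 1 + 1e-10)     # property (5): A ◇ B is again an effect
```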
Lemma 3.1 ([12]). Let $H$ be a complex Hilbert space, $A,B\in{\cal B}(H)$,
$A,B,AB$ be three normal operators, and at least one of $A,B$ be a compact
operator. Then $BA$ is also a normal operator.
Lemma 3.2 ([13]). If $M,N,T\in{\cal B}(H)$, $M,N$ are normal operators and
$MT=TN$, then $M^{*}T=TN^{*}$.
Lemma 3.3. Suppose $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfy sequential product
condition and $A,B\in{\cal E}(H)$. If $A\diamond B=B\diamond A$ or $A\diamond
B=\overline{f_{B}}(B)Af_{B}(B)$, then $AB=BA$.
Proof. If $A\diamond B=B\diamond A$, that is,
$f_{A}(A)B\overline{f_{A}}(A)=f_{B}(B)A\overline{f_{B}}(B)$, then
$f_{A}(A)\overline{f_{B}}(B)f_{B}(B)\overline{f_{A}}(A)=f_{B}(B)\overline{f_{A}}(A)f_{A}(A)\overline{f_{B}}(B)$,
so $f_{A}(A)\overline{f_{B}}(B)$ is normal. By Lemma 3.1, we have
$\overline{f_{B}}(B)f_{A}(A)$ is also normal. Note that
$\Big{(}f_{A}(A)\overline{f_{B}}(B)\Big{)}f_{A}(A)=f_{A}(A)\Big{(}\overline{f_{B}}(B)f_{A}(A)\Big{)}$,
by using Lemma 3.2, we have
$\Big{(}f_{A}(A)\overline{f_{B}}(B)\Big{)}^{*}f_{A}(A)=f_{A}(A)\Big{(}\overline{f_{B}}(B)f_{A}(A)\Big{)}^{*}$.
That is, $f_{B}(B)A=Af_{B}(B)$. Taking adjoint, we have
$\overline{f_{B}}(B)A=A\overline{f_{B}}(B)$. Thus,
$AB=A\overline{f_{B}}(B)f_{B}(B)=\overline{f_{B}}(B)Af_{B}(B)=\overline{f_{B}}(B)f_{B}(B)A=BA$.
If $A\diamond B=\overline{f_{B}}(B)Af_{B}(B)$, that is,
$f_{A}(A)B\overline{f_{A}}(A)=\overline{f_{B}}(B)Af_{B}(B)$, the proof is
similar, so we omit it.
Lemma 3.4. Suppose $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfy sequential product
condition and $A,B\in{\cal E}(H)$. If $AB=BA$, then $A\diamond B=B\diamond
A=AB$.
Proof. Since $AB=BA$, by sequential product condition (i) we have $A\diamond
B=f_{A}(A)B\overline{f_{A}}(A)=|f_{A}|^{2}(A)B=AB$. Similarly, $B\diamond
A=f_{B}(B)A\overline{f_{B}}(B)=|f_{B}|^{2}(B)A=AB$. Thus $A\diamond
B=B\diamond A=AB$.
Lemma 3.5. Suppose $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfy sequential product
condition and $A,B\in{\cal E}(H)$. If $AB=BA$, then for every $C\in{\cal
E}(H)$, $A\diamond(B\diamond C)=(A\diamond B)\diamond C$.
Proof. By Lemma 3.4, $A\diamond B=AB$. By sequential product condition (ii),
there exists a complex constant $\xi$ such that $|\xi|=1$ and
$f_{A}(A)f_{B}(B)=\xi f_{AB}(AB)$. Taking adjoint, we have
$\overline{f_{B}}(B)\overline{f_{A}}(A)=\overline{\xi}\,\overline{f_{AB}}(AB)$.
Thus,
$f_{A}(A)f_{B}(B)C\overline{f_{B}}(B)\overline{f_{A}}(A)=f_{AB}(AB)C\overline{f_{AB}}(AB)=f_{A\diamond
B}(A\diamond B)C\overline{f_{A\diamond B}}(A\diamond B)$. That is,
$A\diamond(B\diamond C)=(A\diamond B)\diamond C$.
Lemma 3.6 ([1]). If $y,z\in H$ and $|\langle y,x\rangle|=|\langle z,x\rangle|$
for every $x\in H$, then there exists a $c\in{\mathbf{C}}$, $|c|=1$, such that
$y=cz$.
Lemma 3.7 ([14]). Let $f:H\longrightarrow{\mathbf{C}}$ be a mapping,
$T\in{\cal B}(H)$. If the operator $S:H\longrightarrow H$ defined by
$S(x)=f(x)T(x)$ is linear, then $f(x)=f(y)$ for every $x,y\not\in Ker(T)$.
Lemma 3.8. Let $f:H\longrightarrow{\mathbf{C}}$ be a mapping, $T\in{\cal
B}(H)$. If the operator $S:H\longrightarrow H$ defined by $S(x)=f(x)T(x)$ is
linear, then there exists a constant $\xi\in{\mathbf{C}}$ such that $S(x)=\xi
T(x)$ for every $x\in H$.
Proof. By Lemma 3.7, there exists a constant $\xi\in{\mathbf{C}}$ such that
$S(x)=\xi T(x)$ for every $x\not\in Ker(T)$. Of course, $S(x)=0=\xi T(x)$ for
every $x\in Ker(T)$. So $S(x)=\xi T(x)$ for every $x\in H$.
Our main result in this section is the following.
Theorem 3.1. For each $A\in{\cal E}(H)$, take an $f_{A}\in B(sp(A))$. Define
$A\diamond B=f_{A}(A)B\overline{f_{A}}(A)$ for $A,B\in{\cal E}(H)$. Then
$\diamond$ is a sequential product on $({\cal E}(H),0,I,\oplus)$ iff the set
$\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product condition.
Proof. (1) Firstly, we suppose that $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies the
sequential product condition and show that $({\cal E}(H),0,I,\oplus,\diamond)$
is a sequential effect algebra.
(SEA1) is obvious.
By Lemma 3.4, $I\diamond B=B$ for each $B\in{\cal E}(H)$, so (SEA2) holds.
We verify (SEA3) as follows:
If $A\diamond B=0$, then $f_{A}(A)B\overline{f_{A}}(A)=0$, so
$f_{A}(A)B^{\frac{1}{2}}=0$, thus, we have
$AB=\overline{f_{A}}(A)f_{A}(A)B^{\frac{1}{2}}B^{\frac{1}{2}}=0$. Taking
adjoint, we have $AB=BA$. So $A\diamond B=B\diamond A$.
We verify (SEA4) as follows:
If $A\diamond B=B\diamond A$, then by Lemma 3.3, $AB=BA$. So $A(I-B)=(I-B)A$.
By Lemma 3.4, we have $A\diamond(I-B)=(I-B)\diamond A$. By Lemma 3.5,
$A\diamond(B\diamond C)=(A\diamond B)\diamond C$ for every $C\in{\cal E}(H)$.
We verify (SEA5) as follows:
If $C\diamond A=A\diamond C$ and $C\diamond B=B\diamond C$, then by Lemma 3.3,
$AC=CA$, $BC=CB$. So (SEA5) follows easily by Lemma 3.4.
Thus, we proved that $({\cal E}(H),0,I,\oplus,\diamond)$ is a sequential
effect algebra.
(2) Now we suppose that $\diamond$ is a sequential product on $({\cal
E}(H),0,I,\oplus)$ and show that the set $\\{f_{A}\\}_{A\in{\cal E}(H)}$
satisfies the sequential product condition.
Since $({\cal E}(H),0,I,\oplus,\diamond)$ is a sequential effect algebra, by
Theorem 2.2, for each $A\in{\cal E}(H)$, $A\diamond I=A$, thus
$|f_{A}|^{2}(A)=A$. If $A=\sum\limits^{n}_{k=1}\lambda_{k}E_{k}$, where
$\\{E_{k}\\}^{n}_{k=1}$ are pairwise orthogonal projections,
$\sum\limits^{n}_{k=1}E_{k}=I$, then $sp(A)=\\{\lambda_{k}\\}$,
$|f_{A}|^{2}(A)=\sum\limits^{n}_{k=1}|f_{A}(\lambda_{k})|^{2}E_{k}$. Thus
$|f_{A}(\lambda_{k})|=\sqrt{\lambda_{k}}$ and $\\{f_{A}\\}_{A\in{\cal E}(H)}$
satisfies sequential product condition (i).
To prove $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product
condition (ii), let $A,B\in{\cal E}(H)$ and $AB=BA$. By Theorem 2.2, we have
$A\diamond B=B\diamond A=AB$. Thus by (SEA4), $A\diamond(B\diamond
C)=(A\diamond B)\diamond C$ for every $C\in{\cal E}(H)$.
Let $x\in H$, $\|x\|=1$, $C=P_{x}$. Then for every $y\in H$, we have
$\langle
f_{A}(A)f_{B}(B)P_{x}\overline{f_{B}}(B)\overline{f_{A}}(A)y,y\rangle$
$=\langle\Big{(}A\diamond(B\diamond P_{x})\Big{)}y,y\rangle$
$=\langle\Big{(}(A\diamond B)\diamond P_{x}\Big{)}y,y\rangle$
$=\langle\Big{(}(AB)\diamond P_{x}\Big{)}y,y\rangle$ $=\langle
f_{AB}(AB)P_{x}\overline{f_{AB}}(AB)y,y\rangle\ .$
Since
$\langle
f_{A}(A)f_{B}(B)P_{x}\overline{f_{B}}(B)\overline{f_{A}}(A)y,y\rangle=|\langle\overline{f_{B}}(B)\overline{f_{A}}(A)y,x\rangle|^{2}\
,$ $\langle
f_{AB}(AB)P_{x}\overline{f_{AB}}(AB)y,y\rangle=|\langle\overline{f_{AB}}(AB)y,x\rangle|^{2}\
,$
we have
$|\langle\overline{f_{B}}(B)\overline{f_{A}}(A)y,x\rangle|=|\langle\overline{f_{AB}}(AB)y,x\rangle|$
for every $x,y\in H$.
By Lemma 3.6, there exists a complex function g on $H$ such that $|g(x)|\equiv
1$ and $\overline{f_{B}}(B)\overline{f_{A}}(A)x=g(x)\overline{f_{AB}}(AB)x$
for every $x\in H$. By Lemma 3.8, there exists a constant $\xi\in{\mathbf{C}}$
such that $|\xi|=1$ and
$\overline{f_{B}}(B)\overline{f_{A}}(A)x=\xi\overline{f_{AB}}(AB)x$ for every
$x\in H$. So we conclude that
$\overline{f_{B}}(B)\overline{f_{A}}(A)=\xi\overline{f_{AB}}(AB)$. Taking
adjoint, we have $f_{A}(A)f_{B}(B)=\overline{\xi}f_{AB}(AB)$. Thus
$f_{A}(A)f_{B}(B)\approx f_{AB}(AB)$. This showed that the set
$\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product condition.
Theorem 3.1 presents a general method for constructing sequential products on
${\cal E}(H)$. Now, we give two examples.
Example 3.1. Let $g$ be a bounded complex Borel function on $[0,1]$ such that
$|g(t)|=\sqrt{t}$ for each $t\in[0,1]$ , $g(t_{1}t_{2})=g(t_{1})g(t_{2})$ for
any $t_{1},t_{2}\in[0,1]$ .
For each $A\in{\cal E}(H)$, let $f_{A}=g|_{sp(A)}$. Then it is easy to see
that $\\{f_{A}\\}$ satisfies the sequential product condition. So by Theorem 3.1,
$A\diamond B=f_{A}(A)B\overline{f_{A}}(A)=g(A)B\overline{g}(A)$ defines a
sequential product on the standard effect algebra $({\cal E}(H),0,I,\oplus)$.
It is clear that Example 3.1 generalizes Liu and Wu’s result in [7].
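To see that Example 3.1 really produces a product different from the standard one, take $g(t)=t^{1/2+i}$ and a non-commuting pair of effects; the numerical sketch below (ours, with arbitrarily chosen matrices) shows the two products disagree.

```python
import numpy as np

# Sketch (ours): with g(t) = t^(1/2 + i), the product A◇B = g(A) B conj(g)(A)
# generally differs from the standard A^{1/2} B A^{1/2} once AB != BA.
def apply(a, func):
    w, v = np.linalg.eigh(a)
    return (v * func(np.clip(w, 1e-12, 1.0))) @ v.conj().T

A = np.array([[0.6, 0.2], [0.2, 0.3]], dtype=complex)     # eigenvalues 0.7 and 0.2
B = np.array([[0.5, 0.1j], [-0.1j, 0.7]], dtype=complex)  # eigenvalues in (0, 1)

g = lambda t: np.exp((0.5 + 1j) * np.log(t))
gA = apply(A, g)
diamond = gA @ B @ gA.conj().T                    # the Example 3.1 product
standard = apply(A, np.sqrt) @ B @ apply(A, np.sqrt)

print(np.allclose(A @ B, B @ A))                  # False: A and B do not commute
print(np.allclose(diamond, standard))             # False: the two sequential products differ
```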
Example 3.2. Let $H$ be a 2-dimensional complex Hilbert space,
$\Gamma=\\{\gamma\mid\gamma$ is a decomposition of $I$ into two rank-one
orthogonal projections$\\}$. For each $\gamma\in\Gamma$, we can represent
$\gamma$ by a pair of rank-one orthogonal projections $(E_{1},E_{2})$, if
$A\in{\cal E}(H)$, $A\not\in span\\{I\\}$ and
$A=\sum\limits^{2}_{k=1}\lambda_{k}E_{k}$, then we say that $A$ can be
diagonalized by $\gamma$.
For each $\gamma\in\Gamma$, we take a $\xi(\gamma)\in\mathbf{R}$. If
$A\in{\cal E}(H)$, $A\not\in span\\{I\\}$ and $A$ can be diagonalized by
$\gamma$, let $f_{A}(t)=t^{\frac{1}{2}+\xi(\gamma)i}$ for $t\in sp(A)$.
If $A\in{\cal E}(H)$ and $A=\lambda I$, let $f_{A}(t)=\sqrt{t}$ for $t\in
sp(A)$.
Then the set $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product
condition (see the proof below). So by Theorem 3.1, $A\diamond
B=f_{A}(A)B\overline{f_{A}}(A)$ defines a sequential product on the standard
effect algebra $({\cal E}(H),0,I,\oplus)$.
Proof. Obviously $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product
condition (i).
Now we show that $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product
condition (ii). Let $A,B\in{\cal E}(H)$, $AB=BA$.
(1) If $A=\sum\limits^{2}_{k=1}\lambda_{k}E_{k}$,
$B=\sum\limits^{2}_{k=1}\mu_{k}E_{k}$, $\lambda_{1}\neq\lambda_{2}$,
$\mu_{1}\neq\mu_{2}$, let $\gamma=(E_{1},E_{2})$, we have
$f_{A}(t)=t^{\frac{1}{2}+\xi(\gamma)i}$ for $t\in sp(A)$,
$f_{B}(t)=t^{\frac{1}{2}+\xi(\gamma)i}$ for $t\in sp(B)$. So
$f_{A}(A)=A^{\frac{1}{2}+\xi(\gamma)i}=\sum\limits^{2}_{k=1}\lambda_{k}^{\frac{1}{2}+\xi(\gamma)i}E_{k}$,
$f_{B}(B)=B^{\frac{1}{2}+\xi(\gamma)i}=\sum\limits^{2}_{k=1}\mu_{k}^{\frac{1}{2}+\xi(\gamma)i}E_{k}$.
(1a) If $\lambda_{1}\mu_{1}=\lambda_{2}\mu_{2}$, then
$AB=\lambda_{1}\mu_{1}I$, so $f_{AB}(t)=t^{\frac{1}{2}}$ for $t\in sp(AB)$,
thus we have $f_{AB}(AB)=(AB)^{\frac{1}{2}}=\sqrt{\lambda_{1}\mu_{1}}I$,
$f_{A}(A)f_{B}(B)=\sum\limits^{2}_{k=1}(\lambda_{k}\mu_{k})^{\frac{1}{2}+\xi(\gamma)i}E_{k}=(\lambda_{1}\mu_{1})^{\frac{1}{2}+\xi(\gamma)i}I=(\lambda_{1}\mu_{1})^{\xi(\gamma)i}f_{AB}(AB)\approx
f_{AB}(AB)$.
(1b) If $\lambda_{1}\mu_{1}\neq\lambda_{2}\mu_{2}$, then
$AB=\sum\limits^{2}_{k=1}\lambda_{k}\mu_{k}E_{k}$, so
$f_{AB}(t)=t^{\frac{1}{2}+\xi(\gamma)i}$ for $t\in sp(AB)$,
$f_{AB}(AB)=(AB)^{\frac{1}{2}+\xi(\gamma)i}=\sum\limits^{2}_{k=1}(\lambda_{k}\mu_{k})^{\frac{1}{2}+\xi(\gamma)i}E_{k}$,
thus we have
$f_{A}(A)f_{B}(B)=\sum\limits^{2}_{k=1}(\lambda_{k}\mu_{k})^{\frac{1}{2}+\xi(\gamma)i}E_{k}=f_{AB}(AB)$.
(2) If $A=\lambda I$, $B=\sum\limits^{2}_{k=1}\mu_{k}E_{k}$,
$\mu_{1}\neq\mu_{2}$, let $\gamma=(E_{1},E_{2})$. Then we have
$f_{A}(t)=t^{\frac{1}{2}}$ for $t\in sp(A)$,
$f_{B}(t)=t^{\frac{1}{2}+\xi(\gamma)i}$ for $t\in sp(B)$. So
$f_{A}(A)=A^{\frac{1}{2}}=\sqrt{\lambda}I$,
$f_{B}(B)=B^{\frac{1}{2}+\xi(\gamma)i}=\sum\limits^{2}_{k=1}\mu_{k}^{\frac{1}{2}+\xi(\gamma)i}E_{k}$,
$AB=\sum\limits^{2}_{k=1}\lambda\mu_{k}E_{k}$.
(2a) If $\lambda=0$, then $AB=0$, $f_{AB}(t)=t^{\frac{1}{2}}$ for $t\in
sp(AB)$, so $f_{AB}(AB)=(AB)^{\frac{1}{2}}=0$. Thus
$f_{A}(A)f_{B}(B)=0=f_{AB}(AB)$.
(2b) If $\lambda\neq 0$, then $f_{AB}(t)=t^{\frac{1}{2}+\xi(\gamma)i}$ for
$t\in sp(AB)$. So
$f_{AB}(AB)=(AB)^{\frac{1}{2}+\xi(\gamma)i}=\lambda^{\frac{1}{2}+\xi(\gamma)i}\sum\limits^{2}_{k=1}(\mu_{k})^{\frac{1}{2}+\xi(\gamma)i}E_{k}$.
Thus
$f_{A}(A)f_{B}(B)=\sqrt{\lambda}\sum\limits^{2}_{k=1}(\mu_{k})^{\frac{1}{2}+\xi(\gamma)i}E_{k}\approx
f_{AB}(AB)$.
(3) If $A=\lambda I$, $B=\mu I$, then $f_{A}(t)=t^{\frac{1}{2}}$ for $t\in
sp(A)$, $f_{B}(t)=t^{\frac{1}{2}}$ for $t\in sp(B)$. So
$f_{A}(A)=A^{\frac{1}{2}}=\sqrt{\lambda}I$,
$f_{B}(B)=B^{\frac{1}{2}}=\sqrt{\mu}I$. $AB=\lambda\mu I$,
$f_{AB}(t)=t^{\frac{1}{2}}$ for $t\in sp(AB)$,
$f_{AB}(AB)=(AB)^{\frac{1}{2}}=\sqrt{\lambda\mu}I$. Thus
$f_{A}(A)f_{B}(B)=f_{AB}(AB)$.
It follows from (1)-(3) that the set $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies
sequential product condition (ii).
4\. Properties of the sequential product $\diamond$ on $({\cal
E}(H),0,I,\oplus)$
Now, we study some elementary properties of the sequential product $\diamond$
defined in Section 3.
In this section, unless specified, we follow the notations in Section 3. We
always suppose $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product
condition. So by Theorem 3.1 $\diamond$ is a sequential product on the
standard effect algebra $({\cal E}(H),0,I,\oplus)$.
Lemma 4.1. If $C\in{\cal E}(H)$, $0\leq t\leq 1$, then $f_{tC}(tC)\approx
f_{tI}(t)f_{C}(C)$.
Proof. Since $\\{f_{A}\\}_{A\in{\cal E}(H)}$ satisfies sequential product
condition, $f_{tC}(tC)\approx f_{tI}(tI)f_{C}(C)=f_{tI}(t)f_{C}(C)$.
Lemma 4.2. Let $A\in{\cal E}(H)$, $x\in H$, $\|x\|=1$, $\|f_{A}(A)x\|\neq 0$,
$y=\frac{f_{A}(A)x}{\|f_{A}(A)x\|}$. Then $A\diamond
P_{x}=\|f_{A}(A)x\|^{2}P_{y}$.
Proof. For each $z\in H$, $(A\diamond
P_{x})z=f_{A}(A)P_{x}\overline{f_{A}}(A)z=\langle\overline{f_{A}}(A)z,x\rangle
f_{A}(A)x=\langle z,f_{A}(A)x\rangle f_{A}(A)x=\|f_{A}(A)x\|^{2}P_{y}z$. So
$A\diamond P_{x}=\|f_{A}(A)x\|^{2}P_{y}$.
Lemma 4.3. Let $M\subseteq{\cal B}(H)$ be a von Neumann algebra, $P$ be a
minimal projection in $M$, $A\in M$, $x\in Ran(P)$, $\|x\|=1$. Then
$PAP=\omega_{x}(A)P$, where $\omega_{x}(A)=\langle Ax,x\rangle$.
Proof. Since $P$ is a minimal projection in $M$, by [15, Proposition 6.4.3],
$PAP=\lambda P$ for some complex number $\lambda$. Thus $\langle
PAPx,x\rangle=\langle\lambda Px,x\rangle$. So $\lambda=\omega_{x}(A)$.
Theorem 4.1. Let $A,B\in{\cal E}(H)$. Then the following conditions are all
equivalent:
(1) $AB=BA$;
(2) $A\diamond B=B\diamond A$;
(3) $A\diamond(B\diamond C)=(A\diamond B)\diamond C$ for every $C\in{\cal
E}(H)$.
Proof. (1)$\Rightarrow$(2): By Theorem 2.2.
(2)$\Rightarrow$(1): By Lemma 3.3.
(1)$\Rightarrow$(3): By Lemma 3.5.
(3)$\Rightarrow$(1): Let $x\in H$, $\|x\|=1$, $C=P_{x}$. Then for each $y\in
H$,
$\langle
f_{A}(A)f_{B}(B)P_{x}\overline{f_{B}}(B)\overline{f_{A}}(A)y,y\rangle$
$=\langle\Big{(}A\diamond(B\diamond P_{x})\Big{)}y,y\rangle$
$=\langle\Big{(}(A\diamond B)\diamond P_{x}\Big{)}y,y\rangle$ $=\langle
f_{A\diamond B}(A\diamond B)P_{x}\overline{f_{A\diamond B}}(A\diamond
B)y,y\rangle\ .$
Since
$\langle
f_{A}(A)f_{B}(B)P_{x}\overline{f_{B}}(B)\overline{f_{A}}(A)y,y\rangle=|\langle\overline{f_{B}}(B)\overline{f_{A}}(A)y,x\rangle|^{2}\
,$ $\langle f_{A\diamond B}(A\diamond B)P_{x}\overline{f_{A\diamond
B}}(A\diamond B)y,y\rangle=|\langle\overline{f_{A\diamond B}}(A\diamond
B)y,x\rangle|^{2}\ ,$
we have
$|\langle\overline{f_{B}}(B)\overline{f_{A}}(A)y,x\rangle|=|\langle\overline{f_{A\diamond
B}}(A\diamond B)y,x\rangle|$ for every $x,y\in H$.
By Lemma 3.6, there exists a complex function $g$ on $H$ such that $|g(x)|=1$
and $\overline{f_{B}}(B)\overline{f_{A}}(A)x=g(x)\overline{f_{A\diamond
B}}(A\diamond B)x$ for every $x\in H$.
By Lemma 3.8, there exists a constant $\xi$ such that $|\xi|=1$ and
$\overline{f_{B}}(B)\overline{f_{A}}(A)x=\xi\overline{f_{A\diamond
B}}(A\diamond B)x$ for every $x\in H$.
So we conclude that
$\overline{f_{B}}(B)\overline{f_{A}}(A)=\xi\overline{f_{A\diamond
B}}(A\diamond B)$.
Taking adjoint, we have $f_{A}(A)f_{B}(B)=\overline{\xi}f_{A\diamond
B}(A\diamond B)$. Thus
$\overline{f_{B}}(B)Af_{B}(B)=\overline{f_{B}}(B)\overline{f_{A}}(A)f_{A}(A)f_{B}(B)=\xi\overline{f_{A\diamond
B}}(A\diamond B)\overline{\xi}f_{A\diamond B}(A\diamond B)=A\diamond B$. That
is, $A\diamond B=\overline{f_{B}}(B)Af_{B}(B)$, so by Lemma 3.3, we have
$AB=BA$.
Theorem 4.2. Let $A,B\in{\cal E}(H)$. If $A\diamond B\in{\cal P}(H)$, then
$AB=BA$.
Proof. If $A\diamond B=0$, then by (SEA3) we have $A\diamond B=B\diamond A$,
so by Theorem 4.1 we have $AB=BA$.
If $A\diamond B\neq 0$. Firstly, we let $x\in Ran(A\diamond B)$ and $\|x\|=1$.
Then $f_{A}(A)B\overline{f_{A}}(A)x=x$. So $\langle
B\overline{f_{A}}(A)x,\overline{f_{A}}(A)x\rangle=1$. By the Schwarz inequality,
we conclude that $B\overline{f_{A}}(A)x=\overline{f_{A}}(A)x$. Thus
$Ax=f_{A}(A)\overline{f_{A}}(A)x=f_{A}(A)B\overline{f_{A}}(A)x=x$. So $1\in
sp(A)$ and $B\overline{f_{A}}(A)x=\overline{f_{A}}(A)x=\overline{f_{A}}(1)x$.
Next, we let $x\in Ker(A\diamond B)$ and $\|x\|=1$. Then
$f_{A}(A)B\overline{f_{A}}(A)x=0$. So $\langle
B\overline{f_{A}}(A)x,\overline{f_{A}}(A)x\rangle=0$. We conclude that
$B\overline{f_{A}}(A)x=0$.
Thus, we always have $B\overline{f_{A}}(A)=\overline{f_{A}}(1)(A\diamond B)$.
That is, $f_{A}(1)B\overline{f_{A}}(A)=A\diamond B$.
Taking adjoint, we have
$f_{A}(1)B\overline{f_{A}}(A)=\overline{f_{A}}(1)f_{A}(A)B$.
By Lemma 3.2, we have
$\overline{f_{A}}(1)Bf_{A}(A)=f_{A}(1)\overline{f_{A}}(A)B$. So
$f_{A}(1)\overline{f_{A}}(A)B$ is self-adjoint. By [15, Proposition 3.2.8], we
have
$sp\Big{(}f_{A}(1)\overline{f_{A}}(A)B\Big{)}\backslash\\{0\\}=sp\Big{(}f_{A}(1)B\overline{f_{A}}(A)\Big{)}\backslash\\{0\\}=sp(A\diamond
B)\backslash\\{0\\}\subseteq\mathbf{R^{+}}$.
Thus we conclude that $f_{A}(1)\overline{f_{A}}(A)B\geq 0$.
Since
$\Big{(}f_{A}(1)\overline{f_{A}}(A)B\Big{)}^{2}=\Big{(}\overline{f_{A}}(1)Bf_{A}(A)\Big{)}\Big{(}f_{A}(1)\overline{f_{A}}(A)B\Big{)}=BAB=\Big{(}f_{A}(1)B\overline{f_{A}}(A)\Big{)}\Big{(}\overline{f_{A}}(1)f_{A}(A)B\Big{)}=(A\diamond
B)^{2}$, by the uniqueness of positive square root, we have
$f_{A}(1)\overline{f_{A}}(A)B=A\diamond B$. That is,
$f_{A}(1)\overline{f_{A}}(A)B=\overline{f_{A}}(1)Bf_{A}(A)=f_{A}(1)B\overline{f_{A}}(A)=\overline{f_{A}}(1)f_{A}(A)B=A\diamond
B$. Thus,
$BA=f_{A}(1)B\overline{f_{A}}(A)\overline{f_{A}}(1)f_{A}(A)=f_{A}(1)\overline{f_{A}}(A)B\overline{f_{A}}(1)f_{A}(A)=f_{A}(1)\overline{f_{A}}(A)\overline{f_{A}}(1)f_{A}(A)B=AB$.
Theorem 4.3. Let $A,B\in{\cal E}(H)$. Then the following conditions are all
equivalent:
(1) $A\diamond(C\diamond B)=(A\diamond C)\diamond B$ for every $C\in{\cal
E}(H)$;
(2) $C\diamond(A\diamond B)=(C\diamond A)\diamond B$ for every $C\in{\cal
E}(H)$;
(3) $\langle(A\diamond B)x,x\rangle=\langle Ax,x\rangle\langle Bx,x\rangle$
for every $x\in H$ with $\|x\|=1$;
(4) $A=tI$ or $B=tI$ for some $0\leq t\leq 1$.
Proof. By Theorem 2.4, we conclude that
(2)$\Longleftrightarrow$(3)$\Longleftrightarrow$(4).
(4)$\Rightarrow$(1) follows from Lemma 2.4 and Theorem 2.2 easily.
(1)$\Rightarrow$(4): If (1) holds, then $A\diamond(P_{x}\diamond B)=(A\diamond
P_{x})\diamond B$ for each $x\in H$ with $\|x\|=1$. Without loss of
generality, we suppose $\|f_{A}(A)x\|\neq 0$. Let
$y=\frac{f_{A}(A)x}{\|f_{A}(A)x\|}$.
By Lemma 4.2 and Theorem 2.1,
$A\diamond(P_{x}\diamond B)=f_{A}(A)(P_{x}BP_{x})\overline{f_{A}}(A)$
$=f_{A}(A)(\langle Bx,x\rangle P_{x})\overline{f_{A}}(A)$ $=\langle
Bx,x\rangle(A\diamond P_{x})$ $=\|f_{A}(A)x\|^{2}\langle Bx,x\rangle P_{y}\ .$
By Lemma 4.1 and Lemma 4.2,
$(A\diamond P_{x})\diamond B=(\|f_{A}(A)x\|^{2}P_{y})\diamond B$
$=f_{\|f_{A}(A)x\|^{2}P_{y}}(\|f_{A}(A)x\|^{2}P_{y})B\overline{f_{\|f_{A}(A)x\|^{2}P_{y}}}(\|f_{A}(A)x\|^{2}P_{y})$
$=f_{\|f_{A}(A)x\|^{2}I}(\|f_{A}(A)x\|^{2})f_{P_{y}}(P_{y})B\overline{f_{\|f_{A}(A)x\|^{2}I}}(\|f_{A}(A)x\|^{2})\overline{f_{P_{y}}}(P_{y})$
$=\|f_{A}(A)x\|^{2}P_{y}BP_{y}$ $=\|f_{A}(A)x\|^{2}\langle By,y\rangle P_{y}\
.$
Thus $\langle Bx,x\rangle=\langle By,y\rangle$. So we have
$\langle\overline{f_{A}}(A)Bf_{A}(A)x,x\rangle=\langle Ax,x\rangle\langle
Bx,x\rangle$. By Lemma 2.5, we conclude that (4) holds.
Theorem 4.4. Let $A\in{\cal E}(H)$, $E\in{\cal P}(H)$. Then the following
conditions are all equivalent:
(1) $A\diamond E\leq E$;
(2) $E\overline{f_{A}}(A)(I-E)=0$.
Proof. Since $E\in{\cal P}(H)$ and $\|\overline{f_{A}}(A)\|\leq 1$, we have
$A\diamond E\leq E\Longleftrightarrow\langle
f_{A}(A)E\overline{f_{A}}(A)x,x\rangle\leq\langle Ex,x\rangle\hbox{ for every
$x\in H$}$ $\Longleftrightarrow\|E\overline{f_{A}}(A)x\|\leq\|Ex\|\hbox{ for
every $x\in H$}$ $\Longleftrightarrow\overline{f_{A}}(A)\mid_{Ker(E)}\subseteq
Ker(E)$ $\Longleftrightarrow E\overline{f_{A}}(A)(I-E)=0\ .$
Corollary 4.1 [14]. Let $A\in{\cal E}(H)$, $E\in{\cal P}(H)$. Then the
following conditions are all equivalent:
(1) $A^{\frac{1}{2}}EA^{\frac{1}{2}}\leq E$;
(2) $AE=EA$.
Proof. (2)$\Rightarrow$(1) is trivial.
(1)$\Rightarrow$(2): Let $f_{B}(t)=\sqrt{t}$ for each $B\in{\cal E}(H)$ and
$t\in sp(B)$, then $\\{f_{B}\\}_{B\in{\cal E}(H)}$ satisfies sequential
product condition. For this sequential product, $A\diamond
E=A^{\frac{1}{2}}EA^{\frac{1}{2}}$. So by Theorem 4.4 we have
$EA^{\frac{1}{2}}(I-E)=0$. That is, $EA^{\frac{1}{2}}=EA^{\frac{1}{2}}E$.
Taking adjoint, we have $EA^{\frac{1}{2}}=A^{\frac{1}{2}}E$. Thus $AE=EA$.
Corollary 4.2. Let $M\subseteq{\cal B}(H)$ be a von Neumann algebra, ${\cal
E}(M)=\\{A\in M|0\leq A\leq I\\}$, $P$ or $I-P$ be a minimal projection in
$M$. Then for every $A\in{\cal E}(M)$, the following conditions are all
equivalent:
(1) $A\diamond P\leq P$;
(2) $AP=PA$.
Proof. (2)$\Rightarrow$(1): By Theorem 2.2, $A\diamond P=AP=PAP\leq P$.
(1)$\Rightarrow$(2): If $P$ is a minimal projection in $M$, then by Theorem
4.4 we have $P\overline{f_{A}}(A)(I-P)=0$, that is,
$P\overline{f_{A}}(A)=P\overline{f_{A}}(A)P$.
Let $x\in Ran(P)$ with $\|x\|=1$. Then by Lemma 4.3 we have
$P\overline{f_{A}}(A)P=\omega_{x}(\overline{f_{A}}(A))P$. So
$P\overline{f_{A}}(A)=\omega_{x}(\overline{f_{A}}(A))P$. Taking adjoint, we
have $f_{A}(A)P=\omega_{x}(f_{A}(A))P$. By Lemma 3.2, we have
$Pf_{A}(A)=\overline{\omega_{x}(\overline{f_{A}}(A))}P=\omega_{x}(f_{A}(A))P$.
Thus $Pf_{A}(A)=f_{A}(A)P$. Taking adjoint, we have
$P\overline{f_{A}}(A)=\overline{f_{A}}(A)P$. So,
$PA=Pf_{A}(A)\overline{f_{A}}(A)=f_{A}(A)P\overline{f_{A}}(A)=f_{A}(A)\overline{f_{A}}(A)P=AP$.
If $I-P$ is a minimal projection in $M$. By Theorem 4.4 we have
$P\overline{f_{A}}(A)(I-P)=0$. Taking adjoint, we have $(I-P)f_{A}(A)P=0$.
That is, $(I-P)f_{A}(A)=(I-P)f_{A}(A)(I-P)$. Similar to the proof above, we
conclude that $(I-P)A=A(I-P)$. So $AP=PA$.
References
[1]. Gudder, S, Nagy, G. Sequential quantum measurements. J. Math. Phys.
42(2001), 5212-5222.
[2]. Gudder, S, Greechie, R. Sequential products on effect algebras. Rep.
Math. Phys. 49(2002), 87-111.
[3]. Gheondea, A, Gudder, S. Sequential product of quantum effects. Proc.
Amer. Math. Soc. 132 (2004), 503-512.
[4]. Gudder, S. Open problems for sequential effect algebras. Inter. J.
Theory. Phys. 44 (2005), 2219-2230.
[5]. Gudder, S, Latrémolière, F. Characterization of the sequential product on
quantum effects. J. Math. Phys. 49 (2008), 052106-052112.
[6]. Shen J., Wu J. D.. Not each sequential effect algebra is sharply
dominating. Phys. Lett. A 373 (2009), 1708-1712.
[7]. Liu Weihua, Wu Junde. A uniqueness problem of the sequence product on
operator effect algebra ${\cal E}(H)$. J. Phys. A: Math. Theor. 42(2009),
185206-185215.
[8]. Foulis, D J, Bennett, M K. Effect algebras and unsharp quantum logics.
Found. Phys. 24 (1994), 1331-1352.
[9]. Gudder, S. Sharply dominating effect algebras. Tatra Mt. Math. Publ.
15(1998), 23-30.
[10]. Riecanova, Z, Wu Junde. States on sharply dominating effect algebras.
Science in China Series A: Math. 51(2008), 907-914.
[11]. Smuljan, J L. An operator Hellinger integral (Russian). Mat. Sb. (N.S.)
49(1959), 381-430.
[12]. Kaplansky, I. Products of normal operators. Duke Math. J. 20 (1953),
257-260.
[13]. Rudin, W. Functional analysis. McGraw-Hill, New York (1991).
[14]. Li, Y, Sun, X H, Chen, Z L. Generalized infimum and sequential product
of quantum effects. J. Math. Phys. 48 (2007), 102101.
[15]. Kadison, R, Ringrose, J. Fundamentals of the theory of operator algebras
(I, II). American Mathematical Society, New York (1997).
|
arxiv-papers
| 2009-05-05T12:50:13 |
2024-09-04T02:49:02.329077
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Shen Jun and Wu Junde",
"submitter": "Junde Wu",
"url": "https://arxiv.org/abs/0905.0596"
}
|
0905.0774
|
# The Possible $J^{PC}=0^{--}$ Exotic State
Chun-Kun Jiao ckjiao@163.com Wei Chen boya@pku.edu.cn Hua-Xing Chen
chx@water.pku.edu.cn Shi-Lin Zhu zhusl@pku.edu.cn Department of Physics and
State Key Laboratory of Nuclear Physics and Technology
Peking University, Beijing 100871, China
###### Abstract
In order to explore the possible existence of the exotic $0^{--}$ state, we
have constructed the tetraquark interpolating operators systematically. As a
byproduct, we notice the $0^{+-}$ tetraquark operators without derivatives do
not exist. The special Lorentz structure of the $0^{--}$ currents forbids the
four-quark type of corrections to the spectral density. Now the gluon
condensates are the dominant power corrections. None of the seven
interpolating currents supports a resonant signal. Therefore we conclude that
the exotic $0^{--}$ state does not exist below 2 GeV, which is consistent with
the current experimental observations.
exotic state, QCD sum rule
###### pacs:
12.39.Mk, 11.40.-q, 12.38.Lg
## I Introduction
Most of the experimentally observed hadrons can be interpreted as
$q\bar{q}$/$qqq$ states and accommodated in the quark model Amsler:2008zzb ;
Klempt:2007cp . Up to now, some evidence has accumulated for the exotic
state with $J^{PC}=1^{-+}$ Adams:2006sa ; Abele:1999tf ; Thompson:1997bs .
Such a quantum number is not accessible to a quark-antiquark pair. This state
is sometimes labelled as an exotic hybrid meson with the particle contents
$\bar{q}g_{s}G_{\mu\nu}\gamma^{\nu}q$. Recently we have investigated the
$1^{-+}$ state using the tetraquark currents Chen:2008qw . The extracted mass
and characteristic decay pattern are quite similar to those expected for the
exotic hybrid meson. Such a result is expected. Since the gluon field creates
a pair of $q\bar{q}$ easily, the hybrid operator
$\bar{q}g_{s}G_{\mu\nu}\gamma^{\nu}q$ transforms into a tetraquark
interpolating operator with the same exotic quantum number. In quantum field
theory different operators with the same quantum number mix and tend to couple
to the same physical state.
Using the same tetraquark formalism developed in the study of the low-lying
scalar mesons Chen:2006hy and the exotic $1^{-+}$ mesons Chen:2008qw , we
study the possible $J^{PC}=0^{--}$ states composed of light quarks. For a
neutral quark model state $q\bar{q}$, we know that $J=0$ ensures $L=S$ hence
$C=(-)^{L+S}=+1$. In other words, states with $J^{PC}=0^{--},0^{+-}$ are
strictly forbidden. On the other hand, the gauge invariant scalar and
pseudoscalar operators composed of a pair of the gluon field are
$g^{2}_{s}G_{\mu\nu}^{a}G^{a\mu\nu}$ and
$\epsilon^{\mu\nu\alpha\beta}g^{2}_{s}G_{\mu\nu}^{a}G^{a}_{\alpha\beta}$, both
of which carry the even C-parity.
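The selection rule can be checked by brute force: enumerating neutral $q\bar{q}$ quantum numbers with $P=(-1)^{L+1}$ and $C=(-1)^{L+S}$ never produces $0^{--}$ or $0^{+-}$. A short illustrative script of ours:

```python
# Enumerate neutral q-qbar quantum numbers using P = (-1)^(L+1), C = (-1)^(L+S) (sketch).
allowed = set()
for S in (0, 1):
    for L in range(0, 6):
        for J in range(abs(L - S), L + S + 1):
            allowed.add((J, (-1) ** (L + 1), (-1) ** (L + S)))

print((0, -1, -1) in allowed, (0, +1, -1) in allowed)   # False False: 0^{--} and 0^{+-} are exotic
```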
We construct all the local tetraquark currents with $J^{PC}=0^{--}$. There are
two kinds of constructions: $(qq)(\bar{q}\bar{q})$ and $(\bar{q}q)(\bar{q}q)$.
They can be related to each other by using the Fierz transformation. As usual,
we use the first set Chen:2006hy . Their flavor structure can be
$\mathbf{\bar{3}}_{f}\otimes\mathbf{3}_{f}$,
$\mathbf{6}_{f}\otimes\mathbf{\bar{6}}_{f}$, and
$\mathbf{\bar{3}}_{f}\otimes\mathbf{\bar{6}}_{f}\oplus\mathbf{6}_{f}\otimes\mathbf{3}_{f}$
($(qq)(\bar{q}\bar{q})$). With all these independent currents, we perform the
QCD sum rule analysis. As a byproduct, we notice that there does not exist any
tetraquark interpolating operator without derivative for the $J^{PC}=0^{+-}$
case.
This paper is organized as follows. In Sec. II, we construct the tetraquark
currents with $J^{PC}=0^{--}$ using the diquark ($qq$) and antidiquark
($\bar{q}\bar{q}$) fields. The tetraquark currents constructed with the quark-
antiquark ($\bar{q}q$) pairs are shown in Appendix.A. We present the spectral
density in Sec. III and perform the numerical analysis in Sec. IV. For
comparison, we present the finite energy sum rule analysis in the Appendix.B.
The last section is a short summary.
## II tetraquark interpolating currents
### II.1 The $J^{PC}=0^{--}$ Tetraquark Interpolating Currents
In this section, we construct the tetraquark interpolating currents with
$J^{PC}=0^{--}$ using diquark and antidiquark fields. Such a quantum number
cannot be accessed by a $q\bar{q}$ pair. The currents can also be constructed
from quark-antiquark pairs; as shown in Appendix A, the two constructions are
equivalent, as we have demonstrated several times in our previous studies
Chen:2006hy ; Chen:2008qw .
The pseudoscalar tetraquark currents can be constructed from five independent
diquark fields, which are built from the five independent $\gamma$-matrix structures:
$\displaystyle
S_{abcd}=(q_{1a}^{T}Cq_{2b})(\bar{q}_{3c}\gamma_{5}C\bar{q}^{T}_{4d})\,,$
$\displaystyle
V_{abcd}=(q_{1a}^{T}C\gamma_{5}q_{2b})(\bar{q}_{3c}C\bar{q}^{T}_{4d})\,,$
$\displaystyle
T_{abcd}=(q_{1a}^{T}C\sigma_{\mu\nu}q_{2b})(\bar{q}_{3c}\sigma^{\mu\nu}\gamma_{5}C\bar{q}^{T}_{4d})\,,$
(1) $\displaystyle
A_{abcd}=(q_{1a}^{T}C\gamma_{\mu}q_{2b})(\bar{q}_{3c}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4d})\,,$
$\displaystyle
P_{abcd}=(q_{1a}^{T}C\gamma_{\mu}\gamma_{5}q_{2b})(\bar{q}_{3c}\gamma^{\mu}C\bar{q}^{T}_{4d}).$
where $q_{1-4}$ denote the up, down, and strange quarks, and $a$-$d$ are color
indices.
To compose a color-singlet pseudoscalar tetraquark current, the diquark and
antidiquark must have the same color and spin symmetries. So the color
structure of the tetraquark is either $\mathbf{6}\otimes\mathbf{\bar{6}}$ or
$\mathbf{\bar{3}}\otimes\mathbf{3}$, which we denote by the labels $\mathbf{6}$
and $\mathbf{3}$, respectively. Considering both the color and
Lorentz structures, there are altogether ten products
$\displaystyle\\{S\oplus V\oplus T\oplus A\oplus
P\\}_{Lorentz}\otimes\\{3\oplus 6\\}_{Color}.$ (2)
We list them as follows
$\displaystyle 6_{F}\otimes\bar{6}_{F}~{}(S)$
$\displaystyle\left\\{\begin{array}[]{l}S_{6}=q_{1a}^{T}Cq_{2b}(\bar{q}_{3a}\gamma_{5}C\bar{q}^{T}_{4b}+\bar{q}_{3b}\gamma_{5}C\bar{q}^{T}_{4a})\,,\\\
V_{6}=q_{1a}^{T}C\gamma_{5}q_{2b}(\bar{q}_{3a}C\bar{q}^{T}_{4b}+\bar{q}_{3b}C\bar{q}^{T}_{4a})\,,\\\
T_{3}=q_{1a}^{T}C\sigma_{\mu\nu}q_{2b}(\bar{q}_{3a}\sigma^{\mu\nu}\gamma_{5}C\bar{q}^{T}_{4b}-\bar{q}_{3b}\sigma^{\mu\nu}\gamma_{5}C\bar{q}^{T}_{4a})\,,\end{array}\right.$
(6) $\displaystyle\bar{3}_{F}\otimes 3_{F}~{}(A)$
$\displaystyle\left\\{\begin{array}[]{l}S_{3}=q_{1a}^{T}Cq_{2b}(\bar{q}_{3a}\gamma_{5}C\bar{q}^{T}_{4b}-\bar{q}_{3b}\gamma_{5}C\bar{q}^{T}_{4a})\,,\\\
V_{3}=q_{1a}^{T}C\gamma_{5}q_{2b}(\bar{q}_{3a}C\bar{q}^{T}_{4b}-\bar{q}_{3b}C\bar{q}^{T}_{4a})\,,\\\
T_{6}=q_{1a}^{T}C\sigma_{\mu\nu}q_{2b}(\bar{q}_{3a}\sigma^{\mu\nu}\gamma_{5}C\bar{q}^{T}_{4b}+\bar{q}_{3b}\sigma^{\mu\nu}\gamma_{5}C\bar{q}^{T}_{4a})\,,\end{array}\right.$
(10) $\displaystyle\bar{3}_{F}\otimes\bar{6}_{F}~{}(M)$
$\displaystyle\left\\{\begin{array}[]{l}A_{6}=q_{1a}^{T}C\gamma_{\mu}q_{2b}(\bar{q}_{3a}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4b}+\bar{q}_{3b}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4a})\,,\\\
P_{3}=q_{1a}^{T}C\gamma_{\mu}\gamma_{5}q_{2b}(\bar{q}_{3a}\gamma^{\mu}C\bar{q}^{T}_{4b}-\bar{q}_{3b}\gamma^{\mu}C\bar{q}^{T}_{4a})\,.\end{array}\right.$
(13) $\displaystyle 6_{F}\otimes 3_{F}~{}(M)$
$\displaystyle\left\\{\begin{array}[]{l}P_{6}=q_{1a}^{T}C\gamma_{\mu}\gamma_{5}q_{2b}(\bar{q}_{3a}\gamma^{\mu}C\bar{q}^{T}_{4b}+\bar{q}_{3b}\gamma^{\mu}C\bar{q}^{T}_{4a})\,,\\\
A_{3}=q_{1a}^{T}C\gamma_{\mu}q_{2b}(\bar{q}_{3a}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4b}-\bar{q}_{3b}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4a})\,.\end{array}\right.$
(16)
In the above expressions, the flavor structure is fixed at the same time by
the Pauli principle. The currents $S_{6}$, $V_{6}$, $T_{3}$ belong to the
symmetric flavor representation $\mathbf{6_{F}}\otimes\mathbf{\bar{6}_{F}}(S)$,
where both the diquark and antidiquark fields have a symmetric flavor structure.
The currents $S_{3}$, $V_{3}$, $T_{6}$ belong to the antisymmetric flavor
representation $\mathbf{\bar{3}_{F}}\otimes\mathbf{3_{F}}(A)$, where both the
diquark and antidiquark fields have an antisymmetric flavor structure. The
currents $A_{6}$, $P_{3}$ belong to $\mathbf{\bar{3}_{F}}\otimes\mathbf{\bar{6}_{F}}(M)$ and
$A_{3}$, $P_{6}$ to $\mathbf{6_{F}}\otimes\mathbf{3_{F}}(M)$, where $M$
denotes mixed flavor symmetry. Since the charged isovector states can be
observed more easily in experiments, in this paper we concentrate on the
isovector currents, which were shown in the $SU(3)$ tetraquark weight diagram
in Fig. 1 of Ref. Chen:2008qw . We have:
$\displaystyle qq\bar{q}\bar{q}(S),~{}qs\bar{q}\bar{s}(S)~{}~{}$
$\displaystyle\in$ $\displaystyle~{}~{}6_{F}\otimes\bar{6}_{F}~{}~{}~{}(S)\,,$
$\displaystyle qs\bar{q}\bar{s}(A)~{}~{}$ $\displaystyle\in$
$\displaystyle~{}~{}\bar{3}_{F}\otimes 3_{F}~{}~{}~{}(A)\,,$ (17)
$\displaystyle qq\bar{q}\bar{q}(M),qs\bar{q}\bar{s}(M)~{}~{}$
$\displaystyle\in$
$\displaystyle~{}~{}(\bar{3}_{F}\otimes\bar{6}_{F})\oplus(6_{F}\otimes
3_{F})~{}~{}~{}(M)\,.$
We do not distinguish between up and down quarks and denote both by $q$. We are
only interested in the neutral components, since the charged states do not carry
a definite C-parity. It turns out that the neutral isovector and isoscalar
states lead to the same QCD sum rules, so the following discussion is valid for
both. Applying the charge-conjugation transformation, we get
$\mathbb{C}S_{6}\mathbb{C}^{-1}=V_{6}\,,\mathbb{C}A_{6}\mathbb{C}^{-1}=P_{6}\,,\mathbb{C}A_{3}\mathbb{C}^{-1}=P_{3}\,,\mathbb{C}S_{3}\mathbb{C}^{-1}=V_{3}\,,\mathbb{C}T_{6}\mathbb{C}^{-1}=T_{6}\,,\mathbb{C}T_{3}\mathbb{C}^{-1}=T_{3}\,.$
(18)
$T_{6}$ and $T_{3}$ have even charge-conjugation parity. We conclude that the
currents with $J^{PC}=0^{--}$ are:
$\displaystyle\eta^{(S)}$ $\displaystyle=$ $\displaystyle
S_{6}-V_{6}=q_{1a}^{T}Cq_{2b}(\bar{q}_{3a}\gamma_{5}C\bar{q}^{T}_{4b}+\bar{q}_{3b}\gamma_{5}C\bar{q}^{T}_{4a})-q_{1a}^{T}C\gamma_{5}q_{2b}(\bar{q}_{3a}C\bar{q}^{T}_{4b}+\bar{q}_{3b}C\bar{q}^{T}_{4a})\,,$
$\displaystyle\eta^{(M)}_{1}$ $\displaystyle=$ $\displaystyle
A_{6}-P_{6}=q_{1a}^{T}C\gamma_{\mu}q_{2b}(\bar{q}_{3a}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4b}+\bar{q}_{3b}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4a})-q_{1a}^{T}C\gamma_{\mu}\gamma_{5}q_{2b}(\bar{q}_{3a}\gamma^{\mu}C\bar{q}^{T}_{4b}+\bar{q}_{3b}\gamma^{\mu}C\bar{q}^{T}_{4a})\,,$
$\displaystyle\eta^{(M)}_{2}$ $\displaystyle=$ $\displaystyle
A_{3}-P_{3}=q_{1a}^{T}C\gamma_{\mu}q_{2b}(\bar{q}_{3a}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4b}-\bar{q}_{3b}\gamma^{\mu}\gamma_{5}C\bar{q}^{T}_{4a})-q_{1a}^{T}C\gamma_{\mu}\gamma_{5}q_{2b}(\bar{q}_{3a}\gamma^{\mu}C\bar{q}^{T}_{4b}-\bar{q}_{3b}\gamma^{\mu}C\bar{q}^{T}_{4a})\,,$
(19) $\displaystyle\eta^{(A)}$ $\displaystyle=$ $\displaystyle
S_{3}-V_{3}=q_{1a}^{T}Cq_{2b}(\bar{q}_{3a}\gamma_{5}C\bar{q}^{T}_{4b}-\bar{q}_{3b}\gamma_{5}C\bar{q}^{T}_{4a})-q_{1a}^{T}C\gamma_{5}q_{2b}(\bar{q}_{3a}C\bar{q}^{T}_{4b}-\bar{q}_{3b}C\bar{q}^{T}_{4a})\,.$
Inserting the different quark contents shown in Eq. (17), there are altogether
seven independent currents:
1. 1.
For $6_{F}\otimes\bar{6}_{F}~{}(S)$:
$\displaystyle\eta_{1}$ $\displaystyle=$ $\displaystyle
S_{6}(qq\bar{q}\bar{q})-V_{6}(qq\bar{q}\bar{q})=u^{T}_{a}Cd_{b}(\bar{u}_{a}\gamma_{5}C\bar{d}^{T}_{b}+\bar{u}_{b}\gamma_{5}C\bar{d}^{T}_{a})-u^{T}_{a}C\gamma_{5}d_{b}(\bar{u}_{a}C\bar{d}^{T}_{b}+\bar{u}_{b}C\bar{d}^{T}_{a})\,,$
$\displaystyle\eta_{2}$ $\displaystyle=$ $\displaystyle
S_{6}(qs\bar{q}\bar{s})-V_{6}(qs\bar{q}\bar{s})=u^{T}_{a}Cs_{b}(\bar{u}_{a}\gamma_{5}C\bar{s}^{T}_{b}+\bar{u}_{b}\gamma_{5}C\bar{s}^{T}_{a})-u^{T}_{a}C\gamma_{5}s_{b}(\bar{u}_{a}C\bar{s}^{T}_{b}+\bar{u}_{b}C\bar{s}^{T}_{a})\,,$
(20)
where $\eta_{1}$ belongs to the $\mathbf{27_{F}}$ representation and contains
up and down quarks only while $\eta_{2}$ belongs to the $\mathbf{8_{F}}$
representation and contains one $s\bar{s}$ quark pair.
2. 2.
For $(\bar{3}_{F}\otimes\bar{6}_{F})\oplus(6_{F}\otimes 3_{F})~{}(M)$:
$\displaystyle\eta_{3}$ $\displaystyle=$ $\displaystyle
A_{6}(qq\bar{q}\bar{q})-P_{6}(qq\bar{q}\bar{q})=u^{T}_{a}C\gamma_{\mu}d_{b}(\bar{u}_{a}\gamma^{\mu}\gamma_{5}C\bar{d}^{T}_{b}+\bar{u}_{b}\gamma^{\mu}\gamma_{5}C\bar{d}^{T}_{a})-u^{T}_{a}C\gamma_{\mu}\gamma_{5}d_{b}(\bar{u}_{a}\gamma^{\mu}C\bar{d}^{T}_{b}+\bar{u}_{b}\gamma^{\mu}C\bar{d}^{T}_{a})\,,$
$\displaystyle\eta_{4}$ $\displaystyle=$ $\displaystyle
A_{6}(qs\bar{q}\bar{s})-P_{6}(qs\bar{q}\bar{s})=u^{T}_{a}C\gamma_{\mu}s_{b}(\bar{u}_{a}\gamma^{\mu}\gamma_{5}C\bar{s}^{T}_{b}+\bar{u}_{b}\gamma^{\mu}\gamma_{5}C\bar{s}^{T}_{a})-u^{T}_{a}C\gamma_{\mu}\gamma_{5}s_{b}(\bar{u}_{a}\gamma^{\mu}C\bar{s}^{T}_{b}+\bar{u}_{b}\gamma^{\mu}C\bar{s}^{T}_{a})\,,$
$\displaystyle\eta_{5}$ $\displaystyle=$ $\displaystyle
A_{3}(qq\bar{q}\bar{q})-P_{3}(qq\bar{q}\bar{q})=u^{T}_{a}C\gamma_{\mu}d_{b}(\bar{u}_{a}\gamma^{\mu}C\bar{d}^{T}_{b}-\bar{u}_{b}\gamma^{\mu}C\bar{d}^{T}_{a})-u^{T}_{a}C\gamma_{\mu}\gamma_{5}d_{b}(\bar{u}_{a}\gamma^{\mu}C\bar{d}^{T}_{b}-\bar{u}_{b}\gamma^{\mu}C\bar{d}^{T}_{a})\,,$
(21) $\displaystyle\eta_{6}$ $\displaystyle=$ $\displaystyle
A_{3}(qs\bar{q}\bar{s})-P_{3}(qs\bar{q}\bar{s})=u^{T}_{a}C\gamma_{\mu}s_{b}(\bar{u}_{a}\gamma^{\mu}C\bar{s}^{T}_{b}-\bar{u}_{b}\gamma^{\mu}C\bar{s}^{T}_{a})-u^{T}_{a}C\gamma_{\mu}\gamma_{5}s_{b}(\bar{u}_{a}\gamma^{\mu}C\bar{s}^{T}_{b}-\bar{u}_{b}\gamma^{\mu}C\bar{s}^{T}_{a})\,,$
where $\eta_{3}$ and $\eta_{5}$ belong to the $\mathbf{\bar{10}_{F}}$
representation and contain only u, d quarks while $\eta_{4}$ and $\eta_{6}$
belong to the $\mathbf{8_{F}}$ representation and contain one $s\bar{s}$ quark
pair.
3. 3.
For $\bar{3}_{F}\otimes 3_{F}~{}(A)$:
$\displaystyle\eta_{7}=S_{3}(qs\bar{q}\bar{s})-V_{3}(qs\bar{q}\bar{s})=u^{T}_{a}Cs_{b}(\bar{u}_{a}\gamma_{5}C\bar{s}^{T}_{b}-\bar{u}_{b}\gamma_{5}C\bar{s}^{T}_{a})-u^{T}_{a}C\gamma_{5}s_{b}(\bar{u}_{a}C\bar{s}^{T}_{b}-\bar{u}_{b}C\bar{s}^{T}_{a})\,.$
(22)
where $\eta_{7}$ belongs to the $\mathbf{8_{F}}$ representation and contains one $s\bar{s}$
quark pair.
It is understood that the expressions of $\eta_{2,4,6,7}$ contain a second piece
$\pm[u\leftrightarrow d]$.
### II.2 The $J^{PC}=0^{+-}$ Tetraquark Currents
Now we move on to the $J^{PC}=0^{+-}$ case. There are also ten independent
scalar tetraquark currents without derivatives:
$\displaystyle
S^{\prime}_{6}=q_{1a}^{T}Cq_{2b}(\bar{q}_{3a}C\bar{q}^{T}_{4b}+\bar{q}_{3b}C\bar{q}^{T}_{4a})\,,$
$\displaystyle
V^{\prime}_{6}=q_{1a}^{T}\gamma_{\mu}Cq_{2b}(\bar{q}_{3a}C\gamma^{\mu}\bar{q}^{T}_{4b}+\bar{q}_{3b}C\gamma^{\mu}\bar{q}^{T}_{4a})\,,$
$\displaystyle
T^{\prime}_{6}=q_{1a}^{T}\sigma_{\mu\nu}Cq_{2b}(\bar{q}_{3a}C\sigma^{\mu\nu}\bar{q}^{T}_{4b}+\bar{q}_{3b}C\sigma^{\mu\nu}\bar{q}^{T}_{4a})\,,$
$\displaystyle
A^{\prime}_{6}=q_{1a}^{T}\gamma_{\mu}\gamma_{5}Cq_{2b}(\bar{q}_{3a}C\gamma^{\mu}\gamma_{5}\bar{q}^{T}_{4b}+\bar{q}_{3b}C\gamma^{\mu}\gamma_{5}\bar{q}^{T}_{4a})\,,$
$\displaystyle
P^{\prime}_{6}=q_{1a}^{T}\gamma_{5}Cq_{2b}(\bar{q}_{3a}C\gamma_{5}\bar{q}^{T}_{4b}+\bar{q}_{3b}C\gamma_{5}\bar{q}^{T}_{4a})\,,$
$\displaystyle
S^{\prime}_{3}=q_{1a}^{T}Cq_{2b}(\bar{q}_{3a}C\bar{q}^{T}_{4b}-\bar{q}_{3b}C\bar{q}^{T}_{4a})\,,$
(23) $\displaystyle
V^{\prime}_{3}=q_{1a}^{T}\gamma_{\mu}Cq_{2b}(\bar{q}_{3a}C\gamma^{\mu}\bar{q}^{T}_{4b}-\bar{q}_{3b}C\gamma^{\mu}\bar{q}^{T}_{4a})\,,$
$\displaystyle
T^{\prime}_{3}=q_{1a}^{T}\sigma_{\mu\nu}Cq_{2b}(\bar{q}_{3a}C\sigma^{\mu\nu}\bar{q}^{T}_{4b}-\bar{q}_{3b}C\sigma^{\mu\nu}\bar{q}^{T}_{4a})\,,$
$\displaystyle
A^{\prime}_{3}=q_{1a}^{T}\gamma_{\mu}\gamma_{5}Cq_{2b}(\bar{q}_{3a}C\gamma^{\mu}\gamma_{5}\bar{q}^{T}_{4b}-\bar{q}_{3b}C\gamma^{\mu}\gamma_{5}\bar{q}^{T}_{4a})\,,$
$\displaystyle
P^{\prime}_{3}=q_{1a}^{T}\gamma_{5}Cq_{2b}(\bar{q}_{3a}C\gamma_{5}\bar{q}^{T}_{4b}-\bar{q}_{3b}C\gamma_{5}\bar{q}^{T}_{4a})\,.$
The flavor structure is again fixed by the Pauli principle. To obtain a definite
charge-conjugation parity, we fix the quark contents to be $q_{1}=q_{3}$ and
$q_{2}=q_{4}$ (or $q_{1}=q_{4}$ and $q_{2}=q_{3}$). After performing the
charge-conjugation transformation, we find that they all have even charge-
conjugation parity, for example:
$\mathbb{C}S^{\prime}_{6}\mathbb{C}^{-1}=+S^{\prime}_{6}\,.$ (24)
Therefore, $J^{PC}=0^{+-}$ tetraquark interpolating currents without
derivatives do not exist.
## III The Spectral Density
We consider the two-point correlation function in the framework of the QCD sum
rules Shifman:1978bx ; Reinders:1984sr :
$\Pi(q^{2})\equiv\int d^{4}xe^{iqx}\langle
0|T\eta(x)\eta^{\dagger}(0)|0\rangle,$ (25)
where $\eta$ is an interpolating current. We calculate $\Pi(q^{2})$ at the
quark-gluon level using the quark propagator:
$\displaystyle iS^{ab}_{q}$ $\displaystyle\equiv$ $\displaystyle\langle
0|T[q^{a}(x)\bar{q}^{b}(0)]|0\rangle$ (26) $\displaystyle=$
$\displaystyle\frac{i\delta^{ab}}{2\pi^{2}x^{4}}\hat{x}+\frac{i}{32\pi^{2}}\frac{\lambda^{n}_{ab}}{2}gG_{\mu\nu}^{n}\frac{1}{x^{2}}(\sigma^{\mu\nu}\hat{x}+\hat{x}\sigma^{\mu\nu})-\frac{\delta^{ab}}{12}\langle\bar{q}q\rangle$
$\displaystyle+\frac{\delta^{ab}x^{2}}{192}\langle g_{s}\bar{q}\sigma
Gq\rangle-\frac{m_{q}\delta^{ab}}{4\pi^{2}x^{2}}+\frac{i\delta^{ab}m_{q}\langle\bar{q}q\rangle}{48}\hat{x}+\frac{i\delta^{ab}m_{q}^{2}\langle\bar{q}q\rangle}{8\pi^{2}x^{2}}\hat{x},$
where $\hat{x}\equiv\gamma_{\mu}x^{\mu}$. Through the dispersion relation,
$\Pi(q^{2})$ is related to the observables at the hadron level,
$\Pi(q^{2})=\int_{0}^{\infty}\frac{\rho(s)}{s-q^{2}-i\varepsilon}ds,$ (27)
where
$\rho(s)\equiv\sum_{n}\delta(s-M^{2}_{n})\langle 0|\eta|n\rangle\langle
n|\eta^{{\dagger}}|0\rangle=f^{2}_{X}\delta(s-M^{2}_{X})+\mbox{continuum}\;.$
(28)
Here, the usual pole plus continuum parametrization of the hadronic spectral
density is adopted. Up to dimension 12, the spectral density $\rho_{i}(s)$ at
the quark and gluon level reads:
$\displaystyle\rho_{1}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{15360\pi^{6}}-\frac{m_{q}^{2}}{192\pi^{6}}s^{3}-(\frac{\langle
g_{s}^{2}GG\rangle}{3072\pi^{6}}-\frac{m_{q}\langle\bar{q}q\rangle}{24\pi^{4}})s^{2}+[\frac{\langle
g_{s}^{2}GG\rangle m_{q}^{2}}{256\pi^{6}}+\frac{\langle
g_{s}^{3}fGGG\rangle}{768\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)]s$ (29)
$\displaystyle-(\frac{3m_{q}^{2}\langle\bar{q}q\rangle^{2}}{2\pi^{2}}+\frac{\langle
g_{s}^{2}GG\rangle
m_{q}\langle\bar{q}q\rangle}{192\pi^{4}})+(\frac{16}{9}m_{q}\langle\bar{q}q\rangle^{3}-\frac{1}{\pi^{2}}m_{q}^{2}\langle\bar{q}q\rangle\langle
g_{s}\bar{q}\sigma Gq\rangle)\delta(s)\,,$ $\displaystyle\rho_{2}(s)$
$\displaystyle=$
$\displaystyle\frac{s^{4}}{15360\pi^{6}}-\frac{m_{s}^{2}}{384\pi^{6}}s^{3}+(\frac{m_{s}^{4}}{64\pi^{6}}+\frac{m_{s}\langle\bar{s}s\rangle}{48\pi^{4}}-\frac{\langle
g_{s}^{2}GG\rangle}{3072\pi^{6}})s^{2}$ (30) $\displaystyle+[\frac{\langle
g_{s}^{3}fGGG\rangle}{768\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)-(\frac{m_{s}^{3}\langle\bar{s}s\rangle}{8\pi^{4}}-\frac{m_{s}^{2}\langle
g_{s}^{2}GG\rangle}{512\pi^{6}})]s$
$\displaystyle+(\frac{m_{s}^{2}\langle\bar{s}s\rangle^{2}}{12\pi^{2}}-\frac{m_{s}^{2}\langle\bar{q}q\rangle^{2}}{3\pi^{2}}-\frac{m_{s}\langle\bar{s}s\rangle\langle
g_{s}^{2}GG\rangle}{384\pi^{4}})-(\frac{m_{s}^{2}\langle\bar{u}u\rangle\langle
g_{s}\bar{q}\sigma
Gq\rangle}{6\pi^{2}}-\frac{8}{9}m_{s}\langle\bar{s}s\rangle\langle\bar{q}q\rangle^{2})\delta(s)\,,$
$\displaystyle\rho_{3}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{3840\pi^{6}}-\frac{m_{q}^{2}}{48\pi^{6}}s^{3}+(\frac{5\langle
g_{s}^{2}GG\rangle}{1536\pi^{6}}+\frac{m_{q}\langle\bar{q}q\rangle}{6\pi^{4}})s^{2}+[\frac{\langle
g_{s}^{3}fGGG\rangle}{192\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)-\frac{5\langle
g_{s}^{2}GG\rangle m_{q}^{2}}{128\pi^{6}}]s$ (31)
$\displaystyle-(\frac{6m_{q}^{2}\langle\bar{q}q\rangle^{2}}{\pi^{2}}-\frac{5\langle
g_{s}^{2}GG\rangle
m_{q}\langle\bar{q}q\rangle}{96\pi^{4}})+(\frac{64}{9}m_{q}\langle\bar{q}q\rangle^{3}-\frac{4}{\pi^{2}}m_{q}^{2}\langle\bar{q}q\rangle\langle
g_{s}\bar{q}\sigma Gq\rangle)\delta(s)\,,$ $\displaystyle\rho_{4}(s)$
$\displaystyle=$
$\displaystyle\frac{s^{4}}{3840\pi^{6}}-\frac{m_{s}^{2}}{96\pi^{6}}s^{3}+(\frac{m_{s}^{4}}{16\pi^{6}}+\frac{m_{s}\langle\bar{s}s\rangle}{12\pi^{4}}+\frac{5\langle
g_{s}^{2}GG\rangle}{1536\pi^{6}})s^{2}$ (32) $\displaystyle+[\frac{\langle
g_{s}^{3}fGGG\rangle}{192\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)-(\frac{m_{s}^{3}\langle\bar{s}s\rangle}{2\pi^{4}}+\frac{5m_{s}^{2}\langle
g_{s}^{2}GG\rangle}{256\pi^{6}})]s$
$\displaystyle+(\frac{m_{s}^{2}\langle\bar{s}s\rangle^{2}}{3\pi^{2}}-\frac{4m_{s}^{2}\langle\bar{q}q\rangle^{2}}{3\pi^{2}}+\frac{5m_{s}\langle\bar{s}s\rangle\langle
g_{s}^{2}GG\rangle}{192\pi^{4}})-(\frac{2m_{s}^{2}\langle\bar{u}u\rangle\langle
g_{s}\bar{q}\sigma
Gq\rangle}{3\pi^{2}}-\frac{32}{9}m_{s}\langle\bar{s}s\rangle\langle\bar{q}q\rangle^{2})\delta(s)\,,$
$\displaystyle\rho_{5}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{7680\pi^{6}}-\frac{m_{q}^{2}}{96\pi^{6}}s^{3}+(\frac{\langle
g_{s}^{2}GG\rangle}{1536\pi^{6}}+\frac{m_{q}\langle\bar{q}q\rangle}{12\pi^{4}})s^{2}+[\frac{\langle
g_{s}^{3}fGGG\rangle}{384\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)-\frac{\langle
g_{s}^{2}GG\rangle m_{q}^{2}}{128\pi^{6}}]s$ (33)
$\displaystyle-(\frac{3m_{q}^{2}\langle\bar{q}q\rangle^{2}}{\pi^{2}}-\frac{\langle
g_{s}^{2}GG\rangle
m_{q}\langle\bar{q}q\rangle}{96\pi^{4}})+(\frac{32}{9}m_{q}\langle\bar{q}q\rangle^{3}-\frac{2}{\pi^{2}}m_{q}^{2}\langle\bar{q}q\rangle\langle
g_{s}\bar{q}\sigma Gq\rangle)\delta(s)\,,$ $\displaystyle\rho_{6}(s)$
$\displaystyle=$
$\displaystyle\frac{s^{4}}{7680\pi^{6}}-\frac{m_{s}^{2}}{192\pi^{6}}s^{3}+(\frac{m_{s}^{4}}{32\pi^{6}}+\frac{m_{s}\langle\bar{s}s\rangle}{24\pi^{4}}+\frac{\langle
g_{s}^{2}GG\rangle}{1536\pi^{6}})s^{2}$ (34) $\displaystyle+[\frac{\langle
g_{s}^{3}fGGG\rangle}{384\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)-(\frac{m_{s}^{3}\langle\bar{s}s\rangle}{4\pi^{4}}+\frac{m_{s}^{2}\langle
g_{s}^{2}GG\rangle}{256\pi^{6}})]s$
$\displaystyle+(\frac{m_{s}^{2}\langle\bar{s}s\rangle^{2}}{6\pi^{2}}-\frac{2m_{s}^{2}\langle\bar{q}q\rangle^{2}}{3\pi^{2}}+\frac{m_{s}\langle\bar{s}s\rangle\langle
g_{s}^{2}GG\rangle}{192\pi^{4}})-(\frac{m_{s}^{2}\langle\bar{u}u\rangle\langle
g_{s}\bar{q}\sigma
Gq\rangle}{3\pi^{2}}-\frac{16}{9}m_{s}\langle\bar{s}s\rangle\langle\bar{q}q\rangle^{2})\delta(s)\,,$
$\displaystyle\rho_{7}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{30720\pi^{6}}-\frac{m_{s}^{2}}{768\pi^{6}}s^{3}+(\frac{m_{s}^{4}}{128\pi^{6}}+\frac{m_{s}\langle\bar{s}s\rangle}{96\pi^{4}}+\frac{\langle
g_{s}^{2}GG\rangle}{3072\pi^{6}})s^{2}$ (35) $\displaystyle+[\frac{\langle
g_{s}^{3}fGGG\rangle}{1536\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)-(\frac{m_{s}^{3}\langle\bar{s}s\rangle}{16\pi^{4}}+\frac{m_{s}^{2}\langle
g_{s}^{2}GG\rangle}{512\pi^{6}})]s$
$\displaystyle+(\frac{m_{s}^{2}\langle\bar{s}s\rangle^{2}}{24\pi^{2}}-\frac{m_{s}^{2}\langle\bar{q}q\rangle^{2}}{6\pi^{2}}+\frac{m_{s}\langle\bar{s}s\rangle\langle
g_{s}^{2}GG\rangle}{384\pi^{4}})-(\frac{m_{s}^{2}\langle\bar{u}u\rangle\langle
g_{s}\bar{q}\sigma
Gq\rangle}{12\pi^{2}}-\frac{4}{9}m_{s}\langle\bar{s}s\rangle\langle\bar{q}q\rangle^{2})\delta(s)\,.$
It is interesting to note several important features of the above spectral
densities:
* •
First, the special Lorentz structure of the $J^{PC}=0^{--}$ interpolating
currents forbids the appearance of the four-quark condensates
$\langle\bar{q}q\rangle^{2}$, $\langle\bar{q}q\rangle$ $\langle
g_{s}\bar{q}\sigma Gq\rangle$ and $\langle g_{s}\bar{q}\sigma Gq\rangle^{2}$,
which usually play an important role in multiquark sum rules. The
Feynman diagrams for the dimension-10 condensate $\langle g_{s}\bar{q}\sigma
Gq\rangle^{2}$ are shown in Fig. 1.
Figure 1: Feynman diagrams for the quark-gluon mixed condensate.
* •
The dominant non-perturbative correction arises from the gluon condensate,
which is destructive for $\rho_{1-2}(s)$ and constructive for $\rho_{3-7}(s)$.
There are also corrections from the tri-gluon condensate $\langle
g_{s}^{3}f^{abc}G^{a}G^{b}G^{c}\rangle$, as shown in Fig. 2; in the above
expressions we use the short-hand notation $\langle g_{s}^{3}fGGG\rangle$ for
this condensate. There are three types of Feynman diagrams. The first class
vanishes because of the product of the color matrices. The second class is
proportional to $m_{q}$ and can be omitted in the chiral limit. Only the third
class leads to a non-vanishing tri-gluon correction. In fact, the gluon
condensates become the only power corrections in the chiral limit.
Figure 2: Feynman diagrams for the tri-gluon condensate.
* •
The second term in each $\rho_{i}(s)$ is destructive, which renders the
spectral density negative when $s$ is small. This $-m_{q}^{2}s^{3}$ piece is
an artefact of expanding the quark propagator ${i\over{\hat{p}}-m_{q}}$
perturbatively in powers of the quark mass $m_{q}$. Without such an
expansion, the perturbative contribution to the spectral density is always
positive-definite. Such a destructive term can sometimes produce an
artificial plateau and stability window in the sum rule analysis, which must
be discarded.
* •
Although the tree-level four-quark condensate vanishes, one may wonder whether
the four-quark condensate $g_{s}^{2}\langle\bar{q}q\rangle^{2}$ plays a role,
since the latter is very important in the $q\bar{q}$ meson sum rules
Shifman:1978bx ; Reinders:1984sr . Two types of Feynman diagrams could produce
such a correction. The first class is very similar to that in the $q\bar{q}$
meson case, where a gluon propagator is attached between two quark condensates,
as shown in Fig. 3. It is easy to check that these diagrams vanish due to the
special Lorentz structure of the correlation function.
Figure 3: One set of Feynman diagrams for the four-quark condensate.
One of the second class of diagrams is shown in Fig. 4. In this case, we use
the mesonic-type interpolating currents of Appendix A to simplify the
derivation. Wick-contracting the correlation function,
$\displaystyle\bar{\psi}_{3}(x)\Gamma^{\prime}_{1}\psi_{4}(x)\bar{\psi}_{1}(x)\Gamma_{1}\psi_{2}(x)\bar{\psi}_{1}(z_{1})gt^{a}\gamma^{\mu}\psi_{1}(z_{1})A_{\mu}^{a}(z_{1})\bar{\psi}_{2}(z_{2})gt^{b}\gamma^{\nu}\psi_{2}(z_{2})A_{\nu}^{b}(z_{2})\bar{\psi}_{2}(y)\Gamma_{2}\psi_{1}(y)\bar{\psi}_{4}(y)\Gamma^{\prime}_{2}\psi_{3}(y)$
we get
$\displaystyle
Tr[-\Gamma^{\prime}_{1}S_{Q}(x-y)\Gamma^{\prime}_{2}S_{Q}(y-x)]Tr[-S_{Q}(x-z_{2})\gamma^{\nu}S_{Q}(z_{2}-y)\Gamma_{2}S_{Q}(y-z_{1})\gamma^{\mu}S_{Q}(z_{1}-x)\Gamma_{1}\times
g_{\mu\nu}\times S_{G}(z_{2}-z_{1})].$
where $S_{Q}$ is the quark propagator and $S_{G}$ is the gluon propagator.
The pair {$\Gamma_{1}$, $\Gamma_{2}$} can be either {$I$, $\gamma_{5}$} or
{$\gamma_{\alpha}$, $\gamma_{5}\gamma_{\alpha}$}, and the condensate arises from
$S_{Q}(y-z_{1})\propto\langle\bar{q}q\rangle$. In that case three
$\gamma$-matrices, or three $\gamma$-matrices plus $\gamma_{5}$, are left in the
latter trace. Therefore this piece also vanishes.
Figure 4: Feynman diagrams for the four-quark condensate.
## IV Numerical Analysis
In the chiral limit ($m_{s}=m_{q}=0$) the spectral density reads
$\displaystyle\rho_{1-2}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{15360\pi^{6}}-\frac{\langle
g_{s}^{2}GG\rangle}{3072\pi^{6}}s^{2}+\frac{\langle
g_{s}^{3}fGGG\rangle}{768\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)s,$
$\displaystyle\rho_{3-4}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{3840\pi^{6}}-\frac{5\langle
g_{s}^{2}GG\rangle}{1536\pi^{6}}s^{2}+\frac{\langle
g_{s}^{3}fGGG\rangle}{192\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)s,$
$\displaystyle\rho_{5-6}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{7680\pi^{6}}-\frac{\langle
g_{s}^{2}GG\rangle}{1536\pi^{6}}s^{2}+\frac{\langle
g_{s}^{3}fGGG\rangle}{384\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)s,$
$\displaystyle\rho_{7}(s)$ $\displaystyle=$
$\displaystyle\frac{s^{4}}{30720\pi^{6}}-\frac{\langle
g_{s}^{2}GG\rangle}{3072\pi^{6}}s^{2}+\frac{\langle
g_{s}^{3}fGGG\rangle}{1536\pi^{6}}(3\ln(\frac{s}{\tilde{\mu}^{2}})-5)s$ (36)
where $\tilde{\mu}=1$ GeV. Requiring that the pole contribution be larger than
$40\%$ yields the upper bound $M^{2}_{\mbox{max}}$ of the Borel parameter
$M_{B}^{2}$. The convergence of the operator product expansion leads to the
lower bound $M^{2}_{\mbox{min}}$ of the Borel parameter: in the present case,
we require that the two-gluon condensate correction be less than one third of
the perturbative term and the tri-gluon condensate correction less than one
third of the two-gluon condensate correction. The working region of $M_{B}^{2}$ in
the sum rule analysis is [$M^{2}_{\mbox{min}}$, $M^{2}_{\mbox{max}}$], which
depends on the continuum threshold $s_{0}$.
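For completeness, here is a sketch of the Borel-transformed sum rule that underlies this procedure (the standard SVZ master formula, written in the notation of Eq. (28) as an aid to the reader rather than quoted from the text above):
$\displaystyle f_{X}^{2}\,e^{-M_{X}^{2}/M_{B}^{2}}=\int_{0}^{s_{0}}\rho(s)\,e^{-s/M_{B}^{2}}ds\,,\qquad M_{X}^{2}(M_{B}^{2},s_{0})=\frac{\int_{0}^{s_{0}}s\,\rho(s)\,e^{-s/M_{B}^{2}}ds}{\int_{0}^{s_{0}}\rho(s)\,e^{-s/M_{B}^{2}}ds}\,,$
and the pole contribution appearing in the $40\%$ criterion is the ratio $\int_{0}^{s_{0}}\rho(s)e^{-s/M_{B}^{2}}ds\,/\,\int_{0}^{\infty}\rho(s)e^{-s/M_{B}^{2}}ds$.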
In order to study the sensitivity of the sum rule to the condensate values, we
adopt two sets of gluon condensate values in our numerical analysis. One
set is from Ioffe’s recent review gluon1 : $\langle
g_{s}^{2}GG\rangle=(0.20\pm 0.16)~{}\mbox{GeV}^{4}$, $\langle
g_{s}^{3}fGGG\rangle=0.12~{}\mbox{GeV}^{6}$. We also use the original SVZ
values Shifman:1978bx : $\langle g_{s}^{2}GG\rangle=(0.48\pm
0.14)~{}\mbox{GeV}^{4}$, $\langle
g_{s}^{3}fGGG\rangle=0.045~{}\mbox{GeV}^{6}$. The working regions of the sum
rules with these two sets of gluon condensates and $s_{0}=7$ GeV$^{2}$ are
listed in Table 1.
$\diagdown$ | [$M^{2}_{\mbox{min}}$, $M^{2}_{\mbox{max}}$](SVZ) | [$M^{2}_{\mbox{min}}$, $M^{2}_{\mbox{max}}$] (Ioffe)
---|---|---
$\rho_{1-2}$ | $0.77\sim 1.50$ | $0.90\sim 1.68$
$\rho_{3-4}$ | $1.22\sim 1.90$ | $1.40\sim 1.65$
$\rho_{5-6}$ | $1.05\sim 1.77$ | $1.55\sim 1.74$
$\rho_{7}$ | $1.10\sim 1.85$ | $1.50\sim 1.75$
Table 1: The working region of $M_{B}^{2}$ (in GeV$^{2}$) with Ioffe’s and SVZ’s gluon
condensates and $s_{0}=7$ GeV$^{2}$.
The working region of the sum rule is very narrow even with $s_{0}=7$ GeV$^{2}$.
The variation of $M_{X}$ with $M_{B}^{2}$ and $s_{0}$ is shown in Figs. 5-8
for the interpolating currents $\eta_{1-2}$, $\eta_{3-4}$, $\eta_{5-6}$, and
$\eta_{7}$, respectively, using Ioffe’s gluon condensate values. The
corresponding variation with SVZ’s gluon condensate values is presented in
Figs. 9-12.
For a genuine hadron state, one expects the mass extracted from the sum
rule analysis to be stable under reasonable variations of the Borel parameter
and the continuum threshold. In other words, there should exist dual
stability in $M_{B}^{2}$ and $s_{0}$ within the working region of $M_{B}^{2}$.
From all these figures we notice that none of the mass curves satisfies the
stability requirement: these interpolating currents do not support a low-lying
resonant signal.
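To illustrate how such mass curves can be generated, the following is a minimal numerical sketch of our own (not the authors' code; it assumes NumPy/SciPy are available), using the chiral-limit density $\rho_{1-2}(s)$ of Eq. (36) with Ioffe's central condensate values and the Borel ratio sketched above:

```python
import numpy as np
from scipy.integrate import quad

# Chiral-limit spectral density rho_{1-2}(s) of Eq. (36), Ioffe's central values,
# all quantities in GeV units; mu_tilde = 1 GeV.
GG, GGG = 0.20, 0.12        # <g_s^2 GG> in GeV^4, <g_s^3 fGGG> in GeV^6
PI6 = np.pi**6

def rho12(s):
    return (s**4 / (15360 * PI6) - GG * s**2 / (3072 * PI6)
            + GGG * s * (3 * np.log(s) - 5) / (768 * PI6))

def mass(MB2, s0):
    """M_X from the Borel sum rule: ratio of the first two Borel-weighted moments."""
    num, _ = quad(lambda s: s * rho12(s) * np.exp(-s / MB2), 1e-8, s0)
    den, _ = quad(lambda s: rho12(s) * np.exp(-s / MB2), 1e-8, s0)
    return np.sqrt(num / den)

s0 = 7.0                                 # GeV^2
for MB2 in (0.90, 1.20, 1.50, 1.68):     # roughly the Ioffe window of Table 1
    print(f"M_B^2 = {MB2:.2f} GeV^2  ->  M_X = {mass(MB2, s0):.2f} GeV")
```

A scan of this kind reproduces the qualitative behaviour of the mass curves in Figs. 5-12; the drift of $M_{X}$ across the Borel window, rather than a plateau, is the instability referred to above.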
Figure 5: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the currents $\eta_{1-2}$ using Ioffe’s gluon condensate values.
Figure 6: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the currents $\eta_{3-4}$ using Ioffe’s gluon condensate values.
Figure 7: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the currents $\eta_{5-6}$ using Ioffe’s gluon condensate values.
Figure 8: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the current $\eta_{7}$ using Ioffe’s gluon condensate values.
Figure 9: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the currents $\eta_{1-2}$ using SVZ’s gluon condensate values.
Figure 10: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the currents $\eta_{3-4}$ using SVZ’s gluon condensate values.
Figure 11: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the currents $\eta_{5-6}$ using SVZ’s gluon condensate values.
Figure 12: The variation of $M_{X}$ with $M^{2}_{B}$ (Left) and $s_{0}$ (Right) for the current $\eta_{7}$ using SVZ’s gluon condensate values.
## V Conclusion
An exotic state with $J^{PC}=0^{--}$ can be composed neither of a pair of gluons
nor of a $q\bar{q}$ pair. In order to explore the possible existence of such
interesting states, we first constructed the tetraquark-type interpolating
operators systematically. As a byproduct, we noticed that $J^{PC}=0^{+-}$
tetraquark operators without derivatives do not exist. We then performed the
operator product expansion and extracted the spectral densities. The gluon
condensate becomes the dominant power correction. Usually the four-quark
condensates $\langle\bar{q}q\rangle^{2}$, $\langle\bar{q}q\rangle$ $\langle
g_{s}\bar{q}\sigma Gq\rangle$ and $\langle g_{s}\bar{q}\sigma Gq\rangle^{2}$
provide the dominant nonperturbative corrections in multiquark sum rules.
However, these terms vanish here because of the special Lorentz structure imposed
by the exotic $0^{--}$ quantum numbers.
Within the framework of the SVZ sum rule, the absence of the
$\langle\bar{q}q\rangle^{2}$, $\langle\bar{q}q\rangle$ $\langle
g_{s}\bar{q}\sigma Gq\rangle$ and $\langle g_{s}\bar{q}\sigma Gq\rangle^{2}$
terms destabilizes the sum rule: there is no stability in either
$M_{B}^{2}$ or $s_{0}$ within the working region of $M_{B}^{2}$. Therefore we
conclude that none of these independent interpolating currents supports a
resonant signal below 2 GeV, which is consistent with the current experimental
measurements Amsler:2008zzb .
## Acknowledgments
The authors are grateful to Professor Wei-Zhen Deng for useful discussions.
This project was supported by the National Natural Science Foundation of China
under Grants 10625521, 10721063 and Ministry of Science and Technology of
China (2009CB825200).
## References
* (1) C. Amsler et al. [Particle Data Group], Phys. Lett. B 667, 1 (2008).
* (2) E. Klempt and A. Zaitsev, Phys. Rept. 454, 1 (2007).
* (3) G. S. Adams et al. [E852 Collaboration], Phys. Lett. B 657, 27 (2007).
* (4) A. Abele et al. [Crystal Barrel Collaboration], Phys. Lett. B 446, 349 (1999); A. Abele et al. [Crystal Barrel Collaboration], Phys. Lett. B 423, 175 (1998).
* (5) D. R. Thompson et al. [E852 Collaboration], Phys. Rev. Lett. 79, 1630 (1997).
* (6) H. X. Chen, A. Hosaka and S. L. Zhu, Phys. Rev. D 78, 054017 (2008); D78, 117502 (2008).
* (7) H. X. Chen, A. Hosaka and S. L. Zhu, Phys. Rev. D 74, 054001 (2006); D76, 094025 (2007); Phys. Lett. B650, 369 (2007); H. X. Chen, X. Liu, A. Hosaka and S. L. Zhu, Phys. Rev. D78, 034012 (2008).
* (8) M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B 147, 385 (1979).
* (9) L. J. Reinders, H. Rubinstein and S. Yazaki, Phys. Rept. 127, 1 (1985).
* (10) B. L. Ioffe, Prog. Part. Nucl. Phys. 56, 232 (2006).
## Appendix A Interpolating Currents in $(\bar{q}q)(\bar{q}q)$ Basis
For $6_{F}\otimes\bar{6}_{F}~{}(S)$:
$\displaystyle\eta^{(S)(1)}_{m}$ $\displaystyle=$
$\displaystyle(\bar{q}_{1a}\gamma_{\mu}q_{1a})(\bar{q}_{2b}\gamma^{\mu}\gamma_{5}q_{2b})+(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{1a})(\bar{q}_{2b}\gamma^{\mu}q_{2b})+(\bar{q}_{1a}\gamma_{\mu}q_{2a})(\bar{q}_{2b}\gamma^{\mu}\gamma_{5}q_{1b})+(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{2a})(\bar{q}_{2b}\gamma^{\mu}q_{1b}),$
$\displaystyle\eta^{(S)(8)}_{m}$ $\displaystyle=$
$\displaystyle\lambda_{ab}\lambda_{cd}\\{(\bar{q}_{1a}\gamma_{\mu}q_{1b})(\bar{q}_{2c}\gamma^{\mu}\gamma_{5}q_{2d})+(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{1b})(\bar{q}_{2c}\gamma^{\mu}q_{2d})+(\bar{q}_{1a}\gamma_{\mu}q_{2b})(\bar{q}_{2c}\gamma^{\mu}\gamma_{5}q_{1d})+(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{2b})(\bar{q}_{2c}\gamma^{\mu}q_{1d})\\},$
For $(\bar{3}_{F}\otimes\bar{6}_{F})\oplus(6_{F}\otimes 3_{F})~{}(M)$:
$\displaystyle\ \eta^{(M)(1)}_{1m}$ $\displaystyle=$
$\displaystyle(\bar{q}_{1a}q_{1a})(\bar{q}_{2b}\gamma_{5}q_{2b})-(\bar{q}_{1a}\gamma_{5}q_{1a})(\bar{q}_{2b}q_{2b}),$
$\displaystyle\ \eta^{(M)(8)}_{1m}$ $\displaystyle=$
$\displaystyle\lambda_{ab}\lambda_{cd}\\{(\bar{q}_{1a}q_{1b})(\bar{q}_{2c}\gamma_{5}q_{2d})-(\bar{q}_{1a}\gamma_{5}q_{1b})(\bar{q}_{2c}q_{2d})\\},$
$\displaystyle\ \eta^{(M)(1)}_{2m}$ $\displaystyle=$
$\displaystyle(\bar{q}_{1a}\gamma_{\mu}q_{1a})(\bar{q}_{2b}\gamma^{\mu}\gamma_{5}q_{2b})-(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{1a})(\bar{q}_{2b}\gamma^{\mu}q_{2b}),$
$\displaystyle\ \eta^{(M)(8)}_{2m}$ $\displaystyle=$
$\displaystyle\lambda_{ab}\lambda_{cd}\\{(\bar{q}_{1a}\gamma_{\mu}q_{1b})(\bar{q}_{2c}\gamma^{\mu}\gamma_{5}q_{2d})-(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{1b})(\bar{q}_{2c}\gamma^{\mu}q_{2d})\\},$
For $\bar{3}_{F}\otimes 3_{F}~{}(A)$:
$\displaystyle\eta^{(A)(1)}_{m}$ $\displaystyle=$
$\displaystyle(\bar{q}_{1a}\gamma_{\mu}q_{1a})(\bar{q}_{2b}\gamma^{\mu}\gamma_{5}q_{2b})+(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{1a})(\bar{q}_{2b}\gamma^{\mu}q_{2b})-(\bar{q}_{1a}\gamma_{\mu}q_{2a})(\bar{q}_{2b}\gamma^{\mu}\gamma_{5}q_{1b})-(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{2a})(\bar{q}_{2b}\gamma^{\mu}q_{1b}),$
$\displaystyle\eta^{(A)(8)}_{m}$ $\displaystyle=$
$\displaystyle\lambda_{ab}\lambda_{cd}\\{(\bar{q}_{1a}\gamma_{\mu}q_{1b})(\bar{q}_{2c}\gamma^{\mu}\gamma_{5}q_{2d})+(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{1b})(\bar{q}_{2c}\gamma^{\mu}q_{2d})-(\bar{q}_{1a}\gamma_{\mu}q_{2b})(\bar{q}_{2c}\gamma^{\mu}\gamma_{5}q_{1d})-(\bar{q}_{1a}\gamma_{\mu}\gamma_{5}q_{2b})(\bar{q}_{2c}\gamma^{\mu}q_{1d})\\},$
where the superscripts $(1)$ and $(8)$ denote the color-singlet and color-octet
structures, respectively. We thus obtain eight mesonic currents. We then use
the following identities for the interchange of color indices:
$\displaystyle(q_{1a}q_{2b}\bar{q}_{3a}\bar{q}_{4b})$ $\displaystyle=$
$\displaystyle\frac{1}{3}(q_{1a}q_{2b}\bar{q}_{3b}\bar{q}_{4a})+\frac{1}{2}\lambda_{ab}\lambda_{cd}(q_{1a}q_{2c}\bar{q}_{3d}\bar{q}_{4b}),$
$\displaystyle\lambda_{ab}\lambda_{cd}(q_{1a}q_{2c}\bar{q}_{3b}\bar{q}_{4d})$
$\displaystyle=$
$\displaystyle\frac{16}{9}(q_{1a}q_{2b}\bar{q}_{3b}\bar{q}_{4a})-\frac{1}{3}\lambda_{ab}\lambda_{cd}(q_{1a}q_{2c}\bar{q}_{3d}\bar{q}_{4b}),$
(37)
Next, we perform the Fierz rearrangement of the Lorentz indices with the
formula
$\displaystyle(\bar{a}b)(\bar{b}a)=\frac{1}{4}(\bar{a}a)(\bar{b}b)+\frac{1}{4}(\bar{a}\gamma_{5}a)(\bar{b}\gamma_{5}b)+\frac{1}{4}(\bar{a}\gamma_{\mu}a)(\bar{b}\gamma^{\mu}b)-\frac{1}{4}(\bar{a}\gamma_{5}\gamma_{\mu}a)(\bar{b}\gamma_{5}\gamma^{\mu}b)+\frac{1}{8}(\bar{a}\sigma_{\mu\nu}a)(\bar{b}\sigma^{\mu\nu}b),$
(38)
For example, we have
$\displaystyle(q_{1a}^{T}Cq_{2b})(\bar{q}_{3a}\gamma_{5}C\bar{q}^{T}_{4b})$
$\displaystyle=$
$\displaystyle-\frac{1}{4}(q_{1a}^{T}C\gamma_{5}C\bar{q}^{T}_{4b})(\bar{q}_{3a}q_{2b})-\frac{1}{4}(q_{1a}^{T}C\gamma_{\mu}\gamma_{5}C\bar{q}^{T}_{4b})(\bar{q}_{3a}\gamma^{\mu}q_{2b})$
$\displaystyle-\frac{1}{8}(q_{1a}^{T}C\sigma_{\mu\nu}\gamma_{5}C\bar{q}^{T}_{4b})(\bar{q}_{3a}\sigma^{\mu\nu}q_{2b})+\frac{1}{4}(q_{1a}^{T}C\gamma_{\mu}\gamma_{5}\gamma_{5}C\bar{q}^{T}_{4b})(\bar{q}_{3a}\gamma^{\mu}\gamma_{5}q_{2b})$
$\displaystyle-\frac{1}{4}(q_{1a}^{T}C\gamma_{5}\gamma_{5}C\bar{q}^{T}_{4b})(\bar{q}_{3a}\gamma_{5}q_{2b})$
$\displaystyle=$
$\displaystyle-\frac{1}{4}(\bar{q}_{4b}\gamma_{5}q_{1a})(\bar{q}_{3a}q_{2b})-\frac{1}{4}(\bar{q}_{4b}\gamma_{\mu}\gamma_{5}q_{1a})(\bar{q}_{3a}\gamma^{\mu}q_{2b})$
$\displaystyle+\frac{1}{8}(\bar{q}_{4b}\sigma_{\mu\nu}\gamma_{5}q_{1a})(\bar{q}_{3a}\sigma^{\mu\nu}q_{2b})-\frac{1}{4}(\bar{q}_{4b}\gamma_{\mu}q_{1a})(\bar{q}_{3a}\gamma^{\mu}\gamma_{5}q_{2b})$
$\displaystyle-\frac{1}{4}(\bar{q}_{4b}q_{1a})(\bar{q}_{3a}\gamma_{5}q_{2b}),$
Only four of these eight mesonic currents are independent; the color-octet
currents can be expressed in terms of the color-singlet ones:
$\displaystyle\eta^{(S)(8)}_{m}$ $\displaystyle=$
$\displaystyle\frac{4}{3}\eta^{(S)(1)}_{m},$ $\displaystyle\eta^{(M)(8)}_{1m}$
$\displaystyle=$
$\displaystyle-\frac{2}{3}\eta^{(M)(1)}_{1m}-\eta^{(M)(1)}_{2m},$
$\displaystyle\eta^{(M)(8)}_{2m}$ $\displaystyle=$
$\displaystyle-4\eta^{(M)(1)}_{1m}-\frac{2}{3}\eta^{(M)(1)}_{2m},$
$\displaystyle\eta^{(A)(8)}_{m}$ $\displaystyle=$
$\displaystyle-\frac{8}{3}\eta^{(A)(1)}_{m},$
We can also establish the relations between the diquark-antidiquark currents and
the mesonic currents using the Fierz transformation. For instance, one can
verify the relations
$\displaystyle\eta^{(S)(1)}_{m}$ $\displaystyle=$
$\displaystyle-2\eta^{S}_{d},$ $\displaystyle\eta^{(M)(1)}_{1m}$
$\displaystyle=$
$\displaystyle\frac{1}{4}\eta^{M}_{1d}+\frac{1}{4}\eta^{M}_{2d},$
$\displaystyle\eta^{(M)(1)}_{2m}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\eta^{M}_{1d}+\frac{1}{2}\eta^{M}_{2d},$
$\displaystyle\eta^{(A)(1)}_{m}$ $\displaystyle=$
$\displaystyle-2\eta^{A}_{d}.$
## Appendix B Finite energy sum rule
The finite energy sum rule is sometimes also employed in the numerical
analysis. One first defines the $n$th moment of the spectral density
$\displaystyle W(n,s_{0})=\int^{s_{0}}_{0}\rho(s)s^{n}ds.$ (40)
With the quark-hadron duality, we have
$\displaystyle W(n,s_{0})|_{Hadron}=W(n,s_{0})|_{OPE}.$ (41)
The mass of the ground state can be obtained as
$\displaystyle M_{X}^{2}(n,s_{0})=\frac{W(n+1,s_{0})}{W(n,s_{0})}.$ (42)
We have plotted the variation of $M_{X}$ with $s_{0}$ for all seven
interpolating currents in Fig. 13; the left and right diagrams correspond to
Ioffe’s and SVZ’s gluon condensate values, respectively. There seems to exist
a minimum of $M_{X}$ for each current. However, a reasonable sum rule
requires that the operator product expansion converge well. In other
words, we require that the two-gluon power correction be less than one third
of the perturbative term and the tri-gluon power correction less than one
third of the two-gluon power correction in $W(0,s_{0})$, which leads to the
following working window of this finite energy sum rule (values of $s_{0}$ in GeV$^{2}$):
$\diagdown$ | $s_{0}$ (SVZ) | $s_{0}$ (Ioffe)
---|---|---
$\rho_{1-2}$ | $4.0$ | $7.0$
$\rho_{3-4}$ | $4.2$ | $5.7$
$\rho_{5-6}$ | $4.0$ | $7.0$
$\rho_{7}$ | $4.9$ | $6.0$
Clearly, for each current the minimum of the mass curve lies outside the
working region in both figures and is therefore not a genuine resonant signal.
Starting from 4.0 GeV$^{2}$, each mass curve grows monotonically with $s_{0}$.
Thus none of the interpolating currents yields a resonant signal.
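The moments in Eq. (40) are straightforward to evaluate numerically. Below is a minimal sketch of our own (not the authors' code; NumPy/SciPy assumed) that traces out an $M_{X}(0,s_{0})$ curve like those in Fig. 13, using the chiral-limit density $\rho_{1-2}(s)$ of Eq. (36) with the SVZ condensate values:

```python
import numpy as np
from scipy.integrate import quad

GG, GGG = 0.48, 0.045       # SVZ values: <g_s^2 GG> (GeV^4), <g_s^3 fGGG> (GeV^6)
PI6 = np.pi**6

def rho12(s):               # chiral-limit rho_{1-2}(s) of Eq. (36), mu_tilde = 1 GeV
    return (s**4 / (15360 * PI6) - GG * s**2 / (3072 * PI6)
            + GGG * s * (3 * np.log(s) - 5) / (768 * PI6))

def W(n, s0):               # n-th moment, Eq. (40)
    val, _ = quad(lambda s: rho12(s) * s**n, 1e-8, s0)
    return val

for s0 in (3.0, 4.0, 5.0, 6.0, 7.0):   # GeV^2
    MX = np.sqrt(W(1, s0) / W(0, s0))  # Eq. (42) with n = 0
    print(f"s0 = {s0:.1f} GeV^2  ->  M_X(0, s0) = {MX:.2f} GeV")
```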
Figure 13: The variation of $M_{X}$ with $s_{0}$ and $n=0$ from the finite
energy sum rule. The left and right diagrams correspond to Ioffe’s and SVZ’s
gluon condensate values, respectively.
0905.0794
# Constructions of Almost Optimal Resilient Boolean Functions on Large Even Number of Variables
(Published in IEEE Transactions on Information Theory, vol. 55, no. 12, 2009. doi: 10.1109/TIT.2009.2032736)
WeiGuo ZHANG and GuoZhen XIAO
ISN Lab, Xidian University, Xi’an 710071, P.R.China e-mail: w.g.zhang@qq.com
###### Abstract
In this paper, a technique for constructing nonlinear resilient Boolean
functions is described. By using several sets of disjoint spectra functions on
a small number of variables, an almost optimal resilient function on a large
even number of variables can be constructed. It is shown that given any $m$,
one can construct infinitely many $n$-variable ($n$ even), $m$-resilient
functions with nonlinearity $>2^{n-1}-2^{n/2}$. A large class of highly
nonlinear resilient functions which were previously unknown is obtained. A
method to optimize the degree of the constructed functions is then proposed. Last,
an improved version of the main construction is given.
Keywords: Stream cipher, Boolean function, algebraic degree, disjoint spectra
functions, nonlinearity, resiliency.
## 1 Introduction
Boolean functions are used as nonlinear combiners or nonlinear filters in
certain models of stream cipher systems. In the design of cryptographic
Boolean functions, there is a need to consider a multiple number of criteria
simultaneously. The widely accepted criteria are balancedness, high
nonlinearity, high algebraic degree, and correlation immunity of high order
(for balanced functions, correlation immunity is referred to as resiliency).
By an $(n,m,d,N_{f})$ function we mean an $n$-variable, $m$-resilient (order
of resiliency $m$) Boolean function $f$ with algebraic degree $d$ and
nonlinearity $N_{f}$.
Unfortunately, all the criteria above cannot be maximized together. For $n$
even, the most notable example is perhaps bent functions [19]. Achieving
optimal nonlinearity $2^{n-1}-2^{{n/2}-1}$, bent functions permit to resist
linear attacks in the best possible way. But they are improper for
cryptographic use because they are neither balanced nor correlation-immune and
their algebraic degrees are not more than $n/2$. When concerning the order of
resiliency, Siegenthaler [28] and Xiao [31] proved that $d\leq n-m-1$ for
$m$-reslient Boolean functions. Such a function, reaching this bound, is
called degree-optimized.
For the reasons above, it is more important to construct those degree-
optimized resilient Boolean functions which have almost optimal (large but not
optimal) nonlinearity, say between $2^{n-1}-2^{n/2}$ and
$2^{n-1}-2^{{n/2}-1}$, when $n$ is even. This is also what we do in this
paper.
We now give a summary of earlier results that are related to our work.
1) To obtain nonlinear resilient functions, a modification of the Maiorana-
McFarland (M-M) construction of bent functions (cf. [5]) by concatenating the
small affine functions was first employed by Camion et al [1] and later
studied in [27], [3], [21]. The nonlinearity of $n$-variable M-M resilient
functions cannot exceed $2^{n-1}-2^{\lfloor n/2\rfloor}$. The M-M technique in
general does not generate degree-optimized functions and the M-M functions are
potentially cryptographically weak [2],[9].
2) An interesting extension of the M-M technique has been made by Carlet [2],
where the concatenation of affine functions is replaced by concatenation of
quadratic functions. In general, these constructed functions can not be
degree-optimized, and the other parameters such as nonlinearity and resiliency
are not better than those of the M-M functions.
3) Pasalic [17] presented a revised version of the M-M technique to obtain
degree-optimized resilient functions. The modification is simple and smart but
the nonlinearity value of the constructed functions is at most
$2^{n-1}-2^{\lfloor n/2\rfloor}$.
4) Sarkar and Maitra [21] indicated that for each order of resiliency $m$, it
is possible to find an even positive integer $n$ to construct an
$(n,m,n-m-1,N_{f})$ function $f$ with $N_{f}>2^{n-1}-2^{n/2}$. They showed
that for even $n\geq 12$, the nonlinearity of 1-resilient functions with
maximum possible algebraic degree $n-2$ can reach
$2^{n-1}-2^{n/2-1}-2^{n/2-2}-2^{n/4-2}-4$. It was further improved due to the
method proposed by Maitra and Pasalic [12]. Thanks to the existence of the
$(8,1,6,116)$ functions, an $(n,1,n-2,N_{f})$ function $f$ with
$N_{f}=2^{n-1}-2^{n/2-1}-2^{n/2-2}-4$ could be obtained, where $n\geq 10$.
5) Seberry et al. [26] and Dobbertin [6] independently presented constructions
of highly nonlinear balanced Boolean functions by modifying the M-M bent
functions. To obtain an $n$-variable balanced function, they concatenated
$2^{n/2}-1$ nonconstant distinct $n/2$-variable linear functions and one
$n/2$-variable modified M-M class highly nonlinear balanced Boolean function
which can be constructed in a recursive manner. These constructed functions
attain the best known nonlinearity for $n$-variable ($n$ even) balanced
functions. Unfortunately, these functions are not 1-resilient functions.
6) To obtain $m$-resilient functions with nonlinearity
$>2^{n-1}-2^{n/2-1}-2^{n/2-2}$ for $n$ even and $n\geq 14$, Maitra and Pasalic
[11] applied the concatenation of $2^{n/2}-2^{k}$ distinct linear
$m$-resilient functions on $n/2$ variables together with a highly nonlinear
resilient function on $n/2+k$ variables. Moreover, they have provided a
generalized construction method for $m$-resilient functions with nonlinearity
$2^{n-1}-2^{n/2-1}-2^{n/2-3}-2^{n/2-4}$ for all $n\geq 8m+6$. For sufficiently
large $n$, it is possible to get such functions with nonlinearity $\approx
2^{n-1}-2^{n/2-1}-\frac{2}{3}2^{n/2-2}$, which is the upper bound on the maximum
possible nonlinearity under their construction method.
7) Computer search techniques have played an important role in the design of
cryptographic Boolean functions in the last ten years [14], [16], [15], [4].
For a small number of variables, Boolean functions with good cryptographic
parameters could be found by using heuristic search techniques [12], [20],
[8]. However, search techniques cannot be used for functions with a large
number of variables at present.
8) During the past decade, the most impressive results on the design of
cryptographic Boolean functions were centered on finding small functions with
desirable cryptographic properties. When it comes to constructing large
functions, people have used recursive constructions [29], [30], [7] besides the
M-M construction and its revised (extended) versions. With the rapid development
of integrated circuit technology, Boolean functions with a large number of
variables can be easily implemented in hardware [22].
In this paper we propose a technique to construct highly nonlinear resilient
Boolean functions on a large even number of variables $(n\geq 12)$. We obtain a
large class of resilient Boolean functions with a nonlinearity higher than
that attainable by any previously known construction method.
The organization of this paper is as follows. In Section II, the basic
concepts and notions are presented. In Section III, we present a method to
construct a set of “disjoint spectra functions” by using a class of “partially
linear functions”. Our main construction is given in Section IV. A method for
constructing resilient functions on large even number of input variables is
proposed. We show that all the constructed functions are almost optimal. In
Section V, the degrees of the constructed functions are optimized. In Section
VI, an improved version of the main construction is given. Finally, Section
VII concludes the paper with an open problem.
## 2 Preliminary
To avoid confusion with the additions of integers in $\mathbb{R}$, denoted by
$+$ and $\Sigma_{i}$, we denote the additions over $\mathbb{F}_{2}$ by
$\oplus$ and $\bigoplus_{i}$. For simplicity, we denote by $+$ the addition of
vectors of $\mathbb{F}_{2}^{n}$. A Boolean function of $n$ variables is a
function from $\mathbb{F}_{2}^{n}$ into $\mathbb{F}_{2}$, and we denote by
$\mathcal{B}_{n}$ the set of all Boolean functions of $n$ variables. A Boolean
function $f(X_{n})\in\mathcal{B}_{n}$, where
$X_{n}=(x_{1},\cdots,x_{n})\in\mathbb{F}_{2}^{n}$, is generally represented by
its algebraic normal form (ANF)
$f(X_{n})=\bigoplus_{u\in\mathbb{F}_{2}^{n}}\lambda_{u}(\prod_{i=1}^{n}x_{i}^{u_{i}})$
(1)
where $\lambda_{u}\in\mathbb{F}_{2}$ and $u=(u_{1},\cdots,u_{n}).$ The
algebraic degree of $f(X_{n})$, denoted by $deg(f)$, is the maximal value of
$wt(u)$ such that $\lambda_{u}\neq 0$, where $wt(u)$ denotes the Hamming
weight of $u$. A Boolean function with $deg(f)\leq 1$ is said to be affine. In
particular, an affine function with constant term equal to zero is called a
linear function. Any linear function on $\mathbb{F}_{2}^{n}$ is denoted by
$\omega\cdot X_{n}=\omega_{1}x_{1}\oplus\cdots\oplus\omega_{n}x_{n}$
where $\omega=(\omega_{1},\cdots,\omega_{n})\in\mathbb{F}_{2}^{n}$. The Walsh
spectrum of $f\in\mathcal{B}_{n}$ in point $\omega$ is denoted by
$W_{f}(\omega)$ and calculated by
$W_{f}(\omega)=\sum_{X_{n}\in\mathbb{F}_{2}^{n}}(-1)^{f(X_{n})\oplus\omega\cdot
X_{n}}.$ (2)
$f\in\mathcal{B}_{n}$ is said to be balanced if its output column in the truth
table contains an equal number of $0$’s and $1$’s (i.e., $W_{f}(0)=0$).
In [31], a spectral characterization of resilient functions has been
presented.
_Lemma 1:_ An $n$-variable Boolean function is $m$-resilient if and only if
its Walsh transform satisfies
$W_{f}(\omega)=0,\textrm{ for $0\leq wt(\omega)\leq m$,
$\omega\in\mathbb{F}_{2}^{n}$}.$ (3)
The Hamming distance between two $n$-variable Boolean functions $f$ and $\rho$
is defined as
$d(f,\rho)=|\\{X_{n}\in\mathbb{F}_{2}^{n}:f(X_{n})\neq\rho(X_{n})\\}|.$
The set of all affine functions on $\mathbb{F}_{2}^{n}$ is denoted by $A(n)$.
The nonlinearity of a Boolean function $f\in\mathcal{B}_{n}$ is its distance
to the set of all affine functions and is defined as
$N_{f}=\min_{\rho\in A(n)}(d(f,\rho)).$
In terms of the Walsh spectrum, the nonlinearity of $f$ is given by [13]
$N_{f}=2^{n-1}-\frac{1}{2}\cdot\max_{\omega\in\mathbb{F}_{2}^{n}}|W_{f}(\omega)|.$
(4)
Parseval’s equation [10] states that
$\sum_{\omega\in\mathbb{F}_{2}^{n}}(W_{f}(\omega))^{2}=2^{2n}$ (5)
and implies that
$N_{f}\leq 2^{n-1}-2^{n/2-1}.$
Equality occurs if and only if $f$ is a bent function, which can happen only
when $n$ is even.
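For small $n$ the quantities above are easy to compute by brute force. The following minimal sketch (our own illustration, not code from the paper; the function names are ours) evaluates the Walsh spectrum of Eq. (2), the nonlinearity of Eq. (4), and the resiliency criterion of Lemma 1, and checks them on the $4$-variable bent function $x_{1}x_{2}\oplus x_{3}x_{4}$:

```python
from itertools import product

def walsh_spectrum(f, n):
    """Brute-force Walsh transform W_f(w) of Eq. (2); f maps an n-bit tuple to 0/1."""
    pts = list(product((0, 1), repeat=n))
    return {w: sum((-1) ** (f(x) ^ (sum(wi & xi for wi, xi in zip(w, x)) % 2))
                   for x in pts)
            for w in pts}

def nonlinearity(f, n):                      # Eq. (4)
    return 2 ** (n - 1) - max(abs(v) for v in walsh_spectrum(f, n).values()) // 2

def is_m_resilient(f, n, m):                 # Lemma 1
    return all(v == 0 for w, v in walsh_spectrum(f, n).items() if sum(w) <= m)

f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])  # a 4-variable bent (M-M) function
W = walsh_spectrum(f, 4)
assert all(abs(v) == 4 for v in W.values())  # |W_f(w)| = 2^{n/2} for every w
print("N_f =", nonlinearity(f, 4))           # 6 = 2^{n-1} - 2^{n/2-1}
print("balanced:", W[(0, 0, 0, 0)] == 0)     # False: bent functions are never balanced
print("1-resilient:", is_m_resilient(f, 4, 1))  # False: bent functions are not resilient
```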
Bent functions can be constructed by the M-M method. The original M-M
functions are defined as follows: for any positive integers $p$, $q$ such that
$n=p+q$, an M-M function is a function $f\in\mathcal{B}_{n}$ defined by
$f(Y_{q},X_{p})=\phi(Y_{q})\cdot
X_{p}\oplus\pi(Y_{q}),~{}~{}X_{p}\in\mathbb{F}_{2}^{p},Y_{q}\in\mathbb{F}_{2}^{q}$
(6)
where $\phi$ is any mapping from $\mathbb{F}_{2}^{q}$ to $\mathbb{F}_{2}^{p}$
and $\pi\in\mathcal{B}_{q}$. When $n$ is even, $p$=$q$=$n/2$, and $\phi$ is
injective, the M-M functions are bent. Certain choices of $\phi$ can easily
yield bent functions with degree $n/2$. For the case of $n=2$,
$f\in\mathcal{B}_{2}$ is bent if and only if $deg(f)=2$.
The M-M construction is in essence a concatenation of affine functions. The
following definition shows a more general approach to obtain a “large” Boolean
function by concatenating the truth tables of any small Boolean functions.
_Definition 1:_ Let $Y_{q}\in\mathbb{F}_{2}^{q}$, $X_{p}\in\mathbb{F}_{2}^{p}$,
and let $p$, $q$ be positive integers with $p+q=n$.
$f\in\mathcal{B}_{n}$ is called a concatenation of the functions in the set
$G=\\{g_{b}\ |\ b\in\mathbb{F}_{2}^{q}\\}\subset\mathcal{B}_{p}$ if
$\displaystyle f(Y_{q},X_{p})=\bigoplus_{b\in\mathbb{F}_{2}^{q}}Y_{q}^{b}\cdot
g_{b}(X_{p}),$ (7)
where the notation $Y_{q}^{b}$ is defined by
$\displaystyle Y_{q}^{b}=\left\\{\begin{array}[]{ll}1&\textrm{if $Y_{q}=b$}\\\
0&\textrm{if $Y_{q}\neq b$}.\end{array}\right.$ (10)
Theorem 2 in [28] allows us to verify that the following lemma is true.
_Lemma 2:_ With the same notation as in Definition 1, if all the functions in
$G$ are $m$-resilient functions, then $f$ is an $m$-resilient function.
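As a small self-contained illustration of Definition 1 and Lemma 2 (our own sketch, not taken from the paper), two $1$-resilient functions on $3$ variables are concatenated into a $4$-variable function, whose resiliency order is then verified via Lemma 1:

```python
from itertools import product

def walsh(f, n):                              # brute-force Walsh transform, Eq. (2)
    pts = list(product((0, 1), repeat=n))
    return {w: sum((-1) ** (f(x) ^ (sum(a & b for a, b in zip(w, x)) % 2)) for x in pts)
            for w in pts}

def resiliency_order(f, n):
    """Largest m with W_f(w) = 0 for all wt(w) <= m (Lemma 1); -1 if f is unbalanced."""
    W = walsh(f, n)
    m = -1
    while m + 1 <= n and all(v == 0 for w, v in W.items() if sum(w) <= m + 1):
        m += 1
    return m

# Two 1-resilient functions on 3 variables: g_(0) = x1 + x2 and g_(1) = x2 + x3 over F_2.
g = {(0,): lambda x: x[0] ^ x[1], (1,): lambda x: x[1] ^ x[2]}
# Their concatenation (Definition 1 with q = 1, p = 3): f(y, x1, x2, x3) = g_y(x1, x2, x3).
f = lambda z: g[(z[0],)](z[1:])

print("resiliency order of f:", resiliency_order(f, 4))  # at least 1 by Lemma 2 (here exactly 1)
```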
From now on, we will focus on highly nonlinear resilient Boolean functions
with an even number of variables in the following sense.
_Definition 2:_ Let $n\geq 4$ be even. $f\in\mathcal{B}_{n}$ is said to be
almost optimal if
$\displaystyle 2^{n-1}-2^{n/2}\leq N_{f}<2^{n-1}-2^{n/2-1}.$ (11)
## 3 A Large Set of Disjoint Spectra Functions
Disjoint spectra functions will play an important role in constructing almost
optimal resilient functions in this paper.
_Definition 3:_ A set of Boolean functions
$\\{g_{1},g_{2},\cdots,g_{e}\\}\subset\mathcal{B}_{p}$ such that for any
$\alpha\in\mathbb{F}_{2}^{p}$,
$\displaystyle W_{g_{i}}(\alpha)\cdot W_{g_{j}}(\alpha)=0,\ \ 1\leq i<j\leq e$
(12)
is called a set of disjoint spectra functions.
The idea that two Boolean functions with disjoint spectra can be used to
construct highly nonlinear resilient functions was clearly mentioned in [18],
and it was also used in [29], [7], [12]. In this section, we provide a simple
construction method for a large set of disjoint spectra functions by using a
set of “partially linear” functions.
As a special family of resilient functions, partially linear functions were
first considered by Siegenthaler [28]. Here is the definition of such
functions.
_Definition 4:_ Let $t$ be a positive integer and
$\\{i_{1},\cdots,i_{t}\\}\cup\\{i_{t+1},\cdots,i_{p}\\}=\\{1,\cdots,p\\}$. Let
$X_{p}=(x_{1},\cdots,x_{p})\in\mathbb{F}_{2}^{p}$,
$X^{\prime}_{t}=(x_{i_{1}},\cdots,x_{i_{t}})\in\mathbb{F}_{2}^{t}$ and
$X^{\prime\prime}_{p-t}=(x_{i_{t+1}},\cdots,x_{i_{p}})\in\mathbb{F}_{2}^{p-t}$.
For any $c\in\mathbb{F}_{2}^{t}$, $g_{c}\in\mathcal{B}_{p}$ is called a $t$th-
order partially linear function if
$\displaystyle g_{c}(X_{p})=c\cdot X^{\prime}_{t}\oplus
h_{c}(X^{\prime\prime}_{p-t})$ (13)
where $h_{c}\in\mathcal{B}_{p-t}$.
Now we use partially linear functions to construct a set of disjoint spectra
functions.
_Lemma 3:_ With the same notation as in Definition 4, a set of $t$th-order
partially linear functions
$\displaystyle T=\\{g_{c}(X_{p})=c\cdot X^{\prime}_{t}\oplus
h_{c}(X^{\prime\prime}_{p-t})~{}|~{}c\in\mathbb{F}_{2}^{t}\\}$ (14)
is a set of disjoint spectra functions.
_Proof:_ Let $\alpha=(\delta,\theta)\in\mathbb{F}_{2}^{p}$, where
$\delta\in\mathbb{F}_{2}^{t}$ and $\theta\in\mathbb{F}_{2}^{p-t}$. For any
$g_{c}\in T$,
$\displaystyle W_{g_{c}}(\alpha)$
$\displaystyle=\sum_{X_{p}\in\mathbb{F}_{2}^{p}}(-1)^{c\cdot
X^{\prime}_{t}\oplus h_{c}(X^{\prime\prime}_{p-t})\oplus\alpha\cdot X_{p}}$
$\displaystyle=\sum_{X_{p}\in\mathbb{F}_{2}^{p}}(-1)^{(c+\delta)\cdot
X^{\prime}_{t}\oplus(h_{c}(X^{\prime\prime}_{p-t})\oplus\theta\cdot
X^{\prime\prime}_{p-t})}$
$\displaystyle=\sum_{X^{\prime}_{t}\in\mathbb{F}_{2}^{t}}(-1)^{(c+\delta)\cdot
X^{\prime}_{t}}\sum_{X^{\prime\prime}_{p-t}\in\mathbb{F}_{2}^{p-t}}(-1)^{(h_{c}(X^{\prime\prime}_{p-t})\oplus\theta\cdot
X^{\prime\prime}_{p-t})}$
$\displaystyle=\left(\sum_{X^{\prime}_{t}\in\mathbb{F}_{2}^{t}}(-1)^{(c+\delta)\cdot
X^{\prime}_{t}}\right)\cdot W_{h_{c}}(\theta)$ (15)
We have
$\displaystyle W_{g_{c}}(\alpha)=\left\\{\begin{array}[]{ll}0&\textrm{if
$c\neq\delta$}\\\ 2^{t}\cdot W_{h_{c}}(\theta)&\textrm{if
$c=\delta$}.\end{array}\right.$ (18)
For any $g_{c^{\prime}}\in T$, $c^{\prime}\neq c$, we have
$W_{g_{c}}(\alpha)\cdot W_{g_{c^{\prime}}}(\alpha)=0.$
According to Definition 3, $T$ is a set of disjoint spectra functions.
Disjoint spectra functions (partially linear functions) will be used as the
“components” to construct almost optimal resilient Boolean functions in this
paper.
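Lemma 3 is easy to check numerically for small parameters. The following toy sketch (ours, not part of the paper; the brute-force Walsh transform is redefined here so the snippet is self-contained) takes $p=4$, $t=2$ and an arbitrary choice of $h_{c}\in\mathcal{B}_{2}$; by Eq. (18) the spectra of the four functions $g_{c}$ have pairwise disjoint supports:

```python
from itertools import product

def walsh(f, n):                              # brute-force Walsh transform, Eq. (2)
    pts = list(product((0, 1), repeat=n))
    return {w: sum((-1) ** (f(x) ^ (sum(a & b for a, b in zip(w, x)) % 2)) for x in pts)
            for w in pts}

p, t = 4, 2
cs = list(product((0, 1), repeat=t))
# An arbitrary choice of h_c in B_{p-t} for each c (here a quadratic plus a constant).
h = {c: (lambda y, c=c: (y[0] & y[1]) ^ c[0]) for c in cs}
# g_c(X_p) = c . X'_t XOR h_c(X''_{p-t}), the t-th order partially linear form of Eq. (14).
g = {c: (lambda x, c=c: (sum(ci & xi for ci, xi in zip(c, x[:t])) % 2) ^ h[c](x[t:]))
     for c in cs}

spectra = {c: walsh(g[c], p) for c in cs}
disjoint = all(spectra[c1][w] * spectra[c2][w] == 0
               for i, c1 in enumerate(cs) for c2 in cs[i + 1:] for w in spectra[c1])
print("pairwise disjoint spectra:", disjoint)  # True, as guaranteed by Lemma 3
```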
_Open Problem:_ Construct a large set of disjoint spectra functions which are
not (linearly equivalent to) partially linear functions.
## 4 Main Construction
This section presents a method for constructing resilient Boolean functions
with very high nonlinearity. The algebraic degrees of the functions are also
given.
_Construction 1:_ Let $n\geq 12$ be an even integer, let $m$ be a positive integer,
and let $(a_{1},\cdots,a_{s})\in\mathbb{F}_{2}^{s}$ be such that
$\sum_{j=m+1}^{n/2}{{n/2}\choose
j}+\sum_{k=1}^{s}\left(a_{k}\cdot\sum_{j=m+1}^{n/2-2k}{{n/2-2k}\choose
j}\right)\geq 2^{n/2}$ (19)
where $s=\lfloor(n-2m-2)/4\rfloor$. Let
$X_{n/2}=(x_{1},\cdots,x_{n/2})\in\mathbb{F}_{2}^{n/2}$,
$X^{\prime}_{t}=(x_{1},\cdots,x_{t})\in\mathbb{F}_{2}^{t}$, and
$X^{\prime\prime}_{2k}=(x_{t+1},\cdots,x_{n/2})$ $\in\mathbb{F}_{2}^{2k}$ with
$t+2k=n/2$. Let
$\Gamma_{0}=\\{c\cdot X_{n/2}\ |\ c\in\mathbb{F}_{2}^{n/2},\ wt(c)>m\\}.$ (20)
For $1\leq k\leq s$, let $H_{k}$ be a nonempty set of $2k$-variable bent
functions with algebraic degree $\max(k,2)$ and
$\displaystyle\Gamma_{k}=\\{c\cdot X^{\prime}_{t}\oplus
h_{c}(X^{\prime\prime}_{2k})~{}|~{}c\in\mathbb{F}_{2}^{t},wt(c)>m\\}$ (21)
where $h_{c}\in H_{k}$. Set
$\displaystyle\Gamma=\bigcup_{k=0}^{s}\Gamma_{k}.$ (22)
Denote by $\phi$ any injective mapping from $\mathbb{F}_{2}^{n/2}$ to
$\Gamma$. Then for
$(Y_{n/2},X_{n/2})\in\mathbb{F}_{2}^{n/2}\times\mathbb{F}_{2}^{n/2}$ we
construct the function $f\in\mathcal{B}_{n}$ as follows:
$\displaystyle
f(Y_{n/2},X_{n/2})=\bigoplus_{b\in\mathbb{F}_{2}^{n/2}}Y_{n/2}^{b}\cdot\phi(b).$
(23)
_Remark 1:_
1) Since Inequality (19) holds, we have
$\displaystyle|\Gamma|=\sum_{k=0}^{s}|\Gamma_{k}|\geq 2^{n/2}.$ (24)
Hence we can find an injective mapping $\phi$ (see also the counting sketch after this remark).
2) All the functions in $\Gamma$ are partially linear functions and each
$\Gamma_{k}$ is a set of disjoint spectra functions.
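As a quick sanity check of Inequality (19) and of the sizes $|\Gamma_{k}|$ (the counting sketch referred to in Remark 1; it is our own illustration and the function name is ours), the snippet below enumerates $|\Gamma_{k}|=\sum_{j=m+1}^{n/2-2k}{n/2-2k\choose j}$ for given $n$ and $m$. With $n=16$ and $m=1$ it gives $|\Gamma_{0}|=247$ and $|\Gamma_{2}|=11$, so $|\Gamma_{0}|+|\Gamma_{2}|=258>2^{8}$, the numbers used in Example 1 below:

```python
from math import comb

def gamma_sizes(n, m):
    """|Gamma_k| = sum_{j>m} C(n/2 - 2k, j) for k = 0, ..., s, as in Construction 1
    (one bent function h_c per admissible coefficient vector c)."""
    s = (n - 2 * m - 2) // 4
    half = n // 2
    return [sum(comb(half - 2 * k, j) for j in range(m + 1, half - 2 * k + 1))
            for k in range(s + 1)]

n, m = 16, 1
sizes = gamma_sizes(n, m)                     # [247, 57, 11, 1]
print("|Gamma_k| =", sizes, " total =", sum(sizes), " 2^{n/2} =", 2 ** (n // 2))
assert sum(sizes) >= 2 ** (n // 2)            # Inequality (19) with all a_k = 1
```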
_Theorem 1:_ Let $f\in\mathcal{B}_{n}$ be as in Construction 1. Then $f$ is
an almost optimal $(n,m,d,N_{f})$ function with
$N_{f}\geq 2^{n-1}-2^{n/2-1}-\sum_{k=1}^{s}(a_{k}\cdot 2^{n/2-k-1})$ (25)
and
$d\leq n/2+\max\\{2,\max\\{k~{}|~{}a_{k}\neq 0,~{}k=1,2,\cdots s\\}\\}.$ (26)
_Proof:_ For any
$(\beta,\alpha)\in\mathbb{F}_{2}^{n/2}\times\mathbb{F}_{2}^{n/2}$ we have
$\displaystyle W_{f}(\beta,\alpha)$
$\displaystyle=\sum_{(Y_{n/2},X_{n/2})\in\mathbb{F}_{2}^{n}}(-1)^{f(Y_{n/2},X_{n/2})\oplus(\beta,\alpha)\cdot(Y_{n/2},X_{n/2})}$
$\displaystyle=\sum_{b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot
b}\sum_{X_{n/2}\in\mathbb{F}_{2}^{n/2}}(-1)^{g_{b}(X_{n/2})\oplus\alpha\cdot
X_{n/2}}$ $\displaystyle=\sum_{b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot
b}W_{g_{b}}(\alpha)$
$\displaystyle=\sum_{k=0}^{s}\sum_{\phi(b)\in\Gamma_{k}\atop
b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot b}W_{g_{b}}(\alpha)$ (27)
Let $0\leq k\leq s$. Any $g_{b}\in\Gamma_{k}$ is a partially linear function.
From (18), we have
$W_{g_{b}}(\alpha)\in\\{0,\pm 2^{n/2-k}\\}.$
Let
$\displaystyle A_{k}=\Gamma_{k}\cap\\{\phi(b)\ |\
b\in\mathbb{F}_{2}^{n/2}\\}.$ (28)
Since (19) holds, there exists an injective mapping $\phi$ such that
$\displaystyle\sum_{k=1}^{s}|A_{k}|=\sum_{i=0}^{m}{{n/2}\choose i}.$ (29)
From Lemma 3, $\Gamma_{k}$ is a set of disjoint spectra functions. Noting
(12), if $A_{k}\neq\emptyset,$ then we have
$\displaystyle\sum_{\phi(b)\in\Gamma_{k}\atop
b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot b}W_{g_{b}}(\alpha)\in\\{0,\pm
2^{n/2-k}\\}.$ (30)
If $A_{k}=\emptyset,$ then we have
$\displaystyle\sum_{\phi(b)\in\Gamma_{k}\atop
b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot b}W_{g_{b}}(\alpha)=0.$ (31)
Combining (27), (30), and (31), we have
$\displaystyle|W_{f}(\beta,\alpha)|\leq 2^{n/2}+\sum_{k=1}^{s}a_{k}\cdot
2^{n/2-k}$ (32)
where
$\displaystyle a_{k}=\left\\{\begin{array}[]{ll}0&\textrm{if
$A_{k}=\emptyset$}\\\ 1&\textrm{if $A_{k}\neq\emptyset$}.\end{array}\right.$
(35)
From (4), Inequality (25) holds. From Definition 2, $f$ is almost optimal.
Note that the algebraic degree of any function in $\Gamma_{1}$ is 2. Hence,
when
$\displaystyle\max\\{k~{}|~{}a_{k}\neq 0,\ k=1,2,\cdots s\\}=1$ (36)
$d\leq n/2+2$ where the equality holds if and only if $|A_{1}|$ is odd. Note
that the algebraic degree of any bent functions on $\mathbb{F}_{2}^{2k}$ can
reach $k$ when $k\geq 2$. So $d$ can reach $n/2+k^{\prime}$ when
$k^{\prime}\geq 2$ and $|A_{k^{\prime}}|$ is odd, where
$\displaystyle k^{\prime}=\max\\{k\ |\ a_{k}\neq 0,\ k=1,2,\cdots s\\}.$ (37)
Any $g_{b}\in\Gamma$, $b\in\mathbb{F}_{2}^{n/2}$, is an $m$-resilient
function, where $g_{b}(X_{n/2})=\phi(b)$. From Lemma 2, $f$ is an
$m$-resilient function since it is a concatenation of $m$-resilient functions.
$\Box$
_Remark 2:_
1) The nonlinearity of the resilient functions constructed above is always
strictly greater than $2^{n-1}-2^{n/2}$. For reasonable fixed $n$ and $m$, the
nonlinearity of the constructed functions is greater than that of previously
known ones, except for some functions on a small even number of variables.
2) Let $m$ be the maximum number such that Inequality (19) holds. Roughly
speaking, $m/n$ tends to 1/4.
_Example 1:_ It is possible to construct a $(16,1,10,2^{15}-2^{7}-2^{5})$
function.
Note that $s=\lfloor(n-2m-2)/4\rfloor=3$. For $1\leq k\leq 3$, let
$X^{\prime}_{8-2k}=(x_{1},\cdots,x_{8-2k})\in\mathbb{F}_{2}^{8-2k}$ and
$X^{\prime\prime}_{2k}=(x_{8-2k+1},\cdots,x_{8})\in\mathbb{F}_{2}^{2k}$. Let
$X_{8}=(X^{\prime}_{8-2k},X^{\prime\prime}_{2k})\in\mathbb{F}_{2}^{8}$. We
construct four sets of disjoint spectra functions as follows:
$\Gamma_{0}=\\{c\cdot X_{8}\ |\ wt(c)>1,c\in\mathbb{F}_{2}^{8}\\}.$
For $1\leq k\leq 3$,
$\Gamma_{k}=\\{c\cdot X^{\prime}_{8-2k}\oplus h_{c}(X^{\prime\prime}_{2k})\ |\
wt(c)>1,c\in\mathbb{F}_{2}^{8-2k}\\}$
where $h_{c}\in H_{k}$. We have
$|\Gamma_{k}|=\sum_{i=2}^{8-2k}{{8-2k}\choose i},\ 0\leq k\leq 3.$
Since
$|\Gamma_{0}|+|\Gamma_{2}|=258>2^{8},$
it is possible to establish an injective mapping $\phi$ from
$\mathbb{F}_{2}^{8}$ to $\Re$, where $\Re=\Gamma_{0}\cup\Gamma_{2}$. Then for
$(Y_{8},X_{8})\in\mathbb{F}_{2}^{8}\times\mathbb{F}_{2}^{8}$ we construct the
function $f\in\mathcal{B}_{16}$ as follows:
$f(Y_{8},X_{8})=\bigoplus_{b\in\mathbb{F}_{2}^{8}}Y_{8}^{b}\cdot\phi(b).$
From (32), for any
$(\beta,\alpha)\in\mathbb{F}_{2}^{8}\times\mathbb{F}_{2}^{8}$, we have
$\displaystyle\max_{(\beta,\alpha)\in\mathbb{F}_{2}^{8}}|W_{f}(\beta,\alpha)|\leq\sum_{k=0}^{3}\sum_{g\in\Gamma_{k}}|W_{g}(\alpha)|=\sum_{k=0}^{3}\max_{\alpha\in\mathbb{F}_{2}^{8}\atop{g\in\Gamma_{k}}}|W_{g}(\alpha)|=2^{8}+2^{6}$
By (4), we have
$N_{f}\geq 2^{15}-2^{7}-2^{5}.$
Note that the partially linear function in $\Gamma_{2}$ can be denoted by
$g=c\cdot X^{\prime}_{4}\oplus h_{c}(X^{\prime\prime}_{4})$ where $h_{c}$ is a
bent function on $\mathbb{F}_{2}^{4}$. Since the algebraic degree of
$h_{c}(X^{\prime\prime}_{4})$ can reach 2, $deg(f)$ can reach 8+2=10. So it is
possible to obtain a $(16,1,10,2^{15}-2^{7}-2^{5})$ function.
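The example can be checked numerically. The sketch below is our own verification code (the paper only asserts the existence of a suitable $\phi$; any injective choice gives the bound): it fixes the bent function $x_{5}x_{6}\oplus x_{7}x_{8}$, concatenates $2^{8}$ selected rows, and confirms via a fast Walsh–Hadamard transform that the resulting 16-variable function is 1-resilient with $N_{f}\geq 2^{15}-2^{7}-2^{5}$.

```python
# Sketch (our own choices of bent function and of phi): numerical check of
# Example 1.  Builds Gamma_0 and Gamma_2, concatenates 2^8 rows into a
# 16-variable function, then verifies 1-resilience and N_f >= 2^15-2^7-2^5.
import numpy as np

def fwht(signs):
    """Fast Walsh-Hadamard transform of a +/-1 vector of length 2^n;
    returns the Walsh spectrum W_f."""
    a = signs.astype(np.int64)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y
            a[i + h:i + 2 * h] = x - y
        h *= 2
    return a

popcount = np.array([bin(i).count("1") for i in range(256)])

def linear_row(c):
    """Truth table (length 256) of x -> c.x on F_2^8."""
    x = np.arange(256)
    return popcount[x & c] & 1

def gamma2_row(c):
    """Truth table of c.(x1..x4) (+) bent(x5..x8), bent = x5 x6 (+) x7 x8."""
    x = np.arange(256)
    lin = popcount[x & 0x0F & c] & 1                 # c acts on the low nibble
    hi = x >> 4
    bent = ((hi & 1) & ((hi >> 1) & 1)) ^ (((hi >> 2) & 1) & ((hi >> 3) & 1))
    return lin ^ bent

# Gamma_0: 247 linear rows with wt(c) >= 2;  Gamma_2: 11 rows with wt(c) >= 2
gamma0 = [linear_row(c) for c in range(256) if bin(c).count("1") >= 2]
gamma2 = [gamma2_row(c) for c in range(16) if bin(c).count("1") >= 2]
rows = (gamma0 + gamma2)[:256]       # an (arbitrary) injective phi

# concatenate: f(y, x) = rows[y](x); truth table indexed by (y << 8) | x
tt = np.concatenate(rows)
W = fwht(1 - 2 * tt)                 # Walsh spectrum of f

nf = 2**15 - np.max(np.abs(W)) // 2
wt16 = np.array([bin(i).count("1") for i in range(1 << 16)])
resilient1 = bool(np.all(W[wt16 <= 1] == 0))
assert resilient1 and nf >= 2**15 - 2**7 - 2**5
print("N_f =", nf, ">=", 2**15 - 2**7 - 2**5, "; 1-resilient:", resilient1)
```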
## 5 Degree Optimization
Let
$\displaystyle\\{i_{1},\cdots,i_{m+1}\\}\cup\\{i_{m+2},\cdots,i_{n/2}\\}=\\{1,\cdots,n/2\\}.$
The algebraic degree of any $(n,m,d,N_{f})$ function $f$ obtained in
Construction 1 can be optimized by adding a monomial $x_{i_{m+2}}\cdots
x_{i_{n/2}}$ to one function $g\in\Gamma$ with $\phi^{-1}(g)\neq\emptyset$,
where $g$ can be denoted by
$\displaystyle g=x_{i_{1}}\oplus\cdots\oplus
x_{i_{m+1}}\oplus\hbar(x_{i_{m+2}},\cdots,x_{i_{n/2}}).$ (38)
It is not difficult to prove that
$N_{f^{\prime}}\in\\{N_{f},N_{f}-2^{m+1}\\}$, where $N_{f^{\prime}}$ is the
nonlinearity of the degree-optimized function $f^{\prime}$. To optimize the
algebraic degree of $f$ and ensure that $N_{f^{\prime}}=N_{f}$, below we
propose a way to construct a set of disjoint spectra functions
$\Gamma^{\prime}_{0}$ that includes a nonlinear function
$g^{\prime}=g\oplus x_{i_{m+2}}\cdots x_{i_{n/2}}$.
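The degree claim can be checked mechanically: the algebraic degree is read off the algebraic normal form obtained by a Möbius transform of the truth table. The small Python sketch below (our own illustration, on a toy component with $n/2=6$ and $m=1$) shows that adding the monomial $x_{i_{m+2}}\cdots x_{i_{n/2}}$ raises the degree of the component from 1 to $n/2-m-1$; after concatenation with the $Y$-part, the degree becomes $n-m-1$ as stated in Theorem 2.

```python
# Sketch: algebraic degree via the Moebius (ANF) transform, and the effect of
# adding the monomial x_{m+2}...x_{n/2} to a linear component (illustrative).
from itertools import product

def anf(tt, n):
    """Moebius transform: truth table (dict bits -> 0/1) to ANF coefficients."""
    coeff = dict(tt)
    for i in range(n):
        for x in product((0, 1), repeat=n):
            if x[i] == 1:
                x0 = x[:i] + (0,) + x[i + 1:]
                coeff[x] ^= coeff[x0]
    return coeff

def degree(f, n):
    tt = {x: f(x) for x in product((0, 1), repeat=n)}
    return max((sum(x) for x, v in anf(tt, n).items() if v), default=0)

n_half = 6   # with m = 1, the added monomial spans the remaining 4 variables
g  = lambda x: x[0] ^ x[1]                             # x1 + x2, degree 1
gp = lambda x: g(x) ^ (x[2] & x[3] & x[4] & x[5])      # add x3 x4 x5 x6

print(degree(g, n_half), degree(gp, n_half))           # prints: 1 4
```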
_Construction 2:_ Let $n\geq 12$ be an even number, $m$ be a positive number,
and $(a_{1},\cdots,a_{s})\in\mathbb{F}_{2}^{s}$ such that
$\displaystyle\left(\sum_{j=m+1}^{n/2}{{n/2}\choose
j}-2^{n/2-m-1}+1\right)+\sum_{k=1}^{s}\left(a_{k}\cdot\sum_{j=m+1}^{n/2-2k}{{n/2-2k}\choose
j}\right)\geq 2^{n/2}$ (39)
where $s=\lfloor(n-2m-2)/4\rfloor$. Let
$\displaystyle
S=\\{c~{}|~{}c=(c_{1},\cdots,c_{n/2})\in\mathbb{F}_{2}^{n/2},wt(c)>m,(c_{i_{1}},\cdots,c_{i_{m+1}})=(1\cdots
1)\\}.$ (40)
Let
$\displaystyle g^{\prime}(X_{n/2})=c^{\prime}\cdot X_{n/2}\oplus
x_{i_{m+2}}x_{i_{m+3}}\cdots x_{i_{n/2}}$ (41)
where $c^{\prime}\in S$. For $1\leq k\leq s$, $\Gamma_{k}$ is defined as in
Construction 1, and we modify the $\Gamma_{0}$ of Construction 1 as follows:
$\displaystyle\Gamma_{0}^{\prime}=\\{g^{\prime}(X_{n/2})\\}\cup\\{c\cdot
X_{n/2}|c\in\mathbb{F}_{2}^{n/2},wt(c)>m,c\notin S\\}.$ (42)
Set
$\displaystyle\Gamma^{\prime}=\Gamma^{\prime}_{0}\cup\Gamma_{1}\cup\cdots\cup\Gamma_{s}.$
(43)
Denote by $\phi^{\prime}$ any injective mapping from $\mathbb{F}_{2}^{n/2}$ to
$\Gamma^{\prime}$ such that $\phi^{\prime-1}(g_{c^{\prime}})\neq\emptyset$.
The function $f^{\prime}\in\mathcal{B}_{n}$ is constructed as follows:
$\displaystyle
f^{\prime}(Y_{n/2},X_{n/2})=\bigoplus_{b\in\mathbb{F}_{2}^{n/2}}Y_{n/2}^{b}\cdot\phi^{\prime}(b).$
(44)
_Theorem 2:_ The function $f^{\prime}\in\mathcal{B}_{n}$ proposed by
Construction 2 is an almost optimal $(n,m,n-m-1,N_{f^{\prime}})$ function with
$\displaystyle N_{f^{\prime}}\geq 2^{n-1}-2^{n/2-1}-\sum_{k=1}^{s}a_{k}\cdot
2^{n/2-k-1}.$ (45)
_Proof:_ $f^{\prime}$ is an $m$-resilient function since it is a concatenation
of $m$-resilient functions.
Let $g_{b}(X_{n/2})=\phi^{\prime}(b)\in\Gamma^{\prime}$,
$b\in\mathbb{F}_{2}^{n/2}$. From the proof of Theorem 1, for any
$(\beta,\alpha)\in\mathbb{F}_{2}^{n/2}\times\mathbb{F}_{2}^{n/2}$, we have
$\displaystyle
W_{f^{\prime}}(\beta,\alpha)=\sum_{\phi(b)\in\Gamma^{\prime}_{0}\atop
b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot
b}W_{g_{b}}(\alpha)+\sum_{k=1}^{s}\sum_{\phi(b)\in\Gamma_{k}\atop
b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot b}W_{g_{b}}(\alpha).$ (46)
Let
$c^{\prime}=(c^{\prime}_{1},\cdots,c^{\prime}_{n/2})\in\mathbb{F}_{2}^{n/2}$
and $\alpha=(\alpha_{1},\cdots,\alpha_{n/2})\in\mathbb{F}_{2}^{n/2}.$ We have
$\displaystyle W_{g^{\prime}}(\alpha)$
$\displaystyle=\sum_{X_{n/2}\in\mathbb{F}_{2}^{n/2}}(-1)^{(c^{\prime}+\alpha)\cdot
X_{n/2}\oplus x_{i_{m+2}}\cdots~{}x_{i_{n/2}}}$
$\displaystyle=\left\\{\begin{array}[]{ll}2^{n/2}-2^{m+2}&\textrm{if
$\alpha=c^{\prime}$}\\\ \pm 2^{m+2}&\textrm{if $\alpha\neq c^{\prime}$ and
$\theta=\delta$}\\\ 0&\textrm{if $\alpha\neq c^{\prime}$ and
$\theta\neq\delta$}.\end{array}\right.$ (50)
where $\delta=(c^{\prime}_{i_{1}},\cdots,c^{\prime}_{i_{m+1}})$ and
$\theta=(\alpha_{i_{1}},\cdots,\alpha_{i_{m+1}})$. Let
$g_{b}(X_{n/2})=\phi^{\prime}(b)\in\Gamma^{\prime}$,
$b\in\mathbb{F}_{2}^{n/2}$. When $g_{b}\in\Gamma^{\prime}_{0}$ and
$g_{b}=c\cdot X_{n/2}\neq g^{\prime}$, we have
$\displaystyle W_{g_{b}}(\alpha)=\left\\{\begin{array}[]{ll}0&\textrm{if
$\alpha\neq c$}\\\ 2^{n/2}&\textrm{if $\alpha=c$}.\end{array}\right.$ (53)
From (42), if $\alpha\neq c^{\prime}$ and
$(\alpha_{i_{1}},\cdots,\alpha_{i_{m+1}})=(1\cdots 1)$, then $\alpha\neq c$.
Obviously, $\Gamma^{\prime}_{0}$ is a set of disjoint spectra functions. So we
have
$\displaystyle\sum_{\phi(b)\in\Gamma^{\prime}_{0}\atop
b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot b}W_{g_{b}}(\alpha)\in\\{0,\pm
2^{m+2},\pm(2^{n/2}-2^{m+2}),\pm 2^{n/2}\\}.$ (54)
Let $A_{k}=\Gamma_{k}\cap\\{\phi(b)\ |\ b\in\mathbb{F}_{2}^{n/2}\\}.$
Similarly to the proof of Theorem 1, for any
$(\beta,\alpha)\in\mathbb{F}_{2}^{n}$, we have
$|W_{f^{\prime}}(\beta,\alpha)|\leq 2^{n/2}+\sum_{k=1}^{s}a_{k}\cdot
2^{n/2-k}$
where
$a_{k}=\left\\{\begin{array}[]{ll}0&\textrm{if $A_{k}=\emptyset$}\\\
1&\textrm{if $A_{k}\neq\emptyset$}.\end{array}\right.$
From (3), Inequality (45) holds, and $f^{\prime}$ is almost optimal. Since
$g^{\prime}$ appears in the concatenation, $deg(f^{\prime})=n-m-1$. $\Box$
_Remark 3:_
1) The idea above for obtaining degree-optimized resilient functions was first
considered by Pasalic [17].
2) A long list of input instances and the corresponding cryptographic
parameters can be found in Table 1 and Table 2. In Table 2, the entries marked
with “*” represent functions that cannot be degree-optimized via Construction
2 under the requirement that $N_{f^{\prime}}=N_{f}$.
Table 1: Existence of Almost Optimal $(n,m,n-m-1,N_{f^{\prime}})$ functions ($1\leq m\leq 4$) $m$ | $n$ | $N_{f^{\prime}}$
---|---|---
| | $12\leq n\leq 20$ | $2^{n-1}-2^{n/2-1}-2^{n/4+1}-4$
| | $24\leq n\leq 112$ | $2^{n-1}-2^{n/2-1}-2^{n/4+2}-4$
| | $116\leq n\leq 132$ | $2^{n-1}-2^{n/2-1}-2^{n/4+2}-2^{n/4+1}-4$
| $n\equiv 0\atop{(mod4)}$ | $n=136$ | $2^{n-1}-2^{n/2-1}-2^{n/4+2}-2^{n/4+1}-2^{n/4}-4$
| | $140\leq n\leq 492$ | $2^{n-1}-2^{n/2-1}-2^{n/4+3}-4$
| | $496\leq n\leq 512$ | $2^{n-1}-2^{n/2-1}-2^{n/4+3}-2^{n/4+1}-4$
| | $14\leq n\leq 50$ | $2^{n-1}-2^{n/2-1}-2^{{(n+6)}/4}-4$
| | $54\leq n\leq 58$ | $2^{n-1}-2^{n/2-1}-2^{{(n+6)}/4}-2^{{(n+2)}/4}-4$
| | $62\leq n\leq 238$ | $2^{n-1}-2^{n/2-1}-2^{{(n+10)}/4}-4$
| $n\equiv 2\atop{(mod4)}$ | $242\leq n\leq 246$ | $2^{n-1}-2^{n/2-1}-2^{{(n+10)}/4}-2^{{(n+2)}/4}-4$
| | $250\leq n\leq 290$ | $2^{n-1}-2^{n/2-1}-2^{{(n+10)}/4}-2^{{(n+6)}/4}-4$
| | $294\leq n\leq 298$ | $2^{n-1}-2^{n/2-1}-2^{{(n+10)}/4}-2^{{(n+6)}/4}-2^{{(n+2)}/4}-4$
| | $n=16$ | $2^{n-1}-2^{n/2-1}-2^{n/4+2}-8$
| | $20\leq n\leq 40$ | $2^{n-1}-2^{n/2-1}-2^{n/4+3}-8$
| | $n=44$ | $2^{n-1}-2^{n/2-1}-2^{n/4+3}-2^{n/4+2}-8$
| $n\equiv 0\atop{(mod4)}$ | $48\leq n\leq 84$ | $2^{n-1}-2^{n/2-1}-2^{n/4+4}-8$
2 | | $n=88$ | $2^{n-1}-2^{n/2-1}-2^{n/4+4}-2^{n/4+2}-8$
| | $92\leq n\leq 96$ | $2^{n-1}-2^{n/2-1}-2^{n/4+4}-2^{n/4+3}-8$
| | $100\leq n\leq 176$ | $2^{n-1}-2^{n/2-1}-2^{n/4+5}-8$
| | $18\leq n\leq 26$ | $2^{n-1}-2^{n/2-1}-2^{(n+10)/4}-8$
| $n\equiv 2\atop{(mod4)}$ | $30\leq n\leq 58$ | $2^{n-1}-2^{n/2-1}-2^{(n+14)/4}-8$
| | $62\leq n\leq 66$ | $2^{n-1}-2^{n/2-1}-2^{(n+14)/4}-2^{(n+10)/4}-8$
| | $70\leq n\leq 122$ | $2^{n-1}-2^{n/2-1}-2^{(n+18)/4}-8$
| | $n=20$ | $2^{n-1}-2^{n/2-1}-2^{n/4+3}-2^{n/4+2}-16$
| | $24\leq n\leq 32$ | $2^{n-1}-2^{n/2-1}-2^{n/4+4}-16$
| | $n=36$ | $2^{n-1}-2^{n/2-1}-2^{n/4+4}-2^{n/4+3}-16$
| | $40\leq n\leq 56$ | $2^{n-1}-2^{n/2-1}-2^{n/4+5}-16$
| $n\equiv 0\atop{(mod4)}$ | $n=60$ | $2^{n-1}-2^{n/2-1}-2^{n/4+5}-2^{n/4+4}-16$
| | $64\leq n\leq 88$ | $2^{n-1}-2^{n/2-1}-2^{n/4+6}-16$
| | $n=92$ | $2^{n-1}-2^{n/2-1}-2^{n/4+6}-2^{n/4+4}-16$
3 | | $n=96$ | $2^{n-1}-2^{n/2-1}-2^{n/4+6}-2^{n/4+5}-16$
| | $100\leq n\leq 144$ | $2^{n-1}-2^{n/2-1}-2^{n/4+7}-16$
| | $22\leq n\leq 26$ | $2^{n-1}-2^{n/2-1}-2^{(n+14)/4}-16$
| | $30\leq n\leq 42$ | $2^{n-1}-2^{n/2-1}-2^{{(n+18)}/4}-16$
| | $n=46$ | $2^{n-1}-2^{n/2-1}-2^{{(n+18)}/4}-2^{{(n+14)}/4}-16$
| $n\equiv 2\atop{(mod4)}$ | $52\leq n\leq 70$ | $2^{n-1}-2^{n/2-1}-2^{{(n+22)}/4}-16$
| | $n=74$ | $2^{n-1}-2^{n/2-1}-2^{{(n+22)}/4}-2^{{(n+18)}/4}-16$
| | $n=78$ | $2^{n-1}-2^{n/2-1}-2^{{(n+22)}/4}-2^{{(n+18)}/4}-2^{{(n+14)}/4}-16$
| | $82\leq n\leq 114$ | $2^{n-1}-2^{n/2-1}-2^{{(n+26)}/4}-16$
| | $28\leq n\leq 32$ | $2^{n-1}-2^{n/2-1}-2^{n/4+5}-32$
| | $36\leq n\leq 48$ | $2^{n-1}-2^{n/2-1}-2^{n/4+6}-32$
| | $n=52$ | $2^{n-1}-2^{n/2-1}-2^{n/4+6}-2^{n/4+5}-32$
| $n\equiv 0\atop{(mod4)}$ | $56\leq n\leq 68$ | $2^{n-1}-2^{n/2-1}-2^{n/4+7}-32$
| | $n=72$ | $2^{n-1}-2^{n/2-1}-2^{n/4+7}-2^{n/4+5}-2^{n/4+4}-32$
| | $76\leq n\leq 100$ | $2^{n-1}-2^{n/2-1}-2^{n/4+8}-32$
| | $n=26$ | $2^{n-1}-2^{n/2-1}-2^{{(n+18)}/4}-32$
4 | | $30\leq n\leq 38$ | $2^{n-1}-2^{n/2-1}-2^{{(n+22)}/4}-32$
| | $n=42$ | $2^{n-1}-2^{n/2-1}-2^{{(n+22)}/4}-2^{{(n+18)}/4}-32$
| $n\equiv 2\atop{(mod4)}$ | $46\leq n\leq 58$ | $2^{n-1}-2^{n/2-1}-2^{{(n+26)}/4}-32$
| | $n=62$ | $2^{n-1}-2^{n/2-1}-2^{{(n+26)}/4}-2^{{(n+22)}/4}-32$
| | $66\leq n\leq 82$ | $2^{n-1}-2^{n/2-1}-2^{{(n+30)}/4}-32$
| | $n=86$ | ${2^{n-1}-2^{n/2-1}-2^{{(n+30)}/4}-2^{{(n+22)}/4}-2^{{(n+18)}/4}-2^{(n+14)/4}-32}$
| | $n=90$ | $2^{n-1}-2^{n/2-1}-2^{{(n+30)}/4}-2^{{(n+26)}/4}-2^{{(n+22)}/4}-32$
| | $94\leq n\leq 118$ | $2^{n-1}-2^{n/2-1}-2^{{(n+34)}/4}-32$
Table 2: $(n,m,n-m-1,N_{f^{\prime}})$ functions ($m\geq 5$) which were not known earlier $(30,5,24,2^{29}-2^{14}-2^{13})$ | $(36,5,30,2^{35}-2^{17}-2^{15}-2^{6})^{*}$
---|---
$(38,5,32,2^{37}-2^{18}-2^{16})$ | $(42,5,36,2^{41}-2^{20}-2^{17}-2^{14}-2^{6})^{*}$
$(44,5,38,2^{43}-2^{21}-2^{18}-2^{6})^{*}$ | $(48,5,42,2^{47}-2^{23}-2^{19}-2^{6})^{*}$
$(54,5,48,2^{53}-2^{26}-2^{21}-2^{6})^{*}$ | $(58,5,52,2^{57}-2^{28}-2^{22}-2^{21}-2^{6})^{*}$
$(60,5,54,2^{59}-2^{29}-2^{23}-2^{6})^{*}$ | $(64,5,48,2^{63}-2^{31}-2^{24}-2^{6})^{*}$
$(70,5,64,2^{69}-2^{34}-2^{26}-2^{6})^{*}$ | $(74,5,68,2^{73}-2^{36}-2^{27}-2^{24}-2^{6})^{*}$
$(76,5,70,2^{75}-2^{37}-2^{28}-2^{6})^{*}$ | $(80,5,74,2^{79}-2^{39}-2^{29}-2^{6})^{*}$
$(84,5,78,2^{83}-2^{41}-2^{30}-2^{6})^{*}$ | $(88,5,82,2^{87}-2^{43}-2^{31}-2^{30}-2^{6})^{*}$
$(90,5,84,2^{89}-2^{44}-2^{32}-2^{6})^{*}$ | $(94,5,88,2^{93}-2^{46}-2^{33}-2^{6})^{*}$
$(98,5,92,2^{97}-2^{48}-2^{34}-2^{32}-2^{6})^{*}$ | $(100,5,94,2^{99}-2^{49}-2^{35}-2^{6})^{*}$
$(34,6,27,2^{33}-2^{16}-2^{15})$ | $(40,6,33,2^{39}-2^{19}-2^{17}-2^{16}-2^{7})^{*}$
$(42,6,35,2^{41}-2^{20}-2^{18})$ | $(48,6,41,2^{47}-2^{23}-2^{20}-2^{7})^{*}$
$(52,6,45,2^{51}-2^{25}-2^{22})$ | $(54,6,47,2^{53}-2^{26}-2^{22}-2^{7})^{*}$
$(60,6,53,2^{59}-2^{29}-2^{24}-2^{7})^{*}$ | $(64,6,47,2^{63}-2^{31}-2^{25}-2^{24}-2^{7})^{*}$
$(66,6,59,2^{65}-2^{32}-2^{26}-2^{7})^{*}$ | $(70,6,63,2^{69}-2^{34}-2^{27}-2^{7})^{*}$
$(76,6,69,2^{75}-2^{37}-2^{29}-2^{7})^{*}$ | $(80,6,73,2^{79}-2^{39}-2^{30}-2^{29}-2^{7})^{*}$
$(82,6,75,2^{81}-2^{40}-2^{31}-2^{7})^{*}$ | $(86,6,79,2^{85}-2^{42}-2^{32}-2^{7})^{*}$
$(90,6,83,2^{89}-2^{44}-2^{33}-2^{32}-2^{7})^{*}$ | $(92,6,85,2^{91}-2^{45}-2^{34}-2^{7})^{*}$
$(96,6,89,2^{95}-2^{47}-2^{35}-2^{7})^{*}$ | $(100,6,93,2^{99}-2^{49}-2^{36}-2^{35}-2^{7})^{*}$
$(38,7,30,2^{37}-2^{18}-2^{17}-2^{16})$ | $(40,7,32,2^{39}-2^{19}-2^{18})$
$(46,7,38,2^{45}-2^{22}-2^{20})$ | $(48,7,40,2^{47}-2^{23}-2^{21})$
$(52,7,44,2^{51}-2^{25}-2^{22}-2^{21}-2^{8})^{*}$ | $(54,7,46,2^{53}-2^{26}-2^{23})$
$(58,7,50,2^{57}-2^{28}-2^{24}-2^{23}-2^{8})^{*}$ | $(60,7,52,2^{59}-2^{29}-2^{25}-2^{8})^{*}$
$(64,7,46,2^{63}-2^{31}-2^{26}-2^{25}-2^{8})^{*}$ | $(66,7,58,2^{65}-2^{32}-2^{27}-2^{8})^{*}$
$(70,7,62,2^{69}-2^{34}-2^{28}-2^{27}-2^{8})^{*}$ | $(72,7,64,2^{71}-2^{35}-2^{29}-2^{8})^{*}$
$(76,7,68,2^{73}-2^{37}-2^{30}-2^{8})^{*}$ | $(78,7,70,2^{77}-2^{38}-2^{31}-2^{8})^{*}$
$(82,7,74,2^{81}-2^{40}-2^{32}-2^{8})^{*}$ | $(86,7,78,2^{85}-2^{42}-2^{33}-2^{32}-2^{8})^{*}$
$(88,7,80,2^{87}-2^{43}-2^{34}-2^{8})^{*}$ | $(92,7,84,2^{91}-2^{45}-2^{35}-2^{8})^{*}$
$(98,7,90,2^{97}-2^{48}-2^{37}-2^{8})^{*}$ | $(100,7,92,2^{99}-2^{49}-2^{38}-2^{8})^{*}$
$(42,8,33,2^{41}-2^{20}-2^{19}-2^{18})$ | $(44,8,35,2^{43}-2^{21}-2^{20})$
$(50,8,41,2^{49}-2^{24}-2^{22}-2^{21})$ | $(52,8,43,2^{51}-2^{25}-2^{23})$
$(58,8,49,2^{57}-2^{28}-2^{25}-2^{9})^{*}$ | $(64,8,45,2^{63}-2^{31}-2^{27}-2^{9})^{*}$
$(68,8,59,2^{67}-2^{33}-2^{25}-2^{29})$ | $(70,8,61,2^{69}-2^{34}-2^{29}-2^{27}-2^{9})^{*}$
$(72,8,63,2^{71}-2^{35}-2^{30}-2^{9})^{*}$ | $(76,8,67,2^{75}-2^{37}-2^{31}-2^{28}-2^{9})^{*}$
$(78,8,69,2^{77}-2^{38}-2^{32}-2^{9})^{*}$ | $(82,8,73,2^{81}-2^{40}-2^{33}-2^{9})^{*}$
$(88,8,79,2^{87}-2^{43}-2^{35}-2^{9})^{*}$ | $(92,8,83,2^{91}-2^{45}-2^{36}-2^{35}-2^{9})^{*}$
$(94,8,85,2^{93}-2^{46}-2^{37}-2^{9})^{*}$ | $(98,8,89,2^{97}-2^{48}-2^{38}-2^{36}-2^{9})^{*}$
$(100,8,91,2^{99}-2^{49}-2^{39}-2^{9})^{*}$ | $(200,8,191,2^{199}-2^{99}-2^{68}-2^{9})^{*}$
$(46,9,36,{2^{45}-2^{22}-2^{21}-2^{20}-2^{19}-2^{10}})^{*}$ | $(48,9,38,2^{47}-2^{23}-2^{22})$
$(54,9,44,2^{53}-2^{26}-2^{24}-2^{23}-2^{22})$ | $(56,9,46,2^{55}-2^{27}-2^{25})$
$(62,9,52,2^{61}-2^{30}-2^{27}-2^{26})$ | $(64,9,44,2^{63}-2^{31}-2^{28})$
$(68,9,58,{2^{67}-2^{33}-2^{29}-2^{28}-2^{27}-2^{10}})^{*}$ | $(70,9,60,2^{69}-2^{34}-2^{30}-2^{10})^{*}$
$(74,9,64,2^{73}-2^{36}-2^{32})$ | $(76,9,66,2^{75}-2^{37}-2^{32}-2^{10})^{*}$
$(80,9,70,2^{79}-2^{39}-2^{34}-2^{10})^{*}$ | $(82,9,72,2^{81}-2^{40}-2^{34}-2^{10})^{*}$
$(88,9,78,2^{87}-2^{43}-2^{36}-2^{10})^{*}$ | $(94,9,84,2^{93}-2^{46}-2^{38}-2^{10})^{*}$
$(98,9,88,2^{97}-2^{48}-2^{39}-2^{38}-2^{10})^{*}$ | $(100,9,90,2^{99}-2^{49}-2^{40}-2^{10})^{*}$
$(52,10,41,2^{51}-2^{25}-2^{24})$ | $(60,10,49,2^{59}-2^{29}-2^{27})$
${\scriptstyle(66,10,55,2^{65}-2^{32}-2^{29}-2^{28}-2^{27}-2^{26}-2^{25}-2^{11})}^{*}$ | $(68,10,57,2^{67}-2^{33}-2^{30})$
$(74,10,63,{2^{73}-2^{36}-2^{32}-2^{30}-2^{11}})^{*}$ | $(76,10,65,2^{75}-2^{37}-2^{33})$
$(80,10,69,{2^{79}-2^{39}-2^{34}-2^{33}-2^{11}})^{*}$ | $(82,10,71,2^{81}-2^{40}-2^{35}-2^{11})^{*}$
$(84,10,73,2^{83}-2^{41}-2^{36}-2^{11})^{*}$ | $(86,10,75,{2^{85}-2^{42}-2^{36}-2^{35}-2^{34}-2^{11}})^{*}$
$(88,10,77,2^{87}-2^{43}-2^{37}-2^{11})^{*}$ | $(92,10,81,{2^{91}-2^{45}-2^{38}-2^{37}-2^{36}-2^{35}-2^{11}})^{*}$
$(94,10,83,2^{93}-2^{46}-2^{39}-2^{11})^{*}$ | $(98,10,87,{2^{97}-2^{48}-2^{40}-2^{39}-2^{38}-2^{11}})^{*}$
$(100,10,89,2^{99}-2^{49}-2^{41}-2^{11})^{*}$ | $(500,10,489,2^{499}-2^{249}-2^{153}-2^{11})^{*}$
$(100,21,78,2^{99}-2^{49}-2^{48})$ | $(200,45,154,2^{199}-2^{99}-2^{98})$
$(184,38,145,2^{183}-2^{91}-2^{89}-2^{87}-2^{86})$ | $(516,116,399,2^{515}-2^{255}-2^{253})$
$(832,200,631,2^{831}-2^{415}-2^{414}-2^{413})$ | ${(10000,2475,7524,2^{9999}-2^{4999}-2^{4998}-2^{4997}-2^{4996})}$
## 6 Improved Version of the Main Construction
Both constant functions and balanced Boolean functions are regarded as
$0$-resilient functions. The Boolean functions that are neither balanced nor
correlation-immune are regarded as $(-1)$-resilient functions (e.g. bent
functions).
_Lemma 4:_ With the same notation as in Definition 4, if $h_{c}$ is a
$v$-resilient function, then $g_{c}$ is a $(wt(c)+v)$-resilient function.
_Proof:_ Let $\alpha\in\mathbb{F}_{2}^{p}$ and $l=c\cdot X^{\prime}_{t}$. It
is not difficult to deduce that
$W_{g_{c}}(\alpha)=W_{l}(\alpha_{i_{1}},\cdots,\alpha_{i_{t}})\cdot
W_{h_{c}}(\alpha_{i_{t+1}},\cdots,\alpha_{i_{p}}).$ When
$wt(\alpha_{i_{1}},\cdots,\alpha_{i_{t}})<wt(c)$,
$W_{l}(\alpha_{i_{1}},\cdots,\alpha_{i_{t}})=0$. From Lemma 1, since $h_{c}$ is
a $v$-resilient function, we have
$\displaystyle~{}~{}W_{h_{c}}(\alpha_{i_{t+1}},\cdots,\alpha_{i_{p}})=0,~{}~{}\textrm{for
$wt(\alpha_{i_{t+1}},\cdots,\alpha_{i_{p}})\leq v$}.$
Obviously, $W_{g_{c}}(\alpha)=0$ when $wt(\alpha)\leq wt(c)+v$. From Lemma 1,
$g_{c}$ is a $(wt(c)+v)$-resilient function. $\Box$
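A brute-force check of Lemma 4 on a small example of our own choosing ($t=3$, $c=(1,1,0)$, and $h=x_{4}x_{5}\oplus x_{6}$, which is $0$-resilient) is given below; the composite function is indeed $(wt(c)+v)=2$-resilient.

```python
# Brute-force check of Lemma 4 on a 6-variable toy example:
# c = (1,1,0) with wt(c) = 2, h(x4,x5,x6) = x4 x5 + x6 is 0-resilient,
# so g = x1 + x2 + h(x4,x5,x6) should be 2-resilient.
from itertools import product

def walsh_value(f, n, alpha):
    return sum((-1) ** (f(x) ^ (sum(a & b for a, b in zip(alpha, x)) % 2))
               for x in product((0, 1), repeat=n))

def resiliency_order(f, n):
    """Largest m such that W_f(alpha) = 0 for all wt(alpha) <= m (or -1)."""
    order = -1
    for m in range(n + 1):
        if all(walsh_value(f, n, a) == 0
               for a in product((0, 1), repeat=n) if sum(a) <= m):
            order = m
        else:
            break
    return order

h = lambda y: (y[0] & y[1]) ^ y[2]
g = lambda x: x[0] ^ x[1] ^ h(x[3:])

print(resiliency_order(h, 3))   # 0
print(resiliency_order(g, 6))   # 2  (= wt(c) + v)
```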
_Construction 3:_ Let $n\geq 12$ be an even number, $m$ be a positive number,
$e_{k}$ be a nonnegative number with $0\leq e_{k}\leq k+1$, and
$a_{k}\in\mathbb{F}_{2}$ ($k=1,\cdots,\lfloor n/4\rfloor$) such that
$\displaystyle\sum_{i=m+1}^{n/2}{{n/2}\choose i}+\sum_{k=1}^{\lfloor
n/4\rfloor}\left(a_{k}\cdot\sum_{j=m-e_{k}+1}^{n/2-2k}{{n/2-2k}\choose
j}\right)\geq 2^{n/2}.$ (55)
Let $X_{n/2}=(x_{1},\cdots,x_{n/2})\in\mathbb{F}_{2}^{n/2}$,
$X^{\prime}_{t}=(x_{1},\cdots,x_{t})\in\mathbb{F}_{2}^{t}$ and
$X^{\prime\prime}_{2k}=(x_{t+1},\cdots,x_{n/2})\in\mathbb{F}_{2}^{2k}$, where
$t+2k=n/2$. Let
$\displaystyle\Omega_{0}=\\{c\cdot X_{n/2}\ |\ c\in\mathbb{F}_{2}^{n/2},\
wt(c)>m\\}.$ (56)
For $1\leq k\leq\lfloor n/4\rfloor$ and $0\leq e_{k}\leq m+1$, let $R_{k}$ be
a nonempty set of nonlinear $(2k,e_{k}-1,-,N_{h_{k}})$ functions with high
nonlinearity and
$\displaystyle\Omega_{k}=\\{c\cdot X^{\prime}_{t}\oplus
h_{c}(X^{\prime\prime}_{2k})~{}|~{}c\in\mathbb{F}_{2}^{t},wt(c)>m-e_{k}\\}$
(57)
where $h_{c}\in R_{k}$. Set
$\displaystyle\Omega=\bigcup_{k=0}^{\lfloor n/4\rfloor}\Omega_{k}.$ (58)
Denote by $\varphi$ any injective mapping from $\mathbb{F}_{2}^{n/2}$ to
$\Omega$ such that there exists an $(n/2,m,n/2-m-1,N_{g_{b}})$ function
$g_{b}\in\Omega$ with $\varphi^{-1}(g_{b})\neq\emptyset$. We construct the
function $f\in\mathcal{B}_{n}$ as follows:
$\displaystyle
f(Y_{n/2},X_{n/2})=\bigoplus_{b\in\mathbb{F}_{2}^{n/2}}Y_{n/2}^{b}\cdot\varphi(b)$
(59)
_Theorem 3:_ If $f\in\mathcal{B}_{n}$ is proposed by Construction 3, then $f$
is an almost optimal $(n,m,n-m-1,N_{f})$ function with
$\displaystyle N_{f}\geq 2^{n-1}-2^{n/2-1}-\sum_{k=1}^{\lfloor
n/4\rfloor}(a_{k}\cdot 2^{n/2-2k}\cdot(2^{2k-1}-N_{h_{k}})).$ (60)
_Proof:_ For any
$(\beta,\alpha)\in\mathbb{F}_{2}^{n/2}\times\mathbb{F}_{2}^{n/2}$ we have
$\displaystyle W_{f}(\beta,\alpha)$ $\displaystyle=$
$\displaystyle\sum_{k=0}^{\lfloor n/4\rfloor}\sum_{\varphi(b)\in\Omega_{k}\atop
b\in\mathbb{F}_{2}^{n/2}}(-1)^{\beta\cdot b}W_{g_{b}}(\alpha)$ (61)
Let
$\displaystyle A_{k}=\Omega_{k}\cap\\{\varphi(b)\ |\
b\in\mathbb{F}_{2}^{n/2}\\}.$ (62)
Note that each $\Omega_{k}$ ($k=0,1,\cdots,\lfloor n/4\rfloor$) is a set of
disjoint spectra functions. Similarly to the proof of Theorem 1, we obtain
$\displaystyle|W_{f}(\beta,\alpha)|\leq 2^{n/2}+\sum_{k=1}^{\lfloor n/4\rfloor}a_{k}\cdot
2^{n/2-2k}\cdot(2^{2k}-2N_{h_{k}})$ (63)
where
$a_{k}=\left\\{\begin{array}[]{ll}0&\textrm{if $A_{k}=\emptyset$}\\\
1&\textrm{if $A_{k}\neq\emptyset$}.\end{array}\right.$
From (3), Inequality (60) holds.
From Lemma 4, all the functions in $\Omega$ are $m$-resilient functions. Due
to Lemma 2, $f$ is an $m$-resilient function. Because a degree-optimized
function $g_{b}\in\Omega$ with $\varphi^{-1}(g_{b})\neq\emptyset$ appears in
the concatenation, we have $deg(f)=n-m-1$. $\Box$
For fixed $n$ and $m$, we can also obtain many degree-optimized resilient
functions whose nonlinearities are higher than those of the functions given by
Construction 1, as the following example shows.
_Example 2:_ It is possible to construct a
$(28,1,26,2^{27}-2^{13}-2^{8}-2^{6})$ function. Let
$\Omega_{0}=\\{c\cdot X_{14}~{}|~{}c\in\mathbb{F}_{2}^{14},~{}wt(c)\geq 2\\}$
and
$\Omega_{5}=\\{c\cdot X^{\prime}_{4}\oplus
h_{c}(X^{\prime\prime}_{10})~{}|~{}c\in\mathbb{F}_{2}^{4},~{}h_{c}\in
R_{5}\\}$
where $R_{5}$ is a nonempty set of $(10,1,8,492)$ functions [8]. Note that
$|\Omega_{0}|=16369$ and $|\Omega_{5}|=16$. Since $16369+16>2^{14}$, it is possible
to select $2^{14}$ $14$-variable $1$-resilient functions from
$\Omega_{0}\cup\Omega_{5}$. We concatenate these functions and obtain a
$(28,1,26,2^{27}-2^{13}-2^{8}-2^{6})$ function.
Similarly, one can obtain the following resilient functions:
$(36,3,32,2^{35}-2^{17}-2^{13})$, $(42,5,36,2^{41}-2^{20}-2^{17}-2^{14})$,
$(66,10,55,2^{65}-2^{32}-2^{29}-2^{28}-2^{27}-2^{26}-2^{25})$,
$(86,4,81,2^{85}-2^{42}-2^{29}-2^{27}-2^{26}-2^{18}-2^{14}-2^{13}-2^{5})$,
etc.
## 7 Conclusion and an Open Problem
In this paper, we described a technique for constructing resilient functions
with good nonlinearity on a large even number of variables. As a consequence,
we obtained general constructions of functions that were not known earlier.
Sarkar and Maitra [23] have shown that the nonlinearity of any
$(n,m,n-m-1,N_{f})$ function ($m\leq n-2$) is divisible by $2^{m+2}$, and they
deduced the following result: if $n$ is even and $m\leq n/2-2$, then
$N_{f}\leq 2^{n-1}-2^{n/2-1}-2^{m+1}$. We suspect that this upper bound can be
improved, and therefore propose the following open problem:
Do there exist $n$-variable ($n$ even), $m$-resilient ($m\geq 0$) functions
with nonlinearity $>2^{n-1}-2^{n/2-1}-2^{\lfloor n/4\rfloor+m-1}$? If so, how
can such functions be constructed?
_Conjecture:_ Let $n\geq 12$ be even and $m\leq n/2-2$. For any
$(n,m,-,N_{f})$ function, the following inequality always holds:
$\displaystyle N_{f}\leq 2^{n-1}-2^{n/2-1}-2^{\lfloor n/4\rfloor+m-1}.$ (64)
## References
* [1] P. Camion, C. Carlet, P. Charpin, and N. Sendrier, “On correlation-immune functions,” in Advances in Cryptology - CRYPTO’91 (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 1992, vol. 547, pp. 86-100.
* [2] C. Carlet, “A larger class of cryptographic Boolean functions via a study of the Maiorana-McFarland constructions,” in Advances in Cryptology - CRYPTO 2002 (Lecture Notes in Computer Science), Berlin, Germany: Springer-Verlag, 2002, vol. 2442, pp. 549-564.
* [3] S. Chee, S. Lee, D. Lee, and S. H. Sung, “On the correlation immune functions and their nonlinearity,” in Advances in Cryptology - Asiacrypt’96 (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 1997, vol. 1163, pp. 232-243.
* [4] J. Clark, J. Jacob, S. Stepney, S. Maitra, and W. Millan, “Evolving Boolean functions satisfying multiple criteria,” in Progress in INDOCRYPT 2002 (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 2002, vol. 2551, pp. 246-259.
* [5] J. F. Dillon, Elementary Hadamard difference set, Ph.D. Thesis, University of Maryland, 1974.
* [6] H. Dobbertin, “Construction of bent functions and balanced Boolean functions with high nonlinearity,” in Workshop on Fast Software Encryption (FSE 1994) (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 1995, vol. 1008, pp. 61-74.
* [7] M. Fedorova and Y. V. Tarannikov, “On the constructing of highly nonlinear resilient Boolean functions by means of special matrices,” in Progress in Cryptology - INDOCRYPT 2001 (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 2001, vol. 2247, pp. 254-266.
* [8] S. Kavut, S. Maitra, and M. D. Yücel, “Search for Boolean functions with excellent profiles in the rotation symmetric class,” IEEE Transactions on Information Theory, vol. 53, no. 5, pp. 1743-1751, 2007.
* [9] K. Khoo, G. Gong, and H.-K. Lee, “The rainbow attack on stream ciphers based on Maiorana-McFarland functions,” in Applied Cryptography and Network Security - ACNS 2006 (Lecture Notes in Computer Science), Berlin, Germany: Springer-Verlag, 2006, vol. 3989, pp. 194-209.
* [10] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Amsterdam, The Netherlands: North-Holland, 1977.
* [11] S. Maitra and E. Pasalic, “A Maiorana-McFarland type construction for resilient Boolean functions on variables ($n$ even) with nonlinearity $>2^{n-1}-2^{n/2}+2^{n/2-2}$,” Discrete Applied Mathematics, vol. 154, pp. 357-369, 2006.
* [12] S. Maitra and E. Pasalic, “Further constructions of resilient Boolean functions with very high nonlinearity,” IEEE Transactions on Information Theory, vol. 52, no. 5, pp. 2269-2270, 2006.
* [13] W. Meier and O. Staffelbach, “Nonlinearity criteria for cryptographic functions,” in Advances in Cryptology - EUROCRYPT’89 (Lecture Notes in Computer Science), Berlin, Germany: Springer-Verlag, 1990, vol. 434, pp. 549-562.
* [14] W. Millan, A. Clark, and E. Dawson, “An effective genetic algorithm for finding highly nonlinear Boolean functions,” in Proceedings of the First International Conference on Information and Communication Security (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 1997, vol. 1334, pp. 149-158.
* [15] W. Millan, A. Clark, and E. Dawson, “Boolean function design using hill climbing methods,” in Proceedings of the 4th Australasian Conference on Information Security and Privacy (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 1999, vol. 1587, pp. 1-11.
* [16] W. Millan, A. Clark, and E. Dawson. “Heuristic design of cryptographically strong balanced Boolean functions,” in Advances in Cryptology - EUROCRYPT’98 (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 1998, vol. 1403, pp. 489-499.
* [17] E. Pasalic, “Maiorana-McFarland class: degree optimization and algebraic properties,” IEEE Transactions on Information Theory, vol. 52, no.10, pp. 4581-4594, 2006.
* [18] E. Pasalic, S. Maitra, T. Johansson, and P. Sarkar, “New constructions of resilient and correlation immune Boolean functions achieving upper bounds on nonlinearity,” in Workshop on Coding and Cryptography - WCC 2001, Paris, France, Jan. 8-12, 2001. Published in Electronic Notes in Discrete Mathematics. Amsterdam, The Netherlands: Elsevier Science, 2001, vol. 6, pp. 158-167.
* [19] O. S. Rothaus, On ’bent’ functions, Journal of Combinatorial Theory, Ser. A, vol. 20, pp. 300-305, 1976.
* [20] Z. Saber, M. F. Uddin, and A. Youssef, “On the existence of $(9,3,5,240)$ resilient functions,” IEEE Transactions on Information Theory, vol. 48, no. 7, pp. 1825-1834, 2002.
* [21] P. Sarkar and S. Maitra, “Construction of nonlinear Boolean functions with important cryptographic properties,” in Advances in Cryptology - EUROCRYPT 2000 (Lecture Notes in Computer Science), Berlin, Germany: Springer-Verlag, 2000, vol. 1807, pp. 485-506.
* [22] P. Sarkar and S. Maitra, “Efficient implementation of cryptographically useful large Boolean functions,” IEEE Transactions on Computers, vol. 52, no. 4, pp. 410-417, 2003.
* [23] P. Sarkar and S. Maitra, “Nonlinearity bounds and constructions of resilient Boolean functions,” in Advances in Cryptology - CRYPTO 2000 (Lecture Notes in Computer Science), Berlin, Germany: Springer-Verlag, 2000, vol. 1880, pp. 515-532.
* [24] P. Sarkar and S. Maitra, “Construction of nonlinear resilient Boolean functions using small affine functions,” IEEE Transactions on Information Theory, vol. 50, no. 9, pp. 2185-2193, 2004.
* [25] J. Seberry, X.-M. Zhang, and Y. Zheng, “Nonlinearity and propagation characteristics of balanced Boolean functions,” Information and Computation, vol. 119, pp. 1-13, 1995.
* [26] J. Seberry, X.-M. Zhang, and Y. Zheng, “Nonlinearly balanced Boolean functions and their propagation characteristics,” in Advances in Cryptology - CRYPTO’93 (Lecture Notes in Computer Science), Berlin, Germany: Springer-Verlag, 1994, vol. 773, pp. 49-60.
* [27] J. Seberry, X.-M. Zhang, and Y. Zheng, “On constructions and nonlinearity of correlation immune Boolean functions,” in Advances in Cryptology - EUROCRYPT’93 (Lecture Notes in Computer Science), Berlin, Germany: Springer-Verlag, 1994, vol. 765, pp. 181-199.
* [28] T. Siegenthaler, “Correlation-immunity of nonlinear combining functions for cryptographic applications,” IEEE Transactions on Information Theory, vol. 30, no.5, pp. 776-780, 1984.
* [29] Y. V. Tarannikov, “On resilient Boolean functions with maximum possible nonlinearity,” in Progress in Cryptology - INDOCRYPT 2000 (Lecture Notes in Computer Science). Berlin, Germany: Springer Verlag, 2000, vol. 1977, pp. 19-30.
* [30] Y. V. Tarannikov, “New constructions of resilient Boolean functions with maximal nonlinearity,” in Workshop on Fast Software Encryption (FSE 2001) (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, 2001, vol. 2355, pp. 66-77.
* [31] G. Z. Xiao and J. L. Massey, “A spectral characterization of correlation-immune combining functions,” IEEE Transactions on Information Theory, vol. 34, no. 3, pp. 569-571, 1988.
|
arxiv-papers
| 2009-05-06T10:05:40 |
2024-09-04T02:49:02.346911
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "WeiGuo Zhang and GuoZhen Xiao",
"submitter": "Weiguo Zhang",
"url": "https://arxiv.org/abs/0905.0794"
}
|
0905.0799
|
# Comment on “Single-mode excited entangled coherent states”
Hong-chun Yuan1 and Li-yun Hu1,2
1Department of Physics, Shanghai Jiao Tong University, Shanghai 200030, China
2 College of Physics and Communication Electronics, Jiangxi Normal University,
Nanchang 330022, China Corresponding author: hlyun2008@126.com
###### Abstract
In Xu and Kuang (J. Phys. A: Math. Gen. 39 (2006) L191), the authors claim
that, for single-mode excited entangled coherent states
$\left|\Psi_{\pm}(\alpha,m)\right\rangle$, “the photon excitations lead to the
decrease of the concurrence in the strong field regime of
$\left|\alpha\right|^{2}$ and the concurrence tends to zero when
$\left|\alpha\right|^{2}\rightarrow\infty$”. This is wrong.
PACS number: 03.65.Ud; 03.67.Hk
In the recent paper [1], single-mode excited entangled coherent states
(SMEECSs) $\left|\Psi_{\pm}(\alpha,m)\right\rangle$ are introduced and their
entanglement characteristics and the influence of photon excitations on
quantum entanglement are also investigated. They claim that “the photon
excitations lead to the decrease of the concurrence in the strong field regime
of $\left|\alpha\right|^{2}$ and the concurrence tends to zero when
$\left|\alpha\right|^{2}\rightarrow\infty$”. Unfortunately, however, this
conclusion is wrong.
First, we recall the entangled coherent states (ECS) [2]
$\left|\Psi_{\pm}(\alpha,0)\right\rangle=N_{\pm}(\alpha,0)(\left|\alpha,\alpha\right\rangle\pm\left|-\alpha,-\alpha\right\rangle),$
(1)
where
$\left|\alpha,\alpha\right\rangle\equiv\left|\alpha\right\rangle_{a}\otimes\left|\alpha\right\rangle_{b}$
with $\left|\alpha\right\rangle_{a}$ and $\left|\alpha\right\rangle_{b}$ being
the usual coherent state in $a$ and $b$ modes, respectively, and
$\left(N_{\pm}(\alpha,0)\right)^{-2}=2\left[1\pm\exp\left(-4\left|\alpha\right|^{2}\right)\right],$
(2)
are the normalization constants.
The SMEECSs are obtained through the action of a creation operator of a single-
mode optical field on the ECSs, and are expressed as
$\left|\Psi_{\pm}(\alpha,m)\right\rangle=N_{\pm}(\alpha,m)a^{\dagger
m}(\left|\alpha,\alpha\right\rangle\pm\left|-\alpha,-\alpha\right\rangle),$
(3)
where without any loss of generality we consider $m$-photon excitations of the
mode $a$ in the ECS and $N_{\pm}(\alpha,m)$ represents the normalization
factor. Using the identity of operator [3]
$a^{n}a^{\dagger m}=\left(-i\right)^{n+m}\colon
H_{m,n}\left(ia^{\dagger},ia\right)\colon$ (4)
where the symbol $\colon\colon$ represents the normal ordering for Bosonic
operators $\left(a^{\dagger},a\right)$, and $H_{m,n}(\eta,\eta^{\ast})$ is the
two-variable Hermite polynomial[4],
$H_{m,n}(\eta,\eta^{\ast})=\sum_{l=0}^{\min(m,n)}\frac{\left(-1\right)^{l}n!m!}{l!\left(m-l\right)!\left(n-l\right)!}\eta^{m-l}\eta^{\ast
n-l},$ (5)
we can easily obtain
$\left\langle\alpha\right|a^{m}a^{\dagger
m}\left|\alpha\right\rangle=m!L_{m}(-\left|\alpha\right|^{2}),\text{ \
}\left\langle\alpha\right|a^{m}a^{\dagger
m}\left|-\alpha\right\rangle=m!e^{-2\left|\alpha\right|^{2}}L_{m}\left(\left|\alpha\right|^{2}\right),$
(6)
and directly calculate the normalization factor
$\left[N_{\pm}(\alpha,m)\right]^{-2}=2m!\left[L_{m}\left(-\left|\alpha\right|^{2}\right)\pm
e^{-4\left|\alpha\right|^{2}}L_{m}\left(\left|\alpha\right|^{2}\right)\right],$
(7)
where $L_{m}(x)$ is the $m$th-order Laguerre polynomial defined by [5]
$L_{m}(x)=\sum_{l=0}^{m}\frac{(-1)^{l}m!x^{l}}{\left(l!\right)^{2}(m-l)!}.$
(8)
It is quite clear that when $m=0$, Eq.(3) reduces to the usual ECSs in Eq.(1).
Eq.(7) is valid for any integer $m$ (including the case of $m=0$) which is
different from Eq.(4) of Ref.[1].
Next, we calculate the concurrence for the SMEECSs. Noting that the photon-
added coherent state (PACS) $\left|\alpha,m\right\rangle$ is defined by [5]
$\left|\alpha,m\right\rangle=\frac{a^{\dagger
m}\left|\alpha\right\rangle}{\sqrt{m!L_{m}\left(-\left|\alpha\right|^{2}\right)}},$
(9)
thus the SMEECSs $\left|\Psi_{\pm}(\alpha,m)\right\rangle$ in terms of the
PACSs can be rewritten as
$\left|\Psi_{\pm}(\alpha,m)\right\rangle=M_{\pm}(\alpha,m)(\left|\alpha,m\right\rangle\otimes\left|\alpha\right\rangle\pm\left|-\alpha,m\right\rangle\otimes\left|-\alpha\right\rangle),$
(10)
where the normalization constant $M_{\pm}(\alpha,m)$ is determined by
$\left[M_{\pm}(\alpha,m)\right]^{2}=\frac{L_{m}\left(-\left|\alpha\right|^{2}\right)}{2\left[L_{m}\left(-\left|\alpha\right|^{2}\right)\pm
e^{-4\left|\alpha\right|^{2}}L_{m}\left(\left|\alpha\right|^{2}\right)\right]}.$
(11)
Following the approach of Ref.[6] and considering Eqs.(6) and (9), for the
SMEECSs $\left|\Psi_{\pm}(\alpha,m)\right\rangle,$ the concurrence can be
calculated as
$C_{\pm}(\alpha,m)=\frac{\sqrt{\left(1-p_{1}^{2}\right)\left(1-p_{2}^{2}\right)}}{1\pm
p_{1}p_{2}},$ (12)
where
$p_{1}=\left\langle\alpha,m\right|\left.-\alpha,m\right\rangle=\frac{\exp(-2\left|\alpha\right|^{2})L_{m}\left(\left|\alpha\right|^{2}\right)}{L_{m}\left(-\left|\alpha\right|^{2}\right)},$
(13)
and
$p_{2}=\left\langle\alpha\right|\left.-\alpha\right\rangle=\exp(-2\left|\alpha\right|^{2}).$
(14)
Substituting Eqs. (13) and (14) into Eq. (12), we see that
$C_{\pm}(\alpha,m)=\frac{\left[\left(L_{m}^{2}\left(-\left|\alpha\right|^{2}\right)-e^{-4\left|\alpha\right|^{2}}L_{m}^{2}\left(\left|\alpha\right|^{2}\right)\right)\left(1-e^{-4\left|\alpha\right|^{2}}\right)\right]^{1/2}}{L_{m}\left(-\left|\alpha\right|^{2}\right)\pm
e^{-4\left|\alpha\right|^{2}}L_{m}\left(\left|\alpha\right|^{2}\right)},$ (15)
which is another expression different from Eqs.(23) and (24) in Ref.[1]. In
particular, when $m=0$, Eq.(15) becomes
$C_{+}(\alpha,0)=\frac{1-e^{-4\left|\alpha\right|^{2}}}{1+e^{-4\left|\alpha\right|^{2}}},C_{-}(\alpha,0)=1.$
(16)
This implies that the concurrence $C_{+}(\alpha,0)$ of the ECS
$\left|\Psi_{+}(\alpha,0)\right\rangle$ increases with
$\left|\alpha\right|^{2}$, while $C_{-}(\alpha,0)$ is independent of
$\left|\alpha\right|^{2}$ and $\left|\Psi_{-}(\alpha,0)\right\rangle$ is a
maximally entangled state.
In order to see clearly how the concurrence depends on the parameter $m$, the
concurrences $C$ for the states $\left|\Psi_{\pm}(\alpha,m)\right\rangle$ are
shown as functions of $\left|\alpha\right|^{2}$ in Figs. 1 and 2. They show
that $C_{\pm}(\alpha,m)$ increases with $\left|\alpha\right|^{2}$ for any given
$m$; in particular, $C_{\pm}(\alpha,m)$ tends to unity for large
$\left|\alpha\right|^{2}$. These conclusions are completely different from
those of Ref. [1].
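Eq. (15) is straightforward to evaluate numerically. The short Python sketch below (our own illustration, using SciPy's Laguerre polynomials) reproduces the behaviour shown in Figs. 1 and 2: $C_{\pm}(\alpha,m)$ grows with $\left|\alpha\right|^{2}$ and approaches unity, rather than decreasing as claimed in Ref. [1].

```python
# Numerical evaluation of Eq. (15): concurrence of |Psi_pm(alpha, m)> versus
# |alpha|^2, using scipy's Laguerre polynomials (illustration only).
import numpy as np
from scipy.special import eval_laguerre

def concurrence(a2, m, sign=+1):
    """C_pm(alpha, m) from Eq. (15); a2 = |alpha|^2, sign = +1 or -1."""
    lp = eval_laguerre(m, -a2)        # L_m(-|alpha|^2)
    lm = eval_laguerre(m, a2)         # L_m(+|alpha|^2)
    e4 = np.exp(-4.0 * a2)
    num = np.sqrt((lp**2 - e4 * lm**2) * (1.0 - e4))
    return num / (lp + sign * e4 * lm)

a2_grid = np.array([0.25, 1.0, 2.0, 4.0, 6.0])
for m in (0, 1, 5, 20):
    print(f"m={m:2d}  C_+ :", np.round(concurrence(a2_grid, m, +1), 3))
# C_+ rises toward 1 as |alpha|^2 grows for every m, and C_-(alpha,0) = 1,
# consistent with Eq. (16) and Figs. 1 and 2 (and contrary to Ref. [1]).
```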
## References
* [1] Xu L and Kuang L M 2006 J. Phys. A: Math. Gen. 39 L191
* [2] Sanders B C 1992 Phys. Rev. A 45 6811; Luis A 2001 Phys. Rev. A 64 054102
* [3] Hu L Y and Fan H Y 2009 Phys. Scr. 79 035004
* [4] Wünsche A 2000 J. Phys. A: Math. Gen. 33 1603; Hu L Y and Fan H Y 2009 Chin. Phys. B 18 0902
* [5] Agarwal G S and Tara K 1991 Phys. Rev. A 43 492
* [6] Wang X G 2002 J. Phys. A: Math. Gen. 35 165; Rungta P et al 2001 Phys. Rev. A 64 042315
Figure 1: Concurrence of entanglement of
$\left|\phi_{+}(\alpha,m)\right\rangle$ as a function of
$\left|\alpha\right|^{2}$ for the different photon excitations with $m$
$=0$(solid line), $m=1$(dashed line), $m=3$(dotted line), $m=5$(dash-dotted
line), and $m=20$(dash-dash-dotted line), respectively. Figure 2: Concurrence
of entanglement of $\left|\phi_{-}(\alpha,m)\right\rangle$ as a function of
$\left|\alpha\right|^{2}$ for the different photon excitations with
$m=0$(solid line), $m=3$(dashed line), $m=5$(dotted line), $m=10$(dash-dotted
line), and $m=20$(dash-dash-dotted line), respectively.
|
arxiv-papers
| 2009-05-06T10:50:23 |
2024-09-04T02:49:02.353908
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Hong-chun Yuan and Li-yun Hu",
"submitter": "Liyun Hu",
"url": "https://arxiv.org/abs/0905.0799"
}
|
0905.0808
|
# The unusual spectral energy distribution of LBQS 0102-2713
Th. Boller (Max-Planck-Institut für extraterrestrische Physik, Garching,
Germany; bol@mpe.mpg.de), K. Linguri (Secondary School Neu-Isenburg, Germany),
T. Heftrich (Johann Wolfgang Goethe-University Frankfurt am Main, Germany),
and M. Weigand (Johann Wolfgang Goethe-University Frankfurt am Main, Germany)
###### Abstract
We have studied the spectral energy distribution of the quasar LBQS 0102-2713.
The available multiwavelength data in the observers frame are one optical
spectrum between 3200 and 7400 Å (Morris et al. 1991), 7 HST FOS spectra
between 1700 and 2300 Å (York et al. 1990), one GALEX NUV flux density and a
$\rm K_{S}$ magnitude obtained from NED, and 3 public ROSAT PSPC pointed
observations in the 0.1$-$2.4 keV energy band. The $\rm\alpha_{ox}$ values
obtained from the HST FOS spectra, the optical spectrum, and the ROSAT observations
are -2.3 and -2.2, respectively, comparable to BAL quasars (e.g. Gallagher et
al. 2006). The 2500 Å luminosity density is about a factor of 10 higher
compared to the mean of the most luminous SDSS quasars (Richards et al. 2006,
their Fig. 11). The 2 keV $\nu L_{\nu}$ value is lower by about a factor of 10
compared to the radio loud quasars shown in Fig. 10 of Richards et al. (2006).
LBQS 0102-2713 exhibits one of the steepest soft X-ray photon indices obtained
so far. For a simple power law fit with leaving the $\rm N_{H}$ free in the
fit we obtain a photon index of $\rm\Gamma=6.0\pm 1.3$. Fixing the $\rm N_{H}$
value to the Galactic value the photon index still remains steep with a value
of about 3.5. We argue that LBQS 0102-2713 is similar to BAL quasars with
respect to their UV brightness and 2 keV X-ray weakness. However, the
absorption by neutral matter is significantly lower than in BAL quasars. The
X-ray weakness is most probably not due to intrinsic X-ray weakness, based on
the UV line strengths, which are comparable to the values reported for quasar
composites (e.g. Brotherton et al. 2001, Vanden Berk et al. 2001, or Zheng et
al. 1997). If the X-ray weakness is confirmed in future observations, LBQS
0102-2713 might be indicative of a new class of quasars with an unusual
combination of UV, X-ray, and $\rm N_{H}$ properties.
###### Subject headings:
AGN: general — Quasars: individual LBQS 0102-2713
## 1\. Introduction
We report on the extreme ultraviolet-to-X-ray spectral energy distribution
(SED) of the quasar LBQS 0102-2713 using the spectral index $\rm\alpha_{ox}$.
The $\rm\alpha_{ox}$ value is defined as $\rm\alpha_{ox}=0.384\times log\
(L_{2keV})/(L_{2500\AA})$. The $\rm\alpha_{ox}$ value relates the relative
efficiencies of emitted ultraviolet disk photons to the hard X-ray photons.
This value is therefore an important tool to provide quantitative and
qualitative constrains on models of the physical association between the UV
and X-ray emission. The $\rm\alpha_{ox}$ value obtained from the HST FOS
spectra (York et al. 1990), the optical spectrum (Morris et al. 1991), and the
ROSAT data are -2.3 or -2.2, respectively. The $\rm\alpha_{ox}$ values are
similar compared to BAL quasars (e.g. Gallagher et al. 2006). The authors have
analyzed 35 BAL quasars based on Chandra observations. Their $\rm\alpha_{ox}$
values range between -1.65 and -2.48. The majority of the objects have values
smaller than -2.0. It is argued that the X-ray weakness of BAL quasars is due
to neutral intrinsic absorption with column densities between about
$\rm(0.1-10)\times 10^{23}\ cm^{-2}$. Since more soft X-ray photons are
observed than expected for a simple neutral absorber, the absorption is
assumed to be more complex.
Partial covering or ionized absorbers can account for this observational fact.
Gibson et al. (2008) have analyzed Chandra and XMM-Newton observations of 536
SDSS quasars. They find that radio-quiet BAL quasars tend to have steeper
$\rm\alpha_{ox}$ values compared to non-BAL quasars (their Fig. 3). They
constrain the fraction of X-ray weak non-BAL quasars and find that such
objects are rare. Leighly et al. (2007) report an $\rm\alpha_{ox}$ value of
-2.3 in the quasar PHL 1811. Miniutti et al. (2009, submitted to MNRAS) found
values between -1.5 and -4.3 in PHL 1092. Similar results have been obtained
by Strateva et al. (2005), Vignali et al. (2003) and Green et al. (2008).
However most of the objects with $\rm\alpha_{ox}$ values close to -2 are upper
limits. The dependence on $\rm\alpha_{ox}$ as a function of redshift and the
rest frame ultraviolet luminosity has been investigated by Vignali et al.
(2003), Strateva et al. (2005), and Green et al. (2008).
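For orientation, the $\rm\alpha_{ox}$ definition is trivial to evaluate. The following short sketch (our own illustration) uses the 2500 Å luminosity density derived in Section 4.1 and the quoted $\rm\alpha_{ox}\approx-2.3$ to show the 2 keV luminosity density such a value implies; the 2 keV value below is inferred from the definition, not an independent measurement.

```python
# Illustration of alpha_ox = 0.384 log10(L_2keV / L_2500A).  The numbers are
# only meant to show the scale implied by alpha_ox ~ -2.3: L_2500A is taken
# from Section 4.1 and L_2keV is then inferred, not measured here.
import math

def alpha_ox(l_2kev, l_2500):
    """Spectral index between the 2500 Angstrom and 2 keV luminosity densities."""
    return 0.384 * math.log10(l_2kev / l_2500)

l_2500 = 1.6e31                              # erg/s/Hz (Section 4.1)
l_2kev = l_2500 * 10 ** (-2.3 / 0.384)       # value implied by alpha_ox = -2.3
print(f"implied L_2keV ~ {l_2kev:.1e} erg/s/Hz,"
      f" alpha_ox = {alpha_ox(l_2kev, l_2500):.2f}")
```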
We also concentrate on the spectral energy distribution in the soft (0.1-2.4
keV) X-ray band based on ROSAT PSPC observations. We find that LBQS 0102-2713
might have an extreme value of the photon index of $\rm\Gamma=(6.0\pm 1.3)$.
X-ray observations have shown Narrow-Line Seyfert 1 Galaxies (NLS1s) to have
steep soft X-ray spectra as a class, first reported by Puchnarewicz et al.
(1992). A significant correlation between the slope of the 0.1$-$2.4 keV X-ray
continua and the FWHM of the $\rm H\beta$ line was found for Seyfert 1
galaxies (Boller et al. 1996). The distribution of the values $\rm\Gamma$ and
FWHM $\rm H\beta$ shows a continuous increase in the slope of the spectral
continuum distribution with decreasing FWHM $\rm H\beta$ line width. This
suggests that narrow- and broad-line Seyfert 1 galaxies form essentially the
same class of objects and that there might be an underlying physical parameter
which controls the distribution of objects. Smaller black hole masses could
account for narrow $\rm H\beta$ line widths, which requires that NLS1s accrete
at higher fractions of their Eddington rates to maintain their
relatively normal observed luminosities (Boller et al. 1996). XMM-Newton
observations of IRAS 13224-3809 (Boller et al. 2003) and 1H 0707-495 (Boller
et al. (2003) have confirmed that a shifted accretion disc spectrum accounts
for the steep ROSAT photon indices. Steep soft photon indices have also been
reported by other authors (e.g. New Astronomy Reviews, 2000). Walter & Fink
(1993) found values ranging between 1.5 and 3.4. Fiore et al. (1998) have used
ASCA data on the quasars NAB 0205+024 and PG 1244+026 and found photon indices
of $\rm\Gamma=(3.22\pm 0.24)\ and\ (2.95\pm 0.28)$, respectively. George et
al. (2000) report a photon index of $\rm 4.18^{+0.82}_{-1.1}$ in PG 0003+199.
Similarly to the correlation found in the soft energy range, a correlation
between the slope of the 2$-$10 keV photon index, obtained from power-law fits
to ASCA observations of broad- and narrow-line Seyfert 1 galaxies, and the
width of the FWHM $\rm H\beta$ line, has been discovered by Brandt et al.
(1997). Steep 2$-$10 keV X-ray continua, with values of the photon index
between 1.9 and 2.6, are characteristic of NLS1s. One possible explanation
suggested is that NLS1s may exhibit a cooler accretion disc corona (Pounds et
al. 1995). A more detailed description of our knowledge on NLS1s is given in:
The Universe in X-rays, sections 22.2 to 22.5, Eds. Trümper & Hasinger (2008).
The multiwavelength SED of radio-loud quasars has been investigated by
Richards et al. (2006), building upon the quasar SED published by Elvis et al.
(1994) for radio-quiet quasars. Richards et al. (2006) use mid-infrared data
based on the Spitzer Space Telescope, SDSS photometry as well as near-infrared,
GALEX, VLA and ROSAT data. One of the most important results is the quasar SED
shown in their Fig. 10. The mean radio-quiet $\rm\alpha_{ox}$ value from Elvis
et al. (1994) is -1.5, while for radio-loud objects the mean value is -1.8
(Richards et al. 2006).
LBQS 0102-2713 is a quasar at a redshift of 0.78. The object was selected from
the ROSAT PSPC catalogue via hardness ratio 1 (HR1) criteria (HR1 =
((52-201)-(11-41))/((11-41)+(52-201)), where the numbers refer to the ROSAT
PSPC channels; Zimmermann et al. 1994) in the range
from -0.5 to -0.7 derived from a reference sample of known steep spectrum
AGNs. LBQS 0102-2713 showed the steepest photon index from the selected
sample. Here we concentrate on 3 public ROSAT PSPC observations, an optical
spectrum between the 3200 to 7400 Å observed in 1988 (Morris et al. 1991), 7
HST FOS spectra between 1700 and 2300 Å, and one GALEX NUV flux density
available at NED. High energy data above 2.4 keV are not available presently.
In Section 2 we describe the X-ray observations and data analysis. The results
from the X-ray fitting analysis is given in Section 3. Section 4 contains the
results from the $\rm\alpha_{ox}$ analysis. The comparison with mean SED’s is
given in Section 5. Models for the X-ray weakness are presented in Section 6.
Section 7 contains the summary and open problems. In Section 8 we list
security checks on the identification of LBQS 0102-2713 as an X-ray source.
Throughout the paper we adopt $\rm H_{0}=70\ km\ s^{-1}\ Mpc^{-1}$ and a
$\rm\Lambda$-cosmology of $\rm\Omega_{M}=0.3\ and\ \Omega_{\Lambda}=0.7$.
## 2\. X-ray observations and data analysis
The object was observed three times with the ROSAT PSPC detector in 1992 (c.f.
Table 1 for the observation log file). The source is located off-axis in the
PSPC detector with a separation from the pointing direction of about 0.5
degrees. The data were retrieved from the MPE archive. The PSPC data in the
MPE archive were processed at MPE and copied to HEASARC. The ROSAT data in
both archives are identical.
The ROSAT data were converted into FITS files using the xselect command-line
interface version 2.4, and a merged FITS file was created from the two longer
observations (ROR numbers
700121p-0 and 700121p-2). The short observation (ROR number 700121p-1) is
highly background dominated and the source is not significantly detected.
Therefore the results from the spectral fitting are only presented for the two
long observations and the corresponding merged data set. The xselect command-
line interface handles RDF FITS format files. In the RDF case the events are
for instance in the file rp700121a00_bas.fits file (the basic file) for ROR
number 700121p-0. Using the read events command, the two single observations
and the merged observation were read into xselect. ROSAT images were
created via extract image and saoimage. For all three observations used for
subsequent spectral fitting the source region was extracted at $\rm
RA=01^{h}04^{m}40.9^{s}$ and $\rm DEC=-26^{o}57^{m}07^{s}$. The background
region was extracted at $\rm RA=01^{h}04^{m}09^{s}$ and $\rm
DEC=-26^{o}45^{m}04^{s}$. The source and background extraction radii are 320
arcsec. Finally using extract spectrum and save spectrum the source and
background fits files were created. The corresponding ROSAT response matrix is
pspcb_92mar11.rsp.
We have analyzed the ROSAT PSPC observations of LBQS 0102-2713 independently
from the xselect command-line interface with the EXSAS software routines
(Zimmermann et al. 1994) developed for analyzing ROSAT data based on the MIDAS
software package for the two long observations 700121p-0 and 700121p-2. A
merged data file was also created with the EXSAS command intape/disk. Energy
channels were selected from 8 to 240 using the EXSAS prepare-spectrum
programme.
## 3\. Spectral fitting
In the ROSAT All-Sky Survey (RASS) we get 26 source counts with a mean
exposure of 333 seconds. The extraction radius is 500 arcsec. For the
background we get 16 counts with a mean exposure of 435 seconds and an
extraction radius of 500 arcsec. This results in 10 net counts and a mean
count rate of $\rm 0.02\ counts\ s^{-1}$. With these numbers no reliable fits
could be obtained from the RASS data. For the merged data sets of the PSPC
pointings we get 1351 source counts and 741 background counts for a total
exposure of 12881 seconds. This results in 610 net counts and a mean count
rate of $\rm 0.05\ counts\ s^{-1}$. With these numbers and the limited
spectral energy resolution of ROSAT only a limited range of models can be
fitted. We present the results only for a simple power law fit with cold
absorption and a disk black body model. Longer X-ray observations with the
present generation of X-ray satellites are required to perform more advanced
model fits, e.g. a smeared absorption model according to Schurch & Done
(2006), or a disk reflection model following Crummy et al. (2006).
### 3.1. xselect command-line interface and EXSAS spectral power law fitting
The source spectra obtained from the ROR numbers 700121p-0 and 700121p-2 using
the xselect command-line interface were grouped with the grppha command with
group min 10. The merged data set was grouped with group min 30. The spectral
fitting was performed using XSPEC version 12.5.0. The model components were
(mo = phabs (zpo)). While the light curves do not show significant variations
(Fig. 1) the resulting photon indices are remarkably steep (Fig. 2). LBQS
0102-2713 might be the quasar with the steepest soft X-ray photon index
reported so far, detected in two individual observations and in the merged
observation. For
the merged data set we get $\rm\Gamma=(6.0\pm 1.3)$ and $\rm\Gamma=(5.8\pm
1.3)$ for the xselect and EXSAS software packages, respectively. The spectral
fitting results from the xselect command line interface and the EXSAS software
routines are listed in Table 2. The spectral parameters obtained from both
software systems are consistent within the errors. As a security check we have
used the NASA GSFC simulation software webpimms. The simulated spectra are
consistent with the ROSAT power law fits. This is an independent test, whether
the calibration and response files available at the MPE site give the same
results as the corresponding files produced by the NASA GSFC calibration team.
### 3.2. $\rm Gamma-N_{H}$ contour plots
The Galactic foreground absorption in the direction to LBQS 0102-2713 is very
low with $\rm N_{H}=1.2\times 10^{20}\ cm^{-2}$. It is known that $\rm\Gamma$
and $\rm N_{H}$ are correlated in ROSAT PSPC fits, such that a larger $\rm
N_{H}$ requires a steeper photon index to give a good fit. In Fig 3 we show
the $\rm\Gamma-N_{H}$ contour plots for the merged data set and for the ROR
number 700121p-2. For the ROR number 700121p-0 no reliable contour plot could
be obtained. At a 99 per cent confidence level the the photon index is about
3.5 at the Galactic $\rm N_{H}$ value. The photon index might indeed be
flatter as $\rm\Gamma=(6.0\pm 1.3)$ and the object may be less special in
X-rays than it appears from the simple power law fits. However, the photon
index obtained for a power law fit with the Galactic $\rm N_{H}$ value fixed
remains still steep for quasar soft X-ray spectral energy distributions (c.f.
Section 1). Longer observations with the present generation of X-ray
telescopes are required to confirm whether the photon index of LBQS 0102-2713
is indeed extremely steep.
### 3.3. Ionized absorber plus power law spectral modelling
We note that a simulation of a power law plus an ionized absorber could also
result in a flatter photon index. A simulated 50 ks XMM-Newton spectrum
obtained from a power law model with cold absorption plus an ionized absorber
based on the grid25BIG_mt.fits model (Done, private communication) gives a
photon index of $\rm\Gamma=(4.4\pm 0.5)$ for a column density of the ionized
absorber ranging between $\rm(1.1-1.5)\times 10^{22}\ cm^{-2}$ and an
ionization parameter range between $\rm\xi=(20-30)$. The parameter ranges
explored are $\rm N_{H}=10^{(21-24)}\ cm^{-2}$ and $\rm\xi=10$ to
1000. The flatter photon index occurs in a very narrow parameter range of the
ionized absorber column density and the ionization value. When fitting the
simulated data with the ROSAT PSPC response and the ROSAT exposure time, the
photon index is $\rm(6.7\pm 1.8)$, consistent with the ROSAT spectral fitting
results for a power law model with the $\rm N_{H}$ value left free in the
fit.
### 3.4. Black body spectral fitting results
A black body model fitted to the merged data set also gives an acceptable
fit. The derived spectral parameters in the rest frame are: $\rm N_{H}=(2.0\pm
2.3)\times 10^{20}\ cm^{-2}$, $\rm kT=(0.13\pm 0.013)\ keV$, and $\rm
n=(2.6\pm 2.8)\times 10^{-5}\ photons\ cm^{-2}\ s^{-1}\ keV^{-1}$ at 1 keV.
Due to the limited spectral resolution of ROSAT we cannot distinguish between
a power law and a black body model. We compare the derived black body
temperature with other published black body temperatures and find that they
are in good agreement. Crummy et al. (2006) found values for the black body
temperature for an AGN sample observed with XMM-Newton between 0.009 and 0.17
keV (their Table 2). Tanaka, Boller and Gallo (2005) have analyzed NLS1
spectra observed with XMM-Newton and found for a sample of 17 objects black
body temperatures between 0.09 and 0.15 keV. Fiore et al. (1998) obtain black
body temperatures of 0.16 keV for PG 1244+026 and NAB 0205+024, respectively
based on ASCA observations. Puchnarewicz et al. (2001) obtain a value of 0.13
keV for the NLS1 galaxy RE J1034+396.
## 4\. Deriving the $\alpha_{ox}$ value for LBQS 0102-2713
### 4.1. Deriving the 2500 Å flux density from the optical spectrum
Morris et al. (1991) have published an optical spectrum of LBQS 0102-2713 in
the wavelength range between 3200 and 7400 Å in the observers frame (c.f. Fig.
4). The Mg II line at 2800 Å (rest frame) is clearly visible. Applying a
Gaussian fit to this line we obtain a FWHM value of about 2200 km $\rm s^{-1}$
in the rest frame. This value is very close to the formal dividing line of
2000 km $\rm s^{-1}$ between NLS1 galaxies and broad-line Seyfert galaxies,
following the definition of Osterbrock and Pogge (1985). In addition, Morris
et al. (1991) first noted the strong UV Fe II multiplet emission between about
2200 and 2500 Å in the rest frame. All this is typical of NLS1 galaxies, and
LBQS 0102-2713 can be considered an X-ray bright NLS1 galaxy, in line with its
steep X-ray spectrum. In addition, a strong C III] line is found at 1909 Å
rest frame, together with a strong emission feature at about 2075 Å which is
usually interpreted as a blend of Fe III multiplets. The latter feature is
catalogued in the quasar composite spectrum of Vanden Berk et al. (2001) and
is certainly present in higher signal-to-noise composites.
The observed 4450 Å flux density from the optical spectrum (c.f. Fig. 4)
obtained in 1988 is about $\rm f_{4450\AA}^{obs.}=3.0\times 10^{-16}\ erg\
cm^{-2}\ s^{-1}\ \AA^{-1}$. Details of the spectroscopic measurements can be
found in section 2.3 of Morris et al. (1991). To convert the units from Å to
Hz we followed the relation from Hogg (2000) described in his Chapter 7. As
the differential flux per unit log frequency or log wavelength is $\rm\nu
f_{\nu}=\lambda f_{\lambda}$ (Hogg uses S instead of f) one gets $\rm
f_{\nu}=(\lambda^{2}/c)\times f_{\lambda}$ with $\rm
f_{\lambda}=f_{4450\AA}^{obs.}$ and $\rm\lambda=4450\ \AA$. The resulting
observed flux density per Hz is $\rm f_{4450\AA}^{obs.}=2.0\times 10^{-27}\
erg\ cm^{-2}\ s^{-1}\ Hz^{-1}$. To correct the monochromatic flux measurements
for Galactic extinction we used the Nandy et al. (1975) extinction law with
$\rm R=A_{V}/E(B-V)$. Their Table 3 gives, for the value of $\rm 1/\lambda\
(\mu m^{-1})$ corresponding to 4450 Å, an R value of 3.8. The E(B-V) value from the GALEX data obtained from
NED is 0.02. According to Geminale & Popowski (2005) the relation between R
and $\rm\epsilon$ is $\rm\epsilon(\lambda\ -V)=R((A_{\lambda}/A_{V})-1)$.
Following their extinction curves shown in Fig. 4 one obtains an $\rm\epsilon$
value of about 0.5. The $\rm A_{\lambda}$ value at 4450 Å is then 0.076 and
the $\rm A_{V}$ value using the relations given above is 0.066. The $\rm
A_{\lambda}/A_{V}$ value is 1.15. The extinction corrected flux and the
uncorrected flux at 4450 Å are related by $\rm
f_{4450\AA}^{Ext.}=10^{0.4\times A_{\lambda}/A_{V}}\times
f_{4450\AA}^{obs.}=2.6\times f_{4450\AA}^{obs.}=5.3\times 10^{-27}\ erg\
cm^{-2}\ s^{-1}\ Hz^{-1}$. The luminosity distance at $z=0.78$ is $\rm
D_{L}=1.5\times 10^{28}\ cm$. The resulting rest frame UV luminosity density
is $\rm L_{2500\AA}=1.6\times 10^{31}\ erg\ s^{-1}\ Hz^{-1}$, i.e. $\rm
l_{uv}\equiv log\ L_{2500\AA}=31.21$.
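The unit conversion and extinction correction above can be condensed into a
few lines. The following Python sketch is only an illustration (it is not part
of the original analysis); it uses the values quoted in this subsection,
including the extinction factor of 2.6 taken directly from the text, so it
reproduces the quoted numbers only approximately.

```python
# Illustrative sketch, not the authors' code: Sec. 4.1 chain
# f_lambda -> f_nu -> extinction correction -> L_2500A.
import math

c = 2.998e10                        # speed of light [cm/s]
f_lambda = 3.0e-16                  # observed flux density at 4450 A [erg/cm^2/s/A]
lam = 4450e-8                       # observed wavelength [cm]

f_nu = lam**2 / c * (f_lambda * 1e8)          # erg/cm^2/s/Hz, ~2.0e-27
f_nu_ext = f_nu * 2.6                         # extinction factor quoted in the text

D_L = 1.5e28                                  # luminosity distance [cm]
L_2500 = 4.0 * math.pi * D_L**2 * f_nu_ext    # erg/s/Hz, ~1.5e31 (quoted: 1.6e31)

print(f"f_nu = {f_nu:.2e}  corrected = {f_nu_ext:.2e}  L_2500A = {L_2500:.2e}")
```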
### 4.2. Deriving the 2500 Å flux density from the HST FOS spectra
We also analyzed the 7 HST FOS spectra, named Y309010nT with n ranging from
2 to 8. The HST FOS spectra were obtained from the HST FOS proposal ID 6007:
Comparison of the large scale structure in QSO absorbers and galaxies in the
Galactic Poles, D. York, University of Chicago. The observed wavelength ranges
between 1700 and 2300 Å. All 7 exposures obtained in 1995 agree in their flux
density values and the HST FOS absolute and relative calibration is very good
(M. Rosa, ESO Garching, private communication). In Fig. 5 we show the spectrum
for the science exposure number Y3090102T. The $\rm Ly_{\alpha}$ line is
clearly visible and marked in the spectrum. The line is blended with the N V
doublet at 1240 and 1242 Å in the rest frame. In addition, there is emission
from the $\rm Ly_{\beta}$ line blended with the O VI doublet at 1032 and 1037
Å rest frame.
The observed 2300 Å flux density, corresponding to a rest frame wavelength of
1292 Å at $z=0.78$, is about $\rm f_{2300\AA}^{obs.}=1.5\times 10^{-15}\ erg\
cm^{-2}\ s^{-1}\ \AA^{-1}$. The flux density at 2500 Å rest frame is $\rm
f_{2500\AA}=f_{1292\AA}\times(1292\AA\ /2500\AA)^{-0.5}=4.2\times 10^{-27}\
erg\ cm^{-2}\ s^{-1}\ Hz^{-1}$. To extrapolate
the corresponding rest frame wavelength of 1292 Å to 2500 Å we have used the
$\rm\alpha_{o}$ value of -0.5 of Richstone & Schmidt (1980). The extinction
corrected flux at 2500 Å is $\rm 1.4\times 10^{-26}\ erg\ cm^{-2}\ s^{-1}\
Hz^{-1}$, about a factor of 3.3 larger than the uncorrected flux density. The
slight discrepancy compared to the optical
spectrum obtained by Morris et al. (1991) might be due to flux variability
between the 1988 and 1995 observations or due to the extrapolation from 1292
to 2500 Å using a mean $\rm\alpha_{o}$ value of -0.5. The correction for
Galactic extinction has been performed as described in the previous
subsection. The R value at 2500 Å is 7.2 and the $\rm\epsilon$ value is 7.2.
The resulting $\rm A_{\lambda}/A_{V}$ value is 1.4. This results in a
Galactic extinction correction factor of 3.3, and the extinction corrected
flux is $\rm f_{2500\AA}^{Ext.}=1.4\times 10^{-26}\ erg\ cm^{-2}\ s^{-1}\
Hz^{-1}$. The resulting rest frame luminosity density is $\rm
L_{2500\AA}=4.0\times 10^{31}\ erg\ s^{-1}\ Hz^{-1}$.
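A hedged sketch of the corresponding steps for the HST FOS point (again an
illustration rather than the authors' pipeline, with all inputs taken from the
text); because the quoted numbers are rounded, the output agrees with the
values above only approximately.

```python
# Illustrative sketch, not the authors' code: Sec. 4.2 extrapolation of the
# rest-frame 1292 A flux density to 2500 A with alpha_o = -0.5, followed by
# the Galactic extinction correction factor of 3.3 quoted in the text.
import math

c = 2.998e10
f_lambda_2300 = 1.5e-15                                 # observed erg/cm^2/s/A at 2300 A
f_nu_1292 = (2300e-8)**2 / c * (f_lambda_2300 * 1e8)    # rest-frame 1292 A point

f_nu_2500 = f_nu_1292 * (1292.0 / 2500.0)**(-0.5)       # quoted as 4.2e-27
f_nu_2500_ext = f_nu_2500 * 3.3                         # quoted as 1.4e-26

D_L = 1.5e28
L_2500 = 4.0 * math.pi * D_L**2 * f_nu_2500_ext         # quoted as 4.0e31 erg/s/Hz
print(f"{f_nu_2500:.2e}  {f_nu_2500_ext:.2e}  {L_2500:.2e}")
```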
### 4.3. Deriving the 2 keV flux density from the merged ROSAT PSPC data set
The 2 keV rest frame flux density is determined following Hogg (2000) and
Weedman’s Quasar Astronomy (Section 3.5, page 61, 1986). The photon index
$\rm\Gamma$ is related to the energy index via $\rm\alpha_{x}=-\Gamma+1$. For
a photon index not equal to 2, the rest frame 2 keV flux density is given by
$\rm
f_{2keV}=f(0.5-2.0)\times\frac{1+\alpha_{x}}{(1+z)^{\alpha_{x}}}\times\frac{\nu_{2keV}^{\alpha_{x}}}{\nu_{2keV}^{\alpha_{x}+1}-\nu_{0.5keV}^{\alpha_{x}+1}}$.
The unabsorbed ROSAT flux in the 0.5-2.0 keV energy band is $\rm 4.3\times
10^{-14}\ erg\ cm^{-2}\ s^{-1}$. As the photon index derived in the previous
section was 6.0, we get for $\rm\alpha_{x}$ a value of -5. The corresponding 2
keV and 0.5 keV frequencies are $\rm 4.8\times 10^{17}\ Hz\ and\ 1.2\times
10^{17}\ Hz$. This results in a flux density at 2 keV of $\rm f_{\nu\
2keV}=4.3\times 10^{-32}\ erg\ cm^{-2}\ s^{-1}\ Hz^{-1}$. The 2 keV luminosity
density is then $\rm L_{2keV}=4.0\times 10^{25}\ erg\ s^{-1}\ Hz^{-1}$. We
note that the extremely steep photon index has to be confirmed in future
observations with the present generation of X-ray telescopes given the results
from the contour plots shown in Fig. 3. If the X-ray spectrum is flatter than
obtained from the power law fits, this would result in a less extreme
$\alpha_{ox}$ value.
Using the definition $\rm\alpha_{ox}=0.384\times log(L_{2keV}/L_{2500\AA})$,
one derives $\rm\alpha_{ox}$ values of -2.3 and -2.2 for the two 2500 Å
luminosity density estimates.
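The 2 keV flux density formula and the $\rm\alpha_{ox}$ definition can
likewise be evaluated directly; the sketch below is illustrative only and uses
the quoted band flux, frequencies, redshift, and luminosity densities as
inputs.

```python
# Illustrative sketch, not the authors' code: rest-frame 2 keV flux density for
# a steep power law (Sec. 4.3) and the resulting alpha_ox values.
import math

F_band = 4.3e-14          # unabsorbed 0.5-2.0 keV flux [erg/cm^2/s]
alpha_x = -5.0            # energy index, alpha_x = -Gamma + 1 with Gamma = 6
z = 0.78
nu2, nu05 = 4.8e17, 1.2e17          # 2 keV and 0.5 keV in Hz

f_2keV = (F_band * (1.0 + alpha_x) / (1.0 + z)**alpha_x
          * nu2**alpha_x / (nu2**(alpha_x + 1.0) - nu05**(alpha_x + 1.0)))
print(f"f(2 keV) ~ {f_2keV:.1e} erg/cm^2/s/Hz")     # of order 1e-32

# alpha_ox from the luminosity densities quoted in the text
L_2keV = 4.0e25
for L_2500 in (1.6e31, 4.0e31):                     # Morris et al. / HST FOS
    print(f"alpha_ox = {0.384 * math.log10(L_2keV / L_2500):.1f}")   # approx -2.2 / -2.3
```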
## 5\. Comparison with quasar SEDs
### 5.1. Comparison with mean SDSS quasar SEDs
Richards et al. (2006) show the mean quasar SEDs for optically luminous
SDSS-, all SDSS-, and optically dim quasars (c.f. Fig. 11 of their paper). In
Fig. 6 we show an adapted version of the plot of Richards et al. (2006). The
$\rm\nu L_{\nu}$ value at 2500 Å rest frame wavelength is $\rm 4\times
10^{47}\ erg\ s^{-1}$. This is about a factor of 10 more luminous than the
mean of the optically most luminous SDSS quasars. However, there is a large
dispersion in the optical $\rm M_{B}$ magnitudes for quasars. Brotherton et
al. (2001) show the distribution of the $\rm M_{B}$ magnitudes for more than
11000 quasars (their Fig. 5). Their absolute magnitudes range between about
$\rm-23^{mag}$ to $\rm-32^{mag}$. LBQS 0102-2713 has a photometric IIIa-J
magnitude of 17.52 mag as taken from NED at a wavelength of 4612 Å
corresponding to 0.46 $\rm\mu$m. This is close to the mean B wavelength of 0.40
$\rm\mu$m. The corresponding $\rm M_{B}$ value of LBQS 0102-2713 is -25.9.
This might indicate that the scatter in the 2500 Å luminosities is also large
and that LBQS 0102-2713 is less extreme in the UV than one would think at
first glance when comparing the UV value to the mean of the optically most
luminous quasars. The corresponding 2.0 keV rest frame $\rm\nu L_{\nu}$ value
is $\rm 1.4\times 10^{44}\ erg\ s^{-1}$, which is comparable to that of the
optically dim SDSS quasars. The dispersion in the X-rays appears to be much
smaller than in the UV luminosities if one compares the X-ray luminosity
of LBQS 0102-2713 to the X-ray luminosities as shown in the sample papers of
Strateva et al. (2005), Vignali et al. (2003) or Gibson et al. (2008).
### 5.2. Comparison with the quasar SED for radio-loud quasars
In Fig. 7 we have plotted the available multiwavelength data in units of
$\rm\nu f_{\nu}$ versus the wavelength and compare these data points to the
mean quasar SED for radio-loud objects from Richards et al. (2006, their Fig.
10). All data points are extinction corrected. For the ROSAT data we show only
the lowest and highest energy data point. While at 2 keV the object is X-ray
weak compared to the mean quasar SED, at the softest energies LBQS 0102-2713
is a very powerful emitter and the $\rm\nu f_{\nu}$ value even exceeds the
2500 Å UV value. The HST FOS data points are about a factor of 2 to 5 larger
than the mean quasar SED. Here we note again that, as pointed out in the
previous subsection, the scatter of the individual UV luminosities around the
mean might be large, and the object might therefore be less extreme in its UV
luminosity. The optical data points and the $\rm K_{S}$ magnitude
follow the mean quasar SED. The main result from this plot is the discrepancy
between the X-ray and HST FOS data points as these bands are discontinuous.
The standard picture assumes that the rest frame UV photons are emitted from
the accretion disc and that the hard X-ray photons arise from the accretion
disc corona. From Fig. 7 it is obvious that there is a discrepancy between the
UV and the X-ray photons. The optical data from Morris et al. (1991) are also
offset with respect to the HST FOS data points. This is most probably due to a
significant contribution from the host galaxy; the rest frame optical photons
are therefore not expected to be related to the accretion disc. We
have cross-checked the discontinuity between the UV and X-ray data points. The
GALEX NUV wavelength is 2315.7 Å (c.f. Morrissey et al. 2007, their Table 1).
The NUV GALEX calibrated flux from NED is 282.2 $\rm\mu$Jy. This converts into
flux densities of $\rm 2.8\times 10^{-27}\ erg\ cm^{-2}\ s^{-1}\ Hz^{-1}$ or
$\rm 1.6\times 10^{-15}\ erg\ cm^{-2}\ s^{-1}\ \AA^{-1}$. The GALEX NUV flux
density value is consistent with the flux density shown in the HST FOS
spectrum in Fig. 5.
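For the GALEX NUV point, the unit conversion quoted above follows from the
standard definition 1 Jy = $10^{-23}$ erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$; a
minimal illustrative check (not part of the original analysis):

```python
# Illustrative check of the GALEX NUV flux density conversion quoted above.
c = 2.998e10                      # cm/s
lam = 2315.7e-8                   # GALEX NUV effective wavelength [cm]
f_nu = 282.2e-6 * 1e-23           # 282.2 microJy -> erg/cm^2/s/Hz (~2.8e-27)
f_lam = f_nu * c / lam**2 / 1e8   # erg/cm^2/s/A (~1.6e-15)
print(f"{f_nu:.2e} erg/cm^2/s/Hz  =  {f_lam:.2e} erg/cm^2/s/A")
```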
## 6\. Models for the X-ray weakness
### 6.1. The intrinsic X-ray weakness model
Leighly et al. (2007) favor an intrinsic X-ray weakness model for the quasar
PHL 1811, in which the 2 keV X-ray emission is intrinsically weak rather than
absorbed. In this case one expects weak low-ionization and semiforbidden UV
lines. For the blend of the $\rm Ly_{\alpha}$ and N V lines an EW value of 15
eV is obtained. No EW values are given for the blend of $\rm Ly_{\beta}$ and O
VI. The authors argue that this points to an intrinsically weak X-ray model.
Their argumentation for an intrinsic X-ray weakness model is based on
individual lines such as Na I D, Ca II H and K and C IV. The rest frame EW
values for the blend of $\rm Ly_{\beta}$ and the O VI lines and the $\rm
Ly_{\alpha}$ and the N V lines shown in Fig. 5 are about 12 and 50 Å,
respectively. In the following we compare these values with quasar composite
spectra. Brotherton et al. (2001) give for the $\rm Ly_{\beta}$ plus O VI
lines an EW value of 11 Å in the rest frame. For the blend of $\rm
Ly_{\alpha}$ and N V the EW value is 87 Å. The corresponding values reported
by Vanden Berk et al. (2001) are 9 and 94 Å respectively. Zheng et al. (1997)
found values of 16 and 102 Å respectively. The EW values obtained for LBQS
0102-2713 are thus typical of the mean quasar composite values, and the source
appears not to be intrinsically X-ray weak.
### 6.2. Other models
There are other models in the literature which are speculative or have a low
probability of explaining the X-ray weakness.
Gibson et al. (2008) present in their Fig. 2 a decreasing trend of
$\rm\alpha_{ox}$ with increasing 2500 Å UV luminosity density. This is a
strong observational constraint, as an increased UV luminosity density results
in steeper $\rm\alpha_{ox}$ values. In the case of PHL 1811 and LBQS
0102-2713 we indeed see lower 2 keV flux densities compared to other quasars.
If one compares the UV and X-ray luminosity densities of LBQS 0102-2713, which
are $\rm log\ l_{UV}=31.20$ for the Morris et al. (1991) spectrum, $\rm log\
l_{UV}=31.60$ obtained from the HST FOS spectra, and an X-ray luminosity
density of $\rm log\ l_{X}=25.60$ obtained from ROSAT, with Fig. 8 of Vignali
et al. (2003), the X-ray weakness becomes immediately apparent. The reason for
the so-called global X-ray Baldwin effect is presently not known. It is
speculated that a patchy or disrupted accretion disc corona is related to the
UV luminosity density (Gibson et al. 2008). However, there is no direct
observational proof for this model.
One could speculate that UV and X-ray variability of the source might account
for the low $\rm\alpha_{ox}$ value. If LBQS 0102-2713 was in a low X-ray flux
state in 1992, a flux increase of about a factor of 10 at 2 keV would be
required to obtain the canonical $\rm\alpha_{ox}$ value of -1.8 for radio-loud
quasars as shown in Fig. 11 of Richards et al. (2006). A factor of 10 within
three years of the optical observations appears unlikely, as only a few AGN
are known to exhibit flux variability of a factor of 10 or more. In addition,
variability seems unlikely given that Hook et al. (1994) find only weak
optical variability with $\rm\sigma$=0.17.
## 7\. Summary and open problems
### 7.1. Comparison to BAL quasars and a possible unique combination of UV
brightness, X-ray weakness and low $\rm{\bf N_{H}}$ values
LBQS 0102-2713 appears to be very similar to BAL quasars which show strong UV-
and weak 2 keV X-ray emission (c.f. Gallagher et al. 2006). Their
$\rm\alpha_{ox}$ values range between -1.65 and -2.48. The majority of the
objects have values smaller than -2.0. The authors argue that the X-ray
weakness of BAL quasars is due to neutral intrinsic absorption with column
densities between about $\rm(0.1-10)\times 10^{23}\ cm^{-2}$. As more soft
X-ray photons are expected for a simple neutral absorber, the authors argue
that the absorption is more complex. Partial covering or ionized absorbers can
account for this observational fact. However, LBQS 0102-2713 does not show
such high values of neutral absorption compared to BAL quasars. The foreground
absorption is about $\rm 6\times 10^{20}\ cm^{-2}$. This is at least a factor
of about 20 lower compared to BAL quasars. LBQS 0102-2713 is an object which
exhibits an unusual combination of UV brightness, X-ray weakness and no
significant absorption by neutral matter along the line of sight. In addition,
there are no significant indications that the object is intrinsically X-ray
weak, in contrast to the argumentation for PHL 1811 by Leighly et al. (2007).
This parameter combination is new and needs to be explained. With the
presently available X-ray data we are limited in providing self-consistent
models, but we would like to present these new results to the community.
### 7.2. LBQS 0102-2713 as a supersoft quasar
The object might exhibit the steepest photon index reported for a quasar so
far. For a simple power law fit with neutral absorption left free in the fit
we get photon indices of $\rm\Gamma=(5.8\pm 1.3)\ or\ (6.0\pm 1.3)$ by using
the xselect command line interface and the EXSAS software package,
respectively. However, if the foreground $\rm N_{H}$ value is set to the
Galactic value of $\rm 2\times 10^{20}\ cm^{-2}$ (Dickey & Lockman, 1990) the
photon index is about 3.5 (c.f. the $\rm\Gamma\ -N_{H}$ contour plots shown in
Fig. 3). The photon index still remains steep compared to previous studies
listed in Section 1. In addition, Puchnarewicz et al. (1995) applied a broken
power law fit to RE J2248-511 with a photon index of $\rm
4.13^{+5.85}_{-0.60}$ up to the break energy of 0.26 keV. The Galactic $\rm
N_{H}$ value was fixed to $\rm 1.4\times 10^{20}\ cm^{-2}$. For the object RE
J1034+396 Puchnarewicz et al. (1995) obtain for their broken power law fit a
value of $\rm\Gamma=4.45^{+0.25}_{-0.32}$ up to a break energy of 0.41 keV.
The $\rm N_{H}$ value was fixed to the Galactic value of $\rm 1.5\times
10^{20}\ cm^{-2}$. We note that these low $\rm N_{H}$ values, although fixed
to their Galactic values, result in similarly steep photon indices to that
obtained for LBQS 0102-2713 with the $\rm N_{H}$ value fixed in the fit.
### 7.3. Open problems
There are open problems which cannot be solved given the present set of
multiwavelength data. First, we found that the UV emission is discontinuous
with respect to the X-ray emission, i.e. the UV photons which are expected to
arise from the accretion disc appear not to be correlated with the X-ray
photons. Second, if the steep soft photon index of $\rm(6.0\pm 1.3)$ is
confirmed by new X-ray observations, this steepness needs to be explained.
Based on the contour
plots shown in Fig. 3, the object might not be as extreme as one would expect
from the simple power law fits. Third, the UV luminosity density is about a
factor of 10 higher than the mean of the most luminous SDSS quasars. However,
as pointed out above, there is a large spread in the $\rm M_{B}$ magnitudes
for quasars, and the UV brightness might also show a large scatter around the
mean value at a given frequency. Fourth, if the 2 keV X-ray weakness is
confirmed by other observations, the combination of UV brightness, X-ray
weakness and the absence of significant absorption by neutral matter results
in an unusual combination of observational parameters which cannot be
explained with the available set of multiwavelength data. Finally, what
spectral complexity in the soft and hard X-ray bands remains unresolvable for
ROSAT? We
know from observations with the present generation of X-ray telescopes that
new spectral components can often be added to the fit and that the real
spectrum might be more complex. We expect to see this in the soft band with
higher signal-to-noise observations. In addition, observations above 2.4 keV
are required to better constrain physical models for the unusual observational
parameters detected in LBQS 0102-2713.
## 8\. Source identification check
The probability that the X-rays arise from nearby sources is low. The total 1
$\rm\sigma$ position error of the ROSAT detection is 7 arcsec including a 6
arcsec systematic error (see Voges et al. 1999 and the link to the source
catalogue entries). The nearest SIMBAD source lies 84 arcsec from the position
of LBQS 0102-2713, corresponding to a 12 $\rm\sigma$ offset. The same holds
for the NED detections: the nearest source is at a distance of 0.8 arcmin,
corresponding to an offset of about 7 $\rm\sigma$. In Fig. 8 we show the
optical image with the SIMBAD and NED detections overlaid. In Tables 3 and 4
we list objects within a distance of
500 arcsec to the position of LBQS 0102-2713.
The paper is based on observations obtained with the PSPC on board the ROSAT satellite. We
thank the anonymous referee for his/her extremely helpful comments to improve
the scientific content and the structure of the paper. TB is grateful for
intensive discussions with Ari Laor and Nahum Arav on the UV to X-ray relations
in quasars and especially on the object properties presented in this paper. TB
acknowledges many comments from Niel Brandt on the UV- and X-ray properties
reported in this paper and for his extensive suggestions with respect to the
comparison to BAL quasars. TB thanks Chris Done for providing her ionized
absorber model and many suggestions to improve the paper. TB is also grateful
to Gordon Richards for providing the super mongo script for the SED of radio-
loud and radio-quiet quasars. TB would like to thank Michael Rosa for his
information regarding the HST FOS flux calibration for LBQS 0102-2713 and Don
Neill for his help in obtaining precise information regarding the GALEX data.
KL is grateful to the Secondary School Neu-Isenburg for the support to work
together with TB at MPE Garching in analyzing the multiwavelength data
presented in this paper. TH gratefully acknowledges the collaboration with TB and
WG in writing up this paper. The authors acknowledge Iskra Strateva, Frank
Haberl, Marcella Brusa and Konrad Dennerl for many helpful suggestions and
information on the data presented in this paper. Facilities: ROSAT, HST FOS,
GALEX.
## References
* (1) Boller, Th.; Brandt, W.; & Fink, H.; A&A, 305, 53, 1996
  * (2) Boller, Th.; Fabian, A. C.; Sunyaev, R.; Trümper, J.; Vaughan, S.; Ballantyne, D. R.; Brandt, W. N.; MNRAS, 329, 1, 2002
* (3) Brandt, W.; Mathur, S.; & Elvis, M.; MNRAS, 285, 25, 1997
* (4) Brotherton, M. S.; Arav, Nahum; Becker, R. H.; Tran, Hien D.; Gregg, Michael D.; White, R. L.; Laurent-Muehleisen, S. A.; Hack, Warren, ApJ, 546, 134, 2001
* (5) Crummy, J; Fabian, A.C.; Gallo, L.; & Ross, R.R.; MNRAS, 365, 1067, 2006
* (6) Dickey, John M.; Lockman, Felix J., Ann. Rev. Astron. Astrophys., 28, 215, 1990
* (7) Elvis, M.; Wilkes, B.J.; McDowell, J.C.; Green, R.F.; Bechtold, J.; Willner, S.P.; Oey, M.S.; Polomski, E.; & Cutri, R.; ApJS, 95, 100, 1994
* (8) Fiore, F.; Matt, G.; Cappi, M.; Elvis, M.; Leighly, K. M.; Nicastro, F.; Piro, L.; Siemiginowska, A.; Wilkes, B. J.; MNRAS, 289, 103, 1998
* (9) Gallagher, S. C.; Brandt, W. N.; Chartas, G.; Priddey, R.; Garmire, G. P.; Sambruna, R. M., ApJ, 644, 709, 2006
* (10) Geminale, A.; & Popowski, P.; astro-ph, 2540, 2005
* (11) George, I. M.; Turner, T. J.; Yaqoob, T.; Netzer, H.; Laor, A.; Mushotzky, R. F.; Nandra, K.; Takahashi, T.; ApJ, 531, 52, 2000
* (12) Gibson, Robert R.; Brandt, W. N.; Schneider, Donald P., ApJ, 685, 773, 2008
* (13) Green, Paul J.; Aldcroft, T. L.; Richards, G. T.; Barkhouse, W. A.; Constantin, A.; Haggard, D.; Karovska, M.; Kim, D.-W.; Kim, M.; Vikhlinin, A.; Anderson, S. F.; Mossman, A.; Kashyap, V.; Myers, A. C.; Silverman, J. D.; Wilkes, B. J.; Tananbaum, H., ApJ, 690, 644, 2008
  * (14) Hogg, David W.; astro-ph/9905116, 2000
* (15) Hook, I. M.; McMahon, R. G.; Boyle, B. J.; Irwin, M. J., MNRAS, 286, 305, 1994
* (16) Leighly, Karen M.; Halpern, Jules P.; Jenkins, Edward B.; Casebeer, Darrin, ApJS, 173, 1, 2007
  * (17) Morrissey, Patrick; Conrow, Tim; Barlow, Tom A.; Small, Todd; Seibert, Mark; Wyder, Ted K.; Budavári, Tamás; Arnouts, Stephane; Friedman, Peter G.; Forster, Karl; and 13 coauthors, ApJS, 173, 682, 2007
* (18) Morris, Simon L.; Weymann, Ray J.; Anderson, Scott F.; Hewett, Paul C.; Francis, Paul J.; Foltz, Craig B.; Chaffee, Frederic H.; MacAlpine, Gordon M., AJ, 102, 1627, 1991
* (19) Nandy, K.; Thompson, G.I.; Jamar, C.; Monols, A.; Wilson, R.; A&A, 44, 195, 1975
  * (20) New Astronomy Reviews 44, 2000, Proceedings of the Workshop on Observational and Theoretical Progress in the Study of Narrow-Line Seyfert 1 Galaxies, held in Bad Honnef, Germany, December 8-11, 1999, WE-Heraeus-Seminar 226, Eds.: Th. Boller, W.N. Brandt, K.M. Leighly, M.J. Ward
* (21) Osterbrock, D. E.; Pogge, R. W., ApJ, 297, 166, 1985
* (22) Pounds, K. A.; Done, C.; Osborne, J. P., MNRAS, 277, 5, 1995
  * (23) Puchnarewicz, E. M.; Mason, K. O.; Cordova, F. A.; Kartje, J.; Branduardi-Raymont, G.; Mittaz, J. P. D.; Murdin, P. G.; Allington-Smith, J., MNRAS, 256, 589, 1992
* (24) Puchnarewicz, E. M.; Branduardi-Raymont, G.; Mason, K. O.; Sekiguchi, K., MNRAS, 276, 1281, 1995
* (25) Puchnarewicz, E. M.; Mason, K. O.; Siemiginowska, A.; Cagnoni, I.; Comastri, A.; Fiore, F.; Fruscione, A., ApJ, 550, 874 2001
* (26) Richards, G.; Lacy, M.; Storrie-Lombardi, L.; Fan, X.; Papovich, C.; Gallagher, S.; Hall, P.;Hines,D.; Anderson, S.; Jester, S.; Schneider, D.; Vanden Berk, D.; Strauss, M.; York, D., ApJS, 357,261, 2006
* (27) Richstone, D.O.; & Schmidt, M.; ApJ, 235, 361, 1980
* (28) Schurch, N.; & Done, C.; MNRAS, 371, 81, 2006
* (29) Strateva, Iskra V.; Brandt, W. N.; Schneider, Donald P.; Vanden Berk, Daniel G.; Vignali, Cristian, AJ, 130, 387, 2005
* (30) Tanaka, Boller & Gallo; In: Growing black holes: accretion in a cosmological context, Proceedings of the MPA/ESO/MPE/USM Joint Astronomy Conference held at Garching, Springer, ISBN 3-540-25275-4,2005
* (31) Vanden Berk, Daniel E.; Richards, Gordon T.; Bauer, Amanda; Strauss, Michael A.; Schneider, and 60 coauthors, AJ, 122, 549, 2001
  * (32) York, D., HST Proposal 6007, Comparison of Large Scale Structure in QSO Absorbers and Galaxies at the Galactic Poles, 1990
* (33) Vignali, C.; Brandt, W.N.; & Schneider, D.P.; AJ, 125, 433, 2003
* (34) Voges, W.; Aschenbach, B.; Boller, Th.; Bräuninger, H.; Briel, U.; Burkert, W.; Dennerl, K.; Englhauser, J.; Gruber, R.; Haberl, F.; and 10 coauthors, A&A 349, 289, 1999
* (35) Walter, R.; & Fink, H., A&A 274, 105, 1993
* (36) Weedman, Daniel W., Quasar astronomy, 1986qa, book, 1986
* (37) Zheng, Wei; Kriss, Gerard A.; Telfer, Randal C.; Grimes, John P.; Davidsen, Arthur F., ApJ, 475, 469, 1997
  * (38) Zimmermann, U.; Becker, W.; Belloni, T.; Döbereiner, S.; Izzo, C.; Kahabka, P.; & Schwentker, O., MPE Report 257, 1994
Table 1. Observation log data of the ROSAT PSPC observations of LBQS 0102-2713
ROR number | Date | Observation duration [sec]
---|---|---
700121p-0 | January 5, 1992 | 6157
700121p-1 | June 5, 1992 | 2191
700121p-2 | December 8, 1992 | 6724
Table 2. Spectral fitting results for a power law model with foreground
absorption of $\rm LBQS\ 0102-2713^{a}$
ROR number | $\rm\Gamma$ (xselect) | $\rm N_{H}$ (xselect) | norm (xselect) | $\rm\Gamma$ (EXSAS) | $\rm N_{H}$ (EXSAS) | norm (EXSAS)
---|---|---|---|---|---|---
700121p-0 | $\rm 6.6\pm 2.5$ | $\rm 6.2\pm 0.5$ | $\rm 2.5\pm 2.9$ | $\rm 5.5\pm 2.5$ | $\rm 7.4\pm 1.5$ | $\rm 4.2\pm 5.0$
700121p-2 | $\rm 5.5\pm 2.5$ | $\rm 3.7\pm 2.4$ | $\rm 1.3\pm 1.8$ | $\rm 6.5\pm 2.1$ | $\rm 6.0\pm 4.7$ | $\rm 4.5\pm 4.0$
merged data set | $\rm 6.0\pm 1.3$ | $\rm 4.8\pm 1.5$ | $\rm 1.3\pm 1.6$ | $\rm 5.8\pm 1.3$ | $\rm 6.5\pm 4.4$ | $\rm 1.5\pm 1.0$
$^{a}$ The $\rm N_{H}$ value is in units of $\rm 10^{20}\ cm^{-2}$ and the
normalization is given in $\rm 10^{-4}\ photons\ cm^{-2}\ s^{-1}\ keV^{-1}$ at
1 keV.
Table 3. SIMBAD detections within a 500 arcsec radius around LBQS 0102-2713
Identifier | dist (arcsec) | type | ICRS (2000) coord. | Sp type
---|---|---|---|---
HB89 0102-272 | 0.00 | QSO | 01 04 40.94 -26 57 07.5 | -
1RXS J010441.1-265712 | 5.44 | X | 01 04 41.10 -26 57 12.5 | -
PHL 7126 | 84.37 | blu | 01 04.60.00 -26 58 00.0 | -
675 | 148.54 | WD* | 01 04 47.00 -26 59 12.0 | -
GEN +6.20077018 | 184.71 | * | 01 04.60.00 -27 00 00.0 | -
GSGP 1 | 220.57 | * | 01 04 33.30 -27 00 23.0 | -
CGG* SGP 51 | 384.71 | * | 01 04 28.60 -26 51 20.0 | G5
GSA 55 | 425.46 | G | 01 04 27.00 -27 03.50.0 | -
CGG* SGP 68 | 444.42 | * | 01 05 03.80 -26 51 45.0 | G5
RG 0102.6-2720 | 484.81 | * | 01 05.00.00 -27 04 00.0 | -
Table 4. NED detections within a radius of 500 arcsec around the source position of LBQS 0102-2713
Object Name | RA | DEC | Type | z | dist
---|---|---|---|---|---
| | | | | (arcmin)
LBQS 0102-2713 | 01h04m40.9s | -26d57m07s | QSO | 0.780000 | 0.0
2dFGRS S212Z160 | 01h04m44.3s | -26d57m05s | G | 0.113844 | 0.8
APMUKS(BJ)B010210.33-271359.0 | 01h04m34.7s | -26d57m55s | G | | 1.6
2dFGRS S213Z276 | 01h04m44.0s | -26d55m38s | G | 0.128000 | 1.6
APMUKS(BJ)B010215.35-271559.7 | 01h04m39.7s | -26d59m55s | G | | 2.8
2dFGRS S213Z278 | 01h04m28.9s | -26d55m40s | G | 0.127000 | 3.0
2MASX J01044626-2659591 | 01h04m46.3s | -27d00m00s | G | 0.156813 | 3.1
APMUKS(BJ)B010217.75-271619.8 | 01h04m42.1s | -27d00m16s | G | | 3.1
2MASX J01042726-2655252 | 01h04m28.2s | -26d55m20s | G | 0.128500 | 3.4
2dFGRS S213Z275 | 01h04m56.4s | -26d56m26s | G | 0.158043 | 3.5
APMUKS(BJ)B010212.20-271714.6 | 01h04m36.6s | -27d01m10s | G | | 4.2
2dFGRS S212Z166 | 01h04m20.1s | -26d56m42s | G | 0.129330 | 4.7
APMUKS(BJ)B010210.57-271755.8 | 01h04m34.9s | -27d01m51s | G | | 4.9
APMUKS(BJ)B010241.39-271015.9 | 01h05m05.7s | -26d54m12s | G | | 6.3
APMUKS(BJ)B010215.35-271953.3 | 01h04m39.7s | -27d03m49s | G | | 6.7
2dFGRS S212Z165 | 01h04m28.9s | -26d50m54s | G | 0.112910 | 6.8
2dFGRS S212Z164 | 01h04m32.8s | -26d50m11s | G | 0.128985 | 7.2
APMUKS(BJ)B010245.29-270904.2 | 01h05m09.6s | -26d53m01s | G | | 7.6
APMUKS(BJ)B010251.36-271413.7 | 01h05m15.7s | -26d58m10s | G | | 7.8
MDS ua-01-09 | 01h04m35.0s | -27d04m54s | G | 0.611600 | 7.9
APMUKS(BJ)B010224.51-270526.6 | 01h04m48.9s | -26d49m23s | G | | 8.0
APMUKS(BJ)B010252.42-271158.3 | 01h05m16.7s | -26d55m55s | G | | 8.1
Figure 1.— ROSAT PSPC light curves of LBQS 0102-2713 observed in January 1992
(left panel) and December 1992 (right panel). The bin size is 4000 seconds.
The source shows no significant X-ray variability between the two
observations.
Figure 2.— ROSAT PSPC observation of LBQS 0102-2713 from January 1992 and
December 1992 (upper left and right panels). The lower left panel shows the
fit to the merged data set. Applying a simple power-law fit and leaving the
absorption parameter free, we obtain from the xselect analysis photon indices
of $\rm 6.6\pm 2.5$, $\rm 5.5\pm 2.5$ and $\rm 6.0\pm 1.3$, respectively. The
unfolded spectrum obtained from the merged data set is shown in the lower
right panel.
Figure 3.— $\rm\Gamma-N_{H}$ contour plots obtained from ROR number 700121p-0
(left) and from the merged data set (right panel). The contour lines
correspond to 68, 90, and 99 per cent confidence levels. The low Galactic $\rm
N_{H}$ value of $\rm 1.2\times 10^{20}\ cm^{-2}$ is within the 99 per cent
confidence level, resulting in a photon index of about 3.5. The photon index
might therefore be flatter than indicated by the power law fits, and LBQS
0102-2713 might be less special in X-rays. Figure 4.— Optical spectrum of
the quasar LBQS 0102-2713 obtained by Morris et al. (1991) in 1988.
The strongest emission lines of C III], Fe III, and Mg II are marked. Between
the Fe III and the Mg II emission there is a strong Fe II UV multiplet
emission between about 2200 and 2500 Å rest frame wavelength, first noted by
Morris et al. (1991). Figure 5.— HST FOS spectrum of the quasar LBQS 0102-2713
obtained from data set Y3090102T in 1995. The strongest UV emission
lines are marked and their EW values are comparable to those found in
composite quasar spectra (c.f. Section 6.1). Figure 6.— Comparison of the
$\rm\nu L_{\nu}$ values in the UV and in the X-rays with the mean SEDs for
SDSS quasars. While the object is UV bright compared to the mean of the most
luminous SDSS quasars (although there might be large scatter in the individual
luminosities), the 2 keV value is comparable to that of the optically dim SDSS quasars. Figure
7.— Optical spectrum from Morris et al. (1991), the HST FOS data, the ROSAT
data and the $\rm K_{S}$ value in the rest frame. The SED of LBQS 0102-2713 is
normalized at the frequency of the $\rm K_{S}$ $\rm\nu f_{\nu}$ value. The
unnormalized $\rm\nu f_{\nu}$ value and the normalized one are $\rm 5.6\times
10^{-13}\ erg\ cm^{-2}\ s^{-1}$ and $\rm 7.9\times 10^{-13}\ erg\ cm^{-2}\
s^{-1}$, resulting in a normalization factor of 1.4. While in the UV the
object is bright compared to the mean, at X-rays the object is X-ray weak. We
find a strong discrepancy between the X-ray and the HST FOS UV data. The data
obtained from the optical spectrum are most probably dominated by the host
galaxy. Figure 8.— ESO R-MAMA.475 image with SIMBAD and NED detections
overlaid. The SIMBAD objects are blue coloured and the red circles indicate
the NED detections. The box size is 10 times 10 arcmin. LBQS 0102-2713 is
located in the center.
|
arxiv-papers
| 2009-05-06T11:51:42 |
2024-09-04T02:49:02.357990
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Th. Boller, K. Linguri, T. Heftrich, M. Weigand",
"submitter": "Thomas Boller",
"url": "https://arxiv.org/abs/0905.0808"
}
|
0905.0896
|
# The Gravitational Field of a Plane Slab
RICARDO E. GAMBOA SARAVÍ Departamento de Física, Facultad de Ciencias
Exactas,
Universidad Nacional de La Plata and IFLP, CONICET.
C.C. 67, 1900 La Plata, Argentina,
quique@fisica.unlp.edu.ar
###### Abstract
We discuss the exact solution of Einstein’s equation corresponding to a static
and plane symmetric distribution of matter with constant positive density
located below $z=0$ matched to vacuum solutions. The internal solution depends
essentially on two constants: the density $\rho$ and a parameter $\kappa$. We
show that these space-times finish down below at an inner singularity at
finite depth $d\leq\sqrt{\frac{\pi}{24\rho}}$. We show that for $\kappa\geq
0.3513\dots$, the dominant energy condition is satisfied all over the space-
time.
We match these singular solutions to the vacuum one and compute the external
gravitational field in terms of slab’s parameters. Depending on the value of
$\kappa$, these slabs are either attractive, repulsive or neutral. The
external solution turns out to be a Rindler’s space-time. Repulsive slabs
explicitly show how negative, but finite pressure can dominate the attraction
of the matter. In this case, the presence of horizons in the vacuum shows that
there are null geodesics which never reach the surface of the slab.
We also consider a static and plane symmetric non-singular distribution of
matter with constant positive density $\rho$ and thickness $d$
($0<d<\sqrt{\frac{\pi}{24\rho}}$) surrounded by two external vacuums. We
explicitly write down the pressure and the external gravitational fields in
terms of $\rho$ and $d$. The solution turns out to be attractive and
remarkably asymmetric: the “upper” solution is Rindler’s vacuum, whereas the
“lower” one is the singular part of Taub’s plane symmetric solution. Inside
the slab, the pressure is positive and bounded, presenting a maximum at an
asymmetrical position between the boundaries. We show that if
$0<\sqrt{6\pi\rho}\,d<1.527\dots$, the dominant energy condition is satisfied
all over the space-time. We also show how the mirror symmetry is restored at
the Newtonian limit.
We also find thinner repulsive slabs by matching a singular slice of the inner
solution to the vacuum.
We also discuss solutions in which an attractive slab and a repulsive one, and
two neutral ones are joined. We also discuss how to assemble a “gravitational
capacitor” by inserting a slice of vacuum between two such slabs.
## 1 Introduction
Due to the complexity of Einstein’s field equations, one cannot find exact
solutions except in spaces of rather high symmetry, but very often with no
direct physical application. Nevertheless, exact solutions can give an idea of
the qualitative features that could arise in General Relativity, and so, of
possible properties of realistic solutions of the field equations.
We have recently discussed exact solutions of Einstein’s equation presenting
an empty (free of matter) singular repelling boundary [1, 2]. These
singularities are not the sources of the fields, but they arise owing to the
attraction of distant matter.
In this paper, we want to illustrate this and other curious features of
relativistic gravitation by means of a simple exact solution: the
gravitational field of a static plane symmetric relativistic perfect
incompressible fluid with positive density located below $z=0$ matched to
vacuum solutions. In reference [3], we analyze in detail the properties of
this internal solution, originally found by A. H. Taub [4] (see also [5, 6,
7]), and we find that it terminates below at an inner singularity at
finite depth $d$, where $0<d<\sqrt{\frac{\pi}{24\rho}}$. Depending on the
value of a parameter $\kappa$, it turns out to be gravitational attractive
($\kappa<\kappa_{\text{crit}}$), neutral ($\kappa=\kappa_{\text{crit}}$) or
repulsive ($\kappa>\kappa_{\text{crit}}$), where
$\kappa_{\text{crit}}=1.2143\dots$. We also show that for $\kappa\geq
0.3513\dots$, the dominant energy condition is satisfied all over the space-
time.
In this paper, we make a detailed analysis of the matching of these exact
solutions to vacuum ones. Here, we impose the continuity of the metric
components and of their first derivatives at the matching surfaces, in
contrast to reference [3], where not all these derivatives are continuous at
the boundary.
In the first place, we consider the matching of the whole singular slabs to
the vacuum, and explicitly compute the external gravitational fields in terms
of the slab parameters. Repulsive slabs explicitly show how negative but
finite pressure can dominate the attraction of the matter. In this case, they
have the maximum depth, i.e., $d=\sqrt{\frac{\pi}{24\rho}}$, and the exterior
solution presents horizons showing that there are vertical photons that cannot
reach the slab surface.
Secondly, we consider a non-singular slice of these slabs with thickness $d$
($0<d<\sqrt{\frac{\pi}{24\rho}}$) surrounded by two external vacuums. Some of
the properties of this solution have already been discussed in reference [6].
Here, we explicitly write down the pressure and the external gravitational
fields in terms of $\rho$ and $d$. The solution turns out to be attractive,
and remarkably asymmetric: the “upper” solution is Rindler’s vacuum, whereas
the “lower” one is the singular part of Taub’s plane symmetric solution.
Inside the slab, the pressure is positive and bounded, presenting a maximum at
an asymmetrical position between the boundaries. We show that if
$0<\sqrt{6\pi\rho}\,d<1.527\dots$, the dominant energy condition is satisfied
all over the space-time. This solution terminates below at an empty
repelling boundary where space-time curvature diverges. This exact solution
clearly shows how the attraction of distant matter can shrink the space-time
in such a way that it finishes at a free of matter singular boundary, as
pointed out in [1]. We also show how the mirror symmetry is restored at the
Newtonian limit.
We also construct thinner repulsive slabs by matching a singular slice of the
inner solution to vacuum. These slabs turn out to be less repulsive than the
ones discussed above, since all incoming vertical null geodesics reach the
slab surface in this case.
For the sake of completeness, in section 2, we include some results from
reference [3] which are necessary for the computations of the following
sections. In section 3, we show under which conditions the dominant energy
condition is satisfied. In section 4, we discuss how solutions can be matched.
In section 5, we study the matching of the whole singular interior solution to
vacuum. In section 6, we match two interior solutions facing each other. In
section 7, we discuss the matching of a non-singular slice of the interior
solution with two different vacuums. In section 8, we show how the mirror
symmetry of this solution is restored at the Newtonian limit. In section 9 we
construct thinner repulsive slabs ($d<\sqrt{\frac{\pi}{24\rho}}$) by matching
a singular slice of the inner solution to vacuum.
Throughout this paper, we adopt the convention in which the space-time metric
has signature $(-\ +\ +\ +)$, the system of units in which the speed of light
$c=1$, Newton’s gravitational constant $G=1$ and $g$ denotes gravitational
field and not the determinant of the metric.
## 2 The interior solution
In this section, we consider the solution of Einstein’s equation corresponding
to a static and plane symmetric distribution of matter with constant positive
density and plane symmetry. That is, it must be invariant under translations
in the plane and under rotations around its normal. The matter we shall
consider is a perfect fluid of uniform density $\rho$. The stress-energy
tensor is
$T_{ab}=(\rho+p)\,u_{a}u_{b}+p\,g_{ab}\,,$ (1)
where $u^{a}$ is the velocity of fluid elements.
Due to the plane symmetry and staticity, following [9] we can find coordinates
$(t,x,y,z)$ such that
$ds^{2}=-\mathcal{G}(z)^{2}\
dt^{2}+e^{2V(z)}\left(dx^{2}+dy^{2}\right)+dz^{2}\,,$ (2)
that is, the more general metric admitting the Killing vectors $\partial_{x}$,
$\partial_{y}$, $x\partial_{y}-y\partial_{x}$ and $\partial_{t}$.
The non identically vanishing components of the Einstein tensor are
$\displaystyle G_{tt}=-\,\mathcal{G}^{2}\left(2\,V^{\prime\prime}+3\,V^{\prime
2}\right)$ (3) $\displaystyle
G_{xx}=G_{yy}=e^{2V}\left({\mathcal{G}^{\prime\prime}}/{\mathcal{G}}+{\mathcal{G}^{\prime}}/{\mathcal{G}}\;V^{\prime}+V^{\prime\prime}+V^{\prime
2}\right)\,,$ (4) $\displaystyle G_{zz}=V^{\prime}\left(2\
{\mathcal{G}^{\prime}}/{\mathcal{G}}+V^{\prime}\right),$ (5)
where a prime $(^{\prime})$ denotes differentiation with respect to $z$.
On the other hand, due to the assumed symmetries and to the fact that the
material content is a perfect fluid, $u_{a}=(-\mathcal{G},0,0,0)$, so
$T_{ab}=\text{diag}\left(\rho\,\mathcal{G}^{2},p\,e^{2V},p\,e^{2V},p\right)\,,$
(6)
where $p$ depends only on the z-coordinate. Thus, Einstein’s equations, i.e.,
$G_{ab}=8\pi T_{ab}$, are
$\displaystyle 2\,V^{\prime\prime}+3\,V^{\prime 2}=-8\pi{\rho}\,,$ (7)
$\displaystyle{\mathcal{G}^{\prime\prime}}/{\mathcal{G}}+{\mathcal{G}^{\prime}}/{\mathcal{G}}\;V^{\prime}+V^{\prime\prime}+V^{\prime
2}=8\pi{p}\,,$ (8) $\displaystyle V^{\prime}\left(2\
\mathcal{G}^{\prime}/{\mathcal{G}}+V^{\prime}\right)=8\pi{p}\,.$ (9)
Moreover, $\nabla_{a}T^{ab}=0$ yields
$p^{\prime}=-(\rho+p)\,\mathcal{G}^{\prime}/\mathcal{G}\,.$ (10)
Of course, due to Bianchi’s identities, equations (7), (8), (9) and (10) are
not independent, so we shall here use only (7), (9), and (10).
Since $\rho$ is constant, from (10) we readily find
$p=C_{p}/\mathcal{G}(z)-\rho,$ (11)
where $C_{p}$ is an arbitrary constant.
By setting $W(z)=e^{\frac{3}{2}V(z)}$, we can write (7) as
$W^{\prime\prime}=-{{6\pi\rho}}\,W$, and its general solution can be written
as
$\displaystyle W(z)=C_{1}\,\sin{(\sqrt{{6\pi\rho}}\ z+C_{2})},\,$ (12)
where $C_{1}$ and $C_{2}$ are arbitrary constants. Therefore, we have
$V(z)={\frac{2}{3}}\,\ln\left({C_{1}\,\sin{(\sqrt{{6\pi\rho}}\
z+C_{2})}}\right).$ (13)
Now, by replacing (11) into (9), we get the first order linear differential
equation which $\mathcal{G}(z)$ obeys
$\displaystyle\mathcal{G}^{\prime}=-\left(\frac{4\pi\rho}{V^{\prime}}+\frac{V^{\prime}}{2}\right)\mathcal{G}+\frac{4\pi
C_{p}}{V^{\prime}}\ $ (14) $\displaystyle=-{\sqrt{{6\pi\rho}}}\left(\tan
u+\frac{1}{3}\cot u\right)\mathcal{G}+{\sqrt{{6\pi\rho}}}\,\
\frac{C_{p}}{\rho}\tan u\,,$ (15)
where $u=\sqrt{{6\pi\rho}}\ z+C_{2}$. And in the last step, we have made use
of (13). The general solution of (14) can be written as (in the appendix we
show how the integral appearing in the first line of (16) is performed)
$\displaystyle\mathcal{G}=\frac{\cos u}{(\sin u)^{1/3}}\left(C_{3}\
+\frac{C_{p}}{\rho}\int_{0}^{u}\frac{(\sin u^{\prime})^{\frac{4}{3}}}{(\cos
u^{\prime})^{2}}du^{\prime}\right)$ $\displaystyle=C_{3}\,\frac{\cos
u}{\left(\sin u\right)^{\frac{1}{3}}}+\frac{3C_{p}}{7\rho}\
{\sin^{2}\\!u}\,\,_{2}F_{1}\\!\Bigl{(}1,\frac{2}{3};\frac{13}{6};\sin^{2}u\Bigr{)},$
(16)
where $C_{3}$ is another arbitrary constant, and ${}_{2}F_{1}(a,b;c;z)$ is the
Gauss hypergeometric function (see the appendix at the end of the paper).
Therefore, the line element (2) becomes
$\displaystyle ds^{2}=-\mathcal{G}(z)^{2}\,dt^{2}+\left({C_{1}\,\sin
u}\right)^{\frac{4}{3}}\left(dx^{2}+dy^{2}\right)+dz^{2},$ (17)
where $\mathcal{G}(z)$ is given in (16) and $u=\sqrt{{6\pi\rho}}\ z+C_{2}$.
Thus, the solution contains five arbitrary constants: $\rho$, $C_{p}$,
$C_{1}$, $C_{2}$, and $C_{3}$. The range of the coordinate $z$ depends on the
value of these constants.
Notice that the metric (17) has a space-time curvature singularity where $\sin
u=0$, since straightforward computation of the scalar quadratic in the Riemann
tensor yields
$\displaystyle R_{abcd}R^{abcd}=4\left({\mathcal{G}^{\prime\prime
2}}+2\,{\mathcal{G}^{\prime 2}}\,V^{\prime
2}\right)/{\mathcal{G}^{2}}+4\left(2\,V^{\prime\prime
2}+4\,V^{\prime\prime}V^{\prime 2}+3\,V^{\prime 4}\right)$
$\displaystyle=\frac{256}{3}\,\,\pi^{2}\rho^{2}\,\left(2+{\sin^{-4}u}+\frac{3}{4}\left(\frac{p}{\rho}+1\right)\left(\frac{3p}{\rho}-1\right)\right),$
(18)
so $R_{abcd}R^{abcd}\rightarrow\infty$ when $\sin u\rightarrow 0$.
On the other hand, by contracting Einstein’s equation, we get
$R(z)=8\pi(\rho-3p(z))=8\pi(4\rho-3C_{p}/\mathcal{G}(z))\,.$ (19)
For $\rho>0$, $C_{p}>0$ and $C_{3}>0$, the solution (17) was found by Taub [4,
7]. Nevertheless, this solution has a wider range of validity.
Of course, from this solution we can obtain vacuum ones as a limit. In fact,
when $C_{p}=0$, it is clear from (11) that $p(z)=-\rho$, and the solution (17)
turns out to be a vacuum solution with a cosmological constant
$\Lambda=8\pi\rho$ [10, 7]
$\displaystyle
ds^{2}=-{\cos^{2}u}\;\sin^{-\frac{2}{3}}u\,dt^{2}+\,\sin^{\frac{4}{3}}u\,\left(dx^{2}+dy^{2}\right)+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad
0<u<\pi,$ (20)
where $u=\sqrt{3\Lambda}/2\ z+C_{2}$. We get from (19) that it is a space-time
with constant scalar curvature $4\Lambda$, and from (2) we get that
$\displaystyle
R_{abcd}R^{abcd}=\frac{4}{3}\,\,\Lambda^{2}\,\left(2+\frac{1}{\sin^{4}u}\right)\
.$ (21)
Now, we take the limit $\Lambda\rightarrow 0$ ($\rho\rightarrow 0$). By
setting $C_{2}=\pi-\frac{\sqrt{3\Lambda}}{6g}$ and an appropriate rescaling of
the coordinates $\left\\{t,x,y\right\\}$, we can readily see that, when
$\Lambda\rightarrow 0$, (20) becomes
$\displaystyle
ds^{2}=-(1-3gz)^{-\frac{2}{3}}\,dt^{2}+(1-3gz)^{\frac{4}{3}}\left(dx^{2}+dy^{2}\right)+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad
0<1-3gz<\infty\,,$ (22)
where $g$ is an arbitrary constant. In (22), the coordinates have been chosen
in such a way that it describes a homogeneous gravitational field $g$ pointing
in the negative $z$-direction in a neighborhood of $z=0$. The metric (22) is
Taub’s [9] vacuum plane solution expressed in the coordinates used in Ref.
[1], where a detailed study of it can be found.
On the other hand, by setting $C_{2}=\frac{\pi}{2}+\frac{\sqrt{3\Lambda}}{2g}$
and an appropriate rescaling of the coordinate $t$, we can readily see that,
when $\Lambda\rightarrow 0$, (20) becomes
$\displaystyle ds^{2}=-(1+gz)^{{2}}\,dt^{2}+dx^{2}+dy^{2}+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad-\frac{1}{g}<z<\infty\,,$
(23)
where $g$ is an arbitrary constant, and the coordinates have been chosen in
such a way that it also describes a homogeneous gravitational field $g$
pointing in the negative $z$-direction in a neighborhood of $z=0$. The metric
(23) is, of course, Rindler’s flat space-time.
For exotic matter, some interesting solutions also arise, but the complete
analysis turns out to be somehow involved. So, for the sake of clarity, we
shall confine our attention to positive values of $\rho$ and $C_{p}\neq 0$,
leaving the complete study to a forthcoming publication [11].
Now, it is clear from (7), (8), (9) and (10) that field equations are
invariant under the transformation $z\rightarrow\pm z+z_{0}$, i.e.,
z-translations and mirror reflections across any plane $z\\!=$const. Thus, if
$\\{\mathcal{G}(z),V(z),p(z)\\}$ is a solution $\\{\mathcal{G}(\pm
z+z_{0}),V(\pm z+z_{0}),p(\pm z+z_{0})\\}$ is another one, where $z_{0}$ is an
arbitrary constant. Therefore, taking into account that
${u=\sqrt{{6\pi\rho}}\,z+C_{2}}$, without loss of generality, the
consideration of the case $0<u<\pi/2$ shall suffice.
By an appropriate rescaling of the coordinates $\left\\{x,y\right\\}$, without
loss of generality, we can write the metric (17) as
$\displaystyle
ds^{2}=-\mathcal{G}(z)^{2}\,dt^{2}+\,\sin^{\frac{4}{3}}u\,\left(dx^{2}+dy^{2}\right)+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad
0<u=\sqrt{{6\pi\rho}}\ z+C_{2}\leq\pi/2,$
and (16) as
$\displaystyle\mathcal{G}(z)=\frac{\kappa C_{p}}{\rho}\,\frac{\cos
u}{\sin^{\frac{1}{3}}u}+\frac{3C_{p}}{7\rho}\
{\sin^{2}u}\,\,\,_{2}F_{1}\\!\Bigl{(}1,\frac{2}{3};\frac{13}{6};\sin^{2}u\Bigr{)},$
(25)
where $\kappa$ is an arbitrary constant.
By replacing (25) into (11), we see that the pressure is independent of
$C_{p}$. On the other hand, since $\mathcal{G}(z)$ appears squared in (2), it
suffices to consider $C_{p}>0$. Therefore, rescaling the coordinate $t$, we
may set $C_{p}=\rho$. Thus, (25) becomes
$\displaystyle\mathcal{G}(z)=G_{\kappa}(u)=\kappa\,\frac{\cos
u}{\sin^{1/3}u}+\frac{3}{7}\
{\sin^{2}u}\,\,\,_{2}F_{1}\\!\Bigl{(}1,\frac{2}{3};\frac{13}{6};\sin^{2}u\Bigr{)},$
(26)
where $G_{\kappa}(u)$ is defined for future use, and we recall that
${u=\sqrt{{6\pi\rho}}\,z+C_{2}}$. Furthermore, (11) becomes
$p(z)=\rho\left(1/\mathcal{G}(z)-1\right).$ (27)
Therefore, the solution depends on two essential parameters, $\rho$ and
$\kappa$. We shall discuss in detail the properties of the functions
$\mathcal{G}(z)$ and $p(z)$ depending on the value of the constant $\kappa$.
By using the transformation (93), we can write $\mathcal{G}(z)$ as
$\displaystyle\mathcal{G}(z)=\frac{\left(\kappa-\kappa_{\text{crit}}\right){\cos
u}+\,_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\cos^{2}u\Bigr{)}}{{\sin^{1/3}u}}\,,$
(28)
where
$\kappa_{\text{crit}}=\frac{\sqrt{\pi}\,{\Gamma(7/6)}}{{\Gamma(2/3)}}=1.2143\dots\,,$
(29)
which is the form used in references [5, 6], and which is more suitable to
analyze its properties near $u=\pi/2$.
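Equations (26)–(29) are straightforward to evaluate numerically. The following
short sketch (not part of the original paper) uses SciPy's Gauss
hypergeometric function to check that $\mathcal{G}(z)|_{u=\pi/2}=1$ for any
$\kappa$, to reproduce $\kappa_{\text{crit}}=1.2143\dots$, and to evaluate the
pressure profile of (27).

```python
# Illustrative numerical sketch (not from the paper) of the interior solution.
import numpy as np
from scipy.special import hyp2f1, gamma

def G_kappa(u, kappa):
    """Metric function of Eq. (26), with u = sqrt(6 pi rho) z + C2."""
    s = np.sin(u)
    return (kappa * np.cos(u) / s**(1.0 / 3.0)
            + (3.0 / 7.0) * s**2 * hyp2f1(1.0, 2.0 / 3.0, 13.0 / 6.0, s**2))

# kappa_crit from Eq. (29)
kappa_crit = np.sqrt(np.pi) * gamma(7.0 / 6.0) / gamma(2.0 / 3.0)
print(kappa_crit)                                    # 1.2143...

# G(pi/2) = 1 for any kappa, since 2F1(1, 2/3; 13/6; 1) = 7/3
for kappa in (-1.0, 0.0, 0.5, kappa_crit, 2.0):
    print(kappa, G_kappa(np.pi / 2.0, kappa))

# pressure in units of rho, Eq. (27): negative near u = 0, with a positive
# maximum in between for 0 < kappa < kappa_crit, and zero at u = pi/2
u = np.linspace(0.05, np.pi / 2.0, 7)
print(1.0 / G_kappa(u, 0.5) - 1.0)
```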
Now, the hypergeometric function in (26) is a monotonically increasing
continuous positive function of $u$ for $0\leq u\leq\pi/2$, since
$c-a-b={1}/{2}>0$. Furthermore, taking into account that
${}_{2}F_{1}(a,b;c;0)=1$ and (93), we have
${}_{2}F_{1}\\!\Bigl{(}1,\frac{2}{3};\frac{13}{6};0\Bigr{)}=1,\,\,\,\,\text{and}\,\,\,\,_{2}F_{1}\\!\Bigl{(}1,\frac{2}{3};\frac{13}{6};1\Bigr{)}=\frac{7}{3}.$
(30)
Therefore, we readily see from (26) that, no matter what the value of $\kappa$
is, $\mathcal{G}(z)|_{u=\pi/2}=1$, and we get then from (27) that $p(z)$
vanishes at $u=\pi/2$. On the other hand, since
$\mathcal{G}(z)=\kappa\,u^{-\frac{1}{3}}+O(u^{\frac{5}{3}})\,\,\,\,\,\,\,\,\,\text{as}\,\,\,\,\,\,u\rightarrow
0\,,$ (31)
$\mathcal{G}(z)|_{u=0}=0$ if $\kappa=0$, whereas it diverges if $\kappa\neq
0$.
For the sake of clarity, we shall analyze separately the cases $\kappa>0$,
$\kappa=0$, and $\kappa<0$.
### 2.1 $\kappa>0$
Figure 1: $\mathcal{G}(z)$, $V(z)$ and $p(z)$, as functions of $u$ for
decreasing values of $\kappa>0$. Since $V(z)$ is independent of $\kappa$, it
is shown once.
In this case, it is clear from (26) that $\mathcal{G}(z)$ is positive definite
when $0<u\leq\pi/2$. On the other hand, from (8) and (9), we get
$\displaystyle\mathcal{G}^{\prime\prime}=\mathcal{G}^{\prime}V^{\prime}-\mathcal{G}V^{\prime\prime}=-\Bigl{(}V^{\prime\prime}+\frac{V^{\prime
2}}{2}+4\pi\rho\Bigr{)}\mathcal{G}+4\pi C_{p}=V^{\prime
2}\mathcal{G}+4\pi\rho,$ (32)
where we have made use of (14), (7) and $C_{p}=\rho$. Then, also
$\mathcal{G}^{\prime\prime}$ is positive definite in $0<u\leq\pi/2$, and so
$\mathcal{G}^{\prime}$ is a monotonically increasing continuous function of
$u$ in this interval.
Now, taking into account that
$\,\mathcal{G^{\prime}}=\partial_{z}\mathcal{G}=\sqrt{6\pi\rho}\
\partial_{u}\mathcal{G}$, we get from (26) that
$\displaystyle\mathcal{G}^{\prime}(z)=-\frac{\kappa\sqrt{6\pi\rho}}{3}u^{-\frac{4}{3}}+O(u^{\frac{2}{3}})\,\,\,\,\,\,\,\,\,\text{as}\,\,\,u\to
0,$ (33)
and from (28) that
$\displaystyle\mathcal{G}^{\prime}(z)|_{u=\pi/2}=\sqrt{6\pi\rho}\left(\kappa_{\text{crit}}-\kappa\right)\,.$
(34)
If $\kappa\geq\kappa_{\text{crit}}$, $\mathcal{G}^{\prime}$ is negative for
small enough values of $u$ and non-positive at ${u=\pi/2}$. Hence
$\mathcal{G}^{\prime}$ is negative in $0<u<\pi/2$, so $\mathcal{G}(z)$ is
decreasing, and then $\mathcal{G}(z)>\mathcal{G}(z)|_{u=\pi/2}=1$ in this
interval (see Fig.1(a) and Fig.1(b)).
For $\kappa_{\text{crit}}>\kappa>0$, $\mathcal{G}^{\prime}$ is negative for
sufficiently small values of $u$ and positive at $\pi/2$. So, there is one
(and only one) value $u_{m}$ where it vanishes. Clearly $\mathcal{G}(z)$
attains a local minimum there. Hence, there is one (and only one) value
$u_{0}$ ($0<u_{0}<\pi/2$) such that
$\mathcal{G}(z)|_{u=u_{0}}=\mathcal{G}(z)|_{u=\pi/2}=1$, and then
$\mathcal{G}(z)<1$ when $u_{0}<u<\pi/2$ (see Fig.1(c) and Fig.1(d)).
Since $\mathcal{G}(z)>0$, it is clear from (27) that $p(z)>0$ if
$\mathcal{G}(z)<1$, and $p(z)$ reaches a maximum when $\mathcal{G}(z)$ attains
a minimum.
Therefore, for $\kappa\geq\kappa_{\text{crit}}$, $p(z)$ is negative when
$0\leq u<\pi/2$ and it increases monotonically from $-\rho$ to $0$ and it
satisfies $|p|\leq\rho$ all over the space-time (see Fig.1(a) and Fig.1(b)).
On the other hand, for $\kappa_{\text{crit}}>\kappa>0$, $p(z)$ grows from
$-\rho$ to a maximum positive value when $u=u_{m}$ where it starts to decrease
and vanishes at $u=\pi/2$. Thus, $p(z)$ is negative when $0<u<u_{0}$ and
positive when $u_{0}<u<\pi/2$ (see Fig.1(c) and Fig.1(d)). It can be readily
seen from (26) and (27) that, as $\kappa$ decreases from
$\kappa_{\text{crit}}$ to $0$, $u_{m}$ moves to the left and the maximum value
of $p(z)/\rho$ monotonically increases from $0$ to $\infty$. In section 3, we
shall show that for $\kappa=\kappa_{dec}=0.3513\dots$ it reaches $1$, and then
for $0<\kappa<\kappa_{dec}$, there is a region of space-time where $p>\rho$
and where the dominant energy condition is thus violated.
### 2.2 $\kappa=0$
In this case, it is clear from (26) that $\mathcal{G}$ monotonically increases
with $u$ from $0$ to $\mathcal{G}(z)|_{u=\pi/2}=1$. Therefore, $p$ is a
monotonically decreasing positive continuous function of $u$ in $0<u<\pi/2$
(see Fig.2(a)). Furthermore, at $u=0$ it diverges, since
$\displaystyle
p(z)\sim\frac{7\rho}{3}u^{-2}\rightarrow+\infty\,\,\,\,\,\,\,\,\,\text{as}\,\,\,u\rightarrow
0.$ (35)
Figure 2: $\mathcal{G}(z)$ and $p(z)$ as functions of $u$ for $\kappa\leq 0$.
### 2.3 $\kappa<0$
In this case, we see from (33) that $\mathcal{G}^{\prime}$ is positive when
$u$ takes small enough values, and from (34) we see that it is also positive
when $u$ is near to $\pi/2$.
Now, suppose that $\mathcal{G}^{\prime}(z)$ attains a local minimum when
$u=u_{1}$ ($0<u_{1}<\pi/2$), then
$\mathcal{G}^{\prime\prime}(z)|_{u=u_{1}}=0$. Hence, we get from (32) that
$\mathcal{G}(z)|_{u=u_{1}}<0$. And taking into account that
$V^{\prime}(z)|_{u=u_{1}}={2\sqrt{6\pi\rho}}/{3}\cot u_{1}>0$, we see from
(14) that $\mathcal{G}^{\prime}(z)|_{u=u_{1}}>0$. Hence the value of
$\mathcal{G}^{\prime}$ at any interior local minimum is positive; since
$\mathcal{G}^{\prime}$ is also positive near both ends of the interval, we
have shown that $\mathcal{G}^{\prime}(z)$ is a continuous positive definite
function when $0<u\leq\pi/2$ if $\kappa<0$.
Therefore, in this case, $\mathcal{G}(z)$ is a continuous function
monotonically increasing with $u$ when $0<u\leq\pi/2$. Since it is negative
for sufficiently small values of $u$ and $1$ when $u=\pi/2$ it must vanish at
a unique value of $z$ when $u=u_{\kappa}$ (say). Furthermore
$\mathcal{G}(z)<1$ when $0<u<\pi/2$. Clearly, we get from (28) that
$u_{\kappa}$ is given implicitly in terms of $\kappa$ through
${\kappa}=\kappa_{\text{crit}}-\frac{\,{}_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\cos^{2}u_{\kappa}\Bigr{)}}{\cos
u_{\kappa}}\,,$ (36)
which follows from setting the numerator of (28) to zero.
We can readily see from (36) that $u_{\kappa}$ is a monotonically decreasing
function of $\kappa$ in $-\infty<\kappa<0$, and it tends to $\pi/2$ when
$\kappa\rightarrow-\infty$ and to $0$ when $\kappa\rightarrow 0^{-}$.
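For a given $\kappa<0$, the location $u_{\kappa}$ of the zero of
$\mathcal{G}$ can be obtained from (36) by simple root-finding. The sketch
below (an illustration, not part of the paper) confirms the limiting behaviour
just described.

```python
# Illustrative sketch (not from the paper): u_kappa from Eq. (36) for kappa < 0.
import numpy as np
from scipy.special import hyp2f1, gamma
from scipy.optimize import brentq

kappa_crit = np.sqrt(np.pi) * gamma(7.0 / 6.0) / gamma(2.0 / 3.0)

def kappa_of_u(u):
    """kappa for which G vanishes at u, i.e. Eq. (36)."""
    return kappa_crit - hyp2f1(-0.5, -1.0 / 6.0, 0.5, np.cos(u)**2) / np.cos(u)

for kappa in (-0.1, -1.0, -10.0):
    u_k = brentq(lambda u: kappa_of_u(u) - kappa, 1e-8, np.pi / 2.0 - 1e-8)
    print(kappa, u_k)    # u_kappa grows towards pi/2 as kappa -> -infinity
```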
From (27), it is clear that $p(z)$ diverges when $u=u_{\kappa}$. Furthermore,
(27) also shows that $p(z)<0$ when $\mathcal{G}(z)<0$. And taking into account
that $\mathcal{G}(z)<1$, we see that $p(z)>0$ when $\mathcal{G}(z)>0$.
Therefore, $p(z)$ is negative when $0<u<u_{\kappa}$ whereas it is positive
when $u_{\kappa}<u<\pi/2$ (see Fig.2(b)).
On the other hand, we see from (18) that, when $\kappa$ is negative, another
space-time curvature singularity arises at $u_{\kappa}$ (besides the one at
$u=0$) since $p$ diverges there.
Therefore, if $\kappa$ is negative, the metric (2) describes two very
different space-times:
(a) For $0<u<u_{\kappa}$, the whole space-time is trapped between two
singularities separated by a finite distance $u_{\kappa}/\sqrt{6\pi\rho}$.
This is a space-time full of a fluid with constant positive density $\rho$ and
negative pressure $p$ monotonically decreasing with $u$, and
$p(z)|_{u=0}=-\rho$ and $p(z)\rightarrow-\infty$ as $u\rightarrow u_{\kappa}$.
(b) For $u_{\kappa}<u<\pi/2$, the pressure is positive and monotonically
decreasing with $u$, $p(z)\rightarrow\infty$ as $u\rightarrow u_{\kappa}$ and
$p(z)|_{u=\pi/2}=0$.
## 3 The maximum of the pressure and the dominant energy condition
We have seen in the preceding section that for
$\kappa\geq\kappa_{\text{crit}}$, $p(z)$ is negative, it increases
monotonically from $-\rho$ to $0$ and it satisfies $|p|\leq\rho$ all over the
space-time (see Fig.1(a) and Fig.1(b)). On the other hand, for $\kappa\leq 0$,
$p(z)$ is unbounded at an inner singularity and thus the dominant energy
condition is not satisfied in this case.
For $\kappa_{\text{crit}}>\kappa>0$, since $\mathcal{G}(z)>0$, it is clear
from (11) that $p(z)>0$ if $\mathcal{G}(z)<1$, and that $p(z)$ reaches a
maximum when $\mathcal{G}(z)$ attains a minimum. Then, $p(z)$ grows from
$-\rho$ to a maximum positive value $p_{m}$ when $u=u_{m}$, where it starts to
decrease and vanishes at $u=\pi/2$. Thus, $-\rho\leq p(z)\leq 0$ for $0<u\leq
u_{0}$ and $0<p(z)\leq p_{m}$ when $u_{0}<u<\pi/2$ (see Fig.1(c)).
We readily see from (9), since $\mathcal{G}^{\prime}(z)|_{u=u_{m}}$ vanishes,
that
$p_{m}=p(z)|_{u=u_{m}}=\frac{1}{8\pi}\left(V^{\prime}(z)\right)^{2}|_{u=u_{m}}=\frac{\rho}{3}\cot^{2}u_{m}\,,$
(37)
where we have made use of (13), and so the maximum value of $p(z)$
monotonically decreases from $\infty$ to $0$ in $0<u_{m}<\pi/2$.
Now, by replacing (37) into (28) and taking into account (27), we can write
down $\kappa$ in terms of $p_{m}$
$\displaystyle\kappa=\kappa_{\text{crit}}+\frac{\left(\rho+{3p_{m}}\right)^{\frac{1}{2}}}{\left({3p_{m}}\right)^{\frac{1}{2}}}\left(\frac{\rho^{\frac{7}{6}}}{(\rho+p_{m})(\rho+3p_{m})^{\frac{1}{6}}}-\,_{2}F_{1}\\!\left(-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\frac{3p_{m}}{\rho+3p_{m}}\right)\right)$
(38)
$\displaystyle=\kappa_{\text{crit}}-{2}\sqrt{\frac{p_{m}}{3\rho}}\left(1-\frac{1}{3}\frac{p_{m}}{\rho}+\frac{8}{21}\frac{{p_{m}}^{3}}{\rho^{3}}+\dots\right)\text{\,\,\,for\,\,\,}p_{m}<\rho,$ (39)
which clearly shows that $\kappa\to\kappa_{\text{crit}}$ as $p_{m}\to 0$. On
the other hand, by using (96), we can write
$\kappa=\frac{{2}}{\sqrt[6]{3}}\left(\frac{\rho}{p_{m}}\right)^{\frac{7}{6}}\left(\frac{3}{7}-\frac{17}{39}\left(\frac{\rho}{p_{m}}\right)+\dots\right)\text{\,\,\,for\,\,\,}p_{m}>\rho,$
(40)
which clearly shows that $\kappa\to 0$ as $p_{m}\to\infty$. Thus, as $\kappa$
increases from $0$ to $\kappa_{\text{crit}}$, $p_{m}$ monotonically decreases
from $\infty$ to $0$ (see Fig.3).
Hence, there is a value $\kappa_{\text{dec}}$ of $\kappa$ for which
$p_{m}=\rho$, and from (38) we see that it is given by
$\kappa_{\text{dec}}=\kappa_{\text{crit}}+\frac{2}{\sqrt{3}}\left(\frac{1}{2\sqrt[3]{2}}-\;_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\frac{3}{4}\Bigr{)}\right)=0.351307\dots\,.$
(41)
Also note that, in this case, we get from (37) that the maximum of the
pressure occurs at $u_{m}=\pi/6$.
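As a quick numerical cross-check of (41) (our own sketch, not part of the original derivation), the hypergeometric term can be evaluated directly; here we take $\kappa_{\text{crit}}={}_{2}F_{1}(-\frac{1}{2},-\frac{1}{6};\frac{1}{2};1)=\sqrt{\pi}\,\Gamma(\frac{7}{6})/\Gamma(\frac{2}{3})$, an assumption (the constant is defined earlier in the paper) that is consistent with the values quoted in the text.

```python
# Sketch: numerical check of eq. (41).
# Assumption: kappa_crit = 2F1(-1/2, -1/6; 1/2; 1) = sqrt(pi)*Gamma(7/6)/Gamma(2/3).
import numpy as np
from scipy.special import hyp2f1

kappa_crit = hyp2f1(-0.5, -1.0 / 6.0, 0.5, 1.0)          # ~ 1.2143
kappa_dec = kappa_crit + (2.0 / np.sqrt(3.0)) * (
    1.0 / (2.0 * 2.0 ** (1.0 / 3.0)) - hyp2f1(-0.5, -1.0 / 6.0, 0.5, 0.75))
print(kappa_crit, kappa_dec)                              # kappa_dec ~ 0.351307
```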
Figure 3: $\kappa$ as a function of $p_{m}$.
Thus, for $0<\kappa<\kappa_{\text{dec}}$, there is a region of space-time
where $p>\rho$ and where the dominant energy condition is thus violated.
However, we see that for $\kappa_{\text{dec}}\leq\kappa<\kappa_{\text{crit}}$,
the condition $|p|<\rho$ is everywhere satisfied.
Therefore, the dominant energy condition is satisfied all over the space-time
if $\kappa\geq\kappa_{\text{dec}}$.
Notice that, for $\kappa_{\text{crit}}>\kappa>0$, by eliminating $\kappa$ by
means of (38), the solution can be parameterized in terms of $p_{m}$ and
$\rho$.
## 4 The matching of solutions and the external gravitational fields
We shall discuss matching the interior solution to a vacuum one, as well as
joining two interior solutions facing each other at the surfaces where the
pressure vanishes. For any value of $\kappa$, $p(z)=0$ at $u=\pi/2$, while for
$\kappa_{\text{crit}}>\kappa>0$ it also vanishes at $u=u_{0}$. Therefore, the
matching at $u=\pi/2$ is always possible, while the matching at $u=u_{0}$ is
also possible in the latter case.
We shall impose the continuity of the metric components and of their first
derivatives at the matching surfaces.
Notice that, due to the symmetry required, vacuum solutions satisfy the field
equations (7), (8) and (9), with $\rho=p=0$. In this case, we immediately get
from (9) that either $V^{\prime}=0$ or $2\,\mathcal{G}^{\prime}/{\mathcal{G}}+V^{\prime}=0$.
In the former case, we get from (8) that $\mathcal{G}^{\prime\prime}=0$, and
the solution is
$ds^{2}=-(A+Bz)^{2}\,dt^{2}+C(dx^{2}+dy^{2})+dz^{2}\,,$ (42)
which is the Rindler space-time.
In the latter one, it can be written as
$ds^{2}=-(A+Bz)^{-\frac{2}{3}}\,dt^{2}+C(A+Bz)^{\frac{4}{3}}\,(dx^{2}+dy^{2})+dz^{2}\,,$
(43)
which is Taub’s vacuum plane solution [9].
Therefore, as pointed out by the authors of reference [6], if the interior
$V^{\prime}$ vanishes at the matching “plane”, we can only match it to Rindler’s
space-time, since for Taub’s solution $V^{\prime}$ does not vanish at any
finite point. If, on the contrary, $V^{\prime}$ does not vanish at the
matching “plane”, we can only match the inner solution to Taub’s solution,
since for Rindler’s space-time $V^{\prime}$ vanishes everywhere.
Now, we see from (13) that $V^{\prime}$ vanishes at $u=\pi/2$ and it is non
zero at $u=u_{0}\neq\pi/2$. Therefore, the solution can be matched to Rindler’s
space-time at $u=\pi/2$ and to Taub’s vacuum plane solution at $u=u_{0}$.
Notice that in reference [3] we did not demand the continuity of
$V^{\prime}(z)$ at the matching surface and we analyzed there the matching of
the solution to Taub’s vacuum plane solution at $u=\pi/2$.
In the next section, we discuss the matching of the whole interior solution to
Rindler vacuum, for any value of $\kappa$ at $u=\pi/2$, while we match two
interior solutions facing each other at $u=\pi/2$ in section 6.
In section 7, for $\kappa_{\text{crit}}>\kappa>0$, we discuss the matching of
the slice of the interior solution $u_{0}\leq u\leq\pi/2$ with both vacua,
while, in section 9, we match the remaining piece (i.e. $0<u<u_{0}$) to
Taub’s vacuum.
## 5 Matching the whole slab to a Rindler space-time
In this section, we discuss matching the whole interior solution to a vacuum
one at $u=\pi/2$.
Since the field equations are invariant under $z$-translation, we can choose
to match the solutions at $z=0$ without loss of generality. So we select
$C_{2}=\pi/2$, and then (28) becomes
$\displaystyle\mathcal{G}(z)=G_{\kappa}(\sqrt{6\pi\rho}\,z+\pi/2)$
$\displaystyle=\frac{-\left(\kappa-\kappa_{\text{crit}}\right){\sin(\sqrt{6\pi\rho}\,z)}+\,_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,z)\Bigr{)}}{{\cos^{1/3}(\sqrt{6\pi\rho}\,z)}}\,.$
(44)
Therefore, the metric (2) reads
$\displaystyle
ds^{2}=-\mathcal{G}(z)^{2}\,dt^{2}+\,\cos^{\frac{4}{3}}(\sqrt{6\pi\rho}\,z)\,\left(dx^{2}+dy^{2}\right)+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad{-\sqrt{\frac{\pi}{24\rho}}}<z\leq
0\,.$ (45)
We must impose the continuity of the components of the metric at the matching
boundary. Notice that $g_{tt}(0)=-\mathcal{G}(0)^{2}=-1$,
$g_{xx}(0)=g_{yy}(0)=1$, and $p(0)=0$.
Furthermore, we also impose the continuity of the derivatives of the metric
components at the boundary. From (34), we have
$\displaystyle\partial_{z}g_{tt}(0)|_{\text{interior}}=-2\,\mathcal{G}(0)\,\mathcal{G}^{\prime}(0)=-2\sqrt{6\pi\rho}\left(\kappa_{\text{crit}}-\kappa\right)\,,$
(46)
and, from (5) we get
$\displaystyle\partial_{z}g_{xx}(0)|_{\text{interior}}=\partial_{z}g_{yy}(0)|_{\text{interior}}=-\frac{4\sqrt{6\pi\rho}}{3}\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,z)\sin(\sqrt{6\pi\rho}\,z)\Big{|}_{z=0}=0\,.$ (47)
The exterior solution, i.e. for $z\geq 0$, is the Rindler space-time
$\displaystyle ds^{2}=-(1+gz)^{2}\,dt^{2}+dx^{2}+dy^{2}+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad
0\leq z<\infty\,,$ (48)
which describes a homogeneous gravitational field $-g$ in the vertical (i.e.,
$z$) direction.
Since $g_{tt}(0)|_{\text{exterior}}=-1$ and
$g_{xx}(0)|_{\text{exterior}}=g_{yy}(0)|_{\text{exterior}}=1$, the continuity
of the metric components is assured. And, concerning the derivatives, we have
$\displaystyle\partial_{z}g_{xx}(z)|_{\text{exterior}}=\partial_{z}g_{yy}(z)|_{\text{exterior}}=0\,,$
(49)
which identically matches to (5).
Moreover, we readily get
$\displaystyle\partial_{z}g_{tt}(z)|_{\text{exterior}}=-2g\,(1+gz)\,.$ (50)
Then, by comparing it with (33), we see that the continuity of
$\partial_{z}g_{tt}$ at the boundary yields
$\displaystyle g=\sqrt{6\pi\rho}\left(\kappa_{\text{crit}}-\kappa\right)\,,$
(51)
which relates the external gravitational field $g$ with matter density $\rho$
and $\kappa$.
Now, by replacing $\kappa$ from (51) into (5), we get
$\displaystyle\mathcal{G}(z)=\frac{{g\,\sin(\sqrt{6\pi\rho}\,z)}+{\sqrt{6\pi\rho}}\,\;_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,z)\Bigr{)}}{{{\sqrt{6\pi\rho}}\;\cos^{1/3}(\sqrt{6\pi\rho}\,z)}}\,,$
(52)
and the solution is parameterized in terms of the external gravitational field
$g$ and the density of the matter $\rho$.
It can readily be seen from (51) that, if $\kappa>\kappa_{\text{crit}}$, $g$
is negative and the slab turns out to be repulsive. If
$\kappa=\kappa_{\text{crit}}$ it is gravitationally neutral, and the exterior
is one half of Minkowski’s space-time. If $\kappa<\kappa_{\text{crit}}$, it is
attractive.
If $\kappa>0$, the depth of the slab is $\sqrt{\frac{\pi}{24\rho}}$
independently of the value of $\kappa$. In this case, the pressure is finite
everywhere, but it is negative deep below and $p=-\rho$ at the inner singularity
(see Fig.1(a), Fig.1(b) and Fig.1(c)). But, as discussed in section 3, only
when $\kappa\geq\kappa_{\text{dec}}$ is the condition $|p|\leq\rho$ everywhere
satisfied.
If $\kappa\leq 0$, the pressure inside the slab is always positive, and it
diverges deep below at the inner singularity (see Fig.2). Its depth is
$d=(\pi/2-u_{\kappa})/\sqrt{6\pi\rho}\,,$ (53)
where $u_{\kappa}$ ($0<u_{\kappa}<\pi/2$) is given implicitly in terms of
$\kappa$ through (36). By using (36), we can write $\kappa$ in terms of $d$
$\displaystyle\kappa=\kappa_{\text{crit}}-\frac{\,{}_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,d)\Bigr{)}}{{\sin(\sqrt{6\pi\rho}\,d)}\;\;{\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,d)}}\,.$
(54)
Now, in this case, by using (51) we can write the external gravitational field
$g$ in terms of the matter density $\rho$ and the depth of the slab $d$
$\displaystyle
g=\frac{\sqrt{6\pi\rho}}{\sin(\sqrt{6\pi\rho}\,d)\;\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,d)}\,\,\,_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,d)\Bigr{)}\,.$
(55)
For the sake of clearness, we summarize the properties of the solutions
discussed above in Table 1.
Case | $\kappa$ | $g$ | $p(z)$ | $|p|\leq\rho$ | Depth | Fig.
---|---|---|---|---|---|---
I | $\kappa>\kappa_{\text{crit}}$ | $<0$ | $-\rho\leq p(z)\leq 0$ | yes | $\sqrt{\frac{\pi}{24\rho}}$ | 1(a)
II | $\kappa=\kappa_{\text{crit}}$ | $=0$ | $-\rho\leq p(z)\leq 0$ | yes | $\sqrt{\frac{\pi}{24\rho}}$ | 1(b)
III | $\kappa_{\text{crit}}>\kappa\geq\kappa_{\text{dec}}$ | $>0$ | $-\rho\leq p(z)\leq p_{m}(\kappa)\leq\rho$ | yes | $\sqrt{\frac{\pi}{24\rho}}$ | 1(c)
IV | $\kappa_{\text{dec}}>\kappa>0$ | $>0$ | $-\rho\leq p(z)\leq p_{m}(\kappa)$ | no | $\sqrt{\frac{\pi}{24\rho}}$ | 1(d)
V | $0\geq\kappa$ | $>0$ | unbounded | no | $\frac{(\pi/2-u_{\kappa})}{\sqrt{6\pi\rho}}$ | 2
Table 1: Properties of the solutions according to the value of $\kappa$.
Some remarks are in order. First, notice that the maximum depth that a slab
with constant density $\rho$ can reach is $\sqrt{\frac{\pi}{24\rho}}$, which is
the counterpart of the well-known bound $M<4R/9$ (equivalently,
$R<\frac{1}{\sqrt{3\pi\rho}}$ for constant density), which holds for spherical symmetry.
If we restrict ourselves to non-“exotic” matter, the dominant energy condition
rules out cases IV and V, as shown in section 3. However, as already
mentioned, it is satisfied in cases I, II and III (see Fig.1(a), Fig.1(b)
and Fig.1(c)). Thus, there are still attractive, neutral and repulsive
solutions satisfying this condition.
In this case, we readily get from (51) the bound
$\displaystyle
g\leq\sqrt{6\pi\rho}\left(\kappa_{\text{crit}}-\kappa_{dec}\right)\approx
3.75\,\sqrt{\rho}\,\,.$ (56)
In order to analyze the geodesics in the vacuum, it is convenient to consider
the transformation from Rindler’s coordinates $t$ and $z$ to Minkowski’s ones
$T$ and $Z$ shown in Table 2. Notice that, for the repulsive case, four
Rindler patches are necessary to cover the whole exterior of the slab. Also
note that, in this case, $z$ becomes the temporal coordinate in quadrants III
and IV, see Fig. 4. In these coordinates, of course, the vacuum metric becomes
$ds^{2}=-dT^{2}+dx^{2}+dy^{2}+dZ^{2}\,.$ (57)
Notice that the “planes” $z=$ constant correspond to the hyperbolae
$Z^{2}-T^{2}=$ constant, and the surfaces $t=$ constant to straight lines through
the origin. On the other hand, incoming vertical null geodesics are given by
$Z+T=$ constant, and outgoing ones by $Z-T=$ constant.
For attractive slabs, we readily see from Fig. 4(a) that all incoming vertical
photons finish at the surface of the slab, while all outgoing ones escape to
infinity. Vertical time-like geodesics start at the surface of the slab, reach
a turning point and fall down to the slab in a finite amount of coordinate
time $t$. Notice that a particle world-line is tangent to only one hyperbola
$Z^{2}-T^{2}=C^{2}$, with $C>1/g$, and that the maximum value of $z$ that it
reaches is $C-1/g$.
For repulsive slabs, Fig. 4(b), two horizons appear in the vacuum: the lines
$T=\pm Z$, showing that not all the vertical null geodesics reach the surface
of the slab. In fact, only vertical incoming photons coming from region IV end
at the slab surface, and only the outgoing ones finishing in region III start
at the slab surface. Incoming particles can reach the surface or bounce at a
turning point before reaching it.
Case | Quadrant | $T$ | $Z$ | $-dT^{2}+dZ^{2}$
---|---|---|---|---
Attractive | I | $(z+{1}/{g})\sinh gt$ | $(z+{1}/{g})\cosh gt$ | $-(z+{1}/{g})^{2}dt^{2}+dz^{2}$
Repulsive | I and II | $(z-{1}/{g})\sinh gt$ | $(z-{1}/{g})\cosh gt$ | $-(z-{1}/{g})^{2}dt^{2}+dz^{2}$
| III | $(z-{1}/{g})\cosh gt$ | $(z-{1}/{g})\sinh gt$ | $-dz^{2}+(z-{1}/{g})^{2}dt^{2}$
| IV | $({1}/{g}-z)\cosh gt$ | $(z-{1}/{g})\sinh gt$ | $-dz^{2}+(z-{1}/{g})^{2}dt^{2}$
Table 2: The transformations from Rindler’s coordinates to Minkowski’s ones.
Figure 4: Vertical time-like and null geodesics in the vacuum
## 6 Matching two slabs
Now we consider two incompressible fluids joined at $z=0$ where the pressure
vanishes, the lower one having density $\rho$ and the upper having density
$\rho^{\prime}$. Thus, the lower solution is given by (5). By means of the
transformation $z\rightarrow-z$, $\rho\rightarrow\rho^{\prime}$ and
$\kappa\rightarrow\kappa^{\prime}$ we get the upper one
$\displaystyle
ds^{2}=-G_{\kappa^{\prime}}(\pi/2-\sqrt{6\pi\rho^{\prime}}\,z)^{2}\,dt^{2}+\,\cos^{\frac{4}{3}}(\sqrt{6\pi\rho^{\prime}}\,z)\,\left(dx^{2}+dy^{2}\right)+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad
0\leq z<\sqrt{\frac{\pi}{24\rho^{\prime}}}\,.$ (58)
From (5), (58) and (6), we can readily see that $g_{tt}(z)$, $g_{xx}(z)$ and
$\partial_{z}g_{xx}(z)$ are continuous at $z=0$. Furthermore, from (46) we see
that the continuity of $\partial_{z}g_{tt}$ requires
$\displaystyle\sqrt{\rho}\left(\kappa_{\text{crit}}-\kappa\right)=-\sqrt{\rho^{\prime}}\left(\kappa_{\text{crit}}-\kappa^{\prime}\right)\,.$
(59)
Thus, if one solution has a $\kappa$ greater than $\kappa_{\text{crit}}$, the
other one must have it smaller than $\kappa_{\text{crit}}$. Therefore, the
joining is only possible between an attractive solution and a repulsive one,
or between two neutral ones.
It is easy to see that we can also insert a slice of arbitrary thickness of
the vacuum solution (2) between them, obtaining a full relativistic plane
“gravitational capacitor”. For example, we can trap a slice of Minkowski’s
space-time between two solutions with $\kappa=\kappa_{\text{crit}}$.
## 7 Attractive Slab surrounded by two different vacuums
We have already seen that, in the case $\kappa_{\text{crit}}>\kappa>0$, the
pressure also vanishes inside the slab at the point where $u=u_{0}$. Here we
discuss the matching of the slice of the interior solution $u_{0}\leq
u\leq\pi/2$ with two vacuums.
Clearly, the thickness of the slab $d$ is given by
$d=\frac{(\pi/2-u_{0})}{\sqrt{6\pi\rho}}\,,$ (60)
and $0<d<\sqrt{\frac{\pi}{24\rho}}$.
Since $\mathcal{G}(z)|_{u=u_{0}}=1$, we can write down from (28) the
expression which gives $\kappa$ in terms of $d$ and $\rho$
$\displaystyle\kappa=\kappa_{\text{crit}}+\frac{{\cos^{1/3}(\sqrt{6\pi\rho}\,d)}-\,_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,d)\Bigr{)}}{{\sin(\sqrt{6\pi\rho}\,d)}}$
(61)
$\displaystyle=\kappa_{\text{crit}}-\frac{\sqrt{6\pi\rho}\,d}{3}\left(1+\frac{2\pi\rho}{3}d^{2}+\frac{(2\pi\rho)^{2}}{5}d^{4}+\dots\right)\text{\,\,\,for\,\,\,}\sqrt{6\pi\rho}\,d<1\,,$
(62)
which clearly shows that $\kappa\to\kappa_{\text{crit}}$ as $d\to 0$. On the
other hand, by using (96), we can write
$\kappa=\frac{{\cos^{1/3}(\sqrt{6\pi\rho}\,d)}}{{\sin(\sqrt{6\pi\rho}\,d)}}\left(1-\frac{3}{7}\cos^{2}(\sqrt{6\pi\rho}\,d)+\dots\right)$
(63)
which clearly shows that $\kappa\to 0$ as $d\to\sqrt{\frac{\pi}{24\rho}}$.
Thus, as $d$ increases from $0$ to $\sqrt{\frac{\pi}{24\rho}}$, $\kappa$
monotonically decreases from $\kappa_{\text{crit}}$ to $0$ (see Fig.5).
Therefore, the maximum thickness $d_{\text{dec}}$ that a solution satisfying
the dominant energy condition can have satisfies
$\displaystyle\kappa_{\text{dec}}=\kappa_{\text{crit}}+\frac{{\cos^{1/3}(\sqrt{6\pi\rho}\,d_{\text{dec}})}-\,_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,d_{\text{dec}})\Bigr{)}}{{\sin(\sqrt{6\pi\rho}\,d_{\text{dec}})}}\,.$
(64)
A straightforward numerical computation gives
$\sqrt{6\pi\rho}\,d_{\text{dec}}=1.52744\dots$. Therefore, if
$0<d<d_{\text{dec}}$, the dominant energy condition is satisfied everywhere.
Whereas if $d_{\text{dec}}<d<\sqrt{\frac{\pi}{24\rho}}$, there is a region
inside the slab where $p(z)>\rho$.
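The quoted value $\sqrt{6\pi\rho}\,d_{\text{dec}}=1.52744\dots$ can be reproduced by solving (64) numerically. The sketch below (our own, under the same assumption for $\kappa_{\text{crit}}$ as above) uses a simple bracketing root finder:

```python
# Sketch: solve eq. (64) for x = sqrt(6*pi*rho)*d_dec.
# Assumption: kappa_crit = 2F1(-1/2, -1/6; 1/2; 1).
import numpy as np
from scipy.special import hyp2f1
from scipy.optimize import brentq

kappa_crit = hyp2f1(-0.5, -1.0 / 6.0, 0.5, 1.0)
kappa_dec = kappa_crit + (2.0 / np.sqrt(3.0)) * (
    0.5 * 2.0 ** (-1.0 / 3.0) - hyp2f1(-0.5, -1.0 / 6.0, 0.5, 0.75))

def kappa_of_x(x):
    """Right-hand side of eq. (61)/(64) as a function of x = sqrt(6*pi*rho)*d."""
    return kappa_crit + (np.cos(x) ** (1.0 / 3.0)
                         - hyp2f1(-0.5, -1.0 / 6.0, 0.5, np.sin(x) ** 2)) / np.sin(x)

x_dec = brentq(lambda x: kappa_of_x(x) - kappa_dec, 0.1, np.pi / 2 - 1e-9)
print(x_dec)                                              # ~ 1.52744
```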
Figure 5: $\kappa$, $g_{u}$ and $g_{l}$ as functions of $d$.
Now, by eliminating $\kappa$ by means of (61), the solution can be
parameterized in terms of $d$ and $\rho$, and (5) becomes
$\displaystyle\mathcal{G}(z)=\frac{\,{}_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,z)\Bigr{)}}{{\cos^{1/3}(\sqrt{6\pi\rho}\,z)}}-\frac{{\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,d)}}{\sin(\sqrt{6\pi\rho}\,d)}\,\frac{\sin(\sqrt{6\pi\rho}\,z)}{\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,z)}$
$\displaystyle+\
\frac{{{}_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,d)\Bigr{)}}}{\sin(\sqrt{6\pi\rho}\,d)}\,\frac{\sin(\sqrt{6\pi\rho}\,z)}{\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,z)}\,.$
(65)
Notice that it clearly shows that $\mathcal{G}(-d)=\mathcal{G}(0)=1$. By means
of (27) and (7) $p(z)$ can also be explicitly written down in terms of $d$ and
$\rho$. The inner line element (5) reads
$\displaystyle
ds^{2}=-\mathcal{G}(z)^{2}\,dt^{2}+\,\cos^{\frac{4}{3}}(\sqrt{6\pi\rho}\,z)\,(dx^{2}+dy^{2})+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad{-\sqrt{\frac{\pi}{24\rho}}}<-d\leq
z\leq 0\,.$ (66)
We must impose the continuity of the components of the metric and their first
derivatives at both matching boundaries, i.e. $z=0$ and $z=-d$.
The matching at $z=0$ was already discussed in section 5. Thus, the upper
exterior solution, i.e. for $z\geq 0$, is the Rindler’s space-time
$\displaystyle ds^{2}=-(1+g_{u}z)^{2}\,dt^{2}+dx^{2}+dy^{2}+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad
0\leq z<\infty\,,$ (67)
which describes a homogeneous gravitational field $-g_{u}$ in the vertical
(i.e., $z$) direction. And, according to (51), we see that the continuity of
$\partial_{z}g_{tt}$ at the upper boundary yields
$\displaystyle
g_{u}=\sqrt{6\pi\rho}\left(\kappa_{\text{crit}}-\kappa\right)\,,$ (68)
which relates the upper external gravitational field $g_{u}$ with matter
density $\rho$ and $\kappa$. By using (61), we can also write it in terms of
$d$ and $\rho$
$\displaystyle
g_{u}=\frac{\sqrt{6\pi\rho}}{{\sin(\sqrt{6\pi\rho}\,d)}}\left({\,{}_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,d)\Bigr{)}}-{\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,d)}\right)\,.$
(69)
At the lower boundary, we have $g_{tt}(-d)=-\mathcal{G}(-d)^{2}=-1$,
$g_{xx}(-d)=g_{yy}(-d)=\cos^{\frac{4}{3}}(\sqrt{6\pi\rho}\,d)$, and $p(-d)=0$.
On the other hand, regarding the derivatives, since
$\mathcal{G}(z)|_{u=u_{0}}=1$ and $p(z)|_{u=u_{0}}=0$, from (9) we get
$\displaystyle\mathcal{G}^{\prime}(z)|_{u=u_{0}}=-\frac{1}{2}V^{\prime}(z)|_{u=u_{0}}=-\frac{\sqrt{6\pi\rho}}{3}\,\cot
u_{0}=-\frac{\sqrt{6\pi\rho}}{3}\,\tan(\sqrt{6\pi\rho}\,d)\,,$ (70)
where we have made use of (13) and (60). Thus,
$\displaystyle\partial_{z}g_{tt}(-d)|_{\text{interior}}=-2\,\mathcal{G}(-d)\,\mathcal{G}^{\prime}(-d)=2\frac{\sqrt{6\pi\rho}}{3}\,\tan(\sqrt{6\pi\rho}\,d)\,.$
(71)
While from (7), we get
$\displaystyle\partial_{z}g_{xx}(-d)|_{\text{interior}}=\partial_{z}g_{yy}(-d)|_{\text{interior}}=4\frac{\sqrt{6\pi\rho}}{3}\,\cos^{\frac{1}{3}}(\sqrt{6\pi\rho}\,d)\,\sin(\sqrt{6\pi\rho}\,d)\,.$
(72)
Taking into account the discussion in section 4, we can write the
corresponding lower exterior solution, i.e. for $z<-d$, as
$\displaystyle
ds^{2}=-\left(1+3g_{l}(d+z)\right)^{-\frac{2}{3}}\,dt^{2}+C_{d}\,\left(1+3g_{l}(d+z)\right)^{\frac{4}{3}}\,(dx^{2}+dy^{2})+dz^{2},$
$\displaystyle-\infty<t<\infty,\quad-\infty<x<\infty,\quad-\infty<y<\infty,\quad-d-\frac{1}{3g_{l}}<z\leq-d\,,$
(73)
which describes a homogeneous gravitational field $+g_{l}$ in the vertical
direction and finishes up at an empty singular boundary at
$z=-d-\frac{1}{3g_{l}}$.
Since $g_{tt}(-d)|_{\text{exterior}}=-1$ and
$g_{xx}(-d)|_{\text{exterior}}=g_{yy}(-d)|_{\text{exterior}}=C_{d}$, we see
that, taking into account (5), the continuity of the metric components is
assured if we set $C_{d}=\cos^{\frac{4}{3}}(\sqrt{6\pi\rho}\,d)$. And,
concerning the derivatives of metric’s components, we have
$\displaystyle\partial_{z}g_{tt}(z)|_{\text{exterior}}=2g_{l}\,\left(1+3g_{l}(d+z)\right)^{-\frac{5}{3}}\,.$
(74)
Therefore, by comparing with (71), we see that the continuity of
$\partial_{z}g_{tt}$ at the lower boundary yields
$\displaystyle g_{l}=\frac{\sqrt{6\pi\rho}}{3}\,\tan(\sqrt{6\pi\rho}\,d)\,,$
(75)
which relates the lower external gravitational field $g_{l}$ with $d$ and
$\rho$.
On the other hand, we get from (7)
$\displaystyle\partial_{z}g_{xx}(z)|_{\text{exterior}}=\partial_{z}g_{yy}(z)|_{\text{exterior}}=4\,\cos^{\frac{4}{3}}(\sqrt{6\pi\rho}\,d)\,g_{l}\,\left(1+3g_{l}(d+z)\right)^{\frac{1}{3}}\,.$
(76)
Taking into account (75), by comparing the last equation with (72), we see
that the matching is complete.
This solution is remarkably asymmetric, not only because both external
gravitational fields are different, as we can readily see by comparing (69)
and (75) (see also Fig.5), but also because the nature of the two vacuums is
completely different: the upper one is flat and semi-infinite, whereas the
lower one is curved and finishes up down below at an empty repelling boundary
where space-time curvature diverges.
These exact solutions clearly show how the attraction of distant matter can
shrink the space-time in such a way that it finishes at an empty singular
boundary, as pointed out in [1].
Free particles or photons move in the lower vacuum
($-d-\frac{1}{3g_{l}}<z<-d$) along the time-like or null geodesics discussed
in detail in [1]. So, all geodesics start and finish at the boundary of the
slab and have a turning point. Non-vertical geodesics reach a turning point
at a finite distance from the singularity (at $z=-d-\frac{1}{3g_{l}}$),
and the smaller their horizontal momentum is, the closer they get to the
singularity. The same occurs for vertically moving particles, i.e., the
higher the energy, the closer they approach the singularity. Only vertical
null geodesics just touch the singularity and bounce (see Fig. 1 of Ref. [1]
upside down).
## 8 The Newtonian limit and the restoration of the mirror symmetry
It should be noted that the solutions so far discussed are mirror-asymmetric.
In fact, it has been shown in [6] that the solution cannot have a “plane” of
symmetry in a region where $p(z)\geq 0$. To see this, suppose that
$z=z_{s}$ is such a “plane”; then it must hold that
$\mathcal{G}^{\prime}=V^{\prime}=p^{\prime}=0$ at $z_{s}$, and so from (9) we
get that also $p(z_{s})=0$, and then $\mathcal{G}(z_{s})=1$. Now, by
differentiating (10) and using (32), we obtain
$p^{\prime\prime}(z_{s})=-4\pi\rho^{2}<0$.
Notice that (13) and the condition $V^{\prime}(z_{s})=0$ imply that $u=\pi/2$.
And then we get from (34) that the condition $\mathcal{G}^{\prime}(z_{s})=0$
implies $\kappa=\kappa_{\text{crit}}$.
Therefore, the only mirror-symmetric solution is the joining of two identical
neutral slabs discussed in section 6. We clearly get from (5) that for this
solution we have
$\displaystyle\mathcal{G}(z)=\frac{\,{}_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};\sin^{2}(\sqrt{6\pi\rho}\,z)\Bigr{)}}{{\left(1-\sin^{2}(\sqrt{6\pi\rho}\,z)\right)^{1/6}}}\,,$
(77)
which shows that it is a $C^{\infty}$ even function of $z$ in
$-\sqrt{\frac{\pi}{24\rho}}<z<\sqrt{\frac{\pi}{24\rho}}$. But, of course, we
have seen in section 2 that $-\rho\leq p(z)\leq 0$ in this case.
However, for the solution of the preceding section, this asymmetry turns out
to disappear when $\sqrt{6\pi\rho}\,d\ll 1$. In fact, from (69) we get
$\displaystyle
g_{u}=2\pi\rho\,d(1+\frac{2}{3}\pi\rho\,d^{2}+\dots)\text{\,\,\,for\,\,\,}\sqrt{6\pi\rho}\,d<1\,,$
(78)
while from (75) we get
$\displaystyle
g_{l}=2\pi\rho\,d(1+2\pi\rho\,d^{2}+\dots)\text{\,\,\,for\,\,\,}\sqrt{6\pi\rho}\,d<1\,.$
(79)
Hence, both gravitational fields tend to the Newtonian result $2\pi\rho\,d$,
and the difference between them is of the order $(\sqrt{6\pi\rho}\,d)^{3}$.
Furthermore, in this limit, (7) becomes
$\displaystyle\mathcal{G}(z)=1+2\pi\rho
z(z+d)+\frac{4}{3}\pi^{2}\rho^{2}z(z^{3}+d^{3})+O((\sqrt{6\pi\rho}\,d)^{6})\,,$
(80)
so
$\displaystyle g_{tt}(z)=-\mathcal{G}(z)^{2}\approx-\left(1+4\pi\rho
z(z+d)\right)\,,$ (81)
which shows that the Newtonian potential inside the slab tends to
$\displaystyle\Phi(z)=2\pi\rho z(z+d)\,.$ (82)
Since $\Phi(-\frac{d}{2}-z)=\Phi(-\frac{d}{2}+z)$, it is mirror-symmetric at
$z=-d/2$.
Moreover, we obtain from (11) that the pressure inside the slab tends to the
hydrostatic Newtonian result
$\displaystyle p(z)=-2\pi\rho^{2}z(z+d)\,.$ (83)
It should also be noted, by comparing (39) and (62), that in this limit, they
lead to
${\frac{p_{m}}{\rho}}=\frac{\pi\rho}{2}\,d^{2}\,.$ (84)
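For clarity, this correspondence follows from keeping only the leading terms of (39) and (62):
$\displaystyle\kappa_{\text{crit}}-\kappa\simeq 2\sqrt{\frac{p_{m}}{3\rho}}\simeq\frac{\sqrt{6\pi\rho}\,d}{3}\quad\Longrightarrow\quad\frac{4p_{m}}{3\rho}\simeq\frac{2\pi\rho\,d^{2}}{3}\quad\Longrightarrow\quad\frac{p_{m}}{\rho}\simeq\frac{\pi\rho}{2}\,d^{2}\,.$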
Therefore, in the Newtonian limit, the mirror symmetry at the middle point
of the slab is restored.
## 9 Thinner Repelling Slabs
By exchanging the place of matter and vacuum, we can also match the piece of
the interior solution discarded in section 7 to the discarded asymptotically
flat tail of Taub’s vacuum, thus getting a repulsive slab.
Clearly, the inner solution is given by (7), but now with
$-\sqrt{\frac{\pi}{24\rho}}<z\leq-d$, while the outer one is given by (73) with
$-d\leq z$. Therefore, we get from (75)
$\displaystyle g=\frac{\sqrt{6\pi\rho}}{3}\,\tan(\sqrt{6\pi\rho}\,d)\,,$ (85)
which relates the external gravitational field $g$ with $d$ and $\rho$. But
now, the thickness of this slab is
$d^{\prime}=\sqrt{\frac{\pi}{24\rho}}-d<\sqrt{\frac{\pi}{24\rho}}\,,$ (86)
and so
$\displaystyle
g=\frac{\sqrt{6\pi\rho}}{3}\,\cot(\sqrt{6\pi\rho}\,d^{\prime})\,.$ (87)
Of course, by means of (27), (7) and (86), $\mathcal{G}(z)$ and $p(z)$ can
also be explicitly written down in terms of $d^{\prime}$ and $\rho$ in this
case.
In this repulsive case, free particles or photons move in the vacuum ($z>-d$)
along the mirror image of the time-like or null geodesics discussed in detail in
[1]. Everything occurs in the vacuum as if there were a Taub singularity inside the
matter at a distance $|1/3g|$ from the surface—this image singularity should
not be confused with the “real” inner one situated a distance $d^{\prime}$ from the
surface. Therefore, only those Taub geodesics for which the distance between
the turning point and the image singularity is smaller than $\frac{1}{3g}$
are cut off at the slab’s surface. For instance, this always occurs for vertical
photons. These facts are easily seen by looking at Fig. 1 of reference [1]
upside down and by exchanging the positions of vacuum and matter.
Notice that these slabs turn out to be less repulsive than the ones discussed
in section 5, since all incoming vertical null geodesics reach the slab
surface in this case.
## 10 Concluding remarks
We have carried out a detailed study of the exact solution of Einstein’s equations
corresponding to a static and plane symmetric distribution of matter with
constant positive density. By matching this internal solution to vacuum ones,
we showed that different situations arise depending on the value of a
parameter $\kappa$.
We found that the dominant energy condition is satisfied only for
$\kappa\geq\kappa_{dec}=0.3513\dots$.
As a result of the matching, we get very simple complete (matter and vacuum)
exact solutions presenting some rather astonishing properties without
counterpart in Newtonian gravitation:
The maximum depth that these slabs can reach is $\sqrt{\frac{\pi}{24\rho}}$
and the solutions turn out to be remarkably asymmetric.
We found repulsive slabs in which a negative but bounded ($|p|\leq\rho$)
pressure dominates over the attraction of the matter. These solutions finish deep
below at a singularity where $p=-\rho$. If their depth is smaller than
$\sqrt{\frac{\pi}{24\rho}}$, the exterior is the asymptotically flat tail of
Taub’s vacuum plane solution, while when they reach the maximum depth the
vacuum turns out to be a flat Rindler space-time with event horizons, showing
that there are incoming vertical photons which never reach the surface of the
slabs in this case.
We also found attractive solutions finishing deep below at a singularity; in
this case, the outer solution is a Rindler space-time.
We also described a non-singular solution of thickness $d$ surrounded by two
vacuums. This solution turns out to be attractive and remarkably asymmetric
because the nature of both vacuums is completely different: the “upper” one is
flat and semi-infinite, whereas the “lower” one is curved and finishes up down
below at an empty repelling boundary where space-time curvature diverges. The
pressure is positive and bounded, presenting a maximum at an asymmetrical
position between the boundaries. We explicitly wrote down the pressure and the
external gravitational fields in terms of $\rho$ and $d$. We showed that if
$0<\sqrt{6\pi\rho}\,d<1.52744\dots$, the dominant energy condition is
satisfied all over the space-time. We also showed how the mirror symmetry is
restored in the Newtonian limit. These exact solutions clearly show how the
attraction of distant matter can shrink the space-time in such a way that it
finishes at an empty singular boundary, as pointed out in [1].
We have also discussed matching an attractive slab to a repulsive one, and two
neutral ones. We also commented on how to assemble relativistic gravitational
capacitors consisting of a slice of vacuum trapped between two such slabs.
## Appendix: Some properties of ${}_{2}F_{1}(a,b;c;x)$
Here, we show how the integral appearing in the first line of (2) is
performed. By doing the change of variable $t=\sin^{2}u^{\prime}$, we can
write
$\int_{0}^{u}\sin^{a}u^{\prime}\,\cos^{b}u^{\prime}\,du^{\prime}=\frac{1}{2}\int_{0}^{\sin^{2}u}t^{\frac{a-1}{2}}\,(1-t)^{\frac{b-1}{2}}\,dt=\frac{1}{2}\
B_{\sin^{2}u}\left(\frac{a+1}{2},\frac{b+1}{2}\right)\,,$ (88)
where $B_{x}(p,q)$ is the incomplete beta function, which is related to a
hypergeometric function through
$B_{x}(p,q)=\frac{x^{p}}{p}\,_{2}F_{1}(p,1-q;p+1;x)$ (89)
(see for example [12]). Therefore,
$\displaystyle\int_{0}^{u}\sin^{a}u^{\prime}\,\cos^{b}u^{\prime}\,du^{\prime}=\frac{(\sin
u)^{a+1}}{a+1}\
_{2}F_{1}\\!\Bigl{(}\frac{a+1}{2},\frac{1-b}{2};\frac{a+3}{2};\sin^{2}u\Bigr{)}$
$\displaystyle=\frac{1}{a+1}\ (\sin u)^{a+1}\ (\cos
u)^{b+1}\>\>_{2}F_{1}\\!\Bigl{(}1,\frac{a+b+2}{2};\frac{a+3}{2};\sin^{2}u\Bigr{)}\,,$
(90)
where we used the transformation
${}_{2}F_{1}(a,b;c;x)=(1-x)^{c-a-b}\,{}_{2}F_{1}(c-a,c-b;c;x)$ in the last
step.
For the sake of completeness, we display here the very few formulas involving
hypergeometric functions ${}_{2}F_{1}(a,b;c;z)$ required to follow through all
the steps of this paper.
As is well known,
${}_{2}F_{1}(a,b;c;z)=1+\frac{ab}{c}z+\frac{a(a+1)b(b+1)}{c(c+1)}\frac{z^{2}}{2!}+\dots\,,\text{\,\,\,for\,\,\,}|z|<1\,.$
(91)
By using the transformation [12, 13]
${}_{2}F_{1}(a,b;c;z)=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}\;\;{}_{2}F_{1}(a,b;a+b-c+1;1-z)$
$\displaystyle+\,(1-z)^{c-a-b}\;\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}\;\;{}_{2}F_{1}(c-a,c-b;c-a-b+1;1-z)\,,$
(92)
with $a=-1/2$, $b=-1/6$, and $c=1/2$, we find the useful relations
$\displaystyle\frac{3}{7}\,\,(1-z)^{\frac{7}{6}}\,\,_{2}F_{1}\\!\Bigl{(}1,\frac{2}{3};\frac{13}{6};1-z\Bigr{)}=-\frac{\sqrt{\pi}\,{\Gamma(\frac{7}{6})}}{{\Gamma(\frac{2}{3})}}\,\sqrt{z}\,+\;_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};z\Bigr{)}$
(93)
$\displaystyle=-\frac{\sqrt{\pi}\Gamma\left(\frac{7}{6}\right)}{\Gamma\left(\frac{2}{3}\right)}\,\sqrt{z}\,+1+\frac{z}{6}+\frac{5z^{2}}{216}+\frac{11z^{3}}{1296}+\dots\,,\text{\,\,\,for\,\,\,}|z|<1\,,$
(94)
or, by making $z\to 1-z$,
$\;{}_{2}F_{1}\\!\Bigl{(}-\frac{1}{2},-\frac{1}{6};\frac{1}{2};1-z\Bigr{)}=\frac{\sqrt{\pi}\,{\Gamma(\frac{7}{6})}}{{\Gamma(\frac{2}{3})}}\sqrt{1-z}+\frac{3}{7}\,z^{\frac{7}{6}}\,_{2}F_{1}\\!\Bigl{(}1,\frac{2}{3};\frac{13}{6};z\Bigr{)}$
(95)
$\displaystyle=\frac{\sqrt{\pi}\Gamma\left(\frac{7}{6}\right)}{\Gamma\left(\frac{2}{3}\right)}\,\sqrt{1-z}+\frac{3}{7}\,z^{\frac{7}{6}}\left(1+\frac{4z}{13}+\frac{40z^{2}}{247}+\frac{128z^{3}}{1235}+\dots\right)\,,\text{\,\,\,for\,\,\,}|z|<1\,.$
(96)
## References
* [1] Gamboa Saraví, R. E.: Int. J. Mod. Phys. A 23, 1995 (2008); Errata: Int. J. Mod. Phys. A 23, 3753 (2008).
* [2] Gamboa Saraví, R. E.: Class. Quantum Grav. 25 045005 (2008).
* [3] Gamboa Saraví, R. E.: Gen. Rel. Grav. in press. Preprint arXiv:0709.3276 [gr-qc].
* [4] Taub, A. H.: Phys. Rev. 103, 454 (1956).
* [5] Avakyan, R. M., Horský, J.: Sov. Astrophys. J. 11, 454 (1975).
* [6] Novotný, J., Kucera, J., Horský, J.: Gen. Rel. Grav. 19, 1195 (1987).
* [7] Stephani, H., Kramer, D., MacCallum, M., Hoenselaers, C., Herlt, E.: Exact Solutions to Einstein’s Field Equations, Second edition, Cambridge Univ. Press (2003).
* [8] Schwarzschild, K.: Sitzber. Deut. Akad. Wiss. Berlin, Kl. Math.-Phys. Tech., 424 (1916).
* [9] Taub, A. H.: Ann. Math. 53, 472 (1951).
* [10] Novotný, J., Horský, J.: Czech. J. Phys. B 24, 718 (1974).
* [11] Gamboa Saraví, R. E.: in preparation.
* [12] Gradshteyn, I. S., Ryzhik, I. M.: Table of Integrals, Series, and Products, Academic Press Inc. (1963).
* [13] Abramowitz, M., Stegun, I. A., eds.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover, 1972.
|
arxiv-papers
| 2009-05-07T14:38:03 |
2024-09-04T02:49:02.367579
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ricardo E. Gamboa Saravi",
"submitter": "Ricardo E. Gamboa Saravi",
"url": "https://arxiv.org/abs/0905.0896"
}
|
0905.0925
|
# Boolean networks with reliable dynamics
Tiago P. Peixoto tiago@fkp.tu-darmstadt.de Barbara Drossel
drossel@fkp.tu-darmstadt.de Institut für Festkörperphysik, TU Darmstadt,
Hochschulstrasse 6, 64289 Darmstadt, Germany
###### Abstract
We investigated the properties of Boolean networks that follow a given
reliable trajectory in state space. A reliable trajectory is defined as a
sequence of states which is independent of the order in which the nodes are
updated. We explored numerically the topology, the update functions, and the
state space structure of these networks, which we constructed using a minimum
number of links and the simplest update functions. We found that the
clustering coefficient is larger than in random networks, and that the
probability distribution of three-node motifs is similar to that found in gene
regulation networks. Among the update functions, only a subset of all possible
functions occur, and they can be classified according to their probability.
More homogeneous functions occur more often, leading to a dominance of
canalyzing functions. Finally, we studied the entire state space of the
networks. We observed that with increasing system size, fixed points become
more dominant, moving the networks close to the frozen phase.
###### pacs:
89.75.Da,05.65.+b,91.30.Dk,91.30.Px
## I Introduction
Boolean networks (BNs) are used to model the dynamics of a wide variety of
complex systems, ranging from neural networks Rosen-Zvi et al. (2001) and
social systems Moreira et al. (2004) to gene regulation networks Lagomarsino
et al. (2005). BNs are composed of nodes with binary states, coupled among
each other. The state of each node evolves according to a function of the
states of the nodes from which it receives its inputs, similarly to what is done when using
cellular automata Wolfram (2002), but in contrast to cellular automata, BNs
have no regular lattice structure, and not all nodes are assigned the same
update function.
The simplest type of BNs are random BNs Kauffman (1969), where the connections
and the update functions are assigned at random to the nodes. These random
models have the advantage of being accessible to analytical calculations, thus
permitting a deep understanding of such systems Drossel (2008). Random BNs can
display three types of dynamical behavior, none of which is very realistic: in
the “frozen” phase, most or all nodes become fixed in a state which is
independent of the initial conditions. In the “chaotic” phase, attractors of
the dynamics are extremely long, and dynamics is very sensitive to
perturbations. At the critical point between these two phases, attractor
numbers are huge and depend strongly on the update scheme used Greil and
Drossel (2005); Klemm and Bornholdt (2005a).
In contrast to random BNs, real biological networks typically display a highly
robust behavior. For instance, the main dynamical trajectory of the yeast
cell-cycle network model derived by Li et al. (2004) changes little
when the nodes are updated in a different order, and the system returns
quickly to this trajectory after a perturbation. In fact, whenever the
functioning of a system depends on the correct execution of a given sequence
of steps, the system must be robust with respect to the omnipresent effects of
noise.
Motivated by this requirement, we focus in the present paper on the robustness
of dynamical trajectories under fluctuations in the time at which the nodes
are updated. We consider the extreme case, where we require the system to have
a trajectory that is completely robust under a change in the update sequence.
This means that at any time all but one node would remain in their present
state when they are updated.
In contrast to the standard approach to BNs, where first the network structure
(i.e., the topology and update functions) is defined and then the dynamics is
investigated, we define first the dynamical trajectory and then construct
networks that satisfy this trajectory, with the trajectory being robust under
changes in the update sequence. A similar method has been used in Lau et al.
(2006). In the next section, we will define the model and methods used. Then,
we will discuss the properties of the networks constructed by this method,
considering the topology, the update functions, and the state space structure.
Finally, we will outline directions for further investigations.
## II The model
A BN is defined as a directed network of $N$ nodes representing Boolean
variables $\mathbold{\sigma}\in\\{1,0\\}^{N}$, which are subject to a
dynamical update rule,
$\sigma_{i}(t+1)=f_{i}\left(\mathbold{\sigma}(t)\right)u_{i}(t)+\sigma_{i}(t)\left[1-u_{i}(t)\right]$
(1)
where $f_{i}$ is the update function assigned to node $i$, which depends
exclusively on the states of its inputs. The binary vector $\mathbold{u}(t)$
represents the _update schedule_ , and has components $u_{i}(t)=1$ if node $i$
should update at time $t$, or $u_{i}(t)=0$ if it should retain the same value.
The update functions $f_{i}$ are conveniently indexed by the outputs of their
truth table as follows: Given an arbitrary input ordering, each input value
combination
$\mathbold{\sigma}_{j}=\\{\sigma_{0},\sigma_{1},\dots,\sigma_{k-1}\\}$ will
have an associated index $c(\mathbold{\sigma})=\sum_{i}\sigma_{i}2^{i}$ which
uniquely identifies it. Any update function $f$ can in turn be uniquely
indexed by $f=\sum_{j}f(\mathbold{\sigma}_{j})2^{c(\mathbold{\sigma}_{j})}$,
where $f(\mathbold{\sigma}_{j})$ is the output of the indexed function, given
the input value combination $\mathbold{\sigma}_{j}$.
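As an illustration (a minimal sketch in Python, ours rather than the authors'; the helper names are hypothetical), both indices can be computed directly from a truth table:

```python
# Sketch (our own): indexing of input combinations and of update functions.
def input_index(sigma_inputs):
    """c(sigma) = sum_i sigma_i * 2**i for a tuple of input values (0 or 1)."""
    return sum(s << i for i, s in enumerate(sigma_inputs))

def function_index(truth_table):
    """f = sum_j f(sigma_j) * 2**c(sigma_j); truth_table maps input tuples to 0/1."""
    return sum(out << input_index(inputs) for inputs, out in truth_table.items())

# Example: a two-input AND function has index 8 (only the entry with c = 3
# has output 1, so it contributes 2**3).
AND = {(0, 0): 0, (1, 0): 0, (0, 1): 0, (1, 1): 1}
print(function_index(AND))  # 8
```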
The update schedule can be chosen in three different ways: (a) Synchronous
(parallel update), where $\mathbold{u}(t)=\mathbb{1}$, and all nodes are
updated simultaneously every time step; (b) Asynchronous and deterministic,
where, for instance, $\mathbold{u}(t)=\\{1-\Theta((t+t^{0}_{i})\mod
t_{i})\\}$, where $t_{i}$ is the period with which vertex $i$ is updated,
$t^{0}_{i}$ is a local phase, and $\Theta(x)$ is the Heaviside step function;
and finally (c) Asynchronous and stochastic, where $u_{j}=1$ and $u_{i\neq
j}=0$; in the fully stochastic case $j$ is a random value in the range
$[1,N]$, chosen independently at each time step.
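For concreteness, the update rule (1) together with the three schedules can be sketched as follows (illustrative Python written by us; the function names and the Heaviside convention $\Theta(0)=0$ are our assumptions):

```python
import random

def step(sigma, funcs, inputs, u):
    """One application of Eq. (1): funcs[i] acts on the current states of inputs[i];
    nodes with u[i] == 1 are updated, all others keep their value."""
    return [funcs[i](tuple(sigma[j] for j in inputs[i])) if u[i] else sigma[i]
            for i in range(len(sigma))]

def schedule_parallel(N, t):
    return [1] * N                       # (a) every node, every time step

def schedule_deterministic(N, t, periods, phases):
    # (b) node i updates whenever (t + phases[i]) mod periods[i] == 0
    return [1 if (t + phases[i]) % periods[i] == 0 else 0 for i in range(N)]

def schedule_stochastic(N, t):
    u = [0] * N
    u[random.randrange(N)] = 1           # (c) a single randomly chosen node per step
    return u
```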
The choice of update schedule should take into account the fact that processes
in biological (cellular) networks are subject to stochastic fluctuations which
can affect the timing of the different steps. In principle, a network could be
organized such that the time interval between subsequent updates is so large
that the update sequence is not affected by a small level of noise in the
update times. In this case, an asynchronous deterministic updating scheme
would be appropriate. However, more generally, the noise will also affect the
sequence in which nodes are updated, suggesting an updating scheme that
contains some degree of stochasticity.
In principle, networks can respond in different ways to stochasticity in the
update sequence (see Fig. 1): (a) The system has no specific sequence of
states, and it quickly looses memory of its past states. (b) The system has
some degree of ordering in the sequence of states, with “checkpoint” states
that occur in a given order, and with certain groups of states occurring in
between. (c) The system has entirely reliable dynamics, where the sequence of
states is always the same on the attractor, no matter in which order the nodes
are updated.
In this paper, we will focus on systems that have an attractor that has
entirely reliable dynamics. Many cellular processes, such as the response to
some external signal, or the cell cycle, can only function correctly if the
system goes through the required sequence of states in the correct order.
Therefore, considering the idealization of fully reliable dynamics is
biologically motivated. Furthermore, studying networks with entirely reliable
dynamics is also of theoretical interest, since it is an idealized situation
on which one can build when studying more complicated cases. Entirely reliable
dynamics can be implemented by enforcing that consecutive states of the
attractor trajectory differ only in the value of one node. In other words, the
Hamming distance between successor states is always 1. It is obvious that this
is the only possible type of trajectory that can be entirely independent of
the update schedule. If two subsequent states differed by the state of two or
more nodes, then it would be possible to devise an update sequence which would
update one node but not the other, in contradiction to our assumption.
Entirely reliable attractors are represented in state space as simple loops.
We denote the number of different states on the attractor by
$L=\sum_{i}l_{i}$, where $l_{i}$ is the number of times node $i$ changed its
state during a full period (since the trajectory is periodic, $l_{i}$ must be
equal to 0 or a multiple of 2). Furthermore, if the states of the system were
represented by the corners of a $N$-dimensional Hamming hypercube, the
trajectory should follow its edges (see Fig. 2). The shortest possible
trajectory length, considering that no node remains at a constant value, is
$L=2N$, with $l_{i}=2$ for all nodes. The longest possible trajectory length
is $L=2^{N}$, where all states of the system are visited, and the trajectory
corresponds to a Hamiltonian walk on the $N$-dimensional Hamming hypercube
(unlike in some types of graphs, such trajectories are always possible on
hypercubes and are known as Gray codes in computer science Knuth (2005)).
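In practice, checking that a candidate trajectory is entirely reliable amounts to verifying that all of its states are distinct and that consecutive states (including the wrap-around step that closes the loop) differ in exactly one node. A small sketch (our own, not the authors' code):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_entirely_reliable(trajectory):
    """trajectory: list of network states (tuples of 0/1) on the attractor.
    True if the states are distinct and each differs from its cyclic successor
    in exactly one node."""
    L = len(trajectory)
    if len(set(trajectory)) != L:
        return False
    return all(hamming(trajectory[t], trajectory[(t + 1) % L]) == 1 for t in range(L))

# Example: a reliable loop of length L = 2N = 6 for N = 3 (each node flips twice).
loop = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1), (0, 0, 1)]
print(is_entirely_reliable(loop))  # True
```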
(a) Stochastic dynamics
(b) “Checkpoint” states
(c) Entirely reliable trajectory
Figure 1: (Color online) Illustration of levels of dynamical reliability. Each
node on the graphs above is a state of the system, and the edges represent
possible transitions between them. Figure 2: Example of reliable trajectory of
length $6$ on a system of size $N=3$.
## III Minimal reliable networks
### III.1 Construction rule
The goal of this section is to construct BNs that have a given entirely
reliable trajectory, and to investigate their properties. A fully reliable
trajectory has the property that the sequence of states is independent of the
updating scheme, which means that under parallel update only one node at a
time changes its state. How networks that go through a given sequence of
states can be constructed, was demonstrated by Lau et al Lau et al. (2006),
who investigated all possible networks which exhibit the main trajectory of
the Yeast cell cycle regulatory network. Thus, we first define the dynamics,
from which we obtain the topology and functions, which is the opposite of what
is usually done in the literature on Boolean networks.
In fact, there exist many networks that display a given trajectory. Even when
the full state space structure is specified, which defines the successor state
of each of its $2^{N}$ possible states, it is possible to construct a network
that has this state space structure. This can be done by constructing a fully
connected graph with $k=N$ and by assigning to each node the function that has
the required output for each of the $2^{N}$ input states. In the end, inputs
that never affect the output can be removed. If there are different sets of
inputs that can be simultaneously removed, different networks are obtained.
When not the entire state space structure, but only one reliable trajectory is
specified, there exist consequently many networks with different topology and
functions which have this trajectory and may differ in the rest of their state
space. We will restrict ourselves to _minimal_ networks, i.e., networks with
the smallest possible number of inputs for each node and the simplest possible
functions, which have the maximum possible number of identical entries in the
truth table. This minimality condition is motivated by the putative cost
associated with more connections or more complicated functions, which would
decrease the fitness of an organism. This is in contrast to what was done in
Lau et al. (2006), where all possible networks were considered, which is only
feasible on very small systems.
Such minimal networks can be constructed by a straightforward algorithm,
because the inputs and the function required for each node can be determined
independently from those of all the other nodes. The inputs for a given node
must include all predecessor nodes, which change their state 1 time step
before the given node changes its state. Additional inputs are required if the
given node assumes, during the course of the trajectory, different binary
states for the same configuration of the predecessor nodes. The choice of
these “excess” inputs is usually not unique and may include self inputs. We
perform this choice at random, but only from the possibilities which minimize
the number of inputs to each node. If not all possible configurations of the
states of the input nodes occur during the course of the trajectory, the
update function of the given node is not unique. We first assign those truth
table entries of the update function that are specified by the trajectory.
Then, we assign to all remaining entries the same output value, and we choose
the majority of output values assigned so far. (If there is no majority, we
choose either value with probability 1/2.)
The algorithm used for choosing the minimal set of inputs proceeds as follows:
To each node, we first assign all predecessor nodes as inputs. Then, if
needed, we choose “excess” inputs. We first set the number of excess inputs to
$k^{\prime}=1$, and we test in a random order the ${N\choose k^{\prime}}$
possible node combinations until we find a node set which, together with the
predecessors, is a valid input set. If no valid combination is found, we
increase $k^{\prime}$ by 1 and repeat the search. Once a valid combination is
found, the corresponding truth table is completed by applying the minimality
condition to its unspecified entries. The run time of this algorithm increases
as $O(lN^{\max(\max(k^{\prime}),1)})$, where $l$ is the average number of
flips per node, and $\max(k^{\prime})$ is the maximum value of $k^{\prime}$
for all nodes. We have observed that the run times are feasible for networks
of size up to $N=400$ and $l=12$. (We note that an alternative procedure
that starts with an input set that contains all nodes and then randomly
removes inputs until a removal is no longer possible, is for larger $k$ much
faster than the procedure used by us. However, it will in general not produce
a minimal network. As an example, consider the case where only one minimal set
of $k$ inputs is possible for a given node. In this case, if one of these
inputs is removed early in the iteration, the input set obtained at the end
will have a size larger than the minimal $k$.)
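The construction described above can be condensed into a rough sketch (a simplification written by us; names and minor details are not from the paper): for each node, collect its predecessors, enlarge the input set with the fewest possible randomly chosen excess inputs until the node's value is a well-defined function of the input states along the trajectory, and fill the unspecified truth-table entries with the majority output.

```python
import random
from itertools import combinations

def build_minimal_node(trajectory, i):
    """Return (inputs, truth_table) for node i, given an entirely reliable
    trajectory (cyclic list of states; consecutive states differ in one node)."""
    L, N = len(trajectory), len(trajectory[0])
    # Predecessors: the node that flips one time step before each flip of node i.
    preds = sorted({next(j for j in range(N)
                         if trajectory[t][j] != trajectory[(t + 1) % L][j])
                    for t in range(L)
                    if trajectory[(t + 1) % L][i] != trajectory[(t + 2) % L][i]})

    def partial_table(inputs):
        # Truth-table entries fixed by the trajectory (sigma_i(t+1) = f_i(sigma(t)));
        # returns None if two time steps demand different outputs for the same inputs.
        table = {}
        for t in range(L):
            key = tuple(trajectory[t][j] for j in inputs)
            out = trajectory[(t + 1) % L][i]
            if table.setdefault(key, out) != out:
                return None
        return table

    inputs, table = list(preds), partial_table(preds)
    k_extra = 0
    while table is None:            # add the smallest possible number of excess inputs
        k_extra += 1
        candidates = [c for c in combinations(range(N), k_extra)
                      if not set(c) & set(preds)]
        random.shuffle(candidates)
        for extra in candidates:
            trial = sorted(preds + list(extra))
            table = partial_table(trial)
            if table is not None:
                inputs = trial
                break
    # Complete the truth table: unspecified entries get the majority of the fixed outputs.
    ones, total = sum(table.values()), len(table)
    fill = 1 if 2 * ones > total else 0 if 2 * ones < total else random.randint(0, 1)
    k = len(inputs)
    full = {tuple((m >> b) & 1 for b in range(k)): None for m in range(2 ** k)}
    for key in full:
        full[key] = table.get(key, fill)
    return inputs, full
```

Looping this routine over all nodes yields one realization of a minimal network compatible with the given trajectory.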
An example for a reliable trajectory and two possible networks with their
functions, obtained with the above algorithm, is given in Figure 3.
Figure 3: (Color online) Example of a random reliable trajectory for $N=10$
and $l=4$, and two possible minimal networks. The edges with dashed (red)
lines represent the inputs that are different between the two networks. Below
each network are the outputs of the truth table of each node, ordered from top
to bottom, and left to right, according to their input combination indices.
Outputs marked in grey (cyan) correspond to input combinations present in the
given trajectory.
We choose the reliable trajectory at random, without taking into consideration
possible particular features of biological networks, such as different
temporal activation patterns of the different nodes, which reflect the
function that the network must fulfill. Instead, we will consider a null
model, where the values of the nodes change randomly. The only restriction
which is imposed is that the trajectory is reliable. The only two parameters
of this trajectory are the number of nodes $N$, and the average number of
flips per node $l$. We generate a random ensemble of reliable trajectories in
the following way: First, we determine how often each node shall be flipped.
To this purpose, for each node $i$ a random number $\lambda_{i}$ is chosen
from a Poisson distribution with mean $(l-2)/2$, implying that node $i$ shall
be flipped $l_{i}\equiv 2\lambda_{i}+2$ times. The average number of flips of
each node is thus identical to $l$, and each node is flipped at least twice.
The length of the trajectory is then
$L=2N+2\sum_{i}\lambda_{i}=\sum_{i}l_{i}$. Then, we arrange these flips in a
randomly chosen order. If the resulting trajectory contains the same network
state twice, it is discarded, and a new sequence of flips is chosen.
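A sketch of this null-model generator (our own illustrative code):

```python
import random
import numpy as np

def random_reliable_trajectory(N, l):
    """Random entirely reliable trajectory: node i flips l_i = 2*Poisson((l-2)/2) + 2
    times; the flips are arranged in random order, and the sequence is discarded
    (and redrawn) whenever a network state would be visited twice."""
    while True:
        flips = [i for i in range(N)
                 for _ in range(2 * np.random.poisson((l - 2) / 2.0) + 2)]
        random.shuffle(flips)
        state = [0] * N
        states = [tuple(state)]
        for i in flips[:-1]:           # the last flip closes the loop back to the start
            state[i] ^= 1
            states.append(tuple(state))
        if len(set(states)) == len(states):
            return states              # L = sum_i l_i states; the successor of the
                                       # last state is again the first one
```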
### III.2 Topological characteristics
We first present results for the topological characteristics of the obtained
networks. We evaluate the degree distribution and the local correlations. The
degree distribution is of course strongly dependent on $l$. Local correlations
can arise when two nodes that are influenced by the same nodes are more likely
to influence each other.
Unless stated otherwise, we averaged the results from several independent
realizations of the minimal trajectories and minimal networks, for different
$N$ and $l$. The number of realizations for small $N$, up to $20$, was at
least $2000$. For intermediate values of $N$, up to $100$, it varied from 50
to 300, depending on $l$. For the larger networks, $N>100$, it ranged from 200
down to 6 networks for $l<12$, with a single realization for $N=400$ and $l=12$.
#### III.2.1 Degree distribution
The number of inputs of a node is at least as large as the number of its
predecessors. Whenever the state of the node cannot be written as a function
of the predecessors alone, “excess” inputs must be chosen, as already
mentioned before. The number of different predecessors $n_{p}$ per node
approaches, for large $N$, on average $l$, since it becomes unlikely for large
$N$ that the same node is chosen twice as predecessor. The typical truth table
size grows therefore with $l$ as $2^{l}$. Since the number of different
predecessor states grows only quadratically as $n_{p}l\sim l^{2}$, one can
expect the number of “excess” inputs to be small, and number of inputs per
node should be
${\left<k\right>}\simeq l,$ (2)
for sufficiently large $N$. This is confirmed by our numerical investigations,
as is shown in Fig. 4.
Figure 4: (Color online) Average degree ${\left<k\right>}$ as function of $l$
for networks of different size $N$. The straight line is the function
${\left<k\right>}=l$.
The degree distribution mirrors the distribution of the number of
predecessors. Since all nodes flip on average the same number of times, the
distribution is expected to follow a Poisson distribution for large enough
$l$. This is indeed the case, as Fig. 5 shows. For small $l$ however, the
distributions are more narrow, because we imposed the condition that each node
flips at least twice, leaving little freedom for additional predecessors when
$l$ is close to 2.
Figure 5: (Color online) In-degree and out-degree distributions of minimal
networks for different values of $l$, for $N=100$. The solid lines correspond
to Poisson distributions with the same average.
#### III.2.2 Local correlations
We obtained information about the local topology of the minimal networks by
evaluating the probability that the neighbours of a given node are connected
to each other. This probability is the so-called _clustering coefficient_
${\left<c\right>}$ Newman (2003). Random uncorrelated networks show absence of
clustering only in the limit $N\to\infty$. Thus, for finite $N$, it is
necessary to compare the obtained value with a random network of equal size
and with equal degree distribution. In order to do this, we calculated the
clustering coefficient ${\left<c_{s}\right>}$ on _shuffled_ networks, where
the links were rewired randomly, preserving the in- and out-degree of each
node. We then calculated the ratio ${\left<c\right>}/{\left<c_{s}\right>}$,
for networks of different size and average flip number $l$. If the ratio
approaches $1$, the network does not exhibit any special clustering. The
results for several values of $N$ and $l$ are shown in Fig. 6.
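The comparison can be sketched as follows (our own code; it uses networkx, measures clustering on the undirected projection of the graphs, which is only one of several possible conventions for directed networks, and assumes the edge list contains no self-loops):

```python
import random
import networkx as nx

def shuffled_edges(edges, n_swaps=10000):
    """Degree-preserving randomization: repeatedly swap the targets of two randomly
    chosen directed edges, rejecting swaps that would create self-loops or multi-edges."""
    e, present = list(edges), set(edges)
    for _ in range(n_swaps):
        i, j = random.sample(range(len(e)), 2)
        (a, b), (c, d) = e[i], e[j]
        if a == d or c == b or (a, d) in present or (c, b) in present:
            continue
        present -= {(a, b), (c, d)}
        present |= {(a, d), (c, b)}
        e[i], e[j] = (a, d), (c, b)
    return e

def clustering_ratio(edges, n_shuffles=20):
    """<c>/<c_s> as in Fig. 6, computed here on undirected projections of the graphs."""
    c = nx.average_clustering(nx.DiGraph(edges).to_undirected())
    cs = [nx.average_clustering(nx.DiGraph(shuffled_edges(edges)).to_undirected())
          for _ in range(n_shuffles)]
    return c / (sum(cs) / len(cs))
```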
Figure 6: (Color online) Clustering ratio
${\left<c\right>}/{\left<c_{s}\right>}$ as a function of the average number of
flips per node $l$, for different network sizes $N$. The gray straight line
corresponds to a decay of the type $1/N$.
The most evident feature of Fig. 6 is that clustering is stronger for smaller
$l$, i.e., for sparse networks. For larger $l$ (and hence larger
${\left<k\right>}$), the average distance between nodes decreases, and the
shuffled and original networks have a similar degree of clustering. This
difference between networks with smaller and larger average degree becomes
more pronounced when the size of the networks $N$ is increased. From the data
in Fig. 6, it appears that the ratio
${\left<c\right>}/{\left<c_{s}\right>}$ increases slowly with $N$. We will
argue in the following that this ratio will reach a finite asymptotic value in
the limit $N\to\infty$.
The finding that the clustering coefficients are larger than for random
networks can be explained by considering the above-mentioned excess inputs
that are required when the function assigned to a node cannot be based on its
predecessor nodes alone. Let us consider two consecutive flips of a node $j$
on a given trajectory. These flips are preceded by flips of the predecessor
nodes, which we call $v$ and $w$. The average time between the two considered
flips of node $j$ is $\sim L/l=N$, implying that there is a considerable
probability that node $v$ flips again before the second flip of node $j$,
giving the sequence
$vj\cdots v\cdots wj\,.$
The update function assigned to node $j$ needs an excess input if neither node
$w$ nor any other predecessor of node $j$ (which can exist only for $l>2$)
flips between the first flip of $j$ and the second flip of $v$. The simplest
choice of this excess input is node $j$ itself. Indeed, self-inputs occur more
often than in the shuffled networks, as is shown in Fig. 7. Since the number
of different possible excess inputs is proportional to $N$, we expect that the
fraction $n_{l}$ of nodes with self-inputs decreases as $n_{l}\sim 1/N$ for
large $N$, but remains larger than that of shuffled networks by a constant
multiplicative factor.
Figure 7: (Color online) Fraction $n_{l}$ of nodes with self-input, as a
function of $N$, for different values of $l$. The dashed curves are obtained
for shuffled networks, with the same degree sequence. The inset shows the
ratio $n_{l}/n^{s}_{l}$, where $n^{s}_{l}$ is the self-input ratio for the
shuffled networks.
The excess input cannot be a self-input if node $w$ flips also in the same
interval, giving the sequence
$vj\cdots v\cdots w\cdots wj\,.$
In this case, an excess input $u$ must be chosen among those nodes that flip
between the two consecutive flips of node $w$, if none of the other
predecessors of $j$ flips in this interval, giving the sequence
$vj\cdots v\cdots w\cdots u\cdots wj\,.$
Now, the average distance between the flips of node $w$ and node $u$ is
smaller than that between two randomly chosen nodes, since $w$ is required to
flip in the indicated interval. Therefore, the probability that $w$ is an
input to $u$ or vice versa is larger than random, and it scales as $1/N$ in
the limit $N\to\infty$. Since $w$ and $u$ are inputs to $j$, it follows that
the clustering coefficient is larger than the random value
${\left<c_{s}\right>}$.
From this consideration, it follows that the ratio
${\left<c\right>}/{\left<c_{s}\right>}$ approaches a constant value in the
limit $N\to\infty$. Furthermore, it follows that this ratio is larger for
smaller $l$, since it is less likely that there exist additional inputs to $j$
that flip in the required interval and make excess inputs unnecessary. The
slight increase seen in Fig. 6 can probably be attributed to a finite-size
effect.
Figure 8: (Color online) The $z$-score of the different three-node subgraphs
of minimal reliable networks, for different values of $l$ and $N=100$. The
profile $z_{\text{st}}$ corresponds to the signal-transduction interaction
network in mammalian cells Milo et al. (2004).
In order to determine which three-node subgraphs contribute to the increased
clustering, we evaluated their $z$-score, which indicates to what extent the
frequency of each subgraph is different compared to the random case. The
$z$-score is defined as
$z_{i}=\frac{{\left<N_{i}\right>}-{\left<N^{s}_{i}\right>}}{\sqrt{{\left<(N^{s}_{i})^{2}\right>}-{\left<N^{s}_{i}\right>}^{2}}},$
(3)
where $N_{i}$ is the number of occurrences of subgraph $i$, and $N_{i}^{s}$ is
the number of occurrences of the same subgraph on a shuffled network with the
same degree sequence. Fig. 8 shows the different possible subgraphs and their
$z$-score. Subgraphs with more links have a higher $z$-score and are therefore
_network motifs_. Sparser subgraphs, where there is no link between two of the
nodes, are rarer than at random, as predicted by the clustering coefficient.
The abundance of denser motifs increases with $l$, as the network itself
becomes more dense, but the overall trend of the $z$-score is the same. One
peculiar feature is the absence of simple loops (subgraph 6), also known as
feedback loops Alon (2007). As was described above, the clustering is mostly
due to the correlations between the inputs of a given node. A simple loop does
not have this type of correlation. Furthermore, it was shown by Klemm et al
Klemm and Bornholdt (2005b) in a study of the reliability of small Boolean
networks, that feedback loops are harmful to reliable dynamics. These authors
obtained a $z$-score profile very similar to Fig. 8 (see Fig. 4 of Klemm and
Bornholdt (2005b)). They also showed that this profile is qualitatively
similar to real biological networks studied in Milo et al. (2004). A direct
comparison is shown in Fig. 8, with the motif profile of the signal-
transduction interaction network in mammalian cells Milo et al. (2004).
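Given the subgraph counts on the original and on the shuffled networks, Eq. (3) reduces to a few lines of code; the sketch below (an illustration of ours, with made-up numbers) makes the normalisation explicit.

```python
import numpy as np

def z_score(n_obs, shuffled_counts):
    """Eq. (3): observed (or ensemble-averaged) subgraph count versus the counts
    obtained on degree-preserving shuffled networks."""
    s = np.asarray(shuffled_counts, dtype=float)
    return (n_obs - s.mean()) / s.std()   # population std, as in Eq. (3)

# Hypothetical example: a subgraph seen 42 times in the minimal network,
# versus its counts in 100 shuffled networks.
rng = np.random.default_rng(0)
print(z_score(42, rng.poisson(25, size=100)))
```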
(a) $k=2$
(b) $k=3$
(c) $k=3$, without self-loops.
(d) $k=4$, without self-loops.
Figure 9: (Color online) Distribution of the different update functions, for
different numbers of inputs, $k$, and for different flip numbers, $l$, for
networks of size $N=20$.
In Fig. 8, we did not keep track of the self-inputs, for simplicity. When self-
loops are included in the subgraphs, their number increases from 13 to 86,
which makes the analysis and presentation more elaborate. We performed this
analysis and found that a subgraph with a specific number of self-loops has a
larger $z$-score than its counterpart with fewer or no self-loops. The
$z$-score pattern of Fig. 8, on the other hand, is repeated for subgraphs
which share the same number of self-loops, which shows that motif occurrence
and self-regulation are largely independent.
### III.3 Properties of update functions
We evaluated the frequency of the different types of update functions in
minimal networks, for different values of $l$, see Fig. 9. Unless otherwise
stated, the results were obtained from $10^{4}$ independent realizations of
networks with $N=20$. We compared the results with those obtained for larger
values of $N$, with no discernible difference other than the reduced
statistical quality. Functions with different numbers of inputs were evaluated
separately.
The functions seem to be distributed according to different classes, where
functions of the same type occur with the same probability, while some do not
occur at all. In order to understand this distribution, it is necessary to
describe in detail what conditions need to be met by the functions, according
to the imposed dynamics and construction rules.
The subsystem composed only of the inputs of a given node follows a certain
“local trajectory” (i.e., sequence of states), which determines, together with the
minimality condition described in Sec. III, the update function of the
considered node. The probabilities of the different possible trajectories
depend on the way the global trajectory is specified, and on the rules for
choosing excess inputs. The restrictions imposed on the local trajectories of
the inputs are as follows:
1. 1.
The local trajectory of the inputs must correspond to a periodic walk on the
$k$-dimensional hypercube representing their states, since the Hamming
distance at each step must be $1$. We note that in this subsystem, the same
input state is allowed to repeat within a period (only the global state
cannot). The vertices of the hypercube can be annotated with the output value
of the function at the corresponding input state (see Fig. 10 for examples).
2. 2.
For large $N$, the trajectories of any two different nodes will be
approximately random and uncorrelated. The only restriction is that every face
of the hypercube will be visited exactly $l_{v}$ times, where $v$ is the index
of the input node that has a fixed state on this face. On average we have
${\left<l_{v}\right>}=l$.
3. 3.
The output values of the function can be distributed on the vertices of the
hypercube that are visited during the walk in any possible way, with the
restriction that the output value must change $l_{j}$ times along the walk,
where $j$ is the index of the considered node. Functions with self-inputs are
an exception: the vertices on the hypercube face corresponding to the self-
input must all have the same output value.
4. 4.
The output values at the vertices of the hypercube which are not visited by
the walk must be equal to the majority of the output values on the walk (this
is the minimality condition defined in section III).
5. 5.
Functions that can be reduced to a function with smaller $k$ cannot occur due
to the minimality condition, and the corresponding trajectory can be confined
to a hypercube of smaller dimension.
Fig. 10 shows examples of trajectories that are allowed or not allowed for the
case $k=3$.
(a) Valid trajectory
(b) Invalid (restriction 2)
(c) Invalid (restriction 4)
(d) Invalid (restriction 5)
Figure 10: Example of input and output trajectories on the $k$-hypercube
representing the states of the inputs, for functions with $k=3$. Allowed
transitions are represented by arrows. The color of each vertex represents the
output value. Fig. (a) shows a valid trajectory. Figs. (b) to (d) show invalid
trajectories, violating the indicated restriction: (b) not all faces of
the cube are visited; (c) the function is not minimal; (d) the function can be
reduced to $k=2$.
The listed restrictions result in the observed distribution of update
functions. We will describe in detail all the possibilities for $k=2$, and
discuss in a more general and approximate manner the functions with $k>2$.
#### III.3.1 Functions with $k=2$
Fig. 9 shows that only 8 of the 16 possible functions occur, and all of them
with equal probability. They are all _canalyzing functions_ , with three
entries 1 (or 0) in the truth table, and one entry 0 (or 1). The hypercube
representation of all functions is shown in Fig. 11. The functions that are
not possible are obviously the constant functions (first row of Fig. 11, from
left to right), and the functions which are insensitive to one of their
inputs, due to restriction 4 (second and third row). The other functions which
do not occur are the reversible functions, which change the output at every
change of an input (fourth row). Those functions, however, are not entirely
impossible: It is possible to construct a trajectory that meets all the listed
requirements, with the specification that the output flips as often as all
inputs together (restrictions 2 and 3). Such trajectories follow the pattern
$vj\cdots wj\cdots vj\cdots wj,$
where $v$ and $w$ are the inputs of $j$. This pattern is impossible for $l=2$,
but can occur for larger $l$, albeit with a small probability, since $k=2$
functions are less likely for larger $l$; furthermore, the probability that a
node has two predecessors which occur twice decreases with $N$ as $\sim
1/N^{2}$.
Figure 11: Representation of all 16 functions with $k=2$ on the 2-hypercube.
On the left are the functions which do not (or rarely) occur in the minimal
networks, and on the right are the canalysing functions which occur with equal
probability.
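The classification underlying Fig. 11 can be reproduced mechanically. In the sketch below (an illustration of ours; truth tables are encoded as 4-bit integers, with bit $i$ giving the output for input state $i$) the 16 functions with $k=2$ are sorted into the classes discussed above.

```python
from collections import Counter

K = 2
STATES = 2 ** K

def outputs(f):
    """Truth table of the function encoded by the integer f."""
    return [(f >> i) & 1 for i in range(STATES)]

def classify(f):
    out = outputs(f)
    if len(set(out)) == 1:
        return "constant"
    # insensitive to input j: flipping bit j of the input state never changes the output
    for j in range(K):
        if all(out[i] == out[i ^ (1 << j)] for i in range(STATES)):
            return "insensitive to one input"
    # reversible: the output changes whenever any single input changes
    if all(out[i] != out[i ^ (1 << j)] for i in range(STATES) for j in range(K)):
        return "reversible"
    return "canalysing"

print(Counter(classify(f) for f in range(2 ** STATES)))
# -> 2 constant, 4 insensitive, 2 reversible, 8 canalysing functions
```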
#### III.3.2 Functions with $k>2$
Functions with $k>2$ seem to fall into different classes, which occur with
different probabilities. This can be seen by plotting the distribution of the
probabilities $p_{f}$ of the different functions, as shown in Fig. 12(a) for
$k=5$. The different classes seem to correspond to different values of the
function homogeneity, defined as the number of minority output values in the
truth table, $d$. This can be verified by selecting only those functions with
a given value of $d$, and plotting their distribution of probabilities, as
shown in Figs. 12(c) to 12(f). The most frequent class comprises the functions
with only one entry in the truth table deviating from the others ($f=2^{i}$
and $f=2^{2^{k}}-1-2^{i}$), i.e. with $d=1$ (see Fig. 12(c)). Those are
_canalysing_ functions, where all inputs are canalysing inputs. Functions with
the same homogeneity fall into subclasses which have different probabilities.
Those functions are often negations ($f^{\prime}=2^{2^{k}}-1-f$) of one
another, and this is due to the existence of self-inputs: self-regulated
functions are not functionally equivalent when they are negated (the input
corresponding to its own output must be negated as well), despite sharing the
same homogeneity. The $0\leftrightarrow 1$ symmetry, however, is always
preserved. When self-loops are ignored, the distribution becomes symmetric
with respect to negation of the output (see Fig. 9(c)), and the homogeneity
classification becomes the predominant criterion to distinguish between the
classes (compare Figs. 12(a) and 12(b)). But even in the absence of
self-loops, the probability classes are not uniquely defined by the
homogeneity, and there are overlaps between the different classes, as
Figs. 12(d) to 12(f) show. Nevertheless, there is a general tendency for
functions with larger $d$ to be less likely.
(a) All functions
(b) No self-loops
(c) $d=1$, no self-loops
(d) $d=2$, no self-loops
(e) $d=3$, no self-loops
(f) $d=4$, no self-loops
Figure 12: (Color online) Distribution of function weights $p_{f}$, subdivided
according to the value of the truth table homogeneity $d$, for different
values of the average flip number $l$, for fixed values $k=5$ and $N=20$.
Fig. 13 shows the probability of finding a function with a given value of $d$.
Since the number of different functions in a given class increases rapidly
with $d$ for small $d$, the maximum of this distribution is shifted to values
of $d$ larger than $1$. If this distribution is corrected by the number
$N_{d}$ of different functions found with the same value of $d$, an overall
decreasing function of $d$ is obtained, as shown in the graphs in the left
column of Fig. 13.
(a) $k=3$
(b) $k=4$
(c) $k=6$
Figure 13: (Color online) Distribution of functions with different values of
the truth table homogeneity $d$, for different average flip number $l$, and
$N=20$.
The observed difference in probability due to different homogeneity can be
explained as follows. We consider a node with $k$ inputs. We denote by
$M=\sum_{i}m_{i}\in[l/2,L-l/2]$ the total time during which the node is in the
state that it assumes less often. The sum is taken over all intervals during
which the node has this state.
If we denote the different possible (combined) states of the input nodes by
letters, we can represent the sequence of states through which the considered
node and its input states go by the following picture:
The shaded areas correspond to the output value 1. A state of the input nodes
that appears inside the shaded (clear) area, must appear again inside the
shaded (clear) area each time it is repeated. If we consider only the above
scenario, and essentially ignore that the trajectories must follow the edges
of a $k$-hypercube, we can show that functions with smaller values of $d$
should occur more often.
Our approximations rely on the fact that, for $N\to\infty$ and $l\gg 1$ (and
hence $L\to\infty$), the shaded areas will be more numerous and will be
further apart in time and less correlated. In this limit, the input state
number $i$ occurs, say, $n_{i}$ times. The probability that each of the input
states occurs only in one type of area is given approximately by
$\prod_{i}\left[\left(\frac{M}{L}\right)^{n_{i}}+\left(\frac{L-M}{L}\right)^{n_{i}}\right]\,.$
(4)
The maximum of this function is attained at $M=l/2$ (or $M=L-l/2$, which is
excluded since we chose $M$ such that it counts the minority part), which is
the minimal possible value. The value of $d$ is bounded by $M$, but can be
smaller since the same input state can repeat. We can in fact see that the
case where the same state repeats at all $M$ times is more probable, by
considering all the possible permutations of the state sequence, for a given
value of $d$,
$\left[\prod_{i\leq d}{M-\displaystyle\sum_{j<i}n_{j}\choose
n_{i}}\right]\left[\prod_{i>d}{L-M-\displaystyle\sum_{d<j<i}n_{j}\choose
n_{i}}\right]$ (5)
and observing that it has a maximum at $d=1$, since $M\ll L$. (This means that
there are $M$ shaded areas of size 1 each.) It follows that with increasing
$l$ the weight of update functions with $d=1$ will become much larger than
that of every other update function, as is evident from Figs. 9 and 13. The
dominance of $d=1$ functions can already be seen for small values of $l$,
although it is less pronounced.
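The claim that expression (4) is largest at the smallest admissible $M$ is easy to check numerically; the sketch below (an illustration of ours, with arbitrarily chosen occupation numbers $n_{i}$) evaluates the product over the full range of $M$.

```python
from math import prod

def p_separated(M, L, n):
    """Expression (4): probability that every input state falls entirely inside
    either the minority (size M) or the majority (size L-M) part of the trajectory."""
    return prod((M / L) ** ni + ((L - M) / L) ** ni for ni in n)

L, l = 1000, 10              # illustrative trajectory length and flip number
n = [3, 5, 2, 4, 6]          # hypothetical occupation numbers n_i of the input states
best_M = max(range(l // 2, L - l // 2 + 1), key=lambda M: p_separated(M, L, n))
print(best_M)                # prints l//2 = 5 (the symmetric endpoint L - l//2 ties)
```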
### III.4 State space structure
Finally, we investigated the state space of the constructed networks. We
considered the system under a stochastic update scheme, since this scheme
underlies the study presented in this paper. In this case, we define an
attractor as a recurrent set of states in state space, with the property that
there are no transitions that escape this set (i.e. a strongly connected
component in the state space graph that has no outgoing connections). The
number of states in this set is called the size of the attractor.
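For small $N$ these attractors can be enumerated directly from the asynchronous state-transition graph. The following sketch (an illustration of ours, using networkx and a made-up three-node network rather than one of the constructed minimal networks) builds the graph of single-node transitions and returns the strongly connected components without outgoing edges.

```python
from itertools import product
import networkx as nx

def attractors(functions, inputs, N):
    """functions[v] maps the tuple of states of inputs[v] to the next value of node v.
    A state is a tuple of N Boolean values; edges are single-node (stochastic) updates."""
    G = nx.DiGraph()
    for state in product((0, 1), repeat=N):
        G.add_node(state)
        for v in range(N):
            nxt = functions[v](tuple(state[u] for u in inputs[v]))
            if nxt != state[v]:
                target = list(state)
                target[v] = nxt
                G.add_edge(state, tuple(target))
    C = nx.condensation(G)                    # DAG of strongly connected components
    return [C.nodes[c]["members"] for c in C if C.out_degree(c) == 0]

# Hypothetical toy network: node 0 copies node 2, node 1 is the AND of nodes 0 and 2,
# node 2 negates node 1.
inputs = {0: [2], 1: [0, 2], 2: [1]}
funcs = {0: lambda s: s[0], 1: lambda s: s[0] & s[1], 2: lambda s: 1 - s[0]}
for a in attractors(funcs, inputs, 3):
    print(len(a), sorted(a))
```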
We evaluated the probabilities of attractors of a given size on the ensemble
of minimal networks, and their average basin size. For small networks (up to
$N=12$) the attractor size probability was obtained by exact enumeration of
the state space. For larger $N$, the state space was sampled, taking care that
the same attractor was not counted twice. This method, however, leads to a
bias, since attractors with smaller basins are less likely to be counted, and
the extent of this bias depends on the size of the network. Nevertheless, this
bias is not relevant for our point of interest, which is the occurrence of
various attractors rather than their precise statistics. Fig. 14 shows that
fixed points almost always exist, and that there are often attractors
which are much larger than the imposed reliable trajectory (we considered
attractors of up to $n_{a}=10^{5}$ states). Note that the probabilities in
Fig. 14 do not sum to $1$, since a given network may have many attractors of
different sizes.
Figure 14: (Color online) Probability of attractor sizes $n_{a}$, for $l=2$
and $l=7$. Attractors corresponding to the given trajectories are plotted
separately with symbols. For each value of $N$ and $l$, $10^{4}$ different
networks were analysed. In the case of attractor sampling, $100$ different
random initial conditions per network were used.
The basin of attraction was measured as the probability of reaching an
attractor, starting from a random configuration. Fig. 15 shows that the
omnipresent fixed point has a large basin of attraction. Larger attractors
occur with smaller probabilities. The weight of the fixed point, compared to
the weight of the imposed reliable trajectory, increases with increasing $N$.
This can be explained by the entries in the truth table which are not uniquely
determined by the reliable trajectory: While the number of entries fixed
throughout the trajectory grows linearly with $l$, the number of remaining
entries (as well as their contribution to the state space) grows exponentially.
In this increasingly large region of the state space, the functions behave as
constant functions.
Figure 15: (Color online) Average attractor probabilities (basin size),
$\left<p_{a}\right>$, for $l=2$ and $l=7$. Attractors corresponding to the
given trajectories are plotted separately with symbols. For each value of $N$
and $l$, $10^{4}$ different networks were analysed, and $100$ different random
initial conditions per network were used.
Attractors which are larger than the given trajectories are due to a portion
of the network being frozen at the values they have at the fixed point, while other
nodes remain frustrated, and their states change stochastically, visiting a
larger portion of the state space, without entering the fixed point or the
reliable trajectory.
For comparison, we briefly looked at the attractor sizes obtained using a
synchronous updating scheme. Not surprisingly, the attractors become much
shorter in this case, with attractors larger than the given trajectory having
only a small probability (not shown).
## IV Conclusion
We have constructed minimal Boolean networks which follow a given reliable
trajectory in state space. The trajectories considered have the necessary
feature that only one node can change its value at any moment in time, which
guarantees that the sequence of states is independent of the order in which
nodes are updated. Otherwise the nodes change their states at randomly
assigned times in the given trajectory, thus constituting a null model for
reliable dynamics. The minimality condition imposed on the networks was that
each node receives the smallest possible set of inputs that allows for
the given trajectory. Additionally, the truth table entries that are not fixed
by the trajectory were set to the majority value imposed by the trajectory. We
then investigated the topology, the update functions, and the state space of
those networks.
The network structure, as manifest in the degree distribution, does not
deviate significantly from a random topology. However, the network exhibits
larger clustering than a random network, and exhibits a characteristic motif
profile, which resembles both real networks of gene regulation and the pattern
of dynamically reliable motifs found in Klemm and Bornholdt (2005b). The
existence of clustering and motifs was explained by considering the “excess”
inputs that are required to avoid contradictions in the truth table, and how
they must be correlated among each other.
The update functions of the nodes show a characteristic distribution, where
only a subset of the possible functions occur, and these are divided into
distinct classes, which occur with different probabilities. The main factor
discerning the different classes is their homogeneity, characterized by the
number of entries of the minority bit in the truth table. Functions with
homogeneity 1 occur with increased probability, and become the dominant
functions in the limit of large trajectories, $l\to\infty$, for fixed $k$.
Functions with more minority entries occur with a smaller probability, and
this probability decreases as the number of minority entries increases. We
presented an analytical justification for this finding, considering how the
local trajectory of the input states of a given function must behave, in the
limit $l\gg 1$.
Finally we investigated the state space of the constructed networks,
considering the possible attractors it can have, in addition to the given
reliable trajectory. To this aim, we used a stochastic update scheme. We
observed that the network almost always exhibits a fixed point of the dynamics,
and often attractors which can be much larger than the given trajectory. The
basin size of the fixed point is very large, and dominates the basin size of
the given trajectory in the limit of large system size. This is a consequence
of the minimality condition imposed on the network: The region of state space
dictated by the imposed trajectory increases only linearly with system size,
while the entire state space grows exponentially. Outside the state space
region fixed by the reliable trajectory, the constructed functions behave as
constant functions, which drive the system nearer to the frozen phase.
In this work, we have used a null model for reliable trajectories, where the
nodes change their values at random times. Real gene regulatory networks
deviate significantly from this, since they must agree with the cell cycle or
the pathway taken during embryonic development. Certain proteins need to be
always present in the cell, while others are produced only under specific
conditions. The degree distribution and the update functions must reflect this
behavior. However, some of the features found for the null model presented in
this work, should also be present in more realistic systems. The existence of
clustering, and the motif profile found, for instance, do not depend strongly
on the specific temporal patterns of the nodes, but are imposed by the
reliability condition. Similarly, the dominance of strongly canalyzing
functions is a consequence of the reliability condition and should be
relatively robust to the introduction of temporal correlations. Nevertheless,
biochemistry makes some canalyzing functions more likely than others.
An important feature of biological networks that is not reflected in the null
model presented in this paper, is the robustness with respect to perturbations
of a node. Such a robustness can only be obtained when more than the minimum
possible number of inputs is assigned to a node. Indeed, it has been shown in
Gershenson et al. (2006) that more redundancy allows for more robustness.
Similarly, requiring that the reliable trajectory has the largest basin of
attraction or that other attractors of the system are also reliable
trajectories, may increase the number of links in the network.
Finally, the requirement that trajectories are fully reliable is an
idealization which goes beyond what is necessary for gene regulatory networks.
Real networks have checkpoint states, but between these states, the precise
sequence of events is not always important. On the other hand, full
reliability may be necessary for certain subsystems of the gene network, where
a strict sequence of local states is required. The minimal reliable networks
discussed in this paper should be compared more realistically to such reliable
modules.
## V Acknowledgements
We thank Ron Milo for providing the signal-transduction network data. We
acknowledge the support of this work by the Humboldt Foundation and by the DFG
under contract number Dr300/5-1.
## References
* Rosen-Zvi et al. (2001) M. Rosen-Zvi, A. Engel, and I. Kanter, Phys. Rev. Lett. 87, 078101 (2001).
* Moreira et al. (2004) A. A. Moreira, A. Mathur, D. Diermeier, and L. A. N. Amaral, Proc. Nat. Ac. Sci. 101, 12085 (2004).
* Lagomarsino et al. (2005) M. C. Lagomarsino, P. Jona, and B. Bassetti, Phys. Rev. Lett. 95, 158701 (2005).
* Wolfram (2002) S. Wolfram, _A New Kind of Science_ (Wolfram Media, 2002), ISBN 1579550088.
* Kauffman (1969) S. A. Kauffman, J. Theor. Biol. 22, 437 (1969).
* Drossel (2008) B. Drossel, in _Reviews of Nonlinear Dynamics and Complexity_ , edited by H. G. Schuster (Wiley, 2008), vol. 1.
* Greil and Drossel (2005) F. Greil and B. Drossel, Phys. Rev. Lett. 95, 048701 (2005).
* Klemm and Bornholdt (2005a) K. Klemm and S. Bornholdt, Phys. Rev. E 72, 055101 (2005a).
* Li et al. (2004) F. Li, T. Long, Y. Lu, Q. Ouyang, and C. Tang, Proc. Nat. Ac. Sci. p. 0305937101 (2004).
* Lau et al. (2006) K. Y. Lau, S. Ganguli, and C. Tang, Phys. Rev. E 75, 051907 (2006).
* Newman (2003) M. E. J. Newman, SIAM Review 45, 167 (2003).
* Alon (2007) U. Alon, _An introduction to systems biology: design principles of biological circuits_ (Chapman & Hall/CRC, 2007).
* Klemm and Bornholdt (2005b) K. Klemm and S. Bornholdt, Proc. Nat. Ac. Sci. 102, 18414 (2005b).
* Milo et al. (2004) R. Milo, S. Itzkovitz, N. Kashtan, R. Levitt, S. Shen-Orr, I. Ayzenshtat, M. Sheffer, and U. Alon, Science 303, 1538 (2004).
* Gershenson et al. (2006) C. Gershenson, S. A. Kauffman, and I. Shmulevich, in _Artificial Life X, Proceedings of the Tenth International Conference on the Simulation and Synthesis of Living Systems_ , edited by L. S. Yaeger, M. A. Bedau, D. Floreano, R. L. Goldstone, and A. Vespignani (MIT Press, 2006), pp. 35–42.
* Knuth (2005) D. E. Knuth, _The Art of Computer Programming, Volume 4, Fascicle 3: Generating All Combinations and Partitions_ (Addison-Wesley Professional, 2005).
|
arxiv-papers
| 2009-05-06T22:14:22 |
2024-09-04T02:49:02.376910
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Tiago P. Peixoto, Barbara Drossel",
"submitter": "Tiago Peixoto",
"url": "https://arxiv.org/abs/0905.0925"
}
|
0905.0946
|
# The Sarkisov program
Christopher D. Hacon Department of Mathematics
University of Utah
155 South 1400 East
JWB 233
Salt Lake City, UT 84112-0090, USA hacon@math.utah.edu and James McKernan
Department of Mathematics
University of California at Santa Barbara
Santa Barbara, CA 93106, USA mckernan@math.ucsb.edu Department of Mathematics
MIT
77 Massachusetts Avenue
Cambridge, MA 02139, USA mckernan@math.mit.edu
###### Abstract.
Any two birational Mori fibre spaces are connected by a sequence of Sarkisov
links.
The first author was partially supported by NSF grant no: 0757897 and by the
Clay Mathematics Institute. The second author was partially supported by NSA
grant no: H98230-06-1-0059, NSF grant no: 0701101 and an Eisenbud fellowship.
Some of this work was done whilst both authors were visiting MSRI and both
authors would like to thank MSRI for its hospitality.
###### Contents
1. 1 Introduction
2. 2 Notation and conventions
3. 3 The combinatorics of ample models
4. 4 Proof of (1.3)
## 1\. Introduction
We prove that any two birational Mori fibre spaces are connected by a sequence
of elementary transformations, known as Sarkisov links:
###### Theorem 1.1.
Suppose that $\phi\colon X\longrightarrow S$ and $\psi\colon Y\longrightarrow
T$ are two Mori fibre spaces with $\mathbb{Q}$-factorial terminal
singularities.
Then $X$ and $Y$ are birational if and only if they are related by a sequence
of Sarkisov links.
Recall the following:
###### Conjecture 1.2.
Let $(Z,\Phi)$ be a kawamata log terminal pair.
Then we may run $f\colon Z\dasharrow X$ the $(K_{Z}+\Phi)$-MMP such that
either
1. (1)
$(X,\Delta)$ is a log terminal model, that is $K_{X}+\Delta$ is nef, or
2. (2)
there is a Mori fibre space $\phi\colon X\longrightarrow S$, that is
$\rho(X/S)=1$ and $-(K_{X}+\Delta)$ is $\phi$-ample,
where $\Delta=f_{*}\Phi$.
We will refer to the log terminal model $X$ and the Mori fibre space $\phi$ as
the output of the $(K_{Z}+\Phi)$-MMP. If $h\colon Z\dasharrow X$ is any
sequence of divisorial contractions and flips for the $(K_{Z}+\Phi)$-MMP then
we say that $h$ is the result of running the $(K_{Z}+\Phi)$-MMP. In other
words if $h$ is the result of running the $(K_{Z}+\Phi)$-MMP then $X$ does not
have to be either a log terminal model or a Mori fibre space.
By [1] the only unknown case of (1.2) is when $K_{Z}+\Phi$ is pseudo-effective
but neither $\Phi$ nor $K_{Z}+\Phi$ is big. Unfortunately the output is not
unique in either case. We will call two Mori fibre spaces $\phi\colon
X\longrightarrow S$ and $\psi\colon Y\longrightarrow T$ Sarkisov related if
$X$ and $Y$ are outcomes of running the $(K_{Z}+\Phi)$-MMP, for the same
$\mathbb{Q}$-factorial kawamata log terminal pair $(Z,\Phi)$. This defines a
category, which we call the Sarkisov category, whose objects are Mori fibre
spaces and whose morphisms are the induced birational maps $X\dasharrow Y$
between two Sarkisov related Mori fibre spaces. Our goal is to show that every
morphism in this category is a product of Sarkisov links. In particular a
Sarkisov link should connect two Sarkisov related Mori fibre spaces.
###### Theorem 1.3.
If $\phi\colon X\longrightarrow S$ and $\psi\colon Y\longrightarrow T$ are two
Sarkisov related Mori fibre spaces then the induced birational map
$\sigma\colon X\dasharrow Y$ is a composition of Sarkisov links.
Note that if $X$ and $Y$ are birational and have $\mathbb{Q}$-factorial
terminal singularities, then $\phi$ and $\psi$ are automatically the outcome
of running the $K_{Z}$-MMP for some projective variety $Z$, so that (1.1) is
an easy consequence of (1.3).
It is proved in [1] that the number of log terminal models is finite if either
$\Phi$ or $K_{Z}+\Phi$ is big, and it is conjectured that in general the
number of log terminal models is finite up to birational automorphisms.
Moreover Kawamata, see [5], has proved:
###### Theorem 1.4.
Suppose that $\sigma\colon X\dasharrow Y$ is a birational map between two
$\mathbb{Q}$-factorial varieties which is an isomorphism in codimension one.
If $K_{X}+\Delta$ and $K_{Y}+\Gamma$ are kawamata log terminal and nef and
$\Gamma$ is the strict transform of $\Delta$ then $\sigma$ is the composition
of $(K_{X}+\Delta)$-flops.
Note that if the pairs $(X,\Delta)$ and $(Y,\Gamma)$ both have
$\mathbb{Q}$-factorial terminal singularities then the birational map $\sigma$
is automatically an isomorphism in codimension one.
We recall the definition of a Sarkisov link. Suppose that $\phi\colon
X\longrightarrow S$ and $\psi\colon Y\longrightarrow T$ are two Mori fibre
spaces. A Sarkisov link $\sigma\colon X\dasharrow Y$ between $\phi$ and $\psi$
is one of four types: There is a divisor $\Xi$ on the space $L$ on the top
left (be it $L=X$ or $L=X^{\prime}$) such that $K_{L}+\Xi$ is kawamata log
terminal and numerically trivial over the base (be it $S$, $T$, or $R$). Every
arrow which is not horizontal is an extremal contraction. If the target is $X$
or $Y$ it is a divisorial contraction. The horizontal dotted arrows are
compositions of $(K_{L}+\Xi)$-flops. Links of type IV break into two types,
IVm and IVs. For a link of type IVm both $s$ and $t$ are Mori fibre spaces.
For a link of type IVs both $s$ and $t$ are small birational contractions. In
this case $R$ is not $\mathbb{Q}$-factorial; for every other type of link all
varieties are $\mathbb{Q}$-factorial. Note that there is an induced birational
map $\sigma\colon X\dasharrow Y$ but not necessarily a rational map between
$S$ and $T$.
The Sarkisov program has its origin in the birational classification of ruled
surfaces. A link of type I corresponds to a diagram in which the top vertical
arrow on the left is the blow up of a point in $\mathbb{P}^{2}$ and $\psi$ is
the natural map given by the pencil of lines; note that there are no flops for
surfaces, so the top horizontal map is always the identity. A link of type III
is the same diagram, reflected in a vertical line. A link of type II
corresponds to the classical elementary transformation between ruled surfaces:
the birational map $X^{\prime}\longrightarrow X$ blows up a point in one fibre
and the birational map $Y^{\prime}\longrightarrow Y$ blows down the old fibre.
Finally a link of type IV corresponds to switching between the two ways to
project $\mathbb{P}^{1}\times\mathbb{P}^{1}$ down to $\mathbb{P}^{1}$.
It is a fun exercise to factor the classical Cremona transformation
$\sigma\colon\mathbb{P}^{2}\dasharrow\mathbb{P}^{2}$,
$[X:Y:Z]\longrightarrow[X^{-1}:Y^{-1}:Z^{-1}]$ into a product of Sarkisov
links. Indeed one can use the Sarkisov program to give a very clean proof that
the birational automorphism group of $\mathbb{P}^{2}$ is generated by this
birational map $\sigma$ and $\operatorname{PGL}(3)$. More generally the
Sarkisov program can sometimes be used to calculate the birational
automorphism group of Mori fibre spaces, especially Fano varieties. With this
said, note that the following problem seems quite hard:
###### Question 1.5.
What are generators of the birational automorphism group of $\mathbb{P}^{3}$?
Note that a link of type IVs only occurs in dimension four or more. For an
example of a link of type IVs simply take $S\dasharrow T$ to be a flop between
threefolds, let $S\longrightarrow R$ be the base of the flop and let
$X=S\times\mathbb{P}^{1}$ and $Y=T\times\mathbb{P}^{1}$ with the obvious maps
down to $S$ and $T$. It is conceivable that one can factor a link of type IVs
into links of type I and III. However given any positive integer $k$ it is
easy to write down examples of links of type IV which cannot be factored into
fewer than $k$ links of type I, II or III.
Let us now turn to a description of the proof of (1.3). The proof is based on
the original ideas of the Sarkisov program (as explained by Corti and Reid
[3]; see also [2]). We are given a birational map $\sigma\colon X\dasharrow Y$
and the objective is to factor $\sigma$ into a product of Sarkisov links. In
the original proof one keeps track of some subtle invariants and the idea is
to prove:
* •
the first Sarkisov link $\sigma_{1}$ exists,
* •
if one chooses $\sigma_{1}$ appropriately then the invariants improve, and
* •
the invariants cannot increase infinitely often.
Sarkisov links arise naturally if one plays the $2$-ray game. If the relative
Picard number is two then there are only two rays to contract and this gives a
natural way to order the steps of the minimal model program. One interesting
feature of the original proof is that it is a little tricky to prove the
existence of the first Sarkisov link, even if we assume existence and
termination of flips. In the original proof one picks a linear system on $Y$
and pulls it back to $X$. There are then three invariants to keep track of;
the singularities of the linear system on $X$, as measured by the canonical
threshold, the number of divisors of log discrepancy one (after rescaling to
the canonical threshold) and the pseudo-effective threshold. Even for
threefolds it is very hard to establish that these invariants satisfy the
ascending chain condition.
Our approach is quite different. We don’t consider any linear systems nor do
we try to keep track of any invariants. Instead we use one of the main results
of [1], namely finiteness of ample models for kawamata log terminal pairs
$(Z,A+B)$. Here $A$ is a fixed ample $\mathbb{Q}$-divisor and $B$ ranges over
a finite dimensional affine space of Weil divisors. The closure of the set of
divisors $B$ with the same ample model is a disjoint union of finitely many
polytopes and the union of all of these polytopes corresponds to divisors in
the effective cone.
Now if the space of Weil divisors spans the Néron-Severi group then one can
read off which ample model admits a contraction to another ample model from
the combinatorics of the polytopes, (3.3). Further this property is preserved
on taking a general two dimensional slice, (3.4). Sarkisov links then
correspond to points on the boundary of the effective cone which are contained
in more than two polytopes, (3.7). To obtain the required factorisation it
suffices to simply traverse the boundary. In other words instead of
considering the closed cone of curves and playing the $2$-ray game we look at
the dual picture of Weil divisors and we work inside a carefully chosen two
dimensional affine space. The details of the correct choice of underlying
affine space are contained in §4.
To illustrate some of these ideas, let us consider an easy case. Let $S$ be
the blow up of $\mathbb{P}^{2}$ at two points. Then $S$ is a toric surface and
there are five invariant divisors. The two exceptional divisors, $E_{1}$ and
$E_{2}$, the strict transform $L$ of the line which meets $E_{1}$ and $E_{2}$,
and finally the strict transform $L_{1}$ and $L_{2}$ of two lines, one of
which meets $E_{1}$ and one of which meets $E_{2}$. Then the cone of effective
divisors is spanned by the invariant divisors and according to [4] the
polytopes we are looking for are obtained by considering the chamber
decomposition given by the invariant divisors. Since $L_{1}=L+E_{1}$ and
$L_{2}=L+E_{2}$ the cone of effective divisors is spanned by $L$, $E_{1}$ and
$E_{2}$. Since $-K_{S}$ is ample, we can pick an ample $\mathbb{Q}$-divisor
$A$ such that $K_{S}+A\sim_{\mathbb{Q}}0$ and $K_{S}+A+E_{1}+E_{2}+L$ is
divisorially log terminal. Let $V$ be the real vector space of Weil divisors
spanned by $E_{1}$, $E_{2}$ and $L$. In this case projecting
$\mathcal{L}_{A}(V)$ from the origin we get a two dimensional picture of the
chamber decomposition.
We have labelled each polytope by the corresponding model. Imagine going
around the boundary clockwise, starting just before the point corresponding to
$L$. The point $L$ corresponds to a Sarkisov link of type IVm, the point
$L+E_{2}$ a link of type II, the point $E_{2}$ a link of type III, the point
$E_{1}$ a link of type I and the point $L+E_{1}$ another link of type II.
## 2\. Notation and conventions
We work over the field of complex numbers $\mathbb{C}$. An
$\mathbb{R}$-Cartier divisor $D$ on a variety $X$ is nef if $D\cdot C\geq 0$
for any curve $C\subset X$. We say that two $\mathbb{R}$-divisors $D_{1}$,
$D_{2}$ are $\mathbb{R}$-linearly equivalent ($D_{1}\sim_{\mathbb{R}}D_{2}$)
if $D_{1}-D_{2}=\sum r_{i}(f_{i})$ where $r_{i}\in\mathbb{R}$ and $f_{i}$ are
rational functions on $X$. We say that an $\mathbb{R}$-Weil divisor $D$ is big
if we may find an ample $\mathbb{R}$-divisor $A$ and an $\mathbb{R}$-divisor
$B\geq 0$, such that $D\sim_{\mathbb{R}}A+B$. A divisor $D$ is pseudo-
effective, if for any ample divisor $A$ and any rational number $\epsilon>0$,
the divisor $D+\epsilon A$ is big. If $A$ is a $\mathbb{Q}$-divisor, we say
that $A$ is a general ample $\mathbb{Q}$-divisor if $A$ is ample and there is
a sufficiently divisible integer $m>0$ such that $mA$ is very ample and
$mA\in|mA|$ is very general.
A log pair $(X,\Delta)$ is a normal variety $X$ and an $\mathbb{R}$-Weil
divisor $\Delta\geq 0$ such that $K_{X}+\Delta$ is $\mathbb{R}$-Cartier. We
say that a log pair $(X,\Delta)$ is log smooth, if $X$ is smooth and the
support of $\Delta$ is a divisor with global normal crossings. A projective
birational morphism $g\colon Y\longrightarrow X$ is a log resolution of the
pair $(X,\Delta)$ if $Y$ is smooth and the strict transform $\Gamma$ of
$\Delta$ union the exceptional set $E$ of $g$ is a divisor with normal
crossings support. If we write
$K_{Y}+\Gamma+E=g^{*}(K_{X}+\Delta)+\sum a_{i}E_{i},$
where $E=\sum E_{i}$ is the sum of the exceptional divisors then the log
discrepancy $a(E_{i},X,\Delta)$ of $E_{i}$ is $a_{i}$. By convention the log
discrepancy of any divisor $B$ which is not exceptional is $1-b$, where $b$ is
the coefficient of $B$ in $\Delta$. The log discrepancy $a$ is the infimum
of the log discrepancy of any divisor.
A pair $(X,\Delta)$ is kawamata log terminal if $a>0$. We say that the pair
$(X,\Delta)$ is log canonical if $a\geq 0$. We say that the pair $(X,\Delta)$
is terminal if the log discrepancy of any exceptional divisor is greater than
one.
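As a quick illustration of this convention (a standard example): let $g\colon Y\longrightarrow X$ be the blow up of a smooth point of a surface, with exceptional curve $E$, and take $\Delta=0$, so that $\Gamma=0$. Since $K_{Y}=g^{*}K_{X}+E$ we get
$K_{Y}+E=g^{*}K_{X}+2E,$
so the log discrepancy of $E$ is $a(E,X,0)=2>1$, consistent with the fact that smooth varieties have terminal singularities.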
We say that a rational map $\phi\colon X\dasharrow Y$ is a rational
contraction if there is a resolution $p\colon W\longrightarrow X$ and $q\colon
W\longrightarrow Y$ of $\phi$ such that $p$ and $q$ are contraction morphisms
and $p$ is birational. We say that $\phi$ is a birational contraction if $q$
is in addition birational and every $p$-exceptional divisor is
$q$-exceptional. If in addition $\phi^{-1}$ is also a birational contraction,
we say that $\phi$ is a small birational map. We refer the reader to [1] for
the definitions of negative and non-positive rational contractions and of log
terminal models.
If $\mathcal{C}$ is a closed convex subset of a finite dimensional real vector space
then $\mathcal{C}^{*}$ denotes the dual convex set in the dual real vector
space.
## 3\. The combinatorics of ample models
We fix some notation. $Z$ is a smooth projective variety, $V$ is a finite
dimensional affine subspace of the real vector space
$\operatorname{WDiv}_{\mathbb{R}}(Z)$ of Weil divisors on $Z$, which is
defined over the rationals, and $A\geq 0$ is an ample $\mathbb{Q}$-divisor on
$Z$. We suppose that there is an element $\Theta_{0}$ of $\mathcal{L}_{A}(V)$
such that $K_{Z}+\Theta_{0}$ is big and kawamata log terminal.
We recall some definitions and notation from [1]:
###### Definition 3.1.
Let $D$ be an $\mathbb{R}$-divisor on $Z$.
We say that $f\colon Z\dasharrow X$ is the ample model of $D$, if $f$ is a
rational contraction, $X$ is a normal projective variety and there is an ample
divisor $H$ on $X$ such that if $p\colon W\longrightarrow Z$ and $q\colon
W\longrightarrow X$ resolve $f$ and we write
$p^{*}D\sim_{\mathbb{R}}q^{*}H+E$, then $E\geq 0$ and for every
$B\sim_{\mathbb{R}}p^{*}D$ if $B\geq 0$ then $B\geq E$.
Note that if $f$ is birational then $q_{*}E=0$.
###### Definition 3.2.
Let
$V_{A}=\{\,\Theta\,|\,\Theta=A+B,\ B\in V\,\},$
$\mathcal{L}_{A}(V)=\{\,\Theta=A+B\in V_{A}\,|\,\text{$K_{Z}+\Theta$ is log canonical and $B\geq 0$}\,\},$
$\mathcal{E}_{A}(V)=\{\,\Theta\in\mathcal{L}_{A}(V)\,|\,\text{$K_{Z}+\Theta$ is pseudo-effective}\,\}.$
Given a rational contraction $f\colon Z\dasharrow X$, define
$\mathcal{A}_{A,f}(V)=\{\,\Theta\in\mathcal{E}_{A}(V)\,|\,\text{$f$ is the ample model of $(Z,\Theta)$}\,\}.$
In addition, let $\mathcal{C}_{A,f}(V)$ denote the closure of
$\mathcal{A}_{A,f}(V)$.
###### Theorem 3.3.
There are finitely many $1\leq i\leq m$ rational contractions $f_{i}\colon
Z\dasharrow X_{i}$ with the following properties:
1. (1)
$\{\,\mathcal{A}_{i}=\mathcal{A}_{A,f_{i}}\,|\,1\leq i\leq m\,\}$ is a
partition of $\mathcal{E}_{A}(V)$. $\mathcal{A}_{i}$ is a finite
union of interiors of rational polytopes. If $f_{i}$ is birational then
$\mathcal{C}_{i}=\mathcal{C}_{A,f_{i}}$ is a rational polytope.
2. (2)
If $1\leq i\leq m$ and $1\leq j\leq m$ are two indices such that
$\mathcal{A}_{j}\cap\mathcal{C}_{i}\neq\varnothing$ then there is a
contraction morphism $f_{i,j}\colon X_{i}\longrightarrow X_{j}$ and a
factorisation $f_{j}=f_{i,j}\circ f_{i}$.
Now suppose in addition that $V$ spans the Néron-Severi group of $Z$.
1. (3)
Pick $1\leq i\leq m$ such that a connected component $\mathcal{C}$ of
$\mathcal{C}_{i}$ intersects the interior of $\mathcal{L}_{A}(V)$. The
following are equivalent
* •
$\mathcal{C}$ spans $V$.
* •
If $\Theta\in\mathcal{A}_{i}\cap\mathcal{C}$ then $f_{i}$ is a log terminal
model of $K_{Z}+\Theta$.
* •
$f_{i}$ is birational and $X_{i}$ is $\mathbb{Q}$-factorial.
2. (4)
If $1\leq i\leq m$ and $1\leq j\leq m$ are two indices such that
$\mathcal{C}_{i}$ spans $V$ and $\Theta$ is a general point of
$\mathcal{A}_{j}\cap\mathcal{C}_{i}$ which is also a point of the interior of
$\mathcal{L}_{A}(V)$ then $\mathcal{C}_{i}$ and
$\overline{\operatorname{NE}}(X_{i}/X_{j})^{*}\times\mathbb{R}^{k}$ are
locally isomorphic in a neighbourhood of $\Theta$, for some $k\geq 0$. Further
the relative Picard number of $f_{i,j}\colon X_{i}\longrightarrow X_{j}$ is
equal to the difference in the dimensions of $\mathcal{C}_{i}$ and
$\mathcal{C}_{j}\cap\mathcal{C}_{i}$.
###### Proof.
(1) is proved in [1].
Pick $\Theta\in\mathcal{A}_{j}\cap\mathcal{C}_{i}$ and
$\Theta^{\prime}\in\mathcal{A}_{i}$ so that
$\Theta_{t}=\Theta+t(\Theta^{\prime}-\Theta)\in\mathcal{A}_{i}\qquad\text{if}\qquad
t\in(0,1].$
By finiteness of log terminal models, cf. [1], we may find a positive constant
$\delta>0$ and a birational contraction $f\colon Z\dasharrow X$ which is a log
terminal model of $K_{Z}+\Theta_{t}$ for $t\in(0,\delta]$. Replacing
$\Theta^{\prime}=\Theta_{1}$ by $\Theta_{\delta}$ we may assume that
$\delta=1$. If we set
$\Delta_{t}=f_{*}\Theta_{t},$
then $K_{X}+\Delta_{t}$ is kawamata log terminal and nef, and $f$ is
$K_{Z}+\Theta_{t}$ non-positive for $t\in[0,1]$. As $\Delta_{t}$ is big the
base point free theorem implies that $K_{X}+\Delta_{t}$ is semiample and so
there is an induced contraction morphism $g_{i}\colon X\longrightarrow X_{i}$
together with ample divisors $H_{1/2}$ and $H_{1}$ such that
$K_{X}+\Delta_{1/2}=g_{i}^{*}H_{1/2}\qquad\text{and}\qquad
K_{X}+\Delta_{1}=g_{i}^{*}H_{1}.$
If we set
$H_{t}=(2t-1)H_{1}+2(1-t)H_{1/2},$
then
$\displaystyle K_{X}+\Delta_{t}$
$\displaystyle=(2t-1)(K_{X}+\Delta_{1})+2(1-t)(K_{X}+\Delta_{1/2})$
$\displaystyle=(2t-1)g_{i}^{*}H_{1}+2(1-t)g_{i}^{*}H_{1/2}$
$\displaystyle=g_{i}^{*}H_{t},$
for all $t\in[0,1]$. As $K_{X}+\Delta_{0}$ is semiample, it follows that
$H_{0}$ is semiample and the associated contraction $f_{i,j}\colon
X_{i}\longrightarrow X_{j}$ is the required morphism. This is (2).
Now suppose that $V$ spans the Néron-Severi group of $Z$. Suppose that
$\mathcal{C}$ spans $V$. Pick $\Theta$ in the interior of
$\mathcal{C}\cap\mathcal{A}_{i}$. Let $f\colon Z\dasharrow X$ be a log
terminal model of $K_{Z}+\Theta$. It is proved in [1] that $f=f_{j}$ for some
index $1\leq j\leq m$ and that $\Theta\in\mathcal{C}_{j}$. But then
$\mathcal{A}_{i}\cap\mathcal{A}_{j}\neq\varnothing$ so that $i=j$.
If $f_{i}$ is a log terminal model of $K_{Z}+\Theta$ then $f_{i}$ is
birational and $X_{i}$ is $\mathbb{Q}$-factorial.
Finally suppose that $f_{i}$ is birational and $X_{i}$ is
$\mathbb{Q}$-factorial. Fix $\Theta\in\mathcal{A}_{i}$. Pick any divisor $B\in
V$ such that $-B$ is ample $K_{X_{i}}+f_{i*}(\Theta+B)$ is ample and
$\Theta+B\in\mathcal{L}_{A}(V)$. Then $f_{i}$ is $(K_{Z}+\Theta+B)$-negative
and so $\Theta+B\in\mathcal{A}_{i}$. But then $\mathcal{C}_{i}$ spans $V$.
This is (3).
We now prove (4). Let $f=f_{i}$ and $X=X_{i}$. As $\mathcal{C}_{i}$ spans $V$,
(3) implies that $f$ is birational and $X$ is $\mathbb{Q}$-factorial so that
$f$ is a $\mathbb{Q}$-factorial weak log canonical model of $K_{Z}+\Theta$.
Suppose that ${E}_{1},{E}_{2},\dots,{E}_{k}$ are the divisors contracted by
$f$. Pick $B_{i}\in V$ numerically equivalent to $E_{i}$. If we let
$E_{0}=\sum E_{i}$ and $B_{0}=\sum B_{i}$ then $E_{0}$ and $B_{0}$ are
numerically equivalent. As $\Theta$ belongs to the interior of
$\mathcal{L}_{A}(V)$ we may find $\delta>0$ such that $K_{Z}+\Theta+\delta
E_{0}$ and $K_{Z}+\Theta+\delta B_{0}$ are both kawamata log terminal. Then
$f$ is $(K_{Z}+\Theta+\delta E_{0})$-negative and so $f$ is a log terminal
model of $K_{Z}+\Theta+\delta E_{0}$ and $f_{j}$ is the ample model of
$K_{Z}+\Theta+\delta E_{0}$. But then $f$ is also a log terminal model of
$K_{Z}+\Theta+\delta B_{0}$ and $f_{j}$ is also the ample model of
$K_{Z}+\Theta+\delta B_{0}$. In particular $\Theta+\delta
B_{0}\in\mathcal{A}_{j}\cap\mathcal{C}_{i}$. As we are supposing that $\Theta$
is general in $\mathcal{A}_{j}\cap\mathcal{C}_{i}$, in fact $f$ must be a log
terminal model of $K_{Z}+\Theta$. In particular $f$ is
$(K_{Z}+\Theta)$-negative.
Pick $\epsilon>0$ such that if $\Xi\in V$ and $\|\Xi-\Theta\|<\epsilon$ then
$\Xi$ belongs to the interior of $\mathcal{L}_{A}(V)$ and $f$ is
$(K_{Z}+\Xi)$-negative. Then the condition that $\Xi\in\mathcal{C}_{i}$ is
simply the condition that $K_{X}+\Delta=f_{*}(K_{Z}+\Xi)$ is nef. Let $W$ be
the affine subspace of $\operatorname{WDiv}_{\mathbb{R}}(X)$ given by pushing
forward the elements of $V$ and let
$\mathcal{N}=\{\,\Delta\in W\,|\,\text{$K_{X}+\Delta$ is nef}\,\}.$
Given $({a}_{1},{a}_{2},\dots,{a}_{k})\in\mathbb{R}^{k}$ let $B=\sum
a_{i}B_{i}$ and $E=\sum a_{i}E_{i}$. If $\|B\|<\epsilon$ then, as $\Xi+B$ is
numerically equivalent to $\Xi+E$, $K_{X}+\Delta\in\mathcal{N}$ if and only if
$K_{X}+\Delta+f_{*}B\in\mathcal{N}$. In particular $\mathcal{C}_{i}$ is
locally isomorphic to $\mathcal{N}\times\mathbb{R}^{k}$.
But since $f_{j}$ is the ample model of $K_{Z}+\Theta$, in fact we can choose
$\epsilon$ sufficiently small so that $K_{X}+\Delta$ is nef if and only if
$K_{X}+\Delta$ is nef over $X_{j}$, see §3 of [1]. There is a surjective
affine linear map from $W$ to the space of Weil divisors on $X$ modulo
numerical equivalence over $X_{j}$ and this induces an isomorphism
$\mathcal{N}\simeq\overline{\operatorname{NE}}(X/X_{j})^{*}\times\mathbb{R}^{l},$
in a neighbourhood of $f_{*}\Theta$.
Note that $K_{X}+f_{*}\Theta$ is numerically trivial over $X_{j}$. As
$f_{*}\Theta$ is big and $K_{X}+f_{*}\Theta$ is kawamata log terminal we may
find an ample $\mathbb{Q}$-divisor $A^{\prime}$ and a divisor $B^{\prime}\geq
0$ such that
$K_{X}+A^{\prime}+B^{\prime}\sim_{\mathbb{R}}K_{X}+f_{*}\Theta,$
is kawamata log terminal. But then
$-(K_{X}+B^{\prime})\sim_{\mathbb{R}}-(K_{X}+f_{*}\Theta)+A^{\prime},$
is ample over $X_{j}$. Hence $f_{ij}\colon X\longrightarrow X_{j}$ is a Fano
fibration and so by the cone theorem
$\rho(X_{i}/X_{j})=\operatorname{dim}\mathcal{N}.$
This is (4). ∎
###### Corollary 3.4.
If $V$ spans the Néron-Severi group of $Z$ then there is a Zariski dense open
subset $U$ of the Grassmannian $G(\alpha,V)$ of real affine subspaces of
dimension $\alpha$ such that if $[W]\in U$ and it is defined over the
rationals then $W$ satisfies (1-4) of (3.3).
###### Proof.
Let $U\subset G(\alpha,V)$ be the set of real affine subspaces $W$ of $V$ of
dimension $\alpha$, which contain no face of any $\mathcal{C}_{i}$ or
$\mathcal{L}_{A}(V)$. In particular the interior of $\mathcal{L}_{A}(W)$ is
contained in the interior of $\mathcal{L}_{A}(V)$. (3.3) implies that (1-2)
always hold for $W$ and (1-4) hold for $V$ and so (3) and (4) clearly hold for
$W\in U$. ∎
From now on in this section we assume that $V$ has dimension two and satisfies
(1-4) of (3.3).
###### Lemma 3.5.
Let $f\colon Z\dasharrow X$ and $g\colon Z\dasharrow Y$ be two rational
contractions such that $\mathcal{C}_{A,f}$ is two dimensional and
$\mathcal{O}=\mathcal{C}_{A,f}\cap\mathcal{C}_{A,g}$ is one dimensional.
Assume that $\rho(X)\geq\rho(Y)$ and that $\mathcal{O}$ is not contained in
the boundary of $\mathcal{L}_{A}(V)$. Let $\Theta$ be an interior point of
$\mathcal{O}$ and let $\Delta=f_{*}\Theta$.
Then there is a rational contraction $\pi\colon X\dasharrow Y$ which factors
$g=\pi\circ f$ and either
1. (1)
$\rho(X)=\rho(Y)+1$ and $\pi$ is a $(K_{X}+\Delta)$-trivial morphism, in which
case, either
1. (a)
$\pi$ is birational and $\mathcal{O}$ is not contained in the boundary of
$\mathcal{E}_{A}(V)$, in which case, either
1. (i)
$\pi$ is a divisorial contraction and $\mathcal{O}\neq\mathcal{C}_{A,g}$, or
2. (ii)
$\pi$ is a small contraction and $\mathcal{O}=\mathcal{C}_{A,g}$, or
2. (b)
$\pi$ is a Mori fibre space and $\mathcal{O}=\mathcal{C}_{A,g}$ is contained
in the boundary of $\mathcal{E}_{A}(V)$, or
2. (2)
$\rho(X)=\rho(Y)$, in which case, $\pi$ is a $(K_{X}+\Delta)$-flop and
$\mathcal{O}\neq\mathcal{C}_{A,g}$ is not contained in the boundary of
$\mathcal{E}_{A}(V)$.
###### Proof.
By assumption $f$ is birational and $X$ is $\mathbb{Q}$-factorial. Let
$h\colon Z\dasharrow W$ be the ample model corresponding to $K_{Z}+\Theta$.
Since $\Theta$ is not a point of the boundary of $\mathcal{L}_{A}(V)$, if
$\Theta$ belongs to the boundary of $\mathcal{E}_{A}(V)$ then $K_{Z}+\Theta$
is not big and so $h$ is not birational. As $\mathcal{O}$ is a subset of both
$\mathcal{C}_{A,f}$ and $\mathcal{C}_{A,g}$ there are morphisms $p\colon
X\longrightarrow W$ and $q\colon Y\longrightarrow W$ of relative Picard number
at most one. There are therefore only two possibilities:
1. (1)
$\rho(X)=\rho(Y)+1$, or
2. (2)
$\rho(X)=\rho(Y)$.
Suppose we are in case (1). Then $q$ is the identity and $\pi=p\colon
X\longrightarrow Y$ is a contraction morphism such that $g=\pi\circ f$.
Suppose that $\pi$ is birational. Then $h$ is birational and $\mathcal{O}$ is
not contained in the boundary of $\mathcal{E}_{A}(V)$. If $\pi$ is divisorial
then $Y$ is $\mathbb{Q}$-factorial and so $\mathcal{O}\neq\mathcal{C}_{A,g}$.
If $\pi$ is a small contraction then $Y$ is not $\mathbb{Q}$-factorial and so
$\mathcal{C}_{A,g}=\mathcal{O}$ is one dimensional. If $\pi$ is a Mori fibre
space then $\mathcal{O}$ is contained in the boundary of $\mathcal{E}_{A}(V)$
and $\mathcal{O}=\mathcal{C}_{A,g}$.
Now suppose we are in case (2). By what we have already proved
$\rho(X/W)=\rho(Y/W)=1$. $p$ and $q$ are not divisorial contractions as
$\mathcal{O}$ is one dimensional. $p$ and $q$ are not Mori fibre spaces as
$\mathcal{O}$ cannot be contained in the boundary of $\mathcal{E}_{A}(V)$.
Hence $p$ and $q$ are small and the rest is clear. ∎
###### Lemma 3.6.
Let $f\colon W\dasharrow X$ be a birational contraction between projective
$\mathbb{Q}$-factorial varieties. Suppose that $(W,\Theta)$ and $(W,\Phi)$ are
both kawamata log terminal.
If $f$ is the ample model of $K_{W}+\Theta$ and $\Theta-\Phi$ is ample then
$f$ is the result of running the $(K_{W}+\Phi)$-MMP.
###### Proof.
By assumption we may find an ample divisor $H$ on $W$ such that $K_{W}+\Phi+H$
is kawamata log terminal and ample and a positive real number $t<1$ such that
$tH\sim_{\mathbb{R}}\Theta-\Phi$. Note that $f$ is the ample model of
$K_{W}+\Phi+tH$. Pick any $s<t$ sufficiently close to $t$ so that $f$ is
$(K_{W}+\Phi+sH)$-negative and yet $f$ is still the ample model of
$K_{W}+\Phi+sH$. Then $f$ is the unique log terminal model of $K_{W}+\Phi+sH$.
In particular if we run the $(K_{W}+\Phi)$-MMP with scaling of $H$ then, when
the value of the scalar is $s$, the induced rational map is $f$. ∎
We now adopt some more notation for the rest of this section. Let $\Theta=A+B$
be a point of the boundary of $\mathcal{E}_{A}(V)$ in the interior of
$\mathcal{L}_{A}(V)$. Enumerate
${\mathcal{T}}_{1},{\mathcal{T}}_{2},\dots,{\mathcal{T}}_{k}$ the polytopes
$\mathcal{C}_{i}$ of dimension two which contain $\Theta$. Possibly re-
ordering we may assume that the intersections $\mathcal{O}_{0}$ and
$\mathcal{O}_{k}$ of $\mathcal{T}_{1}$ and $\mathcal{T}_{k}$ with the boundary
of $\mathcal{E}_{A}(V)$ and
$\mathcal{O}_{i}=\mathcal{T}_{i}\cap\mathcal{T}_{i+1}$ are all one
dimensional. Let $f_{i}\colon Z\dasharrow X_{i}$ be the rational contractions
associated to $\mathcal{T}_{i}$ and $g_{i}\colon Z\dasharrow S_{i}$ be the
rational contractions associated to $\mathcal{O}_{i}$. Set $f=f_{1}\colon
Z\dasharrow X$, $g=f_{k}\colon Z\dasharrow Y$, $X^{\prime}=X_{2}$,
$Y^{\prime}=X_{k-1}$. Let $\phi\colon X\longrightarrow S=S_{0}$, $\psi\colon
Y\longrightarrow T=S_{k}$ be the induced morphisms and let $Z\dasharrow R$ be
the ample model of $K_{Z}+\Theta$.
###### Theorem 3.7.
Suppose $\Phi$ is any divisor such that $K_{Z}+\Phi$ is kawamata log terminal
and $\Theta-\Phi$ is ample.
Then $\phi$ and $\psi$ are two Mori fibre spaces, both outputs of the
$(K_{Z}+\Phi)$-MMP, and they are connected by a Sarkisov link if $\Theta$ is
contained in more than two polytopes.
###### Proof.
We assume for simplicity of notation that $k\geq 3$. The case $k\leq 2$ is
similar and we omit it. The incidence relations between the corresponding
polytopes yield a commutative heptagon,
[commutative diagram omitted]
where $p$ and $q$ are birational maps. $\phi$ and $\psi$ are Mori fibre spaces
by (3.5). Pick $\Theta_{1}$ and $\Theta_{k}$ in the interior of
$\mathcal{T}_{1}$ and $\mathcal{T}_{k}$ sufficiently close to $\Theta$ so that
$\Theta_{1}-\Phi$ and $\Theta_{k}-\Phi$ are ample. As $X$ and $Y$ are
$\mathbb{Q}$-factorial, (3.6) implies that $\phi$ and $\psi$ are possible
outcomes of the $(K_{Z}+\Phi)$-MMP. Let $\Delta=f_{*}\Theta$. Then
$K_{X}+\Delta$ is numerically trivial over $R$.
Note that there are contraction morphisms $X_{i}\longrightarrow R$ and that
$\rho(X_{i}/R)\leq 2$. If $\rho(X_{i}/R)=1$ then $X_{i}\longrightarrow R$ is a
Mori fibre space. By (3.3) there is a facet of $\mathcal{T}_{i}$ which is
contained in the boundary of $\mathcal{E}_{A}(V)$ and so $i=1$ or $k$. Thus
$X_{i}\dasharrow X_{i+1}$ is a flop, $1<i<k-1$. Since $\rho(X^{\prime}/R)=2$
it follows that either $p$ is a divisorial contraction and $s$ is the identity
or $p$ is a flop and $s$ is not the identity. We have a similar dichotomy for
$q\colon Y^{\prime}\dasharrow Y$ and $t\colon T\longrightarrow R$.
There are then four cases. If $s$ and $t$ are the identity then $p$ and $q$
are divisorial extractions and we have a link of type II.
If $s$ is the identity and $t$ is not then $p$ is a divisorial extraction and
$q$ is a flop and we have a link of type I. Similarly if $t$ is the identity
and $s$ is not then $q$ is a divisorial extraction and $p$ is a flop and we
have a link of type III.
Finally suppose neither $s$ nor $t$ is the identity. Then both $p$ and $q$ are
flops. Suppose that $s$ is a divisorial contraction. Let $F$ be the divisor
contracted by $s$ and let $E$ be its inverse image in $X$. Since $\phi$ has
relative Picard number one, $\phi^{*}(F)=mE$ for some positive integer $m$.
Then $K_{X}+\Delta+\delta E$ is kawamata log terminal for any $\delta>0$
sufficiently small and $E=\mathbf{B}(K_{X}+\Delta+\delta E/R)$. If we run the
$(K_{X}+\Delta+\delta E)$-MMP over $R$ then we end with a birational
contraction $X\dasharrow W$, which is a Mori fibre space over $R$. Since
$\rho(X/R)=2$, $W=Y$ and we have a link of type III, a contradiction.
Similarly $t$ is never a divisorial contraction. If $s$ is a Mori fibre space
then $R$ is $\mathbb{Q}$-factorial and so $t$ must be a Mori fibre space as
well. This is a link of type IVm. If $s$ is small then $R$ is not
$\mathbb{Q}$-factorial and so $t$ is small as well. Thus we have a link of
type IVs. ∎
## 4\. Proof of (1.3)
###### Lemma 4.1.
Let $\phi\colon X\longrightarrow S$ and $\psi\colon Y\longrightarrow T$ be two
Sarkisov related Mori fibre spaces corresponding to two $\mathbb{Q}$-factorial
kawamata log terminal projective varieties $(X,\Delta)$ and $(Y,\Gamma)$.
Then we may find a smooth projective variety $Z$, two birational contractions
$f\colon Z\dasharrow X$ and $g\colon Z\dasharrow Y$, a kawamata log terminal
pair $(Z,\Phi)$, an ample $\mathbb{Q}$-divisor $A$ on $Z$ and a two
dimensional rational affine subspace $V$ of
$\operatorname{WDiv}_{\mathbb{R}}(Z)$ such that
1. (1)
if $\Theta\in\mathcal{L}_{A}(V)$ then $\Theta-\Phi$ is ample,
2. (2)
$\mathcal{A}_{A,\phi\circ f}$ and $\mathcal{A}_{A,\psi\circ g}$ are not
contained in the boundary of $\mathcal{L}_{A}(V)$,
3. (3)
$V$ satisfies (1-4) of (3.3),
4. (4)
$\mathcal{C}_{A,f}$ and $\mathcal{C}_{A,g}$ are two dimensional, and
5. (5)
$\mathcal{C}_{A,\phi\circ f}$ and $\mathcal{C}_{A,\psi\circ g}$ are one
dimensional.
###### Proof.
By assumption we may find a $\mathbb{Q}$-factorial kawamata log terminal pair
$(Z,\Phi)$ such that $f\colon Z\dasharrow X$ and $g\colon Z\dasharrow Y$ are
both outcomes of the $(K_{Z}+\Phi)$-MMP.
Let $p\colon W\longrightarrow Z$ be any log resolution of $(Z,\Phi)$ which
resolves the indeterminacy of $f$ and $g$. We may write
$K_{W}+\Psi=p^{*}(K_{Z}+\Phi)+E^{\prime},$
where $E^{\prime}\geq 0$ and $\Psi\geq 0$ have no common components,
$E^{\prime}$ is exceptional and $p_{*}\Psi=\Phi$. Pick $-E$ ample over $Z$
with support equal to the full exceptional locus such that $K_{W}+\Psi+E$ is
kawamata log terminal. As $p$ is $(K_{W}+\Psi+E)$-negative, $K_{Z}+\Phi$ is
kawamata log terminal and $Z$ is $\mathbb{Q}$-factorial, the
$(K_{W}+\Psi+E)$-MMP over $Z$ terminates with the pair $(Z,\Phi)$ by (3.6).
Replacing $(Z,\Phi)$ with $(W,\Psi+E)$, we may assume that $(Z,\Phi)$ is log
smooth and $f$ and $g$ are morphisms.
Pick general ample $\mathbb{Q}$-divisors $A,{H}_{1},{H}_{2},\dots,{H}_{k}$ on
$Z$ such that ${H}_{1},{H}_{2},\dots,{H}_{k}$ generate the Néron-Severi group
of $Z$. Let
$H=A+{H}_{1}+{H}_{2}+\dots+{H}_{k}.$
Pick sufficiently ample divisors $C$ on $S$ and $D$ on $T$ such that
$-(K_{X}+\Delta)+\phi^{*}C\qquad\text{and}\qquad-(K_{Y}+\Gamma)+\psi^{*}D,$
are both ample. Pick a rational number $0<\delta<1$ such that
$-(K_{X}+\Delta+\delta
f_{*}H)+\phi^{*}C\qquad\text{and}\qquad-(K_{Y}+\Gamma+\delta
g_{*}H)+\psi^{*}D,$
are both ample and $K_{Z}+\Phi+\delta H$ is both $f$ and $g$-negative.
Replacing $H$ by $\delta H$ we may assume that $\delta=1$. Now pick a
$\mathbb{Q}$-divisor $\Phi_{0}\leq\Phi$ such that $A+(\Phi_{0}-\Phi)$,
$-(K_{X}+f_{*}\Phi_{0}+f_{*}H)+\phi^{*}C\quad\text{and}\quad-(K_{Y}+g_{*}\Phi_{0}+g_{*}H)+\psi^{*}D,$
are all ample and $K_{Z}+\Phi_{0}+H$ is both $f$ and $g$-negative.
Pick general ample $\mathbb{Q}$-divisors $F_{1}\geq 0$ and $G_{1}\geq 0$ such that
$F_{1}\sim_{\mathbb{Q}}-(K_{X}+f_{*}\Phi_{0}+f_{*}H)+\phi^{*}C\quad\text{and}\quad
G_{1}\sim_{\mathbb{Q}}-(K_{Y}+g_{*}\Phi_{0}+g_{*}H)+\psi^{*}D.$
Then
$K_{Z}+\Phi_{0}+H+F+G,$
is kawamata log terminal, where $F=f^{*}F_{1}$ and $G=g^{*}G_{1}$.
Let $V_{0}$ be the affine subspace of $\operatorname{WDiv}_{\mathbb{R}}(Z)$
which is the translate by $\Phi_{0}$ of the vector subspace spanned by
${H}_{1},{H}_{2},\dots,{H}_{k},F,G$. Suppose that
$\Theta=A+B\in\mathcal{L}_{A}(V_{0})$. Then
$\Theta-\Phi=(A+\Phi_{0}-\Phi)+(B-\Phi_{0}),$
is ample, as $B-\Phi_{0}$ is nef by definition of $V_{0}$. Note that
$\Phi_{0}+F+H\in\mathcal{A}_{A,\phi\circ f}(V_{0})$,
$\Phi_{0}+G+H\in\mathcal{A}_{A,\psi\circ g}(V_{0})$, and $f$, respectively
$g$, is a weak log canonical model of $K_{Z}+\Phi_{0}+F+H$, respectively
$K_{Z}+\Phi_{0}+G+H$. (3.3) implies that $V_{0}$ satisfies (1-4) of (3.3).
Since ${H}_{1},{H}_{2},\dots,{H}_{k}$ generate the Néron-Severi group of $Z$
we may find constants ${h}_{1},{h}_{2},\dots,{h}_{k}$ such that $G$ is
numerically equivalent to $\sum h_{i}H_{i}$. Then $\Phi_{0}+F+\delta
G+H-\delta(\sum h_{i}H_{i})$ is numerically equivalent to $\Phi_{0}+F+H$ and
if $\delta>0$ is small enough $\Phi_{0}+F+\delta G+H-\sum\delta
h_{i}H_{i}\in\mathcal{L}_{A}(V_{0})$. Thus $\mathcal{A}_{A,\phi\circ
f}(V_{0})$ is not contained in the boundary of $\mathcal{L}_{A}(V_{0})$.
Similarly $\mathcal{A}_{A,\psi\circ g}(V_{0})$ is not contained in the
boundary of $\mathcal{L}_{A}(V_{0})$. In particular $\mathcal{A}_{A,f}(V_{0})$
and $\mathcal{A}_{A,g}(V_{0})$ span $V_{0}$ and $\mathcal{A}_{A,\phi\circ
f}(V_{0})$ and $\mathcal{A}_{A,\psi\circ g}(V_{0})$ span affine hyperplanes of
$V_{0}$, since $\rho(X/S)=\rho(Y/T)=1$.
Let $V_{1}$ be the translate by $\Phi_{0}$ of the two dimensional vector space
spanned by $F+H-A$ and $F+G-A$. Let $V$ be a small general perturbation of
$V_{1}$, which is defined over the rationals. Then (2) holds. (1) holds, as it
holds for any two dimensional subspace of $V_{0}$, (3) holds by (3.4) and this
implies that (4) and (5) hold. ∎
###### Proof of (1.3).
Pick $(Z,\Phi)$, $A$ and $V$ given by (4.1). Pick points
$\Theta_{0}\in\mathcal{A}_{A,\phi\circ f}(V)$ and
$\Theta_{1}\in\mathcal{A}_{A,\psi\circ g}(V)$ belonging to the interior of
$\mathcal{L}_{A}(V)$. As $V$ is two dimensional, removing $\Theta_{0}$ and
$\Theta_{1}$ divides the boundary of $\mathcal{E}_{A}(V)$ into two parts. The
part which consists entirely of divisors which are not big is contained in the
interior of $\mathcal{L}_{A}(V)$. Consider tracing this boundary from
$\Theta_{0}$ to $\Theta_{1}$. Then there are finitely many points $\Theta_{i}$,
$2\leq i\leq l$, which are contained in more than two polytopes
$\mathcal{C}_{A,f_{i}}(V)$. (3.7) implies that for each such point there is a
Sarkisov link $\sigma_{i}\colon X_{i}\dasharrow Y_{i}$ and $\sigma$ is the
composition of these links. ∎
|
arxiv-papers
| 2009-05-07T03:24:15 |
2024-09-04T02:49:02.385703
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Christopher D. Hacon, James McKernan",
"submitter": "James McKernan",
"url": "https://arxiv.org/abs/0905.0946"
}
|
0905.1194
|
CUBICAL HOMOLOGY OF ASYNCHRONOUS
TRANSITION SYSTEMS
A. A. Khusainov
###### Abstract
We show that a set with an action of a locally finite-dimensional free
partially commutative monoid and the corresponding semicubical set have
isomorphic homology groups. We build a complex of finite length for
computing the homology groups of any asynchronous transition system with a
finite maximal number of mutually independent events. We give examples of
computing the homology groups.
2000 Mathematics Subject Classification 18G10, 18G35, 55U10, 68Q10, 68Q85
Keywords: semicubical set, homology of small categories, free partially
commutative monoid, trace monoid, asynchronous transition system.
## Introduction
By [1], an asynchronous transition system $(S,s_{0},E,I,{\rm Tran\,})$
consists of arbitrary sets $E$ and $S$ with a distinguished element $s_{0}\in
S$, an irreflexive symmetric relation $I\subseteq E\times E$, and a subset
${\rm Tran\,}\subseteq S\times E\times S$ satisfying the following axioms.
1. (i)
for every $e\in E$, there are $s\in S$ and $s^{\prime}\in S$ such that
$(s,e,s^{\prime})\in{\rm Tran\,}$;
2. (ii)
if $(s,e,s^{\prime})\in{\rm Tran\,}$ and $(s,e,s^{\prime\prime})\in{\rm
Tran\,}$, then $s^{\prime}=s^{\prime\prime}$;
3. (iii)
for any pair $(e_{1},e_{2})\in I$ and triples $(s,e_{1},s_{1})\in{\rm
Tran\,}$, $(s_{1},e_{2},u)\in{\rm Tran\,}$ there exists $s_{2}\in S$ such that
$(s,e_{2},s_{2})\in{\rm Tran\,}$ and $(s_{2},e_{1},u)\in{\rm Tran\,}$.
Elements $s\in S$ are called states, elements $e\in E$ events, $s_{0}$ is the
initial state, and $I\subseteq E\times E$ is the independence relation. Triples
$(s,e,s^{\prime})\in{\rm Tran\,}$ are called transitions.
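As a concrete illustration, the following minimal sketch (not taken from [1]; all identifiers and data are hypothetical) encodes such a system as plain Python sets and checks axioms (i)–(iii) by brute force; the example is the independence square in which the events $a$ and $b$ may be executed in either order.

```python
# A hypothetical asynchronous transition system: the "independence square".
from itertools import product

S = {"s0", "s1", "s2", "s3"}                 # states; s0 is the initial state
E = {"a", "b"}                               # events
I = {("a", "b"), ("b", "a")}                 # irreflexive symmetric independence
Tran = {("s0", "a", "s1"), ("s0", "b", "s2"),
        ("s1", "b", "s3"), ("s2", "a", "s3")}

def check_axioms(S, E, I, Tran):
    # (i) every event occurs in some transition
    assert all(any(e == f for (_, f, _) in Tran) for e in E)
    # (ii) transitions are deterministic
    for (s, e, s1), (t, f, t1) in product(Tran, Tran):
        if (s, e) == (t, f):
            assert s1 == t1
    # (iii) independent events can be interchanged
    for (e1, e2) in I:
        for (s, f1, s1) in Tran:
            if f1 != e1:
                continue
            for (t, f2, u) in Tran:
                if t == s1 and f2 == e2:
                    assert any((s, e2, s2) in Tran and (s2, e1, u) in Tran
                               for s2 in S)

check_axioms(S, E, I, Tran)   # passes: the square s0 -> s3 commutes
```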
Asynchronous transition systems were introduced by Mike Shields [2] and Marek
Bednarczyk [3] for modeling concurrent computational systems. The
application of partial commutativity to parallel programming is due to
Antoni Mazurkievicz. In [4], it was proposed to consider an asynchronous
transition system as a pointed set with an action of a free partially
commutative monoid. This made it possible to introduce homology groups of
asynchronous transition systems [4] and to develop an approach to studying
them [5]. Erik Goubault [6] and Philippe Gaucher [7] have defined homology
groups for higher dimensional automata, which are other models of parallel
computational systems. Our main result (Theorem 3.3) shows that, under certain
finiteness conditions, the homology groups of asynchronous transition systems
and of the corresponding semiregular higher dimensional automata are isomorphic
(see Corollary 3.5). In [5], an algorithm was built for computing the first
integer homology group of any asynchronous transition system. In [8, 1], using
a resolution of Ludmila Polyakova [9], a complex was built for computing the
homology groups of a finite asynchronous transition system. We will build a
complex for computing the homology groups of an asynchronous transition system
without infinite sets of mutually independent events (Corollary 3.9). If the
maximal number of mutually independent events is finite, then this complex has
finite length.
## 1 Homology of categories and semicubical sets
Let ${\mathcal{A}}$ be a category. Denote by ${\mathcal{A}}^{op}$ the opposite
category. Given $a,b\in{\rm Ob\,}{\mathcal{A}}$, let ${\mathcal{A}}(a,b)$ be
the set of all morphisms $a\rightarrow b$. For a small category
${\mathscr{C}}$, denote by ${\mathcal{A}}^{{\mathscr{C}}}$ the category of
functors ${\mathscr{C}}\rightarrow{\mathcal{A}}$ and natural transformations.
Throughout this paper let ${\rm Set}$ be the category of sets and maps, ${\rm
Ab}$ the category of abelian groups and homomorphisms, ${\,\mathbb{Z}}$ the
set or additive group of integers, ${\,\mathbb{N}}$ the set of nonnegative
integers or free monoid $\\{1,a,a^{2},\cdots\\}$ generated by one element. For
any family of abelian groups $\\{A_{j}\\}_{j\in J}$, the direct sum is denoted
by $\bigoplus\limits_{j\in J}A_{j}$. Elements of direct summands is written as
pairs $(j,g)$ with $j\in J$ and $g\in A_{j}$. If $A_{j}=A$ for all $j\in J$,
then the direct sum is denoted by $A^{(J)}$.
### 1.1 Semicubical sets
Suppose that ${\,\mathbb{I}}=\\{0,1\\}$ is the set ordered by $0<1$. For
integer $n\geqslant 0$, let ${\,\mathbb{I}}^{n}$ be the Cartesian power of
${\,\mathbb{I}}$. Denote by $\Box_{+}$ the category of partially ordered sets
${\,\mathbb{I}}^{n}$ and maps, which can be decomposed into compositions of
the increasing maps
$\delta_{i}^{k,\varepsilon}:{\,\mathbb{I}}^{k-1}\rightarrow{\,\mathbb{I}}^{k}$,
$1\leqslant i\leqslant k$, $\varepsilon\in{\,\mathbb{I}}$ defined by
$\delta_{i}^{k,\varepsilon}(x_{1},\cdots,x_{k-1})=(x_{1},\cdots,x_{i-1},\varepsilon,x_{i},\cdots,x_{k-1}).$
A semicubical set [10] is any functor $X:\Box_{+}^{op}{\rightarrow}{\rm Set}$.
Morphisms are defined as natural transformations. Since every morphism
$f:{\,\mathbb{I}}^{m}\rightarrow{\,\mathbb{I}}^{n}$ of the category $\Box_{+}$
has the canonical decomposition
$f=\delta_{j_{n-m}}^{n,\varepsilon_{n-m}}\cdots\delta_{j_{1}}^{m+1,\varepsilon_{1}}$
such that $1\leqslant j_{1}<\cdots<j_{n-m}\leqslant n$, a functor $X$ is
defined by values $X_{n}=X({\,\mathbb{I}}^{n})$ on objects and
$\partial_{i}^{k,\varepsilon}=X(\delta_{i}^{k,\varepsilon})$ on morphisms.
Hence a semicubical set may be given as a pair
$(X_{n},\partial_{i}^{n,\varepsilon})$ consisting of a sequence of sets
$(X_{n})_{n\in{\,\mathbb{N}}}$ and a family of maps
$\partial_{i}^{n,\varepsilon}:X_{n}\rightarrow X_{n-1}$ defined for
$1\leqslant i\leqslant n$, $\varepsilon\in\\{0,1\\}$, and satisfying the
condition
$\partial_{i}^{n-1,\alpha}\circ\partial_{j}^{n,\beta}=\partial_{j-1}^{n-1,\beta}\circ\partial_{i}^{n,\alpha}~{},\mbox{
for }\alpha,\beta\in\\{0,1\\},n\geqslant 2,\mbox{ and }1\leqslant i<j\leqslant
n.$
For example, any directed graph, given by a pair of maps ${\rm dom},{\rm
cod}:X_{1}\rightarrow X_{0}$ assigning to every arrow its source and target,
can be considered as the semicubical set with $\partial_{1}^{1,0}={\rm dom}$,
$\partial_{1}^{1,1}={\rm cod}$, and $X_{n}=\emptyset$ for all $n\geqslant 2$.
For $n\geqslant 2$, the maps $\partial_{i}^{n,\varepsilon}$ are empty. By
[10], cubical sets [11] provide examples of semicubical sets. Similarly, we
can define semicubical objects $(X_{n},\partial_{i}^{n,\varepsilon})$ in an
arbitrary category ${\mathcal{A}}$.
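As a small illustration of the preceding example (the data below are hypothetical and not part of the paper), a directed graph can be stored with only $X_{0}$ and $X_{1}$ nonempty and with the two face maps given by source and target.

```python
# A directed graph viewed as a semicubical set: X_n is empty for n >= 2,
# and the only face maps are d^{1,0} = dom and d^{1,1} = cod.
X0 = ["u", "v", "w"]                         # vertices
X1 = [("u", "v"), ("v", "w"), ("u", "w")]    # arrows as (source, target) pairs

def face(n, i, eps, cell):
    """The face map d_i^{n,eps} : X_n -> X_{n-1}; only n = 1, i = 1 is nonempty."""
    assert n == 1 and i == 1
    return cell[0] if eps == 0 else cell[1]

print([face(1, 1, 0, e) for e in X1])        # sources: ['u', 'v', 'u']
print([face(1, 1, 1, e) for e in X1])        # targets: ['v', 'w', 'w']
```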
### 1.2 Homology of small categories
Homology groups of small categories with coefficients in functors into the
category of Abelian groups will be considered in this subsection.
Homology of categories and derived functors of the colimit.
###### Definition 1.1
Let ${\mathscr{C}}$ be a small category and $F:{\mathscr{C}}\rightarrow{\rm
Ab}$ a functor. Denote by $C_{*}({\mathscr{C}},F)$ a chain complex of Abelian
groups
$C_{n}({{\mathscr{C}}},F)=\bigoplus_{c_{0}\rightarrow\cdots\rightarrow
c_{n}}F(c_{0}),\quad n\geqslant 0,$
with differentials
$d_{n}=\sum\limits_{i=0}^{n}(-1)^{i}d^{n}_{i}:C_{n}({\mathscr{C}},F)\rightarrow
C_{n-1}({\mathscr{C}},F)$ defined for $n>0$ by the face operators
$d^{n}_{i}(c_{0}\stackrel{{\scriptstyle\alpha_{1}}}{{\rightarrow}}c_{1}\stackrel{{\scriptstyle\alpha_{2}}}{{\rightarrow}}\cdots\stackrel{{\scriptstyle\alpha_{n}}}{{\rightarrow}}c_{n},a)=\\\
\left\\{\begin{array}[]{ll}(c_{1}\stackrel{{\scriptstyle\alpha_{2}}}{{\rightarrow}}\cdots\stackrel{{\scriptstyle\alpha_{n}}}{{\rightarrow}}c_{n},F(c_{0}\stackrel{{\scriptstyle\alpha_{1}}}{{\rightarrow}}c_{1})(a))~{}~{},&\mbox{if}~{}i=0\\\
(c_{0}\stackrel{{\scriptstyle\alpha_{1}}}{{\rightarrow}}\cdots\stackrel{{\scriptstyle\alpha_{i-1}}}{{\rightarrow}}c_{i-1}\stackrel{{\scriptstyle\alpha_{i+1}\alpha_{i}}}{{\rightarrow}}c_{i+1}\stackrel{{\scriptstyle\alpha_{i+2}}}{{\rightarrow}}\cdots\stackrel{{\scriptstyle\alpha_{n}}}{{\rightarrow}}c_{n},a)\quad,&\mbox{if}~{}1\leqslant
i\leqslant n-1\\\
(c_{0}\stackrel{{\scriptstyle\alpha_{1}}}{{\rightarrow}}\cdots\stackrel{{\scriptstyle\alpha_{n-1}}}{{\rightarrow}}c_{n-1},a)~{}~{},&\mbox{if}~{}i=n\end{array}\right.$
The quotient groups $H_{n}(C_{*}({\mathscr{C}},F))={\rm Ker\,}(d_{n})/{\rm
Im\,}(d_{n+1})$ are called the $n$-th homology groups of the category
${\mathscr{C}}$ with coefficients in $F$.
It is known [10] that the functors $H_{n}(C_{*}({\mathscr{C}},-)):{\rm
Ab}^{\mathscr{C}}\rightarrow{\rm Ab}$ are naturally isomorphic to the left
derived functors of the colimit functor $\underrightarrow{\lim}^{\mathscr{C}}:{\rm
Ab}^{\mathscr{C}}\rightarrow{\rm Ab}$. So we will denote them by
$\underrightarrow{\lim}_{n}^{\mathscr{C}}$.
For any small category ${\mathscr{C}}$, denote by
$\Delta_{\mathscr{C}}{\,\mathbb{Z}}$, or $\Delta{\,\mathbb{Z}}$ shortly, the
functor ${\mathscr{C}}\rightarrow{\rm Ab}$ with constant values
${\,\mathbb{Z}}$ on objects and $1_{\,\mathbb{Z}}$ on morphisms. The values of
left satellites
$\underrightarrow{\lim}_{n}^{\mathscr{C}}\Delta_{\mathscr{C}}{\,\mathbb{Z}}$
of the colimit functor on $\Delta_{\mathscr{C}}{\,\mathbb{Z}}$ are called the
homology groups of the category ${\mathscr{C}}$ and denoted by
$H_{n}({\mathscr{C}})$. It follows from Eilenberg’s Theorem [12, Appl. 2] that
homology groups of the geometric realization of the nerve of ${\mathscr{C}}$
are isomorphic to $H_{n}({\mathscr{C}})$. For example, if the category denoted
by ${\,\rm pt}$ consists of a unique object and the identity morphism, then
$H_{n}({\,\rm pt})=0$ for all $n>0$ and $H_{0}({\,\rm pt})={\,\mathbb{Z}}$.
Coinitial functors. Let ${\mathscr{C}}$ be a small category. If
$H_{n}({\mathscr{C}})=0$ for $n>0$ and
$H_{0}({\mathscr{C}})\cong{\,\mathbb{Z}}$, then ${\mathscr{C}}$ is called
acyclic. Let $S:{\mathscr{C}}\rightarrow{\mathscr{D}}$ be a functor into an
arbitrary category ${\mathscr{D}}$. For any $d\in{\rm Ob\,}({\mathscr{D}})$, a
fibre (or comma-category [13]), denoted by $S/d$, is the category whose
set of objects ${\rm Ob\,}(S/d)$ consists of pairs $(c,\alpha)$ with $c\in{\rm
Ob\,}({\mathscr{C}})$, and $\alpha\in{\mathscr{D}}(S(c),d)$. Morphisms in
$S/d$ are triples $(f,\alpha_{1},\alpha_{2})$ of morphisms
$f\in{\mathscr{C}}(c_{1},c_{2})$, $\alpha_{1}\in{\mathscr{D}}(S(c_{1}),d)$,
and $\alpha_{2}\in{\mathscr{D}}(S(c_{2}),d)$ satisfying $\alpha_{2}\circ
S(f)=\alpha_{1}$. A forgetful functor $Q_{d}:S/d\rightarrow{\mathscr{C}}$ of
the fibre is defined by $Q_{d}(c,\alpha)=c$ on objects and
$Q_{d}(f,\alpha_{1},\alpha_{2})=f$ on morphisms. If the functor $S$ is a full
inclusion ${\mathscr{C}}\subseteq{\mathscr{D}}$, then $S/d$ is denoted by
${\mathscr{C}}/d$.
###### Definition 1.2
If the category $S/d$ is acyclic for every object $d\in{\mathscr{D}}$, then
the functor $S:{\mathscr{C}}\rightarrow{\mathscr{D}}$ is called strongly
coinitial.
By [14, Theorem 2.3], it can be proved that a functor
$S:{\mathscr{C}}\rightarrow{\mathscr{D}}$ between small categories is strongly
coinitial if and only if the canonical morphisms
$\underrightarrow{\lim}_{n}^{{\mathscr{C}}^{op}}(F\circ
S^{op})\rightarrow\underrightarrow{\lim}_{n}^{{\mathscr{D}}^{op}}F$ are
isomorphisms for all functors $F:{\mathscr{D}}^{op}\rightarrow{\rm Ab}$ and all
$n\geqslant 0$ [15, Proposition 1.4].
### 1.3 Homology of semicubical sets
Let $X\in{\rm Set}^{\Box_{+}^{op}}$ be a semicubical set. The category of
singular cubes $h_{*}/X$ is the fibre of the Yoneda Embedding
$h_{*}:\Box_{+}\rightarrow{\rm Set}^{\Box_{+}^{op}}$ over $X$. Consider a
category $\Box_{+}/X$ whose objects are elements
$\sigma\in\coprod\limits_{n\in{\,\mathbb{N}}}X_{n}$. Morphisms from $\sigma\in
X_{m}$ to $\tau\in X_{n}$ are given by triples $(\alpha,\sigma,\tau)$,
$\alpha\in\Box_{+}({\,\mathbb{I}}^{m},{\,\mathbb{I}}^{n})$, satisfying
$X(\alpha)(\tau)=\sigma$. The categories $h_{*}/X$ and $\Box_{+}/X$ are
isomorphic and we will identify them. A homological system on a semicubical
set $X$ is any functor $F:(\Box_{+}/X)^{op}\rightarrow{\rm Ab}$.
Homology groups with coefficients in a homological system. We now study the
homology groups $\underrightarrow{\lim}_{n}^{(\Box_{+}/X)^{op}}F$ of the
category of singular cubes with coefficients in a homological system $F$ on
$X$. Consider abelian groups $C_{n}(X,F)=\bigoplus\limits_{\sigma\in
X_{n}}F(\sigma)$. Define boundary operators
$d_{i}^{n,\varepsilon}:C_{n}(X,F)\rightarrow C_{n-1}(X,F)$ as homomorphisms
such that the following diagrams commute for $1\leqslant i\leqslant n$ and
$\varepsilon\in{\,\mathbb{I}}=\\{0,1\\}$,
$\begin{CD}\bigoplus\limits_{\sigma\in
X_{n}}F(\sigma)@>{d_{i}^{n,\varepsilon}}>{}>\bigoplus\limits_{\sigma\in
X_{n-1}}F(\sigma)\\\
@A{in_{\sigma}}A{}A@A{in_{X(\delta^{n,\varepsilon}_{i})(\sigma)}}A{}A\\\
F(\sigma)@>{F(\delta_{i}^{n,\varepsilon},X(\delta^{n,\varepsilon}_{i})(\sigma),\sigma)}>{}>F(X(\delta^{n,\varepsilon}_{i})(\sigma))\end{CD}$
###### Definition 1.3
Let $F:(\Box_{+}/X)^{op}\rightarrow{\rm Ab}$ be a homological system on a
semicubical set $X$. Homology groups $H_{n}(X,F)$ with coefficients in $F$ are
$n$-th homology groups of the complex $C_{*}(X,F)$ consisting of the groups
$C_{n}(X,F)=\bigoplus\limits_{\sigma\in X_{n}}F(\sigma)$ and differentials
$d_{n}=\sum\limits_{i=1}^{n}(-1)^{i}(d^{n,1}_{i}-d^{n,0}_{i})$.
Groups $H_{n}(X,\Delta{\,\mathbb{Z}})$ are called $n$-th integer homology
groups.
By [10, 4.3], for any semicubical set $X$ and a homological system $F$ on $X$,
there are isomorphisms $\underrightarrow{\lim}_{n}^{(\Box_{+}/X)^{op}}F\cong
H_{n}(X,F)$ for all $n\geqslant 0$. It follows that homology groups of cubical
sets studied in [11] and [16] are isomorphic to homology groups of the
corresponding semicubical sets with constant homological systems.
## 2 Homology of free partially commutative
monoids
In this section, we study a factorization category and a semicubical set
associated with a free partially commutative monoid. We will prove that the
homology groups of the free partially commutative monoid and of the
corresponding semicubical set are isomorphic. This allows us to build a complex
for computing the homology groups of a free partially commutative monoid.
Each monoid $M$ will be considered as a category with ${\rm Ob\,}M=\\{M\\}$
consisting of the unique object and ${\rm Mor\,}M=M$. This affects notation
and terminology. In particular, a right $M$–set $X$, with the
action $(x,\mu)\mapsto x\cdot\mu$ for $x\in X$ and $\mu\in M$, is considered
and denoted as the functor ${X}:{M}^{op}\rightarrow{\rm Set}$ defined by
${X}(\mu)(x)=x\cdot\mu$. Morphisms of right $M$-sets are natural
transformations.
### 2.1 Homology of a free finitely generated commutative monoid
Category of factorizations. Suppose that ${\mathscr{C}}$ is a small category.
If a morphism $f\in{\rm Mor\,}{\mathscr{C}}$ belongs to ${\mathscr{C}}(a,b)$,
then we write ${\rm dom}f=a$ and ${\rm cod}f=b$. A category of factorizations
${\mathfrak{F}}{\mathscr{C}}$ [17] has the set of objects ${\rm
Ob\,}({\mathfrak{F}}{\mathscr{C}})={\rm Mor\,}({\mathscr{C}})$ and the sets of
morphisms ${\mathfrak{F}}{\mathscr{C}}(\alpha,\beta)$ consisting of all pairs
$(f,g)$ of $f\in{\mathscr{C}}({\rm dom}\beta,{\rm dom}\alpha)$ and
$g\in{\mathscr{C}}({\rm cod}\alpha,{\rm cod}\beta)$ satisfying
$g\circ\alpha\circ f=\beta$.
The composition of morphisms
$\alpha\stackrel{{\scriptstyle(f_{1},g_{1})}}{{\rightarrow}}\beta$ and
$\beta\stackrel{{\scriptstyle(f_{2},g_{2})}}{{\rightarrow}}\gamma$ is defined
by $\alpha\stackrel{{\scriptstyle(f_{1}\circ f_{2},g_{2}\circ
g_{1})}}{{\longrightarrow}}\gamma$. The identity of an object
$a\stackrel{{\scriptstyle\alpha}}{{\rightarrow}}b$ of the category
${\mathfrak{F}}{\mathscr{C}}$ consists of the pair of identity morphisms
$\alpha\stackrel{{\scriptstyle(1_{a},1_{b})}}{{\longrightarrow}}\alpha$. We
will study the category of factorizations of a monoid considered as a category
with an unique object. Denote by
${\mathfrak{F}}{\mathscr{C}}\stackrel{{\scriptstyle{\rm
cod}}}{{\rightarrow}}{\mathscr{C}}$ the functor that assigns to each
$a\rightarrow b$ its codomain $b$ and to each morphism
$\alpha\stackrel{{\scriptstyle(f,g)}}{{\rightarrow}}\beta$ the morphism
$g:{\rm cod}(\alpha)\rightarrow{\rm cod}(\beta)$. By [15, Lemma 1.9], this
functor is strongly coinitial.
###### Lemma 2.1
Let ${\,\mathbb{N}}=\\{1,a,a^{2},\ldots\\}$ be the free monoid generated by
$a$ and let $T=\\{1,a\\}$. Suppose that
${\mathfrak{F}}{T}\subset{\mathfrak{F}}{\,\mathbb{N}}$ is the full subcategory
with the set of objects $T$. Then the inclusion
${\mathfrak{F}}{T}\subset{\mathfrak{F}}{\,\mathbb{N}}$ is strongly coinitial.
Proof. The category ${\mathfrak{F}}{T}$ consists of two objects and, apart from
the identity morphisms, two morphisms $1\stackrel{{\scriptstyle(1,a)}}{{\rightarrow}}a$,
$1\stackrel{{\scriptstyle(a,1)}}{{\rightarrow}}a$. It is easy to see that for
any integer $p\geqslant 0$, the comma-
category ${\mathfrak{F}}{T}/a^{p}$ is the poset
$(1,1,a^{p})<(1,a,a^{p-1})>(a^{1},1,a^{p-1})<\cdots\\\
\cdots>(a^{s},1,a^{t})<(a^{s},a,a^{t-1})>(a^{s+1},1,a^{t-1})<\cdots>(a^{p},1,1)$
Since the geometric realization of its nerve is homeomorphic to the unit
segment, $H_{q}({\mathfrak{F}}{T}/a^{p})\cong H_{q}(pt)$ for all $q\geqslant
0$. ${\Box}$
Suppose that ${\,\mathbb{N}}$ is the free monoid generated by $a$. For
$n\geqslant 1$, denote $a_{1}=(a,1,\cdots,1)$, $a_{2}=(1,a,1,\cdots,1)$,
$\cdots$, $a_{n}=(1,\cdots,1,a)$. Consider the subset
$T^{n}\subset{\,\mathbb{N}}^{n}$ consisting of finite products
$a_{i_{1}}a_{i_{2}}\cdots a_{i_{k}}$ where $1\leqslant
i_{1}<i_{2}<\cdots<i_{k}\leqslant n$. Let ${\mathfrak{F}}T^{n}$ be the full
subcategory of ${\mathfrak{F}}{\,\mathbb{N}}^{n}$ with the class of objects
$T^{n}$. For every $\alpha=(a^{p_{1}},a^{p_{2}},\cdots,a^{p_{n}})$, the comma-
category ${\mathfrak{F}}T^{n}/\alpha$ is isomorphic to
$({\mathfrak{F}}T/a^{p_{1}})\times\cdots\times({\mathfrak{F}}T/a^{p_{n}})$. The
following assertion follows from the Künneth formula [15, Lemma 1.16] and Lemma 2.1.
###### Lemma 2.2
The inclusion ${\mathfrak{F}}T^{n}\subset{\mathfrak{F}}{\,\mathbb{N}}^{n}$ is
strongly coinitial.
A semicubical set of a free finitely generated commutative monoid. Let
$a_{1}$, $a_{2}$, $\dots$, $a_{n}$ be the above generators of
${\,\mathbb{N}}^{n}$. Consider the semicubical set $T^{n}_{*}$ consisting of
the subsets
$T^{n}_{k}=\\{a_{i_{1}}a_{i_{2}}\cdots a_{i_{k}}:1\leqslant
i_{1}<i_{2}<\cdots<i_{k}\leqslant n\\}$
and maps
$T^{n}_{k-1}~{}~{}{{{\partial^{k,0}_{s}}\atop\longleftarrow}\atop{\longleftarrow\atop{\partial^{k,1}_{s}}}}~{}~{}T^{n}_{k}~{}$,
$1\leqslant s\leqslant k$ defined as follows.
$\partial_{s}^{k,0}(a_{i_{1}}\cdots
a_{i_{k}})=\partial_{s}^{k,1}(a_{i_{1}}\cdots a_{i_{k}})=a_{i_{1}}\cdots
a_{i_{s-1}}\widehat{a_{i_{s}}}a_{i_{s+1}}\cdots a_{i_{k}}$
Here $a_{i_{1}}\cdots a_{i_{s-1}}\widehat{a_{i_{s}}}a_{i_{s+1}}\cdots
a_{i_{k}}$ is the word obtained by removing the symbol $a_{i_{s}}$. Objects of
the category $\Box_{+}/T^{n}_{*}$ may be considered as pairs $(k,\sigma)$
where $\sigma\in T^{n}_{k}$. Every $\sigma\in T^{n}_{k}$ has a unique
decomposition $a_{i_{1}}\cdots a_{i_{k}}$ such that $1\leqslant
i_{1}<\cdots<i_{k}\leqslant n$, and so the objects $(k,\sigma)$ may be
identified with elements $a_{i_{1}}\cdots a_{i_{k}}\in T^{n}$. Morphisms in
$\Box_{+}/T^{n}_{*}$ are triples
$(\delta:{\,\mathbb{I}}^{m}\rightarrow{\,\mathbb{I}}^{k},a_{j_{1}}\cdots
a_{j_{m}},a_{i_{1}}\cdots a_{i_{k}}),$
such that $T^{n}(\delta)(a_{j_{1}}\cdots a_{j_{m}})=a_{i_{1}}\cdots
a_{i_{k}}$.
We will construct a functor
${\mathfrak{S}}:\Box_{+}/T^{n}_{*}\rightarrow{\mathfrak{F}}T^{n}$. Toward this
end, define ${\mathfrak{S}}(a_{i_{1}}\cdots a_{i_{k}})=a_{i_{1}}\cdots
a_{i_{k}}$ on objects. Every morphism of the category $\Box_{+}/T^{n}_{*}$ has
a decomposition
$(\delta_{s}^{k,\varepsilon},a_{i_{1}}\cdots\widehat{a_{i_{s}}}\cdots
a_{i_{k}},a_{i_{1}}\cdots a_{i_{k}})$, $\varepsilon\in\\{0,1\\}$. Hence, it is
enough to define the values
${\mathfrak{S}}(\delta_{s}^{k,0},a_{i_{1}}\cdots\widehat{a_{i_{s}}}\cdots
a_{i_{k}},a_{i_{1}}\cdots
a_{i_{k}})=(a_{i_{s}},1):a_{i_{1}}\cdots\widehat{a_{i_{s}}}\cdots
a_{i_{k}}\rightarrow a_{i_{1}}\cdots a_{i_{k}}\\\
{\mathfrak{S}}(\delta_{s}^{k,1},a_{i_{1}}\cdots\widehat{a_{i_{s}}}\cdots
a_{i_{k}},a_{i_{1}}\cdots
a_{i_{k}})=(1,a_{i_{s}}):a_{i_{1}}\cdots\widehat{a_{i_{s}}}\cdots
a_{i_{k}}\rightarrow a_{i_{1}}\cdots a_{i_{k}}$ (1)
It is easy to see that the map ${\mathfrak{S}}$ has a unique functorial
extension.
For each $\sigma=a_{i_{1}}\cdots a_{i_{k}}$, the category
${\mathfrak{S}}/\sigma$ has a terminal object. Therefore, the following
assertion holds.
###### Lemma 2.3
The functor ${\mathfrak{S}}:\Box_{+}/T^{n}_{*}\rightarrow{\mathfrak{F}}T^{n}$
is strongly coinitial.
### 2.2 Homology of a free partially commutative monoid with coefficients in a
right module
###### Definition 2.1
Let $E$ be a set. Suppose that $I\subseteq E\times E$ is an irreflexive
symmetric relation on $E$. The monoid given by the set of generators $E$ and the
relations $ab=ba$ for all $(a,b)\in I$ is called free partially commutative
and is denoted by $M(E,I)$. If $(a,b)\in I$, then the elements $a,b\in E$ are
called the commuting generators.
Our definition is more general than the one given in [18]: we do not require
the set $E$ to be finite.
For any graph, a subgraph is called an $n$-clique if it is isomorphic to the
complete graph $K_{n}$. A clique is a subgraph which is equal to an $n$-clique
for some cardinal number $n\geqslant 1$. Let $M(E,I)$ be a free partially
commutative monoid given by a set of generators $E$ and relations $ab=ba$ for
all $(a,b)\in I$. Denote by $V$ the set of all maximal cliques of its
independence graph (the graph with vertex set $E$ whose edges are the pairs in
$I$). For every $v\in V$, denote by $E_{v}\subseteq E$ the set of
vertices of $v$. The set $E_{v}$ is a maximal subset of $E$ consisting of
mutually commuting elements. The set $E_{v}$ generates the maximal commutative
submonoid $M(E_{v})\subseteq M(E,I)$. The monoid $M(E,I)$ is called locally
finite-dimensional if the sets $E_{v}$ are finite for all $v\in V$. This
property holds if and only if the independence graph does not contain infinite
cliques.
The coinitial subcategory of a category of factorizations. As above, $V$ is the
set of maximal cliques in the independence graph of $M(E,I)$.
###### Proposition 2.4
Suppose for $v\in V$ that $T_{v}\subset M(E_{v})$ is the subset of products
$a_{1}a_{2}\cdots a_{n}$ of distinct elements $a_{j}\in E_{v}$, $1\leqslant
j\leqslant n$. Here $n$ ranges over finite values $\leqslant|E_{v}|$, and for
$n=0$ the product is taken to be $1\in T_{v}$. If $M(E,I)$ is
locally finite-dimensional, then the inclusion $\bigcup\limits_{v\in
V}{\mathfrak{F}}T_{v}\subset{\mathfrak{F}}M(E,I)$ is strongly coinitial.
Proof. The composition of strongly coinitial functors is strongly coinitial.
Since the inclusion $\bigcup\limits_{v\in
V}{\mathfrak{F}}M(E_{v})\subseteq{\mathfrak{F}}M(E,I)$ is strongly coinitial
by [15, Theorem 2.3], it is enough to show that the inclusion
$\bigcup\limits_{v\in V}{\mathfrak{F}}T_{v}\subset\bigcup\limits_{v\in
V}{\mathfrak{F}}M(E_{v})$ is strongly coinitial. For each
$\alpha\in\bigcup\limits_{v\in V}{\mathfrak{F}}M(E_{v})$, there is $w\in V$
such that $\alpha\in{\mathfrak{F}}M(E_{w})$. All divisors of $\alpha$ belong
to $M(E_{w})$. Hence $\bigcup\limits_{v\in
V}{\mathfrak{F}}T_{v}/\alpha={\mathfrak{F}}T_{w}/\alpha$. By Lemma 2.2, the
inclusion ${\mathfrak{F}}T_{w}\subset{\mathfrak{F}}M(E_{w})$ is strongly
coinitial. Therefore $H_{q}(\bigcup\limits_{v\in
V}{\mathfrak{F}}T_{v}/\alpha)\cong H_{q}(pt)$. ${\Box}$
Cubical homology of free partially commutative monoids. For an arbitrary set
$E$ with an irreflexive symmetric relation $I\subseteq E\times E$, we
construct a semicubical set $T(E,I)$ depending on some total ordering
$\leqslant$ on $E$. Toward this end, for any integer $n>0$, we
define $T_{n}(E,I)$ as the set of all tuples $(a_{1},\cdots,a_{n})$ of
mutually commuting elements $a_{1}<\cdots<a_{n}$ in $E$,
$T_{n}(E,I)=\\{(a_{1},\cdots,a_{n}):(a_{1}<\cdots<a_{n})\&(1\leqslant
i<j\leqslant n\Rightarrow(a_{i},a_{j})\in I)\\}.$
The set $T_{0}(E,I)$ consists of the unique empty word $1$. The maps
$\partial^{n,\varepsilon}_{i}:T_{n}(E,I)\rightarrow T_{n-1}(E,I)$ for
$1\leqslant i\leqslant n$ act as
$\partial^{n,0}_{i}(a_{1},\cdots,a_{n})=\partial^{n,1}_{i}(a_{1},\cdots,a_{n})=(a_{1},\cdots,\widehat{a_{i}},\cdots,a_{n})$
(2)
It is easy to see that $T(E,I)$ is equal to the union of the semicubical sets
$(T_{v})_{*}$ defined by
$(T_{v})_{n}=\\{(a_{1},\cdots,a_{n})\in T_{n}(E,I)~{}:~{}(1\leqslant
i\leqslant n\Rightarrow a_{i}\in E_{v})\\},$
where $E_{v}\subseteq E$ are the maximal subsets of mutually commuting
generators of $M(E,I)$. Face operators
$(T_{v})_{n}\stackrel{{\scriptstyle\partial^{n,\varepsilon}_{i}}}{{\rightarrow}}(T_{v})_{n-1}$
act for $1\leqslant i\leqslant n$ and $\varepsilon\in\\{0,1\\}$ by (2).
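The construction of $T(E,I)$ is easy to carry out by machine. The following sketch (illustrative only; the generators and the relation are hypothetical) enumerates the sets $T_{n}(E,I)$, implements the face maps (2), and checks the semicubical relation $\partial_{i}\partial_{j}=\partial_{j-1}\partial_{i}$, $i<j$, on the 2-cells.

```python
# T_n(E,I): increasing n-tuples of mutually commuting generators; faces delete letters.
from itertools import combinations

E = ["a", "b", "c"]                                    # totally ordered generators
I = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")}   # independence relation

def T(n):
    return [t for t in combinations(E, n)
            if all((x, y) in I for x, y in combinations(t, 2))]

def face(i, cell):
    """d_i^{n,0} = d_i^{n,1}: delete the i-th letter (1-based), as in (2)."""
    return cell[:i - 1] + cell[i:]

print(T(1), T(2), T(3))        # [('a',), ('b',), ('c',)] [('a','b'), ('b','c')] []

for cell in T(2):              # semicubical identity with i = 1, j = 2 (so j-1 = 1)
    assert face(1, face(2, cell)) == face(1, face(1, cell))
```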
Let ${\mathfrak{S}}:\Box_{+}/\bigcup\limits_{v\in
V}(T_{v})_{*}\rightarrow\bigcup\limits_{v\in V}{\mathfrak{F}}T_{v}$ be the
functor assigning to every singular cube
$(a_{1},\cdots,a_{n})\in\bigcup\limits_{v\in V}(T_{v})_{n}$ the object
$a_{1}\cdots a_{n}$. The functor ${\mathfrak{S}}$ acts on morphisms by the
equation (1).
For each functor $F:M(E,I)^{op}\rightarrow{\rm Ab}$, denote by $\overline{F}$
a homological system on $T(E,I)$ defined as the composition
$(\Box_{+}/T(E,I))^{op}\stackrel{{\scriptstyle{\mathfrak{S}}^{op}}}{{\longrightarrow}}{\bigcup\limits_{v\in
V}({\mathfrak{F}}T_{v})^{op}}\subset({\mathfrak{F}}M(E,I))^{op}\stackrel{{\scriptstyle{\rm
cod}^{op}}}{{\longrightarrow}}M(E,I)^{op}\stackrel{{\scriptstyle
F}}{{\rightarrow}}{\rm Ab}$
We will suppose below that $E$ is a totally ordered set.
###### Proposition 2.5
Let $M(E,I)$ be a locally finite-dimensional free partially commutative monoid
and $F:M(E,I)^{op}\rightarrow{\rm Ab}$ a functor. The homology groups of
$M(E,I)$ are isomorphic to the cubical homology groups
$\underrightarrow{\lim}_{n}^{M(E,I)^{op}}F\cong
H_{n}(T(E,I),\overline{F}),\quad n\geqslant 0.$
Proof. The image of ${\mathfrak{S}}|_{\Box_{+}/(T_{v})_{*}}$ is contained in
the category ${\mathfrak{F}}T_{v}$ and defines a functor which we denote by
${\mathfrak{S}}_{v}:\Box_{+}/(T_{v})_{*}\rightarrow{\mathfrak{F}}T_{v}$. For
any $\alpha\in\bigcup\limits_{v\in V}{\mathfrak{F}}T_{v}$, there exists $w\in
V$ such that $\alpha\in{\mathfrak{F}}T_{w}$. It follows that there is an
isomorphism ${\mathfrak{S}}/\alpha\cong{\mathfrak{S}}_{w}/\alpha$. By Lemma
2.3, for every free finitely generated monoid $M(E_{v})$, the functor
${\mathfrak{S}}_{v}:\Box_{+}/(T_{v})_{*}\rightarrow{\mathfrak{F}}T_{v}$ is
strongly coinitial. Consequently, the category ${\mathfrak{S}}_{v}/\alpha$ is
acyclic. It follows that ${\mathfrak{S}}/\alpha$ is acyclic. Thus
${\mathfrak{S}}$ is strongly coinitial. Using the strong coinitiality of the
inclusion $\bigcup\limits_{v\in
V}{\mathfrak{F}}T_{v}\subseteq{\mathfrak{F}}M(E,I)$, we have the isomorphisms
$\underrightarrow{\lim}_{n}^{M(E,I)^{op}}F\cong\underrightarrow{\lim}_{n}^{(\Box_{+}/\bigcup\limits_{v\in
V}(T_{v})_{*})^{op}}~{}\overline{F}$
${\Box}$
This allows us to construct the following complex for computing the groups
$\underrightarrow{\lim}_{n}^{M(E,I)^{op}}F$.
###### Corollary 2.6
Suppose that $M(E,I)$ is a locally finite-dimensional free partially
commutative monoid. Then for any right $M(E,I)$-module $G$, the groups
$H_{n}(M(E,I)^{op},G)$ are isomorphic to homology groups of the complex
$0\leftarrow G\stackrel{{\scriptstyle
d_{1}}}{{\leftarrow}}\bigoplus\limits_{a_{1}\in
T_{1}(E,I)}G\stackrel{{\scriptstyle
d_{2}}}{{\leftarrow}}\bigoplus\limits_{(a_{1},\ldots,a_{n})\in
T_{n}(E,I)}G\leftarrow\cdots\\\
\cdots\leftarrow\bigoplus\limits_{(a_{1},\ldots,a_{n-1})\in
T_{n-1}(E,I)}G\stackrel{{\scriptstyle
d_{n}}}{{\longleftarrow}}\bigoplus\limits_{(a_{1},\ldots,a_{n})\in
T_{n}(E,I)}G\leftarrow\cdots~{},$
whose $n$-th member, for each $n\geqslant 0$, equals a direct sum of
copies of the abelian group $G$ indexed by all $n$-tuples of mutually commuting
elements $a_{1}<a_{2}<\cdots<a_{n}$ in $E$, where the differentials are defined
by
$d_{n}(a_{1},\cdots,a_{n},g)=\sum\limits_{s=1}^{n}(-1)^{s}(a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n},G(a_{s})(g)-g)$
(3)
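When $E$ is finite and $G$ is a finitely generated free abelian group, the complex of Corollary 2.6 has finitely generated free members, so its homology can be computed from the boundary matrices by diagonalization over the integers. The sketch below is illustrative and not taken from the paper: it treats the hypothetical example $E=\\{a,b\\}$, $I=\\{(a,b),(b,a)\\}$, $G={\,\mathbb{Z}}$ with $a$ acting as the identity and $b$ acting as multiplication by $-1$, for which formula (3) gives $d_{1}(a,g)=0$, $d_{1}(b,g)=2g$ and $d_{2}((a,b),g)=(a,-2g)$, hence $H_{0}\cong H_{1}\cong{\,\mathbb{Z}}/2$ and $H_{2}=0$.

```python
# Homology of the complex in Corollary 2.6 for a small hypothetical example.
# E = {a, b}, I = {(a,b),(b,a)}, G = Z with a acting as id and b as -1.

def diagonalize(mat):
    """Diagonalize an integer matrix by row/column operations; return the
    absolute values of the nonzero diagonal entries."""
    m = [row[:] for row in mat]
    rows, cols = len(m), (len(m[0]) if m else 0)
    divisors, r, c = [], 0, 0
    while r < rows and c < cols:
        pivot = next(((i, j) for i in range(r, rows) for j in range(c, cols)
                      if m[i][j] != 0), None)
        if pivot is None:
            break
        i, j = pivot
        m[r], m[i] = m[i], m[r]
        for row in m:
            row[c], row[j] = row[j], row[c]
        while True:
            done = True
            for i in range(rows):              # clear the pivot column
                if i != r and m[i][c] != 0:
                    q = m[i][c] // m[r][c]
                    m[i] = [x - q * y for x, y in zip(m[i], m[r])]
                    if m[i][c] != 0:           # Euclidean step: shrink the pivot
                        m[r], m[i] = m[i], m[r]
                    done = False
            for j in range(cols):              # clear the pivot row
                if j != c and m[r][j] != 0:
                    q = m[r][j] // m[r][c]
                    for i in range(rows):
                        m[i][j] -= q * m[i][c]
                    if m[r][j] != 0:
                        for i in range(rows):
                            m[i][c], m[i][j] = m[i][j], m[i][c]
                    done = False
            if done:
                break
        divisors.append(abs(m[r][c]))
        r, c = r + 1, c + 1
    return divisors

def homology(dims, boundaries):
    """dims[n] = rank of C_n; boundaries[n] = matrix of d_n : C_n -> C_{n-1}.
    Returns, for each degree, (free rank, list of torsion coefficients)."""
    result = []
    for n, dim in enumerate(dims):
        d_n = diagonalize(boundaries[n]) if n in boundaries else []
        d_next = diagonalize(boundaries[n + 1]) if n + 1 in boundaries else []
        free = dim - len(d_n) - len(d_next)        # rank ker d_n - rank im d_{n+1}
        result.append((free, [d for d in d_next if d > 1]))
    return result

# C_0 = Z, C_1 = Z<a> + Z<b>, C_2 = Z<(a,b)>; by (3):
# d_1(a,g) = 0, d_1(b,g) = 2g, d_2((a,b),g) = (a, -2g).
print(homology([1, 2, 1], {1: [[0, 2]], 2: [[-2], [0]]}))
# -> [(0, [2]), (0, [2]), (0, [])]: H_0 = H_1 = Z/2, H_2 = 0
```

With the constant module $G={\,\mathbb{Z}}$ (trivial action) all the differentials in (3) vanish, and the same routine returns free groups whose ranks are the numbers of $n$-element sets of mutually commuting generators.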
## 3 Homology of sets with an action of a free partially commutative monoid
This section is devoted to homology groups of right $M(E,I)$-sets $X$ with
coefficients in functors $F:(M(E,I)/X)^{op}\rightarrow{\rm Ab}$. We show that
these groups may be studied as the homology groups of the monoid $M(E,I)$. We
prove the main result of this paper about the isomorphism of homology groups
of any $M(E,I)$-set $X$ and the corresponding semicubical set $Q_{*}X$. This
result is applied to computing the homology groups of state spaces in the
simplest cases.
### 3.1 Homology of right $M(E,I)$-sets
Let $M$ be a monoid and $X\in{\rm Set}^{M^{op}}$ a right $M$-set. Suppose that
$F:(M/X)^{op}\rightarrow{\rm Ab}$ is a functor. Consider a functor
$S=Q^{op}:(M/X)^{op}\rightarrow M^{op}$ opposite to the forgetful functor of
the fibre. Let ${\rm Lan}^{S}F:M^{op}\rightarrow{\rm Ab}$ be the left Kan extension
[13] of the functor $F$ along $S$. Objects of the category $(M/X)^{op}$ may
be considered as elements $x\in X$. Morphisms between them are triples
$x\stackrel{{\scriptstyle\mu}}{{\rightarrow}}y$ such that $\mu\in M$ and
$x\cdot\mu=y$. We get the following assertion by specializing [10, Proposition
3.7] to the case of a monoid.
###### Lemma 3.1
A right $M$-module ${\rm Lan}^{S}F$ is the Abelian group
$\bigoplus\limits_{x\in X}F(x)$ with the action on $(x,f)$ with $x\in X$ and
$f\in F(x)$ defined by
$(x,f)\mu={\rm
Lan}^{S}F(\mu)(x,f)=(x\cdot\mu,F(x\stackrel{{\scriptstyle\mu}}{{\rightarrow}}x\cdot\mu)(f)).$
There are isomorphisms
$\underrightarrow{\lim}_{n}^{(M/X)^{op}}F\cong\underrightarrow{\lim}_{n}^{M^{op}}{\rm
Lan}^{S}{F}$ for all $n\geqslant 0$.
Let $X$ be a right $M(E,I)$-set. Consider an arbitrary total ordering on $E$.
Define sets
$Q_{n}X=\\{(x,a_{1},\cdots,a_{n}):a_{1}<\cdots<a_{n}\&(1\leqslant i<j\leqslant
n\Rightarrow(a_{i},a_{j})\in I)\\}.$
In particular $Q_{0}X=X$. Define the maps
$Q_{n}X{{{\partial^{n,0}_{i}}\atop{\longrightarrow}}\atop{{\longrightarrow}\atop{\partial^{n,1}_{i}}}}Q_{n-1}X~{},\quad
n\geqslant 1~{},~{}~{}1\leqslant i\leqslant n~{},$
by
${\partial^{n,\varepsilon}_{i}}(x,a_{1},\cdots,a_{n})=(x\cdot
a_{i}^{\varepsilon},a_{1},\cdots,\widehat{a_{i}},\cdots,a_{n})~{},\varepsilon\in\\{0,1\\},$
where $a_{i}^{0}=1$ and $a_{i}^{1}=a_{i}$.
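Before turning to the main theorem, here is a small illustrative sketch (hypothetical data, not from the paper) of $Q_{*}X$: a 2-cell $(x,a_{1},a_{2})$ has two pairs of faces, one keeping the state $x$ (for $\varepsilon=0$) and one moving it by the deleted generator (for $\varepsilon=1$).

```python
# A hypothetical right M(E,I)-set X and its semicubical set Q_*X.
from itertools import combinations

E = ["a", "b"]
I = {("a", "b"), ("b", "a")}
X = ["x0", "x1", "x2", "x3"]
act = {("x0", "a"): "x1", ("x0", "b"): "x2",    # generators move x0 around a square;
       ("x1", "b"): "x3", ("x2", "a"): "x3"}    # all other pairs act trivially
                                                # (one checks a and b commute on X)
def dot(x, a):
    return act.get((x, a), x)

def Q(n):
    return [(x,) + t for x in X for t in combinations(E, n)
            if all((u, v) in I for u, v in combinations(t, 2))]

def face(i, eps, cell):
    """d_i^{n,eps}(x, a_1, ..., a_n) = (x . a_i^eps, a_1, ..., ^a_i, ..., a_n)."""
    x, letters = cell[0], cell[1:]
    y = x if eps == 0 else dot(x, letters[i - 1])
    return (y,) + letters[:i - 1] + letters[i:]

sq = ("x0", "a", "b")                            # a 2-cell of Q_2 X
print(face(1, 0, sq), face(1, 1, sq))            # ('x0', 'b') ('x1', 'b')
print(face(2, 0, sq), face(2, 1, sq))            # ('x0', 'a') ('x2', 'a')
```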
###### Lemma 3.2
The sequence of the sets $Q_{n}X$ and the family of maps
${\partial^{n,0}_{i}}$, ${\partial^{n,1}_{i}}$ make up a semicubical set.
For any functor $F:(M(E,I)/X)^{op}\rightarrow{\rm Ab}$, we build the
homological system $\overline{F}:(\Box_{+}/Q_{*}X)^{op}\rightarrow{\rm Ab}$ as
follows. Define $\overline{F}(x,a_{1},\cdots,a_{n})=F(x)$ on objects and
$\overline{F}(\delta_{i}^{n,\varepsilon},\partial_{i}^{n,\varepsilon}(\sigma),\sigma)=F(x\stackrel{{\scriptstyle
a_{i}^{\varepsilon}}}{{\rightarrow}}xa_{i}^{\varepsilon})$ on morphisms for
$\sigma=(x,a_{1},\cdots,a_{n})$.
###### Theorem 3.3
Let $M(E,I)$ be a locally finite-dimensional free partially commutative monoid
and $X$ a right $M(E,I)$-set. Suppose $F:(M(E,I)/X)^{op}\rightarrow{\rm Ab}$
is a functor and $\overline{F}:(\Box_{+}/Q_{*}X)^{op}\rightarrow{\rm Ab}$ the
corresponding homological system of Abelian groups. Then
$\underrightarrow{\lim}_{n}^{(M(E,I)/X)^{op}}F\cong
H_{n}(Q_{*}X,\overline{F})$ for all $n\geqslant 0$. In other words,
$\underrightarrow{\lim}_{n}^{(M(E,I)/X)^{op}}F$ are isomorphic to homology
groups of the complex
$0\leftarrow\bigoplus\limits_{x\in Q_{0}X}F(x)\stackrel{{\scriptstyle
d_{1}}}{{\leftarrow}}\bigoplus\limits_{(x,a_{1})\in
Q_{1}X}F(x)\stackrel{{\scriptstyle
d_{2}}}{{\leftarrow}}\bigoplus\limits_{{(x,a_{1},a_{2})\in
Q_{2}X}}F(x)\leftarrow\cdots\\\
\cdots\leftarrow\bigoplus\limits_{(x,a_{1},\cdots,a_{n-1})\in
Q_{n-1}X}F(x)\stackrel{{\scriptstyle
d_{n}}}{{\longleftarrow}}\bigoplus\limits_{{(x,a_{1},\cdots,a_{n})\in
Q_{n}X}}F(x)\leftarrow\cdots~{},$
where $d_{n}(x,a_{1},\cdots,a_{n},f)=$
$\sum_{s=1}^{n}(-1)^{s}((x\cdot
a_{s},a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n},F(x\stackrel{{\scriptstyle
a_{s}}}{{\rightarrow}}x\cdot a_{s})(f))\\\
-(x,a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n},f))$
Proof. Applying Lemma 3.1 to the monoid $M=M(E,I)$ gives the following complex
for a computation of homology groups with coefficients in ${\rm Lan}^{S}F$ by
Corollary 2.6.
$0\leftarrow{\rm Lan}^{S}F\stackrel{{\scriptstyle
d_{1}}}{{\leftarrow}}\bigoplus\limits_{a_{1}\in T_{1}(E,I)}{\rm
Lan}^{S}F\stackrel{{\scriptstyle
d_{2}}}{{\leftarrow}}\bigoplus\limits_{(a_{1},a_{2})\in T_{2}(E,I)}{\rm
Lan}^{S}F\leftarrow\cdots\\\
\cdots\leftarrow\bigoplus\limits_{(a_{1},a_{2},\cdots,a_{n-1})\in
T_{n-1}(E,I)}{\rm Lan}^{S}F\stackrel{{\scriptstyle
d_{n}}}{{\longleftarrow}}\bigoplus\limits_{(a_{1},a_{2},\cdots,a_{n})\in
T_{n}(E,I)}{\rm Lan}^{S}F\leftarrow\cdots$
where the differentials are defined by (3). For $(a_{1},\ldots,a_{n})\in
T_{n}(E,I)$ and $g\in{\rm Lan}^{S}F$, the values $d_{n}(a_{1},\cdots,a_{n},g)$
are equal to
$\sum\limits_{s=1}^{n}(-1)^{s}(a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n},{\rm
Lan}^{S}F(a_{s})(g))-\sum\limits_{s=1}^{n}(-1)^{s}(a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n},g)$
Substituting $g=(x,f)\in\bigoplus\limits_{x\in X}F(x)={\rm Lan}^{S}F$ and
taking into account the equality ${\rm Lan}^{S}F(a_{s})(x,f)=(x\cdot
a_{s},F(x\stackrel{{\scriptstyle a_{s}}}{{\rightarrow}}x\cdot a_{s})(f))$
realized by Lemma 3.1, we obtain
$d_{n}(x,a_{1},\cdots,a_{n},f)=\sum_{s=1}^{n}(-1)^{s}(x\cdot
a_{s},a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n},F(x\stackrel{{\scriptstyle
a_{s}}}{{\rightarrow}}x\cdot a_{s})(f))\\\
-\sum_{s=1}^{n}(-1)^{s}(x,a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n},f)$
${\Box}$
###### Corollary 3.4
Suppose that $M(E,I)$ is locally finite-dimensional. Then for any right
$M(E,I)$-set $X$, the groups
$\underrightarrow{\lim}_{n}^{(M(E,I)/X)^{op}}\Delta{\,\mathbb{Z}}$ are
isomorphic to integer homology groups of the semicubical set $Q_{*}X$.
###### Conjecture 1
If $M(E,I)$ is locally finite-dimensional, then for each right $M(E,I)$-set
$X$, the homology group
$\underrightarrow{\lim}_{1}^{(M(E,I)/X)^{op}}\Delta{\,\mathbb{Z}}$ is torsion-
free.
###### Example 3.1
Suppose that $P=\\{\star\\}$ is the right $M(E,I)$-set over a locally finite-
dimensional free partially commutative monoid $M(E,I)$. By Theorem 3.3, the
groups $\underrightarrow{\lim}_{n}^{(M(E,I)/P)^{op}}\Delta{\,\mathbb{Z}}$ are
isomorphic to the homology groups of the complex
$0\leftarrow{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{1}}}{{\longleftarrow}}\bigoplus\limits_{(\star,a_{1})\in
Q_{1}P}{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{2}}}{{\longleftarrow}}\bigoplus\limits_{(\star,a_{1},a_{2})\in
Q_{2}P}{\,\mathbb{Z}}\leftarrow\cdots\\\
\cdots~{}~{}~{}\leftarrow\bigoplus\limits_{(\star,a_{1},a_{2},\cdots,a_{n-1})\in
Q_{n-1}P}{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{n}}}{{\longleftarrow}}\bigoplus\limits_{(\star,a_{1},a_{2},\cdots,a_{n})\in
Q_{n}P}{\,\mathbb{Z}}\leftarrow\cdots~{},$
where $d_{n}(\star,a_{1},\cdots,a_{n})=0$. Consequently,
$\underrightarrow{\lim}_{n}^{(M(E,I)/P)^{op}}\Delta{\,\mathbb{Z}}\cong{\,\mathbb{Z}}^{(p_{n})}$
where $p_{n}$ is the cardinality of the set of the subsets
$\\{a_{1},\cdots,a_{n}\\}\subseteq E$ consisting of mutually commuting
elements. Here $p_{0}=1$, corresponding to the empty subset.
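The ranks $p_{n}$ are easy to tabulate by machine; the following short sketch (with hypothetical data: the independence graph is a 4-cycle) counts the $n$-element subsets of mutually commuting elements directly.

```python
# p_n = number of n-cliques of the independence graph (p_0 = 1 for the empty subset).
from itertools import combinations

E = ["a", "b", "c", "d"]
I = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "b"),
     ("c", "d"), ("d", "c"), ("a", "d"), ("d", "a")}

def p(n):
    return sum(1 for t in combinations(E, n)
               if all((x, y) in I for x, y in combinations(t, 2)))

print([p(n) for n in range(4)])   # [1, 4, 4, 0]; so H_n of the one-point set is Z^(p_n)
```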
### 3.2 Homology of asynchronous transition systems
Following [8], we will consider an asynchronous transition system as a
nondegenerate state space with a distinguished initial point. Recall that
the category of sets and partial functions may be considered as the category
of pointed sets.
Partial functions and pointed maps. A pointed set $X$ is a set with a
distinguished element denoted by $\star$. A map $f:X\rightarrow Y$ between
pointed sets is pointed if it satisfies $f(\star)=\star$.
Let ${\rm Set}_{*}$ be the category of pointed sets and pointed maps between
them. Denote by $\sqcup$ the disjoint union. The category of sets and maps
${\rm Set}$ admits an inclusion into ${\rm Set}_{*}$ which assigns to every
set $S$ the pointed set $S_{*}=S\sqcup\\{\star\\}$ and to each map
$\sigma:S\rightarrow S^{\prime}$ the map $\sigma_{*}:S_{*}\rightarrow
S^{\prime}_{*}$ defined by $\sigma_{*}(s)=\sigma(s)$ for $s\in S$ and
$\sigma_{*}(\star)=\star$.
For any partial function $f:X\rightharpoonup Y$ defined on a subset
$Domf\subseteq X$, we may define the corresponding pointed map
$f_{*}:X_{*}\rightarrow Y_{*}$ by
$f_{*}(x)=\left\\{\begin{array}[]{ll}f(x)&\mbox{if}\quad x\in Domf,\\\
{\star}&\mbox{otherwise}.\end{array}\right.$
By [3], this correspondence shows that the category of sets and partial
functions is equivalent to ${\rm Set}_{*}$.
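As a tiny illustration (the names are hypothetical), a partial function stored as a dictionary is turned into the pointed map $f_{*}$ as follows.

```python
# The pointed map f_* attached to a partial function f: undefined arguments go to *.
STAR = "*"

def pointify(f):
    return lambda x: STAR if x == STAR else f.get(x, STAR)

f = {"x": "y"}                                   # partial function defined only at x
f_star = pointify(f)
print(f_star("x"), f_star("z"), f_star(STAR))    # y * *
```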
Category of state spaces.
###### Definition 3.2
A state space $\Sigma=(S,E,I,{\rm Tran\,})$ consists of sets $S$ and $E$, a
subset ${\rm Tran\,}\subseteq S\times E\times S$, and an irreflexive
antisymmetric relation $I\subseteq E\times E$ satisfying to the following two
axioms.
1. (i)
if $(s,e,s^{\prime})\in{\rm Tran\,}$ and $(s,e,s^{\prime\prime})\in{\rm
Tran\,}$, then $s^{\prime}=s^{\prime\prime}$;
2. (ii)
for any pair $(e_{1},e_{2})\in I$ and triples $(s,e_{1},s_{1})\in{\rm Tran\,}$
and $(s_{1},e_{2},v)\in{\rm Tran\,}$, there exists $s_{2}\in S$ such that
$(s,e_{2},s_{2})\in{\rm Tran\,}$ and $(s_{2},e_{1},v)\in{\rm Tran\,}$.
Elements $s\in S$ are called states, $(s,e,s^{\prime})\in{\rm Tran\,}$
transitions, $e\in E$ events. $I$ is the independence relation.
A state space is called nondegenerate if it satisfies the condition that for
every $e\in E$ there exist $s,s^{\prime}\in S$ such that
$(s,e,s^{\prime})\in{\rm Tran\,}$.
Let ${\mathcal{S}}$ denote the category of state spaces and morphisms
$(\sigma,\eta):(S,E,{\rm Tran\,},I)\rightarrow(S^{\prime},E^{\prime},{\rm
Tran\,}^{\prime},I^{\prime})$
given by maps $\sigma:S\rightarrow S^{\prime}$ and pointed maps
$\eta:E_{*}\rightarrow E^{\prime}_{*}$ satisfying the conditions
1. (i)
for any $(s_{1},e,s_{2})\in{\rm Tran\,}$, if $\eta(e)\not=\star$, then
$(\sigma(s_{1}),\eta(e),\sigma(s_{2}))\in{\rm Tran\,}^{\prime}$, otherwise
$\sigma(s_{1})=\sigma(s_{2})$;
2. (ii)
if $(e_{1},e_{2})\in I$, $\eta(e_{1})\not=\star$, $\eta(e_{2})\not=\star$,
then $(\eta(e_{1}),\eta(e_{2}))\in I^{\prime}$.
This definition with the condition $\sigma(s_{0})=s^{\prime}_{0}$ gives the
definition of morphisms between asynchronous transition systems. For $\eta$
satisfying the condition (ii), let $\eta_{\bullet}:E\rightarrow
E^{\prime}\cup\\{1\\}$ denote the map
$\eta_{\bullet}(e)=\left\\{\begin{array}[]{ll}\eta(e),&\mbox{if}~{}\eta(e)~{}\mbox{is
defined},\\\ 1,&\mbox{otherwise}\end{array}\right.$
Let $\widetilde{\eta_{\bullet}}:M(E,I)\rightarrow M(E^{\prime},I^{\prime})$ be
the extension of $\eta_{\bullet}$ to the monoid homomorphism.
By [8, Proposition 2], every object $\Sigma$ of $\mathcal{S}$ may be given as
a pointed set $S_{*}=S\sqcup\\{\star\\}$ with a right action of a free
partially commutative monoid. There is a correspondence assigning to any
morphism from $\Sigma:M(E,I)^{op}\rightarrow{\rm Set}_{*}$ to
$\Sigma^{\prime}:M(E^{\prime},I^{\prime})^{op}\rightarrow{\rm Set}_{*}$ the
diagram
[diagram: $\Sigma:M(E,I)^{\rm op}\rightarrow{\rm Set}_{*}$,
$\Sigma^{\prime}:M(E^{\prime},I^{\prime})^{\rm op}\rightarrow{\rm Set}_{*}$
and $\widetilde{\eta_{\bullet}}^{\rm op}$ form a triangle, filled by $\sigma_{*}$]
in which
$\sigma_{*}:\Sigma\rightarrow\Sigma^{\prime}\circ\widetilde{\eta_{\bullet}}^{op}$
is the natural transformation mapping $S$ into $S^{\prime}$ by $\sigma$. It is
easy to see that the morphism is given by a pair $(\sigma,\varphi)$ consisting
of a map $\sigma:S\rightarrow S^{\prime}$ and a monoid homomorphism
$\varphi:M(E,I)\rightarrow M(E^{\prime},I^{\prime})$ such that
$\varphi(E)\subseteq E^{\prime}\cup\\{1\\}$ and
$\sigma_{*}(se)=\sigma_{*}(s)\varphi(e)$ for all $s\in S$ and $e\in E$.
Homology of state spaces. Let $U:{\rm Set}_{*}\rightarrow{\rm Set}$ be a
functor which simply forgets the distinguished points. For any state space
$\Sigma:M(E,I)^{op}\rightarrow{\rm Set}_{*}$ the composition $U\circ\Sigma$ is
a right $M(E,I)$-set. Denote by $K_{*}(\Sigma)$ the category
$(M(E,I)/(U\circ\Sigma))^{op}$. Its objects may be considered as elements in
$S_{*}$, and morphisms are triples
$s_{1}\stackrel{{\scriptstyle\mu}}{{\rightarrow}}s_{2}$, $\mu\in M(E,I)$,
$s_{1}\in S_{*}$, $s_{2}\in S_{*}$. Composition of morphisms
$(s_{2}\stackrel{{\scriptstyle\mu_{2}}}{{\rightarrow}}s_{3})\circ(s_{1}\stackrel{{\scriptstyle\mu_{1}}}{{\rightarrow}}s_{2})$
equals $(s_{1}\stackrel{{\scriptstyle\mu_{1}\mu_{2}}}{{\rightarrow}}s_{3})$.
Homology groups of a state space with coefficients in a functor
$F:K_{*}(\Sigma)\rightarrow{\rm Ab}$ are defined by
$H_{n}(\Sigma,F)=\underrightarrow{\lim}_{n}^{K_{*}(\Sigma)}F$ for $n\geqslant
0$.
Semi-regular higher-dimensional automata [6] are precisely the semicubical
sets. So for such automata $X$, homology groups $H_{n}(X,F)$ with coefficients
in homological systems $F$ are defined. The following assertion follows from
Theorem 3.3.
###### Corollary 3.5
Let $\Sigma=(S,E,I,{\rm Tran\,})$ be a state space. If $M(E,I)$ is locally
finite-dimensional, then $H_{n}(\Sigma,F)\cong
H_{n}(Q_{*}(U\circ\Sigma),\overline{F})$ for all $n\geqslant 0$.
Direct summands of integer homology groups. We will show that the homology
groups of the one-point set considered in Example 3.1 are direct summands of
$H_{n}(\Sigma,\Delta{\,\mathbb{Z}})$. To that end we define functors with
values ${\,\mathbb{Z}}$ and $0$ on objects. For a small category
${\mathscr{C}}$, a subset $S\subseteq{\rm Ob\,}{\mathscr{C}}$ is called closed
in ${\mathscr{C}}$ if, together with any $s\in S$, it contains all objects
$c\in{\rm Ob\,}{\mathscr{C}}$ that admit morphisms $s\rightarrow c$. For example, the
subset $\\{\star\\}$ is closed in $K_{*}(\Sigma)$. The complement of a closed
subset in ${\rm Ob\,}{\mathscr{C}}$ is called open. If a set $S\subseteq{\rm
Ob\,}{\mathscr{C}}$ is equal to the intersection of an open subset and a
closed subset, then we can define a functor ${\,\mathbb{Z}}[S]$ by
${\,\mathbb{Z}}[S](c)=\left\\{\begin{array}[]{ll}{\,\mathbb{Z}}&\mbox{if}~{}~{}c\in
S,\\\ 0,&\mbox{otherwise,}\end{array}\right.$
on objects $c\in{\rm Ob\,}{\mathscr{C}}$. We put
${\,\mathbb{Z}}[S](c_{1}\rightarrow c_{2})=1_{{\,\mathbb{Z}}}$ if $c_{1}\in S$
and $c_{2}\in S$, and ${\,\mathbb{Z}}[S](c_{1}\rightarrow c_{2})=0$ on the
other morphisms.
###### Proposition 3.6
Suppose that $\Sigma=(S,E,I,{\rm Tran\,})$ is an arbitrary state space. Then
$H_{n}(\Sigma,\Delta{\,\mathbb{Z}})\cong{\,\mathbb{Z}}^{(p_{n})}\oplus
H_{n}(\Sigma,{\,\mathbb{Z}}[S])$ for all $n\geqslant 0$.
Proof. Consider the full subcategory $K_{*}(\emptyset)\subseteq K_{*}(\Sigma)$
with ${\rm Ob\,}(K_{*}(\emptyset))=\\{\star\\}$. The inclusion of this full
subcategory is a coretraction. The exact sequence
$0\rightarrow{\,\mathbb{Z}}[\star]\rightarrow\Delta{\,\mathbb{Z}}\rightarrow{\,\mathbb{Z}}[S]\rightarrow
0$ in ${\rm Ab}^{K_{*}(\Sigma)}$ gives the exact sequence of complexes
$0\rightarrow C_{*}(K_{*}(\Sigma),{\,\mathbb{Z}}[\star])\rightarrow
C_{*}(K_{*}(\Sigma),\Delta{\,\mathbb{Z}})\rightarrow
C_{*}(K_{*}(\Sigma),{\,\mathbb{Z}}[S])\rightarrow 0$ (4)
The chain homomorphism $C_{*}(K_{*}(\Sigma),{\,\mathbb{Z}}[\star])\rightarrow
C_{*}(K_{*}(\Sigma),\Delta{\,\mathbb{Z}})$ is equal to the composition of the
isomorphism $C_{*}(K_{*}(\Sigma),{\,\mathbb{Z}}[\star])\rightarrow
C_{*}(K_{*}({\emptyset}),\Delta{\,\mathbb{Z}})$ and coretraction
$C_{*}(K_{*}({\emptyset}),\Delta{\,\mathbb{Z}})\rightarrow
C_{*}(K_{*}(\Sigma),\Delta{\,\mathbb{Z}})$. Hence the exact sequence (4)
splits. The corresponding exact sequence of $n$-th homology groups gives the
required assertion. $\Box$
In particular, if $H_{n}(\Sigma,\Delta{\,\mathbb{Z}})\cong H_{n}({\,\rm pt})$
for all $n\geqslant 0$, then $p_{n}=0$ for $n>0$. In this case $E=\emptyset$,
$I=\emptyset$, and hence $K_{*}(\Sigma)$ is a discrete category. Since
$H_{0}(K_{*}(\Sigma))={\,\mathbb{Z}}$, this category has a unique object.
Consequently, $S=\emptyset$. This yields the following assertion, which is
important for the classification of state spaces.
###### Corollary 3.7
Let $\Sigma=(S,E,I,{\rm Tran\,})$ be a state space. If
$H_{n}(\Sigma,\Delta{\,\mathbb{Z}})=0$ for all $n>0$ and
$H_{0}(\Sigma,\Delta{\,\mathbb{Z}})={\,\mathbb{Z}}$, then $S=E=I={\rm
Tran\,}=\emptyset$.
Computing the homology groups of the state space consisting of a unique element.
It might seem that the homology groups of state spaces are torsion-free. We
will show that this is not the case.
Recall that a simplicial scheme is a pair $(X,\mathfrak{M})$ consisting of a
set $X$ and a set $\mathfrak{M}$ of its finite nonempty subsets satisfying the
following conditions
1. (i)
$x\in X\Rightarrow\\{x\\}\in\mathfrak{M}$
2. (ii)
$S\subseteq T\in\mathfrak{M}\Rightarrow S\in\mathfrak{M}$.
Let $(X,\mathfrak{M})$ be a simplicial scheme. Consider an arbitrary total
ordering $<$ on $X$. Denote
$X_{n}=\\{(x_{0},x_{1},\cdots,x_{n}):\\{x_{0},x_{1},\cdots,x_{n}\\}\in\mathfrak{M}\quad\&\quad
x_{0}<x_{1}<\cdots<x_{n}\\}.$
For any set $S$, let $L(S)$ be the free Abelian group generated by $S$. Consider
a family of Abelian groups
$C_{n}(X,\mathfrak{M})=\left\\{\begin{array}[]{ll}L(X_{n})&\mbox{for}\quad
n\geqslant 0,\\\ 0,&\mbox{otherwise}.\end{array}\right.$
Define homomorphisms $d_{n}:C_{n}(X,\mathfrak{M})\rightarrow
C_{n-1}(X,\mathfrak{M})$ on the generators by
$d_{n}(x_{0},x_{1},\cdots,x_{n})=\sum_{i=0}^{n}(-1)^{i}(x_{0},\cdots,x_{i-1},x_{i+1},\cdots,x_{n}).$
It is well known that the family $(C_{n}(X,\mathfrak{M}),d_{n})$ is a chain
complex. Denote $C_{*}(X,\mathfrak{M})=(C_{n}(X,\mathfrak{M}),d_{n})$. We can
prove that the homology groups of the complex $C_{*}(X,\mathfrak{M})$ do not
depend on the total ordering of $X$. They are called the homology groups
$H_{n}(X,\mathfrak{M})$ of the simplicial scheme $(X,\mathfrak{M})$.
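The boundary matrices of $C_{*}(X,\mathfrak{M})$ are straightforward to produce by machine. The sketch below is illustrative only: it builds the matrix of $d_{1}$ for the boundary of a triangle ($X=\\{1,2,3\\}$, $\mathfrak{M}$ the vertices and edges); such matrices can then be handed to an integer homology routine like the one sketched after Corollary 2.6, giving $H_{0}\cong{\,\mathbb{Z}}$ and $H_{1}\cong{\,\mathbb{Z}}$ for this circle.

```python
# Boundary matrices of the complex C_*(X, M) for a small simplicial scheme.
from itertools import combinations

X = [1, 2, 3]
M = [frozenset(s) for k in (1, 2) for s in combinations(X, k)]   # vertices and edges

def cells(n):
    return sorted(tuple(sorted(s)) for s in M if len(s) == n + 1)

def boundary(n):
    """Matrix of d_n : C_n -> C_{n-1} in the ordered bases cells(n), cells(n-1)."""
    rows, cols = cells(n - 1), cells(n)
    mat = [[0] * len(cols) for _ in rows]
    for j, c in enumerate(cols):
        for i in range(len(c)):
            mat[rows.index(c[:i] + c[i + 1:])][j] += (-1) ** i
    return mat

print(boundary(1))   # [[-1, -1, 0], [1, 0, -1], [0, 1, 1]]
```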
###### Theorem 3.8
Let $\Sigma=(\\{x_{0}\\},E,I,{\rm Tran\,})$ be a state space. Suppose that the
action of $M(E,I)$ is defined by $x_{0}\cdot a=\star$ for every $a\in E$. Let
$(E,\mathfrak{M})$ be a simplicial scheme where $\mathfrak{M}$ consists of
nonempty finite subsets of mutually commuting generators and let
$p_{n}=|E_{n-1}|$ denote the cardinal number of $n$-cliques in the
independence graph of $M(E,I)$. If $M(E,I)$ is locally finite-dimensional,
then $H_{n}(\Sigma,\Delta{\,\mathbb{Z}})\cong{\,\mathbb{Z}}^{(p_{n})}\oplus
H_{n-1}(E,\mathfrak{M})$ for all $n\geqslant 2$.
Proof. Let ${\,\mathbb{Z}}[x_{0}]:K_{*}(\Sigma)\rightarrow{\rm Ab}$ be the
functor with values ${\,\mathbb{Z}}[x_{0}](x_{0})={\,\mathbb{Z}}$ and
${\,\mathbb{Z}}[x_{0}](\star)=0$. By Theorem 3.3, the homology groups
$H_{n}(\Sigma,{\,\mathbb{Z}}[x_{0}])=\underrightarrow{\lim}_{n}^{(M(E,I)/(U\circ\Sigma))^{op}}{\,\mathbb{Z}}[x_{0}]$
are isomorphic to the homology groups of the complex
$0\leftarrow{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{1}}}{{\longleftarrow}}\bigoplus\limits_{a_{1}\in
E_{1}}{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{2}}}{{\longleftarrow}}\bigoplus\limits_{(a_{1},a_{2})\in
E_{2}}{\,\mathbb{Z}}\leftarrow\cdots\\\
\cdots\leftarrow\bigoplus\limits_{(a_{1},a_{2},\cdots,a_{n-1})\in
E_{n-1}}{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{n}}}{{\longleftarrow}}\bigoplus\limits_{(a_{1},a_{2},\cdots,a_{n})\in
E_{n}}{\,\mathbb{Z}}\leftarrow\cdots~{},$
with the differentials
$d_{n}(a_{1},\cdots,a_{n})=\sum_{s=1}^{n}(-1)^{s+1}(a_{1},\cdots,\widehat{a_{s}},\cdots,a_{n})$.
This complex equals the shifted complex $(C_{k-1}(E,\mathfrak{M}),d_{k-1})$ in
the dimensions $k\geqslant 1$. Hence its $k$-th homology groups are isomorphic
to $H_{k-1}(E,\mathfrak{M})$ for $k\geqslant 2$.
By Proposition 3.6, we obtain
$H_{n}(\Sigma,\Delta{\,\mathbb{Z}})\cong{\,\mathbb{Z}}^{(p_{n})}\oplus
H_{n}(\Sigma,{\,\mathbb{Z}}[x_{0}])$ for all $n\geqslant 0$. The required
assertion follows from $H_{n}(\Sigma,{\,\mathbb{Z}}[x_{0}])\cong
H_{n-1}(E,\mathfrak{M})$ for all $n\geqslant 2$. ${\Box}$
Any asynchronous transition system may be considered as a pair
$T=(\Sigma,s_{0})$ of its nondegenerate state space $\Sigma$ with an initial
state $s_{0}\in S$. Each morphism between asynchronous transition systems
$(\Sigma,s_{0})\rightarrow(\Sigma^{\prime},s_{0}^{\prime})$ may be given by a
morphism of state spaces $(\sigma,\eta):\Sigma\rightarrow\Sigma^{\prime}$ such
that $\sigma(s_{0})=s_{0}^{\prime}$.
Let $T=(\Sigma,s_{0})$ be an asynchronous transition system with a state space
$\Sigma=(S,E,I,{\rm Tran\,})$. Events $e_{j}\in E$, $j\in J$, are called
mutually independent if $(e_{j},e_{j^{\prime}})\in I$ for all $j,j^{\prime}\in
J$ such that $j\not=j^{\prime}$. A state $s\in S\sqcup\\{\star\\}$ is
available if there exists $\mu\in M(E,I)$ such that $s_{0}\cdot\mu=s$. The set
of available states is denoted by $S(s_{0})$. The monoid $M(E,I)$ acts on
the set of available states. Let $T(s_{0})$ denote the state space
whose set of states equals $S(s_{0})$.
Define the homology groups of the space of available states by
$H_{n}(K_{*}(T(s_{0})),\Delta{\,\mathbb{Z}})$. Applying Theorem 3.3, we get
the following.
###### Corollary 3.9
Suppose that an asynchronous transition system $T=(\Sigma,s_{0})$ does not
contain infinite subsets of mutually independent events. Let
$\Sigma=(S,E,I,{\rm Tran\,})$ be its state space. Suppose that the set $E$
is totally ordered. Then the groups $H_{n}(K_{*}(T(s_{0})),\Delta{\,\mathbb{Z}})$ are
isomorphic to the homology groups of the complex
$0\leftarrow\bigoplus\limits_{s\in
S(s_{0})}{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{1}}}{{\longleftarrow}}\bigoplus\limits_{(s,e_{1})\in S\times
E_{1}}{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{2}}}{{\longleftarrow}}\bigoplus\limits_{{(s,e_{1},e_{2})\in S\times
E_{2}}}{\,\mathbb{Z}}\leftarrow\cdots\\\
\cdots~{}~{}\leftarrow\bigoplus\limits_{(s,e_{1},\cdots,e_{n-1})\in S\times
E_{n-1}}{\,\mathbb{Z}}\stackrel{{\scriptstyle
d_{n}}}{{\longleftarrow}}\bigoplus\limits_{(s,e_{1},\cdots,e_{n})\in S\times
E_{n}}{\,\mathbb{Z}}\leftarrow\cdots~{}$ (5)
where $E_{n}$ consists of tuples $e_{1}<\cdots<e_{n}$ of mutually commuting
elements of $E$ with
$d_{n}(s,e_{1},\cdots,e_{n})=\sum_{i=1}^{n}(-1)^{i}\left((s\cdot
e_{i},e_{1},\cdots,\widehat{e_{i}},\cdots,e_{n})-(s,e_{1},\cdots,\widehat{e_{i}},\cdots,e_{n})\right).$
If the cardinalities of the sets of mutually independent events are bounded above by a
natural number, then this complex has finite length.
## 4 Concluding remarks
Let $\Sigma=(S,E,I,Tran)$ be a state space. Consider the full subcategory
$K(\Sigma)\subset K_{*}(\Sigma)$ whose objects are elements $s\in S$. In [5],
the homology groups of the category $K(\Sigma)$ were studied. An algorithm
for computing the first integer homology group of this category was constructed
and applied to the calculation of homology groups of finite Petri CE nets. No
algorithm for computing all integer homology groups of finite Petri CE nets is
known [5, Open Problem 1]. We put forward a conjecture whose confirmation
would solve this problem. Let $Q_{*}^{\prime}(U\circ\Sigma)\subseteq
Q_{*}(U\circ\Sigma)$ be a semicubical subset consisting of sets
$Q_{n}^{\prime}(U\circ\Sigma)=\\{(s,e_{1},\cdots,e_{n})\in
Q_{n}(U\circ\Sigma):se_{1}\cdots e_{n}\not=\star\\}$
Consider any functor $F:K(\Sigma)\rightarrow Ab$. Extend it to $K_{*}(\Sigma)$
by $F(\star)=0$.
###### Conjecture 2
Let $\Sigma=(S,E,I,Tran)$ be a state space. If the monoid $M(E,I)$ is locally
finite-dimensional, then for all integers $n\geqslant 0$, the groups
$\underrightarrow{\lim}_{n}^{K(\Sigma)}F$ are isomorphic to the $n$-th homology
groups of the complex consisting of groups
$\bigoplus\limits_{(s,e_{1},\cdots,e_{n})\in
Q^{\prime}_{n}(U\circ\Sigma)}F(s)$ and differentials given by
$d_{n}(s,e_{1},\cdots,e_{n},f)=\sum_{i=1}^{n}(-1)^{i}\left((s\cdot
e_{i},e_{1},\cdots,\widehat{e_{i}},\cdots,e_{n},F(s\stackrel{{\scriptstyle
e_{i}}}{{\rightarrow}}s\cdot e_{i})(f))-(s,e_{1},\cdots,\widehat{e_{i}},\cdots,e_{n},f)\right)$
## References
* [1] Nielsen M., Winskel G., “Petri nets and bisimulation”, Theoretical Computer Science, 153:1-2, 1996, 211–244
* [2] Shields M.W., “Concurrent machines”, Computer Journal, 28 (1985), 449–465
* [3] Bednarczyk M. A., Categories of Asynchronous Systems, Ph.D. thesis, University of Sussex, report 1/88, 1988; http://www.ipipan.gda.pl/~marek
* [4] Husainov A. A., Tkachenko V. V., “Homology groups of asynchronous transition systems”, Mathematical modeling and related questions of mathematics. Collection of scientific works, KhGPU, Khabarovsk, 2003, 23–33 (Russian)
* [5] Husainov A., “On the homology of small categories and asynchronous transition systems”, Homology Homotopy Appl., 6:1 (2004), 439–471; http://www.rmi.acnet.ge/hha
* [6] Goubault E., The Geometry of Concurrency, Ph.D. Thesis, Ecole Normale Supérieure, 1995; http://www.dmi.ens.fr/~goubault
* [7] Gaucher P., “About the globular homology of higher dimensional automata”, Cah. Topol. Geom. Differ., 43:2, 2002, 107–156
* [8] Khusainov A.A., Lopatkin V.E., Treshchev I.A., “Algebraic topology approach to mathematical model analysis of concurrent computational processes”, Sibirskiĭ Zhurnal Industrial noĭ Matematiki, 11:1 (2008), 141–152 (Russian)
* [9] Polyakova L.Yu., “Resolutions for free partially commutative monoids”, Sib. Math. J., 48:6 (2007), 1038–1045; translation from Sib. Mat. Zh., 48:6 (2007), 1295–1304
* [10] Khusainov A.A., “Homology groups of semicubical sets”, Sib. Math. J., 49:1 (2008), 180–190; translation from Sib. Mat. Zh., 49:1 (2008), 224–237
* [11] Kaczynski T., Mischaikov K., Mrozek M., Computational homology, Springer-Verlag, New York, 2004, (Appl. Math. Sci.; 157)
* [12] Gabriel P., Zisman M., Calculus of Fractions and Homotopy Theory [Russian translation], Moscow, Mir, 1971
* [13] MacLane S., Categories for the Working Mathematician, Springer-Verlag, Berlin, 1971
* [14] Oberst U., “Homology of categories and exactness of direct limits”, Math. Z., 107 (1968), 87–115
* [15] Husainov A. A., “On the Leech dimension of a free partially commutative monoid”, Tbilisi Math. J., 1:1 (2008), 71–87; http://ncst.org.ge/Journals/TMJ/index.html
* [16] Kaczynski T., Mischaikov K., Mrozek M., “Computational homology”, Homology Homotopy Appl., 5:2 (2003), 233 – 256
* [17] Baues H.-J., Wirsching G., “Cohomology of small categories”, J. Pure Appl. Algebra, 38:2-3 (1985), 187–211
* [18] Diekert V., Métivier Y., “Partial Commutation and Traces”, Handbook of formal languages, 3, Springer-Verlag, New York, 1997, 457–533
# Theory of neutrino oscillations using condensed matter physics
Including production process and energy-time uncertainty
Harry J. Lipkin Supported in part by U.S. Department of Energy, Office of
Nuclear Physics, under contract number DE-AC02-06CH11357. Department of
Particle Physics Weizmann Institute of Science, Rehovot 76100, Israel
School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact
Sciences, Tel Aviv University, Tel Aviv, Israel
Physics Division, Argonne National Laboratory Argonne, IL 60439-4815, USA
harry.lipkin@weizmann.ac.il
###### Abstract
Neutrino oscillations cannot arise from an initial isolated one-particle state
if four-momentum is conserved. The transition matrix element is generally
squared and summed over all final states with no interference between
orthogonal final states. Lorentz-covariant descriptions based on relativistic
quantum field theory cannot describe interference between orthogonal states
with different $\nu$ masses producing neutrino oscillations. A simplified model
presents a rigorous derivation of the hand-waving argument about “energy-time
uncertainty”. Standard time-dependent perturbation theory for decays shows how
the energy spectrum of the final state is much broader than the natural line width at
times much shorter than the decay lifetime. An initial state containing two
components with different energies decays into two orthogonal states with
different $\nu$ masses that are completely separated at long times with no
interference. At short times the broadened energy spectra of the two
amplitudes overlap and interfere. The “Darmstadt oscillation” experiment attempts
to measure the momentum difference between the two contributing coherent
initial states and to obtain information about $\nu$ masses without detecting the
$\nu$. A simple interpretation gives a value for the squared $\nu$ mass difference
differing by less than a factor of three from values calculated from the
KamLAND experiment. The treatment holds only in the laboratory frame, with the values of
energy, time and momentum determined by the experimental environment at rest in
the laboratory.
## I Introduction - The basic paradox of neutrino oscillations
### A The problem
1. 1.
The original neutrino experiment by Lederman et al [1] showed a neutrino
emitted in a $\pi\rightarrow\mu\nu$ decay entering a detector and producing
only muons and no electrons.
2. 2.
The neutrino enters the detector as a coherent mixture of mass eigenstates with
the right relative magnitudes and phases to cancel the amplitude for producing
an electron at the detector.
3. 3.
The $\nu$ wave function must contain states with different masses, momenta and/or
energies.
4. 4.
In an initial one-particle state, components with different momenta have different
energies.
5. 5.
The Lederman et al experiment cannot be explained if energy and momentum are conserved.
### B The Solution
1. 1.
If momentum is conserved in the interaction, a violation of energy conservation
is needed.
2. 2.
Energy-time uncertainty in the laboratory frame allows components of the initial
wave packet with different energies to produce the same final $\nu_{e}$ with the
same single energy.
### C Darmstadt application
A radioactive ion circulates in a storage ring before decaying [3]
1. 1.
The transition probability depends on the relative phase between the two components
2. 2.
The relative phase and transition probability change during propagation through the
storage ring.
3. 3.
Phase changes produce oscillations in the decay probability
4. 4.
Oscillations can give information about $\nu$ masses without detecting the
$\nu$.
### D A simple example of resolution of the paradox
Time-dependent perturbation theory shows the violation of energy conservation by
energy-time uncertainty at sufficiently short times [4]. The time-dependent
amplitude $\beta_{f}(E_{i})$ for the decay from an initial state with energy
$E_{i}$ into a final state with a slightly different energy $E_{f}$ is
$\frac{\beta_{f}(E_{i})}{g}\cdot(E_{i}-E_{f})=\left[e^{-i(E_{i}-E_{f})t}-1\right]\cdot
e^{-2iE_{f}t}$ (1)
where we have set $\hbar=1$ and $g$ is the interaction coupling constant.
We now generalize this expression to the case where two initial states with
energies $E_{f}-\delta$ and $E_{f}+\delta$ decay into the same final state
with energy $E_{f}$ and define $x\equiv E_{i}-E_{f}$
$\frac{e^{2iE_{f}t}}{g}\cdot[\beta_{f}(E_{f}+x-\delta)+\beta_{f}(E_{f}+x+\delta)]=\left[\frac{e^{-i(x-\delta)t}-1}{(x-\delta)}\right]+\left[\frac{e^{-i(x+\delta)t}-1}{(x+\delta)}\right]$
(2)
The square of the transition amplitude denoted by $T$ is then given by
$\frac{|T^{2}|}{g^{2}}\equiv\left[\frac{\beta_{f}(E_{f}+x-\delta)+\beta_{f}(E_{f}+x+\delta)}{g}\right]^{2}=4\cdot\left[\frac{\sin^{2}[(x-\delta)t/2]}{(x-\delta)^{2}}+\frac{\sin^{2}[(x+\delta)t/2]}{(x+\delta)^{2}}\right]+T_{int}$
(3)
where the interference term $T_{int}$ is
$T_{int}=\left[\frac{e^{-i(x-\delta)t}-1}{(x-\delta)}\right]\cdot\left[\frac{e^{i(x+\delta)t}-1}{(x+\delta)}\right]+cc=4\left[\frac{2\sin^{2}[\delta
t/2]+2\sin^{2}[xt/2]\cos[\delta t]-\sin^{2}(\delta
t)}{x^{2}-\delta^{2}}\right]$ (4)
If the time is sufficiently short so that the degree of energy violation
denoted by $x$ is much larger than the energy difference $\delta$ between the
two initial states, $x\gg\delta$ and
$x\gg\delta;~{}~{}~{}|T^{2}|\approx
8g^{2}\cdot\left[\frac{\sin^{2}[xt/2]}{x^{2}}\right]\cdot[1+\cos\delta t]$ (5)
The transition probability is given by the Fermi Golden Rule. We integrate the
square of the transition amplitude over $E_{i}$ or $x$, introduce the
density of final states $\rho(E_{f})$, and assume that $\delta$ is negligibly
small in the integrals.
$\int_{-\infty}^{+\infty}|T^{2}|\rho(E_{f})dx\approx\int_{-\infty}^{+\infty}8g^{2}\cdot\left[\frac{\sin^{2}[xt/2]}{x^{2}}\right]\cdot[1+\cos\delta
t]\rho(E_{f})dx$ (6)
The transition probability per unit time $W$ is then
$W\approx
4g^{2}\cdot\int_{-\infty}^{+\infty}du\left[\frac{\sin^{2}u}{u^{2}}\right]\cdot\rho(E_{f})[1+\cos(\delta
t)]\cdot t=4\pi g^{2}\rho(E_{f})$ (7)
The interference term between the two initial states is seen to be comparable
to the direct terms when $\cos(\delta t)\approx 1$; i.e. when the energy
uncertainty is larger than the energy difference between the two initial
states.
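As a numerical illustration (not from the paper), the short sketch below evaluates $|T^{2}|$ from eqs. (3) and (4) and the short-time approximation (5) for assumed values of $x$, $\delta$ and $t$ (with $\hbar=g=1$); when $x\gg\delta$ and $\delta t\ll 1$ the two expressions agree, showing that the interference term is as large as the direct terms.

```python
# Illustrative check of Eqs. (3)-(5); units hbar = g = 1, parameters arbitrary.
import numpy as np

def T2_exact(x, delta, t):
    """|T|^2 / g^2: the two broadened lines of Eq. (3) plus T_int of Eq. (4)."""
    direct = 4 * (np.sin((x - delta) * t / 2) ** 2 / (x - delta) ** 2
                  + np.sin((x + delta) * t / 2) ** 2 / (x + delta) ** 2)
    t_int = 4 * (2 * np.sin(delta * t / 2) ** 2
                 + 2 * np.sin(x * t / 2) ** 2 * np.cos(delta * t)
                 - np.sin(delta * t) ** 2) / (x ** 2 - delta ** 2)
    return direct + t_int

def T2_short_time(x, delta, t):
    """Approximation of Eq. (5), valid for x >> delta."""
    return 8 * np.sin(x * t / 2) ** 2 / x ** 2 * (1 + np.cos(delta * t))

x, delta, t = 5.0, 1e-3, 10.0     # x >> delta and delta*t = 0.01 << 1
print(T2_exact(x, delta, t), T2_short_time(x, delta, t))  # nearly identical
```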
This example shows in principle how two initial states with a given momentum
difference can produce a coherent final state containing two neutrinos with
the same energy and the given momentum difference. A measurement of the
momentum difference between the two initial states can provide information on
neutrino masses without detecting the neutrino.
In this simple example the amplitudes and the coupling constant $g$ are
assumed to be real. In a more realistic case there is an additional
relative phase between the two terms in eq.(2) which depends upon the initial
state wave function. In the GSI experiment[3] this phase varies linearly with
the time of motion of the initial ion through the storage ring. This phase
variation can produce the observed oscillations.
## II The basic physics of neutrino oscillations
### A Interference is possible only if we can’t know everything
Neutrino oscillations are produced from a coherent mixture of different $\nu$
mass eigenstates. The mass of a $\nu$ produced in a reaction where all other
particles have definite momentum and energy is determined by conservation of
energy and momentum. Interference between amplitudes from different $\nu$ mass
eigenstates is not observable in such a “missing mass” experiment. Something
must prevent knowing the neutrino mass from conservation laws. Ignorance alone
does not produce interference. Quantum mechanics must hide information. To
check how coherence and oscillations can occur we investigate what is known
and what information is hidden by quantum mechanics.
A simple example is seen in the decay $\pi\rightarrow\mu+\nu$. If the momenta
of the initial $\pi$ and recoil $\mu$ are known the $\nu$ mass is known from
energy and momentum conservation and there are no oscillations. But
oscillations have been observed in macroscopic neutrino detectors at rest in
the laboratory system. Oscillations arise only when the outgoing $\nu$ is a
wave packet containing coherent mixtures of two mass eigenstates with
different masses and therefore different momenta and/or energies. The decay
interaction conserves momentum in the laboratory system. The incident pion
wave packet must contain coherent mixtures with the same momentum difference.
The pion is a one-particle state with a definite mass. Two states with
different momenta must have different energies. A transition from a linear
combination of two states with different energies to a final state with a
single energy can occur only with a violation of energy conservation. This
violation can only occur if the $\pi$, $\mu$ and $\nu$ are not isolated but
interacting with another system that absorbs the missing energy.
A simple description of neutrino oscillations which neglects these
interactions has a missing four-momentum expressed simply in the laboratory
system as a missing energy. Covariant descriptions and Lorentz transformations
with a missing four-momentum are not easily described in treatments which
separate the decay process from interactions with the environment. In other
Lorentz frames both energy and momentum conservation are violated.
The momentum difference between the two coherent components of the initial
pion wave packet depends on the mass difference between neutrino mass
eigenstates. Measuring this momentum difference can give information about the
neutrino masses even if the neutrino is not detected. In most cases such a
measurement is not feasible experimentally. The GSI experiment[3] describes a
unique opportunity.
### B Energy and momentum in the GSI experiment
The search for what is known and what is hidden by quantum mechanics leads to
the energy-time uncertainty described in our simple example (1). The short
time interval between the last observation of the initial ion before decay and
the observed decay time enables enough violation of energy conservation to
prevent a missing mass experiment. This line broadening effect is demonstrated
in eq.(7) and related to the line broadening of any decay observed in a time
short compared to its natural line width[4]. The decay to two final states is
described by two Breit-Wigner energy distributions separated at long times.
But in this experiment[3] and in our simplified model (5) the decay time is
sufficiently short to make the separation negligible in comparison with their
broadened widths. The transition can occur coherently from two components of
the initial state with different energies and momenta into the same final state
with a different common energy and the same momentum difference. The sum of
the transition amplitudes from these two components of an initial state to the
same final state depends on their relative phase. Changes in this phase can
produce oscillations. The energy-time uncertainty is not covariant and defined
only in the laboratory system. Covariant descriptions and transformations from
the laboratory to any center-of-mass system are not valid for a description of
neutrino oscillations.
### C Summary of what is known and hidden by quantum mechanics
1. 1.
The final state has coherent pairs of states containing neutrinos with
different masses and different momenta and/or energies.
2. 2.
The initial state is a one-particle state with a definite mass.
3. 3.
Momentum is conserved in the transition.
4. 4.
The initial state can contain coherent pairs with the same momentum difference
present in the final state, but these must have different energies.
5. 5.
Energy-time uncertainty hides information and prevents use of energy
conservation.
6. 6.
The transition occurs coherently from two components of the initial state with
different energies and momenta into the same final state with a different common
energy and the same momentum difference.
7. 7.
The relative phase between components with different energies changes during
the passage of the ion through the storage ring and can produce oscillations.
A treatment of neutrino oscillations without explicit violation of energy
conservation describes a missing mass experiment where no neutrino
oscillations of any kind are allowed.
## III The basic physics of the GSI experiment
### A A first order weak transition
The initial state wave function $\left|i(t)\right\rangle$ is a “Mother” ion
wave packet containing components with different momenta. Its development in
time is described by an unperturbed Hamiltonian denoted by $H_{o}$ which
describes the motion of the initial and final states in the electromagnetic
fields constraining their motion in a storage ring.
$\left|i(t)\right\rangle=e^{iH_{o}t}\left|i(0)\right\rangle$ (8)
The time $t=0$ is defined as the time of entry into the apparatus. Relative
phases of wave function components with different momenta are determined by
localization in space at the point of entry into the apparatus. Since plane
waves have equal amplitudes over all space, these relative phases are
seriously constrained by requiring that the probability of finding the ion
outside the storage ring must be zero.
A first-order weak decay is described by the Fermi Golden Rule. The transition
probability per unit time at time $t$ from an initial state
$\left|i(t)\right\rangle$ to a final state $\left|f\right\rangle$ is
$W(t)={{2\pi}\over{\hbar}}|\left\langle
f\right|T\left|i(t)\right\rangle|^{2}\rho(E_{f})={{2\pi}\over{\hbar}}|\left\langle
f\right|Te^{iH_{o}t}\left|i(0)\right\rangle|^{2}\rho(E_{f})$ (9)
where $T$ is the transition operator and $\rho(E_{f})$ is the density of final
states . The transition operator $T$ conserves momentum.
If two components of the initial state with slightly different energies can
both decay into the same final state, their relative phase changes linearly
with time and can produce changes in the transition matrix element. The
quantitative result and the question of whether oscillations can be observed
depend upon the evolution of the initial state. The neutrino is not detected
in the GSI experiment[3], but the information that a particular linear
combination of mass and momentum eigenstates would be created existed in the
system. Thus the same final state can be created by either of three initial
states that have the same momentum difference. Violation of energy
conservation allows the decay and provides a new method for investigating the
creation of such a coherent state.
### B Time dependence and internal clocks
An external measurement of the time between the initial observation and the
decay of a radioactive ion circulating in a storage ring gives information
about the system only if an internal clock exists in the system.
1. 1.
An initial ion in a one-particle energy eigenstate has no clock. Its
propagation in time is completely described by a single unobservable phase.
2. 2.
If the initial ion is in a coherent superposition of different energy
eigenstates, the relative phase of any pair changes with energy. This phase
defines a clock which can measure the time between initial observation and
decay.
3. 3.
If the decay transition conserves energy, the final states produced by the
transition must also have different energies.
4. 4.
The decay probability is proportional to the square of the sum of the
transition matrix elements to all final states. There are no interference
terms between orthogonal final states with different energies and their
relative phases are unobservable.
The probability $P_{i}(t)$ that the ion is still in its initial state at time
$t$ and not yet decayed satisfies an easily solved differential equation,
$\frac{d}{dt}P_{i}(t)=-W(t)P_{i}(t);~{}~{}~{}~{}~{}\frac{d}{dt}log(P_{i})=-W(t);~{}~{}~{}~{}~{}P_{i}(t)=e^{-\int
W(t)dt}$ (10)
If $W(t)$ is independent of time eq. (10) gives an exponential decay. The
observation of a nonexponential decay implies that $W(t)$ is time dependent.
Time dependence can arise if the initial ion is in a coherent superposition of
different energy eigenstates, whose relative phases change with time. This
phase defines a clock which can measure the time between initial observation
and decay. Since the time $dt$ is infinitesimal, energy need not be conserved
in this transition. A non-exponential decay can occur only if there is a
violation of energy conservation. All treatments which assume energy
conservation, e.g. [8], will only predict exponential decay.
$W(t)$ depends upon the unperturbed propagation of the initial state before
the time $t$ where its motion in the storage ring is described by classical
electrodynamics. Any departure from exponential decay must come from the
evolution in time of the initial unperturbed state. This can change the wave
function at the time of the decay and therefore the value of the transition
matrix element. What happens after the decay cannot change the wave function
before the decay. Whether or not and how the final neutrino is detected cannot
change the decay rate.
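A minimal numerical sketch of this point (not from the paper): integrating eq. (10) with an assumed modulated rate $W(t)=W_{0}(1+A\cos\delta t)$, of the form suggested by eq. (5), gives a visibly non-exponential $P_{i}(t)$. The values of $W_{0}$ and $A$ below are illustrative assumptions; only the 7 s period is taken from the experiment.

```python
# Sketch: survival probability from Eq. (10) with an assumed modulated rate.
import numpy as np

W0, A = 1.0 / 50.0, 0.2        # mean rate (1/s) and modulation depth (assumed)
delta = 2 * np.pi / 7.0        # 1/s, corresponding to the 7 s period

t = np.linspace(0.0, 60.0, 601)
# P_i(t) = exp(-int_0^t W dt'), with int W dt' = W0*t + W0*A*sin(delta*t)/delta
P = np.exp(-(W0 * t + W0 * A * np.sin(delta * t) / delta))
P_exp = np.exp(-W0 * t)        # pure exponential, i.e. constant W
print(np.max(np.abs(P - P_exp)))   # nonzero: the decay curve is modulated
```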
### C The role of Dicke superradiance
Dicke[10] has shown that whenever two initial state components can produce
amplitudes for decay into the same final state, a linear combination called
“superradiant” has both components interfering constructively to enhance the
transition. The orthogonal state called “subradiant” has maximum destructive
interference and may even produce a cancelation.
The wave function of the initial state before the transition can contain pairs
of components with a momentum difference allowing both to decay into the same
final state. This wave function can be expressed as a linear combination of
superradiant and subradiant states with a relative magnitude that changes with
time. The variation between superradiant and subradiant wave functions affects
the transition matrix element and can give rise to oscillations in the decay
probability. Since the momentum difference depends on the mass difference
between the two neutrino eigenstates these oscillations can provide
information about neutrino masses.
## IV Detailed analysis of a simplified model for Darmstadt Oscillations
### A The initial and final states for the transition matrix
The initial radioactive “Mother” ion is in a one-particle state with a
definite mass moving in a storage ring. There is no entanglement[8] since no
other particles are present. To obtain the required information about this
initial state we need to know the evolution of the wave packet during passage
around the storage ring. This is not easily calculated. It requires knowing
the path through straight sections, bending sections and focusing electric and
magnetic fields.
The final state is a “Daughter” ion and a $\nu_{e}$ neutrino, a linear
combination of several $\nu$ mass eigenstates. This $\nu_{e}$ is a complicated
wave packet containing different masses, energies and momenta. The observed
oscillations arise only from $\nu$ components with different masses and
different momenta and/or energies.
### B Kinematics for a simplified two-component initial state.
Consider the transition for each component of the wave packet which has a
momentum $\vec{P}$ and energy $E$ in the initial state. The final state has a
recoil ion with momentum denoted by $\vec{P}_{R}$ and energy $E_{R}$ and a
neutrino with energy $E_{\nu}$ and momentum $\vec{p}_{\nu}$. If both energy
and momenta are conserved,
$E_{R}=E-E_{\nu};~{}~{}\vec{P}_{R}=\vec{P}-\vec{p}_{\nu};~{}~{}M^{2}+m^{2}-M_{R}^{2}=2EE_{\nu}-2\vec{P}\cdot\vec{p}_{\nu}$
(11)
where $M$, $M_{R}$ and $m$ denote respectively the masses of the mother and
daughter ions and the neutrino. We neglect transverse momenta and consider the
simplified two-component initial state for the “mother” ion having momenta $P$
and $P+\delta P$ with energies $E$ and $E+\delta E$. The final state has two
components having neutrino momenta $p_{\nu}$ and $p_{\nu}+\delta p_{\nu}$ with
energies $E_{\nu}$ and $E_{\nu}+\delta E_{\nu}$ together with a recoil ion
having the same momentum and energy for both components. The changes in these
variables produced by a small change $\Delta(m^{2})$ in the squared neutrino
mass are seen from eq. (11) to satisfy the relation
$\frac{\Delta(m^{2})}{2}=E\delta E_{\nu}+E_{\nu}\delta E-P\delta
p_{\nu}-p_{\nu}\delta P=-E\delta E\cdot\left[1-\frac{\delta E_{\nu}}{\delta
E}+\frac{p_{\nu}}{P}-\frac{E_{\nu}}{E}\right]\approx-E\delta E$ (12)
where we have noted that momentum conservation in the transition requires
$P\delta p_{\nu}=P\delta P=E\delta E$, $E$ and $P$ are of the order of the
mass $M$ of the ion and $p_{\nu}$ and $E_{\nu}$ are much less than $M$. To
enable coherence the two final neutrino components must have the same energy,
i.e. $\delta E_{\nu}=0$. Since $\delta E\not=0$ we are violating energy
conservation.
The relative phase $\delta\phi$ at a time t between the two states
$\left|P\right\rangle$ and $\left|P+\delta P\right\rangle$ is given by $\delta
E\cdot t$. Equation (12) relates $\delta E$ to the difference between the
squared masses of the two neutrino mass eigenstates. Thus
$E\cdot\delta
E=-{{\Delta(m^{2})}\over{2}};~{}~{}~{}~{}\delta\phi\approx-\delta E\cdot
t=-{{\Delta(m^{2})}\over{2E}}\cdot t=-{{\Delta(m^{2})}\over{2\gamma M}}\cdot
t$ (13)
where $\gamma$ denotes the Lorentz factor $E/M$.
### C Dicke superradiance and subradiance in the experiment
Consider the transition from a simplified initial state for the “mother” ion
with only two components denoted by $\left|\vec{P}\right\rangle$ and
$\left|\vec{P}+\delta\vec{P}\right\rangle$ having momenta $\vec{P}$ and
$\vec{P}+\delta\vec{P}$ with energies $E$ and $E+\delta E$. The final state
denoted by $\left|f(E_{\nu})\right\rangle$ has a “daughter” ion and an
electron neutrino $\nu_{e}$ which is a linear combination of two neutrino mass
eigenstates denoted by $\nu_{1}$ and $\nu_{2}$ with masses $m_{1}$ and
$m_{2}$. To be coherent and produce oscillations the two components of the
final wave function must have the same neutrino energy $E_{\nu}$ and the same
momentum $\vec{P}_{R}$ and energy $E_{R}$ for the “daughter” ion.
$\left|f(E_{\nu})\right\rangle\equiv\left|\vec{P}_{R};\nu_{e}(E_{\nu})\right\rangle=\left|\vec{P}_{R};\nu_{1}(E_{\nu})\right\rangle\left\langle\nu_{1}\right|\nu_{e}\rangle+\left|\vec{P}_{R};\nu_{2}(E_{\nu})\right\rangle\left\langle\nu_{2}\right|\nu_{e}\rangle$
(14)
where $\left\langle\nu_{1}\right|\nu_{e}\rangle$ and
$\left\langle\nu_{2}\right|\nu_{e}\rangle$ are elements of the neutrino mass
mixing matrix, commonly expressed in terms of a mixing angle denoted by
$\theta$.
$\cos\theta\equiv\left\langle\nu_{1}\right|\nu_{e}\rangle;~{}~{}~{}\sin\theta\equiv\left\langle\nu_{2}\right|\nu_{e}\rangle;~{}~{}~{}\left|f(E_{\nu})\right\rangle=\cos\theta\left|\vec{P}_{R};\nu_{1}(E_{\nu})\right\rangle+\sin\theta\left|\vec{P}_{R};\nu_{2}(E_{\nu})\right\rangle$
(15)
After a very short time two components with different initial state energies
can decay into a final state which has two components with the same energy and
a neutrino state having two components with the same momentum difference
$\delta\vec{P}$ present in the initial state.
The momentum conserving transition matrix elements between the two initial
momentum components to final states with the same energy and momentum
difference $\delta\vec{P}$ are
$\left\langle
f(E_{\nu})\right|T\left|\vec{P})\right\rangle=\cos\theta\left\langle\vec{P}_{R};\nu_{1}(E_{\nu})\right|T\left|\vec{P})\right\rangle;~{}~{}~{}\left\langle
f(E_{\nu})\right|T\left|\vec{P}+\delta\vec{P})\right\rangle=\sin\theta\left\langle\vec{P}_{R};\nu_{2}(E_{\nu})\right|T\left|\vec{P}+\delta\vec{P})\right\rangle$
(16)
We neglect transverse momenta and set $\vec{P}\cdot\vec{p}_{\nu}\approx
Pp_{\nu}$ where $P$ and $p_{\nu}$ denote the components of the momenta in the
direction of the incident beam. The Dicke superradiance analog [10] is seen by
defining superradiant and subradiant states.
$\left|Sup(E_{\nu})\right\rangle\equiv\cos\theta\left|P)\right\rangle+\sin\theta\left|P+\delta
P)\right\rangle;~{}~{}~{}\left|Sub(E_{\nu})\right\rangle\equiv\cos\theta\left|P+\delta
P)\right\rangle-\sin\theta\left|P)\right\rangle$ (17)
The transition matrix elements for these two states are then
$\frac{\left\langle
f(E_{\nu})\right|T\left|Sup(E_{\nu})\right\rangle}{\left\langle
f\right|T\left|P\right\rangle}=[\cos\theta+\sin\theta];~{}~{}~{}\frac{\left\langle
f(E_{\nu})\right|T\left|Sub(E_{\nu})\right\rangle}{\left\langle
f\right|T\left|P\right\rangle}=[\cos\theta-\sin\theta]$ (18)
where we have neglected the dependence of the transition operator $T$ on the
small change in the momentum $P$. The squares of the transition matrix
elements are
$\frac{|\left\langle
f(E_{\nu})\right|T\left|Sup(E_{\nu})\right\rangle|^{2}}{|\left\langle
f\right|T\left|P\right\rangle|^{2}}=[1+\sin
2\theta];~{}~{}~{}\frac{|\left\langle
f(E_{\nu})\right|T\left|Sub(E_{\nu})\right\rangle|^{2}}{|\left\langle
f\right|T\left|P\right\rangle|^{2}}=[1-\sin 2\theta]$ (19)
For maximum neutrino mass mixing, $\sin 2\theta=1$ and
$|\left\langle
f(E_{\nu})\right|T\left|Sup(E_{\nu})\right\rangle|^{2}=2|\left\langle
f\right|T\left|P\right\rangle|^{2};~{}~{}~{}|\left\langle
f(E_{\nu})\right|T\left|Sub(E_{\nu})\right\rangle|^{2}=0$ (20)
This is the standard Dicke superradiance in which all the transition strength
goes into the superradiant state and there is no transition from the
subradiant state.
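The algebra of eqs. (18)-(20) can be verified directly; the snippet below (illustrative only) checks that the superradiant and subradiant strengths are $1\pm\sin 2\theta$ and that the subradiant strength vanishes for maximal mixing $\theta=\pi/4$.

```python
# Check of Eqs. (18)-(20): |cos(th) +/- sin(th)|^2 = 1 +/- sin(2*th).
import numpy as np

for theta in (np.pi / 6, np.pi / 4):
    sup = (np.cos(theta) + np.sin(theta)) ** 2   # superradiant strength ratio
    sub = (np.cos(theta) - np.sin(theta)) ** 2   # subradiant strength ratio
    assert np.isclose(sup, 1 + np.sin(2 * theta))
    assert np.isclose(sub, 1 - np.sin(2 * theta))
# ~ 0 up to floating-point rounding:
print("subradiant strength at theta = pi/4:",
      (np.cos(np.pi / 4) - np.sin(np.pi / 4)) ** 2)
```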
Thus from eq. (17) the initial state at time t varies periodically between the
superradiant and subradiant states. The period of oscillation $\delta t$ is
obtained by setting $\delta\phi\approx-2\pi$,
$\delta t\approx{{4\pi\gamma
M}\over{\Delta(m^{2})}};~{}~{}~{}\Delta(m^{2})={{4\pi\gamma M}\over{\delta
t}}\approx 2.75\Delta(m^{2})_{exp}$ (21)
where the values $\delta t=7$ seconds and $\Delta(m^{2})=2.22\times
10^{-4}\rm{eV}^{2}=2.75\Delta(m^{2})_{exp}$ are obtained from the GSI
experiment and neutrino oscillation experiments[5].
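As a rough numerical check of eq. (21) (not part of the paper), one can restore $\hbar$ and insert representative storage-ring values; the Lorentz factor $\gamma\approx 1.43$ and mass number $A\approx 140$ used below are assumed properties of the GSI ions and are not quoted in the text above.

```python
# Back-of-the-envelope check of Eq. (21): Delta(m^2) = 4*pi*gamma*M*hbar/delta_t.
import math

hbar = 6.582e-16               # eV s
amu = 931.494e6                # eV, atomic mass unit
gamma, A = 1.43, 140           # assumed Lorentz factor and mass number (GSI ions)
M = A * amu                    # mother-ion mass in eV
delta_t = 7.0                  # s, observed oscillation period

delta_m2 = 4 * math.pi * gamma * M * hbar / delta_t
print(f"Delta(m^2) ~ {delta_m2:.2e} eV^2")   # ~ 2.2e-4 eV^2, as quoted in Eq. (21)
```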
The theoretical value (21) obtained with minimum assumptions and no fudge
factors is in the same ball park as the experimental value obtained from
completely different experiments. Better values obtained from better
calculations can be very useful in determining the masses and mixing angles
for neutrinos.
### D Effects of spatial dependence
The initial wave function travels through space as well as time. In a storage
ring the ion moves through straight sections, bending sections and focusing
fields. All must be included to obtain a reliable estimate for
$\Delta(m^{2})$. That this requires a detailed complicated calculation is seen
in examining two extreme cases
1. 1.
Circular motion in constant magnetic field. The cyclotron frequency is
independent of the momentum of the ion. Only the time dependent term
contributes to the phase and $\delta\phi^{cyc}$ is given by eq. (13)
2. 2.
Straight-line motion with velocity $v=P/E$. The phase of the initial
state at point $x$ in space and time $t$, its change with energy and momentum
changes $\delta P$ and $\delta E$ are
$\phi^{SL}=P\cdot x-E\cdot t;~{}~{}~{}~{}\delta\phi^{SL}=(\delta P\cdot
v-\delta E)\cdot t=\frac{P\delta P-E\delta E}{E}\cdot t=0$ (22)
The large difference between the two results (13) and (22) indicates that a
precise determination of the details of the motion of the mother ion in the
storage ring is needed before precise predictions of the squared neutrino mass
difference can be made.
### E A tiny energy scale
The experimental result sets a scale in time of seven seconds and a tiny
energy scale
$\Delta E\approx 2\pi\cdot\frac{\hbar}{7}=2\pi\cdot\frac{6.6\cdot
10^{-16}}{7}\approx 0.6\cdot 10^{-15}{\rm eV}$ (23)
This tiny energy difference between two waves which beat with a period of
seven seconds must be predictable from standard quantum mechanics using a
scale from another input. Another tiny scale available in the parameters that
describe this experiment is the mass-squared difference between two neutrino
mass eigenstates. A prediction (21) has been obtained from following the
propagation of the initial state through the storage ring during the time
before the decay.
That these two tiny energy scales obtained from completely different inputs
are within an order of magnitude of one another suggests some relation
obtainable by a serious quantum-mechanical calculation. We have shown here
that the simplest model relating these two tiny mass scales gives a result
that differs only by a factor of less than three.
Many other possible mechanisms might produce oscillations. The
experimenters[3] claim that they have investigated all of them. These other
mechanisms generally involve energy scales very different from the scale
producing a seven second period.
The observed oscillation is seen to arise from the relative phase between two
components of the initial wave function with a tiny energy difference (23).
These components travel through the electromagnetic fields required to
maintain a stable orbit. The effect in these fields on the relative phase
depends on the energy difference between the two components. Since the energy
difference is so tiny the effect on the phase is expected to be also tiny and
calculable.
## V Conclusions
Neutrino oscillations cannot occur if the momenta of all other particles
participating in the reaction are known and momentum and energy are conserved.
A complete description of the decay process must include the interaction with
the environment and the violation of energy conservation. A new oscillation
phenomenon providing information about neutrino mixing is obtained by
following the initial radioactive ion before the decay. Difficulties
introduced in conventional $\nu$ experiments by tiny neutrino absorption cross
sections and very long oscillation wavelengths are avoided. Measuring the
decay time enables every $\nu$ event to be observed and counted without the
necessity of observing the $\nu$ via the tiny absorption cross section. The
confinement of the initial ion in a storage ring enables long wavelengths to
be measured within the laboratory.
## VI Acknowledgement
The theoretical analysis in this paper was motivated by discussions with Paul
Kienle at a very early stage of the experiment in trying to understand whether
the effect was real or just an experimental error. It is a pleasure to thank
him for calling my attention to this problem at the Yukawa Institute for
Theoretical Physics at Kyoto University, where this work was initiated during
the YKIS2006 on “New Frontiers on QCD”. Discussions on possible experiments
with Fritz Bosch, Walter Henning, Yuri Litvinov and Andrei Ivanov are also
gratefully acknowledged along with a critical review of the present
manuscript. The author also acknowledges further discussions on neutrino
oscillations as “which path” experiments with Eyal Buks, Avraham Gal, Terry
Goldman, Maury Goodman, Yuval Grossman, Moty Heiblum, Yoseph Imry, Boris
Kayser, Lev Okun, Gilad Perez, Murray Peshkin, David Sprinzak, Ady Stern, Leo
Stodolsky and Lincoln Wolfenstein.
## REFERENCES
* [1] G. Danby et al, Physical Review Letters 9 (1962) p.36
* [2] L.Stodolsky Phys.Rev.D58:036006 (1998), arXiv:hep-ph/9802387
* [3] Yu.A. Litvinov, H. Bosch et al, arXiv:0801.2079 [nucl-ex]
* [4] Gordon Baym, Lectures on quantum mechanics, W.A. Benjamin, Inc, (1969), p. 248
* [5] A.N. Ivanov, R. Reda and P. Kienle, arXiv:0801.2121 [nucl-th]
* [6] Manfred Faber, arXiv:0801.3262 [nucl-th]
* [7] Harry J. Lipkin, arXiv:0801.1465 [hep-ph] and arXiv:0805.0435 [hep-ph]
* [8] Andrew G. Cohen, Sheldon L. Glashow, Zoltan Ligeti, arXiv:0810.4602 [hep-ph]
* [9] Harry J. Lipkin hep-ph/0505141, Phys.Lett. B642 (2006) 366
* [10] R. H. Dicke, Phys. Rev. 93, 99 (1954)
# Non-Thermal Production of WIMPs,
Cosmic $e^{\pm}$ Excesses and $\gamma$-rays from the Galactic Center
Xiao-Jun Bi1,2, Robert Brandenberger3,4,5,6,7, Paolo Gondolo8,6, Tianjun
Li9,10,6, Qiang Yuan1 and Xinmin Zhang4,5,6 1 Key Laboratory of Particle
Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences,
Beijing 100049, P. R. China
2 Center for High Energy Physics, Peking University, Beijing 100871, P.R.
China
3 Department of Physics, McGill University, Montréal, QC, H3A 2T8, Canada
4 Theoretical Physics Division, Institute of High Energy Physics, Chinese
Academy of Sciences, Beijing 10049, P.R. China
5 Theoretical Physics Center for Science Facilities (TPCSF), Chinese Academy
of Sciences, P.R. China
6 Kavli Institute for Theoretical Physics China, Chinese Academy of Sciences,
Beijing 100190, P.R. China
7 Theory Division, CERN, CH-1211 Geneva, Switzerland
8Department of Physics and Astronomy, University of Utah, Salt Lake City, UT
84112, USA
9 Key Laboratory of Frontiers in Theoretical Physics, Institute of Theoretical
Physics, Chinese Academy of Sciences, Beijing 100190, P. R. China
10 George P. and Cynthia W. Mitchell Institute for Fundamental Physics, Texas
A$\&$M University, College Station, TX 77843, USA
###### Abstract
In this paper we propose a dark matter model and study aspects of its
phenomenology. Our model is based on a new dark matter sector with a
$U(1)^{\prime}$ gauge symmetry plus a discrete symmetry added to the Standard
Model of particle physics. The new fields of the dark matter sector have no
hadronic charges and couple only to leptons. Our model can not only give rise
to the observed neutrino mass hierarchy, but can also generate the baryon
number asymmetry via non-thermal leptogenesis. The breaking of the new
$U(1)^{\prime}$ symmetry produces cosmic strings. The dark matter particles
are produced non-thermally from cosmic string loop decay which allows one to
obtain sufficiently large annihilation cross sections to explain the observed
cosmic ray positron and electron fluxes recently measured by the PAMELA, ATIC,
PPB-BETS, Fermi-LAT, and HESS experiments while maintaining the required
overall dark matter energy density. The high velocity of the dark matter
particles from cosmic string loop decay leads to a low phase space density and
thus to a dark matter profile with a constant density core in contrast to what
happens in a scenario with thermally produced cold dark matter where the
density keeps rising towards the center. As a result, the flux of $\gamma$
rays radiated from the final leptonic states of dark matter annihilation from
the Galactic center is suppressed and satisfies the constraints from the HESS
$\gamma$-ray observations.
###### pacs:
14.60.Pq, 95.35.+d
††preprint: MIFP-09-21
## I Introduction
There is strong evidence for the existence of a substantial amount of cold
dark matter (CDM). The leading CDM candidates are weakly interacting massive
particles (WIMPs), for example, the lightest neutralino in supersymmetric
models with $R$ parity. With a small cosmological constant, the CDM scenario
is consistent with both the observations of the large scale structure of the
Universe (scales much larger than $1$Mpc) and the fluctuations of the cosmic
microwave background BOPS .
However, the collisionless CDM scenario predicts too much power on small
scales, such as a large excess of dwarf galaxies klypin ; moore , the over-
concentration of dark matter (DM) in dwarf galaxies moore94 ; burkert ; MB
and in large galaxies NS . To solve this problem, two of us with their
collaborators proposed a scenario based on non-thermal production of WIMPs,
which can be relativistic when generated. The WIMPs’ comoving free-streaming
scales could be as large as or possibly even larger than 0.1 Mpc. Then, the
density fluctuations on scales less than the free-streaming scale would be
suppressed zhang . Thus, the discrepancies between the observations of DM
halos on sub-galactic scales and the predictions of the standard WIMP DM
picture could be resolved.
Recently, the ATIC Chang:2008zz and PPB-BETS Torii:2008 collaborations have
reported measurements of the cosmic ray electron/positron spectrum at energies
of up to $\sim 1$ TeV. The data shows an obvious excess over the expected
background for energies in the ranges $\sim 300-800\,\textrm{GeV}$ and $\sim
500-800\,\textrm{GeV}$, respectively. At the same time, the PAMELA
collaboration also released their first cosmic-ray measurements of the
positron fraction Adriani:2008zr and the $\bar{p}/p$ ratio Adriani:2008zq .
The positron fraction (but not the antiproton to proton ratio) shows a
significant excess for energies above $10\,\textrm{GeV}$ up to $\sim
100\,\textrm{GeV}$, compared to the background predicted by conventional
cosmic-ray propagation models. This result is consistent with previous
measurements by HEAT Barwick:1997ig and AMS Aguilar:2007yf .
Very recently, the Fermi-LAT collaboration has released data on the
measurement of the electron spectrum from 20 GeV to 1 TeV Fermi:2009zk , and
the HESS collaboration has published electron spectrum data from 340 GeV to
700 GeV Aharonian:2009ah , complementing their earlier measurements at 700 GeV
to 5 TeV HESS:2008aa . The Fermi-LAT measured spectrum agrees with ATIC below
300 GeV; however, it does not exhibit the special features at large energy.
There have already been some discussions on the implications for DM physics
obtained by combining the Fermi-LAT, HESS and PAMELA results fermidm .
The ATIC, PPB-BETS and PAMELA results indicate the existence of a new source
of primary electrons and positrons, while the hadronic processes are
suppressed. It is well known that DM annihilation can be a possible origin for
primary cosmic rays signal which could account for the ATIC, PPB-BETS and
PAMELA data simultaneously, as discussed first in first and also in later
(see string for a list of references). (Note, however, that there are also
astrophysical (see e.g. astro) or other particle physics (see e.g. string)
explanations.) However, the fact that the $\bar{p}/p$ ratio does not show an
excess gives strong constraints on DM models if they are to explain the data.
In particular, it is very difficult to use well-known DM candidates like the
neutralino to explain the ATIC and PAMELA data simultaneously donato since
they would also yield an excess of antiprotons. Therefore, if the observed
electron/positron or positron excesses indeed arise from DM annihilation, it
seems to us that there may exist special connections between the DM sector and
lepton physics Bi:2009md (see also lepto1 ; lepto2 ).
In this paper, we propose a DM model and study its implications for DM
detection. We fit our model to two different combinations of the experiment
data: one set of data from the ATIC, PPB-BETS and PAMELA experiments; the
other from the Fermi-LAT, HESS, and PAMELA experiments. Our results show that
our model can naturally explain the $e^{\pm}$ excesses while at the same time
solving the small scale problems of the standard $\Lambda$CDM model via non-
thermal DM production. For a single Majorana DM particle, its annihilation
cross section has $s$ wave suppression. Thus, we consider two degenerate
Majorana DM particles. We add a new DM sector with a $U(1)^{\prime}$ gauge
symmetry and introduce an additional discrete symmetry to the Standard Model
(SM). The DM particles are stable due to the discrete symmetry. During the
$U(1)^{\prime}$ gauge symmetry breaking phase transition a network of cosmic
strings is generated. The decay of cosmic string loops is a channel for
producing a non-thermal distribution of DM. This non-thermal distribution
allows for DM masses and annihilation cross sections large enough to explain
the cosmic ray anomalies while simultaneously remaining consistent with the
observed DM energy density. In addition, the observed neutrino masses and
mixings can be explained via the seesaw mechanism, and the baryon number
asymmetry can be generated via non-thermal leptogenesis Jeannerot:1996yi .
It has been recently recognized that a large annihilation cross section of DM
particles into leptons to account for the cosmic ray anomalies will induce a
large flux of $\gamma$ rays from the Galactic Center (GC) berg or from the
centers of dwarf galaxies essig . The predicted $\gamma$ ray fluxes based on
the NFW profile for the standard CDM scenario have been shown to be in slight
conflict with the current observations of HESS hess . However, in our model
the DM particles are produced non-thermally, so the high velocity of the DM
particles will lower the phase space density of DM and lead to a DM profile
with a constant density core kaplinghat . Therefore our model with non-
thermally produced DM on one hand gives rise to a large annihilation cross
section to account for the positron/electron excess observed locally while on
the other hand it suppresses the DM density at the GC and leads to a low flux
of $\gamma$ ray radiation.
Our paper is organized as follows: in Section II, we describe in detail the
model and the production mechanism of the DM particles. In Section III we
study aspects of the phenomenology of the model, including studies of some
constraints on the model parameters from particle physics experiments,
implications for the PAMELA, ATIC, PPB-BETS, Fermi-LAT, and HESS results, and
also the $\gamma$-ray radiation from the GC. Section IV contains the
discussion and conclusions.
## II The Dark Matter Model
### II.1 The Dark Matter Sector
The DM model we propose consists of adding a new “DM sector” to the Standard
Model. The new particles have only leptonic charges and are uncharged under
color. This ensures that the DM particles annihilate preferentially into
leptons. To ensure the existence of a stable DM particle, the new sector is
endowed with a discrete symmetry which plays a role similar to that of
R-parity in supersymmetric models. The lightest particles which are odd under
the $Z_{2}$ symmetry which we introduce are the candidate DM particles.
In our convention, we denote the right-handed leptons and Higgs doublet as
$e^{i}_{R}(\textbf{1},-1)$ and
$H(\textbf{2},-\frac{1}{2})=\displaystyle{(H^{0},H^{-})^{T}}$, respectively,
where their $SU(2)_{L}\times U(1)_{Y}$ quantum numbers are given in
parenthesis.
We consider the generalized Standard Model with an additional $U(1)^{\prime}$
gauge symmetry broken at an intermediate scale. In particular, all the SM
fermions and Higgs fields are uncharged under this $U(1)^{\prime}$ gauge
symmetry. To break the $U(1)^{\prime}$ gauge symmetry, we introduce a SM
singlet Higgs field $S$ with $U(1)^{\prime}$ charge $\mathbf{-2}$. Moreover,
we introduce four SM singlet chiral fermions $\chi_{1}$, $\chi_{2}$, $N_{1}$,
and $N_{2}$, a SM singlet scalar field $\widetilde{E}$ and a SM doublet scalar
field $H^{\prime}$ with $SU(2)_{L}\times U(1)_{Y}$ quantum numbers
$(\mathbf{1},\mathbf{-1})$ and $(\mathbf{2},\mathbf{\frac{1}{2}})$,
respectively. The $U(1)^{\prime}$ charges for $\chi_{i}$ and $H^{\prime}$ are
$\mathbf{1}$, while the $U(1)^{\prime}$ charges for $N_{i}$ and
$\widetilde{E}$ are $\mathbf{-1}$. Thus, our model is anomaly free. To have
stable DM candidates, we introduce a ${\bf Z}_{2}$ symmetry. Under this ${\bf
Z}_{2}$ symmetry, only the particles $\chi_{i}$ and $\widetilde{E}$ are odd
while all the other particles are even. The $\chi$ particles will be the DM
candidates, whereas the chiral fermions $N_{i}$ will play the role of right-
handed neutrinos.
The relevant part of the most general renormalizable Lagrangian consistent
with the new symmetries is
$-{\cal L}={1\over 2}m_{S}^{2}S^{\dagger}S+{1\over
2}m^{2}_{\widetilde{E}}\widetilde{E}^{\dagger}\widetilde{E}+{1\over
2}m_{H^{\prime}}^{2}H^{\prime\dagger}H^{\prime}+{{\lambda}\over
4}(S^{\dagger}S)^{2}+{{\lambda_{1}}\over
4}(\widetilde{E}^{\dagger}\widetilde{E})^{2}+{{\lambda_{2}}\over
4}(H^{\prime\dagger}H^{\prime})^{2}+{{\lambda_{3}}\over
2}(S^{\dagger}S)(\widetilde{E}^{\dagger}\widetilde{E})+{{\lambda_{4}}\over
2}(\widetilde{E}^{\dagger}\widetilde{E})(H^{\prime\dagger}H^{\prime})+{{\lambda_{5}}\over
2}(S^{\dagger}S)(H^{\prime\dagger}H^{\prime})+{{\lambda_{6}}\over
2}(S^{\dagger}S)(H^{\dagger}H)+{{\lambda_{7}}\over
2}(\widetilde{E}^{\dagger}\widetilde{E})(H^{\dagger}H)+{{\lambda_{8}}\over
2}(H^{\prime\dagger}H^{\prime})(H^{\dagger}H)+\left(y_{e}^{i}\overline{e_{R}^{i}}\widetilde{E}\chi_{1}+y_{e}^{\prime
i}\overline{e_{R}^{i}}\widetilde{E}\chi_{2}+y_{\chi}^{ij}S\overline{\chi^{c}_{i}}\chi_{j}+y_{N}^{ij}S^{\dagger}N_{i}N_{j}+y_{\nu}^{ij}L_{i}H^{\prime}N_{j}+\textrm{H.c.}\right)~{}.$ (1)
As we will discuss in the following subsection, the vacuum expectation value
(VEV) for $S$ is around $10^{9}$ GeV. Then, the couplings $\lambda_{3}$,
$\lambda_{5}$ and $\lambda_{6}$ should be very small, about $10^{-12}$, in
order for the model to be consistent with the expected value of the SM Higgs.
This fine-tuning problem could be solved naturally if we were to consider a
supersymmetric model. Moreover, in order to explain the recent cosmic ray
data, the Yukawa couplings $y_{\chi}^{ij}$ should be around $10^{-6}$. This
would generate a DM mass around 1 TeV. Such small Yukawa couplings
$y_{\chi}^{ij}$ can be explained via the Froggatt-Nielsen mechanism
Froggatt:1978nt which will not be studied here.
To explain the neutrino masses and mixings via the “seesaw mechanism”, we
require that the VEV of $H^{\prime}$ be about 0.1 GeV if $y_{N}^{ij}\sim 1$
and $y_{\nu}^{ij}\sim 1$. In this case, the lightest active neutrino is
massless since we only have two right-handed neutrinos $N_{i}$. In addition,
in our $U(1)^{\prime}$ model, the Higgs field forming the strings is also the
Higgs field which gives masses to the right-handed neutrinos. There are right-
handed neutrinos trapped as transverse zero modes in the core of the strings.
When cosmic string loops decay, they release these neutrinos. This is an out-
of-equilibrium process. The released neutrinos acquire heavy Majorana masses
and decay into massless leptons and electroweak Higgs particles to produce a
lepton asymmetry, which is converted into a baryon number asymmetry via
sphaleron transitions Jeannerot:1996yi . Thus, we can explain the baryon
number asymmetry via non-thermal leptogenesis.
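A rough consistency check of these scales (not part of the paper), assuming $m_{\chi}\sim y_{\chi}\langle S\rangle$ and the standard type-I seesaw estimate $m_{\nu}\sim(y_{\nu}\langle H^{\prime}\rangle)^{2}/(y_{N}\langle S\rangle)$:

```python
# Sketch: order-of-magnitude scales implied by the quoted VEVs and couplings.
vev_S = 1e9       # GeV, U(1)' breaking scale
vev_Hp = 0.1      # GeV, VEV of H'
y_chi = 1e-6      # DM Yukawa coupling
y_N = y_nu = 1.0  # right-handed and Dirac neutrino Yukawas

m_DM = y_chi * vev_S                         # ~ 1e3 GeV, i.e. a TeV-scale WIMP
M_N = y_N * vev_S                            # ~ 1e9 GeV right-handed neutrino mass
m_nu_eV = (y_nu * vev_Hp) ** 2 / M_N * 1e9   # seesaw estimate, converted to eV
print(m_DM, "GeV,", m_nu_eV, "eV")           # ~1000 GeV, ~0.01 eV
```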
In this paper, we consider two degenerate Majorana DM candidates $\chi_{1}$
and $\chi_{2}$ since the annihilation cross section for a single Majorana DM
particle is too small to explain the recent cosmic ray experiments Bi:2009md .
For simplicity, we assume that the Lagrangian is invariant under
$\chi_{1}\leftrightarrow\chi_{2}$. Thus, we have
$\displaystyle y_{e}^{i}\equiv y_{e}^{\prime
i}~{},~{}~{}~{}y_{\chi}^{ij}\equiv y_{\chi}^{ji}~{}.~{}\,$ (2)
To make sure that we have two degenerate Majorana DM candidates $\chi_{1}$ and
$\chi_{2}$, we choose $y_{\chi}^{12}=0$, and assume
$m_{\chi}<m_{\widetilde{E}}$.
### II.2 Non-Thermal Dark Matter Production via Cosmic Strings
We assume that the $U(1)^{\prime}$ gauge symmetry is broken by the VEV of the
scalar field $S$. To be specific, we take the potential of $S$ to be
$V(S)\,=\,\frac{1}{4}\lambda\bigl{(}|S|^{2}-\eta^{2}\bigr{)}^{2}\,,$ (3)
where $\lambda$ is the self-interaction coupling constant. The VEV of $S$
hence is $\langle S\rangle=\eta$ with $m^{2}_{S}=\lambda\eta^{2}$. Due to
finite temperature effects, the symmetry is unbroken at high temperatures.
During the cooling of the very early universe, a symmetry breaking phase
transition takes place at a temperature $T_{c}$ with
$T_{c}\,\simeq\,\sqrt{\lambda}\eta\,.$ (4)
During this phase transition, inevitably a network of local cosmic strings
will be formed. These strings are topologically non-trivial field
configurations formed by the Higgs field $S$ and the $U(1)^{\prime}$ gauge
field $A$. The mass per unit length of the strings is given by $\mu=\eta^{2}$.
During the phase transition, a network of strings forms, consisting of both
infinite strings and cosmic string loops. After the transition, the infinite
string network coarsens and more loops form from the intercommuting of
infinite strings. Cosmic string loops lose their energy by emitting
gravitational radiation. When the radius of a loop becomes of the order of the
string width $w\simeq\lambda^{-1/2}\eta^{-1}$, the loop releases its final
energy into a burst of $A$ and $S$ particles. (We are not considering here DM
production from cosmic string cusp annihilation since the efficiency of this
mechanism may be much smaller than the upper estimate established in RHBcusp,
as discussed e.g. in Olum. DM production from cusp annihilation has been
considered in Morissey.) Those particles subsequently decay into DM
particles, with branching ratios $\epsilon$ and $\epsilon^{\prime}$. For
simplicity we assume that all the final string energy goes into $A$ particles.
A single decaying cosmic string loop thus releases
$N\,\simeq\,2\pi\lambda^{-1}\epsilon$ (5)
DM particles which we take to have a monochromatic distribution with energy
$E\sim{T_{c}\over 2}$, the energy of an $S$-quantum in the broken phase. In our
model, we assume that the masses for $A$, $S$ and $N_{i}$ are roughly the
same, so we have $\epsilon=1$.
Given the symmetry we have imposed, the number densities of $\chi_{1}$ and
$\chi_{2}$ are equal. Thus, the number density $n_{DM}$ of DM particles, the
sum of the number densities of $\chi_{1}$ and $\chi_{2}$, is
$\displaystyle
n_{DM}\,\equiv\,n_{\chi_{1}}+n_{\chi_{2}}\,=\,2n_{\chi_{1}}\,=\,2n_{\chi_{2}}~{}.~{}\,$
(6)
If the $S$ and $A$ quanta were in thermal equilibrium before the phase
transition, then the string network is formed with a microscopic correlation
length $\xi(t_{c})$ (where $t_{c}$ is the time at which the phase transition
takes place). The correlation length gives the mean curvature radius and mean
separation of the strings. As discussed in Kibble (see also the reviews
ShelVil ), the initial correlation length is
$\xi(t_{c})\,\sim\,\lambda^{-1}\eta^{-1}\,.$ (7)
After string formation, there is a time interval during which the dynamics of
the strings is friction-dominated. In this period, the correlation length
increases faster than the Hubble radius because loop intercommutation is very
efficient. As was discussed e.g. in RobRiotto , the correlation length scale
$\xi(t)$ in the friction epoch scales as
$\xi(t)\,=\,\xi(t_{c})\left({t\over t_{c}}\right)^{3\over 2}~{}.~{}\,$ (8)
The friction epoch continues until $\xi(t)$ becomes comparable to the Hubble
radius $t$. After this point, the string network follows a “scaling solution”
with $\xi(t)\sim t$. This scaling solution continues to the present time.
The loss of energy from the network of long strings with correlation length
$\xi(t)$ is predominantly due to the production of cosmic string loops. The
number density of cosmic string loops created per unit of time is given by
ShelVil ; RobRiotto :
${dn\over dt}\,=\,\nu\xi^{-4}{d\xi\over dt}~{},~{}\,$ (9)
where $\nu$ is a constant of order 1. We are interested in loops decaying
below the temperature $T_{\chi}$ when the DM particles fall out of thermal
equilibrium (loops decaying earlier will produce DM particles which simply
thermalize). We denote the corresponding time by $t_{\chi}$.
The DM number density released from $t_{\chi}$ until today is obtained by
summing up the contributions of all decaying loops zhang . Each loop yields a number
$N$ of DM particles. We track the loops decaying at some time $t$ in terms of
the time $t_{F}$ when that loop was created. Since the loop density decreases
sharply as a function of time, it is the loops which decay right after
$t_{\chi}$ which dominate the integral. For the values of $G\mu$ which we are
interested in, it turns out that loops decaying around $t_{\chi}$ were created
in the friction epoch, and the loop number density is determined by inserting
(8) into (9). Changing the integration variable from $t$ to $\xi(t)$, we
integrate the redshifted number density to obtain:
$n^{nonth}_{DM}(t_{0})\,=\,N\nu\int^{\xi_{0}}_{\xi_{F}}\left({t\over
t_{0}}\right)^{3\over 2}\xi^{-4}d\xi~{},~{}\,$ (10)
where the subscript $0$ refers to parameters which are evaluated today. In the
above, $\xi_{F}=\xi(t_{F})$ where $t_{F}$ is the time at which cosmic string
loops which are decaying at the time $t_{\chi}$ formed.
Now the loop’s time-averaged radius (radius averaged over a period of
oscillation) shrinks at a rate ShelVil
${dR\over dt}\,=\,-\Gamma_{loops}G\mu\,,$ (11)
where $\Gamma_{loops}$ is a numerical factor $\sim 10-20$. Since loops form at
time $t_{F}$ with an average radius
$R(t_{F})\,\simeq\,\lambda^{1/2}{g^{*}}^{3/4}G\mu M_{pl}^{1\over
2}t_{F}^{3\over 2},$ (12)
where $g_{*}$ counts the number of massless degrees of freedom in the
corresponding phase, they have shrunk to a point at the time
$t\,\simeq\,\lambda^{1/2}{g^{*}}^{3/4}\Gamma^{-1}_{loops}M_{\rm Pl}^{1\over
2}t_{F}^{3\over 2}.$ (13)
Thus
$t_{F}\,\sim\,\lambda^{-1/3}{g^{*}}^{-1/2}\Gamma^{2\over 3}_{loops}M_{\rm
Pl}^{-{1\over 3}}t_{\chi}^{2\over 3}.$ (14)
Now the entropy density is
$s\,=\,{2\pi^{2}\over 45}g_{*}T^{3}\,.$ (15)
The time $t$ and temperature $T$ are related by
$t\,=\,0.3\,g_{*}(T)^{-{1\over 2}}\,{M_{\rm Pl}\over T^{2}}\,,$ (16)
where $M_{\rm Pl}$ is the Planck mass. Thus using Eqs. (8) and (10), we find
that the DM number density today released by decaying cosmic string loops is
given by
$Y^{nonth}_{DM}\equiv\,{n^{nonth}_{DM}\over
s}\,=\,{{6.75}\over{\pi}}\epsilon\nu\lambda^{3/2}\Gamma_{loops}^{-2}g_{*_{T_{c}}}^{3/2}g_{*_{T_{\chi}}}g_{*_{T_{F}}}^{-5/2}M_{\rm
Pl}^{2}\,{T_{\chi}^{4}\over T_{c}^{6}}\,,$ (17)
where the subscript on $g_{*}$ denotes the temperature at which $g_{*}$ is evaluated.
The DM relic abundance is related to $Y_{\chi}$ by:
$\displaystyle\Omega_{\chi}\,h^{2}$ $\displaystyle\approx$ $\displaystyle
m_{\chi}Y_{\chi}s(t_{0})\rho_{c}(t_{0})^{-1}h^{2}$ (18) $\displaystyle\approx$
$\displaystyle 2.82\times 10^{8}\,Y^{tot}_{\chi}\,(m_{\chi}/{\rm
GeV})~{},~{}\,$
where $h$ is the Hubble parameter in units of $100{\rm km}{\rm s}^{-1}{\rm
Mpc}^{-1}$, $m_{\chi}$ is the DM mass, and
$Y^{tot}_{\chi}=Y^{therm}_{\chi}+Y^{nonth}_{\chi}$.
To give some concrete numbers, we choose the parameter values $\epsilon=1$,
$\nu=1$, $\lambda=0.5$, $\Gamma=10$, $M_{\rm Pl}=1.22\times 10^{19}~{}{\rm
GeV}$ and $\Omega_{\chi}\,h^{2}=0.11$. In our model, we have
$g_{*_{T_{c}}}=136$, $g_{*_{T_{F}}}=128$, and $g_{*_{T_{\chi}}}=128$. We
define the dimensionless ratios
$\displaystyle\alpha\equiv{{m_{\chi}}\over{T_{\chi}}}~{},~{}~{}~{}\beta\equiv{{Y^{nonth}_{\chi}}\over{Y^{tot}_{\chi}}}~{}.~{}\,$
(19)
Demanding that we obtain a specific value of $\beta$ for the above choices of
the parameter values will fix $T_{c}$ via (18). For various values of $\alpha$
and $\beta$, we present the resulting $T_{c}$ values for the cases
$m_{\chi}=620~{}{\rm GeV}$, $m_{\chi}=780~{}{\rm GeV}$, and
$m_{\chi}=1500~{}{\rm GeV}$, respectively, in Table 1. In short, $T_{c}$ must
be around $10^{9}$ GeV if we want to generate enough DM density non-thermally
via cosmic strings.
Table 1: The required $T_{c}$ values in units of GeV for various choices of $\alpha$ and $\beta$ in the cases $m_{\chi}=620~{}{\rm GeV}$, $m_{\chi}=780~{}{\rm GeV}$, and $m_{\chi}=1500~{}{\rm GeV}$, respectively. $\alpha$ | 1 | 1 | 2 | 2 | 5 | 5
---|---|---|---|---|---|---
$\beta$ | 1 | 0.5 | 1 | 0.5 | 1 | 0.5
$T_{c}$ ($m_{\chi}=620~{}{\rm GeV}$) | $7.7\times 10^{9}$ | $8.6\times 10^{9}$ | $4.8\times 10^{9}$ | $5.4\times 10^{9}$ | $2.6\times 10^{9}$ | $2.9\times 10^{9}$
$T_{c}$ ($m_{\chi}=780~{}{\rm GeV}$) | $9.3\times 10^{9}$ | $1.0\times 10^{10}$ | $5.9\times 10^{9}$ | $6.6\times 10^{9}$ | $3.2\times 10^{9}$ | $3.6\times 10^{9}$
$T_{c}$ ($m_{\chi}=1500~{}{\rm GeV}$) | $1.6\times 10^{10}$ | $1.8\times 10^{10}$ | $1.0\times 10^{10}$ | $1.1\times 10^{10}$ | $5.5\times 10^{9}$ | $6.2\times 10^{9}$
$\alpha$ | 10 | 10 | 15 | 15 | 20 | 20
$\beta$ | 1 | 0.5 | 1 | 0.5 | 1 | 0.5
$T_{c}$ ($m_{\chi}=620~{}{\rm GeV}$) | $1.7\times 10^{9}$ | $1.9\times 10^{9}$ | $1.3\times 10^{9}$ | $1.4\times 10^{9}$ | $1.0\times 10^{9}$ | $1.2\times 10^{9}$
$T_{c}$ ($m_{\chi}=780~{}{\rm GeV}$) | $2.0\times 10^{9}$ | $2.2\times 10^{9}$ | $1.5\times 10^{9}$ | $1.7\times 10^{9}$ | $1.3\times 10^{9}$ | $1.4\times 10^{9}$
$T_{c}$ ($m_{\chi}=1500~{}{\rm GeV}$) | $3.5\times 10^{9}$ | $3.9\times 10^{9}$ | $2.6\times 10^{9}$ | $3.0\times 10^{9}$ | $2.2\times 10^{9}$ | $2.4\times 10^{9}$
## III Phenomenology of the model
### III.1 Constraints on the Model Parameters
The coupling constants $y_{e}^{i}$ between right-handed leptons and the DM
sector are constrained by experiments, especially by the precise measurement of
the muon anomalous magnetic moment $g-2$. Assuming that the masses of $\chi$ and
$\tilde{E}$ are nearly degenerate, we obtain that the contribution to the muon
anomalous magnetic moment from the new coupling is about bi02
$\delta
a_{i}\,\sim\,(y_{e}^{i})^{2}\frac{1}{192\pi^{2}}\frac{m_{e^{i}}^{2}}{m_{\chi}^{2}}~{}.~{}\,$
(20)
The $2\sigma$ upper bound from the E821 Collaboration on $\delta a_{\mu}$ is
smaller than $\sim 40\times 10^{-10}$ e821 , from which we get for
$m_{\chi}\sim 1$ TeV,
$y_{\mu}\,\lesssim\,10\,.$ (21)
For the electron anomalous magnetic moment we assume that the contribution from
the dark sector is within the experimental error pdg
$\delta a_{e}\,\leq\,7\times 10^{-13}\,.$ (22)
Then we get an upper limit on $y_{e}$ of about $30$. Therefore the
constraints on the couplings of the model due to the heavy masses of the new
particles are quite loose.
Now we study the constraints from the experimental limits on lepton flavor
violation (LFV) processes such as $\mu\to e\gamma$, $\tau\to\mu(e)\gamma$ and
so on. The branching ratios for the radiative LFV processes are given by bi02
$Br(e_{i}\to e_{j}\gamma)\,\sim\,\frac{\alpha_{em}m_{i}^{5}}{2}\times\left(\frac{y_{e}^{i}y_{e}^{j}}{384\pi^{2}m_{\chi}^{2}}\right)^{2}/\Gamma_{i}\,,$ (23)
where $\Gamma_{i}$ is the width of $e_{i}$. Given the experimental constraint
on the process $\mu\to e\gamma$ we get
$Br(\mu\to e\gamma)\,\sim\,10^{-8}\times(y_{e}y_{\mu})^{2}\lesssim
10^{-11}~{},~{}\,$ (24)
which gives that $y_{e}y_{\mu}\lesssim 0.03$. For the process
$\tau\to\mu(e)\gamma$ we have
$Br(\tau\to\mu(e)\gamma)\,\sim\,10^{-9}\times(y_{\mu(e)}y_{\tau})^{2}\lesssim
10^{-7}~{},~{}\,$ (25)
which leads to the conclusion that $y_{\tau}y_{\mu(e)}\lesssim 10$. Connecting
the DM sector to the PAMELA and Fermi-LAT (or ATIC) results usually requires a
large branching ratio into electron and positron pairs. From the LFV
constraints shown above we conclude that it is possible to have a large
branching ratio for the annihilation of the DM particles directly into
$e^{+}e^{-}$, or via $\mu^{+}\mu^{-}$.
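The quoted coupling products follow directly from the branching-ratio estimates in Eqs. (24) and (25); a minimal arithmetic check in Python (the prefactors and limits are those quoted above, and the variable names are ours):

# Rough check of the LFV bounds implied by Eqs. (24)-(25).
import math

ye_ymu_max  = math.sqrt(1e-11 / 1e-8)   # Br(mu -> e gamma) < 1e-11
ytau_yl_max = math.sqrt(1e-7  / 1e-9)   # Br(tau -> mu(e) gamma) < 1e-7
print(ye_ymu_max, ytau_yl_max)           # ~0.03 and ~10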
### III.2 Explanation for the Cosmic $e^{\pm}$ Excesses
In our model the DM sector only couples to the SM lepton sector. Therefore DM
annihilates into leptons dominantly. Furthermore, since DM is produced non-
thermally in our model the DM annihilation rates can be quite large with a
sizable Yukawa coupling $y_{e}^{i}$. Thus our model can naturally explain the
cosmic $e^{\pm}$ excesses observed.
Because the annihilation cross sections for $\chi_{1}\chi_{1}$ and
$\chi_{2}\chi_{2}$ to leptons are $s$ wave suppressed, the dominant cross
sections of $\chi_{1}\chi_{2}$ annihilating into charged leptons are given by
Bi:2009md
$\displaystyle\sigma_{ij}v$ $\displaystyle\equiv$
$\displaystyle\sigma_{\chi_{1}\chi_{2}\rightarrow e_{R}^{i}e_{R}^{cj}}v$ (26)
$\displaystyle=$
$\displaystyle\frac{4}{32\pi}|y_{e}^{i}|^{2}|y_{e}^{j}|^{2}\frac{1}{s\sqrt{s\left(s-4m_{\chi}^{2}\right)}}\left\\{\sqrt{s\left(s-4m_{\chi}^{2}\right)}+\left[2\left(m_{\widetilde{E}}^{2}-m_{\chi}^{2}\right)-\frac{2m_{\chi}^{2}s}{s+2m_{\widetilde{E}}^{2}-2m_{\chi}^{2}}\right]\right.$
$\displaystyle\times\ln\left|\frac{s+2m_{\widetilde{E}}^{2}-2m_{\chi}^{2}-\sqrt{s\left(s-4m_{\chi}^{2}\right)}}{s+2m_{\widetilde{E}}^{2}-2m_{\chi}^{2}+\sqrt{s\left(s-4m_{\chi}^{2}\right)}}\right|+2\left(m_{\widetilde{E}}^{2}-m_{\chi}^{2}\right)^{2}$
$\displaystyle\left.\times\left[\frac{1}{s+2m_{\widetilde{E}}^{2}-2m_{\chi}^{2}-\sqrt{s\left(s-4m_{\chi}^{2}\right)}}-\frac{1}{s+2m_{\widetilde{E}}^{2}-2m_{\chi}^{2}+\sqrt{s\left(s-4m_{\chi}^{2}\right)}}\right]\right\\}~{},~{}\,$
where $v$ is the relative velocity between the two annihilating particles in
their center-of-mass system. The overall factor of 4 will be cancelled when we
calculate the lepton fluxes, so we retain it in the discussion below. Up to
$\mathcal{O}(v^{2})$, the above cross section can be simplified as Bi:2009md
$\displaystyle\sigma_{ij}v$ $\displaystyle\simeq$
$\displaystyle\frac{4}{128\pi}|y_{e}^{i}|^{2}|y_{e}^{j}|^{2}\left\\{\frac{8}{(2+r)^{2}}+\left[\frac{1}{(2+r)^{2}}-\frac{8}{(2+r)^{3}}\right]v^{2}\right\\}\frac{1}{m_{\chi}^{2}}~{},~{}\,$
(27)
where
$\displaystyle
r\equiv\frac{m_{\widetilde{E}}^{2}-m_{\chi}^{2}}{m_{\chi}^{2}}>0\,.$ (28)
With $v\sim 10^{-3}$ and $r\sim 0$, we obtain Bi:2009md
$\displaystyle\langle\sigma_{ij}v\rangle$ $\displaystyle\lesssim$
$\displaystyle 4\times 1.2\times
10^{-25}\,\textrm{cm}^{3}\textrm{sec}^{-1}\left(\frac{700\,\textrm{GeV}}{m_{\chi}}\right)^{2}|y_{e}^{i}|^{2}|y_{e}^{j}|^{2}~{}.~{}\,$
(29)
We emphasize that the Yukawa couplings $y_{e}^{i}$ should be smaller than
${\sqrt{4\pi}}$ for the perturbative analysis to be valid.
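As a minimal numerical check of Eq. (29), the following Python sketch evaluates the leading $v\to 0$, $r\to 0$ limit of Eq. (27) with unit Yukawa couplings and converts from natural units; the conversion constant and function name are our own additions, not part of the original derivation.

# Numerical check of Eq. (29): leading (v -> 0, r -> 0) limit of Eq. (27),
# with the Yukawa couplings set to 1.
import math

GEV2_TO_CM3S = 3.894e-28 * 2.998e10   # 1 GeV^-2 in cm^2, times c [cm/s]

def sigma_v(m_chi, r=0.0):
    """chi_1 chi_2 -> e_R e_R^c cross section times velocity, Eq. (27) at v = 0."""
    return (4.0 / (128.0 * math.pi)) * 8.0 / (2.0 + r)**2 / m_chi**2 * GEV2_TO_CM3S

print(sigma_v(700.0))   # ~4.7e-25 cm^3/s, close to 4 x 1.2e-25 in Eq. (29)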
In our model with non-thermal production of DM particles, we consider two
separate fits to the ATIC/PPB-BETS/PAMELA and Fermi-LAT/HESS/PAMELA datasets.
Firstly we consider a numerical fit to the ATIC, PPB-BETS and PAMELA data
Bi:2009md . In this case we assume the DM mass to be $620$ GeV and that DM
annihilates into electron/positron pairs predominantly, i.e., $y_{e}^{i}\sim
0$ for $i=2,~{}3$. In the second case we fit the Fermi-LAT, HESS and PAMELA
data by taking the DM mass $1500$ GeV and assuming that DM annihilates into
$\mu^{+}\mu^{-}$ pairs dominantly. Note that all lepton fluxes resulting from
DM annihilation are proportional to $n_{\chi}^{2}\sigma_{\rm ann}$ for models
with a single DM candidate $\chi$. Because
$n_{\chi_{1}}=n_{\chi_{2}}=n_{\chi}/2$ in our model, the lepton fluxes are
proportional to
$\displaystyle n_{\chi_{1}}n_{\chi_{2}}\sigma_{\rm ann}~{}=~{}{1\over
4}n_{\chi}^{2}\sigma_{\rm ann}~{}.~{}\,$ (30)
This will cancel the overall factor 4 in the above annihilation cross sections
in Eqs. (26) and (27).
Figure 1: Left: The $e^{+}+e^{-}$ spectrum including the contribution from DM
annihilation compared with the observational data from ATIC Chang:2008zz ,
PPB-BETS Torii:2008 , HESS HESS:2008aa ; Aharonian:2009ah and Fermi-LAT
Fermi:2009zk . Right: The $e^{+}/(e^{-}+e^{+})$ ratio including the
contribution from DM annihilation as a function of energy compared with the
data from AMS Aguilar:2007yf , HEAT Barwick:1997ig ; Coutu:2001jy and PAMELA
Adriani:2008zr . Two sets of fitting parameters are considered: in one model
(Model I) the DM mass is $620$ GeV with $e^{+}e^{-}$ being the main
annihilation channel to fit the ATIC data, while in the other model (Model II)
the DM mass is $1500$ GeV and we assume that $\mu^{+}\mu^{-}$ is the main
annihilation channel to fit the Fermi-LAT data.
In Fig. 1 we show that both cases can give a good fit to the data after
considering the propagation of electrons and positrons in interstellar space
Bi:2009md with annihilation cross sections of $0.75\times
10^{-23}\,\textrm{cm}^{3}\textrm{s}^{-1}$ and $3.6\times
10^{-23}\,\textrm{cm}^{3}\textrm{s}^{-1}$, respectively. The model parameters
of the two fits are given in Table 2. For the first fit, we do not need a
boost factor at all when choosing $y_{e}^{1}=2.6$, which is still smaller than
the upper limit ${\sqrt{4\pi}}$ for a valid perturbative theory. Moreover,
choosing $y_{e}^{2}=3$ in the second fit, we only need a small boost factor of
about 10, which may be due to clumpiness of the DM distribution yuan .
Therefore, the results on the observed cosmic $e^{\pm}$ excesses can be
explained naturally in our model.
### III.3 $\gamma$-Ray Radiation from the Galactic Center
Since explaining the anomalous cosmic rays requires a very large
annihilation cross section to account for the observational results, this
condition leads to strong $\gamma$-ray radiation from the final lepton
states. In particular, observations of the GC berg or the center of dwarf
galaxies essig have already led to constraints on the flux of the
$\gamma$-ray radiation.
The HESS observation of $\gamma$-rays from the GC hess sets constraints on
the Galactic DM profile. The NFW profile in the standard CDM scenario leads to
too large a flux of $\gamma$-rays, thus conflicting with the HESS observation.
On the other hand, if DM is produced non-thermally as suggested in Section II
the DM profile will have a constant density core kaplinghat so that the
$\gamma$-ray radiation from the GC will be greatly suppressed.
In our numerical studies, we consider the following two cases to constrain the
DM profile:
* •
Case I: we simply require that the $\gamma$-ray flux due to final state
radiation (FSR) does not exceed the HESS observation.
* •
Case II: we make a global fit to the HESS data by assuming an astrophysical
source with power law spectrum plus an additional component from FSR resulting
from DM annihilation.
Figure 2: Upper: the FSR $\gamma$-ray fluxes from a region with
$|l|<0.8^{\circ}$ and $|b|<0.3^{\circ}$ close to GC compared with the
observational data from HESS hess . The left panel compares the two models
given in Table 2 directly with the data, while the right panel shows the
combined fitting results using a power law astrophysical background together
with the FSR contribution from DM annihilation at $95\%$ ($2\sigma$)
confidence level. Lower: constraints on the DM profile parameters $\gamma$ and
$c_{vir}$ due to the HESS observation of $\gamma$-ray radiation from the GC by
assuming different final leptonic states. The left panel corresponds to the
constraint Case I, while the right panel corresponds to Case II. The two
curves in the right panel represent the $1\sigma$ and $2\sigma$ upper bounds
respectively.
Let us consider a DM profile taking the form
$\rho(r)\,=\,\frac{\rho_{s}}{\left(\frac{r}{r_{s}}\right)^{\gamma}\left(1+\frac{r}{r_{s}}\right)^{3-\gamma}}~{},~{}\,$
(31)
where $\rho_{s}$ is the scale density and $r_{s}\equiv
r_{vir}/c_{vir}(1-\gamma)$ is the scale radius, with $r_{vir}$ the virial
radius of the halo333The virial radius is usually defined as the radius inside
which the average density of DM equals some factor times the critical density
$\rho_{c}$, e.g., $18\pi^{2}+82x-39x^{2}$ with
$x=\Omega_{M}(z)-1=-\frac{\Omega_{\Lambda}}{\Omega_{M}(1+z)^{3}+\Omega_{\Lambda}}$
for a $\Lambda$CDM universe Bryan:1997dn . and $c_{vir}$ the concentration
parameter. In this work the concentration parameter $c_{vir}$ and shape
parameter $\gamma$ are left free, and we normalize the local DM density to be
$0.3$ GeV cm-3. The virial radius and total halo mass are then solved for
self-consistently. Given the density profile, the $\gamma$-ray flux along
a specific direction can be written as
$\displaystyle\phi(E,\psi)$ $\displaystyle=$ $\displaystyle C\times W(E)\times
J(\psi)$ (32) $\displaystyle=$
$\displaystyle\frac{\rho_{\odot}^{2}R_{\odot}}{4\pi}\times\frac{\langle\sigma
v\rangle}{2m_{\chi}^{2}}\frac{{\rm d}N}{{\rm
d}E}\times\frac{1}{\rho_{\odot}^{2}R_{\odot}}\int_{LOS}\rho^{2}(l){\rm
d}l~{},~{}\,$
where the integral is taken along the line-of-sight, $W(E)$ and $J(\psi)$
represent the particle physics factor and the astrophysical factor
respectively. Thus, if the particle physics factor is fixed using the locally
observed $e^{+}e^{-}$ fluxes, we can get constraints on the astrophysical
factor, and hence the DM density profile, according to the $\gamma$-ray flux.
For the emission from a diffuse region with solid angle $\Delta\Omega$, we
define the average astrophysical factor as
$J_{\Delta\Omega}=\frac{1}{\Delta\Omega}\int_{\Delta\Omega}J(\psi){\rm
d}\Omega~{}.~{}\,$ (33)
The constraints on the average astrophysical factor $J_{\Delta\Omega}$ for the
two models are gathered in Table 2, in which $J_{\Delta\Omega}^{\rm max}$
shows the maximum $J$ factor corresponding to Case I, while
$J_{\Delta\Omega}^{1\sigma,2\sigma}$ corresponds to Case II, at the $68\%$
($1\sigma$) and $95\%$ ($2\sigma$) confidence levels. The $\gamma$-ray fluxes
of the two cases are shown in the upper panels of Fig. 2.
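To make the connection between the profile of Eq. (31) and the astrophysical factor concrete, the following Python sketch evaluates the dimensionless $J(\psi)$ of Eq. (32) by direct line-of-sight integration; the values of $R_{\odot}$, the example profile parameters, and the function names are illustrative assumptions and do not correspond to the fits quoted in Table 2.

# Minimal sketch of the dimensionless J(psi) of Eq. (32) for the profile of
# Eq. (31), integrated numerically along the line of sight.
import numpy as np

R_SUN = 8.5        # kpc, assumed Sun-GC distance
RHO_SUN = 0.3      # GeV/cm^3, local DM density used for normalization

def rho(r, rho_s, r_s, gamma):
    """Generalized NFW profile, Eq. (31); r and r_s in kpc."""
    x = r / r_s
    return rho_s / (x**gamma * (1.0 + x)**(3.0 - gamma))

def J_psi(psi, rho_s, r_s, gamma, l_max=100.0, n=20000):
    """J(psi) = (1/(rho_sun^2 R_sun)) * integral of rho^2 along the LOS."""
    l = np.linspace(1e-3, l_max, n)                               # kpc
    r = np.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * np.cos(psi))
    return np.trapz(rho(r, rho_s, r_s, gamma)**2, l) / (RHO_SUN**2 * R_SUN)

# Example: an NFW-like profile (gamma = 1) normalized so that rho(R_SUN) = RHO_SUN.
r_s = 20.0
rho_s = RHO_SUN / rho(R_SUN, 1.0, r_s, 1.0)
print(J_psi(np.radians(0.1), rho_s, r_s, 1.0))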
In the lower panels of Fig. 2 we show the iso-$J_{\Delta\Omega}$ lines in the
$\gamma-c_{vir}$ plane for Case I (left) and Case II (right) respectively. In
this figure we also show the mass condition of $(1-2)\times 10^{12}$ M⊙ of the
Milky Way halo. From Fig. 2 we can see that the NFW profile with $\gamma=1$
(chosen based on N-body simulations in the standard CDM scenario) is
constrained by the HESS data if the observed cosmic $e^{\pm}$ excesses are
interpreted as DM annihilation. However, if DM is produced non-thermally, the
high velocity of the DM particles makes them behave like warm DM, leading
to a flat DM profile which suppresses the $\gamma$-ray flux from the GC.
Table 2: Parameters of the two scenarios adopted to fit the ATIC/PPB-BETS/PAMELA or Fermi-LAT/HESS/PAMELA data. | channel | $m_{\chi}$(GeV) | $\langle\sigma v\rangle$($10^{-23}$cm3 s-1) | $J_{\Delta\Omega}^{\rm max}$ | $J_{\Delta\Omega}^{1\sigma}$ | $J_{\Delta\Omega}^{2\sigma}$
---|---|---|---|---|---|---
Model I | $e^{+}e^{-}$ | $620$ | $0.75$ | $300$ | $42$ | $97$
Model II | $\mu^{+}\mu^{-}$ | $1500$ | $3.6$ | $200$ | $81$ | $111$
## IV Discussion and Conclusions
In this paper we have proposed a DM model and studied aspects of its
phenomenology. We have shown that our model can simultaneously explain the
cosmic ray anomalies recently measured by the ATIC, PPB-BETS and PAMELA
experiments or by the Fermi-LAT, HESS and PAMELA experiments, resolve the
small-scale structure problems of the standard $\Lambda$CDM paradigm, explain
the observed neutrino mass hierarchies, explain the baryon number asymmetry
via non-thermal leptogenesis, and suppress the $\gamma$-ray radiation from the
GC.
In this model, DM couples only to leptons. In direct detection experiments it
would show as an “electromagnetic” event rather than a nuclear recoil.
Experiments that reject electromagnetic events would thus be ignoring the
signal. However, in the Fermi-LAT/HESS/PAMELA fit, the DM particle couples
mainly to muons, and since there are no muons in the targets of direct detection
experiments, no significant signal would be expected. In the ATIC/PPB-
BETS/PAMELA fit, the DM couples predominantly to electrons; the electron
recoil energy is of order $m_{e}v_{\rm DM}^{2}\sim 0.1$ eV, and it would be
too small to be detectable in current devices. Alternatively, this energy
could cause fluorescence Starkman:1995ye , although the fluorescence cross
section would be prohibitively small. Regarding the annual modulation signal
observed by DAMA dama , although this experiment accepts all recoil signals,
an estimate of the electron scattering cross section shows that the present
model predicts a cross section which is about $8$ orders of magnitude smaller
than the $\sim 1~{\rm pb}$ required to account for the modulation lepto1 .
Therefore we do not expect a signal in direct detection experiments if the DM
model presented here is realized. In addition, the capture of DM particles in
the Sun or the Earth is also impossible since the DM will not lose its
kinetic energy when scattering off electrons in the Sun. Therefore we do not
expect high energy neutrino signals from the Sun or the Earth either.
## Acknowledgments
We thank Pei-Hong Gu for helpful discussions. We wish to acknowledge the
hospitality of the KITPC under their program “Connecting Fundamental Theory
with Cosmological Observations” during which the ideas reported here were
discussed. This work was supported in part by the Natural Sciences Foundation
of China (Nos. 10773011, 10821504, 10533010, 10675136), by the Chinese Academy
of Sciences under the grant No. KJCX3-SYW-N2, by the Cambridge-Mitchell
Collaboration in Theoretical Cosmology, and by the Project of Knowledge
Innovation Program (PKIP) of Chinese Academy of Sciences, Grant No.
KJCX2.YW.W10. RB wishes to thank the Theory Division of the Institute of High
Energy Physics (IHEP) and the CERN Theory Division for hospitality and
financial support. RB is also supported by an NSERC Discovery Grant and by the
Canada Research Chairs Program. PG thanks IHEP for hospitality and
acknowledges support from the NSF Award PHY-0756962.
## References
* (1) N. A. Bahcall, J. P. Ostriker, S. Perlmutter and P. J. Steinhardt, “The Cosmic Triangle: Revealing the State of the Universe,” Science 284, 1481 (1999) [arXiv:astro-ph/9906463].
* (2) A. A. Klypin, A. V. Kravtsov, O. Valenzuela and F. Prada, “Where are the missing galactic satellites?,” Astrophys. J. 522, 82 (1999) [arXiv:astro-ph/9901240].
* (3) B. Moore, F. Governato, T. R. Quinn, J. Stadel and G. Lake, “Resolving the Structure of Cold Dark Matter Halos,” Astrophys. J. 499, L5 (1998) [arXiv:astro-ph/9709051].
* (4) B. Moore, “Evidence against dissipationless dark matter from observations of galaxy haloes,” Nature 370, 629 (1994).
* (5) A. Burkert, “The Structure of dark matter halos in dwarf galaxies,” IAU Symp. 171, 175 (1996) [Astrophys. J. 447, L25 (1995)] [arXiv:astro-ph/9504041].
* (6) S. S. McGaugh, W. J. G. de Blok, Astrophys. J. 499, 41 (1998).
* (7) J. F. Navarro, & M. Steinmetz, Astrophys. J. 528, 607 (2000).
* (8) W. B. Lin, D. H. Huang, X. Zhang and R. H. Brandenberger, “Non-thermal production of WIMPs and the sub-galactic structure of the universe,” Phys. Rev. Lett. 86, 954 (2001) [arXiv:astro-ph/0009003]; R. Jeannerot, X. Zhang and R. H. Brandenberger, “Non-thermal production of neutralino cold dark matter from cosmic string decays,” JHEP 9912, 003 (1999) [arXiv:hep-ph/9901357].
* (9) J. Chang et al., “An Excess of Cosmic Ray Electrons at Energies Of 300-800 GeV,” Nature 456, 362 (2008).
* (10) S. Torii et al. [PPB-BETS Collaboration], “High-energy electron observations by PPB-BETS flight in Antarctica,” arXiv:0809.0760 [astro-ph].
* (11) O. Adriani et al. [PAMELA Collaboration], “An anomalous positron abundance in cosmic rays with energies 1.5-100 GeV,” Nature 458, 607 (2009) [arXiv:0810.4995 [astro-ph]].
* (12) O. Adriani et al., “A new measurement of the antiproton-to-proton flux ratio up to 100 GeV in the cosmic radiation,” Phys. Rev. Lett. 102, 051101 (2009) [arXiv:0810.4994 [astro-ph]].
* (13) S. W. Barwick et al. [HEAT Collaboration], “Measurements of the cosmic-ray positron fraction from 1-GeV to 50-GeV,” Astrophys. J. 482, L191 (1997) [arXiv:astro-ph/9703192].
* (14) M. Aguilar et al. [AMS-01 Collaboration], “Cosmic-ray positron fraction measurement from 1-GeV to 30-GeV with AMS-01,” Phys. Lett. B 646, 145 (2007) [arXiv:astro-ph/0703154].
* (15) Fermi Collaboration, “Measurement of the Cosmic Ray e+ plus e- spectrum from 20 GeV to 1 TeV with the Fermi Large Area Telescope,” arXiv:0905.0025 [astro-ph.HE].
* (16) H. E. S. S. Collaboration, “Probing the ATIC peak in the cosmic-ray electron spectrum with H.E.S.S,” arXiv:0905.0105 [astro-ph.HE].
* (17) F. Aharonian et al. [H.E.S.S. Collaboration], “The energy spectrum of cosmic-ray electrons at TeV energies,” Phys. Rev. Lett. 101, 261104 (2008) [arXiv:0811.3894 [astro-ph]].
* (18) L. Bergstrom, J. Edsjo and G. Zaharijas, “Dark matter interpretation of recent electron and positron data,” arXiv:0905.0333 [astro-ph.HE]. S. Shirai, F. Takahashi and T. T. Yanagida, “R-violating Decay of Wino Dark Matter and electron/positron Excesses in the PAMELA/Fermi Experiments,” arXiv:0905.0388 [hep-ph]. D. Grasso et al., “On possible interpretations of the high energy electron-positron spectrum measured by the Fermi Large Area Telescope,” arXiv:0905.0636 [astro-ph.HE]. C. H. Chen, C. Q. Geng and D. V. Zhuridov, “Resolving Fermi, PAMELA and ATIC anomalies in split supersymmetry without R-parity,” arXiv:0905.0652 [hep-ph].
* (19) S. Rudaz and F. W. Stecker, “Cosmic Ray Anti-Protons, Positrons and Gamma-Rays from Halo Dark Matter Annihilation,” Astrophys. J. 325, 16 (1988); J. R. Ellis, R. A. Flores, K. Freese, S. Ritz, D. Seckel and J. Silk, “Cosmic Ray Constraints on the Annihilations of Relic Particles in the Galactic Halo,” Phys. Lett. B 214, 403 (1988); M. S. Turner and F. Wilczek, “Positron Line Radiation from Halo WIMP Annihilations as a Dark Matter Signature,” Phys. Rev. D 42, 1001 (1990); M. Kamionkowski and M. S. Turner, “A Distinctive Positron Feature From Heavy Wimp Annihilations In The Galactic Halo,” Phys. Rev. D 43, 1774 (1991); A. J. Tylka, “Cosmic Ray Positrons from Annihilation of Weakly Interacting Massive Particles in the Galaxy,” Phys. Rev. Lett. 63, 840 (1989) [Erratum-ibid. 63, 1658 (1989)]; A. J. Tylka and D. Eichler, “Cosmic Ray Positrons from Photino Annihilation in the Galactic Halo,” E. A. Baltz and J. Edsjo, “Positron Propagation and Fluxes from Neutralino Annihilation in the Halo,” Phys. Rev. D 59, 023511 (1999) [arXiv:astro-ph/9808243]. G. L. Kane, L. T. Wang and J. D. Wells, “Supersymmetry and the positron excess in cosmic rays,” Phys. Rev. D 65, 057701 (2002) [arXiv:hep-ph/0108138]. E. A. Baltz, J. Edsjo, K. Freese and P. Gondolo, “The cosmic ray positron excess and neutralino dark matter,” Phys. Rev. D 65, 063511 (2002) [arXiv:astro-ph/0109318].
* (20) V. Barger, W. Y. Keung, D. Marfatia and G. Shaughnessy, “PAMELA and dark matter,” Phys. Lett. B 672, 141 (2009) [arXiv:0809.0162 [hep-ph]]. M. Cirelli, M. Kadastik, M. Raidal and A. Strumia, “Model-independent implications of the e+, e-, anti-proton cosmic ray spectra on properties of Dark Matter,” Nucl. Phys. B 813, 1 (2009) [arXiv:0809.2409 [hep-ph]]. N. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer and N. Weiner, “A Theory of Dark Matter,” Phys. Rev. D 79, 015014 (2009) [arXiv:0810.0713 [hep-ph]]; M. Fairbairn and J. Zupan, “Two component dark matter,” arXiv:0810.4147 [hep-ph]; A. E. Nelson and C. Spitzer, “Slightly Non-Minimal Dark Matter in PAMELA and ATIC,” arXiv:0810.5167 [hep-ph]; I. Cholis, D. P. Finkbeiner, L. Goodenough and N. Weiner, “The PAMELA Positron Excess from Annihilations into a Light Boson,” arXiv:0810.5344 [astro-ph]; Y. Nomura and J. Thaler, “Dark Matter through the Axion Portal,” arXiv:0810.5397 [hep-ph]; D. Feldman, Z. Liu and P. Nath, “PAMELA Positron Excess as a Signal from the Hidden Sector,” Phys. Rev. D 79, 063509 (2009) [arXiv:0810.5762 [hep-ph]]. P.F. Yin, Q. Yuan, J. Liu, J. Zhang, X.J. Bi, S.H. Zhu, and X. M. Zhang, “PAMELA data and leptonically decaying dark matter,” Phys. Rev. D 79, 023512 (2009) [arXiv:0811.0176 [hep-ph]]. K. Ishiwata, S. Matsumoto and T. Moroi, “Cosmic-Ray Positron from Superparticle Dark Matter and the PAMELA Anomaly,” arXiv:0811.0250 [hep-ph].
* (21) J. Zhang, X. J. Bi, J. Liu, S. M. Liu, P. F. Yin, Q. Yuan and S. H. Zhu, “Discriminate different scenarios to account for the PAMELA and ATIC data by synchrotron and IC radiation,” arXiv:0812.0522 [astro-ph]; F. Chen, J. M. Cline and A. R. Frey, “A new twist on excited dark matter: implications for INTEGRAL, PAMELA/ATIC/PPB-BETS, DAMA,” arXiv:0901.4327 [hep-ph].
* (22) R. Brandenberger, Y. F. Cai, W. Xue and X. M. Zhang, “Cosmic Ray Positrons from Cosmic Strings,” arXiv:0901.3474 [hep-ph].
* (23) S. Profumo, “Dissecting Pamela (and ATIC) with Occam’s Razor: existing, well-known Pulsars naturally account for the ’anomalous’ Cosmic-Ray Electron and Positron Data,” arXiv:0812.4457 [astro-ph]; J. Hall and D. Hooper, “Distinguishing Between Dark Matter and Pulsar Origins of the ATIC Electron Spectrum With Atmospheric Cherenkov Telescopes,” arXiv:0811.3362 [astro-ph]; D. Hooper, P. Blasi and P. D. Serpico, “Pulsars as the Sources of High Energy Cosmic Ray Positrons,” JCAP 0901, 025 (2009) [arXiv:0810.1527 [astro-ph]]. H. B. Hu, Q. Yuan, B. Wang, C. Fan, J. L. Zhang and X. J. Bi, “On the cosmic electron/positron excesses and the knee of the cosmic rays – a key to the 50 years’ puzzle?,” arXiv:0901.1520 [astro-ph].
* (24) M. Cirelli and A. Strumia, “Minimal Dark Matter predictions and the PAMELA positron excess,” arXiv:0808.3867 [astro-ph]; F. Donato, D. Maurin, P. Brun, T. Delahaye and P. Salati, “Constraints on WIMP Dark Matter from the High Energy PAMELA $\bar{p}/p$ data,” Phys. Rev. Lett. 102, 071301 (2009) [arXiv:0810.5292 [astro-ph]].
* (25) X. J. Bi, P. H. Gu, T. Li and X. Zhang, “ATIC and PAMELA Results on Cosmic e± Excesses and Neutrino Masses,” JHEP 0904, 103 (2009) [arXiv:0901.0176 [hep-ph]].
* (26) P. J. Fox and E. Poppitz, “Leptophilic Dark Matter,” arXiv:0811.0399 [hep-ph].
* (27) R. Allahverdi, B. Dutta, K. Richardson-McDaniel and Y. Santoso, Phys. Rev. D 79, 075005 (2009) [arXiv:0812.2196 [hep-ph]]. S. Khalil, H. S. Lee and E. Ma, “Generalized Lepton Number and Dark Left-Right Gauge Model,” arXiv:0901.0981 [hep-ph].
* (28) R. Jeannerot, “A new mechanism for leptogenesis,” Phys. Rev. Lett. 77, 3292 (1996) [arXiv:hep-ph/9609442].
* (29) L. Bergstrom, G. Bertone, T. Bringmann, J. Edsjo, M. Taoso, “Gamma-ray and Radio Constraints of High Positron Rate Dark Matter Models Annihilating into New Light Particles,” arXiv:0812.3895 [astro-ph].
* (30) R. Essig, N. Sehgal and L. E. Strigari, “Bounds on Cross-sections and Lifetimes for Dark Matter Annihilation and Decay into Charged Leptons from Gamma-ray Observations of Dwarf Galaxies,” arXiv:0902.4750 [hep-ph].
* (31) F. Aharonian et al. [H.E.S.S. Collaboration], Nature 439, 695 (2006) [arXiv:astro-ph/0603021].
* (32) X. J. Bi, M. Z. Li and X. M. Zhang, “Quintessino as dark matter,” Phys. Rev. D 69, 123521 (2004) [arXiv:hep-ph/0308218]; J. A. R. Cembranos, J. L. Feng, A. Rajaraman and F. Takayama, “SuperWIMP solutions to small scale structure problems,” Phys. Rev. Lett. 95, 181301 (2005) [arXiv:hep-ph/0507150]; M. Kaplinghat, “Dark matter from early decays,” Phys. Rev. D 72, 063510 (2005) [arXiv:astro-ph/0507300].
* (33) C. D. Froggatt and H. B. Nielsen, “Hierarchy Of Quark Masses, Cabibbo Angles And CP Violation,” Nucl. Phys. B 147, 277 (1979).
* (34) R. H. Brandenberger, “On the Decay of Cosmic String Loops,” Nucl. Phys. B 293, 812 (1987).
* (35) J. J. Blanco-Pillado and K. D. Olum, “The form of cosmic string cusps,” Phys. Rev. D 59, 063508 (1999) [arXiv:gr-qc/9810005].
* (36) Y. Cui and D. E. Morrissey, “Non-Thermal Dark Matter from Cosmic Strings,” arXiv:0805.1060 [hep-ph].
* (37) T. W. B. Kibble, “Some Implications Of A Cosmological Phase Transition,” Phys. Rept. 67, 183 (1980); T. W. B. Kibble, “Phase Transitions In The Early Universe,” Acta Phys. Polon. B 13, 723 (1982).
* (38) A. Vilenkin and E. P. S. Shellard, “Cosmic strings and other topological defects” (Cambridge University Press, Cambridge, 1994); M. B. Hindmarsh and T. W. B. Kibble, “Cosmic strings,” Rept. Prog. Phys. 58, 477 (1995) [arXiv:hep-ph/9411342]; R. H. Brandenberger, “Topological defects and structure formation,” Int. J. Mod. Phys. A 9, 2117 (1994) [arXiv:astro-ph/9310041].
* (39) R. H. Brandenberger and A. Riotto, “A new mechanism for baryogenesis in low energy supersymmetry breaking models,” Phys. Lett. B 445, 323 (1999) [arXiv:hep-ph/9801448].
* (40) X. J. Bi, Y. P. Kuang and Y. H. An, “Muon anomalous magnetic moment and lepton flavor violation in MSSM,” Eur. Phys. J. C 30, 409 (2003) [arXiv:hep-ph/0211142].
* (41) G. W. Bennett et al. [Muon g-2 Collaboration], “Measurement of the Positive Muon Anomalous Magnetic Moment to 0.7 ppm,” Phys. Rev. Lett. 89, 101804 (2002) [Erratum-ibid. 89, 129903 (2002)] [arXiv:hep-ex/0208001].
* (42) C. Amsler et al., Particle Data Group, Phys. Lett. B 667, 1 (2008).
* (43) S. Coutu et al., Proc. 27th Int. Cosmic Ray Conf. 5, 1687 (2001).
* (44) Q. Yuan and X. J. Bi, “The Galactic positron flux and dark matter substructures,” JCAP 0705, 001 (2007) [arXiv:astro-ph/0611872]; J. Lavalle, Q. Yuan, D. Maurin and X. J. Bi, “Full Calculation of Clumpiness Boost factors for Antimatter Cosmic Rays in the light of $\Lambda$CDM N-body simulation results,” Astron. Astrophys. 479, 427 (2008) [arXiv:0709.3634 [astro-ph]].
* (45) G. L. Bryan and M. L. Norman, “Statistical Properties of X-ray Clusters: Analytic and Numerical Comparisons,” Astrophys. J. 495, 80 (1998) [arXiv:astro-ph/9710107].
* (46) G. D. Starkman and D. N. Spergel, Phys. Rev. Lett. 74, 2623 (1995).
* (47) R. Bernabei et al., “Investigating electron interacting dark matter,” Phys. Rev. D 77, 023506 (2008) [arXiv:0712.0562 [astro-ph]]. R. Bernabei et al. [DAMA Collaboration], “First results from DAMA/LIBRA and the combined results with DAMA/NaI,” Eur. Phys. J. C 56, 333 (2008) [arXiv:0804.2741 [astro-ph]].
|
arxiv-papers
| 2009-05-08T13:51:12 |
2024-09-04T02:49:02.413759
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Xiao-Jun Bi, Robert Brandenberger, Paolo Gondolo, Tianjun Li, Qiang\n Yuan and Xinmin Zhang",
"submitter": "Xiaojun Bi",
"url": "https://arxiv.org/abs/0905.1253"
}
|
0905.1328
|
# Spitzer SAGE Infrared Photometry of
Massive Stars in the Large Magellanic Cloud
A. Z. Bonanos11affiliation: Giacconi Fellow. ,22affiliation: Space Telescope
Science Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA; bonanos,
massa, sewilo, lennon, panagia, lsmith, meixner, kgordon@stsci.edu , D. L.
Massa22affiliation: Space Telescope Science Institute, 3700 San Martin Drive,
Baltimore, MD, 21218, USA; bonanos, massa, sewilo, lennon, panagia, lsmith,
meixner, kgordon@stsci.edu , M. Sewilo22affiliation: Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA; bonanos, massa,
sewilo, lennon, panagia, lsmith, meixner, kgordon@stsci.edu , D. J.
Lennon22affiliation: Space Telescope Science Institute, 3700 San Martin Drive,
Baltimore, MD, 21218, USA; bonanos, massa, sewilo, lennon, panagia, lsmith,
meixner, kgordon@stsci.edu , N. Panagia22affiliation: Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA; bonanos, massa,
sewilo, lennon, panagia, lsmith, meixner, kgordon@stsci.edu ,33affiliation:
INAF/Osservatorio Astrofisico di Catania, Via S.Sofia 78, I-95123 Catania,
Italy; and Supernova Ltd., VGV #131, Northsound Road, Virgin Gorda, British
Virgin Islands. , L. J. Smith22affiliation: Space Telescope Science Institute,
3700 San Martin Drive, Baltimore, MD, 21218, USA; bonanos, massa, sewilo,
lennon, panagia, lsmith, meixner, kgordon@stsci.edu ,
M. Meixner22affiliation: Space Telescope Science Institute, 3700 San Martin
Drive, Baltimore, MD, 21218, USA; bonanos, massa, sewilo, lennon, panagia,
lsmith, meixner, kgordon@stsci.edu , B. L. Babler44affiliation: Department of
Astronomy, 475 North Charter St., University of Wisconsin, Madison, WI 53706,
USA; brian@sal.wisc.edu, s_bracker@hotmail.com, meade@sal.wisc.edu , S.
Bracker44affiliation: Department of Astronomy, 475 North Charter St.,
University of Wisconsin, Madison, WI 53706, USA; brian@sal.wisc.edu,
s_bracker@hotmail.com, meade@sal.wisc.edu , M. R. Meade44affiliation:
Department of Astronomy, 475 North Charter St., University of Wisconsin,
Madison, WI 53706, USA; brian@sal.wisc.edu, s_bracker@hotmail.com,
meade@sal.wisc.edu , K. D. Gordon22affiliation: Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA; bonanos, massa,
sewilo, lennon, panagia, lsmith, meixner, kgordon@stsci.edu , J. L.
Hora55affiliation: Harvard-Smithsonian Center for Astrophysics, 60 Garden St.,
MS 67, Cambridge, MA 02138, USA; jhora@cfa.harvard.edu ,
R. Indebetouw66affiliation: Department of Astronomy, University of Virginia,
PO Box 3818, Charlottesville, VA 22903, USA; remy@virginia.edu , B. A.
Whitney77affiliation: Space Science Institute, 4750 Walnut St., Suite 205,
Boulder, CO 80301, USA; bwhitney@spacescience.org
###### Abstract
We present a catalog of 1750 massive stars in the Large Magellanic Cloud, with
accurate spectral types compiled from the literature, and a photometric
catalog for a subset of 1268 of these stars, with the goal of exploring their
infrared properties. The photometric catalog consists of stars with infrared
counterparts in the Spitzer SAGE survey database, for which we present uniform
photometry from $0.3-24$ $\mu$m in the $UBVIJHK_{s}$+IRAC+MIPS24 bands. The
resulting infrared color–magnitude diagrams illustrate that the supergiant
B[e], red supergiant and luminous blue variable (LBV) stars are among the
brightest infrared point sources in the Large Magellanic Cloud, due to their
intrinsic brightness, and at longer wavelengths, due to dust. We detect
infrared excesses due to free–free emission among $\sim 900$ OB stars, which
correlate with luminosity class. We confirm the presence of dust around 10
supergiant B[e] stars, finding the shape of their spectral energy
distributions (SEDs) to be very similar, in contrast to the variety of SED
shapes among the spectrally variable LBVs. The similar luminosities of B[e]
supergiants ($\log L/L_{\odot}\geq 4$) and the rare, dusty progenitors of the
new class of optical transients (e.g. SN 2008S and NGC 300 OT), plus the fact
that dust is present in both types of objects, suggest a common origin for
them. We find the infrared colors of Wolf-Rayet stars to be independent of
spectral type and their SEDs to be flatter than what models predict. The
results of this study provide the first comprehensive roadmap for interpreting
luminous, massive, resolved stellar populations in nearby galaxies at infrared
wavelengths.
infrared: stars– stars: early-type– stars: Wolf-Rayet– stars: emission-line,
Be– galaxies: individual (LMC)– catalogs
## 1 Introduction
The Spitzer Space Telescope Legacy Survey called “Surveying the Agents of a
Galaxy’s Evolution” (SAGE; Meixner et al., 2006) provides an unprecedented
opportunity to investigate the infrared properties of a large number of
massive stars ($\gtrsim\\!\\!8\,\mbox{M${}_{\odot}$}$111These stars are
massive enough to fuse iron in their cores; they end their lives as core-
collapse supernovae.) in the Large Magellanic Cloud (LMC). The SAGE infrared
point source catalog has so far enabled studies of evolved stars (Blum et al.,
2006; Hora et al., 2008; Srinivasan et al., 2009), young stellar objects
(Whitney et al., 2008), and variables (Vijh et al., 2009); however, the hot
massive star population remains unexplored. The radiation from hot massive
stars at the mid-infrared wavelengths probed by the IRAC (3.6, 4.5, 5.8, 8.0
$\mu$m; Fazio et al., 2004) and MIPS (24, 70, and 160 $\mu$m; Rieke et al.,
2004) cameras onboard Spitzer consists of: thermal blackbody emission modified
by the atmospheric opacity, bound–free and free–free emission
(“Bremsstrahlung”), which depend on the density and velocity structure of the
stellar wind, and excess emission if circumstellar dust (or a cool companion)
exists.
Panagia & Felli (1975) and Wright & Barlow (1975) were the first to calculate
the infrared and radio free–free emission from ionized envelopes around hot
massive stars, as a function of the mass-loss rate (Ṁ), terminal velocity
($v_{\infty}$) of the wind and temperature (T) for the cases of optically thin
and thick winds to explain the radio emission from hot stars undergoing mass-
loss. They found that the flux as a function of frequency ($\nu$) and distance
(d) for an optically thick, spherically symmetric wind at infrared and radio
wavelengths scales as:
$F_{\nu,{\rm thick}}\propto\dot{M}^{4/3}\,v_{\infty}^{-4/3}\,\nu^{0.6}\,d^{-2}\,T^{0.1}$,
while for an optically thin wind:
$F_{\nu,{\rm thin}}\propto\dot{M}^{2}\,v_{o}^{-2}\,R^{-1}\,T^{-1.35}\,\nu^{-2.1}\,B_{\nu}$,
where R is the photospheric radius, $v_{o}$ the initial wind velocity at the
surface of the star, and $B_{\nu}$ is the Planck function. These calculations,
following the shell model of Gehrz et al. (1974), motivated further infrared
and radio observations of Galactic OB stars. The initial studies produced
controversial measurements of infrared excesses (see Sneden et al., 1978;
Castor & Simon, 1983; Abbott et al., 1984), whereas unambiguous excesses were
detected in the radio (e.g. Abbott et al., 1981). Leitherer & Wolf (1984)
detected infrared excesses in a large number of early-type stars and
attributed the previous non-detections to their different treatment of
intrinsic stellar colors and interstellar reddening. Bertout et al. (1985)
determined the velocity law for this large sample, pointing out that excesses
can only be measured reliably in cases of suitably slow (low $v_{\infty}$)
and/or dense (high Ṁ) winds. In principle, mass-loss rates can be determined
from radio observations, which probe the optically thick wind, e.g. as done
for OB-type stars (Abbott et al., 1981) and Wolf-Rayet stars (Barlow et al.,
1981; Abbott et al., 1986). However, Moffat & Robert (1994) presented
observational evidence for clumping in stellar winds, which yields mass-loss
rates that are too high when unaccounted for. While clumping, when not
accounted for, can undoubtedly lead to an overestimation of the mass loss, the
radio observations seem to be least affected by this problem (see e.g., Puls
et al., 2006), possibly because the wind is less clumped at larger distances
from the star. On the other hand, it has been shown that in many cases a
substantial fraction of the radio flux from early-type stars is of non-thermal
origin (Abbott et al., 1984; De Becker, 2007), thus complicating the
interpretation of radio observations.
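The optically thick scaling quoted above can be turned into a simple relative-flux estimate. The following Python sketch only illustrates how $F_{\nu}$ responds to changes in Ṁ, $v_{\infty}$, $\nu$, d and T; the proportionality constant is omitted, so only ratios are meaningful, and the function name is our own.

# Illustrative scaling of the optically thick free-free flux
# (Panagia & Felli 1975; Wright & Barlow 1975). Arbitrary normalization.
def f_nu_thick(mdot, v_inf, nu, d, T):
    """Relative F_nu for an optically thick, spherically symmetric wind."""
    return mdot**(4.0/3.0) * v_inf**(-4.0/3.0) * nu**0.6 * d**-2 * T**0.1

# A wind with ten times higher mass-loss rate, at fixed v_inf, d and T,
# is brighter by 10**(4/3) ~ 21 at any frequency in this regime.
print(f_nu_thick(10.0, 1.0, 1.0, 1.0, 1.0) / f_nu_thick(1.0, 1.0, 1.0, 1.0, 1.0))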
Until now, work on infrared excesses in hot stars has been limited by (i) the
accuracy of ground based observations at wavelengths greater than 5 $\mu$m,
where infrared excess due to free–free or dust emission becomes significant,
and (ii) the systematic errors associated with disentangling reddening and
distances for Galactic stars, which can be substantial. The advent of Spitzer,
therefore, provides an opportunity to readdress and quantify the infrared
excesses in hot massive stars at half-solar metallicity. For the first time, a
large number of stars in the LMC have been observed uniformly out to 24
$\mu$m. The low foreground reddening of this sample of stars, located at the
same distance, circumvents the problems encountered with extincted Galactic
early-type stars, which are observed against a bright infrared background and
distributed at various distances throughout the Galactic plane. With the goal
of characterizing their infrared properties, we have compiled a catalog of
luminous massive stars with known spectral types in the LMC and extracted
infrared photometry for these stars from the SAGE database. Our results not
only serve as a valuable starting point for more detailed work on specific
types of massive stars, but also provide a roadmap for interpreting existing
and future infrared observations of the most luminous stars in nearby
galaxies.
Besides the LMC, Spitzer has imaged the following Local Group galaxies: the
SMC (Bolatto et al., 2007), M31 (Mould et al., 2008), M33 (Verley et al.,
2007; McQuinn et al., 2007; Thompson et al., 2008), WLM (Jackson et al.,
2007a), IC 1613 (Jackson et al., 2007b), NGC 6822 (Cannon et al., 2006), and
the dwarf irregulars: Phoenix, LGS 3, DDO 210, Leo A, Pegasus, and Sextans A
(Boyer et al., 2009). Only a small fraction of the massive stars in these
galaxies have spectroscopic classifications, rendering identifications of
objects based on their infrared colors uncertain. Our results, which are based
on stars with accurate spectral classifications, will help interpret these
photometric observations. An advantage of studying stars in the LMC, rather
than in more distant Local Group galaxies, is that blending problems are
minimized. For example, the angular resolution at the IRAC 3.6 $\mu$m band is
$1.\arcsec 7$, which corresponds to 0.4 pc in the LMC (at a distance of 48.1
kpc; Macri et al., 2006), and 8 pc in M33 (at a distance of 964 kpc; Bonanos
et al., 2006), while typical OB associations have sizes of tens of parsecs
(but half light radii of several parsecs), and young massive clusters can have
sizes smaller than a parsec (Clark et al., 2005).
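The quoted physical scales follow from simple small-angle geometry; a minimal check in Python (the distances are those cited above, and the function name is ours):

# Quick check of the angular-to-physical scale numbers quoted above.
import math

ARCSEC = math.pi / (180.0 * 3600.0)          # radians per arcsecond

def physical_size_pc(theta_arcsec, distance_kpc):
    """Linear size subtended by theta_arcsec at the given distance."""
    return theta_arcsec * ARCSEC * distance_kpc * 1e3   # pc

print(physical_size_pc(1.7, 48.1))   # ~0.4 pc in the LMC
print(physical_size_pc(1.7, 964.0))  # ~8 pc in M33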
The paper is organized as follows: §2 describes our spectroscopic and
photometric catalogs of massive stars in the LMC, §3 presents the resulting
color–magnitude diagrams, §4 the two–color diagrams, and §5 the infrared
excesses detected in OB stars. Sections §6, §7, §8, §9 describe the infrared
properties of Wolf-Rayet stars, luminous blue variables, supergiant B[e]
stars, and red supergiants, respectively, and §10 summarizes our results.
## 2 Catalog of Massive Stars in the LMC
We first compiled a catalog of massive stars with known spectral types in the
LMC from the literature. We then cross-matched the stars in the SAGE database,
after incorporating optical and near-infrared photometry from recent surveys
of the LMC. The resulting photometric catalog was used to study the infrared
properties of the stars. In §2.1 we describe the spectral type catalog
compiled from the literature, in §2.2 the existing optical, near-infrared and
mid-infrared surveys of the LMC that were included in the SAGE database, and
in §2.3 the cross-matching procedure and the resulting photometric catalog.
### 2.1 Spectral Type Catalog
The largest existing catalogs of OB stars in the LMC were compiled by
Sanduleak (1970, 1273 stars) and Rousseau et al. (1978, 1822 stars); however
the accuracies of the coordinates (arcminute precision) and spectral types
(from objective prism spectra) are not sufficient for a comparison with the
SAGE database. Despite the monumental effort of Brian Skiff to update
coordinates of these stars by manually identifying each one from the original
findercharts (see Sanduleak, 2008), improved spectral types require new
spectroscopy and acquiring them is an even more difficult task. We therefore
compiled a new catalog of massive stars from the literature, by targeting OB
stars having both accurate coordinates and spectral classifications, making
some of these available for the first time in electronic format. The
literature search resulted in 1750 entries. We do not claim completeness;
however, we have targeted studies of hot massive stars (defined by their
spectral types) in individual clusters and OB associations, as well as studies
of particular types of massive stars. The largest studies included are: Conti
et al. (1986, new spectral types for 191 OB stars), Fitzpatrick (1988, 1991,
study of supergiants222We have corrected the typographical error in the name
of the B7 Ia+ star Sk $-67^{\circ}$ 143 given by () to the correct name: Sk
$-69^{\circ}$ 143.), Massey et al. (1995, new spectral types for 179 OB
stars), Massey et al. (2000, study of 19 OB associations), the VLT-FLAMES
survey (Evans et al., 2006, 238 stars in N11 and NGC 2004), the studies of the
30 Doradus region by Schild & Testor (1992), Parker (1993) and Walborn &
Blades (1997), and the Wolf-Rayet catalog from Breysacher et al. (1999). Note,
we have omitted stars from the crowded R136 cluster. We have added the 6 known
luminous blue variables (Humphreys & Davidson, 1994), 10 supergiant B[e] stars
(Zickgraf, 2006), 20 Be/X-ray binaries (Liu et al., 2005; Raguzova & Popov,
2005) and early-type eclipsing and spectroscopic binaries. For comparison, we
also included 147 red supergiants from Massey & Olsen (2003), some of which
were originally classified by Humphreys (1979). The completeness of the
catalog depends on the spectral type, e.g. it is $\sim 3\%$ for the unevolved
O stars in our catalog (out of an estimated total of 6100 unevolved stars with
masses $>20\,\mbox{M${}_{\odot}$}$ in the LMC; Massey, 2009), while the Wolf-
Rayet catalog (Breysacher et al., 1999) is thought to be close to complete.
Table 1 presents our catalog of 1750 massive stars, listing: the star name,
coordinates in degrees (J2000.0), the reference and corresponding spectral
classification, along with comments. The names of the stars are taken from the
corresponding reference and are sorted by right ascension. The spectral
classifications in the catalog are typically accurate to one spectral type and
one luminosity class. We retained about a dozen stars from the above studies
with only approximate spectral types (e.g. “early B”). For double entries, we
included the best spectral classification available, usually corresponding to
the most recent reference. We updated coordinates for all stars with names
from the Sanduleak, Brunet, and Parker catalogs from Brian Skiff’s
lists333ftp://ftp.lowell.edu/pub/bas/starcats/. Our catalog contains 427 stars
from the Sanduleak catalog, 81 from the Brunet catalog and 148 from the Parker
catalog.
### 2.2 Optical and Infrared Surveys of the LMC
Several large optical and infrared photometric catalogs of the LMC have
recently become available, enabling us to obtain accurate photometry for its
massive star population in the wavelength region $0.3-8$ $\mu$m and in some
cases up to $24$ $\mu$m. The optical surveys are: the $UBVR$ catalog of Massey
(2002) with 180,000 bright stars in an area covering 14.5 square degrees, the
$UBVI$ Magellanic Clouds Photometric Survey (MCPS; Zaritsky et al., 2004)
including 24 million stars in the central 64 square degrees, and the OGLE III
catalog containing $VI$ photometry of 35 million stars covering 40 square
degrees (Udalski et al., 2008). The angular resolution for each survey is
$\sim 2.\arcsec 6$ for the catalog of Massey (2002), $\sim 1.\arcsec 5$ for
MCPS, and $\sim 1.\arcsec 2$ for OGLE III.
The existing near-infrared photometric catalogs include: the Two Micron All
Sky Survey (2MASS; Skrutskie et al., 2006, extended by 6X2MASS) and the
targeted IRSF survey (Kato et al., 2007), which contains 14.8 million sources
in the central 40 square degrees of the LMC. 2MASS has a pixel scale of
$2.\arcsec 0\times 2.\arcsec 0$, an average seeing of $2.\arcsec 5$ and
limiting magnitudes of $J=15.8$, $H=15.1$ and $K_{s}=14.3$. IRSF has a pixel
scale of $0.\arcsec 45$ pixel-1, average seeing of $1.\arcsec 3$, $1.\arcsec
2$, $1.\arcsec 1$ in the $JHK_{s}$ bands, respectively, and limiting
magnitudes of $J=18.8$, $H=17.8$ and $K_{s}=16.6$. In the mid-infrared, the
Spitzer SAGE survey uniformly imaged the central 7∘ $\times$ 7∘ of the LMC in
the IRAC and MIPS bands on two epochs in 2005, separated by 3 months (Meixner
et al., 2006). The survey has recently produced a combined mosaic catalog of
6.4 million sources. IRAC, with a pixel scale of $1.\arcsec 2$ pixel-1, yields
an angular resolution of $1.\arcsec 7-2.\arcsec 0$ and MIPS at 24 $\mu$m has a
resolution of $6\arcsec$.
Given the variation in the depth, resolution and spatial coverage of these
surveys, we included the available photometry for the massive stars in our
catalog from all the MCPS, OGLE III, 2MASS, IRSF and SAGE catalogs. MCPS has
incorporated the catalog of Massey (2002) for bright stars common to both
catalogs. We note that due to problems with zero-point, calibration and PSF
fitting of bright stars, MCPS photometry of bright stars should be used with
caution. The authors warn that stars “brighter than 13.5 mag in $B$ or $V$ are
prone to substantial photometric uncertainty” (Zaritsky et al., 2004).
Photometry of higher accuracy, particularly in the optical, exists in the
literature for many of the stars in our catalog; however, it was not included
in favor of uniformity.
### 2.3 Photometric Catalog
#### 2.3.1 Matching Procedure
We used a near-final pre-release version of the SAGE catalog to obtain near-
to mid-infrared photometry for our catalog of 1750 massive stars. The SAGE
catalog includes the IRAC mosaic photometry source list, extracted from the
combined epoch 1 and epoch 2 IRAC images and merged together with 2MASS and
the 2MASS 6X Deep Point Source catalog (6X2MASS; Cutri et al., 2004), and the
MIPS 24 $\mu$m epoch 1 catalog. The IRAC and MIPS 24 $\mu$m catalogs were
constructed from the IRAC and MIPS 24 $\mu$m source lists after applying
stringent selection criteria and are highly reliable. These catalogs are
subsets of more complete, but less reliable archives. More details on the data
processing and data products can be found in Meixner et al. (2006) and the
SAGE Data Description
Document444http://irsa.ipac.caltech.edu/data/SPITZER/SAGE/doc/.
We extracted infrared counterparts to the massive stars in our list from the
SAGE IRAC mosaic photometry catalog by performing a conservative neighbor
search with a 1′′ search radius and selecting the closest match for each
source. This procedure yielded mid-infrared counterparts for 1316 of the 1750
sources. The IRAC mosaic photometry catalog, MIPS 24 $\mu$m catalog, IRSF,
MCPS, and OGLE III catalogs were cross-matched in the SAGE database to
provide photometry for sources over a wavelength range from $0.3-24\,\mu$m. We
used this “universal catalog” to extract multi-wavelength photometry for IRAC
sources matched to the massive stars. Specifically, for IRAC sources with one
or more matches in other catalogs (all but 5), we only considered the closest
matches between sources from any two available catalogs (IRAC–MIPS24,
IRAC–MCPS, MIPS24–MCPS, IRAC–IRSF, MIPS24–IRSF, IRAC–OGLE III, MIPS24–OGLE
III), with distances between the matched sources of $\leq$1′′. For example,
for a match between the IRAC, MIPS 24 $\mu$m and MCPS catalogs
(IRAC–MIPS24–MCPS), we applied these constraints to the IRAC–MIPS24,
MIPS24–MCPS, and IRAC–MCPS matches. These stringent criteria were used to
ensure that sources from individual catalogs for each multi-catalog match
refer to the same star; they reduced the source list to 1262 sources. Six
additional massive stars had matches within 1′′ in the MIPS 24 $\mu$m Epoch 1
Catalog, but not in the IRAC Mosaic Photometry Catalog (IRACC). We
supplemented the missing IRAC data: two sources had counterparts in the IRAC
Mosaic Photometry Archive (IRACA) and 4 MIPS 24 $\mu$m sources had
counterparts in the IRAC Epoch 1 Catalog (IRACCEP1). These IRAC sources were
the closest matches to the MIPS 24 $\mu$m sources within 1′′. Five out of 6
sources also have matches within 1′′ in the MCPS catalog. Table 2 shows the
breakdown of the matched stars to the catalogs: e.g. 5 stars were matched only to the IRACC, 88 only to the IRACC+IRSF catalogs, etc. The above requirements yield a final list of 1268 sources (preliminary estimates indicate that matches against the final catalog will not increase the number of matched stars by more than $1\%$). We defer the discussion of misidentifications and blending to §9.
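As an illustration, a minimal sketch of the nearest-neighbor matching and the pairwise $\leq$1′′ criterion described above could look as follows; it uses astropy, and the function names, argument conventions and calling pattern are illustrative assumptions rather than the actual SAGE database machinery.

```python
# A minimal sketch of nearest-neighbour matching with a 1 arcsec search radius,
# plus the stricter pairwise criterion applied to multi-catalog matches.
# Column handling and function names are illustrative, not the SAGE procedure.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def closest_match(ra1_deg, dec1_deg, ra2_deg, dec2_deg, radius_arcsec=1.0):
    """For each source in catalog 1, return the index of the nearest catalog-2
    source and a flag for whether it lies within the search radius."""
    c1 = SkyCoord(ra=np.atleast_1d(ra1_deg) * u.deg,
                  dec=np.atleast_1d(dec1_deg) * u.deg)
    c2 = SkyCoord(ra=np.atleast_1d(ra2_deg) * u.deg,
                  dec=np.atleast_1d(dec2_deg) * u.deg)
    idx, sep2d, _ = c1.match_to_catalog_sky(c2)    # nearest neighbour only
    return idx, sep2d < radius_arcsec * u.arcsec

def pairwise_within(coords, radius_arcsec=1.0):
    """Stricter multi-catalog criterion: every pair of matched counterparts
    (e.g. IRAC-MIPS24, IRAC-MCPS, MIPS24-MCPS) must agree to <= the radius."""
    return all(ci.separation(cj) < radius_arcsec * u.arcsec
               for i, ci in enumerate(coords) for cj in coords[i + 1:])
```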
#### 2.3.2 Catalog Description
Table 3 presents our final matched catalog of 1268 stars, with the star name,
IRAC designation, $UBVIJHK_{s}$+IRAC+MIPS24 photometry and errors, reference
paper, corresponding spectral classification and comments, sorted by
increasing right ascension. Overall, the photometry is presented in order of
shortest to longest wavelength. The 17 columns of photometry are presented in
the following order: $UBVI$ from MCPS, $VI$ from OGLE III, $JHK_{s}$ from
2MASS, $JHK_{s}$ from IRSF, IRAC 3.6, 4.5, 5.8, 8.0 $\mu$m and MIPS 24 $\mu$m.
A column with the associated error follows each measurement, except for the
OGLE III $VI$ photometry. Henceforth, $JHK_{s}$ magnitudes refer to 2MASS
photometry, whereas IRSF photometry is denoted by a subscript, e.g.
$J_{IRSF}$. All magnitudes are calibrated relative to Vega (e.g. see Reach et
al., 2005, for IRAC). In Table 4, we summarize the characteristics of each
filter: effective wavelength $\lambda_{\rm eff}$, zero magnitude flux (in
$Jy$), angular resolution, and the number of detected stars in each filter.
The spatial distribution of our 1268 matched sources is shown in Figure 1,
overlayed onto the 8 $\mu$m image of the LMC. We find that the massive star
population traces the spiral features of the 8 $\mu$m emission, which maps the
surface density of the interstellar medium. Despite the fact that most studies
we included from the literature targeted individual clusters, the spatial
distribution of OB stars in our catalog is fairly uniform, with the exception
of the N11, NGC 2004 and 30 Doradus regions, which have been subjects of
several spectroscopic studies.
## 3 Color–Magnitude Diagrams
We divide the matched stars into 9 categories according to their spectral
types: O stars, early (B0-B2.5) and late (B3-B9) B stars (the latter have
supergiant or giant luminosity classifications, except for 2 B3V stars),
spectral type A, F and G (AFG) supergiants, K and M red supergiants (RSG),
Wolf-Rayet stars (WR), supergiant B[e] stars (sgB[e]), confirmed luminous blue
variables (LBV) and Be/X-ray binaries. In Figures 2 and 3, we present infrared
$[3.6]$ vs. $[3.6]-[4.5]$ and $J-[3.6]$ color–magnitude diagrams (CMDs) for
all the stars in the catalog, identifying stars in these 9 categories. The
conversion to absolute magnitudes in all CMDs is based on a true LMC distance
modulus of 18.41 mag (Macri et al., 2006). The locations of all the SAGE
catalog detections are represented by the grey two dimensional histogram (Hess
diagram). The red giants form the clump at $[3.6]>15$ mag, while the vertical
blue extension contains late-type LMC and foreground stars (free–free emission
causes the OB stars to have redder colors). The asymptotic giant branch (AGB)
stars are located at $[3.6]\sim 10$ mag, $J-[3.6]\sim 2.5$ mag. Immediately,
one notices that the sgB[e], RSG, and LBV stars are among the brightest stars
at 3.6 $\mu$m and occupy distinct regions in the diagrams. Most of the O and B
stars are located along a vertical line at $[3.6]-[4.5]\sim 0$, as expected.
The RSG have “blue” colors because of the depression at [4.5] due to the CO
band (see e.g. Verhoelst et al., 2009). The WR stars are on average 0.3 mag
redder than the OB stars because of their higher wind densities and,
therefore, stronger free–free emission, while the sgB[e] stars are 0.6-0.7 mag
redder and have similar brightnesses to the RSG. The Be/X-ray binaries are
located among the “red” early-B stars. The LBVs have similar colors to the
WRs, but are brighter. The late-B stars are brighter than the early-B stars
because most of the former are luminous supergiants. The brightest of the AFG
supergiants is the (only) G-type supergiant Sk $-69^{\circ}30$. A reddening
vector for $E(B-V)=0.2$ mag is shown in Figure 3 to illustrate the small
effect of reddening, which decreases at longer wavelengths.
In Figures 4 and 5, we present $[8.0]$ and $[24]$ vs. $[8.0]-[24]$ CMDs,
respectively. At these wavelengths, only stars with dust are detected, as the
sensitivity of Spitzer drops sharply, while the flux from hot stellar
photospheres also decreases. Therefore only the dusty WR, LBVs, sgB[e], and
RSG are detected, as well as certain OB stars and late B and AFG supergiants,
which are likely associated with hot spots in the interstellar medium, and may
well be interacting with it. See §6, 7, 8, and 9 for a more detailed
description of their colors. Stars with cool dust are brightest at 24 $\mu$m,
while emission from warmer dust peaks at 8 $\mu$m. In these CMDs, the
locations of the massive stars overlap with those of AGB stars, young stellar
objects and planetary nebulae. Infrared colors alone are not sufficient to
distinguish among these very different types of stars, illustrating the
important diagnostic value of wide-baseline photometry.
## 4 Two–Color Diagrams
In Figures 6, 7 and 8 we present two–color diagrams (TCDs) using the near and
mid-infrared photometry from our matched catalog. We label stars according to
their spectral types and overplot all the SAGE detections in grey as Hess
diagrams. We find that the majority of the OB stars have colors near zero in
all bands, as expected, while there is an almost continuous extension of OB
star “outliers” toward the WR stars, due to thick winds or the presence of
disks. Among these outliers are the Be/X-ray binaries and other emission line
stars. The most conspicuous group of stars in all TCDs is the B[e] supergiants, which have large excesses of $\sim 4$ mag in the $K-[8.0]$
color. The RSG occupy a distinct position in the CMDs, because of their cooler
temperatures, but are found to have a range of mid-infrared colors due to the
amount of dust they contain (see §9). We have overplotted 6 simple theoretical
models to guide the interpretation of the stars in these diagrams. These
models are:
(i) Blackbodies (BBs) at 30,000, 10,000, 5,000, 3,000, 1,700, 1,100, 800 and
500 K. BBs are a good approximation of stellar emission in the infrared for
most stellar atmosphere spectral energy distributions (SEDs), with the
possible exception of red supergiants, whose infrared spectra contain strong molecular absorption bands (e.g. CO).
(ii) A power law model F${}_{\nu}\propto\nu^{\alpha}$, for $\alpha$ ranging
from $-1.5$ to 2 in steps of 0.5. In the infrared, most stellar SEDs are close
to the Rayleigh-Jeans tail of a BB and correspond to $\alpha\simeq 2$. As
clearly illustrated by Panagia & Felli (1975) and Wright & Barlow (1975) (see
also Panagia, 1991), ionized stellar winds display flatter SEDs than BBs. This
is because at low optical depths the gas emission in the wind (mostly
free–free emission) is rather flat, and in the optically thick regime the
increase of the size of the emitting region as a function of wavelength (e.g.
$R\propto\lambda^{0.7}$ for a constant velocity wind) partially compensates
for the intrinsic thermal $\lambda^{-2}$ behaviour. Wind SED indices asymptotically reach $\alpha=0.6$ when both the expansion velocity is constant (e.g. at suitably large radii, where the terminal velocity has been attained) and the optical depth is high (e.g. at long wavelengths).
(iii) A model comprising a 30,000 K blackbody and an ionized wind. This
approximates the case of all early-type stars (essentially OB and WR types)
whose winds add a flatter spectral component that is noticed as an excess
which becomes more conspicuous at longer wavelengths. Calculations were made
following Panagia (1991) prescriptions for a spherically symmetric wind whose
expansion velocity increases with radius like a power law,
$v=v_{o}(r/r_{o})^{\gamma}$, and for a wide range of mass-loss rates. Although
these power laws represent just a zero-order approximation to the actual
velocity structure of the wind (most noticeably, lacking a limiting/terminal
velocity behaviour as suggested by ultraviolet line studies), they are able to
reproduce the power-law behaviour of the wind SEDs in the regime of high
optical depths, i.e. where the wind contribution becomes dominant. For the
sake of simplicity, we show here only the curves corresponding to $\gamma=0.5$
and $\gamma=0$. Since the gas opacity is an increasing function of the
wavelength, for each given mass-loss rate, the wind emission is higher at long
wavelengths. As a result the corresponding trajectories on the various TCDs
are curves that start at the colors appropriate for a pure stellar SED (a
power law $\alpha\simeq 2$), end at the asymptotic slope for very optically
thick winds (e.g., 0.95 for $\gamma=0.5$ and 0.6 for $\gamma=0$), and display
a characteristic upward concave curvature because of a more prompt increase in
the redder colors (i.e. a positive second derivative). This is the only model
not labeled in Figures 7 and 8, due to space constraints.
(iv) A model comprising a 30,000 K blackbody plus emission from an optically
thin ionized nebula. Since the angular resolution of Spitzer is never better
than $1.\arcsec 7$, which at the LMC distance corresponds to more than 0.4 pc,
we have to consider the case of an unresolved HII region surrounding an early-
type star. For these models we adopted the gas emissivity as computed by Natta
& Panagia (1976), which includes emission lines, bound–free and free–free
continuum for both H and He, and is almost constant over the entire wavelength
range $1-20\,\mu$m. As expected, the curves start from the bare atmosphere
point and asymptotically tend to a point corresponding to a spectral slope of
$\alpha\sim 0$, displaying an upward concave curvature.
(v) A model with a 30,000 K blackbody plus cool dust emission (140 K and 200 K
BB) with the amount of dust increasing in logarithmic steps of 0.5. This model
may illustrate the case of an early-type star surrounded by a cool dusty
envelope, perhaps similar to a sgB[e] star (Kastner et al., 2006), or stars
with cold dust in their vicinity. In these calculations the dust opacity is
assumed to be inversely proportional to the wavelength
($\kappa_{dust}\propto\lambda^{-1}$). At these dust temperatures the peak of
the dust emission occurs at about 18 and 26 $\mu$m, and, therefore, the
corresponding dust SED is rapidly (almost exponentially) increasing toward
longer wavelengths, thus displaying a very negative slope ($\alpha\ll 0$).
(vi) A model with a 3,500 K blackbody plus 250 K dust, which was selected to
represent the case of red supergiants (according to the new RSG effective
temperature scale for a M3 I in the LMC, see Levesque et al., 2007). Also in
these calculations the dust opacity is assumed to be inversely proportional to
the wavelength ($\kappa_{dust}\propto\lambda^{-1}$), and the amount of dust is
increased in logarithmic steps of 0.5. As expected, the starting point
corresponds to the bare stellar atmosphere and the ending point is where the
dust emission dominates (the peak of the dust emission SED occurs at about 15
$\mu$m), and the tracks display a typical upward concave curvature.
From these simple models and considerations, it appears that ionized gas
emission produces a SED that declines with wavelength less steeply than a
blackbody, thus possibly causing an excess both in the infrared and in the
radio domains but without reversing the overall declining trend of the SED. It
follows that an observed flux increase toward longer wavelengths necessarily
requires the presence of an additional component that has a suitably low temperature (e.g. less than 500 K for an upturn around 5 $\mu$m) and a suitably high emission to affect, and eventually dominate, the observed SED. These properties correspond to the characteristics of a dusty layer, which is able to absorb mostly optical and ultraviolet photons and reradiate that energy in the infrared.
The locations of WR, LBV, sgB[e] and RSG are discussed in §6, 7, 8 and 9,
respectively.
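As a rough illustration of how the model tracks in Figures 6–8 can be generated, the sketch below computes monochromatic colors for models (i) and (ii), a blackbody and a power-law SED, in the $J-K_{s}$, $K_{s}-[8.0]$ plane of Figure 6. The effective wavelengths and Vega zero points used here are illustrative placeholders (the actual values are those of Table 4), and the published curves were computed with full filter integration rather than this monochromatic approximation.

```python
# Schematic recomputation of model tracks (i) and (ii) in the
# (J - Ks, Ks - [8.0]) plane. Effective wavelengths (micron) and
# zero-magnitude fluxes (Jy) below are illustrative; use Table 4 in practice.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23           # SI constants
LAM_UM = {"J": 1.24, "Ks": 2.16, "[8.0]": 7.87}    # assumed effective wavelengths
ZP_JY = {"J": 1594.0, "Ks": 666.8, "[8.0]": 64.1}  # assumed Vega zero points

def bb_fnu(lam_um, T):
    """Planck B_nu at lam_um (arbitrary normalization; cancels in colors)."""
    nu = C / (lam_um * 1e-6)
    return nu**3 / np.expm1(H * nu / (KB * T))

def powerlaw_fnu(lam_um, alpha):
    """Power-law SED F_nu ~ nu**alpha (alpha ~ 2 mimics a bare hot photosphere)."""
    return (C / (lam_um * 1e-6)) ** alpha

def color(fnu, b1, b2):
    """Vega color m_b1 - m_b2 for a monochromatic SED fnu(lam_um)."""
    return -2.5 * np.log10((fnu(LAM_UM[b1]) / fnu(LAM_UM[b2])) /
                           (ZP_JY[b1] / ZP_JY[b2]))

# Model (i): blackbody track
for T in (30000, 10000, 5000, 3000, 1700, 1100, 800, 500):
    bb = lambda lam, temp=T: bb_fnu(lam, temp)
    print(T, round(color(bb, "J", "Ks"), 2), round(color(bb, "Ks", "[8.0]"), 2))

# Model (ii): power-law track, alpha from -1.5 to 2 in steps of 0.5
for a in np.arange(-1.5, 2.01, 0.5):
    pl = lambda lam, aa=a: powerlaw_fnu(lam, aa)
    print(a, round(color(pl, "J", "Ks"), 2), round(color(pl, "Ks", "[8.0]"), 2))
```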
## 5 Infrared Excesses in OB stars
The primary cause of mid-infrared excess in OB stars is free–free emission
from winds or disks. Since this emission is a monotonically increasing
function of the optical depth that depends on the square of the electron
density, one may find a different mid-infrared emission from stars with
identical mass-loss rates but different wind velocity fields. Therefore, winds
that are slow, clumped (e.g., Blomme et al., 2003) or compressed toward the
equatorial plane to form a disk (Owocki et al., 1994) will have stronger mid-
infrared emission. Given the sensitivity to these parameters, Castor & Simon
(1983) and Abbott et al. (1984) concluded that mid-infrared emission alone
cannot be used to determine the mass-loss rates of OB stars. More recently,
Puls et al. (2006) argued that this very sensitivity makes mid-infrared
emission extremely important in disentangling the nature of clumped winds. We
therefore proceed to examine the mid-infrared excesses of the 354 O and 586
early-B stars in our photometric catalog.
In Figures 9 and 10, we plot $J_{IRSF}$ vs. $J_{IRSF}-[3.6]$, $J_{IRSF}-[5.8]$
and $J_{IRSF}-[8.0]$ colors for O and early-B stars, respectively, indicating
their luminosity classes, binarity and emission line classification properties
by different symbols. We compare the observed colors with colors of plane-
parallel non-LTE TLUSTY stellar atmosphere models (Lanz & Hubeny, 2003, 2007)
of appropriate metallicity and effective temperatures. We have not dereddened
the stars, because this requires $BV$ photometry of higher accuracy than that
provided by the MCPS catalog; however, the low extinction toward the LMC
($\overline{E(B-V)}=0.14$ mag; Nikolaev et al., 2004) makes such corrections
small, as illustrated by the reddening vectors. For reference, TLUSTY models
reddened by $E(B-V)=0.2$ mag are also shown. We clearly detect infrared
excesses despite not having dereddened the stars. At longer wavelengths, the
excess is larger because the flux due to free–free emission for optically thin
winds remains essentially constant with wavelength. Fewer stars are detected
at longer wavelengths primarily because of the decreasing sensitivity of
Spitzer and the overall decline of the SED.
We find that for $J<13$ mag, more luminous OB stars exhibit larger infrared
excesses in all colors due to winds. This can easily be understood considering
that the infrared excess is an increasing function of the infrared optical
depth, which in turn is proportional to the ratio
$\dot{M}^{2}/(v_{exp}^{2}R_{*}^{3})$ (e.g. Panagia, 1991). In early-type stars
the mass-loss rate is found to be proportional to a power $k$ (higher than
unity) of the stellar luminosity, i.e. $\dot{M}\propto L^{k}$ with $k\simeq
1.2-1.6$ (Lamers & Leitherer, 1993; Scuderi et al., 1998). The average
expansion velocity is a function of the effective temperature, ranging from an
initial value at the photosphere $v_{0}\simeq v_{sound}\propto T_{\rm
eff}^{1/2}$ to a terminal value $v_{\infty}\propto T_{\rm eff}^{2}$ (Panagia &
Macchetto, 1982), so that approximately
$v_{exp}\simeq(v_{0}v_{\infty})^{0.5}\propto T_{\rm eff}^{1.25}$. Since the
stellar radius can be expressed as $R\propto L^{1/2}/T_{\rm eff}^{2}$, it
follows that the infrared optical depth in early-type star winds is
proportional to $L^{(2k-1.5)}\times T_{\rm eff}^{3.5}$, i.e. a $\sim 0.9-1.7$
power of the luminosity. This result can straightforwardly account for the
observed increase of the infrared excess as a function of luminosity for O and
B type stars in the LMC. We would like to mention that our findings confirm
and provide a simple explanation for the luminosity effect between hypergiants
and dwarfs noted by Leitherer & Wolf (1984) in their sample of 82 Galactic
OB(A) stars. The brightest stars with the largest excesses shown in Figure 9
are late O supergiants, with large Ṁ and low $v_{\infty}$. The spread in
excesses at any given $J-$band magnitude ($\sim 0.2$ mag at $J-[3.6]$, $\sim
0.4$ mag at $J-[8.0]$) cannot be due to reddening, which would displace all OB
stars in a similar fashion. It is most likely related to the range of
mechanisms that produce free–free emission, the properties of the winds (Ṁ,
$v_{\infty}$, clumping factors) and stellar rotation rates.
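Spelling out the intermediate step of this scaling, using only the relations quoted above,

$\tau_{\rm IR}\propto\frac{\dot{M}^{2}}{v_{exp}^{2}R_{*}^{3}}\propto\frac{L^{2k}}{T_{\rm eff}^{2.5}\,L^{3/2}T_{\rm eff}^{-6}}=L^{2k-1.5}\,T_{\rm eff}^{3.5},$

which for $k\simeq 1.2-1.6$ gives the quoted $\sim 0.9-1.7$ power of the luminosity.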
Stars with $J>13$ mag that exhibit large excesses in Figure 9 are classified
as emission line Oe/On/One, i.e. with evidence of fast rotation or
circumstellar disks. In Figure 10, a larger fraction of stars with $J>13$ mag
are found to have large excesses, reflecting the higher fraction of emission
line stars among B-stars compared to O-stars (Negueruela et al., 2004). The “B
extr” stars (Schild, 1966; Garmany & Humphreys, 1985; Conti et al., 1986) are
found to occupy similar parts of the CMD as the Be(Fe II) stars (notation by
Massey et al., 1995) and Be-Fe stars (notation by Evans et al., 2006) (and
also Be/X-ray binaries), suggesting these are objects of one and the same
nature, with the presence of Fe II lines perhaps due to higher densities in
the circumstellar disks. We note that some Be stars do not exhibit excesses,
while several stars not classified as Be stars have excesses. Variability
(disk dissipation), disk orientation, or insufficient spectral coverage
leading to inaccurate classifications can explain these outliers. Therefore,
infrared CMDs of optically blue selected stars can be used to identify new Be
stars. An alternate explanation for some of the outliers is the new class of
double periodic variables (or DPVs; Mennickent et al., 2003, 2005, 2008). DPVs
are evolved B and A-type giants in an Algol mass-transfer configuration with
circumprimary and circumbinary disks. These disks are thought to be
responsible for the observed long period variability and infrared excess
detected in the near-infrared.
### 5.1 OB star SEDs
Ultimately, the position of a star in a two–color or color–magnitude diagram
is the result of its SED and luminosity, and it is the SED that can help us
interpret its position in terms of its physical properties. Using the $0.3-24$
$\mu$m photometry from Table 3, we created SEDs for the stars by converting
their magnitudes to fluxes (note that both magnitudes and fluxes exist in the SAGE database, but only magnitudes are included in our photometric catalog; Table 4 can be used to convert to fluxes). We used effective wavelengths and calibrations from Bessell et al. (1998) for $UBVI$, Rieke et al. (2008) for the 2MASS $JHK_{s}$ and IRSF, Reach et al. (2005) for the Spitzer IRAC bands, and the MIPS Data Handbook (http://ssc.spitzer.caltech.edu/mips/dh) for the
MIPS 24 $\mu$m band. We compared the observed SEDs with models to assess the
infrared excesses in the OB stars. TLUSTY models for LMC metallicity, a micro-
turbulent velocity of 10 km s$^{-1}$, and the lowest available surface gravity for
each effective temperature $T_{\rm eff}$ were selected to correspond to
supergiant stars. The exact value of $\log g$ has little effect on mid-
infrared fluxes for models with $T_{\rm eff}\geq 25$kK and, while larger, the
effect remains $\lesssim 0.1$ mag in $(J-\lambda)$ (for $\lambda\geq
2.2\,\mu$m) for the cooler models. We did not attempt to fit the SEDs, but
simply overplotted models to provide a benchmark for determining the deviation
of the observed SEDs from a “bare” photosphere. Deviations of OB stars from a
TLUSTY model are due to reddening and free–free emission from their stellar
winds.
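The magnitude-to-flux conversion and the $J$-band normalization used for these figures amount to the following sketch; the zero-magnitude fluxes listed here are illustrative placeholders, and in practice the values of Table 4 should be used.

```python
# A minimal sketch of converting Vega magnitudes from Table 3 into flux
# densities and normalizing at J, as done for the SEDs in Figure 11.
# Zero-magnitude fluxes below are illustrative only (take them from Table 4).
import numpy as np

ZP_JY = {                       # assumed zero-magnitude fluxes (Jy)
    "J": 1594.0, "H": 1024.0, "Ks": 666.8,
    "[3.6]": 280.9, "[4.5]": 179.7, "[5.8]": 115.0, "[8.0]": 64.1,
    "[24]": 7.17,
}

def mag_to_flux_jy(mag, band):
    """Vega magnitude -> flux density in Jy: F = F_0 * 10**(-0.4 * m)."""
    return ZP_JY[band] * 10.0 ** (-0.4 * mag)

def j_normalized_sed(mags):
    """Return band: flux/flux(J) for a dictionary of Vega magnitudes."""
    flux = {b: mag_to_flux_jy(m, b) for b, m in mags.items() if not np.isnan(m)}
    return {b: f / flux["J"] for b, f in flux.items()}
```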
Figure 11 shows representative SEDs for 10 OB stars, normalized by the flux at
the $J-$band. The MCPS, IRSF and SAGE measurements (filled circles), and the
2MASS and OGLE photometry (open circles) are connected by a solid line. TLUSTY
models, similarly normalized, are overplotted as solid lines, while the dotted
lines correspond to the models reddened by $E(B-V)=0.25$ and 0.50 mag (see the
Appendix for details on the treatment of interstellar extinction). In Figure
12, we plot the same SEDs, divided by the unreddened TLUSTY models. These
unitless plots make deviations from models obvious by avoiding the enormous
flux ranges encountered when plotting the SEDs of hot stars in the mid-
infrared. This ease of inspection comes with a price: when plotted against
logarithmic wavelength, equal bins on the abscissa do not correspond to equal
energy intervals. However, in these plots, the effects of reddening (clearly
seen in the optical), infrared excess (detected in most cases) and even
variability between the 2MASS–IRSF and OGLE III–MCPS photometry are clearly
seen.
The O stars shown in the left panels of Figures 11 and 12 include the lightly
reddened, luminous and hot star Sk $-70^{\circ}$ 91 (O2 III(f*)+OB), which only has a weak infrared excess, in spite of having very strong ultraviolet P-Cygni wind lines ($v_{\infty}=3150$ km s$^{-1}$; e.g. Massa et al., 2003). In
such stars, the wind density remains low, creating only weak infrared
emission. In contrast, the O4-5 V((f))pec star Sk $-70^{\circ}$ 60 and the O6
Iaf+ star Sk $-65^{\circ}$ 22 (with $v_{\infty}=1350$ km s$^{-1}$) have strong infrared excesses. The next two stars, Sk $-66^{\circ}$ 110 and BI 192, are lightly reddened O9 I and O9 III stars, respectively, with distinctive
evidence for infrared emission from their winds. These examples demonstrate
that mass-loss rates cannot be derived solely from infrared excesses; an
estimate of the spectral type and luminosity class is also required. We show
examples of B giant and supergiant stars in the right panels of Figures 11 and
12. The B0 Ia star Sk $-68^{\circ}$ 52 is not consistent with the SED of a
reddened 30kK model, but shows evidence for infrared emission from a wind.
Contrast the B0 III star Sk $-67^{\circ}$ 210, which appears to be nearly
unreddened, to the moderately reddened B0 III star ST2–46, exhibiting infrared
excess. Sk $-68^{\circ}$ 8 (B5 Ia+), compared to a 15kK model, similarly shows
an infrared excess. The final SED is of Sk $-66^{\circ}$ 12 (B8 Ia), also
compared to a 15kK model (the coolest TLUSTY model available). While the star
is probably cooler than 15kK, the reduction of the effective temperature by a
few kK cannot account for the strong infrared excess, which is consistent with
free–free emission from a wind. Careful modeling of each star will be
necessary to accurately derive their wind properties.
## 6 Wolf-Rayet stars
Of the 125 WR stars in the catalog of Breysacher et al. (1999, not including
BAT99-106 and BAT99-108 through BAT99-112 in the crowded R136 cluster), 99
yield matches in the SAGE database (we label the B0 I + WN3 star Sk $-69^{\circ}$ 194, Massey et al. 2000, as an early-B star in our plots).
However, BAT99-45 and BAT99-83 are LBVs and BAT99-6 is an O star binary (given
its O3f*+O spectral type; Niemela et al., 2001). Only 3 of the remaining 96 WR
stars were detected at 24 $\mu$m: BAT99-22 (WN9h), BAT99-55 (WN11h), and BAT99-133 (WN11h, a spectroscopic twin to the LBV BAT99-83 in minimum). Our catalog
includes various WR subtypes found in the LMC: early and late WN stars (some
with hydrogen present); early WC stars of WC4 subtype; and one WO3 star. While
most WR stars are thought to be post-main sequence stars with progenitor
masses greater than $25-75\,\mbox{M${}_{\odot}$}$ (depending on the subtype;
Crowther, 2007), there is increasing evidence that the late-type WN stars with
hydrogen in their spectra are hydrogen-burning stars with initial masses above
$40-60\,\mbox{M${}_{\odot}$}$ (the lower initial mass being uncertain; Smith &
Conti, 2008). Studies by Bartzakos et al. (2001), Foellmi et al. (2003), and
Schnurr et al. (2008) found the fraction of spectroscopic binaries among the
WR stars in the LMC to be lower than expected. Given the diverse properties of
objects encompassed under the “WR” classification, one might expect the
position of WR stars on a CMD or TCD to depend on their spectral types and
possibly be influenced by binarity.
We examine the position of the WR stars on the $J$ vs. $J-[3.6]$ CMD (Figure
13) as a function of their spectral type, subdividing them into: WN2-5, WN6-7,
WN8, WN9-11, WC4 and WO3 and labeling the known binaries. On average, the
WN6-7 and WN9-11 stars are more luminous and therefore brighter at $J$, when
compared to their early WN2-5 counterparts. In particular, the late WN9-11
stars containing hydrogen are the brightest at $J$. These are the most
luminous evolved WN stars, thought to be quiescent states of massive LBVs
(Bohannan & Walborn, 1989; Smith et al., 1994). Overall, the WR stars span 6
mag in $J$, and 2 mag in $J-[3.6]$ color, with the brighter stars being
“bluer”, i.e. with steeper SEDs (see below). The brightest star is BAT99-22
(R84 or Sk $-69^{\circ}$ 79) with a WN9h spectral type, which is known to be
blended with a red supergiant (Heydari-Malayeri et al., 1997), while the
reddest star is BAT99-38 (HD 34602 or Sk $-67^{\circ}$ 104), a triple system
(WC4(+O?)+O8I:; Moffat et al., 1990) with an associated ring nebula (Dopita et
al., 1994), which is probably responsible for the infrared excess. Dust is
known to form around some colliding wind binary WC stars (see references in
Crowther, 2007), and was recently detected in the vicinity of WN stars
(Barniske et al., 2008). The 24 $\mu$m detections around the WN11h stars
BAT99-55 and BAT99-133, if confirmed, provide additional examples of WN stars
with associated dust.
In Figure 14, we examine the colors of the WR stars in a $J-K_{s}$ vs.
$K_{s}-[8.0]$ plot, following Hadfield et al. (2007). The location of LMC WR
stars does not agree with the region Hadfield et al. (2007) defined by dashed
lines to represent colors of Galactic WR stars. There are two reasons for
this: 1) the Galactic sample includes many dusty WC9 stars that are
intrinsically reddened (Gehrz & Hackwell, 1974), and 2) line-of-sight
reddening has scattered the Galactic sample into the region these authors have
defined. Metallicity is not expected to affect the colors of WR stars, in
particular WC stars, whose winds consist of processed and therefore enriched
material. The WR stars follow a rather narrow linear trend, independent of spectral type, defined by $0<J-K_{s}<0.7$ mag and $0.4<K_{s}-[8.0]<2.0$ mag,
corresponding to power law spectra with indexes $\alpha=1.5$ to 0.5, i.e.
optically thick winds with modest velocity gradients (e.g. Panagia, 1991).
Single WR stars are expected to have flatter SEDs than binaries, which
typically have main-sequence companions with steeper SEDs. The outliers in
this plot are once again the heavily reddened WC4 triple system BAT99-38 and
the blended star BAT99-22 (R84), whose infrared spectrum is dominated by the
red supergiant. R84 is an outlier in all the CMDs, located among the RSG. On
the $[3.6]-[4.5]$ vs. $[4.5]-[8.0]$ TCD (Figure 7) the WR stars follow a
similar linear trend. We note that the brightest WR stars in $J$ are the
bluest in the TCDs.
In Figure 15, we show SEDs for 7 WR stars with representative subtypes. The
strong C IV and He II emission lines present in the WO3 and WC4 stars (see
Breysacher et al., 1999) are responsible for the “kinks” in the SEDs of these
stars. For 3 of these stars (BAT99-123, BAT99-61, BAT99-17) we show model fits
derived from mainly optical and near-infrared spectroscopy using the
spherical, non-LTE, line-blanketed CMFGEN code of Hillier & Miller (1998). For
BAT99-123 (WO3), we use the model presented in Crowther et al. (2000) which is
derived from line profile fits to ultraviolet and optical spectroscopy. This
model has parameters of $T_{\rm eff}=150$ kK, $\log L/L_{\odot}=5.3$, $v_{\infty}=4,100$ km s$^{-1}$ and $\dot{M}=1\times 10^{-5}\,\mbox{M${}_{\odot}$}$ yr$^{-1}$ for a volume filling factor of 0.1. For BAT99-61 (WC4), we use the model presented by Crowther et al. (2002), which has $T_{\rm eff}=52$ kK, $\log L/L_{\odot}=5.68$, $v_{\infty}=3,200$ km s$^{-1}$ and $\dot{M}=4\times 10^{-5}\,\mbox{M${}_{\odot}$}$ yr$^{-1}$ for a volume filling factor of 0.1. For BAT99-17 (WN4o), we use the model summarized in Crowther (2006), which has $T_{\rm eff}=52$ kK, $\log L/L_{\odot}=5.40$, $v_{\infty}=1,750$ km s$^{-1}$ and $\dot{M}=1.4\times 10^{-5}\,\mbox{M${}_{\odot}$}$ yr$^{-1}$ for a volume filling
factor of 0.1. Overall, the model fits presented in Figure 15 show that the
observed SEDs in the mid-infrared are flatter than the model continua,
suggesting that either the model mass loss rates are too low or the winds are
more highly clumped than the models predict. We caution, however, that this
comparison is only for 3 stars. Generic CMFGEN models (Smith et al., 2002) are
shown for the other 4 WR stars: a WN 85kK model for BAT99-1 (WN3b) and a WN
45kK model for the remaining WNL stars, all of which show excess above that
predicted by the models. It is clearly desirable to investigate these apparent
differences further by performing detailed line profile fits in the mid-
infrared.
Finally, in Table 5 we present photometry for 10 WR stars with counterparts
(within 20$\arcsec$) at 70 $\mu$m and for 1 WR star with a counterpart at 160
$\mu$m. The star name, spectral type, magnitude and associated error at 70
$\mu$m and 160 $\mu$m, followed by the flux and associated error in these
bands, are given. We caution that the angular resolution of MIPS is
$18\arcsec$ and $40\arcsec$, respectively, at these bands. However, we note
that the sgB[e] and LBVs detected also show evidence for dust at the
IRAC+MIPS24 bands.
## 7 Luminous Blue Variables
There are 6 confirmed LBVs (see review by Humphreys & Davidson, 1994) in the
LMC: S Dor, BAT99-83 or R127, R 71, R 110, BAT99-45, and R 85, although
another 5–6 have been suggested as candidates in the literature (see e.g.,
Weis, 2003). The LBVs are not only among the most luminous sources at 3.6
$\mu$m (see Figure 2 and 3), with [3.6]–[4.5] colors similar to AGB stars and
intermediate between RSG and sgB[e] stars, but also at 8.0 $\mu$m (Figure 4)
and 24 $\mu$m (Figure 5). In the TCDs (Figures 6 and 7) the LBVs are located
between the OB and WR stars, with the exception of R71 (Lennon et al., 1993),
which is an extreme outlier, with $K_{s}-[8.0]=3.4$ mag and $[4.5]-[8.0]=2.5$
mag. Its recent brightening of 1.5 mag between 2006 and 2009 (see light curve
from the All Sky Automated Survey, http://www.astrouw.edu.pl/asas/; Pojmanski, 2002) cannot account for its infrared colors. Instead, the emission
from polycyclic aromatic hydrocarbons (PAHs) detected by Voors et al. (1999)
in R71 can explain its “red” colors, since there are strong PAH lines in the
[3.6] and [8.0] bands. All 6 confirmed LBVs were detected in the IRAC bands, 3
were detected in the MIPS 24 $\mu$m band and 2 at 70 $\mu$m: R71 (saturated at
24 $\mu$m) and BAT99-83 or R127 (see Table 5). The R71 detection is consistent
with the 60 $\mu$m value in the IRAS Point Source Catalog and Voors et al.
(1999) suggest that a combination of crystalline silicates and cool dust is responsible for it. The detection of R127 at 70 $\mu$m may imply that a
similar dusty environment exists around this star. We note that BAT99-45 is
distinctively “bluer” than the other two LBVs detected at 8.0 $\mu$m and 24
$\mu$m (see Figures 4, 5, 8).
The SEDs of all 6 confirmed LBVs are shown in Figure 15 and, unlike other
classes of stars, are highly non-uniform. Since LBVs are by definition
variable and the photometry in different bands was obtained at different
epochs, the differences in the 2MASS and IRSF photometry are most likely due
to variability. We only have MCPS photometry for two LBVs, which seems either
variable or of suspect quality. S Dor, the prototype S Doradus variable, and
R127 (in outburst during the SAGE observations; Walborn et al., 2008) are the
brightest LBVs at 24 $\mu$m (Figure 5). Many LBVs are surrounded by small
nebulae (0.2–2 pc) originating from past giant eruptions, which form cool
dust. For example, R127 was found to have a bipolar nebula (Weis, 2003). The
dust associated with its nebula is likely responsible for the bright mid-
infrared magnitudes of R127. R71 exhibits a strong excess in the IRAC bands,
as expected from its position in the TCDs.
Do LBVs represent a homogeneous class of objects? Their infrared SEDs point to
diverse properties, which is not surprising given their spectral variability
during eruptions. On the one hand, the low luminosity LBV R71, with an
estimated progenitor mass of around 40 M⊙ has one of the strongest infrared
excesses of any massive star in the LMC and is well known as a source of dust
emission. S Dor and R127 are more typical massive LBVs, both of which exhibit
an infrared excess, possibly linked to circumstellar material formed during
their recent outbursts. The other three LBVs in the sample: R110, BAT99-45
(also Sk $-69^{\circ}$ 142a, S83) and R85 all show different but coherent
infrared behaviour more consistent with a stellar wind contribution only. It
is possible that dust around these objects is cooler; however these stars were
not detected at the longer MIPS bands. The different shapes of the LBV SEDs
are likely related to the time since the last outburst event and the amount of
dust formed.
## 8 B[e] supergiants and Classical Be stars
B[e] supergiants (sgB[e]) are a distinct class of B-type stars exhibiting
forbidden emission lines, or the “B[e] phenomenon” (Lamers et al., 1998). They
are characterized by strong Balmer emission lines, narrow permitted and
forbidden low-excitation emission lines and a strong mid-infrared excess. A
two-component wind model with a hot polar wind and a slow and cool equatorial
disk wind has been proposed to explain them (Zickgraf et al., 1986); however
models of the disk have so far achieved limited success (Porter, 2003; Kraus
et al., 2007; Zsargó et al., 2008). On the Hertzsprung-Russell (HR) diagram,
sgB[e] stars are located below the Humphreys-Davidson limit (Humphreys &
Davidson, 1979), a few of them being coincident with the location of LBVs.
However, the existence of an evolutionary connection of sgB[e] stars to LBV
stars (transitional phase between Of and WR stars) is unclear. Langer & Heger
(1998) outline three possible ways sgB[e] could form circumstellar disks: from
single massive evolved stars close to critical rotation (Meynet & Maeder,
2006), blue supergiants which have left the red supergiant branch, and from
massive binary mergers (Podsiadlowski et al., 2006).
The sgB[e] stars are the most conspicuous group of stars in all infrared CMDs
and TCDs: they are among the brightest and most reddened stars in the LMC
(also noted by Buchanan et al., 2006). In the LMC, 12 stars have been
classified as sgB[e] stars (including LH 85–10, although it is not among the
11 stars listed in Zickgraf, 2006). The 11 that were included in our catalog
(S 22, S 134, R 126, R 66, R 82, S 12, LH 85-10, S 35, S 59, S 137, S 93) were
all matched in the SAGE database. Their SEDs are shown in Figure 16. The
B0.5[e] stars are sorted by decreasing flux at [3.6], the later-type B[e]
stars by decreasing effective temperature. A 25kK TLUSTY model representing
the underlying photosphere is overplotted. We find that the SED of LH 85–10, a
suspected LBV assigned a B[e] spectral type by Massey et al. (2000), resembles
the SEDs of “classical Be” stars (Negueruela, 2004). Figures 2, 3, 6, and 7
demonstrate that LH 85–10 is much fainter and less reddened than sgB[e] and
LBV stars, and is located near Be stars, and therefore may have been
misclassified as a B[e] star. The SEDs of the remaining 10 sgB[e] stars are
all very similar, with slowly decreasing flux in the optical, an inflexion
point in the near-infrared and a “bump” starting at 2 $\mu$m and peaking near
5 $\mu$m. This peak corresponds to hot dust at $\sim$600 K. The slight change
in the slopes of the SEDs between 8 and 24 $\mu$m from star to star suggests
different contributions from cool dust (150 K). R66, S35 and R126 were also
detected at 70 $\mu$m (Table 5). We find all 10 sgB[e] stars to be extincted
by $E(B-V)\gtrsim 0.5$.
Kastner et al. (2006) presented Spitzer IRS spectroscopy for 2 of these sgB[e] stars, R66 and R126 (note that R126, Sk $-69^{\circ}$ 216, is located in NGC 2050, close to the LBV R127, Sk $-69^{\circ}$ 220, which is in NGC 2055; these two clusters could be a “fossil” two-stage starburst), finding
evidence for massive circumstellar disks, which can simultaneously explain the
(relatively) small amount of reddening in the optical and the observed
infrared excess. However, the mass-loss rate required to sustain such a
massive disk is very high and unlikely to persist during the He burning phase
of the star. These authors argued against a model of episodic mass ejection
based on the similarity of the SEDs of the two sgB[e] they studied. The
similarity of all 10 sgB[e] SEDs makes the scenario of episodic mass ejection
even less likely or, alternatively, suggests a shorter duration of the sgB[e]
phase or even an origin in a binary companion. The small fraction of sgB[e]
stars among B stars might present a clue to their origin. We note that the
rare, dust obscured, luminous progenitors (Prieto et al., 2008; Prieto, 2008)
of the newly discovered class of transients, including SN2008S (Smith et al.,
2009) and NGC 300 OT (Bond et al., 2009; Berger et al., 2009), could be
related to the lower luminosity B[e] supergiants, given that both are likely
post-main-sequence stars (possibly in a post-red supergiant phase) with dust
and similar luminosities. Thompson et al. (2008) presented evidence for a
short duration ($<10^{4}$ yr) of the dust-enshrouded phase prior to eruption.
The infrared emission in the sgB[e] stars is much stronger than in classical
Be stars (shown in Figure 17). It is typically several hundred times stronger
than the photosphere at 8 $\mu$m, and up to 1000 times stronger at 24 $\mu$m,
explaining the extreme positions of sgB[e] stars in the TCDs. In the classical
Be stars, free–free emission from disks can cause the 8 $\mu$m flux to be 10
times that of the underlying photosphere, as is the case for all but one (Sk
$-71^{\circ}$ 13) of the Be stars shown. The emission in classical Be stars
can be highly variable (see review by Porter & Rivinius, 2003), which may
account for the disagreement between the IRSF and 2MASS photometry for NGC
2004-033 (B1.5e), and in some cases, can even dissipate altogether, as for Sk
$-71^{\circ}$ 13 (Be (Fe-Be)). The SEDs of the remaining Be, “B extr”, Be (Fe-
Be) stars are identical, thereby providing additional evidence (see §5) that
these stars are the same objects: the most luminous of the Be stars. We also
present the SED of 1 of the 4 Be/X-ray binaries in our matched catalog,
finding it to have a peculiar shape, similar to the other Be stars, but with
excess emission in the [3.6] and [4.5] bands, possibly due to variability.
## 9 Red Supergiants & Other Stars
The red K and M supergiants are among the brightest stars in the LMC at all
infrared wavelengths. However at 24 $\mu$m they span 5.5 magnitudes (see
Figure 5). This is due to the different types and amounts of dust contributing to the SED at these wavelengths, which in turn reflect different mass-loss rates (see Verhoelst et al., 2009). Josselin et al. (2000) note that the $K-[12]$ color
index is a mass-loss indicator; therefore, we similarly interpret the range in
[24] magnitudes and the 2 magnitude spread in $K_{s}-[8.0]$ color in Figure 6
as a reflection of the range of mass-loss rates. In Figure 8, we notice a
temperature trend: red supergiants clustered at the bottom left (i.e. with
“blue” colors) are predominantly K or early M supergiants, while the bulk of
M0–M4 supergiants are found in a band, roughly following the RSG model
described in §4. The temperature trend correlates with the optical depth of
the dust emission and the amount of dust, which in turn is a function of the
mass-loss. The offset from the blackbody model can be explained by dust;
exceptions to this trend might be due to misclassifications, blends, or
variability. The same temperature trend is seen in Figure 5, with the coolest
RSG being the most luminous at 24 $\mu$m. Our catalog includes LMC 170452,
which is a rare example of a RSG that has changed spectral types. Levesque et
al. (2007) found it to vary from M4.5-M5 I in 2001 and 2004 to M1.5 I in 2005
December. The SAGE observations were also taken in 2005 (July & October),
possibly during the transition. We suggest that supergiants not appearing to
follow the temperature trend in Figure 8 could signify spectral variability
(e.g. the “blue” M2 star [SP77]55-20 and the very red K0 star 165543). The
extremely reddened (at long wavelengths) outlier in Figures 4, 5, and 8 is the
M2-3 supergiant 175549 with $[8]-[24]=4$ mag. The M0 I 138475 has
$K_{s}-[8.0]=3$ mag, making it an outlier in Figure 6. In Figure 3, it is
located with the AGB stars, implying it is an M-type AGB star.
Figure 18 shows representative SEDs of yellow and red supergiants. The SED
peak moves redward with decreasing temperature, as expected. Note that
starting at Sk $-69^{\circ}$ 30 (G5 I), a depression in the continuum due to
the CO band appears at 4.5 $\mu$m, and persists to later types. This
depression is extremely weak or absent in the M1 Ia, 138552. Some of the later
supergiants have a distinct break in their slopes at roughly 4.5 $\mu$m, also
coinciding with the CO band. The SED of LH 31-1002 (F2 I) suggests a hotter effective temperature, inconsistent with its spectral type. Massey et al. (2000) give $B-V=0.4$ mag for this star, in disagreement with the value from MCPS ($B-V=-0.2$ mag), perhaps suggesting an error in its coordinates that led to a match with a nearby hot star. The K7 I star 139027 also appears to
have an excess at the shortest wavelengths; therefore it is a candidate
$\zeta$ Aurigae binary (Wright, 1970), with a hot companion. LMC 170452 has a
peculiar SED (with $V-K=7.2$ mag, instead of $\sim 5$), possibly reflecting
its spectral type change.
Finally, in Figure 19 we present some peculiar SEDs, illustrating examples of
nebular contamination (N11-081), variability (RXJ0544.1-7100), blending
(BAT99-22) or misidentification (Sk $-67^{\circ}$ 29, Sk $-70^{\circ}$ 33) in
our photometric catalog. Nebular contamination and misidentifications are
responsible for the OB outliers in the CMDs and TCDs. Only 3 out of 354 O
stars (N11-028, N11-046, lmc2-703, all main-sequence stars) and 15 out of 586
early-B stars have 24 $\mu$m detections, illustrating that contamination is
less than 3% at this wavelength.
## 10 Summary
This paper presents the first major catalogs of accurate spectral types and
multi-wavelength photometry of massive stars in the LMC. The spectroscopic
catalog contains 1750 massive stars, with accurate positions and spectral
types compiled from the literature, and includes the largest modern
spectroscopic surveys of hot massive stars in the LMC. The photometric catalog
comprises uniform $0.3-24$ $\mu$m photometry in the $UBVIJHK_{s}$+IRAC+MIPS24
bands for a subset of 1268 stars that were matched in the SAGE database. Our
photometric catalog increases by an order of magnitude the number of OB stars with infrared photometry presented in a single study, allowing for a detailed study of
their wind parameters. The low foreground reddening toward the LMC and the
identical distance of the stars in the sample remove degeneracies inherent in
Galactic studies and enable the investigation of infrared excesses, while
minimizing systematic errors due to reddening.
We examine the infrared excesses of the half-solar metallicity OB stars in the
LMC, finding a correlation of the excess with wavelength, as expected, and with luminosity. The dispersion in the relation reflects the range of mechanisms,
and therefore stellar wind parameters, which produce free–free emission in OB
stars. We discuss the positions of OB stars, WR, LBV, sgB[e], classical Be
stars, RSG, AFG supergiants and Be/X-ray binaries on CMDs and TCDs, which
serve as a roadmap for interpreting existing infrared photometry of massive
stars in nearby galaxies. We conclude that sgB[e], RSG and LBVs are among the
most luminous stars in the LMC at all infrared wavelengths. Representative
SEDs are shown for all types of stars, and in certain cases are compared to
model atmospheres to illustrate infrared excesses due to free–free emission
from winds or disks. We confirm the presence of dust around 10 sgB[e] stars
from the shape of their spectral energy distributions, which are presented for
the first time at these mid-infrared wavelengths, and find the SED shapes to
be very similar. The large luminosities and the presence of dust
characterizing both B[e] supergiants ($\log L/L_{\odot}\geq 4$) and the rare,
dusty progenitors of the new class of optical transients (e.g. SN 2008S and
NGC 300 OT), suggest a common origin for these objects. The variety of SED
shapes observed among the LBVs is likely related to the time since the last
outburst event and the amount of dust formed. Finally, we find the
distribution of infrared colors for WR stars to differ from that of Galactic
WR stars, due to the abundance of Galactic dusty WC9 types and the effects of
foreground reddening in the Milky Way.
This work demonstrates the wealth of information contained in the SAGE survey
and aims to motivate more detailed studies of the massive stars in the LMC.
Examples of studies that can be conducted using our catalogs follow: (a)
Candidate RSG, sgB[e], LBV in the LMC and other galaxies can be identified by
selecting them from infrared CMDs, ruling out AGB stars from optical and near-
infrared photometry (if it exists), and confirming these with spectroscopy.
For example, in IC 1613, a comparison of the IRAC CMD in Figure 4 of Jackson
et al. (2007b) to our Figure 2, suggests that the brightest “blue” stars are
candidate RSG or LBV stars. With the optical catalog of Garcia et al. (2009),
their visual colors can be determined and 0.3-8 $\mu$m SEDs can be
constructed. (b) Specific stars or categories of stars can be selected from
the catalog for further study. For example, mass-loss rates can be determined
for OB stars that have spectra from the Far Ultraviolet Spectroscopic Explorer
(e.g. Fullerton et al., 2000), since terminal velocities can be measured from
the ultraviolet lines. (c) The OB stars can be used to determine the LMC
reddening law, given the known spectral types and SED shapes of these stars.
(d) Variability in 2MASS–IRSF photometry can be used to pick out new,
variable, massive stars. The most luminous stars in our catalog ($V\sim 10$
mag) are saturated in the LMC microlensing surveys; however, the two epochs of
near-infrared photometry can be used, for example, to find candidate eclipsing
binaries not included in the OGLE (Wyrzykowski et al., 2003) or MACHO
(Faccioli et al., 2007) lists. (e) Our catalog can be cross correlated with
X-ray catalogs of the LMC to study non-thermal emission processes in massive
stars.
A comparison of the infrared properties of massive stars in the Large vs.
Small Magellanic Cloud (SMC) will be pursued next. Stellar winds scale with
metallicity (Mokiem et al., 2007); therefore, a comparison of infrared
excesses of OB stars in the LMC vs. the SMC will help quantify the effect of
metallicity on OB star winds, Be star properties, sgB[e] stars etc. In the
future, the James Webb Space Telescope will obtain infrared photometry of
massive stars in galaxies out to 10 Mpc; measuring the colors of massive stars as a function of metallicity will therefore be useful for interpreting these data.
We acknowledge the input from members of the Massive Stars & Starbursts
research group at STScI, in particular, Nolan Walborn. We thank Bernie Shao
for incorporating the OGLE III photometry into the database and Paul Crowther
for providing us with his CMFGEN models of WR stars. AZB acknowledges support
from the Riccardo Giacconi Fellowship award of the Space Telescope Science
Institute. The Spitzer SAGE project was supported by NASA/Spitzer grant
1275598 and NASA NAG5-12595. This work is based [in part] on archival data
obtained with the Spitzer Space Telescope, which is operated by the Jet
Propulsion Laboratory, California Institute of Technology under a contract
with NASA. Support for this work was provided by an award issued by
JPL/Caltech. This publication makes use of data products from the Two Micron
All Sky Survey, which is a joint project of the University of Massachusetts
and the Infrared Processing and Analysis Center/California Institute of
Technology, funded by the National Aeronautics and Space Administration and
the National Science Foundation. Facility: Spitzer (IRAC, MIPS)
## Appendix
We adopted a reddening curve composed of a continuous contribution and a
feature which represents silicate absorption at 9.7 $\mu$m. The continuous
portion is based on the formulation used by Fitzpatrick & Massa (2009) to fit
near-infrared extinction curves (which include 2MASS $JHK_{s}$) and is a
generalization of the analytic formula given by Pei (1992). For
$\lambda\geq\lambda_{0}$, it has the form
$k(\lambda-V)_{c}\equiv\frac{E(\lambda-V)}{E(B-V)}=\frac{0.349+2.087R(V)}{1+(\lambda/\lambda_{0})^{\alpha}}-R(V),$ (1)
where $\lambda_{0}=0.507\,\mu$m. The ratio of total to selective extinction,
$R(V)\equiv A(V)/E(B-V)$, and $\alpha$ are free parameters. Near-infrared
observations show that these parameters can vary from one sight line to the
next. In this work, we use a curve defined by $\alpha=2.05$ and $R(V)=3.11$.
These parameters provide a good representation of the mean curve determined by
Fitzpatrick & Massa (2009), and also of extinction data taken from the
literature for diffuse interstellar medium sight lines in the Milky Way. The
latter data sets extend to 24 $\mu$m and include the following: Castor & Simon
(1983), Koornneef (1983), Abbott et al. (1984), Leitherer & Wolf (1984), Rieke
et al. (1989), Wegner (1994), and Nishiyama et al. (2009). Although we cannot
be certain this same form applies to the LMC, it is consistent with the SEDs
of lightly reddened, normal hot stars that are thought to have weak winds (see
§5). In any case, the extinction for most of our objects is small enough that
inaccuracies in the interpretation of reddening should be minimal. For
completeness, we included a Drude profile (see, e.g., Fitzpatrick & Massa,
2009) to describe the 9.7 $\mu$m silicate feature, even though it has little
effect on the Spitzer bands. The adopted profile is given by
$f(\lambda)=\frac{a_{3}x^{2}}{(x^{2}-x_{0}^{2})^{2}+(x\gamma)^{2}},$ (2)
where $x=1/\lambda\;\mu{\mbox{m}}^{-1}$, and the parameters were assigned the
values $x_{0}=1/9.7\;\mu{\mbox{m}}^{-1}$, $\gamma=0.03$ and $a_{3}=3\times
10^{-4}$. Finally, the complete extinction curve is given by
$k(\lambda-V)=k(\lambda-V)_{c}+f(\lambda)$. When we plot the SEDs, we actually
use the form $k(\lambda-J)=k(\lambda-V)-k(J-V)$.
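A direct implementation of Eqs. (1) and (2), and of the $k(\lambda-J)$ form used when plotting the SEDs, might look as follows; the $J$-band effective wavelength in the sketch is an illustrative value, while all other parameters are those quoted above.

```python
# Sketch of the adopted extinction curve: continuous term (Eq. 1) plus the
# Drude profile for the 9.7 micron silicate feature (Eq. 2). Valid for
# lambda >= lambda_0 = 0.507 micron.
import numpy as np

def k_lambda_V(lam_um, R_V=3.11, alpha=2.05, lam0_um=0.507,
               x0=1.0 / 9.7, gamma=0.03, a3=3.0e-4):
    """k(lambda - V) = E(lambda - V)/E(B - V)."""
    lam = np.asarray(lam_um, dtype=float)
    k_cont = (0.349 + 2.087 * R_V) / (1.0 + (lam / lam0_um) ** alpha) - R_V
    x = 1.0 / lam                                        # inverse microns
    drude = a3 * x**2 / ((x**2 - x0**2) ** 2 + (x * gamma) ** 2)
    return k_cont + drude

def k_lambda_J(lam_um, lam_J_um=1.24, **kwargs):
    """k(lambda - J) = k(lambda - V) - k(J - V); lam_J_um is an assumed
    J-band effective wavelength."""
    return k_lambda_V(lam_um, **kwargs) - k_lambda_V(lam_J_um, **kwargs)

# Example: total extinction A(lambda) = [k(lambda - V) + R(V)] * E(B - V),
# here at the IRAC 3.6 micron band for E(B - V) = 0.2 mag.
A_36 = (k_lambda_V(3.6) + 3.11) * 0.2
```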
Figure 1: Spatial distribution of massive stars with IRAC counterparts, which
are found to trace the spiral features in the 8 $\mu$m SAGE mosaic image.
Different symbols denote different spectral types. The spatial distribution of
the OB stars is uniform, with the exception of the N11 (at $\sim 74^{\circ}$,
$-66.5^{\circ}$), NGC 2004 (at $\sim 82^{\circ}$, $-67.3^{\circ}$) and 30
Doradus (at $\sim 85^{\circ}$, $-69^{\circ}$) regions.
Figure 2: [3.6] vs.
$[3.6]-[4.5]$ color magnitude diagram for massive stars with IRAC counterparts
in the SAGE database. The conversion to absolute magnitudes is based on a true
LMC distance modulus of 18.41 mag (Macri et al., 2006). Different symbols
denote different spectral types. The locations of all the SAGE detections are
shown in grey as a Hess diagram. The sgB[e], RSG and LBVs are among the most
luminous stars at 3.6 $\mu$m.
Figure 3: Same as Figure 2, but for the
$[3.6]-J$ vs. [3.6] color magnitude diagram. The longer baseline separates the
populations more clearly. The reddening vector for $E(B-V)=0.2$ mag is shown.
Figure 4: Same as Figure 2, but for the [8.0] vs. $[8.0]-[24]$ color magnitude
diagram. The sgB[e], RSG and LBVs are also among the most luminous stars at 8
$\mu$m.
Figure 5: Same as Figure 2, but for the [24] vs. $[8.0]-[24]$ color
magnitude diagram. The brightest sgB[e], RSG and LBVs are among the most
luminous stars at 24 $\mu$m.
Figure 6: $J-K_{s}$ vs. $K_{s}-[8.0]$ diagram for
massive stars in our catalog. The locations of all the SAGE detections are
shown in grey as a Hess diagram. The solid lines represent models (described
in §4): (i) a BB at various temperatures, as labelled, (ii) a power law model
F${}_{\nu}\propto\nu^{\alpha}$, for $-1.5\leq\alpha\leq 2$, (iii) an OB star
plus an ionized wind (not labelled), (iv) an OB star plus emission from an
optically thin HII region, (v) an OB star plus 140 K dust, (vi) 3,500 K
blackbody plus 250 K dust (dashed line). The sgB[e], RSG and WR stars occupy
distinct regions on this diagram.
Figure 7: Same as Figure 6, but for the
$[3.6]-[4.5]$ vs. $[4.5]-[8.0]$ diagram. The majority of hot massive stars lie
between the blackbody and OB star +wind model, illustrating that a BB is a
good approximation in the infrared.
Figure 8: Same as Figure 6, but for the
$[3.6]-[8.0]$ vs. $[8.0]-[24]$ diagram. The RSG with “bluer” colors have
earlier spectral types, indicating a temperature sequence (see §9).
Figure 9:
Infrared excesses ($J_{IRSF}$ vs. $J_{IRSF}-[3.6]$, $J_{IRSF}-[5.8]$ and
$J_{IRSF}-[8.0]$) for 354 O stars. Supergiants are shown in yellow, giants in
green, main-sequence stars in blue, stars with uncertain classifications
(“other”) in red, binaries with a large circle and Oe stars with an $\times$.
The solid lines correspond to 30kK and 50kK TLUSTY models with $\log g=4.0$. A
reddening vector for $E(B-V)=0.2$ mag is shown, as well as reddened TLUSTY
models by this same amount (dotted lines). The more luminous stars exhibit
larger infrared excesses, which increase with $\lambda$. The $0.2-0.4$ mag
spread in excesses at any $J_{IRSF}-$band magnitude reflects the range in
mass-loss rates, terminal velocities, clumping properties and, perhaps,
rotation rates of O stars.
Figure 10: Same as Figure 9, but for 586 early-B
stars. The solid lines correspond to 20kK, $\log g=3.0$ and 30kK, $\log g=4.0$
TLUSTY models. A reddening vector for $E(B-V)=0.2$ mag is shown, as well as
reddened TLUSTY models by this same amount (dotted lines). The larger number
of early-B stars makes the trends identified among the O stars clearer.
Figure 11: Representative SEDs of O stars $(Left)$ and B stars $(Right)$,
normalized by their $J-$band fluxes (dashed line) and offset for display
purposes. Normalized TLUSTY model atmospheres (30kK, $\log g=3.0$ model for
the O and B0 stars; 15kK, $\log g=1.75$ model for the late B stars) are
overplotted for comparison. The MCPS, IRSF and SAGE measurements are shown as
filled circles; the 2MASS and OGLE measurements as open circles. The dotted
curves correspond to TLUSTY models reddened by $E(B-V)=0.25$ and 0.50 mag.
Infrared excesses are detected in most stars (see §5.1).
Figure 12: Ratios of the SEDs of O stars $(Left)$ and B stars $(Right)$ to the
unreddened TLUSTY models shown in Figure 11, clearly showing the deviations of
the observations from the models and cases of variability, from a comparison
of $VIJHK_{s}$ magnitudes obtained from different sources. Dotted curves
correspond to TLUSTY models reddened by $E(B-V)=0.25$ and 0.50 mag. Infrared
excesses are present in most cases.
Figure 13: $J$ vs. $J-[3.6]$ color
magnitude diagram for Wolf-Rayet stars with infrared detections, labelled
according to spectral type. Black circles denote known binaries. The reddening
vector for $E(B-V)=0.2$ mag is shown. The WN2-5 stars are on average fainter,
while the WN6-7 and WN9-11 are brighter at $J-$band.
Figure 14: $J-K_{s}$ vs. $K_{s}-[8.0]$ diagram for Wolf-Rayet stars with
infrared detections, showing a tight linear correlation independent of
spectral type. The dashed lines correspond to the colors found by Hadfield et
al. (2007) for Galactic WR stars; the solid line shows a power law model
F${}_{\nu}\propto\nu^{\alpha}$, for $0\leq\alpha<2$. The difference in colors
between the LMC WR stars and the Galactic WR stars is a combination of
extinction, as illustrated by the reddening vectors, and the large fraction of
dusty WC9 stars among Galactic WR stars.
Figure 15: Same as Figure 11, but for Wolf-Rayet stars $(Left)$ and Luminous
Blue Variables $(Right)$. For the WR stars, we show model fits for BAT99-123,
BAT99-61 and BAT99-17 and generic CMFGEN models for the rest (see §6). All
show excess above that predicted by the models. The first 3 LBVs show evidence
for dust; BAT99-83 and R71 (saturated at $[24]$) were detected in the MIPS70
band. The different shapes of the LBV SEDs are likely related to the time
since the last outburst event and the amount of dust formed.
Figure 16: Same as Figure 11, but for B[e] supergiants, with a 20kK, $\log
g=2.25$ TLUSTY model overplotted. The sgB[e] have similar SEDs revealing dust,
except for LH 85-10, which has a SED similar to that of a Be star. R126, R66,
and S35 were also detected in the MIPS70 band (see §8).
Figure 17: Same as Figure 11, but for classical Be stars, with a 25kK, $\log
g=2.75$ TLUSTY model overplotted. The similarity of their SEDs implies that
the various spectral types refer to the same type of object. The Be/X-ray
binary RXJ0535.0-6700 exhibits excess emission at [3.6] and [4.5].
Figure 18: Same as Figure 11, but for A, F and G supergiants $(Left)$ and K
and M supergiants $(Right)$. No reference models are shown. The SED of LH
31-1002 (F2 I) implies a hot effective temperature, inconsistent with its
spectral type. The K7 I star 139027 has an excess at the shortest wavelengths,
possibly indicating a hot companion. The M4.5-5 supergiant 170452 is a rare
case of a RSG that changed spectral type (see §9).
Figure 19: Same as Figure 11, but for peculiar SEDs, arising from nebular
contamination (N11-081), variability (RXJ0544.1-7100), blending (BAT99-22) or
misidentification at one (Sk $-67^{\circ}$ 29; at 24 $\mu$m), or all (Sk
$-70^{\circ}$ 33) bands.
## References
* Abbott et al. (1986) Abbott, D. C., Beiging, J. H., Churchwell, E., et al. 1986, ApJ, 303, 239
* Abbott et al. (1981) Abbott, D. C., Bieging, J. H., & Churchwell, E. 1981, ApJ, 250, 645
* Abbott et al. (1984) Abbott, D. C., Telesco, C. M., & Wolff, S. C. 1984, ApJ, 279, 225
* Barlow et al. (1981) Barlow, M. J., Smith, L. J., & Willis, A. J. 1981, MNRAS, 196, 101
* Barniske et al. (2008) Barniske, A., Oskinova, L. M., & Hamann, W.-R. 2008, A&A, 486, 971
* Bartzakos et al. (2001) Bartzakos, P., Moffat, A. F. J., & Niemela, V. S. 2001, MNRAS, 324, 18
* Berger et al. (2009) Berger, E., Soderberg, A. M., Chevalier, R. A., et al. 2009, ApJ, 699, 1850
* Bertout et al. (1985) Bertout, C., Leitherer, C., Stahl, O., & Wolf, B. 1985, A&A, 144, 87
* Bessell et al. (1998) Bessell, M. S., Castelli, F., & Plez, B. 1998, A&A, 333, 231
* Blomme et al. (2003) Blomme, R., van de Steene, G. C., Prinja, R. K., et al. 2003, A&A, 408, 715
* Blum et al. (2006) Blum, R. D., Mould, J. R., Olsen, K. A., et al. 2006, AJ, 132, 2034
* Bohannan & Walborn (1989) Bohannan, B. & Walborn, N. R. 1989, PASP, 101, 520
* Bolatto et al. (2007) Bolatto, A. D., Simon, J. D., Stanimirović, S., et al. 2007, ApJ, 655, 212
* Bonanos (2009) Bonanos, A. Z. 2009, ApJ, 691, 407
* Bonanos et al. (2006) Bonanos, A. Z., Stanek, K. Z., Kudritzki, R. P., et al. 2006, ApJ, 652, 313
* Bond et al. (2009) Bond, H. E., Bedin, L. R., Bonanos, A. Z., et al. 2009, ApJ, 695, L154
* Boyer et al. (2009) Boyer, M. L., Skillman, E. D., van Loon, J. T., et al. 2009, ApJ, 697, 1993
* Breysacher et al. (1999) Breysacher, J., Azzopardi, M., & Testor, G. 1999, A&AS, 137, 117
* Brunet et al. (1975) Brunet, J. P., Imbert, M., Martin, N., et al. 1975, A&AS, 21, 109
* Buchanan et al. (2006) Buchanan, C. L., Kastner, J. H., Forrest, W. J., et al. 2006, AJ, 132, 1890
* Cannon et al. (2006) Cannon, J. M., Walter, F., Armus, L., et al. 2006, ApJ, 652, 1170
* Castor & Simon (1983) Castor, J. I. & Simon, T. 1983, ApJ, 265, 304
* Clark et al. (2005) Clark, J. S., Negueruela, I., Crowther, P. A., et al. 2005, A&A, 434, 949
* Conti et al. (1986) Conti, P. S., Garmany, C. D., & Massey, P. 1986, AJ, 92, 48
* Crowther (2006) Crowther, P. A. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 353, Stellar Evolution at Low Metallicity: Mass Loss, Explosions, Cosmology, ed. H. J. G. L. M. Lamers, N. Langer, T. Nugis, & K. Annuk, 157–+
* Crowther (2007) Crowther, P. A. 2007, ARA&A, 45, 177
* Crowther et al. (2002) Crowther, P. A., Dessart, L., Hillier, D. J., et al. 2002, A&A, 392, 653
* Crowther et al. (2000) Crowther, P. A., Fullerton, A. W., Hillier, D. J., et al. 2000, ApJ, 538, L51
* Cutri et al. (2004) Cutri, R. M. et al. 2004, in Bulletin of the American Astronomical Society, Vol. 36, Bulletin of the American Astronomical Society, 1487–+
* De Becker (2007) De Becker, M. 2007, A&A Rev., 14, 171
* Dopita et al. (1994) Dopita, M. A., Bell, J. F., Chu, Y.-H., et al. 1994, ApJS, 93, 455
* Evans et al. (2006) Evans, C. J., Lennon, D. J., Smartt, S. J., et al. 2006, A&A, 456, 623
* Faccioli et al. (2007) Faccioli, L., Alcock, C., Cook, K., et al. 2007, AJ, 134, 1963
* Fazio et al. (2004) Fazio, G. G., Hora, J. L., Allen, L. E., et al. 2004, ApJS, 154, 10
* Fitzpatrick (1988) Fitzpatrick, E. L. 1988, ApJ, 335, 703
* Fitzpatrick (1991) —. 1991, PASP, 103, 1123
* Fitzpatrick & Massa (2009) Fitzpatrick, E. L. & Massa, D. 2009, ApJ, 699, 1209
* Fitzpatrick et al. (2002) Fitzpatrick, E. L., Ribas, I., Guinan, E. F., et al. 2002, ApJ, 564, 260
* Fitzpatrick et al. (2003) —. 2003, ApJ, 587, 685
* Foellmi et al. (2003) Foellmi, C., Moffat, A. F. J., & Guerrero, M. A. 2003, MNRAS, 338, 1025
* Fullerton et al. (2000) Fullerton, A. W., Crowther, P. A., De Marco, O., et al. 2000, ApJ, 538, L43
* Garcia et al. (2009) Garcia, M., Herrero, A., Vicente, B., et al. 2009, A&A in press (arxiv:0904.4455)
* Garmany & Humphreys (1985) Garmany, C. D. & Humphreys, R. M. 1985, AJ, 90, 2009
* Garmany et al. (1994) Garmany, C. D., Massey, P., & Parker, J. W. 1994, AJ, 108, 1256
* Gehrz & Hackwell (1974) Gehrz, R. D. & Hackwell, J. A. 1974, ApJ, 194, 619
* Gehrz et al. (1974) Gehrz, R. D., Hackwell, J. A., & Jones, T. W. 1974, ApJ, 191, 675
* González et al. (2005) González, J. F., Ostrov, P., Morrell, N., & Minniti, D. 2005, ApJ, 624, 946
* Guinan et al. (1998) Guinan, E. F., Fitzpatrick, E. L., Dewarf, L. E., et al. 1998, ApJ, 509, L21
* Hadfield et al. (2007) Hadfield, L. J., van Dyk, S. D., Morris, P. W., et al. 2007, MNRAS, 376, 248
* Henize (1956) Henize, K. G. 1956, ApJS, 2, 315
* Heydari-Malayeri et al. (1997) Heydari-Malayeri, M., Courbin, F., Rauw, G., et al. 1997, A&A, 326, 143
* Heydari-Malayeri & Melnick (1992) Heydari-Malayeri, M. & Melnick, J. 1992, A&A, 258, L13
* Heydari-Malayeri et al. (2003) Heydari-Malayeri, M., Meynadier, F., & Walborn, N. R. 2003, A&A, 400, 923
* Hillier & Miller (1998) Hillier, D. J. & Miller, D. L. 1998, ApJ, 496, 407
* Hora et al. (2008) Hora, J. L., Cohen, M., Ellis, R. G., et al. 2008, AJ, 135, 726
* Humphreys (1979) Humphreys, R. M. 1979, ApJS, 39, 389
* Humphreys & Davidson (1979) Humphreys, R. M. & Davidson, K. 1979, ApJ, 232, 409
* Humphreys & Davidson (1994) —. 1994, PASP, 106, 1025
* Jackson et al. (2007a) Jackson, D. C., Skillman, E. D., Gehrz, R. D., et al. 2007a, ApJ, 656, 818
* Jackson et al. (2007b) —. 2007b, ApJ, 667, 891
* Jaxon et al. (2001) Jaxon, E. G., Guerrero, M. A., Howk, J. C., et al. 2001, PASP, 113, 1130
* Josselin et al. (2000) Josselin, E., Blommaert, J. A. D. L., Groenewegen, M. A. T., et al. 2000, A&A, 357, 225
* Kastner et al. (2006) Kastner, J. H., Buchanan, C. L., Sargent, B., et al. 2006, ApJ, 638, L29
* Kato et al. (2007) Kato, D., Nagashima, C., Nagayama, T., et al. 2007, PASJ, 59, 615
* Koornneef (1983) Koornneef, J. 1983, A&A, 128, 84
* Kraus et al. (2007) Kraus, M., Borges Fernandes, M., & de Araújo, F. X. 2007, A&A, 463, 627
* Lamers & Leitherer (1993) Lamers, H. J. G. L. M. & Leitherer, C. 1993, ApJ, 412, 771
* Lamers et al. (1998) Lamers, H. J. G. L. M., Zickgraf, F.-J., de Winter, D., et al. 1998, A&A, 340, 117
* Langer & Heger (1998) Langer, N. & Heger, A. 1998, in Astrophysics and Space Science Library, Vol. 233, B[e] stars, ed. A. M. Hubert & C. Jaschek, 235–+
* Lanz & Hubeny (2003) Lanz, T. & Hubeny, I. 2003, ApJS, 146, 417
* Lanz & Hubeny (2007) —. 2007, ApJS, 169, 83
* Leitherer & Wolf (1984) Leitherer, C. & Wolf, B. 1984, A&A, 132, 151
* Lennon et al. (1993) Lennon, D. J., Wobig, D., Kudritzki, R.-P., et al. 1993, Space Science Reviews, 66, 207
* Levesque et al. (2007) Levesque, E. M., Massey, P., Olsen, K. A. G., et al. 2007, ApJ, 667, 202
* Liu et al. (2005) Liu, Q. Z., van Paradijs, J., & van den Heuvel, E. P. J. 2005, A&A, 442, 1135
* Lucke (1972) Lucke, P. B. 1972, PhD thesis, University of Washington
* Macri et al. (2006) Macri, L. M., Stanek, K. Z., Bersier, D., et al. 2006, ApJ, 652, 1133
* Massa et al. (2003) Massa, D., Fullerton, A. W., Sonneborn, G., et al. 2003, ApJ, 586, 996
* Massey (2002) Massey, P. 2002, ApJS, 141, 81
* Massey (2003) —. 2003, ARA&A, 41, 15
* Massey (2009) —. 2009, (arxiv:0903.0155)
* Massey et al. (2004) Massey, P., Bresolin, F., Kudritzki, R. P., et al. 2004, ApJ, 608, 1001
* Massey et al. (1995) Massey, P., Lang, C. C., Degioia-Eastwood, K., et al. 1995, ApJ, 438, 188
* Massey & Olsen (2003) Massey, P. & Olsen, K. A. G. 2003, AJ, 126, 2867
* Massey et al. (2002) Massey, P., Penny, L. R., & Vukovich, J. 2002, ApJ, 565, 982
* Massey et al. (2005) Massey, P., Puls, J., Pauldrach, A. W. A., et al. 2005, ApJ, 627, 477
* Massey et al. (2000) Massey, P., Waterhouse, E., & DeGioia-Eastwood, K. 2000, AJ, 119, 2214
* McQuinn et al. (2007) McQuinn, K. B. W., Woodward, C. E., Willner, S. P., et al. 2007, ApJ, 664, 850
* Meixner et al. (2006) Meixner, M., Gordon, K. D., Indebetouw, R., et al. 2006, AJ, 132, 2268
* Mennickent et al. (2005) Mennickent, R. E., Cidale, L., Díaz, M., et al. 2005, MNRAS, 357, 1219
* Mennickent et al. (2008) Mennickent, R. E., Kołaczkowski, Z., Michalska, G., et al. 2008, MNRAS, 389, 1605
* Mennickent et al. (2003) Mennickent, R. E., Pietrzyński, G., Diaz, M., et al. 2003, A&A, 399, L47
* Meynadier et al. (2005) Meynadier, F., Heydari-Malayeri, M., & Walborn, N. R. 2005, A&A, 436, 117
* Meynet & Maeder (2006) Meynet, G. & Maeder, A. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 355, Stars with the B[e] Phenomenon, ed. M. Kraus & A. S. Miroshnichenko, 27–+
* Moffat et al. (1990) Moffat, A. F. J., Niemela, V. S., & Marraco, H. G. 1990, ApJ, 348, 232
* Moffat & Robert (1994) Moffat, A. F. J. & Robert, C. 1994, ApJ, 421, 310
* Mokiem et al. (2007) Mokiem, M. R., de Koter, A., Vink, J. S., et al. 2007, A&A, 473, 603
* Mould et al. (2008) Mould, J., Barmby, P., Gordon, K., et al. 2008, ApJ, 687, 230
* Natta & Panagia (1976) Natta, A. & Panagia, N. 1976, A&A, 50, 191
* Negueruela (2004) Negueruela, I. 2004, Astronomische Nachrichten, 325, 380
* Negueruela et al. (2004) Negueruela, I., Steele, I. A., & Bernabeu, G. 2004, Astronomische Nachrichten, 325, 749
* Niemela & Bassino (1994) Niemela, V. S. & Bassino, L. P. 1994, ApJ, 437, 332
* Niemela et al. (2001) Niemela, V. S., Seggewiss, W., & Moffat, A. F. J. 2001, A&A, 369, 544
* Nikolaev et al. (2004) Nikolaev, S., Drake, A. J., Keller, S. C., et al. 2004, ApJ, 601, 260
* Nishiyama et al. (2009) Nishiyama, S., Tamura, M., Hatano, H., Kato, D., et al. 2009, ApJ, 696, 1407
* Oey & Massey (1995) Oey, M. S. & Massey, P. 1995, ApJ, 452, 210
* Olsen et al. (2001) Olsen, K. A. G., Kim, S., & Buss, J. F. 2001, AJ, 121, 3075
* Ostrov (2001) Ostrov, P. G. 2001, MNRAS, 321, L25
* Ostrov & Lapasset (2003) Ostrov, P. G. & Lapasset, E. 2003, MNRAS, 338, 141
* Ostrov et al. (2001) Ostrov, P. G., Morrell, N. I., & Lapasset, E. 2001, A&A, 377, 972
* Owocki et al. (1994) Owocki, S. P., Cranmer, S. R., & Blondin, J. M. 1994, ApJ, 424, 887
* Panagia (1991) Panagia, N. 1991, in NATO ASIC Proc. 342: The Physics of Star Formation and Early Stellar Evolution, ed. C. J. Lada & N. D. Kylafis, 565–+
* Panagia & Felli (1975) Panagia, N. & Felli, M. 1975, A&A, 39, 1
* Panagia & Macchetto (1982) Panagia, N. & Macchetto, F. 1982, A&A, 106, 266
* Parker (1993) Parker, J. W. 1993, AJ, 106, 560
* Parker et al. (1992) Parker, J. W., Garmany, C. D., Massey, P., et al. 1992, AJ, 103, 1205
* Parker et al. (2001) Parker, J. W., Zaritsky, D., Stecher, T. P., et al. 2001, AJ, 121, 891
* Pei (1992) Pei, Y. C. 1992, ApJ, 395, 130
* Podsiadlowski et al. (2006) Podsiadlowski, P., Morris, T. S., & Ivanova, N. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 355, Stars with the B[e] Phenomenon, ed. M. Kraus & A. S. Miroshnichenko, 259–+
* Pojmanski (2002) Pojmanski, G. 2002, Acta Astronomica, 52, 397
* Porter (2003) Porter, J. M. 2003, A&A, 398, 631
* Porter & Rivinius (2003) Porter, J. M. & Rivinius, T. 2003, PASP, 115, 1153
* Prieto (2008) Prieto, J. L. 2008, The Astronomer’s Telegram, 1550, 1
* Prieto et al. (2008) Prieto, J. L., Kistler, M. D., Thompson, T. A., et al. 2008, ApJ, 681, L9
* Puls et al. (2006) Puls, J., Markova, N., Scuderi, S., et al. 2006, A&A, 454, 625
* Raguzova & Popov (2005) Raguzova, N. V. & Popov, S. B. 2005, Astronomical and Astrophysical Transactions, 24, 151
* Reach et al. (2005) Reach, W. T., Megeath, S. T., Cohen, M., et al. 2005, PASP, 117, 978
* Ribas et al. (2002) Ribas, I., Fitzpatrick, E. L., Maloney, F. P., et al. 2002, ApJ, 574, 771
* Rieke et al. (2008) Rieke, G. H., Blaylock, M., Decin, L., et al. 2008, AJ, 135, 2245
* Rieke et al. (1989) Rieke, G. H., Rieke, M. J., & Paul, A. E. 1989, ApJ, 336, 752
* Rieke et al. (2004) Rieke, G. H., Young, E. T., Engelbracht, C. W., et al. 2004, ApJS, 154, 25
* Rousseau et al. (1978) Rousseau, J., Martin, N., Prévot, L., et al. 1978, A&AS, 31, 243
* Sanduleak (1970) Sanduleak, N. 1970, Contributions from the Cerro Tololo Inter-American Observatory, 89
* Sanduleak (2008) —. 2008, VizieR Online Data Catalog, 3113, 0
* Schild & Testor (1992) Schild, H. & Testor, G. 1992, A&AS, 92, 729
* Schild (1966) Schild, R. E. 1966, ApJ, 146, 142
* Schnurr et al. (2008) Schnurr, O., Moffat, A. F. J., St-Louis, N., et al. 2008, MNRAS, 389, 806
* Scuderi et al. (1998) Scuderi, S., Panagia, N., Stanghellini, C., et al. 1998, A&A, 332, 251
* Skrutskie et al. (2006) Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163
* Smith et al. (1994) Smith, L. J., Crowther, P. A., & Prinja, R. K. 1994, A&A, 281, 833
* Smith et al. (2002) Smith, L. J., Norris, R. P. F., & Crowther, P. A. 2002, MNRAS, 337, 1309
* Smith & Conti (2008) Smith, N. & Conti, P. S. 2008, ApJ, 679, 1467
* Smith et al. (2009) Smith, N., Ganeshalingam, M., Chornock, R., et al. 2009, ApJ, 697, L49
* Sneden et al. (1978) Sneden, C., Gehrz, R. D., Hackwell, J. A., et al. 1978, ApJ, 223, 168
* Srinivasan et al. (2009) Srinivasan, S., Meixner, M., Leitherer, C., et al. 2009, AJ, 137, 4810
* Testor & Niemela (1998) Testor, G. & Niemela, V. 1998, A&AS, 130, 527
* Thompson et al. (2008) Thompson, T. A., Prieto, J. L., Stanek, K. Z., et al. 2008, in press (arxiv:0809.0510)
* Udalski et al. (2008) Udalski, A., Soszynski, I., Szymanski, M. K., et al. 2008, Acta Astronomica, 58, 89
* Verhoelst et al. (2009) Verhoelst, T., van der Zypen, N., Hony, S., et al. 2009, A&A, 498, 127
* Verley et al. (2007) Verley, S., Hunt, L. K., Corbelli, E., et al. 2007, A&A, 476, 1161
* Vijh et al. (2009) Vijh, U. P., Meixner, M., Babler, B., et al. 2009, AJ, 137, 3139
* Voors et al. (1999) Voors, R. H. M., Waters, L. B. F. M., Morris, P. W., et al. 1999, A&A, 341, L67
* Walborn (1977) Walborn, N. R. 1977, ApJ, 215, 53
* Walborn & Blades (1997) Walborn, N. R. & Blades, J. C. 1997, ApJS, 112, 457
* Walborn et al. (2002) Walborn, N. R., Fullerton, A. W., Crowther, P. A., et al. 2002, ApJS, 141, 443
* Walborn et al. (2003) Walborn, N. R., Howarth, I. D., Herrero, A., & Lennon, D. J. 2003, ApJ, 588, 1025
* Walborn et al. (2004) Walborn, N. R., Morrell, N. I., Howarth, I. D., et al. 2004, ApJ, 608, 1028
* Walborn et al. (2008) Walborn, N. R., Stahl, O., Gamen, R. C., et al. 2008, ApJ, 683, L33
* Wegner (1994) Wegner, W. 1994, MNRAS, 270, 229
* Weis (2003) Weis, K. 2003, A&A, 408, 205
* Westerlund (1961) Westerlund, B. 1961, Uppsala Astronomical Observatory Annals, 5, 1
* Whitney et al. (2008) Whitney, B. A., Sewilo, M., Indebetouw, R., et al. 2008, AJ, 136, 18
* Williams et al. (2008) Williams, S. J., Gies, D. R., Henry, T. J., et al. 2008, ApJ, 682, 492
* Wright & Barlow (1975) Wright, A. E. & Barlow, M. J. 1975, MNRAS, 170, 41
* Wright (1970) Wright, K. O. 1970, Vistas in Astronomy, 12, 147
* Wyrzykowski et al. (2003) Wyrzykowski, L., Udalski, A., Kubiak, M., et al. 2003, Acta Astronomica, 53, 1
* Zaritsky et al. (2004) Zaritsky, D., Harris, J., Thompson, I. B., et al. 2004, AJ, 128, 1606
* Zickgraf (2006) Zickgraf, F.-J. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 355, Stars with the B[e] Phenomenon, ed. M. Kraus & A. S. Miroshnichenko, 135–+
* Zickgraf et al. (1986) Zickgraf, F.-J., Wolf, B., Leitherer, C., et al. 1986, A&A, 163, 119
* Zsargó et al. (2008) Zsargó, J., Hillier, D. J., & Georgiev, L. N. 2008, A&A, 478, 543
Table 1: Catalog of Spectral Types for 1750 LMC Massive Stars
Star Namea | RA (J2000) (deg) | Dec (J2000) (deg) | Referenceb | Classification & Comments
---|---|---|---|---
BAT99$-$1 | 71.3843765 | $-$70.2530289 | F03 | WN3b Sk $-$70 1
Sk $-$69 1 | 71.3865433 | $-$69.4670029 | C86 | B1 III
Sk $-$67 1 | 71.4227524 | $-$67.5705261 | C86 | B extr
Sk $-$67 2 | 71.7685394 | $-$67.1147461 | F88 | B1 Ia+
Sk $-$70 1a | 71.8649979 | $-$70.5626984 | C86 | O9 II
Sk $-$69 8 | 72.3293762 | $-$69.4318314 | F88 | B5 Ia
BAT99$-$2 | 72.4009000 | $-$69.3484000 | F03 | WN2b(h)
Sk $-$69 9 | 72.4642105 | $-$69.2011948 | J01 | O6.5 III
aafootnotetext: Star designations: Breysacher et al. (BAT99; 1999), Sanduleak
(Sk; 1970), Westerlund (W; 1961), Brunet et al. (BI; 1975), Lucke (LH; 1972),
Henize (S; 1956)
bbfootnotetext: Reference key: (B01) Bartzakos et al. (2001), (B09) Bonanos
(2009), (C86) Conti et al. (1986), (E06) Evans et al. (2006), (F88)
Fitzpatrick (1988, 1991), (F02) Fitzpatrick et al. (2002), (F03a) Fitzpatrick
et al. (2003), (F03) Foellmi et al. (2003), (G94) Garmany et al. (1994), (G05)
González et al. (2005), (G98) Guinan et al. (1998), (H92) Heydari-Malayeri &
Melnick (1992), (H03) Heydari-Malayeri et al. (2003), (H79) Humphreys (1979),
(H94) Humphreys & Davidson (1994), (J01) Jaxon et al. (2001), (L05) Liu et al.
(2005), (M95) Massey et al. (1995), (M00) Massey et al. (2000), (M02) Massey
(2002), (M02b) Massey et al. (2002), (M03) Massey (2003), (M04) Massey et al.
(2004), (M05) Massey et al. (2005), (M09) Massey (2009), (Me05) Meynadier et
al. (2005), (N94) Niemela & Bassino (1994), (N01) Niemela et al. (2001), (O95)
Oey & Massey (1995), (O01) Olsen et al. (2001), (O01b) Ostrov et al. (2001),
(O01c) Ostrov (2001), (O03) Ostrov & Lapasset (2003), (P01) Parker et al.
(2001), (P92) Parker et al. (1992), (P93) Parker (1993), (R02) Ribas et al.
(2002), (R05) Raguzova & Popov (2005), (S92) Schild & Testor (1992), (S08)
Schnurr et al. (2008), (T98) Testor & Niemela (1998), (W77) Walborn (1977),
(W97) Walborn & Blades (1997), (W02) Walborn et al. (2002), (W03) Walborn et
al. (2003), (W04) Walborn et al. (2004), (W08) Williams et al. (2008), (Z06)
Zickgraf (2006).
Note. — Table 1 is available in its entirety in the electronic version of the
Journal. A portion is shown here for guidance regarding its form and content.
Table 2: Statistics for the 1268 Matched Stars
Catalogs Matched | # Stars
---|---
IRACC | 5
IRACC+IRSF | 88
IRACC+IRSF+MCPS | 601
IRACC+IRSF+MCPS+MIPS24 | 122
IRACC+IRSF+MCPS+MIPS24+OGLE | 9
IRACC+IRSF+MCPS+OGLE | 364
IRACC+IRSF+MIPS24 | 21
IRACC+IRSF+OGLE | 18
IRACC+MCPS | 33
IRACC+MCPS+MIPS24 | 1
IRACA+MIPS24 | 1
IRACA+MCPS+MIPS24 | 1
IRACCEP1+MCPS+MIPS24 | 4
Table 3: 0.3-24 $\mu$m Photometry of 1268 Massive Stars in the LMC
Star Namea | IRAC Designation | RA(J2000) | Dec(J2000) | $U$ | $\sigma_{U}$ | $B$ | $\sigma_{B}$ | … | Ref.b | Classification & Comments
---|---|---|---|---|---|---|---|---|---|---
BAT99$-$1 | J044532.25$-$701510.8 | 71.3843765 | $-$70.2530289 | 15.001 | 0.037 | 15.647 | 0.029 | … | F03 | WN3b Sk $-$70 1
Sk $-$69 1 | J044532.76$-$692801.2 | 71.3865433 | $-$69.4670029 | 12.343 | 0.01 | 13.364 | 0.01 | … | C86 | B1 III
Sk $-$67 1 | J044541.44$-$673413.9 | 71.4227524 | $-$67.5705261 | 12.829 | 0.038 | 13.72 | 0.025 | … | C86 | B extr
Sk $-$67 2 | J044704.44$-$670653.1 | 71.7685394 | $-$67.1147461 | 10.351 | 0.104 | 11.219 | 0.095 | … | F88 | B1 Ia+
Sk $-$70 1a | J044727.58$-$703345.6 | 71.8649979 | $-$70.5626984 | 12.693 | 0.01 | 13.654 | 0.01 | … | C86 | O9 II
Sk $-$69 8 | J044919.03$-$692554.6 | 72.3293762 | $-$69.4318314 | 10.838 | 0.06 | 11.714 | 0.078 | … | F88 | B5 Ia
BAT99$-$2 | J044936.24$-$692054.8 | 72.4009 | $-$69.3484 | 15.633 | 0.038 | 16.495 | 0.03 | … | F03 | WN2b(h)
aafootnotetext: Star designations: Breysacher et al. (BAT99; 1999), Sanduleak
(Sk; 1970), Westerlund (W; 1961), Brunet et al. (BI; 1975), Lucke (LH; 1972),
Henize (S; 1956)
bbfootnotetext: Reference key: (B01) Bartzakos et al. (2001), (B09) Bonanos
(2009), (C86) Conti et al. (1986), (E06) Evans et al. (2006), (F88)
Fitzpatrick (1988, 1991), (F02) Fitzpatrick et al. (2002), (F03a) Fitzpatrick
et al. (2003), (F03) Foellmi et al. (2003), (G94) Garmany et al. (1994), (G05)
González et al. (2005), (G98) Guinan et al. (1998), (H92) Heydari-Malayeri &
Melnick (1992), (H03) Heydari-Malayeri et al. (2003), (H79) Humphreys (1979),
(H94) Humphreys & Davidson (1994), (J01) Jaxon et al. (2001), (L05) Liu et al.
(2005), (M95) Massey et al. (1995), (M00) Massey et al. (2000), (M02) Massey
(2002), (M02b) Massey et al. (2002), (M03) Massey (2003), (M04) Massey et al.
(2004), (M05) Massey et al. (2005), (M09) Massey (2009), (Me05) Meynadier et
al. (2005), (N94) Niemela & Bassino (1994), (N01) Niemela et al. (2001), (O95)
Oey & Massey (1995), (O01) Olsen et al. (2001), (O01b) Ostrov et al. (2001),
(O01c) Ostrov (2001), (O03) Ostrov & Lapasset (2003), (P01) Parker et al.
(2001), (P92) Parker et al. (1992), (P93) Parker (1993), (R02) Ribas et al.
(2002), (R05) Raguzova & Popov (2005), (S92) Schild & Testor (1992), (S08)
Schnurr et al. (2008), (T98) Testor & Niemela (1998), (W77) Walborn (1977),
(W97) Walborn & Blades (1997), (W02) Walborn et al. (2002), (W03) Walborn et
al. (2003), (W04) Walborn et al. (2004), (W08) Williams et al. (2008), (Z06)
Zickgraf (2006).
Note. — Table 3 is available in its entirety in the electronic version of the
Journal. A portion is shown here for guidance regarding its form and content.
Table 4: Filter & Detection Characteristics
Filter | $\lambda_{\rm eff}$ ($\mu$m) | Zero mag flux (Jy) | Resolution ($\arcsec$) | # stars detected
---|---|---|---|---
$U$ | 0.36 | 1790 | 1.5/2.6 | 1121
$B$ | 0.44 | 4063 | 1.5/2.6 | 1136
$V$ | 0.555 | 3636 | 1.5/2.6 | 1136
$I$ | 0.79 | 2416 | 1.5/2.6 | 1026
$V_{OGLE}$ | 0.555 | 3636 | 1.2 | 391
$I_{OGLE}$ | 0.79 | 2416 | 1.2 | 391
$J$ | 1.235 | 1594 | 2.5 | 1203
$H$ | 1.662 | 1024 | 2.5 | 1218
$K_{s}$ | 2.159 | 666.7 | 2.5 | 1184
$J_{IRSF}$ | 1.235 | 1594 | 1.3 | 1122
$H_{IRSF}$ | 1.662 | 1024 | 1.3 | 1089
$Ks_{IRSF}$ | 2.159 | 666.7 | 1.3 | 1077
$[3.6]$ | 3.55 | 280.9 | 1.7 | 1260
$[4.5]$ | 4.493 | 179.7 | 1.7 | 1234
$[5.8]$ | 5.731 | 115.0 | 1.9 | 950
$[8.0]$ | 7.872 | 64.13 | 2 | 577
$[24]$ | 23.68 | 7.15 | 6 | 159
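The zero-magnitude fluxes in Table 4 fix the conversion between catalog magnitudes and flux densities through the standard relation $F_{\nu}=F_{0}\,10^{-m/2.5}$. The following minimal Python sketch illustrates that conversion; the zero points are copied from Table 4, and the example magnitude is hypothetical (for illustration only, not a value from the catalog).

```python
# Convert a catalog magnitude to a flux density in Jy using the Table 4
# zero points, via the standard relation F_nu = F_0 * 10**(-m / 2.5).
ZERO_POINT_JY = {
    "U": 1790.0, "B": 4063.0, "V": 3636.0, "I": 2416.0,
    "J": 1594.0, "H": 1024.0, "Ks": 666.7,
    "[3.6]": 280.9, "[4.5]": 179.7, "[5.8]": 115.0, "[8.0]": 64.13, "[24]": 7.15,
}

def mag_to_flux_jy(band: str, mag: float) -> float:
    """Return the flux density in Jy corresponding to a magnitude in the given band."""
    return ZERO_POINT_JY[band] * 10.0 ** (-mag / 2.5)

# Hypothetical example magnitude, for illustration only:
print(mag_to_flux_jy("[3.6]", 12.0))   # ~ 4.4e-3 Jy
```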
Table 5: MIPS70+160 Photometry
Star Name | Spectral Type | mag70 (mag) | $\sigma_{mag70}$ (mag) | Flux70 (mJy) | $\sigma_{Flux70}$ (mJy) | mag160 (mag) | $\sigma_{mag160}$ (mag) | Flux160 (mJy) | $\sigma_{Flux160}$ (mJy)
---|---|---|---|---|---|---|---|---|---
BAT99-8 | WC4 | 1.379 | 0.02796 | 218.4 | 5.623 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-22 | WN9h (+RSG) | 2.327 | 0.04657 | 91.24 | 3.911 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-32 | WN6(h) | 1.4 | 0.03326 | 214.4 | 6.564 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-37 | WN3o | 1.503 | 0.02847 | 194.8 | 5.107 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-38 | WC4+$[$O8I:$]$ | -0.8254 | 0.01853 | 1664 | 28.4 | -2.412 | 0.1344 | 1476 | 181.8
BAT99-53 | WC4 (+OB) | 1.386 | 0.03978 | 217.1 | 7.952 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-55 | WN11h | 0.6481 | 0.03829 | 428.3 | 15.1 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-85 | WC4 (+OB) | -0.4681 | 0.03374 | 1197 | 37.19 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-123 | WO3 | 1.782 | 0.04259 | 150.7 | 5.909 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
BAT99-133 | WN11h | 0.3187 | 0.01856 | 580.1 | 9.914 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
R66 | B0.5[e] | 0.4201 | 0.01875 | 528.4 | 9.126 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
S35 | B1[e] Iab | 2.104 | 0.05634 | 112 | 5.806 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
R126 | B0.5[e] | 0.993 | 0.03794 | 311.7 | 10.89 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
R71 | LBV | -0.9245 | 0.01292 | 1823 | 21.7 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
R127 | LBV | -0.1097 | 0.03093 | 860.7 | 24.51 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
# A modular type formula for Euler infinite product
$(1-x)(1-xq)(1-xq^{2})(1-xq^{3})\cdots$
Changgui ZHANG (Laboratoire P. Painlevé, CNRS UMR 8524, UFR de
Mathématiques, Université des Sciences et Technologies de Lille, Cité
scientifique, 59655 Villeneuve d’Ascq cedex, France. E-mail:
zhang@math.univ-lille1.fr)
MSC 2000 Subject Classifications: 11F20, 11F27, 30D05, 33E05, 05A30.
Keywords: Modular elliptic functions, Jacobi $\theta$-function, Dedekind
$\eta$-function, Lambert series, $q$-series.
(May 7, 2009)
###### Abstract
The main goal of this paper is to give a modular type representation for the
infinite product $(1-x)(1-xq)(1-xq^{2})(1-xq^{3})\cdots$. It is shown that
this representation essentially contains the well-known modular formulae for
Dedekind’s eta function, the Jacobi theta function, and certain Lambert
series. Thus a new and unified approach is outlined for the study of
elliptic and modular functions and related series.
###### Contents
1. 1 Modular type expansion of $(x;q)_{\infty}$
1. 1.1 Main Theorem
2. 1.2 Variants of Theorem 1.1
3. 1.3 Almost modular term $M$
4. 1.4 Perturbation term $P$
5. 1.5 Remainder term in Stirling’s formula
6. 1.6 Modular type expansion of $(x;q)_{\infty}$
2. 2 Dedekind $\eta$-function, Jacobi $\theta$-function and Lambert series
1. 2.1 Dedekind $\eta$-function
2. 2.2 Modular relation on Jacobi theta function
3. 2.3 Another proof of Theta modular equation
4. 2.4 Generalized Lambert series $L_{1}$ and $L_{2}$
5. 2.5 Classical Lambert series
6. 2.6 Some remarks when $q$ tends toward one
3. 3 Proof of Theorem 1.1
1. 3.1 Some preparatory formulas
2. 3.2 First part of $R_{N}^{(2)}$
3. 3.3 Intermediate part in $R_{N}^{(2)}$
4. 3.4 Singular integral as limit part of $R_{N}^{(2)}$
5. 3.5 End of the proof of Theorem 1.1
## Introduction
Studies of $q$-series and related topics continue to multiply, not only within
traditional themes but also in more recent branches such as quantum physics
and random matrix theory. A first non-trivial example of a $q$-series is the
infinite product $(1-q)(1-q^{2})(1-q^{3})(1-q^{4})\cdots$, considered by Euler
[10, Chap. XVI] and revisited by many of his successors, most intensively by
Hardy and Ramanujan [13, p. 238-241; p. 276-309; p. 310-321]. Beautiful
formulae abound, and the motivations are varied: the theory of elliptic and
modular functions, number and partition theory, orthogonal polynomials, and so
on. Regarding this remarkable history, one may think of Euler’s pentagonal
number theorem [4, p. 30], Jacobi’s triple product identity [5, §10.4, p.
496-501], and Dedekind’s modular eta function [20, (44), p. 154], [1, Chapter
3], to quote only a few important masterpieces.
However, the infinite product $(1-x)(1-xq)(1-xq^{2})(1-xq^{3})\cdots$, which
already appears as the initial model in the same work of Euler [10, Chap.
XVI], has received less attention, although it plays a remarkable role in all
of the above-mentioned subjects: several constants of elliptic integral
theory, Gauss’ binomial formula [5, §10.2, p. 487-491], generalized Lambert
series, and so on. Indeed, the situation quickly becomes more complicated and,
most importantly, the modular relation does not hold for generic values of
$x$.
In the present paper, we point out how, up to an explicit part, the function
defined by the product $(1-x)(1-xq)(1-xq^{2})(1-xq^{3})\cdots$ can be regarded
as modular. This non-modular part can be represented by a divergent but
Borel-summable power series in the variable $\log q$ near zero, that is, as
$q$ tends toward the unit value. This result, the subject of Theorem 1.1,
gives rise to new and unified approaches to the Jacobi theta function, Lambert
series, and related objects.
The paper is organized as follows. Section 1 is devoted to interpreting the
terms appearing in Theorem 1.1, stated in §1.1. First, in §1.2, we give two
equivalent formulations of Theorem 1.1, one of which will be used in the
complex plane in §1.6. In §1.3 and §1.4, we observe that the modular relation
remains almost valid, but that a perturbation term appears. In §1.5, we deal
with the remainder term of the Stirling asymptotic formula for the
$\Gamma$-function. Theorem 1.9, given in §1.6, is another equivalent version
of Theorem 1.1 and will be used in Section 2, as it is formulated in terms of
complex variables. Relation (45) shows that the above-mentioned non-modular
part can be expressed as the quotient of two Barnes double Gamma functions.
In Section 2, we explain how to use Theorem 1.1 to obtain the classical
modular formulae for the eta and theta functions. In §2.1, it is simply
observed that the so-called non-modular part vanishes identically; in the
theta function case (§2.2), two non-modular parts have opposite signs and
cancel each other out. In §2.3, a second proof of the $\theta$-modular
equation is given from the point of view of $q$-difference equations. In §2.4,
we consider the first order derivatives of
$(1-x)(1-xq)(1-xq^{2})(1-xq^{3})\cdots$ and obtain results for two families of
$q$-series, including the Lambert series as special cases, which are treated
in §2.5. In §2.6, we give some remarks about the limit behavior as $q$ tends
to one through real values.
In Section 3, we give a complete proof of our main theorem. To do this, we
need several elementary but somewhat technical calculations, which are
formulated as a series of lemmas.
We are interested in the analytical theory of differential, difference and
$q$-difference equations, _à la Birkhoff_ [8]; see [9], [18], [27], [28],
[30]. Elliptic functions and one-variable modular functions can be seen as
solutions of certain particular difference or $q$-difference equations; in
this spirit, we give a proof of the theta function modular equation in §2.3.
We believe that a good understanding of the structure of singularities, that
is, Stokes analysis [17] together with other geometric tools, often yields a
much deeper comprehension of certain magical formulas or, so to speak, of some
of Ramanujan’s dreams.
Let us mention that in [14], Theorem 1.1 is applied to the study of the
Jackson Gamma function.
Finally, the author would like to thank his friends and colleagues Anne Duval
and Jacques Sauloy for their numerous valuable suggestions and remarks.
## 1 Modular type expansion of $(x;q)_{\infty}$
Let $q$ and $x$ be complex numbers; if $|q|<1$, we let
$(x;q)_{\infty}=\prod_{n=0}^{\infty}(1-xq^{n})\,.$
In §1.1 and §1.2, we suppose that $q\in(0,1)$ and $x\in(0,1)$, so that the
infinite product $(x;q)_{\infty}$ converges to a value in $(0,1)$; one can
therefore take the logarithm of this function. From §1.3 onward, we work with
complex variables. In §1.6, a modular type expansion for $(x;q)_{\infty}$ is
given in the complex plane. As usual, $\log$ stands for the principal branch
of the logarithmic function over its Riemann surface, denoted by
$\tilde{\mathbb{C}}^{*}$; at the same time, the cut plane
${\mathbb{C}}\setminus(-\infty,0]$ is identified with a part of
$\tilde{\mathbb{C}}^{*}$.
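As a side remark, the product can be evaluated numerically by simple truncation, since for $|q|<1$ its factors tend to $1$ geometrically. The following minimal sketch (Python with the mpmath library; the truncation level and sample values are arbitrary choices) computes $(x;q)_{\infty}$ and its logarithm, the quantity expanded in Theorem 1.1 below.

```python
from mpmath import mp

mp.dps = 30  # working precision in decimal digits

def qpochhammer(x, q, N=400):
    """Truncated Euler product (x; q)_oo = prod_{n=0}^{N-1} (1 - x*q**n).

    For |q| < 1 the factors tend to 1 geometrically, so a moderate
    truncation already gives many correct digits.
    """
    prod, qn = mp.mpf(1), mp.mpf(1)
    for _ in range(N):
        prod *= 1 - x * qn
        qn *= q
    return prod

# Example with q, x in (0, 1), the range assumed in Sections 1.1-1.2:
q, x = mp.mpf("0.5"), mp.mpf("0.3")
print(qpochhammer(x, q))            # (0.3; 0.5)_oo ~ 0.510...
print(mp.log(qpochhammer(x, q)))    # its logarithm, the quantity studied below
```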
### 1.1 Main Theorem
The main result of our paper is the following statement.
###### Theorem 1.1.
Let $q=e^{-2\pi\alpha}$, $x=e^{-2\pi(1+\xi)\alpha}$ and suppose $\alpha>0$ and
$\xi>-1$. The following relation holds:
$\displaystyle\log(x;q)_{\infty}=$
$\displaystyle-\frac{\pi}{12\alpha}+\log\frac{\sqrt{2\pi}}{\Gamma(\xi+1)}+\frac{\pi}{12}\,\alpha-\bigl{(}\xi+\frac{1}{2}\bigr{)}\log\frac{1-e^{-2\pi\xi\alpha}}{\xi}$
(1) $\displaystyle+\int_{0}^{\xi}\bigl{(}\frac{2\pi\alpha t}{e^{2\pi\alpha
t}-1}-1\bigr{)}\,dt+M(\alpha,\xi),$ (2)
where
$M(\alpha,\xi)=-\sum_{n=1}^{\infty}\frac{\cos
2n\pi\xi}{n(e^{2n\pi/\alpha}-1)}-\frac{2}{\pi}\,{\cal
PV}\int_{0}^{\infty}\sum_{n=1}^{\infty}\frac{\sin 2n\xi\pi t}{n(e^{2n\pi
t/\alpha}-1)}\,\frac{dt}{1-t^{2}}\,.$ (3)
In the above, $\displaystyle{\cal PV}\int$ stands for the principal value of a
singular integral in Cauchy’s sense; see [25, §6.23, p. 117] or the
corresponding definition recalled in §3.4. We leave the proof to Section 3.
Before extending the main theorem to the complex plane (§1.6), we first give
some equivalent statements.
### 1.2 Variants of Theorem 1.1
Throughout all the paper, we let
$B(t)=\frac{1}{e^{2\pi t}-1}-\frac{1}{2\pi t}+\frac{1}{2}\,;$ (4)
therefore, Theorem 1.1 can be stated as follows.
###### Theorem 1.2.
Let $q$, $x$, $\alpha$, $\xi$ and $M(\alpha,\xi)$ be as given in Theorem 1.1.
Then the following relation holds:
$\displaystyle\log(x;q)_{\infty}=$
$\displaystyle-\frac{\pi}{12\alpha}-\bigl{(}\xi+\frac{1}{2}\bigr{)}\,\log
2\pi\alpha+\log\frac{\sqrt{2\pi}}{\Gamma(\xi+1)}+\frac{\pi}{2}\,(\xi+1)\,\xi\alpha$
(5)
$\displaystyle+\frac{\pi}{12}\,\alpha+2\pi\alpha\int_{0}^{\xi}\bigl{(}t-\xi-\frac{1}{2}\bigr{)}\,B(\alpha
t)\,dt+M(\alpha,\xi),$ (6)
where $B$ is defined in (4).
###### Proof.
It suffices to notice the following elementary integral: for any real numbers
$\lambda$ and $\mu$,
$\int_{0}^{\lambda}B(\mu
t)dt=\frac{1}{2\pi\mu}\,\log\frac{1-e^{-2\pi\lambda\mu}}{2\pi\lambda\mu}+\frac{\lambda}{2}\,.$
(7)
∎
As usual, let ${\rm Li}_{2}$ denote the dilogarithm function; recall that ${\rm
Li}_{2}$ can be defined as follows [5, (2.6.1-2), p. 102]:
${\rm
Li}_{2}(x)=-\int_{0}^{x}\log(1-t)\,\frac{dt}{t}=\sum_{n=0}^{\infty}\frac{x^{n+1}}{(n+1)^{2}}\,.$
(8)
###### Theorem 1.3.
The following relation holds for any $q\in(0,1)$ and $x\in(0,1)$:
$\displaystyle\log(x;q)_{\infty}=$ $\displaystyle\frac{1}{\log q}\,{{\rm
Li}_{2}(x)}+\log\sqrt{1-x}-\frac{\log q}{24}$ (9)
$\displaystyle-\int_{0}^{\infty}B(-\frac{\log
q}{2\pi}\,t)\,x^{t}\,\frac{dt}{t}+M(-\frac{\log q}{2\pi}\,,\log_{q}x)\,,$ (10)
where $B$ denotes the function given by (4).
###### Proof.
By the first Binet integral representation stated in [5, Theorem 1.6.3 (i), p.
28] for $\log\Gamma$, Theorem 1.1 can be formulated as follows:
$\displaystyle\log(x;q)_{\infty}=$
$\displaystyle-\frac{\pi}{12\,\alpha}\,+\frac{\pi}{12}\,\alpha-
I_{\Gamma}(\xi)-\frac{1}{2}\,\log(1-e^{-2\pi\xi\alpha}\bigr{)}$ (11)
$\displaystyle-\int_{0}^{\xi}\log(1-e^{-2\pi\alpha t})\,dt+M(\alpha,\xi),$
(12)
where $\xi=\log_{q}(x/q)$ and $I_{\Gamma}$ denotes the corresponding Binet
integral, expressed in terms of the function $B$ defined by (4):
$I_{\Gamma}(\xi)=\log\Gamma(\xi+1)-\bigl{(}\xi+\frac{1}{2}\bigr{)}\,\log\xi+\xi-\log\sqrt{2\pi}=\int_{0}^{\infty}B(t)\,e^{-2\pi\,\xi
t}\,\frac{dt}{t}\,.$ (13)
Write $\log(x/q;q)_{\infty}=\log(1-x/q)+\log(x;q)_{\infty}$; then, in (11),
write $e^{-2\pi\alpha}$ as $q$ and substitute $x$ for
$x/q=e^{-2\pi\xi\alpha}=q^{\xi}$. We arrive at once at the following expression:
$\displaystyle\log(x;q)_{\infty}=$ $\displaystyle\frac{\pi^{2}}{6\,\log
q}-\frac{\log
q}{24}+\log\sqrt{1-x}-\int_{0}^{\log_{q}x}\log\bigl{(}1-q^{t}\bigr{)}\,dt$
$\displaystyle-\int_{0}^{\infty}B(-\frac{\log
q}{2\pi}\,t)\,x^{t}\,\frac{dt}{t}+M(-\frac{\log q}{2\pi}\,,\log_{q}x)\,,$
from which, using (8), we easily deduce the expected formula (9), for ${\rm
Li}_{2}(1)=\displaystyle\frac{\pi^{2}}{6}$. ∎
### 1.3 Almost modular term $M$
We shall write the singular integral part in (3) by means of contour
integration in the complex plane, as explained in [25, §6.23, p. 117]. Fix a
real $r\in(0,1)$ and let $\ell_{r}^{-}$ (resp. $\ell_{r}^{+}$) denote the path
that goes along the positive real axis from the origin $t=0$ to infinity,
passing from $t=1-r$ to $t=1+r$ along the half circle below (resp. above) its
center point $t=1$. Define $P^{\mp}(\alpha,\xi)$ as follows:
$P^{\mp}(\alpha,\xi):=-\frac{2}{\pi}\,\int_{\ell_{r}^{\mp}}\sum_{n=1}^{\infty}\frac{\sin
2n\xi\pi t}{n(e^{2n\pi t/\alpha}-1)}\,\frac{dt}{1-t^{2}}\,,$ (14)
where $\alpha>0$ and where $\xi$ may be an arbitrary real number.
Observe that the integral on the right hand side of (14) is independent of the
choice of $r\in(0,1)$, so that we leave out the parameter $r$ from $P^{\mp}$.
Moreover, the principal value of the singular integral considered in (3) is
merely the average of $P^{+}$ and $P^{-}$, that is to say:
$P^{-}(\alpha,\xi)+P^{+}(\alpha,\xi)=-\frac{4}{\pi}\,{\cal
PV}\int_{0}^{\infty}\sum_{n=1}^{\infty}\frac{\sin 2n\xi\pi t}{n(e^{2n\pi
t/\alpha}-1)}\,\frac{dt}{1-t^{2}}\,.$ (15)
By the residue theorem, we find:
$P^{-}(\alpha,\xi)-P^{+}(\alpha,\xi)={2i}\,\sum_{n=1}^{\infty}\frac{\sin
2n\xi\pi}{n(e^{2n\pi/\alpha}-1)}\,,$ (16)
from which we arrive at the following expression:
$M(\alpha,\xi)=P^{-}(\alpha,\xi)-\sum_{n=1}^{\infty}\frac{e^{2n\pi\xi
i}}{n(e^{2n\pi/\alpha}-1)}\,.$ (17)
###### Theorem 1.4.
Let $M$ be as in Theorem 1.1 and let $P^{-}$ be as in (14). For any
$\xi\in{\mathbb{R}}$ and $\alpha>0$, the following relation holds:
$M(\alpha,\xi)=\log(e^{2\pi\xi
i-2\pi/\alpha};e^{-2\pi/\alpha})_{\infty}+P^{-}(\alpha,\xi)\,,$ (18)
where $\log$ denotes the principal branch of the logarithmic function over its
Riemann surface.
###### Proof.
Relation (18) follows directly from (17). Indeed, in the last series of (17),
one can expand each fraction $(e^{2n\pi/\alpha}-1)^{-1}$ as a power series in
$e^{-2n\pi/\alpha}$ and then interchange the order of summation in the
resulting double series, which is justified by absolute convergence. ∎
Consequently, the term $M$ appearing in Theorem 1.1 can be considered an
_almost modular term_ of $\log(x;q)_{\infty}$; the correction term $P^{-}$
given by (18) will be called the disruptive factor or perturbation term.
### 1.4 Perturbation term $P$
In view of the classical relation
$\cot\frac{t}{2}=\frac{2}{t}-\sum_{n=1}^{\infty}\frac{4t}{4\pi^{2}n^{2}-t^{2}}\,,$
(19)
from (14) one can obtain the following expression:
$P^{-}(\alpha,\xi)=\int_{\ell_{r}^{-*}}\frac{\sin\xi
t}{e^{t/\alpha}-1}\,\big{(}\cot\frac{t}{2}-\frac{2}{t}\bigr{)}\,\frac{dt}{t}\,.$
(20)
In the last integral (20), $r\in(0,1)$ and
$\ell^{-*}_{r}=(0,1-r)\cup\Bigl{(}\cup_{n\geq
1}\bigl{(}C_{n,r}^{-}\cup(n+r,n+1-r)\bigr{)}\Bigr{)}\,,$
where for any positive integer $n$, $C_{n,r}^{-}$ denotes the half circle
passing from $n-r$ to $n+r$ by the right hand side.
One may replace the integration path $\ell^{-*}_{r}$ by any half line from the
origin to infinity that does not meet the real axis. In view of the complex
extension considered in §1.6, let us first introduce the following modified
complex version of $P^{-}$: for any $d\in(-\pi,0)$, let
$P^{d}(\tau,\nu)=\int_{0}^{\infty
e^{id}}\frac{\sin\frac{\nu}{\tau}\,t}{e^{it/\tau}-1}\,\big{(}\cot\frac{t}{2}-\frac{2}{t}\bigr{)}\,\frac{dt}{t}\,,$
(21)
the path of integration being the half line from the origin to infinity
with argument $d$.
From now on, letting $\tilde{\mathbb{C}}^{*}$ denote the Riemann surface
of the logarithm, we define
$S(a,b):=\\{z\in\tilde{\mathbb{C}}^{*}:\arg z\in(a,b)\\}$ (22)
for any pair of real numbers $a<b$; note that the Poincaré half-plane
${\mathcal{H}}$ will be identified with $S(0,\pi)$, while the cut plane
${\mathbb{C}}\setminus(-\infty,0]$ will be viewed as the subset
$S(-\pi,\pi)\subset\tilde{\mathbb{C}}^{*}$.
###### Lemma 1.5.
The family of functions $\\{P^{d}\\}_{d\in(-\pi,0)}$ given by (21) gives rise
to an analytic function on the domain
$\Omega_{-}:=S(-\pi\,,\pi)\times\Bigl{(}{\mathbb{C}}\setminus\bigl{(}(-\infty,-1]\cup[1,\infty)\bigr{)}\Bigr{)}\subset{\mathbb{C}}^{2}\,.$
(23)
Moreover, if we denote this function by $P_{-}(\tau,\nu)$, then the following
relation holds for all $\alpha>0$ and $\xi\in{\mathbb{R}}$:
$P_{-}(\alpha i,\xi\alpha i)=P^{-}(\alpha,\xi)\,.$ (24)
###### Proof.
Let $B$ be as in (4); from the relation
$\cot\frac{t}{2}-\frac{2}{t}={2i}\,B(\frac{it}{2\pi})\,,$ (25)
it follows that the function $P^{d}$ given by (21) is well defined and
analytic at $(\tau,\nu)=(\tau_{0},\nu_{0})$ whenever the corresponding
integral converges absolutely, that is, when the following condition is
satisfied:
$\bigl{|}\Re(\frac{e^{id}}{\tau_{0}}\,\nu_{0}i)\bigr{|}<\Re(\frac{e^{id}}{\tau_{0}}\,i)\,.$
Therefore, $P^{d}$ is analytic over the domain $\Omega^{d}$ if we set
$\displaystyle\Omega^{d}=\cup_{\sigma\in(0,\pi)}\bigl{(}0,\infty
e^{i(d+\sigma)}\bigr{)}\times\\{\nu\in{\mathbb{C}}:\bigl{|}\Im(\nu\,e^{-i\sigma})\bigr{|}<\sin\sigma\\}\,.$
(26)
Thus we get the analyticity domain $\Omega_{-}$ and also relation (24) by
using the standard argument of analytic continuation. ∎
Let us be more precise about the continuation procedure used above, which is
essentially a radial continuation. In fact, for any pair of directions
$d_{1}$, $d_{2}\in(-\pi,0)$, say $d_{1}<d_{2}$, the common domain
$\Omega^{d_{1}}\cap\Omega^{d_{2}}$ contains a (product) disk
$D(\tau_{0};r)\times D(0;r)$ for a certain $\tau_{0}\in S(d_{2},d_{1}+\pi)$ and
some radius $r>0$, and every point of both $\Omega^{d_{1}}$ and
$\Omega^{d_{2}}$ can be joined to this disk almost radially.
On the other hand, if we take the arguments $d\in(0,\pi)$ instead of
$d\in(-\pi,0)$ in (21), we obtain an analytic function, say $P_{+}$,
defined over
$\Omega_{+}:=S(0\,,2\pi)\times\Bigl{(}{\mathbb{C}}\setminus\bigl{(}(-\infty,-1]\cup[1,\infty)\bigr{)}\Bigr{)}$
and such that, for all $\alpha>0$ and $\xi\in{\mathbb{R}}$:
$P_{+}(\alpha i,\xi\alpha i)=P^{+}(\alpha,\xi)\,.$ (27)
Therefore, the Stokes relation (16) can be extended in the following manner.
###### Theorem 1.6.
For any $\tau\in{\mathcal{H}}$, the relation
$P_{-}(\tau,\nu)-P_{+}(\tau,\nu)={2i}\,\sum_{n=1}^{\infty}\frac{\sin\frac{2n\nu\pi}{\tau}}{n(e^{2n\pi
i/\tau}-1)}$ (28)
holds provided that $|\Im(\nu/\tau)|<-\Im(1/\tau)$.
###### Proof.
In view of (24) and (27), one may observe that the expected relation (28)
really reduces to (16) when $\tau=\alpha i$, $\nu=\xi\alpha i$, $\alpha>0$ and
$\xi\in{\mathbb{R}}$. Thus one can get (28) by an analytic continuation
argument. Another way to arrive at the result is to use the residue theorem
directly. ∎
Using (25), one can write (21) as follows:
$P^{d}(\tau,\nu)=2i\,\int_{0}^{\infty
e^{id}}\bigl{(}B(\frac{t}{\tau}\,i)-\frac{\tau i}{2\pi
t}-\frac{1}{2}\bigr{)}\,B(it)\,\sin\frac{2\pi\nu t}{\tau}\,\frac{dt}{t}\,,$
where $B$ denotes the odd function given by (4). We expect that this expression
contains some _modular_ information about the perturbation term!
### 1.5 Remainder term in Stirling’s formula
Let us consider the integral term involving the function $B$ in formula (9) of
Theorem 1.3, which is, up to sign, the remainder term $I_{\Gamma}$ appearing
in Stirling’s formula; see (13). We therefore introduce the following family
of associated functions: for any $d\not\equiv\frac{\pi}{2}\bmod\pi$, define
$g^{d}(z)=-\int_{0}^{\infty e^{id}}B(t)\,e^{-2\pi zt}\,\frac{dt}{t}\,.$ (29)
It is obvious that $g^{d}$ is analytic over the half plane
$S(-\frac{\pi}{2}-d,\frac{\pi}{2}-d)$, where $S(a,b)$ is in the sense of (22).
By usual analytic continuation, each of the families of functions
$\\{g^{d}\\}_{d\in(-\frac{\pi}{2},\frac{\pi}{2})}$ and
$\\{g^{d}\\}_{d\in(\frac{\pi}{2},\frac{3\pi}{2})}$ will give rise to a
function that we denote by $g^{+}$ and $g^{-}$, respectively; that is, $g^{+}$
is defined and analytic over the domain $S(-\pi,\pi)$, while $g^{-}$ is defined
and analytic over $S(-2\pi,0)$. Since $B(-t)=-B(t)$, it follows that
$g^{+}(z)=-g^{-}(e^{-\pi i}\,z)$ (30)
for any $z\in S(-\pi,\pi)$. Moreover, if $z\in S(-\pi,0)$, one can choose a
small $\epsilon>0$ such that $g^{\pm}(z)=g^{d}(z)$, $d={\pi/2\mp\epsilon}$; by
applying the residues theorem to the following contour integral
$\bigl{(}\int_{0}^{\infty e^{i({\pi/2+\epsilon})}}-\int_{0}^{\infty
e^{i({\pi/2-\epsilon})}}\bigr{)}\,B(t)\,e^{-2\pi zt}\,\frac{dt}{t}\,,$
we find:
$\displaystyle g^{+}(z)-g^{-}(z)=-2\pi i\,\sum_{n\geq 1}\frac{e^{-2\pi
z(ni)}}{2\pi\,ni}=\log(1-e^{-2\pi iz})\,.$ (31)
###### Lemma 1.7.
The following relations hold: for any $z\in S(-\pi,0)$,
$\displaystyle g^{+}(z)+g^{+}(e^{\pi i}\,z)=\log(1-e^{-2\pi iz})\,;$
for any $z\in{\mathcal{H}}=S(0,\pi)$,
$\displaystyle g^{+}(z)+g^{+}(e^{-\pi i}\,z)=\log(1-e^{2\pi iz})\,.$
###### Proof.
The result follows immediately from (30) and (31). ∎
Lemma 1.7 is essentially Euler’s reflection formula for the $\Gamma$-function,
since it is easy to see from (13) that $I_{\Gamma}(z)=-g^{+}(z)$. If we set
$G(\tau,\nu)=g^{+}(\frac{\nu}{\tau})$, that is to say:
$G(\tau,\nu)=-\log\Gamma(\frac{\nu}{\tau}+1)+\bigl{(}\frac{\nu}{\tau}+\frac{1}{2}\bigr{)}\,\log\frac{\nu}{\tau}-\frac{\nu}{\tau}+\log\sqrt{2\pi}\,,$
(32)
then $G(\tau,\nu)$ is well defined and analytic over the domain $U^{+}$ given
below:
$U^{+}:=\\{(\tau,\nu)\in{\mathbb{C}}^{*}\times{\mathbb{C}}^{*}:\nu/\tau\notin(-\infty,0]\\}\,.$
(33)
###### Proposition 1.8.
Let $G$ be as in (32). Then, for any $(\tau,\nu)\in U^{+}$,
$G(\tau,\nu)+G(\tau,-\nu)=\log(1-e^{\mp 2\pi i\nu/\tau})$ (34)
according to $\displaystyle\frac{\nu}{\tau}\in S(-\pi,0)$ or $S(0,\pi)$,
respectively.
###### Proof.
The statement comes from Lemma 1.7. ∎
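Since $G$ is given in closed form by (32), relation (34) can be tested numerically. The following minimal sketch (Python with mpmath; the sample values of $\tau$ and $\nu$ are arbitrary) compares the two sides for a point with $\nu/\tau\in S(-\pi,0)$.

```python
from mpmath import mp, mpc, loggamma, log, sqrt, exp, pi

mp.dps = 30

def G(tau, nu):
    """The function G(tau, nu) of (32); log denotes the principal branch."""
    z = nu / tau
    return -loggamma(z + 1) + (z + mp.mpf(1) / 2) * log(z) - z + log(sqrt(2 * pi))

# Sample point with nu/tau in S(-pi, 0), i.e. Im(nu/tau) < 0:
tau = mpc(0, 1)                      # tau = i
nu  = mpc(0.4, -0.25) * tau          # so that nu/tau = 0.4 - 0.25i
z = nu / tau

lhs = G(tau, nu) + G(tau, -nu)
rhs = log(1 - exp(-2 * pi * mpc(0, 1) * z))   # upper sign in (34) for this case
print(lhs - rhs)                              # ~ 0 up to working precision
```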
### 1.6 Modular type expansion of $(x;q)_{\infty}$
We shall discuss how to understand Theorem 1.3 in the complex plane, for both
complex $q$ and complex $x$. As before, let $\tilde{\mathbb{C}}^{*}$ be the
Riemann surface of the logarithm function. Let ${\mathcal{M}}$ be the
automorphism of the $2$-dimensional complex manifold
$\tilde{\mathbb{C}}^{*}\times\tilde{\mathbb{C}}^{*}$ given as follows:
${\mathcal{M}}:(q,x)\mapsto{\mathcal{M}}(q,x)=\bigl{(}\iota(q),\iota_{q}(x)\bigr{)},$
where
$\iota(q)=q^{*}:=e^{4\pi^{2}/\log q},\qquad\iota_{q}(x)=x^{*}:=e^{2\pi i\log
x/\log q}\,.$ (35)
In the following, we will use the notations $q^{*}$ and $x^{*}$ instead of
$\iota(q)$ and $\iota_{q}(x)$ whenever no confusion can occur.
If we let
$\tilde{D}^{*}=\exp\bigl{(}i{\mathcal{H}}\bigr{)}\subset\tilde{\mathbb{C}}^{*}$,
then ${\mathcal{M}}$ induces an automorphism over the sub-manifold
$\tilde{D}^{*}\times\tilde{\mathbb{C}}^{*}$. From now on, we always write
$q=e^{2\pi i\tau}$, $x=e^{2\pi i\nu}$ and suppose $\tau\in{\mathcal{H}}$, so
that $0<|q|<1$. We shall sometimes use the pair of modular variables
$(\tau^{*},\nu^{*})$ defined as follows:
$i(\tau)=\tau^{*}:=-1/\tau,\quad i_{\tau}(\nu)=\nu^{*}:=\nu/\tau\,,$ (36)
so that we can continue to write $q^{*}=e^{2\pi i\tau^{*}}$ and $x^{*}=e^{2\pi
i\nu^{*}}$.
###### Theorem 1.9.
Let $q=e^{2\pi i\tau}$, $x=e^{2\pi i\nu}$ and let $q^{*}$, $x^{*}$ as in (35).
The following relation holds for any $\tau\in{\mathcal{H}}$ and
$\nu\in{\mathbb{C}}\setminus\bigl{(}(-\infty,-1]\cup[1,\infty)\bigr{)}$ such
that $\nu/\tau\notin(-\infty,0]$:
$\displaystyle(x;q)_{\infty}=$ $\displaystyle
q^{-1/24}\,\sqrt{1-x}\,\,(x^{*}q^{*};q^{*})_{\infty}$ (37)
$\displaystyle\times\,\exp\Bigl{(}\frac{{\rm Li}_{2}(x)}{\log
q}+G(\tau,\nu)+P(\tau,\nu)\Bigr{)}\,,$ (38)
where $\sqrt{1-x}$ stands for the principal branch of
$e^{\frac{1}{2}\log(1-x)}$, ${\rm Li}_{2}$ denotes the dilogarithm recalled in
(8), $G$ is given by (32) and where $P$ denotes the function $P_{-}$ defined
in Lemma 1.5.
###### Proof.
By Theorem 1.4 and relation (24), we arrive at the expression
$M(-\frac{\log q}{2\pi},\frac{\log x}{\log
q})=\log(x^{*}\,q^{*};q^{*})_{\infty}+P(\tau,\nu)\,;$
a suitable change of variables in (9) then leads to (37), taking into account
the standard analytic continuation argument. ∎
If we denote by $G^{*}$ the antisymmetrization of $G$ given by
$G^{*}(\tau,\nu)=\frac{1}{2}\,\bigl{(}G(\tau,\nu)-G(\tau,-\nu)\bigr{)}\,,$
then, according to relation (34), we may rewrite (37) as follows:
$\displaystyle(x;q)_{\infty}=$ $\displaystyle
q^{-1/24}\,\sqrt{\frac{1-x}{1-x^{*}}}\,\,(x^{*};q^{*})_{\infty}$ (39)
$\displaystyle\times\,\exp\Bigl{(}\frac{{\rm Li}_{2}(x)}{\log
q}+G^{*}(\tau,\nu)+P(\tau,\nu)\Bigr{)}$ (40)
if $\nu\in\tau{\mathcal{H}}$, and
$\displaystyle(x;q)_{\infty}=$ $\displaystyle
q^{-1/24}\,\frac{\sqrt{(1-x)(1-1/x^{*})}}{1-x^{*}}\,\,(x^{*};q^{*})_{\infty}$
(41) $\displaystyle\times\,\exp\Bigl{(}\frac{{\rm Li}_{2}(x)}{\log
q}+G^{*}(\tau,\nu)+P(\tau,\nu)\Bigr{)}$ (42)
if $\nu\in-\tau{\mathcal{H}}$, that is, if $\displaystyle\frac{\nu}{\tau}\in
S(-\pi,0)$.
In the above, $G^{*}$ and $P$ are odd functions of the variable $\nu$:
$G^{*}(\tau,-\nu)=-G^{*}(\tau,\nu),\quad P(\tau,-\nu)=-P(\tau,\nu)\,;$ (43)
${\rm Li}_{2}$ satisfies the so-called _Landen’s transformation_ [5, Theorem
2.6.1, p. 103]:
${\rm Li}_{2}(1-x)+{\rm Li}_{2}(1-\frac{1}{x})=-\frac{1}{2}\,\bigl{(}\log
x\bigr{)}^{2}\,.$ (44)
Finally, if we write $\vec{\omega}=(\omega_{1},\omega_{2})=(1,\tau)$ and
denote by $\Gamma_{2}(z,\vec{\omega})$ the Barnes double Gamma function
associated with the pair of periods $\vec{\omega}$ ([6]), then Theorem 1.9 and
Proposition 5 of [21] imply that
$\displaystyle\frac{\Gamma_{2}(1+\tau-\nu,\vec{\omega})}{\Gamma_{2}(\nu,\vec{\omega})}=$
$\displaystyle\sqrt{i}\,\sqrt{1-x}\,\exp\Bigl{(}\frac{\pi i}{12\tau}+\frac{\pi
i}{2}\bigl{(}\frac{\nu^{2}}{\tau}-(1+\frac{1}{\tau})\nu\bigr{)}$ (45)
$\displaystyle+\frac{{\rm Li}_{2}(x)}{\log q}+G(\tau,\nu)+P(\tau,\nu)\Bigr{)}$
(46) $\displaystyle=$
$\displaystyle\,\sqrt{2\sin\pi\nu}\,\,\exp\Bigl{(}\frac{\pi
i}{12\tau}+\frac{\nu(\nu-1)\pi i}{2\tau}$ (47) $\displaystyle+\frac{{\rm
Li}_{2}(e^{2\pi i\nu})}{2\pi i\tau}+G(\tau,\nu)+P(\tau,\nu)\Bigr{)}\,.$ (48)
## 2 Dedekind $\eta$-function, Jacobi $\theta$-function and Lambert series
In the following, we will first see in what manner Theorem 1.9 essentially
contains the modular equations known for $\eta$ and $\theta$-functions; see
Theorems 2.1 and 2.2. In §2.4, we will consider two families of series, called
$L_{1}$ and $L_{2}$, that can be obtained as logarithmic derivatives of the
infinite product $(x;q)_{\infty}$; some modular type relations will be given
in Theorem 2.3. In §2.5, classical Lambert series will be viewed as particular
cases of the previous series $L_{1}$ and $L_{2}$.
### 2.1 Dedekind $\eta$-function
Let us mention a first application of Theorem 1.9 as follows.
###### Theorem 2.1.
Let $q=e^{2\pi i\tau}$, $\tau\in{\mathcal{H}}$, and let $q^{*}=e^{-2\pi
i/\tau}$. Then
$(q;q)_{\infty}=q^{-1/24}\,\sqrt{\frac{i}{\tau}}\,\,(q^{*})^{1/24}\,(q^{*};q^{*})_{\infty}\,.$
(49)
###### Proof.
If we set
$\displaystyle G_{0}(\tau,\nu)=$ $\displaystyle
G(\tau,\nu)-\log\sqrt{\frac{2\pi\nu}{\tau}}$ $\displaystyle=$
$\displaystyle-\log\Gamma(\nu^{*}+1)+\nu^{*}\log\nu^{*}-\nu^{*}\,\,,$
we can write relation (37) of Theorem 1.9 as follows:
$\displaystyle(xq;q)_{\infty}=$ $\displaystyle
q^{-1/24}\,\sqrt{\frac{2\pi\nu}{(1-e^{2\pi
i\nu})\tau}}\,\,(x^{*}q^{*};q^{*})_{\infty}$
$\displaystyle\times\,\exp\Bigl{(}\frac{{\rm Li}_{2}(x)}{\log
q}+G_{0}(\tau,\nu)+P(\tau,\nu)\Bigr{)}\,,$
where $x=e^{2\pi i\nu}$. Suppose $\nu\to 0$, so that $x\to 1$, $\nu^{*}\to 0$
and $x^{*}\to 1$; from (32), it follows:
$\lim_{\nu\to 0}G_{0}(\tau,\nu)=0\,;$
therefore, one easily gets relation (49), remembering that $\displaystyle{\rm
Li}_{2}(1)=\frac{\pi^{2}}{6}$ and that $P(\tau,0)=0$, as stated in (43). ∎
The function $(q;q)_{\infty}$ plays a very important role in number theory and
is closely linked with the well-known Dedekind $\eta$-function:
$\eta(\tau)=e^{\pi\tau i/12}\prod_{n=1}^{\infty}(1-e^{2n\pi\tau
i})=q^{1/24}\,(q;q)_{\infty}\,,$ (50)
where $\tau\in{\mathcal{H}}$. For instance, see [13, Lectures VI, VIII] and
[5, Chapters 10, 11]. The modular relation (49), written as
$\eta(-\frac{1}{\tau})=\sqrt{\frac{\tau}{i}}\,\,\eta(\tau)\,,$
is traditionally obtained as a consequence of Poisson’s summation formula (cf.
[11, p. 597-599]) or of the Mellin transform of a suitable Dirichlet series
(cf. [5, p. 538-540]); see also [22] for a simple proof.
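The modular relation (49) is also easy to test numerically from the product definition alone; a minimal sketch follows (Python with mpmath; the truncation level and the sample value of $\tau$ are arbitrary choices).

```python
from mpmath import mp, mpc, exp, sqrt, pi

mp.dps = 30
I = mpc(0, 1)

def qp_inf(q, N=300):
    """Truncated (q; q)_oo = prod_{n>=1} (1 - q**n), valid for |q| < 1."""
    prod, qn = mp.mpf(1), q
    for _ in range(N):
        prod *= 1 - qn
        qn *= q
    return prod

tau = mpc(0.3, 1.1)                       # an arbitrary point of the upper half-plane
q      = exp(2 * pi * I * tau)            # |q| < 1
q_star = exp(-2 * pi * I / tau)           # the dual nome iota(q) of (35)

lhs = qp_inf(q)
rhs = exp(-pi * I * tau / 12) * sqrt(I / tau) * exp(-pi * I / (12 * tau)) * qp_inf(q_star)
print(lhs - rhs)                          # ~ 0: the eta-modularity (49)
```

Here $q^{-1/24}$ and $(q^{*})^{1/24}$ are evaluated as $e^{-\pi i\tau/12}$ and $e^{-\pi i/(12\tau)}$, consistently with $q=e^{2\pi i\tau}$ and $q^{*}=e^{-2\pi i/\tau}$.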
### 2.2 Modular relation on Jacobi theta function
In order to obtain the modular equation for the Jacobi theta function, we first
mention the following relation, valid for any $x\in{\mathbb{C}}\setminus(-\infty,0]$:
${\rm Li}_{2}(-x)+{\rm
Li}_{2}(-\frac{1}{x})=-\frac{\pi^{2}}{6}-\frac{1}{2}\,\bigl{(}\log
x\bigr{)}^{2}\,,$ (51)
which can be deduced directly from the definition (8) of ${\rm Li}_{2}$, for
$\frac{d\ }{dx}\bigl{(}{\rm Li}_{2}(-x)+{\rm
Li}_{2}(-\frac{1}{x})\bigr{)}=\frac{\log(1+x)}{-x}-\frac{\log(1+\frac{1}{x})}{-x}=\frac{\log
x}{-x}\,.$
One can also check (51) by making a suitable change of variable and using both
the Landen transformation (44) and formula [5, (2.6.6), p. 104]:
${\rm Li}_{2}(x)+{\rm Li}_{2}(1-x)=\frac{\pi^{2}}{6}-\log x\,\log(1-x)\,;$
see [26] for more information.
For any fixed $q=e^{2\pi i\tau}$, $\tau\in{\mathcal{H}}$, the modular variable
transformation $x\mapsto\iota_{q}(x)=x^{*}$ introduced in (35) defines an
automorphism of the Riemann surface $\tilde{\mathbb{C}}^{*}$ of the logarithm
and satisfies the following relations ($q^{*}=\iota(q)=e^{4\pi^{2}/\log q}$):
$\iota_{q}(xy)=\iota_{q}(x)\,\iota_{q}(y);\quad\iota_{q}(e^{2k\pi
i})=(q^{*})^{-k}\,,\quad\iota_{q}(q^{k})=e^{2k\pi i}$ (52)
for all $k\in{\mathbb{R}}$. In particular, one finds:
$\iota_{q}(\sqrt{q}\,x)=e^{\pi
i}\,\iota_{q}(x)\,,\quad\iota_{q}(xe^{i\pi})=\iota_{q}(x)/\sqrt{q^{*}}\,.$
(53)
As usual, for any given complex numbers $a_{1},\dots,a_{m}$, let
$(a_{1},\ldots,a_{m};q)_{\infty}=\prod_{k=1}^{m}(a_{k};q)_{\infty}\,.$
###### Theorem 2.2.
Let $q=e^{2\pi i\tau}$ and $x=e^{2\pi i\nu}$ and let
$\theta(q,x)=(q,-\sqrt{q}\,x,-\frac{\sqrt{q}}{x};q)_{\infty}\,.$ (54)
Then, the following relation holds for any $\tau\in{\mathcal{H}}$ and any
$\nu$ of the Riemann surface of the logarithm:
$\theta(q,x)=q^{1/8}\,\sqrt{\frac{i}{\tau
x}}\,\,\exp\Bigl{(}-\frac{(\log\frac{x}{\sqrt{q}}\,)^{2}}{2\log
q}\,\Bigr{)}\,\theta(q^{*},x^{*})\,.$ (55)
###### Proof.
First, suppose $\nu\in\tau{\mathcal{H}}$ and write $(x;q)_{\infty}$ and
$(1/x;q)_{\infty}$ by means of (39) and (41), respectively. By taking into
account relation (43) about the parity of $G^{*}$ and $P$, we find:
$\displaystyle(x,\frac{1}{x};q)_{\infty}=$ $\displaystyle
q^{-1/12}\,\frac{1-x}{1-1/x^{*}}\,\,\sqrt{-\frac{1}{x}}\,\,(x^{*},\frac{1}{x^{*}};q^{*})_{\infty}\,$
$\displaystyle\,\exp\Bigl{(}\frac{1}{\log q}\,\bigl{(}{\rm Li}_{2}(x)+{\rm
Li}_{2}(\frac{1}{x})\bigr{)}\Bigr{)}\,,$
where we used the relation $1/x^{*}=(1/x)^{*}$, deduced from (52). Thus, it
follows that
$(xq,\frac{1}{x};q)_{\infty}=q^{-1/12}\,\sqrt{-\frac{1}{x}}\,\,(x^{*},\frac{q^{*}}{x^{*}};q^{*})_{\infty}\,\exp\Bigl{(}\frac{1}{\log q}\,\bigl{(}{\rm Li}_{2}(x)+{\rm Li}_{2}(\frac{1}{x})\bigr{)}\Bigr{)}\,.$
Replace $x$ by $e^{-\pi i}\,x/\sqrt{q}$ and make use of (53) and (51); we get:
$\displaystyle(-\sqrt{q}\,x,-\frac{\sqrt{q}}{x};q)_{\infty}=$ $\displaystyle
q^{-1/12}\,\sqrt{\frac{\sqrt{q}}{x}\,}\,\,(-\sqrt{q^{*}}\,x^{*},-\frac{\sqrt{q^{*}}}{x^{*}};q^{*})_{\infty}$
$\displaystyle\times\,\exp\Bigl{(}\frac{1}{\log
q}\,\bigl{(}-\frac{\pi^{2}}{6}-\frac{1}{2}\,(\log\frac{x}{\sqrt{q}}\,)^{2}\bigr{)}\Bigr{)}$
for $x\notin(-\infty,0]$. By the modular equation (49) for the $\eta$-function,
we arrive at the expected formula (55).
Finally, we end the proof of the theorem by the standard analytic continuation
argument. ∎
Known as the modular formula for Jacobi’s theta function, relation (55) can be
written as follows:
$\theta(q,x)=\sqrt{\frac{i}{\tau}}\,\,\exp\Bigl{(}-\frac{(\log x\,)^{2}}{2\log
q}\,\Bigr{)}\,\theta(q^{*},x^{*})\,,$
which has a very long history, associated with the names of Gauss, Jacobi,
Dedekind, Hermite, and others. It is generally obtained by applying Poisson’s
summation formula to the Laurent series expansion:
$(q,-\sqrt{q}\,x,-\frac{\sqrt{q}}{x};q)_{\infty}=\sum_{n\in{\mathbb{Z}}}q^{\frac{n^{2}}{2}}\,x^{n}\,,$
which is the so-called Jacobi triple product formula; see, for instance, [5,
§10.4, pp. 496-501].
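The modular relation (55) can also be tested numerically. Below is a minimal Python sketch (not from the original text): it evaluates both sides using the triple-product (Laurent) series above at an arbitrary test point; the square root $\sqrt{i/(\tau x)}$ is evaluated as $\sqrt{i/\tau}\,e^{-\log x/2}$, in accordance with the convention $\log x=2\pi i\nu$ on the Riemann surface of the logarithm, and the modular variables are those of (35)-(36).

```python
# Numerical check of the modular relation (55), using the Laurent (triple-product)
# series theta(q, x) = sum_n q^{n^2/2} x^n.  The test values of tau and nu are arbitrary.
import cmath, math

def theta(tau, nu, terms=60):
    return sum(cmath.exp(1j * math.pi * tau * n * n + 2j * math.pi * n * nu)
               for n in range(-terms, terms + 1))

tau, nu = 0.3j, 0.17 + 0.05j
log_q, log_x = 2j * math.pi * tau, 2j * math.pi * nu
tau_star, nu_star = -1 / tau, nu / tau          # modular variables, as in (35)-(36)

lhs = theta(tau, nu)
# q^{1/8} * sqrt(i/(tau x)) * exp(-(log(x/sqrt(q)))^2 / (2 log q)), with
# sqrt(i/(tau x)) evaluated as sqrt(i/tau) * exp(-log_x / 2), since log x = 2*pi*i*nu.
prefactor = (cmath.exp(log_q / 8) * cmath.sqrt(1j / tau) * cmath.exp(-log_x / 2)
             * cmath.exp(-(log_x - log_q / 2) ** 2 / (2 * log_q)))
rhs = prefactor * theta(tau_star, nu_star)
print(abs(lhs - rhs) / abs(lhs))   # relative difference, expected at rounding-error level
```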
### 2.3 Another proof of Theta modular equation
As pointed out in [29, pp. 214-215], formula (55) can be interpreted in
terms of $q$-difference equations. We shall elaborate on this idea and give a
simple proof of (55).
For any fixed $q$ such that $0<|q|<1$, let
$f_{1}(x)=\theta(q,x),\quad
f_{2}(x)=\displaystyle\sqrt{\frac{1}{x}}\,\exp\Bigl{(}-\frac{(\log\frac{x}{\sqrt{q}}\,)^{2}}{2\log
q}\,\Bigr{)}$
and
$f(x)=\displaystyle\frac{f_{1}(x)}{f_{2}(x)}=g(x^{*})\,,$
where $x^{*}$ is given by (35). As $f_{1}$ and $f_{2}$ are solutions of the
same first order linear equation
$y(qx)=\displaystyle\frac{1}{\sqrt{q}\,x}\,y(x),$
$f$ is a $q$-constant, that is, $f(qx)=f(x)$; equivalently, $g$ is a uniform
(single-valued) function of the variable $x^{*}$, since $qx$ is translated into
$x^{*}e^{2\pi i}$ by (52). On the other hand, it is easy to check the following relation:
$f(xe^{-2\pi i})=e^{-2\pi i\frac{\log x}{\log q}\,-\frac{2\pi^{2}}{\log
q}}\,f(x)=\frac{1}{\sqrt{q^{*}}\,x^{*}}\,f(x),$
so that, using (52), we find:
$g(q^{*}\,x^{*})=\frac{1}{\sqrt{q^{*}}\,x^{*}}\,g(x^{*})\,.$
Summarizing, $g$ is a uniform solution of
$y(q^{*}x^{*})=y(x^{*})/({\sqrt{q^{*}}\,x^{*}})$ and vanishes over the
$q^{*}$-spiral $-\sqrt{q^{*}}\,{q^{*}}^{\mathbb{Z}}$ of the $x^{*}$-Riemann
surface of the logarithm; it follows that there exists a constant $C$ such
that $g(x^{*})=C\,\theta(q^{*},x^{*})$ for all
$x^{*}\in\tilde{\mathbb{C}}^{*}$. Write
$C=\frac{\theta(q,x)}{\theta(q^{*},x^{*})}\,\sqrt{x}\,\exp\Bigl{(}\frac{(\log\frac{x}{\sqrt{q}})^{2}}{2\log
q}\Bigr{)}$
and let $x\to e^{\pi i}/\sqrt{q}$, so that $x^{*}\to e^{\pi i}/\sqrt{q^{*}}$.
Since
$\frac{\theta(q,x)}{1+\sqrt{q}\,x}\to(q;q)^{3}_{\infty}\,,\quad\frac{\theta(q^{*},x^{*})}{1+\sqrt{q^{*}}\,x^{*}}\to(q^{*};q^{*})_{\infty}^{3}$
and
$\lim_{x\to e^{\pi
i}/\sqrt{q}}\frac{1+\sqrt{q}\,x}{1+\sqrt{q^{*}}\,x^{*}}=\frac{\sqrt{q}}{\sqrt{q^{*}}\,}\,\bigl{(}\frac{dx^{*}}{dx}\bigl{|}_{x=e^{\pi
i}/\sqrt{q}}\bigr{)}^{-1}=\frac{\tau}{i}\,,$
where $q=e^{2\pi i\tau}$, we get the following expression, deduced from the
$\eta$-modular equation (49):
$\displaystyle C=q^{1/4}\,e^{-\frac{\pi i}{2}-\frac{\pi^{2}}{2\log
q}}\,\frac{(q;q)_{\infty}^{3}}{(q^{*};q^{*})_{\infty}^{3}}\,\frac{\tau}{i}=q^{1/8}\,\sqrt{\frac{i}{\tau}}\,\,,$
and we end the proof of (55).
One key point of the previous proof is the use of the dual variables $q^{*}$ and
$x^{*}$; the underlying idea is closely linked with the concept of the local
monodromy group of linear $q$-difference equations [19, §2.2.3, Théorème
2.2.3.5]. In fact, as there exist two generators for the fundamental group of
the elliptic curve ${\mathbb{C}}^{*}/q^{\mathbb{Z}}$, one needs to consider
the “monodromy operators” in two directions, or “two periods”, $x\mapsto
xe^{2\pi i}$ and $x\mapsto xq$, which correspond exactly to $x^{*}\mapsto
x^{*}q^{*}$ and $x^{*}\mapsto x^{*}e^{-2\pi i}$, in view of (52).
### 2.4 Generalized Lambert series $L_{1}$ and $L_{2}$
As before, let $q=e^{2\pi i\tau}$, $x=e^{2\pi i\nu}$ and suppose
$\tau\in{\mathcal{H}}$. Consider the following series, which may be viewed
as generalized Lambert series:
$L_{1}(\tau,\nu)=\sum_{n=0}^{\infty}\frac{x\,q^{n}}{1-x\,q^{n}}\,,\quad
L_{2}(\tau,\nu)=\sum_{n=0}^{\infty}\frac{(n+1)x\,q^{n}}{1-x\,q^{n}}\,,$ (56)
that are both absolutely convergent for any $x\in{\mathbb{C}}\setminus
q^{-{\mathbb{N}}}$, since $|q|<1$. By expanding each term
$(1-xq^{n})^{-1}$ into a geometric series, one easily finds:
$L_{1}(\tau,\nu)=\sum_{n=0}^{\infty}\frac{x^{n+1}}{1-q^{n+1}}\,,\quad
L_{2}(\tau,\nu)=\sum_{n=0}^{\infty}\frac{x^{n+1}}{(1-q^{n+1})^{2}}\,,$ (57)
where convergence requires $x$ to lie inside the unit disc $|x|<1$ of the
$x$-plane.
Observe that
$L_{1}(\tau,\nu+\tau)-L_{1}(\tau,\nu)=-\frac{x}{1-x}$ (58)
and
$L_{2}(\tau,\nu+\tau)-L_{2}(\tau,\nu)=-L_{1}(\tau,\nu)\,.$ (59)
In this way, one may guess how to define more series such as $L_{3}$, $L_{4}$,
etc …
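For illustration, the equalities (56)-(57) and the shift relations (58)-(59) are easy to confirm numerically; the short Python sketch below (not part of the original text) does so at an arbitrary test point with $|q|<1$ and $|x|<1$.

```python
# Numerical check of (56)-(57) and of the shift relations (58)-(59)
# at arbitrary test values of q and x (|q| < 1, |x| < 1).
import cmath

def L1(q, x, N=400):
    return sum(x * q**n / (1 - x * q**n) for n in range(N))

def L2(q, x, N=400):
    return sum((n + 1) * x * q**n / (1 - x * q**n) for n in range(N))

q = 0.35 * cmath.exp(0.4j)
x = 0.60 * cmath.exp(1.1j)

L1_alt = sum(x**(n + 1) / (1 - q**(n + 1)) for n in range(400))
L2_alt = sum(x**(n + 1) / (1 - q**(n + 1))**2 for n in range(400))
print(abs(L1(q, x) - L1_alt), abs(L2(q, x) - L2_alt))      # (56) vs (57)

# replacing nu by nu + tau amounts to replacing x by q*x
print(abs(L1(q, q * x) - L1(q, x) + x / (1 - x)))           # (58)
print(abs(L2(q, q * x) - L2(q, x) + L1(q, x)))              # (59)
```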
A direct computation yields the following formulas:
$L_{1}(\tau,\nu)=-x\,\frac{\partial\ }{\partial x}\log(x;q)_{\infty}\,,$ (60)
$L_{2}(\tau,\nu)=-q\,\frac{\partial\ }{\partial
q}\log(x;q)_{\infty}+L_{1}(\tau,\nu)\,;$ (61) $\displaystyle x\,\frac{\partial
x^{*}}{\partial x}=\frac{x^{*}}{\tau},\quad x\,\frac{\partial\nu}{\partial
x}=\frac{1}{2\pi i}$ (62)
and
$\displaystyle q\,\frac{\partial x^{*}}{\partial
q}=-\frac{\nu}{\tau^{2}}\,x^{*},\quad q\,\frac{\partial q^{*}}{\partial
q}=\frac{1}{\tau^{2}}\,q^{*},\quad q\,\frac{\partial\tau}{\partial
q}=\frac{1}{2\pi i}\,.$ (63)
Here and in what follows, $q$ and $x$ are treated as independent variables,
as are the pair $(\tau,\nu)$ and their modular counterparts $(q^{*},x^{*})$ and
$(\tau^{*},\nu^{*})$.
###### Theorem 2.3.
Let $q=e^{2\pi i\tau}$, $x=e^{2\pi i\nu}$ and let $q^{*}$, $x^{*}$, $\tau^{*}$
and $\nu^{*}$ be as in (35) and (36). If $\tau\in{\mathcal{H}}$ and
$\nu/\tau\not\in(-\infty,0]$, then the following relations hold:
$\displaystyle L_{1}(\tau,\nu+\tau)=$ $\displaystyle\frac{\log(1-e^{2\pi
i\nu})}{2\pi\tau i}+\frac{e^{2\pi i\nu}}{2(1-e^{2\pi
i\nu})}+L_{1}(\tau^{*},\nu^{*}+\tau^{*})\,\frac{1}{\tau}$ (64)
$\displaystyle+\frac{1}{2\pi\tau
i}\,\Bigl{(}\frac{\Gamma^{\prime}(\frac{\nu}{\tau}+1)}{\Gamma(\frac{\nu}{\tau}+1)}-\log\frac{\nu}{\tau}-\frac{\tau}{2\nu}-\tau\,\frac{\partial\
}{\partial\nu}P(\tau,\nu)\Bigr{)}$ (65)
and
$\displaystyle L_{2}(\tau,\nu+\tau)=$ $\displaystyle\frac{1}{24}-\frac{{\rm
Li}_{2}(e^{2\pi
i\nu})}{4\pi^{2}}\,\frac{1}{\tau^{2}}-L_{1}(\tau^{*},\nu^{*}+\tau^{*})\,\frac{\nu}{\tau^{2}}$
(66)
$\displaystyle+L_{2}(\tau^{*},\nu^{*}+\tau^{*})\,\frac{1}{\tau^{2}}-\frac{\nu}{2\pi
i\tau^{2}}\,\Bigl{(}\frac{\Gamma^{\prime}(\frac{\nu}{\tau}+1)}{\Gamma(\frac{\nu}{\tau}+1)}$
(67)
$\displaystyle-\log\frac{\nu}{\tau}-\frac{\tau}{2\nu}+\frac{\tau^{2}}{\nu}\,\frac{\partial\
}{\partial\tau}P(\tau,\nu)\Bigr{)}\,.$ (68)
###### Proof.
By taking the logarithmic derivative with respect to the variable $x$ in (37)
and in view of (62), we find:
$\displaystyle x\frac{\partial\ }{\partial x}\log(x;q)_{\infty}=$
$\displaystyle-\frac{x}{2(1-x)}+\frac{x^{*}}{1-x^{*}}\,\frac{1}{\tau}+\frac{1}{\tau}\,x^{*}\frac{\partial\
}{\partial x^{*}}\log(x^{*};q^{*})_{\infty}$
$\displaystyle-\log_{q}(1-x)+\frac{1}{2\pi i}\,\bigl{(}\frac{\partial\
}{\partial\nu}G(\tau,\nu)+\frac{\partial\
}{\partial\nu}P(\tau,\nu)\bigr{)}\,,$
so that, by (60), we arrive at the following expression:
$\displaystyle L_{1}(\tau,\nu)=$
$\displaystyle\log_{q}(1-x)+\frac{x}{2(1-x)}+\bigl{(}L_{1}(\tau^{*},\nu^{*})-\frac{x^{*}}{1-x^{*}}\bigr{)}\,\frac{1}{\tau}$
$\displaystyle-\frac{1}{2\pi i}\,\bigl{(}\frac{\partial\
}{\partial\nu}G(\tau,\nu)+\frac{\partial\
}{\partial\nu}P(\tau,\nu)\bigr{)}\,.$
From (32), it follows that
$\tau\,\frac{\partial\
}{\partial\nu}G(\tau,\nu)=-\frac{\Gamma^{\prime}(\frac{\nu}{\tau}+1)}{\Gamma(\frac{\nu}{\tau}+1)}+\log\frac{\nu}{\tau}+\frac{\tau}{2\nu},$
(69)
which leads to the desired relation (64), using (58).
On the other hand, using (63), a direct computation shows that (37) implies
the following expression:
$\displaystyle q\frac{\partial\ }{\partial q}\log(x;q)_{\infty}=$
$\displaystyle-\frac{1}{24}+\frac{\nu}{\tau^{2}}\,x^{*}\frac{\partial\
}{\partial x^{*}}\,\log(1-x^{*})-\frac{\nu}{\tau^{2}}\,x^{*}\frac{\partial\
}{\partial x^{*}}\,\log(x^{*};q^{*})_{\infty}$
$\displaystyle+\frac{1}{\tau^{2}}\,q^{*}\frac{\partial\ }{\partial
q^{*}}\,\log(x^{*};q^{*})_{\infty}-\frac{{\rm Li}_{2}(x)}{(\log q)^{2}}$
$\displaystyle+\frac{1}{2\pi i}\,\Bigl{(}\frac{\partial\
}{\partial\tau}G(\tau,\nu)+\frac{\partial\
}{\partial\tau}P(\tau,\nu)\Bigr{)},$
or, by taking into account (61):
$\displaystyle q\frac{\partial\ }{\partial q}\log(x;q)_{\infty}=$
$\displaystyle-\frac{1}{24}+\frac{{\rm
Li}_{2}(x)}{4\pi^{2}}\,\frac{1}{\tau^{2}}-\frac{x^{*}}{1-x^{*}}\,\frac{\nu}{\tau^{2}}+L_{1}(\tau^{*},\nu^{*})\,\frac{\nu+1}{\tau^{2}}$
(70) $\displaystyle-L_{2}(\tau^{*},\nu^{*})\,\frac{1}{\tau^{2}}+\frac{1}{2\pi
i}\,\bigl{(}\frac{\partial\ }{\partial\tau}G(\tau,\nu)+\frac{\partial\
}{\partial\tau}P(\tau,\nu)\bigr{)}\,.$ (71)
Since
$\tau\frac{\partial\ }{\partial\tau}G(\tau,\nu)=-\nu\frac{\partial\
}{\partial\nu}G(\tau,\nu),$
by putting together (61), (64), (69) with the last relation (70), we get the
following relation:
$\displaystyle L_{2}(\tau,\nu)=$ $\displaystyle\frac{1}{24}-\frac{{\rm
Li}_{2}(x)}{4\pi^{2}}\,\frac{1}{\tau^{2}}+\frac{x^{*}}{1-x^{*}}\,\frac{\nu-\tau}{\tau^{2}}-L_{1}(\tau^{*},\nu^{*})\,\frac{\nu+1}{\tau^{2}}$
$\displaystyle+L_{1}(\tau,\nu)+L_{2}(\tau^{*},\nu^{*})\,\frac{1}{\tau^{2}}-\frac{\nu}{2\pi
i\tau^{2}}\,\Bigl{(}\frac{\Gamma^{\prime}(\frac{\nu}{\tau}+1)}{\Gamma(\frac{\nu}{\tau}+1)}$
$\displaystyle-\log\frac{\nu}{\tau}-\frac{\tau}{2\nu}+\frac{\tau^{2}}{\nu}\,\frac{\partial\
}{\partial\tau}P(\tau,\nu)\Bigr{)}\,,$
so that we arrive at the desired formula (66) by applying (59). ∎
### 2.5 Classical Lambert series
If we let $\nu=\tau$, we reduce series $L_{1}$ and $L_{2}$ to the following
classical Lambert series:
$L_{1}(\tau,\tau)=\sum_{n=0}^{\infty}\frac{q^{n+1}}{1-q^{n+1}}$
and
$L_{2}(\tau,\tau)=\sum_{n=0}^{\infty}\frac{(n+1)q^{n+1}}{1-q^{n+1}}=\sum_{n=0}^{\infty}\frac{q^{n+1}}{(1-q^{n+1})^{2}}\,.$
By considering the limit case $\nu=0$ in (64) and (66) of Theorem 2.3, we
arrive at the following statement.
###### Theorem 2.4.
For all $\tau\in{\mathcal{H}}$, the following relations hold:
$\displaystyle L_{1}(\tau,\tau)=$ $\displaystyle\frac{\log(-2\pi i\tau)}{2\pi
i\tau}+\frac{1}{4}$ (72)
$\displaystyle-\frac{\gamma}{2\pi\,i\tau}-\frac{1}{2\pi i}\,\frac{\partial\
}{\partial\nu}P(\tau,0)+L_{1}(\tau^{*},\tau^{*})\,\frac{1}{\tau}\,,$ (73)
where $\gamma$ denotes Euler’s constant, and
$\displaystyle L_{2}(\tau,\tau)=$ $\displaystyle\frac{1}{24}+\frac{1}{4\pi
i\tau}-\frac{1}{24\,\tau^{2}}+L_{2}(\tau^{*},\tau^{*})\,\frac{1}{\tau^{2}}\,.$
(74)
###### Proof.
If we set
$A(\tau,\nu)=\frac{1}{2\pi i\tau}\,\log\frac{1-e^{2\pi
i\nu}}{\nu/\tau}-\frac{1}{2}\,\bigl{(}\frac{e^{2\pi i\nu}}{1-e^{2\pi
i\nu}}+\frac{1}{2\pi i\nu}\bigr{)}\,,$
we can write (64) as follows:
$\displaystyle L_{1}(\tau,\nu+\tau)=$ $\displaystyle
A(\tau,\nu)+L_{1}(\tau^{*},\nu^{*}+\tau^{*})\,\frac{1}{\tau}$
$\displaystyle+\frac{1}{2\pi\tau
i}\,\Bigl{(}\frac{\Gamma^{\prime}(\frac{\nu}{\tau}+1)}{\Gamma(\frac{\nu}{\tau}+1)}-\tau\,\frac{\partial\
}{\partial\nu}P(\tau,\nu)\Bigr{)}\,,$
so that, recalling that $\gamma=-\displaystyle\Gamma^{\prime}(1)$, we get (72),
since it is easy to see that
$\lim_{\nu\to 0}A(\tau,\nu)=\frac{\log(-2\pi i\tau)}{2\pi
i\tau}+\frac{1}{4}\,.$
At the same time, putting $\nu=0$ in (66) allows one to obtain relation (74),
since $P(\tau,0)=0$ for all $\tau\in{\mathcal{H}}$ implies that
$\displaystyle\frac{\partial\ }{\partial\tau}P(\tau,0)=0$ identically. ∎
Formula (72) has been known since Schlömilch; see Stieltjes [23, (84), p. 54].
Relation (74) is really a modular relation and is traditionally obtained by
taking the derivative with respect to the variable $\tau$ in the modular formula (49);
see [1, Exercises 6 and 7, p. 71].
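Unlike (72), relation (74) involves no additional functions beyond the Lambert series itself, so it can be verified numerically in a few lines. The Python sketch below (not from the original text) does this for an arbitrary purely imaginary value of $\tau$.

```python
# Numerical check of the modular relation (74) for the classical Lambert series L2.
import cmath, math

def L2_classical(tau, N=200):
    q = cmath.exp(2j * math.pi * tau)
    return sum((n + 1) * q**(n + 1) / (1 - q**(n + 1)) for n in range(N))

tau = 0.8j                      # arbitrary point on the positive imaginary axis
tau_star = -1 / tau
lhs = L2_classical(tau)
rhs = (1 / 24 + 1 / (4j * math.pi * tau) - 1 / (24 * tau**2)
       + L2_classical(tau_star) / tau**2)
print(abs(lhs - rhs))           # expected to be at rounding-error level
```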
### 2.6 Some remarks when $q$ tends toward one
For the sake of simplicity, we will limit ourselves to the real case and suppose
that $q\to 1^{-}$ through real values in $(0,1)$, so that one can let
$\tau=i\alpha$, $\alpha\to 0^{+}$. As $\tau^{*}=-1/\tau=i/\alpha$, one may
observe that $\Im(\tau^{*})\to+\infty$ and therefore $q^{*}\to 0^{+}$ rapidly
or, more precisely, exponentially fast with respect to the variable $1/\alpha$. The
relation
$|x^{*}|=|e^{2\pi\nu/\alpha}|=e^{2\pi\,\Re(\nu)/\alpha}$
shows that, as $\alpha\to 0^{+}$, the modular variable $x^{*}$ stays on the
unit circle if and only if $\Re(\nu)=0$, that is, if the initial variable $x$
takes a positive real value; otherwise, $x^{*}$ goes rapidly to $\infty$ or to $0$
according to the sign of $\Re\nu$.
Since $e^{2\pi i(\nu+1)}=e^{2\pi i\nu}$ in the $x$-plane, one can always suppose
that $\Re(\nu)\in[0,1)$; in this way, it follows that
$(x^{*}q^{*};q^{*})_{\infty}=1+O\bigl{(}e^{-(1-\Re(\nu))/\alpha}\,\bigr{)}\,\sim
1.$
###### Lemma 2.5.
Let $\tau=i\alpha$, $\alpha>0$, and let $G$, $P$ be as in Theorem 1.9. Then the
following limits hold for any fixed $\nu\in(0,1)$:
$\lim_{\alpha\to 0^{+}}P(\tau,\nu)=\lim_{\alpha\to 0^{+}}G(\tau,\nu)=0\,.$
###### Proof.
The vanishing limit of $G$ is a direct consequence of Stirling’s asymptotic
formula for $\log\Gamma$. The reader may complete the proof by direct
estimates. ∎
Thus, from Theorem 1.9, we find:
$\log(x;q)_{\infty}\sim\frac{\log({1-x})}{2}\,-\frac{{\rm
Li}_{2}(x)}{2\pi\alpha}$
when $q=e^{-2\pi\alpha}\to 1^{-}$ .
Our final remark concerns the limit behavior of the generalized series $L_{1}$
and $L_{2}$. From Theorem 2.3, it is easy to see that, naturally,
$2\pi\alpha\,L_{1}(\alpha i,\nu)\sim-\log(1-x),\quad
4\pi^{2}\alpha^{2}\,L_{2}(\alpha i,\nu)\sim{\rm Li}_{2}(x)$
as $\alpha\to 0^{+}$.
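These limiting behaviors, as well as the asymptotic relation for $\log(x;q)_{\infty}$ displayed above, are easy to probe numerically. The Python sketch below (not part of the original text) compares the truncated series with the stated leading terms for one small value of $\alpha$ and one real $x\in(0,1)$; the agreement improves as $\alpha$ decreases.

```python
# Numerical illustration of the q -> 1^- behavior, with q = exp(-2*pi*alpha).
from mpmath import mp, mpf, exp, log, pi, polylog

mp.dps = 30
alpha, x = mpf('0.01'), mpf('0.4')
q = exp(-2 * pi * alpha)
N = 6000                                  # enough terms: q**N is negligible

log_poch = sum(log(1 - x * q**n) for n in range(N))
print(log_poch, log(1 - x) / 2 - polylog(2, x) / (2 * pi * alpha))

L1 = sum(x * q**n / (1 - x * q**n) for n in range(N))
L2 = sum((n + 1) * x * q**n / (1 - x * q**n) for n in range(N))
print(2 * pi * alpha * L1, -log(1 - x))
print(4 * pi**2 * alpha**2 * L2, polylog(2, x))
```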
In a forthcoming paper [14], we shall give a compactly uniform Gevrey
asymptotic expansion for $(x;q)_{\infty}$ when $q\to 1$ inside the unit disc,
$x$ being a complex parameter; see [15, §1.4.1, p. 84-86] for Gevrey
asymptotic expansion with parameters.
## 3 Proof of Theorem 1.1
Throughout this section, we let
$q=e^{-a}=e^{-2\pi\alpha},\quad x=e^{-(1+\xi)a}=q^{1+\xi}$
and suppose
$a=2\pi\alpha>0,\quad 0<q<1,\quad\xi>-1,\quad 0<x<1\,.$
For any positive integer $N$, define
$V_{N}(a,\xi):=\sum_{n=1}^{N}\log(1-e^{-(n+\xi)a})\,.$ (75)
It is easy to see that
$\log\,(x;q)_{\infty}=\lim_{N\to\infty}V_{N}(a,\xi)\,.$
We shall prove Theorem 1.1 in several steps; our approach is largely inspired
by Stieltjes’ work _Étude de la fonction
$P(a)=\displaystyle\sum_{1}^{\infty}\frac{1}{e^{\frac{n}{a}}-1}$_ that one can
find in his thesis [23, p. 57-62]. The starting point is the fact that
$\displaystyle\frac{1}{e^{\sqrt{2\pi}\,u}-1}-\frac{1}{\sqrt{2\pi}\,u}$ is a
self-reciprocal function with respect to the Fourier sine transform [24, (7.2.2),
p. 179], so that each finite sum $V_{N}$ may be written as four or five
appropriate sine or cosine integrals, depending on $N$, which are then estimated.
### 3.1 Some preparatory formulas
We are going to use the following formulas:
$\int_{0}^{\infty}\frac{\sin\lambda u}{e^{2\pi
u}-1}du=\frac{1}{4}+\frac{1}{2}\bigl{(}\frac{1}{e^{\lambda}-1}-\frac{1}{\lambda}\bigr{)}\,,$
(76)
and
$\int_{0}^{\infty}\frac{1-\cos\lambda u}{e^{2\pi
u}-1}\frac{du}{u}=\frac{\lambda}{4}+\frac{1}{2}\log\frac{1-e^{-\lambda}}{\lambda}\,,$
(77)
where $\lambda$ is assumed to be a real or complex number such that
$|\Im\lambda|<2\pi$; notice that the second formula can be deduced from the
first one by integrating with respect to $\lambda$. For instance, see [23, (82) & (83), p.
57], [24, (7.2.2), p. 179] or [25, Example 2, p. 122].
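Both identities are easy to confirm numerically; the following Python sketch (not part of the original text) does so for one arbitrary real value of $\lambda$.

```python
# Numerical check of the integrals (76) and (77) for a sample real lambda.
import numpy as np
from scipy.integrate import quad

lam = 0.7   # arbitrary test value

def f76(u):
    # integrand of (76); the value at u = 0 is the limit lam / (2*pi)
    return np.sin(lam * u) / np.expm1(2 * np.pi * u) if u > 0 else lam / (2 * np.pi)

def f77(u):
    # integrand of (77); the value at u = 0 is the limit lam^2 / (4*pi)
    return (1 - np.cos(lam * u)) / np.expm1(2 * np.pi * u) / u if u > 0 else lam * lam / (4 * np.pi)

val76, _ = quad(f76, 0, np.inf)
val77, _ = quad(f77, 0, np.inf)
print(val76 - (0.25 + 0.5 * (1 / np.expm1(lam) - 1 / lam)))            # ~ 0
print(val77 - (lam / 4 + 0.5 * np.log((1 - np.exp(-lam)) / lam)))      # ~ 0
```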
From (77), it follows that
$\displaystyle V_{N}(a,\xi)=$
$\displaystyle\sum_{n=1}^{N}\log\,(n+\xi)-\frac{N}{2}\,\xi a$ (78)
$\displaystyle-\frac{N(N+1)}{4}\,a+N\log\,a+R_{N}(a,\xi)\,,$ (79)
where
$R_{N}(a,\xi)=\int_{0}^{\infty}\frac{h_{N}(au,\xi)}{e^{2\pi
u}-1}\,\frac{du}{u}$
and
$h_{N}(t,\xi)=2N-2\sum_{n=1}^{N}\cos(n+\xi)t\,.$
By using the elementary relations
$2\sum_{n=1}^{N}\cos nt=\cos Nt+\sin Nt\cot\frac{t}{2}-1$
and
$2\sum_{n=1}^{N}\sin nt=\sin Nt+(1-\cos Nt)\cot\frac{t}{2}\,,$
we obtain:
$h_{N}(t,\xi)=2N+\cos\xi t-\cos(N+\xi)t+\bigl{(}\sin\xi
t-\sin(N+\xi)t\bigr{)}\cot\frac{t}{2}\,.$
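As a quick sanity check (not in the original text), this closed form for $h_{N}$ can be verified numerically against its defining cosine sum:

```python
# Verify the closed form of h_N(t, xi) against its defining cosine sum.
import numpy as np

N, xi = 7, 0.42                        # arbitrary test values
t = np.linspace(0.1, 6.0, 500)         # avoid the zeros of tan(t/2) at t = 0 and 2*pi

h_sum = 2 * N - 2 * sum(np.cos((n + xi) * t) for n in range(1, N + 1))
h_closed = (2 * N + np.cos(xi * t) - np.cos((N + xi) * t)
            + (np.sin(xi * t) - np.sin((N + xi) * t)) / np.tan(t / 2))
print(np.max(np.abs(h_sum - h_closed)))    # ~ 1e-13
```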
Let us define the following integrals:
$R_{N}^{(1)}(a,\xi)=\int_{0}^{\infty}\frac{\cos\xi au-\cos(N+\xi)au}{e^{2\pi
u}-1}\,\frac{du}{u}$ (80)
and
$R_{N}^{(2)}(a,\xi)=\int_{0}^{\infty}\frac{2N+\bigl{(}\sin\xi
au-\sin(N+\xi)au\bigr{)}\cot\frac{au}{2}}{e^{2\pi u}-1}\,\frac{du}{u}\,,$ (81)
so that
$R_{N}(a,\xi)=R_{N}^{(1)}(a,\xi)+R_{N}^{(2)}(a,\xi).$
We will look for the limits of $R_{N}^{(1)}$ and $R_{N}^{(2)}$ as $N$
becomes infinitely large. To simplify, we will write $a_{N}\sim_{N}b_{N}$ if
the quantity $(a_{N}-b_{N})$ tends to zero as $N\to\infty$. From (77), we
first observe the following result.
###### Lemma 3.1.
The following relation holds:
$R_{N}^{(1)}(a,\xi)\sim_{N}\frac{N}{4}\,a-\frac{1}{2}\,\log
N-\frac{1}{2}\,\log\frac{1-e^{-\xi a}}{\xi}\,.$ (82)
###### Proof.
Applying (77) to the following integrals
$\int_{0}^{\infty}\frac{1-\cos\xi au}{e^{2\pi
u}-1}\,\frac{du}{u}\,,\quad\int_{0}^{\infty}\frac{1-\cos(N+\xi)au}{e^{2\pi
u}-1}\,\frac{du}{u}$
implies directly Lemma 3.1. ∎
The following well-known result, due to Riemann, will be used repeatedly in the
course of the proof.
###### Lemma 3.2.
Let $f$ be a continuous and integrable function on a finite or infinite closed
interval $[\alpha,\beta]\subset\overline{{\mathbb{R}}}$. Then the following
relations hold:
$\int_{\alpha}^{\beta}f(t)\sin
Nt\,dt\sim_{N}0,\quad\int_{\alpha}^{\beta}f(t)\cos Nt\,dt\sim_{N}0.$
### 3.2 First part of $R_{N}^{(2)}$
The integral (81) defining $R_{N}^{(2)}$ seems more complicated than that of
$R_{N}^{(1)}$, because of the simple poles at $u=\displaystyle\frac{2}{a}\,\pi$,
$\displaystyle\frac{4}{a}\,\pi$, $\displaystyle\frac{6}{a}\,\pi$, $\cdots$
that the function $\displaystyle\cot\frac{au}{2}$ admits on
$(0,+\infty)$. In such a situation, a very classical technique consists
in replacing this function by its partial fraction decomposition, as given in
(19). By considering $\displaystyle\frac{2}{au}$ instead of
$\displaystyle\cot\frac{au}{2}$ in (81), we are led to the following integral:
$R_{N}^{(21)}(a,\xi)=\frac{2}{a}\,\int_{0}^{\infty}\frac{Nau+\sin\xi
au-\sin(N+\xi)au}{e^{2\pi u}-1}\,\frac{du}{u^{2}}\,;$ (83)
if we set
$R_{N}^{(22)}(a,\xi)=4a\,\int_{0}^{\infty}\sum_{n=1}^{\infty}\frac{\sin(N+\xi)au-\sin\xi
au}{4\pi^{2}n^{2}-a^{2}u^{2}}\,\frac{du}{e^{2\pi u}-1}\,,$ (84)
then, in view of (19) we obtain the following equality:
$R_{N}^{(2)}(a,\xi)=R_{N}^{(21)}(a,\xi)+R_{N}^{(22)}(a,\xi)\,.$
###### Lemma 3.3.
The following relation holds for $a>0$ and $\xi>0$:
$\displaystyle R_{N}^{(21)}(a,\xi)\sim_{N}$
$\displaystyle\frac{N(N+2\xi)}{4}\,a-(N+\xi)\log(N+\xi)+N(1-\log a)$ (85)
$\displaystyle-\frac{\pi^{2}}{6a}+\xi\log\xi-\frac{1}{a}\int_{0}^{\xi
a}\log(1-e^{-t})\,dt\,.$ (86)
###### Proof.
For any pair $(N,\xi)\in{\mathbb{N}}\times{\mathbb{R}}$, let
$f_{N}(a,\xi)=\displaystyle\frac{a}{2}R_{N}^{(21)}(a,\xi)$; it is easy to see
that $a\mapsto f_{N}(a,\xi)$ defines an odd function of $a$, analytic at the origin
of the real axis, since
$f_{N}(a,\xi)=\int_{0}^{\infty}\frac{Nau+\sin\xi au-\sin(N+\xi)au}{e^{2\pi
u}-1}\,\frac{du}{u^{2}}\,.$
Let $f_{N}^{\prime}(a,\xi)$ denote the derivative of $f_{N}(a,\xi)$ with respect
to the variable $a$. It follows that
$f_{N}^{\prime}(a,\xi)=\int_{0}^{\infty}\frac{(N+\xi)\bigl{(}1-\cos(N+\xi)au\bigr{)}-\xi(1-\cos\xi
au)}{e^{2\pi u}-1}\,\frac{du}{u}\,,$
so that applying (77) gives rise to the following relation:
$\displaystyle f_{N}^{\prime}(a,\xi)=$
$\displaystyle\frac{N(N+2\xi)}{4}\,a+\frac{N+\xi}{2}\,\log\frac{1-e^{-(N+\xi)a}}{N+\xi}$
$\displaystyle-\frac{N}{2}\,\log a-\frac{\xi}{2}\,\log\frac{1-e^{-\xi
a}}{\xi}\,.$
To come back to $f_{N}(a,\xi)$, we integrate $f_{N}^{\prime}(t,\xi)$ over the
interval $(0,a)$ and remark that $f_{N}(0,\xi)=0$; it follows that
$\displaystyle f_{N}(a,\xi)=$
$\displaystyle\frac{N(N+2\xi)}{8}\,a^{2}-\frac{N+\xi}{2}\,a\,\log(N+\xi)-\frac{N}{2}\,(\log
a-1)a$ (87)
$\displaystyle+\frac{\xi}{2}\,(\log\xi)\,a+\frac{1}{2}\,I(a,N+\xi)-\frac{1}{2}\,I(a,\xi)\,,$
(88)
where
$I(a,\delta)=\int_{0}^{\delta a}\log(1-e^{-t})\,dt\,.$
Now we suppose $a>0$ and let $N\to+\infty$. Noticing that
$I(a,N+\xi)\sim_{N}\int_{0}^{\infty}\log(1-e^{-t})\,dt=-{\rm
Li}_{2}(1)=-\frac{\pi^{2}}{6},$ (89)
we get immediately (85) from (87).
∎
The term $\displaystyle-\frac{\pi^{2}}{6a}$ appearing in expression (85) plays
a crucial role in understanding the asymptotic behavior of
$\log(x;q)_{\infty}$ as $q\to 1^{-}$, that is, $a\to 0^{+}$. The crucial point
is formula (89), which remains valid for all complex numbers $a$ such that $\Re
a>0$.
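The identity (89) itself can be confirmed by a one-line numerical integration; the sketch below (not part of the original text) uses scipy for this purpose.

```python
# Numerical check of (89): the integral of log(1 - e^{-t}) over (0, infinity) equals -pi^2/6.
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda t: np.log(-np.expm1(-t)), 0, np.inf, limit=200)
print(val + np.pi**2 / 6)   # should be close to 0
```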
### 3.3 Intermediate part in $R_{N}^{(2)}$
Now consider $R_{N}^{(22)}(a,\xi)$ of (84); then
$R_{N}^{(22)}(a,\xi)=\frac{2}{\pi}\,\bigl{(}I_{N}(a,\xi)-J_{N}(a,\xi)\bigr{)}\,,$
(90)
if we set
$I_{N}(a,\xi)=\int_{0}^{\infty}\sum_{n=1}^{\infty}\frac{\sin 2nN\pi t\cos
2n\xi\pi t}{n(e^{4n\pi^{2}t/a}-1)}\,\frac{dt}{1-t^{2}}$ (91)
and
$J_{N}(a,\xi)=\int_{0}^{\infty}\sum_{n=1}^{\infty}\frac{\sin 2n\xi\pi t(1-\cos
2nN\pi t)}{n(e^{4n\pi^{2}t/a}-1)}\,\frac{dt}{1-t^{2}}\,.$ (92)
Here, each series under the integral sign converges absolutely to an
integrable function over $(0,\infty)$, except possibly near $t=0$ and $t=1$.
Lemma 3.4 below will tell us how to regularize the situation at the origin;
notice also that these integrals are better behaved at $t=1$ than at $t=0$,
owing to the large factors $(e^{4n\pi^{2}t/a}-1)$.
###### Lemma 3.4.
Let $\delta\in(0,1)$, $\lambda>0$ and let $\\{h_{n,N}\\}_{n,N\in{\mathbb{N}}}$
be a uniformly bounded family of continuous functions on $[0,\delta]$. For any
positive integer $M$, let $A_{M}({N})$ denote the integral given by
$A_{M}(N)=\int_{0}^{\delta}\sum_{n=1}^{M}\frac{h_{n,N}(t)}{n}\,\bigl{(}\frac{1}{e^{n\lambda
t}-1}-\frac{1}{n\lambda t}\bigr{)}\,\frac{dt}{1-t^{2}}\,.$
Then, as $M\to\infty$, the sequence $\\{A_{M}(N)\\}$ converges uniformly for
$N\in{\mathbb{N}}$.
###### Proof.
We suppose $\lambda=1$, the general case being analogous; thus, one can write
$A_{M}(N)$ as follows:
$A_{M}(N)=\sum_{n=1}^{M}\int_{0}^{n\delta}{h_{n,N}(t/n)}\,\bigl{(}\frac{1}{e^{t}-1}-\frac{1}{t}\bigr{)}\,\frac{dt}{n^{2}-t^{2}}\,.$
Observe that the function $(e^{t}-1)^{-1}-t^{-1}$ increases from
$-1/2$ toward zero as $t$ tends to infinity through positive values; indeed,
$(e^{t}-1)^{-1}-t^{-1}=O(t^{-1})$ as $t\to+\infty$. Therefore, if we make use
of the relation
$\displaystyle\int_{0}^{n\delta}=\int_{0}^{\sqrt{n}\,\delta}+\int_{\sqrt{n}\,\delta}^{n\delta}$
and let $n\to+\infty$, we find:
$\Bigl{|}\int_{0}^{n\delta}{h_{n,N}(t/n)}\,\bigl{(}\frac{1}{e^{t}-1}-\frac{1}{t}\bigr{)}\,\frac{dt}{n^{2}-t^{2}}\Bigr{|}\leq
C\,n^{-3/2}\,,$
where $C$ denotes a suitable positive constant independent of $N$ and $n$;
this ends the proof of Lemma 3.4. ∎
We come back to the integral $I_{N}$ given in (91).
###### Lemma 3.5.
The following relation holds:
$I_{N}(a,\xi)\sim_{N}\frac{\pi}{48}\,a-\frac{\pi}{2}\sum_{n=1}^{\infty}\frac{\cos
2n\pi\xi}{n(e^{4n\pi^{2}/a}-1)}\,.$ (93)
###### Proof.
We fix a small $\delta>0$, cut the interval $(0,\infty)$ into four parts
$(0,\delta)$, $(\delta,1-\delta)$, $(1-\delta,1+\delta)$ and
$(1+\delta,\infty)$, and denote the corresponding integrals by
$I_{N}^{0\delta}$, $I_{N}^{\delta+}$, $I_{N}^{1\mp\delta}$ and
$I_{N}^{\delta\infty}$, respectively. According to Lemma 3.2, we find:
$I_{N}^{\delta+}(a,\xi)\sim_{N}0,\qquad I_{N}^{\delta\infty}(a,\xi)\sim_{N}0,$
(94)
since
$\sum_{n=M}^{\infty}\bigl{(}\int_{\delta}^{1-\delta}+\int_{1+\delta}^{\infty}\bigr{)}\frac{|\sin
2nN\pi t\cos 2n\xi\pi t|}{n(e^{4n\pi^{2}t/a}-1)}\,\frac{dt}{|1-t^{2}|}\to 0$
when $M\to\infty$. In the same way, we may observe that
$\displaystyle I_{N}^{0\delta}(a,\xi)$
$\displaystyle\sim_{N}\frac{a}{4\pi^{2}}\,\int_{0}^{\delta}\sum_{n=1}^{\infty}\frac{\sin
2nN\pi t\cos 2n\xi\pi t}{n^{2}t({1-t^{2}})}\,{dt}$ (95)
$\displaystyle\sim_{N}\frac{a}{4\pi^{2}}\,\int_{0}^{\delta}\sum_{n=1}^{\infty}\frac{\sin
2nN\pi t}{n^{2}}\,\frac{dt}{t}$ (96)
$\displaystyle=\frac{a}{4\pi^{2}}\,\sum_{n=1}^{\infty}\,\int_{0}^{\delta}\frac{\sin
2nN\pi t}{n^{2}}\,\frac{dt}{t}\,,$ (97)
where the first approximation relation is essentially obtained from Lemma 3.4,
combined with Lemma 3.2. Since
$\int_{0}^{\infty}\frac{\sin t}{t}\,dt=\frac{\pi}{2},$ (98)
from (95) we deduce the following relation:
$\displaystyle
I_{N}^{0\delta}(a,\xi)\sim_{N}\frac{a}{8\pi}\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{\pi}{48}\,a\,.$
(99)
A similar analysis allows one to write the following relations for the
remaining integral $I_{N}^{1\mp\delta}$:
$\displaystyle I_{N}^{1\mp\delta}(a,\xi)$
$\displaystyle\sim_{N}\frac{1}{2}\,\int_{1-\delta}^{1+\delta}\sum_{n=1}^{\infty}\frac{\sin
2nN\pi t\cos 2n\xi\pi t}{n(e^{4n\pi^{2}t/a}-1)}\,\frac{dt}{1-t}$ (100)
$\displaystyle\sim_{N}\frac{1}{2}\,\int_{1-\delta}^{1+\delta}\sum_{n=1}^{\infty}\frac{\cos
2n\xi\pi}{n(e^{4n\pi^{2}/a}-1)}\,\frac{\sin 2nN\pi t}{1-t}\,dt$ (101)
$\displaystyle\sim_{N}\frac{1}{2}\,\sum_{n=1}^{\infty}\frac{\cos
2n\xi\pi}{n(e^{4n\pi^{2}/a}-1)}\,\int_{-\infty}^{+\infty}-\frac{\sin
t}{t}\,dt$ (102) $\displaystyle=-\frac{\pi}{2}\,\sum_{n=1}^{\infty}\frac{\cos
2n\xi\pi}{n(e^{4n\pi^{2}/a}-1)}\,,$ (103)
where the last equality comes from (98).
Accordingly, we obtain the desired expression (93) by putting together the
estimates (94), (99) and (100), and thus the proof is complete. ∎
### 3.4 Singular integral as limit part of $R_{N}^{(2)}$
In order to give estimates for $J_{N}(a,\xi)$ of (92), we shall make use of
the Cauchy principal value of a singular integral. The situation we have to
consider is the following [25, §6.23, p. 117]: let $f$ be a continuous function
over $(0,1)\cup(1,+\infty)$ such that, for any $\epsilon>0$, $f$ is integrable
over both intervals $(0,1-\epsilon)$ and $(1+\epsilon,+\infty)$; one defines
${\cal PV}\int_{0}^{\infty}f(t)dt=\lim_{\epsilon\to
0^{+}}\bigl{(}\int_{0}^{1-\epsilon}f(t)\,dt+\int_{1+\epsilon}^{\infty}f(t)\,dt\bigr{)}$
whenever the last limit exists.
###### Lemma 3.6.
The following relation holds:
$J_{N}(a,\xi)\sim_{N}{\cal PV}\int_{0}^{\infty}\sum_{n=1}^{\infty}\frac{\sin
2n\xi\pi t}{n(e^{4n\pi^{2}t/a}-1)}\,\frac{dt}{1-t^{2}}\,.$ (104)
###### Proof.
For any given number $\epsilon\in(0,1)$, let
$J_{N}^{(1\mp\epsilon)}(a,\xi)=\int_{1-\epsilon}^{1+\epsilon}\sum_{n=1}^{\infty}\frac{\sin
2n\xi\pi t(1-\cos 2nN\pi t)}{n(e^{4n\pi^{2}t/a}-1)}\,\frac{dt}{1-t^{2}}\,;$
Thanks to a suitable change of variable, we obtain the following expression:
$J_{N}^{(1\mp\epsilon)}(a,\xi)=\int_{0}^{\epsilon}\sum_{n=1}^{\infty}\bigl{(}\frac{h_{n}(a,\xi,t)}{2-t}-\frac{h_{n}(a,\xi,-t)}{2+t}\bigr{)}\,\frac{1-\cos
2nN\pi t}{nt}\,dt\,,$
where
$h_{n}(a,\xi,t)=\frac{\sin 2n\xi\pi(1-t)}{e^{4n\pi^{2}(1-t)/a}-1}\,.$
From Lemma 3.2, one deduces:
$J_{N}^{(1\mp\epsilon)}(a,\xi)\sim_{N}\int_{0}^{\epsilon}\sum_{n=1}^{\infty}\bigl{(}\frac{h_{n}(a,\xi,t)}{2-t}-\frac{h_{n}(a,\xi,-t)}{2+t}\bigr{)}\,\frac{dt}{nt}\,.$
(105)
Again applying Lemma 3.2 implies that
$\displaystyle
J_{N}(a,\xi)-J_{N}^{(1\mp\epsilon)}(a,\xi)\sim_{N}\bigl{(}\int_{0}^{1-\epsilon}+\int_{1+\epsilon}^{\infty}\bigr{)}\sum_{n=1}^{\infty}\frac{\sin
2n\xi\pi t}{n(e^{4n\pi^{2}t/a}-1)}\,\frac{dt}{1-t^{2}}\,,$
which, using (105), allows us to conclude, since it is clear that
$\lim_{\epsilon\to
0^{+}}\int_{0}^{\epsilon}\sum_{n=1}^{\infty}\bigl{(}\frac{h_{n}(a,\xi,t)}{2-t}-\frac{h_{n}(a,\xi,-t)}{2+t}\bigr{)}\,\frac{dt}{nt}=0\,.$
∎
### 3.5 End of the proof of Theorem 1.1
###### Proof.
Consider the functions $V_{N}(a,\xi)$ given in (75) and recall that
$\log(x;q)_{\infty}$ is the limit of $V_{N}(a,\xi)$ as $N$ goes to infinity;
so we need to determine the limiting behavior of the right-hand side of (78) as
$N\to\infty$.
Letting
$G_{N}(a,\xi)=\sum_{n=1}^{N}\log\,(n+\xi)-\frac{N}{2}\,\xi
a-\frac{N(N+1)}{4}\,a+N\log\,a\,,$
it follows that
$V_{N}(a,\xi)=G_{N}(a,\xi)+R_{N}^{(1)}(a,\xi)+R_{N}^{(21)}(a,\xi)+\frac{2}{\pi}\,(I_{N}(a,\xi)-J_{N}(a,\xi))\,,$
where $R_{N}^{(1)}$, $R_{N}^{(21)}$, $I_{N}$ and $J_{N}$ are considered in
Lemmas 3.1, 3.3, 3.5 and 3.6, respectively. From Stirling’s asymptotic formula
[5, Theorem 1.4.1, page 18], one easily gets:
$\displaystyle\sum_{n=1}^{N}\log\,(n+\xi)=$
$\displaystyle\log\Gamma(N+\xi+1)-\log\Gamma(\xi+1)$ $\displaystyle\sim_{N}$
$\displaystyle\log\sqrt{2\pi}+(N+\xi+\frac{1}{2})\log(N+\xi+1)$
$\displaystyle-(N+\xi+1)-\log\Gamma(\xi+1)\,.$
Thanks to (85) of Lemma 3.3, one finds:
$\displaystyle G_{N}(a,\xi)+R_{N}^{(21)}(a,\xi)\sim_{N}$
$\displaystyle-\frac{N}{4}\,a+\frac{1}{2}\log
N-\log\Gamma(\xi+1)-\frac{\pi^{2}}{6a}$
$\displaystyle+\log\sqrt{2\pi}-\xi+\xi\log\xi-\frac{1}{a}\int_{0}^{\xi
a}\log(1-e^{-t})\,dt\,.$
Thus, using (82) of Lemma 3.1, one can deduce the following expression:
$\displaystyle G_{N}(a,\xi)+R_{N}^{(1)}(a,\xi)+R_{N}^{(21)}(a,\xi)$
$\displaystyle\sim_{N}$
$\displaystyle-\frac{\pi^{2}}{6a}+\log\sqrt{2\pi}-\xi-\log\Gamma(\xi+1)$
$\displaystyle-\bigl{(}\xi+\frac{1}{2}\bigr{)}\log\frac{1-e^{-\xi
a}}{\xi}+\frac{1}{a}\int_{0}^{\xi a}\frac{t}{e^{t}-1}\,dt\,,$
which implies the starting formula (1) of our paper, with the help of Lemmas
3.5 and 3.6, after replacing $a$ by $2\pi\alpha$ throughout. ∎
## References
* [1] T. M. Apostol, Modular functions and Dirichlet series in number theory, Second edition, GTM 41, Springer-Verlag, New York, 1990.
* [2] T. M. Apostol, Generalized Dedekind sums and transformation formulae of certain Lambert series, Duke Math. J. 17 (1950), 147–157.
* [3] T. M. Apostol, Elementary proof of the transformation formula for Lambert series involving generalized Dedekind sums, J. Number Theory 15 (1982), no. 1, 14–24.
* [4] G. E. Andrews, The theory of partitions, Cambridge University Press, Cambridge, 1976.
* [5] G. E. Andrews, R. Askey and R. Roy, Special functions, Cambridge University Press, 2000.
* [6] E. W. Barnes, The theory of the double gamma function, Phil. Trans. R. Soc. Lond. A, 196 (1901), 265-388.
* [7] B.C. Berndt, Ramanujan’s notebooks, Part II, Springer-Verlag, New York, 1989.
* [8] G. D. Birkhoff, The generalized Riemann problem for linear differential equations and the allied problems for linear difference and $q$-difference equations, _Proc. Amer. Acad._ , 49 (1913), 521-568.
* [9] L. Di Vizio & C. Zhang, On $q$-summation and confluence, _Ann. Inst. Fourier_ , 59 (2009), 347–392.
* [10] L. Euler, Introductio in Analysin Infinitorum, 1748.
* [11] A. P. Guinand, On Poisson’s summation formula, Ann. Math., (2) 42 (1941), 591-603.
* [12] A. P. Guinand, Functional equations and self-reciprocal functions connected with Lambert series, Quart. J. Math., Oxford Ser. 15 (1944), 11–23.
* [13] G. H. Hardy, Ramanujan, Twelve lectures on subjects suggested by his life and work, Chelsea Publishing Company (originally published by Cambridge University Press), New York, 1940.
* [14] X.-Y. Li & C. Zhang, Complete asymptotic expansion of Jackson Gamma function.
* [15] J. Martinet & J.-P. Ramis, Problèmes de modules pour des équations différentielles non linéaires du premier ordre, _Inst. Hautes Études Sci. Publ. Math._ 55 (1982), 63-164.
* [16] S. Ramanujan, Collected papers of Srinivasa Ramanujan, edited by G. H. Hardy, P. V. Seshu Aiyar and B. M. Wilson, Chelsea Publishing Company, New York, Second printing of the 1927 original.
* [17] J.-P. Ramis, Séries divergentes et théories asymptotiques, Bull. Soc. Math. France, 121 (1993), Panoramas et Synthèses, suppl.
* [18] J.-P. Ramis, J. Sauloy & C. Zhang, Local analytic classification of $q$-difference equations, http://front.math.ucdavis.edu/0903.0853 (arXiv: 0903.0853), 2009.
* [19] J. Sauloy, Galois theory of Fuchsian $q$-difference equations. _Ann. Sci. École Norm. Sup._ , 36 (2003), 925–968.
* [20] J.-P. Serre, Cours d’arithmétique, Deuxième édition revue et corrigé, Presses Universitaires de France, Paris, 1977.
* [21] T. Shintani, On a Kronecker limit formula for real quadratic fields. J. Fac. Sci. Univ. Tokyo Sect. IA Math. 24 (1977), no. 1, 167–199.
* [22] C. L. Siegel, A simple proof of $\eta(-1/\tau)=\eta(\tau)\sqrt{\tau/i}$, Mathematika, 1 (1954), 4.
* [23] T. J. Stieltjes, Collected Papers, Vol. II, Springer-Verlag, New York, 1993.
* [24] E. C. Titchmarsh, Introduction to the Theory of Fourier Integrals, Oxford, Clarendon Press, 1937.
* [25] E. T. Whittaker & G. N. Watson, A course of modern analysis, Fourth edition, Reprinted, Cambridge University Press, New York, 1962.
* [26] D. Zagier, The dilogarithm function in geometry and number theory, Number theory and related topics, 231–249, Tata Inst. Fund. Res. Stud. Math., 12, Tata Inst. Fund. Res., Bombay; Oxford University Press, Oxford, 1989.
* [27] C. Zhang, Développements asymptotiques $q$-Gevrey et séries $Gq$-sommables, _Ann. Inst. Fourier_ 49 (1999), 227-261.
* [28] C. Zhang, Une sommation discrète pour des équations aux $q$-différences linéaires et à coefficients analytiques : théorie générale et exemples, in _Differential Equations and the Stokes Phenomenon_ , B.L.J Braaksma and al. ed., World Scientific, 309-329, 2002.
* [29] C. Zhang, Sur les fonctions $q$-Bessel de Jackson, Journal of Approximation Theory, 122 (2003), 208-223.
* [30] C. Zhang, Solutions asymptotiques et méromorphes d’équations aux $q$-différences, in _Théories Asymptotiques et Équations de Painlevé_ , Séminaires et Congrès 14 (2006), SMF, 341-356.
|
arxiv-papers
| 2009-05-08T21:02:59 |
2024-09-04T02:49:02.438562
|
{
"license": "Public Domain",
"authors": "Changgui Zhang (Lille, France)",
"submitter": "Changgui Zhang",
"url": "https://arxiv.org/abs/0905.1343"
}
|
0905.1357
|
# Exotic Grazing Resonances in Nanowires
Simin Feng and Klaus Halterman
Research and Intelligence Department, Naval Air Warfare Center, China Lake, CA 93555
###### Abstract
We investigate electromagnetic scattering from nanoscale wires and reveal, for
the first time, the emergence of a family of exotic resonances, or enhanced
fields, for source waves close to grazing incidence. These grazing resonances
can have a much higher Q factor and broader bandwidth, and are much less
susceptible to material losses than the well-known surface plasmon resonances
found in metal nanowires. Contrary to surface plasmon resonances, however,
these grazing resonances can be excited in both dielectric and metallic
nanowires and are insensitive to the polarization state of the incident wave.
This peculiar resonance effect originates from the excitation of long range
guided surface waves through the interplay of coherently scattered continuum
modes coupled with the azimuthal first order propagating mode of the
cylindrical nanowire. The nanowire resonance phenomenon revealed here can be
utilized in broad scientific areas, including: metamaterial designs,
nanophotonic integration, nanoantennas, and nanosensors.
###### pacs:
42.25.Fx, 41.20.Jb, 78.35.+c
With the ever increasing advances in nanofabrication techniques, many
remarkable optical phenomena have been discovered, thus stimulating
considerable interest in light scattering at the nanometer length scale.
Within the emerging field of nanoplasmonics, metallic nanostructures can be
tailored to harness collective optical effects, namely, surface plasmon
resonances (SPRs). Localized field enhancement is one of the key underlying
physical characteristics in promising new nanotechnological applications. Due
to relatively high energy confinement (Q factor) and small mode volume, SPRs
have been used in the construction of tightly packed photonic devices [1],
and play a major role in such applications [2-9] as surface-enhanced Raman
scattering [2, 3], photoconversion [4, 5], and metamaterial
designs [8, 9]. However, SPRs are extremely sensitive to material
loss, which diminishes the Q factor significantly and can limit its benefits
for the envisaged applications. Discovery of another subwavelength field-
enhancement mechanism would be greatly beneficial in future nanoscale
research.
It is well known that surface plasmons cannot be excited by TM polarization
($\bm{H}$ is perpendicular to the axis of the rod) [10, 11]. With TE
polarization SPRs can be strongly excited at normal incidence, but diminish as
the incident angle approaches grazing incidence. Grazing scattering has
broad applicability [12-15] and is a nontrivial and
longstanding problem due to singularities arising at zero grazing angle.
As the grazing angle approaches zero, intrinsic singularities in the
scattering solutions give rise to possible numerical instabilities. The general
solution intended for non-grazing incidence often fails to provide a clear
picture of this limiting case due to divergences arising at very small grazing
angles. Even in the macroscopic and long-wavelength regime, descriptions of
grazing behavior often lead to contradictions [15]. At grazing incidence,
scattering and propagation are inextricably linked via surface waves.
Physically meaningful concepts like scattering cross section and efficiency
$Q$ must therefore be redefined.
In this Letter, we investigate the unexplored and nontrivial grazing phenomena
in long nanowires. We undertake the analytic and numerical challenges inherent
to grazing scattering at the nanoscale and develop a robust analytic solution
to Maxwell’s equations for grazing incidence. Proper account of the guided
surface waves is accomplished by generalizing the definition of scattering
efficiency. Our calculations reveal a family of non-plasmonic resonances with
larger Q factors, broader bandwidth and are more sustainable to material loss
than SPRs. It is determined that these resonances originate from the
excitation of guided asymmetric surface waves corresponding to the $n=1$ mode
of the nanorods, akin to first order Sommerfeld waves, traditionally ascribed
to the symmetric $(n=0)$ case for TM waves propagating along a conducting
cylinder. These cyclic higher order $(n\neq 0)$ periodic solutions have been
historically overlooked, likely due to the typically high attenuation at long
wavelengths. It is remarkable that indeed grazing light can intricately couple
to these first order cyclic surface waves and contribute to exotic grazing
resonances. Contrary to the quasistatic nature [16] of SPRs, these grazing
resonances are strictly non-electrostatic. This phenomenon, i.e. non-
electrostatic resonances in the subwavelength regime, was brought to
attention in recent work involving the effect of volume plasmons on enhanced
fields in ultrasmall structures [17], revealing the richness of resonant
scattering at the nanoscale.
Consider an infinitely long cylinder of radius $a$ with its rotation axis
along the $z$-direction [cylindrical coordinates $(\rho,\phi,z)$], and with
permittivity $\epsilon_{1}$ and permeability $\mu_{1}$. The background
permittivity $\epsilon_{0}$ and permeability $\mu_{0}$, correspond to that in
vacuum. A plane TM wave is incident onto the cylinder with a grazing angle
$\theta$ with respect to the z-axis. Without loss of generality, the incident
wave resides in the x-z plane with the harmonic time dependence,
$\exp(-i\omega t)$. TM and TE modes are coupled in the scattered field due to
the non-normal incidence. The general solutionBohren generally involves an
infinite series of Bessel functions, and often poses numerical difficulties
for small grazing angles. We take an alternative approach to derive the
general solution. The starting point is the vector Helmholtz equation in
cylindrical coordinates, which is first solved for $E_{z}$ and $H_{z}$. The
remaining transverse field components are then easily determined [20]. By
matching tangential components of the fields at the cylinder surface
$(\rho=a)$, the general solution for arbitrary incident angles can be
obtained. The grazing solution is then derived by asymptotic expansion of the
general solution at the singularity $\theta=0$. This approach has the
advantage of a simpler analytical solution (written solely in terms of
elementary functions), and possessing only the $n=0$ and $n=1$ azimuthal modes. The
detailed derivation of the general and grazing solutions is beyond the scope
of this letter, and will be presented elsewhere [21]. Here it suffices to
provide the final calculated results. Also, it is clear that the given
geometry sets physical limits for the minimum angle of incidence,
$\theta_{\min}$, given by $\theta_{\min}=a/L$, where $L$ is the distance
between the observation and incident points. Thus in what follows,
$\theta\rightarrow 0$ implies $\theta\rightarrow\theta_{\min}$.
At grazing incidences, both the scattered waves and excited surface waves are
interconnected. To incorporate both effects, it is convenient to define the
$\theta$-dependent total $Q_{\theta}$ factor,
$Q_{\theta}\equiv Q^{s}_{\rho}\sin\theta+Q^{s}_{z}\cos\theta,$ (1)
where
$\begin{split}Q^{s}_{\rho}&=\frac{R}{2\pi
aI_{0}}\int_{0}^{2\pi}S^{s}_{\rho}(R,\phi)d\phi,\\\
Q^{s}_{z}&=\frac{1}{\pi(R^{2}-a^{2})I_{0}}\int_{a}^{R}\int_{0}^{2\pi}S^{s}_{z}(\rho,\phi)\rho
d\rho\,d\phi.\end{split}$ (2)
Here $I_{0}=\frac{1}{2}{\cal Y}_{0}|E_{0}|^{2}$ is the input intensity (${\cal
Y}_{0}$ is the vacuum admittance), and $R$ is the radius of the integration
circle around the rod. The Poynting vector components $S^{s}_{\rho}$ and
$S^{s}_{z}$ are along the $\hat{\rho}$ and $\hat{z}$ directions respectively,
and are calculated from the scattered waves. The $Q^{s}_{\rho}$ describes
resonant scattering and the $Q^{s}_{z}$ describes guided surface-wave
excitations. The $Q_{\theta}$ is a useful metric to characterize the resonant
scattering and the efficiency of energy channeling through the nanowire. The
$Q_{\theta}$ appropriately reduces to the traditional $Q$ when $\theta=90^{\rm
o}$ (normal incidence). Our studies have shown that $Q^{s}_{\rho}$ is
independent of $R$ for all points outside of the cylinder, as is the case for
the traditional $Q$. To further elucidate the power flow in the $z$-direction,
we define the effective flow area $A_{e}$:
$A_{e}\equiv\frac{\int_{a}^{R}\int_{0}^{2\pi}S_{z}(\rho,\phi)\rho
d\rho\,d\phi}{\max\left\\{\int_{0}^{2\pi}S_{z}(\rho,\phi)d\phi\right\\}}\,,$
(3)
where $S_{z}$ is calculated from the total field (scattered plus incident)
outside the rod. The effective flow area is a function of the incident
wavelength and the grazing angle, and it is useful in locating resonant peaks.
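As an illustration only (not from the original text), the sketch below shows how $Q^{s}_{\rho}$, $Q^{s}_{z}$ and $Q_{\theta}$ of Eqs. (1)-(2) could be evaluated numerically once the scattered-field Poynting components are available on a grid; the functions `S_rho_s` and `S_z_s` here are arbitrary placeholders, not the actual grazing-solution fields, and `integrate` is a small helper defined for the sketch.

```python
# Sketch of a numerical evaluation of Eqs. (1)-(2).  The Poynting components used
# below are arbitrary placeholders; in practice they come from the grazing solution.
import numpy as np

def integrate(y, x, axis=-1):
    # simple trapezoidal rule along the given axis
    y = np.moveaxis(y, axis, -1)
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

a, R, I0 = 15e-9, 45e-9, 1.0            # rod radius, integration radius (R = 3a), input intensity
theta = np.deg2rad(0.01)                # grazing angle

phi = np.linspace(0.0, 2 * np.pi, 721)
rho = np.linspace(a, R, 301)
RHO, PHI = np.meshgrid(rho, phi, indexing="ij")

S_rho_s = lambda r, p: 1e-3 * np.ones_like(p)                      # placeholder field
S_z_s = lambda r, p: np.exp(-(r - a) / a) * (1 + 0.2 * np.cos(p))  # placeholder field

Q_rho = R / (2 * np.pi * a * I0) * integrate(S_rho_s(R, phi), phi)
Q_z = integrate(integrate(S_z_s(RHO, PHI) * RHO, phi, axis=1), rho) / (np.pi * (R**2 - a**2) * I0)
Q_theta = Q_rho * np.sin(theta) + Q_z * np.cos(theta)
print(Q_rho, Q_z, Q_theta)
```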
The permittivity of the silver nanorods was obtained from a polynomial fit to
experimental data [22], appropriate for the visible spectrum of interest.
For the results below, we take $R=3a$ with $a=15~{}{\rm nm}$, and all field
quantities are normalized with respect to the input field. For a silver
nanorod, the SPR at $\lambda\approx 309$ nm can be excited by a TE wave at
normal incidence. The frequency of the SPR is insensitive to the rod
diameter [16]; however, the $Q$ factor is sensitive to material loss. At low
grazing angles, this picture changes dramatically: a series of non-plasmonic
resonances emerges. These resonances have a much higher $Q$ and broader
bandwidth than the SPRs, as shown in Fig. 1 and the left plot of Fig. 2.
Moreover, these grazing resonances are much less susceptible to material loss
than the usual SPR (Fig. 1). As the incident angle transitions from normal to
grazing, the SPR continuously transforms into new resonant modes with a
corresponding redshift of the resonant frequencies. This result is consistent
with the calculated waveguide modes for the nanorod, shown in the right plot
of Fig. 2. As $k_{z}/k_{0}$ tends to unity, small changes in the grazing angle
result in a significant shift in the resonant wavelength. This explains the
features found in the left plot of Fig. 2, where the resonant peaks
continuously shift to longer wavelengths without cutoff when approaching the
minimum angle. Due to the material dispersion, the grazing resonances cannot
in general be trivially scaled to other spectral regimes. The Q factor for
dielectric nanorods is about half of that for metallic rods of the same
diameter [21].
Figure 1: Comparison of surface plasmon and cyclic Sommerfeld resonances:
Left: TE mode at normal incidence and SPR excitation at 309 nm. Right: TE mode
incident at the grazing angle of $0.01^{\rm o}$ and the corresponding
resonance at about 380 nm. Solid Blue: without material loss. The $Q$ of the
grazing resonance is about 10 times higher than that of SPR. Dashed Green:
with material loss. The $Q$ of the grazing resonance is then about 23 times
higher than that of SPR. Figure 2: Cyclic Sommerfeld resonances and waveguide
modes: At low grazing angles, a family of guided cyclic periodic ($n=1$)
surface waves can be excited. Left plot: the peaks from low to high correspond
to $\theta=1^{\rm o}$, $\theta=0.5^{\rm o}$, $\theta=0.1^{\rm o}$,
$\theta=0.05^{\rm o}$, $\theta=0.01^{\rm o}$, $\theta=0.005^{\rm o}$, and
$\theta=0.001^{\rm o}$. The $Q$ factor and the wavelength of grazing
resonances increase continuously as the grazing angle approaches the minimum
angle. Right plot: Calculated waveguide modes of the $n=1$ azimuthal mode at
the low grazing angles for the silver nanorod. These modes are related to the
first-order grazing resonances shown on the left plot. No cutoff frequency
exists for the $n=1$ mode. Figure 3: The $Q$ factors and power flows at near-
zero grazing angles: Upper-left: Wavelength of CSR vs. grazing angle
calculated from the maximum $Q_{\theta}$ (Solid Blue), from the maximum
$A_{e}$ (Dashed Green), and from the minimum of the $n=1$ denominator (Dashed-
Dot Red). The three curves coincide with each other. Upper-right: Total
$Q_{\theta}$ factor vs. grazing angle. Lower-left: Ratio of the power flow in
the z-direction inside ($P^{c}_{z}$) and outside ($P^{s}_{z}$) the rod.
Negative value is the result of the backward propagation inside the rod.
Lower-right: Ratio of the $Q^{s}_{z}$ and the total $Q_{\theta}$ factor.
The grazing resonances are correlated with the minimum of the denominator
corresponding to the $n=1$ expansion coefficient of the scattered field, as
shown by the red dashed-dot curve in the upper-left panel of Fig. 3. These
coefficients represent the $n=1$ cyclic periodic surface waves. Thus, the
grazing resonances result from the excitation of first-order guided waves,
which travel along a spiraling trajectory; constructive interference
significantly enhances the Q factor of the resonant scattering. It is interesting
to examine the relationship between the grazing resonances and the resonant
peaks extracted from $A_{e}$. The upper-left panel of Fig. 3 demonstrates
consistency among the three curves identifying the resonant wavelengths. The
enhancement of the $Q$ factor at near-zero grazing angle is due to guided
asymmetric surface waves compounded with a coherent superposition of
scattering states, which reinforces the resonant effect. As the angle becomes
smaller, $Q^{s}_{z}$, which describes the resonant guiding, increases and
becomes dominant in the total $Q_{\theta}$ factor, as shown in the lower-right
panel of Fig. 3. Another interesting phenomenon is that while the excited
surface wave travels forward outside the nanorod, the power propagates
backward inside the rod due to the boundary conditions and negative
permittivity of silver. The lower-left panel of Fig. 3 shows the ratio of the
power inside ($P^{c}_{z}$) and outside ($P^{s}_{z}$) the rod in the
$z$-direction. As the angle decreases, the scattering in the radial direction
becomes weaker and wave guiding along the rod becomes dominant. Since the
power inside the rod arises from radial scattering, the ratio
$|P^{c}_{z}/P^{s}_{z}|$ decreases as $\theta\rightarrow\theta_{\min}$.
Figure 4: Radially symmetric scattering and backward propagation:
$\theta\approx 0.001^{\rm o}$ at the resonance $\lambda=406$ nm. Upper-left:
Poynting vector $(S_{\rho})$ in the $\rho$-direction. The $S_{\rho}$ is nearly
uniform in the x-y plane. Lower-left: Vector plot of the Poynting vector
$(S_{x},S_{y})$. Power in the x-y plane flows radially towards the center of
the rod. Upper-right: The z-direction Poynting $(S_{z})$ inside and outside
the rod. The $S_{z}$ is asymmetric and flows backward inside the rod. Lower-
right: Contour plot of the $S_{z}$ further showing the asymmetric distribution
in the x-y plane. Figure 5: Intensity of the total electric field at the
surface inside and outside the rod: The intensity shows a 2D standing-wave
pattern in the azimuthal and propagation directions. Upper plot: Just outside
the surface $\rho=a^{+}$. Lower plot: Just inside the surface $\rho=a^{-}$.
Vertical axis: arc length along the circumference of the rod. Horizontal axis:
distance along the axis of the rod. Figure 6: Influence of the radius on
grazing resonances: $\theta=0.001^{\rm o}$. Left y-axis for the solid-blue
curve. Right y-axis for the dashed-green curve. For simplicity, the integral
in $Q_{\theta}$ was performed at the surface $\rho=a^{+}$ over the azimuthal
direction only.
To better visualize the spatial characteristics of the resonant fields, we
show in Fig. 4 the Poynting vector from several perspectives at near-zero
grazing incidence. As shown in the left panel, the power
($S_{\rho},S_{x},S_{y}$) in the x-y plane is toward the center of the rod and
has nearly perfect circular symmetry. The degree of symmetry depends on how
small the angle is. However, once the energy penetrates the wire, it travels
asymmetrically backward along the wire, as shown in the right panel. Figure 5
shows the intensity of the total electric field close to the surface $\rho=a$
inside and outside the nanorod. It reveals a two-dimensional standing wave
pattern in the azimuthal and propagation directions, as the result of phase-
matched spiraling propagation along the nanorod. The geometric influence on
grazing resonances is shown in Fig. 6. Both the resonant wavelength and the
total $Q_{\theta}$ increase with increasing radius of the nanorod.
In summary, we have investigated resonant scattering from nanowires at near-
zero grazing incidence through a newly developed grazing solution. We also
interpreted grazing resonances in terms of cyclic Sommerfeld waves and pointed
out the non-plasmonic nature of grazing resonances. This result enriches one's
fundamental understanding of scattering at the nanoscale and its relevance to
many areas within nanotechnology. Since grazing resonances are associated with
the natural modes of a nanorod, they may also be excited by other means. The
merit of high $Q$, broadband, and low loss may render this type of field
enhancement as an attractive alternative mechanism for enhanced-field
applications in the nano-regime.
The authors gratefully acknowledge the sponsorship of program manager Dr. Mark
Spector and NAVAIR’s ILIR program from ONR.
## References
* (1) S. A. Maier, IEEE J. Sel. Top. Quantum Electron. 12, 1671 (2006).
* (2) S. Nie and S. R. Emory, Science 275, 1102 (1997).
* (3) H. Xu, E. J. Bjerneld, M. Käll, and L. Börjesson, Phys. Rev. Lett. 83, 4357 (1999).
* (4) K. R. Catchpole and A. Polman, Opt. Express 16, 21793 (2008).
* (5) C. Rockstuhl, S. Fahr, and F. Lederer, J. Appl. Phys. 104, 123102 (2008).
* (6) K. Halterman, J. M. Elson, and S. Singh, Phys. Rev. B 72, 075429 (2005).
* (7) S. Feng and J. M. Elson, Opt. Express 14, 216 (2006).
* (8) J. Valentine, S. Zhang, T. Zentgraf, E. U. Avila, D. A. Genov, G. Bartal, and X. Zhang, Nature 455, 376 (2008).
* (9) V. M. Shalaev, W. Cai, U. K. Chettiar, H.-K. Yua, A. K. Sarychev, V. P. Drachev, and A. V. Kildishev, Opt. Lett. 30, 3356 (2005).
* (10) J. P. Kottmann and O. J. F. Martin, Opt. Lett. 26, 1096 (2001).
* (11) B. S. Luk’yanchuk and V. Ternovsky, Phys. Rev. B 73, 235432 (2006).
* (12) P. Rousseau, H. Khemliche, A. G. Borisov, and P. Roncin, Phys. Rev. Lett. 98, 016104 (2007)
* (13) Z.-H. Gu, I. M. Fuks, and M. Ciftan, Opt. Lett. 27, 2067 (2002).
* (14) A. Lind and J. Greenberg, J. Appl. Phys. 37, 3195 (1965).
* (15) D. E. Barrick, IEEE Trans. Antennas Propag. 46, 73 (1998).
* (16) D. R. Fredkin and I. D. Mayergoyz, Phys. Rev. Lett. 91, 253902 (2003).
* (17) E. Feigenbaum and M. Orenstein, Phys. Rev. Lett. 101, 163902 (2008).
* (18) M. J. King and J. C. Wiltse, IEEE Trans. Antennas Propag. 10, 246 (1962).
* (19) C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles, (Wiley, Weinheim, Germany, 2004).
* (20) J. D. Jackson, Classical Electrodynamics, 3rd ed. (Wiley, New York, 1999).
* (21) S. Feng, K. Halterman, and P. L. Overfelt (in preparation).
* (22) M. M. J. Treacy, Phys. Rev. B 66, 195105 (2002).
|
arxiv-papers
| 2009-05-08T23:08:23 |
2024-09-04T02:49:02.447738
|
{
"license": "Public Domain",
"authors": "Simin Feng and Klaus Halterman",
"submitter": "Simin Feng",
"url": "https://arxiv.org/abs/0905.1357"
}
|
0905.1405
|
# Lipid Domain Order and the Algebra of Morphology
Tristan S. Ursell and Rob Phillips (address correspondence to: phillips@pboc.caltech.edu)
Department of Applied Physics, California Institute of Technology, Pasadena,
CA 91125
Lipid membranes regulate the flow of materials and information between cells
and their organelles. Further, lipid composition and morphology can play a key
role in regulating a variety of biological processes. For example, viral
uptake, plasma membrane tension regulation, and the formation of caveolae all
require the creation and control of groups of lipids that adopt specific
morphologies. In this paper, we use a simplified model mixture of lipids and
cholesterol to examine the interplay between lipid phase-separation and
bilayer morphology. We observe and theoretically analyze three main features
of phase-separated giant unilamellar vesicles. First, by tracking the motion
of ‘dimpled’ domains, we measure repulsive, elastic interactions that create
short–range translational and orientational order, leading to a stable
distribution of domain sizes, and hence maintaining lateral heterogeneity on
relatively short length scales and long time scales. Second, we examine the
transition to ‘budded’ domain morphologies, showing that the transition is
size-selective, and has two kinetic regimes, as revealed by a calculated phase
diagram. Finally, using observations of the interactions between dimpled and
budded domains, we build a theoretical framework with an elastic model that
maps the free energies and allowed transitions in domain morphology upon
coalescence, to serve as an interpretive tool for understanding the algebra of
domain morphology. In all three cases, the two major factors that regulate
domain morphology and morphological transitions are the domain size and
membrane tension.
Cellular membranes are a complex mixture of lipids, membrane proteins, and
small molecules (e.g. sterols) [1, 2]. The membrane serves mainly as a
chemical barrier and substrate for membrane proteins that are responsible for
regulating the movement of materials and information across the membrane.
However, there are a host of important tasks that require a change in membrane
morphology, such as endo- and exo-cytosis [3, 4], vesicular trafficking from
the endoplasmic reticulum and Golgi apparatus to the plasma membrane [5], and
the regulation of tension in the plasma membrane [6]. While the role of
proteins cannot be ignored in these instances (e.g. clathrin, COPI, COPII,
caveolin, SNAREs, actin) [7, 8, 9, 10, 11, 12, 13, 14, 15, 16], the lipid
composition and bilayer morphology of the membrane play an important part [5,
17, 18, 19, 20, 21, 22]. With that in mind, our goals in this paper are to
examine how lipids in a model multi-component membrane spatially organize, how
this organization relates to membrane morphology, or specifically membrane
mechanics, and to examine how transitions in membrane morphology are regulated
by bilayer mechanical properties.
In vitro studies have conclusively shown that lipids are capable of lateral
self-organization [23, 24, 25], facilitated by the structure of their
hydrophobic regions and the presence of intercalated sterols. Saturated lipids
and cholesterol are sequestered from the membrane mix to form ‘lipid rafts’
that serve as platforms for signaling and material transport across the
membrane, with sizes ranging from $\sim 50-500\,\mbox{nm}$ [2, 6, 26, 27, 28,
29, 30, 31, 32, 33, 34]. In addition to their unique chemical properties and
protein-specific interactions, lipid rafts are mechanical entities in a
thermal environment, and as such, our analysis focuses on continuum and
statistical mechanics of phase-separated bilayers.
The remainder of the paper is organized as follows. The first section examines
how dimpled domains, which exert repulsive forces on each other, spatially
organize themselves into ordered structures as their areal density increases.
Specifically, we measure the potential of mean force and orientational order,
finding that domains exhibit orientational and translational order on length
scales much larger than the domains themselves. The second section studies the
transition to a spherical ‘budded’ morphology [35, 36]. Using a mechanical
model that combines effects from bending, line tension and membrane tension,
we predict and observe size-selective budding transitions on the surface of
giant unilamellar vesicles, and derive a phase diagram for the budding
transition. The final section considers the three lipid domain morphologies –
flat, dimpled and budded – and constructs a set of transition rules that
dictate the morphology resulting from the coalescence of two domains.
Spatial Organization of Dimpled Domains
Our analysis begins by viewing the bilayer as a mechanical entity, endowed
with a resistance to bending [37], quantified by a bending modulus
($\kappa_{b}$) with units of energy; a resistance to stretch under applied
membrane tension ($\tau$) [37], with units of energy per unit area; and in the
case where more than one lipid phase is present, an energetic cost at the
phase boundary, quantified by an energy per unit length ($\gamma$) [38, 39].
For a given domain size, the line tension between the two phases competes with
the applied membrane tension and bending stiffness to yield morphologies that
reduce the overall elastic free energy. In particular, the bending stiffness
and membrane tension both favor a flat domain morphology. Conversely, the line
tension prefers any morphology (in three dimensions) that reduces the phase
boundary length. A natural length-scale, over which perturbations in the
membrane disappear, is established by the bending stiffness and membrane
tension, given by $\lambda=\sqrt{\kappa_{b}/\tau}$. Comparing this ‘elastic
decay length’ with domain size indicates the set of possible domain
morphologies. If the domain size is on the order of, or smaller than, the
elastic decay length, flat and dimpled morphologies arise, while domains
larger than $\lambda$ generally give rise to the budded morphology [35, 36,
40, 41]. This rule of thumb is based on the fact that bilayer deformations
from dimpling are concentrated within a few elastic decay lengths of the phase
boundary, while domain budding energetics are governed by basis shapes much
larger than $\lambda$.
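As a rough illustration of this rule of thumb, the short Python sketch below computes the elastic decay length from assumed values of the bending modulus and tension (the numbers are representative of the estimates made later in this work, not independent measurements) and classifies hypothetical domain sizes:

```python
import numpy as np

# Illustrative (assumed) membrane parameters, in units of kBT and nm.
kappa_b = 25.0       # bending modulus, kBT
tau     = 2.4e-4     # membrane tension, kBT / nm^2

# Elastic decay length lambda = sqrt(kappa_b / tau).
lam = np.sqrt(kappa_b / tau)   # nm
print(f"elastic decay length ~ {lam:.0f} nm")

# Rule of thumb: domains comparable to or smaller than lambda can be
# flat or dimpled; domains much larger than lambda tend to bud.
for r_nm in (100.0, 300.0, 2000.0):   # hypothetical domain radii, nm
    regime = "flat/dimpled" if r_nm <= lam else "budded (likely)"
    print(f"domain radius {r_nm:6.0f} nm -> {regime}")
```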
Dimpled domains are characterized by a dome-like shape with finite slope at
the boundary between the two lipid phases, as shown in Figs. 1 and 2(b). In
previous work we found a distinct flat-to-dimpled transition, regulated by
line tension and domain area [42], where if all the elastic parameters are
constant in time, the domain is either flat or dimpled, but cannot make
transitions between those states (i.e. there is no coexistence regime). Hence,
domains below a critical size lie flat, and if all other membrane properties
are constant, the only way domains transition from flat to dimpled is by
coalescing to form larger domains that are above a critical size. An important
outcome of domain dimpling is the emergence of a membrane-mediated repulsive
interaction between domains that tends to inhibit coalescence. Intuitively,
the origin of this force is that dimples deform their surrounding membrane,
but this deformation decays back to an unperturbed state within a few elastic
decay lengths of the phase boundary. Two domains that are within a few elastic
decay lengths of each other have overlapping deformed regions, and thus the
total elastic free energy depends on the distance between the domains, leading
to a net repulsion. A relatively simple mechanical model and previous
measurements show that the interaction between two dimpled domains can be
approximated by the pair potential $V(r)\propto e^{-r/\lambda}$, where $r$ is
the center-to-center separation between the domains [42].
This repulsive interaction arrests coalescence, and hence significantly
affects the evolution of domain sizes in a phase-separated membrane. For a
simple physical model, in the absence of any interaction, domains would
diffuse [43] and coalesce at a rate such that domain size scales as $t^{1/3}$
[44, 45, 46]. This sets the time-scale for full phase separation on a typical
giant unilamellar vesicle (GUV) at $\sim 1$ minute (see Fig. 2(c) and [23]).
However, viewing repulsive interactions as an energetic barrier to domain
growth, and given the measured barrier height of $\sim 5k_{B}T$ (with
$k_{B}=1.38\times 10^{-23}\,J/\mbox{K}$ and $T\simeq 300\,\mbox{K}$) [42], the
rate of domain coalescence slows by the Arrhenius factor $e^{-5}\approx 0.007$. A
clear example of the difference in the rate of domain growth, with and without
elastic interactions, is shown in Fig. 2(d and c), respectively, and [47].
Thus elastic repulsion is a plausible mechanism by which lipid lateral
heterogeneity could be maintained on the hour-long time scale required for a
cell to recycle (and hence partially homogenize) the plasma membrane [48].
Alternative schemes have been proposed that balance continuous rates of
membrane recycling and domain coalescence to yield a stable domain size
distribution [36].
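A minimal numerical check of this slowdown, using the $\sim 5\,k_{B}T$ barrier and the $\sim 1$ minute unhindered coarsening time quoted above (the simple rate-scaling below is an illustration added here, not the kinetic analysis of [42] or [47]):

```python
import numpy as np

barrier_kBT = 5.0   # measured barrier between dimpled domains, in units of kBT
t_free = 60.0       # s, rough coarsening time without elastic repulsion

# Arrhenius suppression of the coalescence rate.
suppression = np.exp(-barrier_kBT)     # ~0.0067
t_hindered = t_free / suppression      # effective coarsening time scale

print(f"rate suppression factor ~ {suppression:.3f}")
print(f"hindered coarsening time ~ {t_hindered / 3600:.1f} hours")
```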
We examined the role these elastic interactions play in the spatial
organization of lipid domains. Given that all the domains mutually repel each
other, as the areal density of domains increases, the arrangement of domains
that minimizes the elastic free energy takes on distinctly hexagonal order, so
as to maximize the separation between all domains. Indeed, the arrangement of
repulsive bodies on a sphere is a well studied problem in physics [49, 50,
51], and a dominant feature of such systems is the emergence of hexagonal and
translational order. We measured the strength of this organizing effect by
tracking the thermal motion of dimples on the surface of GUVs, and calculating
the radial distribution function (see ‘Materials and Methods’). For time-
courses that have no coalescence events, the vesicle and its domains are in
quasistatic equilibrium; thus the negative natural logarithm of the radial
distribution function is a measure of the potential of mean force between
domains. Our previous theoretical and experimental work showed that the
elastic interaction between domains at low areal density is well approximated
by a pair potential of the form $V(r)\propto e^{-r/\lambda}$ [42]. As the
domain areal density increases, the domains adopt spatial arrangements that
maximize their mean spacing. This can be understood in terms of the free
energy of the entire group of domains. If a domain deviates from this spatial
arrangement, the sum of the elastic interaction energy with its neighboring
domains increases, providing a mild restoring force to its original position.
Thus the potential of mean force develops energy wells, up to $\sim 2\,k_{B}T$
in depth, that confine domains to a well-defined spacing, as shown in Fig.
3(b-f). It should be noted that such a restoring force can arise from the
combined pair repulsion of the hexagonally-arranged neighboring domains, and
does not necessarily mean that there are attractive, non-pairwise
interactions.
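For readers who wish to reproduce this analysis, the following Python sketch shows one way the potential of mean force can be estimated from tracked domain positions. It is a simplified illustration: the histogram-based $g(r)$ below omits the finite-field-of-view correction described in ‘Materials and Methods’, and the function and variable names are our own.

```python
import numpy as np

def potential_of_mean_force(positions, r_max, n_bins=60):
    """positions: list of (N_i, 2) arrays of domain centers (um), one per frame.
    Returns bin centers (um) and the PMF in kBT, up to an additive constant."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for pts in positions:
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        iu = np.triu_indices(len(pts), k=1)          # unique pairs only
        counts += np.histogram(d[iu], bins=edges)[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    shell = 2.0 * np.pi * centers * np.diff(edges)   # 2D shell areas
    g = counts / (shell * counts.sum() / shell.sum())  # crude normalization
    g[g == 0] = np.nan
    return centers, -np.log(g)                       # PMF / kBT

# Hypothetical usage with synthetic frames of 20 domains in a 30x30 um region:
frames = [np.random.rand(20, 2) * 30.0 for _ in range(200)]
r, pmf = potential_of_mean_force(frames, r_max=15.0)
```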
The mutual repulsion and resulting energetic confinement lead to an effective
lattice constant that depends on domain size and packing density. For example,
domains may exhibit a well-defined spacing for first, second (Fig. 3(b)),
third (Fig. 3(c-d)), and fourth (Fig. 3(e-f)) nearest-neighbors, corresponding
to a correlation in the position of domains over a few microns. Interestingly,
this means that by forming dimpled domains, the motion of individual lipids
can be correlated on length scales up to $\sim 10^{4}$ times larger than the
size of an individual lipid. As indicated by the exponential decay of these
‘ringing’ potentials (see Fig. 3(b-f)), the length scale over which this
correlation in motion exists is limited by both the relatively low strength of
pair repulsion (relative to $k_{B}T$) and the dispersion of domain sizes.
In the picture that emerges, lipid domains exhibit a transition similar to
condensation in a liquid–gas system. At the lowest areal densities, the motion
of domains is analogous to a ‘gas’ of particles that occasionally have
repulsive pairwise interactions. As the domain areal density increases a
‘condensed’ phase of domains emerges, identified by its translational and
orientational order. As the areal density of dimpled domains increases, the
system exhibits three qualitative effects: i) the lattice constant decreases,
as it must, to accommodate more domains per unit area; ii) the effective
confinement grows stronger because the membrane in between the domains is more
severely deformed by the closer packing; iii) hexagonal order clearly emerges,
as shown by the characteristic peaks in the time-averaged Fourier transforms
of domain positions in Fig. 3(b-f)(inset). The time-averaged Fourier transform
is the arithmetic mean of the Fourier transforms of domain positions from each
image in a data set, with the peaks corresponding to hexagonal order somewhat
‘smeared’ by the rotational diffusion of the entire group of domains.
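A sketch of this kind of time-averaged Fourier analysis is given below; the grid resolution and the use of a delta-like density field are our own implementation choices, not a prescription taken from the original analysis.

```python
import numpy as np

def time_averaged_structure(frames, box, n_grid=128):
    """frames: list of (N, 2) arrays of domain centers; box: linear size of the
    imaged region. Returns the time-averaged 2D Fourier magnitude of the
    domain-position density; hexagonal packing appears as six smeared peaks."""
    avg = np.zeros((n_grid, n_grid))
    for pts in frames:
        density = np.zeros((n_grid, n_grid))
        idx = np.clip((pts / box * n_grid).astype(int), 0, n_grid - 1)
        density[idx[:, 0], idx[:, 1]] = 1.0      # delta-like density field
        avg += np.abs(np.fft.fftshift(np.fft.fft2(density)))
    return avg / len(frames)

# Hypothetical usage with the synthetic frames from the previous sketch:
# spectrum = time_averaged_structure(frames, box=30.0)
```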
To quantify the lattice constant and correlation length of interacting dimpled
domains, we added a phenomenological correction term to the previously
mentioned pair potential to account for interactions between multiple domains,
such that the total potential of mean force has the form
${V_{\mbox{\tiny fit}}(r)=a_{1}e^{-\frac{r}{\lambda_{1}}}+a_{2}e^{-\frac{r}{\lambda_{2}}}J_{0}\left(2\pi(r-r_{o})/\lambda_{3}\right),}$ (1)
where $J_{0}$ is the $0^{\mbox{\tiny th}}$-order Bessel Function of the first
kind, and $a_{1}$, $\lambda_{1}$, $a_{2}$, $\lambda_{2}$, $r_{o}$ and
$\lambda_{3}$ are fit parameters. While this equation offers little insight
into the underlying physics of densely interacting domains, the description
does an excellent job of capturing the observed features of interactions
between multiple dimpled domains, as demonstrated in Fig. 3. Using this
formula, we extracted the correlation length ($\lambda_{2}$) and lattice
constant ($\lambda_{3}$), whose ratio, $\lambda_{2}/\lambda_{3}$, is a measure
of the translational order in the system, which is shown to increase with
domain density. As domain areal density increases, the elastic free energy
confines domains to adopt a well-defined mean spacing with hexagonal order.
Thus the motion of domains is correlated over multiple layers of neighboring
domains, as shown in Fig. 4.
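In practice, eqn. 1 can be fit to the measured potential of mean force with a standard nonlinear least-squares routine; the sketch below uses scipy.optimize.curve_fit and scipy.special.j0, and the initial guess p0 is an assumed, illustrative value rather than one taken from the data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

def v_fit(r, a1, lam1, a2, lam2, r0, lam3):
    """Eqn. 1: short-range pair repulsion plus a decaying oscillatory term."""
    return (a1 * np.exp(-r / lam1)
            + a2 * np.exp(-r / lam2) * j0(2 * np.pi * (r - r0) / lam3))

# Hypothetical usage: r (um) and pmf (kBT) from the radial-distribution analysis.
# p0 is an assumed initial guess: amplitudes in kBT, lengths in um.
# p0 = [5.0, 0.5, 2.0, 3.0, 1.0, 2.0]
# mask = ~np.isnan(pmf)
# popt, pcov = curve_fit(v_fit, r[mask], pmf[mask], p0=p0)
# lam2, lam3 = popt[3], popt[5]
# order = lam2 / lam3   # translational-order measure plotted in Fig. 4
```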
The dimpled domains that exhibit this behavior arise in situations where the
tension is low and the elastic decay length is longer than the domain size. In
the regime where the elastic decay length is short compared to domain size,
‘budded’ domains emerge as a morphology with distinct transition rules and
interactions.
The Budding Transition
As in the analysis of domain dimpling, the bending stiffness, membrane tension, line tension, and domain size all play a role in the transition to a budded domain morphology. Many energetic models have been proposed that describe
morphological changes which result in budding and other more complex
morphologies [40, 52, 53, 54, 55, 56]. One of the simplest models, and yet
most reconcilable with experiment, is the ‘spherical’ budding model. This
model has its foundations in classical ‘sessile’ droplet wetting theory [57],
and posits that the domain is, at all times, a section of a sphere [6, 35]. We
will recapitulate this model here, and explore some of its implications for
our experiments. This model ignores deformations near the phase boundary, and
cannot capture the existence of the dimpled state, but is a reasonable model
to employ in the regime where the elastic decay length is smaller than the
domain size.
The budding domain is characterized by a wrapping angle $\theta$, where
$\theta=0$ corresponds to a flat domain and $\theta=\pi$ corresponds to the
encapsulation of a small volume by a spherical bud, as shown in Fig. 1(c) and
5(a). The bending energy of a budding domain is calculated as a fraction of
the bending energy of a sphere, given by
${G_{\mbox{\tiny bend}}=2\kappa_{b}\int\left(H-c_{o}\right)^{2}\,{\rm d}\mathcal{A}=8\pi\kappa_{b}\frac{\mathcal{A}}{4\pi R^{2}}\left(1-2c_{o}R\right)+C,}$ (2)
where $H=1/R$ is the mean curvature, $R$ is the radius of curvature of the
domain, $8\pi\kappa_{b}$ is the bending energy of a sphere, $\mathcal{A}$ is
the domain area, and $c_{o}$ is the spontaneous curvature of the domain,
which, for simplicity, we assume is zero. A constant energy $C$, that does not
depend on domain shape, is omitted. As the domain becomes more spherical, the
areal footprint of the domain shrinks, as shown in Fig. 5(a), and work must be
done against the applied membrane tension, given by
${G_{\mbox{\tiny tens}}=-\tau\pi(R\sin{\theta})^{2}.}$ (3)
The driving force for budding is the reduction of phase-boundary line tension,
provided by
${G_{\mbox{\tiny line}}=\gamma 2\pi R\sin{\theta}.}$ (4)
Finally, for all reasonable membrane tensions, the domain area is conserved
during any change in morphology, and is given by
${\mathcal{A}=2\pi R^{2}(1-\cos{\theta}).}$ (5)
This constraint equation links $\mathcal{A}$ and $R$, allowing us to eliminate
$R$ from the total free energy, $G=G_{\mbox{\tiny bend}}+G_{\mbox{\tiny
tens}}+G_{\mbox{\tiny line}}$. After some rearrangement, the total free energy
can be written in a compact form,
${G=4\pi\kappa_{b}[\,\,\underbrace{\chi\sqrt{\alpha}\sqrt{\frac{1+\cos\theta}{8\pi}}}_{\mbox{\tiny line tension}}-\underbrace{\alpha\frac{1+\cos\theta}{8\pi}}_{\mbox{\tiny membrane tension}}+\underbrace{1-\cos\theta}_{\mbox{\tiny bending}}\,\,],}$ (6)
where the dimensionless area, $\alpha=\mathcal{A}/\lambda^{2}$, and
dimensionless line tension, $\chi=\gamma\lambda/\kappa_{b}$, emerge as the
regulators of domain budding. The stable morphologies are those found at
energy minima, given by $\partial G/\partial\theta=0$, with only flat
($\theta=0$) or budded ($\theta=\pi$) morphologies satisfying this equation
(in the absence of spontaneous curvature). Figure 5(b) shows the free energy
of budding as a function of wrapping angle $\theta$ and the line tension
$\chi$. From this plot, one can readily see that there are two special values
of the line tension; the first, shown in blue, is where the free energy
difference between the flat and budded states equals zero, but an energy
barrier exists between them. The second special value of line tension, shown
in red, is where the energy barrier between flat and budded morphologies
disappears, and budding becomes a spontaneous process. This graphical analysis
primes us to calculate the budding phase diagram. From the solutions for the
energy minima, it can be shown that the phase diagram has three regions, as
shown in Fig. 6: i) for certain values of $\alpha$ and $\chi$ both flat and
budded domains are stable (coexistence), but flat domains have a lower elastic
free energy; ii) in an adjacent region, both morphologies are stable
(coexistence), but buds have a lower elastic free energy; iii) in the
remaining region only buds are stable (single-phase). The boundary between the
regions of the phase diagram that have two stable morphologies (coexistence)
versus one stable morphology (single-phase) is given by the inflection point
$\left.(\partial^{2}G/\partial\theta^{2})\right|_{\theta=0}=0$, which defines
the line tension
${\chi_{\mbox{\tiny bud}}=8\sqrt{\frac{\pi}{\alpha}}+\sqrt{\frac{\alpha}{\pi}},}$ (7)
above which only buds are stable, or alternatively stated, there is no energy
barrier to the budding process (see Fig. 6). Given that $\chi$ is a constant
material parameter for constant tension, this equation specifies a size range
over which spontaneous domain budding will occur,
${\frac{\pi}{4}\left(\chi-\sqrt{\chi^{2}-32}\right)^{2}<\alpha<\frac{\pi}{4}\left(\chi+\sqrt{\chi^{2}-32}\right)^{2}.}$ (8)
Thus budding, and in particular spontaneous budding, is a size-selective
process that can only occur if $\chi>4\sqrt{2}$. Membrane tension and line
tension can be estimated by measuring size-selective spontaneous budding on
the surface of a phase-separated vesicle. In a few instances, we were able to capture the onset of size-selective budding, though the sensitivity to initial conditions and timing makes this particularly difficult. Sample temperature
is a coarse knob that allows us to change the state of tension on the vesicle
surface. Though the exact value of the thermal area expansion coefficient for
bilayers varies with composition, a good approximate value in the temperature
range of interest is $c_{\mbox{\tiny exp}}\simeq 5\times 10^{-3}K^{-1}$ [58,
59, 60]. In Fig. 7(a-c), the temperature is increased slightly (from $\sim
18\,C$ to $\sim 20\,C$, see ‘Materials and Methods’), increasing vesicle area
by approximately 1% while maintaining the enclosed volume, thus lowering the
tension and driving the system into the spontaneous budding regime. The
average size of budding domains is $r=0.93\pm 0.18\,\mu\mbox{m}$. Using eqn.
8, and taking $\kappa_{b}=25\,k_{B}T$ as a nominal value for the bending
modulus of a domain [24, 37, 61, 62], we can solve the equations defined by
the upper and lower bound to find the line tension and membrane tension. From
this analysis, we estimate $\tau\simeq 2.4\times
10^{-4}\,k_{B}T/\mbox{nm}^{2}$ and $\gamma\simeq 0.45\,k_{B}T/\mbox{nm}$.
Using the tension and our assumption of bending modulus, we can also calculate
the elastic decay length and dimensionless domain size to find $\lambda\simeq
320\,\mbox{nm}$ and $\alpha\simeq 26$. This membrane tension, which is within
the range set by typical free vesicle [24, 37] and unstressed plasma membrane
experiments [63, 64], sets the dimensionless domain area larger than one, and
hence suggests that the spherical budding model is a good approximation. This
estimate of line tension is consistent with previous measurements [24, 65],
and quantitatively matches results from our previous work [42].
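The arithmetic behind these estimates follows directly from eqn. 8: writing the bounds in terms of $x=\sqrt{\alpha/\pi}=r/\lambda$, they are the two roots of $x^{2}-\chi x+8=0$, so their product is $8$ and their sum is $\chi$. The Python sketch below (our own reconstruction, using only the numbers quoted in the text) inverts these relations to recover the quoted estimates:

```python
import numpy as np

kappa_b = 25.0                  # kBT, nominal bending modulus (from the text)
r_low, r_high = 750.0, 1110.0   # nm, spontaneous budding range (0.93 +/- 0.18 um)

# Eqn. 8 with x = sqrt(alpha/pi) = r/lambda: the bounds are the roots of
# x^2 - chi*x + 8 = 0, so x_low*x_high = 8 and x_low + x_high = chi.
tau = 8.0 * kappa_b / (r_low * r_high)     # kBT/nm^2, from the product of roots
lam = np.sqrt(kappa_b / tau)               # nm, elastic decay length
chi = (r_low + r_high) / lam               # dimensionless line tension (sum of roots)
gamma = chi * kappa_b / lam                # kBT/nm, line tension
alpha_mean = np.pi * 930.0**2 / lam**2     # dimensionless area of a 0.93 um domain

print(f"tau    ~ {tau:.1e} kBT/nm^2")   # ~2.4e-4
print(f"gamma  ~ {gamma:.2f} kBT/nm")   # ~0.45
print(f"lambda ~ {lam:.0f} nm")         # ~320
print(f"alpha  ~ {alpha_mean:.0f}")     # ~26
```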
As a prelude to the calculation of allowed morphological transitions, we note
that for the morphology of a domain to move from one region of the phase
diagram to another, the domain must either change size via coalescence, or
there must be a change in membrane tension which affects both $\alpha$ and
$\chi$. On the phase diagram in Fig. 6, horizontal lines would correspond to
increasing domain area, and the dashed trajectories are increasing membrane
tension with fixed domain area. The key fact is that, except within a region
very near the phase boundary, the free energy difference between the flat and
the fully budded states is much larger than $k_{B}T$, as is the energy barrier
between those states (e.g. Fig. 5). Thus, from an equilibrium statistical
mechanics perspective, a budding domain can be approximated as a two state
system, with the spontaneous budding regime included within the budded state.
Thus, it makes sense to impose the thermodynamic requirement that the free
energy difference between morphological states be negative for a transition to
be allowed, i.e. $G|_{\theta=\pi}-G|_{\theta=0}<0$ if going from flat to
budded. This amounts to describing budding with a two-state model where
${\Delta G_{\tiny f\rightarrow b}=G|_{\theta=\pi}-G|_{\theta=0}=\pi\kappa_{b}\left(\rho^{2}-2\chi\rho+8\right),}$ (9)
and $\rho=\sqrt{\alpha/\pi}$ is the dimensionless domain radius. Figures
7(a-c) and 8(b-d) show the two states of budding on the surface of phase-
separated vesicles. If we consider $\rho$, a measure of domain size, as an
independent variable, then the single control-parameter $\chi$ dictates
whether the thermodynamic condition $\Delta G_{\tiny f\rightarrow b}<0$ has
been met for a particular domain size. If the dimensionless line tension is
below the critical value $\chi_{c}=2\sqrt{2}$, defined by $\Delta G_{\tiny
f\rightarrow b}=0$, the budding transition is forbidden for all domain sizes.
If $\chi>\chi_{c}$, budding is allowed, though not necessarily spontaneous,
within the size range given by
${\rho=\chi\pm\sqrt{\chi^{2}-8},}$ (10)
as demonstrated in Fig. 7(f). This size range always includes the range
specified by eqn. 8, because spontaneous budding always has a negative free
energy.
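A compact numerical sketch of this two-state picture is given below; it evaluates the budding free energy of eqn. 6 and the allowed-size criterion of eqns. 9 and 10, with $\kappa_{b}=25\,k_{B}T$ and $\chi\simeq 5.8$ taken as illustrative values consistent with the estimates above.

```python
import numpy as np

def G_budding(theta, alpha, chi, kappa_b=25.0):
    """Eqn. 6: budding free energy (kBT) vs wrapping angle theta, for
    dimensionless area alpha and dimensionless line tension chi."""
    c = np.cos(theta)
    return 4 * np.pi * kappa_b * (chi * np.sqrt(alpha) * np.sqrt((1 + c) / (8 * np.pi))
                                  - alpha * (1 + c) / (8 * np.pi)
                                  + (1 - c))

def budding_allowed(rho, chi):
    """Two-state criterion: eqn. 9 gives dG = pi*kappa_b*(rho^2 - 2*chi*rho + 8),
    which is negative only for chi > 2*sqrt(2) and rho between the roots of eqn. 10."""
    if chi <= 2 * np.sqrt(2):
        return False
    lo, hi = chi - np.sqrt(chi**2 - 8), chi + np.sqrt(chi**2 - 8)
    return lo < rho < hi

# Hypothetical usage with the parameters estimated above (chi ~ 5.8):
chi = 5.8
for rho in (1.0, 3.0, 12.0):   # rho = sqrt(alpha/pi), dimensionless domain radius
    print(rho, budding_allowed(rho, chi))
theta = np.linspace(0.0, np.pi, 200)
profile = G_budding(theta, alpha=26.0, chi=chi)   # energy landscape as in Fig. 5(b)
```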
The ‘Algebra’ of Morphology
With an understanding of the conditions under which a domain transitions from
flat to dimpled [42], and dimpled to budded, we are in position to calculate
the change in free energy when domains of different morphologies coalesce. On
a short enough time-scale, coalescence only occurs between two domains at a
time, and hence we can think of the coarsening behavior of a phase-separated
membrane as many such binary coalescence events happening in succession. The
purpose of this section is to begin to build a framework for understanding how
domain morphology and coalescence work in concert to affect the morphological
evolution of a phase-separated membrane. In particular, we calculate the
allowed, resultant morphology when two domains, each of a distinct morphology,
coalesce. The change in free energy associated with a change in domain
morphology, from flat to dimpled, or dimpled to budded, is much greater than
$k_{B}T$, and hence, like the budding analysis of the previous section, for a
particular transition to be allowed, we demand that the change in free energy
be negative. Furthermore, the large reduction in line energy upon coalescence
(compared to $k_{B}T$) means that, in general, coalescence is irreversible,
and hence after each coalescence event the system is presumed to be in a
unique quasistatic equilibrium state, with a unique membrane tension. The use
of these transition rules must then be considered in the context of these unique states; that is, transitions involving domains of a particular size that were allowed before a coalescence event might be prohibited afterward, or vice versa.
Let us denote transitions that involve flat domains with the letter $f$,
dimpled domains with the letter $d$, and budded domains with letter $b$, such
that, for instance, a flat domain coalescing with a dimpled domain to yield a
budded domain would be denoted by $fd\rightarrow b$. There are six possible
binary coalescence events: $ff$, $fd$, $fb$, $dd$, $db$, and $bb$; each
resulting in a single domain of either $f$, $d$ or $b$ morphology. Thus at the outset, there are a total of 18 possible morphological transitions; however, not all of them are thermodynamically allowed. Specifically, any domain whose
size is greater than the critical size required for dimpling cannot adopt a
flat morphology, as there is no flat-dimple coexistence; hence only the
$ff\rightarrow f$ transition can end with an $f$ domain (see Fig. 9(a)). This
eliminates five of the six possibilities that end with an $f$ domain. The only
other transition that can be eliminated immediately is $ff\rightarrow b$,
because the coalescence of two flat domains must first go through the dimpled
state.
This leaves twelve possible morphological transitions, as shown in Fig.
9(a-$\ell$). For simplicity we will assume the domains have no spontaneous
curvature (though this is straightforward to incorporate [35, 66]). The free
energy change associated with each of these twelve transitions is calculated
by knowing the free energy change associated with three simpler transitions,
namely the $f\rightarrow d$, $f\rightarrow b$, and $ff\rightarrow f$
transitions. Of these, the $f\rightarrow b$ transition free energy was
discussed in the previous section, and the $f\rightarrow d$ transition free
energy is a complicated function discussed at length in [42], though we note
the important fact that $\Delta G_{\tiny f\rightarrow
d}(\alpha_{1}+\alpha_{2})<\Delta G_{\tiny f\rightarrow d}(\alpha_{1})+\Delta
G_{\tiny f\rightarrow d}(\alpha_{2})$ if both domains are above the critical
size for dimpling, or in words, the free energy of domain dimpling as a
function of domain area grows faster than linearly. The scheme we are about to
build is a valid frame work for understanding energy based transitions because
we know that changes in free energy, when moving along a reaction coordinate,
are additive.
The transition of two flat domains coalescing to yield another flat domain is
the most fundamental transition, as shown in Fig. 9, and can be calculated as
the difference in the line tension energy between the initial and final states
given by
${\Delta G_{\tiny ff\rightarrow f}(\alpha_{1},\alpha_{2})=2\sqrt{\pi}\kappa_{b}\chi\left[\sqrt{\alpha_{1}+\alpha_{2}}-\sqrt{\alpha_{1}}-\sqrt{\alpha_{2}}\right],}$ (11)
where $\alpha_{1}$ and $\alpha_{2}$ are the dimensionless areas of the two
domains and $\Delta G_{\tiny ff\rightarrow f}(\alpha_{1},\alpha_{2})<-k_{B}T$
for all domain areas of one lipid or more. This situation, depicted by Fig.
9(a) and shown experimentally in Fig. 2(c), is encountered at high membrane
tension, when domains are too small to dimple before and after coalescence.
Using the three basic transitions, we now address the remaining eleven
transitions in detail. The next transition we consider is two flat domains,
each too small to dimple on their own, coalescing to form a domain large
enough to dimple, as depicted in Fig. 9(b). The transition free energy is
given by
${\Delta G_{\tiny ff\rightarrow d}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny ff\rightarrow f}(\alpha_{1},\alpha_{2})+\Delta G_{\tiny f\rightarrow d}(\alpha_{1}+\alpha_{2}),}$ (12)
and is negative as long as $\alpha_{1}+\alpha_{2}$ is greater than the
critical area required for dimpling.
The next transition is a flat and dimpled domain coalescing to form a dimpled
domain, as depicted in Fig. 9(c). The transition free energy is given by
${\Delta G_{\tiny fd\rightarrow d}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny ff\rightarrow d}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow d}(\alpha_{2}).}$ (13)
No definitive statement about the resultant morphology after coalescence of a
flat and dimpled domain can be made, because the free energy of this
transition must be compared to the closely related transition of a flat and
dimpled domain coalescing to form a budded domain, to determine which has a
greater reduction in free energy. This related transition, depicted in Fig.
9(h), has the transition free energy
${\Delta G_{\tiny fd\rightarrow b}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny ff\rightarrow f}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow d}(\alpha_{2})+\Delta G_{\tiny f\rightarrow b}(\alpha_{1}+\alpha_{2}).}$ (14)
Which of these two transitions, $fd\rightarrow d$ or $fd\rightarrow b$,
dominates depends on which has a greater reduction in free energy. Comparing
eqns. 13 and 14, asking which transition has the greater reduction in free
energy is simply asking whether $\Delta G_{\tiny f\rightarrow
d}(\alpha_{1}+\alpha_{2})>\Delta G_{\tiny f\rightarrow
b}(\alpha_{1}+\alpha_{2})$ or vice versa. This energy balance between the
$f\rightarrow d$ and $f\rightarrow b$ transitions determines the outcome of
all of the subsequent binary transitions as well, as we will rigorously show for the remaining cases. Because this energetic comparison crops up
so often, we will simply refer to it as the ‘bud-dimple energy balance.’
The next transition is two dimpled domains coalescing to yield a dimpled
domain, as depicted in Fig. 9(d) and shown in Fig. 8(a). The transition free
energy is given by
${\Delta G_{\tiny dd\rightarrow d}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny fd\rightarrow d}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow d}(\alpha_{2}).}$ (15)
Again, we must consider a closely related transition, namely the coalescence
of two dimpled domains yielding a budded domain, as depicted in Fig. 9(i),
with transition free energy
${\Delta G_{\tiny dd\rightarrow b}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny dd\rightarrow d}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow d}(\alpha_{1}+\alpha_{2})+\Delta G_{\tiny f\rightarrow b}(\alpha_{1}+\alpha_{2}).}$ (16)
Comparing these two related transitions, $dd\rightarrow d$ and $dd\rightarrow
b$, we see that the dominant transition is determined by the bud-dimple energy
balance.
The next transition is a flat and a budded domain coalescing to form a dimpled
domain, as depicted in Fig. 9(g). The transition free energy is given by
${\Delta G_{\tiny fb\rightarrow d}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny ff\rightarrow d}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow b}(\alpha_{2}).}$ (17)
The related transition, where a flat and budded domain coalesce to form a
budded domain, as depicted in Fig. 9($\ell$), has the transition free energy
${\Delta G_{\tiny fb\rightarrow b}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny fb\rightarrow d}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow d}(\alpha_{1}+\alpha_{2})+\Delta G_{\tiny f\rightarrow b}(\alpha_{1}+\alpha_{2}).}$ (18)
Comparing these two related transitions, $fb\rightarrow d$ and $fb\rightarrow
b$, we see that the dominant transition is determined by the bud-dimple energy
balance.
The next transition is a budded and a dimpled domain coalescing to form a
budded domain, as depicted in Fig. 9(j) and shown in Fig. 8(c-e)(green
arrows). The transition free energy is given by
${\Delta G_{\tiny bd\rightarrow b}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny fd\rightarrow b}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow b}(\alpha_{2}).}$ (19)
The related transition, where a budded and dimpled domain coalesce to form a
dimpled domain, as depicted in Fig. 9(e), and shown in Fig. 8(c-e)(yellow
arrows), has the transition free energy
${\Delta G_{\tiny bd\rightarrow d}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny bd\rightarrow b}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow b}(\alpha_{1}+\alpha_{2})+\Delta G_{\tiny f\rightarrow d}(\alpha_{1}+\alpha_{2}).}$ (20)
Comparing these two related transitions, $bd\rightarrow b$ and $bd\rightarrow
d$, we see that the dominant transition is determined by the bud-dimple energy
balance.
The last set of transitions is when two buds coalesce to form a larger bud,
depicted in Fig. 9(k), with transition energy
${\Delta G_{\tiny bb\rightarrow b}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny fb\rightarrow b}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow b}(\alpha_{2}),}$ (21)
and when two buds coalesce to form a dimple, depicted in Fig. 9(f) with
transition energy
${\Delta G_{\tiny bb\rightarrow d}(\alpha_{1},\alpha_{2})=\Delta G_{\tiny fb\rightarrow d}(\alpha_{1},\alpha_{2})-\Delta G_{\tiny f\rightarrow b}(\alpha_{2}).}$ (22)
Comparing these two related transitions, $bb\rightarrow b$ and $bb\rightarrow
d$, we see that the dominant transition is determined by the bud-dimple energy
balance.
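To make the additive structure of these transition rules concrete, the sketch below composes coalescence transitions from the basis transitions. Only eqns. 9 and 11 are implemented explicitly; the flat-to-dimpled free energy of [42] is left as a placeholder since its full form is not reproduced here, and the function names are our own.

```python
import numpy as np

KAPPA_B = 25.0   # kBT, assumed bending modulus

def dG_ff_f(a1, a2, chi):
    """Eqn. 11: two flat domains coalesce into a flat domain (line-energy change)."""
    return 2 * np.sqrt(np.pi) * KAPPA_B * chi * (np.sqrt(a1 + a2) - np.sqrt(a1) - np.sqrt(a2))

def dG_f_b(alpha, chi):
    """Eqn. 9: flat -> budded, with rho = sqrt(alpha/pi)."""
    rho = np.sqrt(alpha / np.pi)
    return np.pi * KAPPA_B * (rho**2 - 2 * chi * rho + 8)

def dG_f_d(alpha, chi):
    """Placeholder for the flat -> dimpled free energy of [42]; the real
    expression is a more complicated function of domain area and line tension."""
    raise NotImplementedError

def dG_fd_b(a1, a2, chi):
    """Eqn. 14, composed additively from the three basis transitions."""
    return dG_ff_f(a1, a2, chi) - dG_f_d(a2, chi) + dG_f_b(a1 + a2, chi)

def bud_dimple_balance(alpha, chi):
    """The 'bud-dimple energy balance': which of f->d and f->b lowers the
    free energy more for the coalesced domain of area alpha."""
    return "bud" if dG_f_b(alpha, chi) < dG_f_d(alpha, chi) else "dimple"
```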
Given the importance of the bud-dimple energy balance in determining the
morphology resulting from a coalescence event, we note that if the resulting
domain area is outside the range specified by eqn. 10, but still larger than
the critical size required for dimpling, the dimpled morphology dominates
because the free energy change of budding is positive outside that range.
Within this size range, selecting the dominant behavior is more subtle, and
depends on the resultant domain size, material properties and tension. For
this reason, until experimental methods are devised that can track the
detailed three-dimensional morphology of a phase-separated vesicle (i.e. the
positions and sizes of all domains and the membrane tension), the set of
transition rules discussed in this section will remain largely an interpretive
tool, useful for understanding the set of possible transitions and resultant
morphologies, as well as their underlying physics, but difficult to
quantitatively apply to experiment.
We speculate that the kinetics of these coalescence transitions are either
relatively fast, when diffusion is the limiting time scale, as might be the
case in the transitions shown in Fig. 9(a-c,e,g,h,j,$\ell$), or relatively
slow, limited by elastic interactions (Fig. 9(d,i)) or steric hindrance (Fig.
9(f,k)). From the viewpoint of coarsening of a two-phase fluid, these
transitions represent new coarsening mechanisms that are linked to morphology,
and likely have profound effects on the kinetics of phase separation, as
demonstrated by the fact that coalescence of dimpled domains is inhibited by
an energetic barrier. Additionally, these transitions suggest interesting
biological possibilities. For instance, a small volume can be encapsulated at
a particular location, as a dimple transitions to a bud. The enclosed volume
can then diffuse to other regions of the membrane, and either engulf more
volume (see Fig. 9(i)) or deposit its contents at the site of another domain
(see Fig. 9(j)). In fact, both of these scenarios play out in Fig. 8(c-e).
Furthermore, it is possible that careful control of membrane tension [67]
could regulate how large a volume is enclosed, and to which other domains a
bud will coalesce and deposit its contents. This has implications for the
size-selectivity of endo- and exo-cytosis where membrane invagination and
fusion occur, as well as regulation of plasma membrane tension [67].
Discussion
The transition free energies calculated in the previous section have the
intuitively pleasing feature of being a sum of three simple basis transitions
($f\rightarrow d$, $f\rightarrow b$ and $ff\rightarrow f$). However, this
type of analysis is limited by the fact that it only admits flat, dimpled and
budded as valid morphologies. More general theories and computational models
can be (and have been) constructed that attempt to describe all possible shapes
of a domain from precisely flat to fully budded, and other more complex
morphologies [41, 52, 53, 55, 68, 69, 70]. Our level of experimental
sophistication is commensurate with the simplicity of the analysis employed in
the previous section. Conceptually, our model simplifies analysis by reducing
domain morphology to one of three classes of shapes, at the cost of excluding
other possible morphologies. Though observed in only a minority of experiments, domain-induced tube formation was the most common of these more exotic morphologies.
Normally, thin lipid tubes are drawn out by external force [71, 72, 73].
However, in a few instances we observed domains that spontaneously collapse
and nucleate a tube that rapidly grows many times longer than its persistence
length, as demonstrated in Fig. 10. Oddly, the nucleating domain is of one
lipid phase, but the tube continues to grow from the other, majority phase by
a currently unknown mechanism.
In addition to limiting the class of possible morphologies, our analysis of
morphological transitions also employs the simplification that membrane
tension is constant during a morphological transition. In reality, our
experiments take place on a spherical topology with constrained volume and
surface area, such that this approximation has a range of validity. If the
membrane area required to complete a morphological transition is small
compared to the total vesicle area (see [42] for details) the change in
membrane tension will be small. However, morphological transitions that
require relatively large areas can result in significant changes to membrane
tension, invalidating the constant tension approximation. At times, however, this can be an advantageous feature of our experimental system; for instance, fairly small changes in vesicle area (on the order of 1%) can reduce the tension enough to cause spontaneous budding, as we showed earlier in this work.
In addition to the limitations mentioned above, our experiments have a number
of subtle complications. Notably, the task of measuring the motion and size of
lipid domains is complicated by the fact that the spherical curvature of the
vesicle slightly distorts measurements of distance and size. Additionally, the
motion of domains is confined to lie in a circle defined by a combination of
vesicle size and depth of field of the microscope objective. We developed
schemes to correct for these issues, as discussed in detail in the
supplementary information of our previous work [42].
Summary
Using a model multi-component membrane, we explored how the interplay between
composition and morphology leads to elastic forces that spatially organize
domains and significantly impact coalescence kinetics. We expanded upon
mechanical models that incorporate bending stiffness, membrane tension, phase
boundary line tension, and domain size to show that domains can adopt (at
least) three distinct morphologies: flat, dimpled and budded. We showed that
dimpled domains exhibit measurable translational and orientational order as a
function of increasing domain areal density [49, 50, 51]. Using a spherical
budding model, we showed that the transition to a budded state is a domain
size selective process, from which one can estimate the membrane tension, line
tension, and elastic decay length of a phase separated membrane. Additionally,
we found that the large energy scales associated with changes in domain
morphology allow us to define morphological transition rules, where domain
size and membrane tension are likely the key parameters that regulate the
morphological transitions.
In the context of our understanding of the physics of phase separation, the
elastic forces between dimpled domains that arrest coalescence, and the
morphological transitions between flat, dimpled and budded domains, constitute
new mechanisms that govern spatial organization of domains and the temporal
evolution of domain sizes. For cellular membranes, we speculate that the
elastic forces and morphological transitions can be controlled via careful
regulation of membrane tension [67], and our work suggests intriguing
possibilities for how small volumes can be encapsulated, moved, and released
in a phase-separated membrane.
Materials and Methods
Giant unilamellar vesicles (GUVs) were prepared from a mixture of DOPC
(1,2-Dioleoyl-sn-Glycero-3-Phosphocholine), DPPC (1,2-Dipalmitoyl-sn-
Glycero-3-Phosphocholine) and cholesterol (Avanti Polar Lipids)
(25:55:20/molar) that exhibits liquid-liquid phase coexistence [23].
Fluorescence contrast between the two lipid phases is provided by the
rhodamine head-group labeled lipids: DOPE (1,2-Dioleoyl-sn-
Glycero-3-Phosphoethanolamine-N- (Lissamine Rhodamine B Sulfonyl)) or DPPE
(1,2-Dipalmitoyl-sn-Glycero-3-Phosphoethanolamine-N- (Lissamine Rhodamine B
Sulfonyl)), at a molar fraction of $\sim 0.005$. The leaflet compositions are
presumed symmetric and hence $c_{o}=0$.
GUVs were formed via electroformation [23, 74]. Briefly, $3-4\,\mu\mbox{g}$ of
lipid in chloroform were deposited on an indium-tin oxide coated slide and
desiccated for $\sim 2\,\mbox{hrs}$ to remove excess solvent. The film was
then hydrated with a $100\,\mbox{mM}$ sucrose solution and heated to $\sim
50\,\mbox{C}$ to be above the miscibility transition temperature. An
alternating electric field was applied: $10\,\mbox{Hz}$ for 120 minutes,
$2\,\mbox{Hz}$ for 50 minutes, at $\sim 500\,\mbox{Volts/m}$ over $\sim
2\,\mbox{mm}$. Low membrane tensions were initially achieved by careful
osmolar balancing with sucrose ($\sim 100\,\mbox{mM}$) inside the vesicles,
and glucose ($\sim 100-108\,\mbox{mM}$) outside. Using a custom built
temperature control stage, the in situ membrane tension was coarsely
controlled by adjusting the temperature a few degrees [59, 60].
Domains were induced by a temperature quench and imaged using standard TRITC
epi-fluorescence microscopy at 80x magnification with a cooled (-30 C) CCD
camera (Roper Scientific, $6.7\times 6.7\,\mu\mbox{m}^{2}$ per pixel, 20 MHz
digitization). Images were taken from the top or bottom of a GUV where the
surface metric is approximately flat. Data sets contained $\sim 500-1500$
frames collected at 10-20 Hz with a varying number of domains (usually $>10$).
The frame rate was chosen to minimize exposure-time blurring of the domains,
while allowing sufficiently large diffusive domain motion. Software was
written to track the position of each well-resolved domain and calculate the
radial distribution function. The raw radial distribution function was
corrected for the fictitious confining potential of the circular geometry. The
negative natural logarithm of the radial distribution function is the
potential of mean force plus a constant, as shown in Fig. 3. Detailed
explanations of these concepts can be found in the supplementary information
for [42].
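The full correction is detailed in the supplementary information of [42]; a generic way to remove the purely geometric contribution of a finite circular field of view is to normalize the measured pair-distance histogram by that of uniformly random points in the same circle, as sketched below (our own construction, not necessarily the exact procedure used here).

```python
import numpy as np

def pair_distances(pts):
    """All unique pairwise distances between 2D points."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d[np.triu_indices(len(pts), k=1)]

def corrected_g(measured_distances, field_radius, n_bins=60, n_ref=2000, seed=0):
    """Divide the measured pair-distance histogram by that of uniformly random
    points in the same circular field of view, removing the geometric
    (confinement) bias; -ln of the result gives the PMF up to a constant."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_ref)
    rad = field_radius * np.sqrt(rng.uniform(0.0, 1.0, n_ref))  # uniform in a disk
    ref = np.column_stack((rad * np.cos(theta), rad * np.sin(theta)))
    edges = np.linspace(0.0, 2.0 * field_radius, n_bins + 1)
    h_meas, _ = np.histogram(measured_distances, bins=edges, density=True)
    h_ref, _ = np.histogram(pair_distances(ref), bins=edges, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        g = h_meas / h_ref
    return 0.5 * (edges[:-1] + edges[1:]), g
```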
Morphological transitions were induced by quenching homogeneous vesicles below
the de-mixing temperature and observing those that had many micron-sized
domains. Without precise control of membrane tension or the exact initial
conditions (i.e. the exact number and size distribution of domains) many
vesicles had to be sampled to see transitions. Often, a slight increase in
temperature ($\sim 2$C) was used to increase the available membrane area, and
hence decrease the membrane tension enough to induce morphological
transitions.
We thank Jennifer Hsiao for help with experiments, and Kerwyn Huang, Ben
Freund and Pierre Sens for stimulating discussion and comments. TSU and RP
acknowledge the support of the National Science Foundation award No.
CMS-0301657, NSF CIMMS award No. ACI-0204932, NIRT award No. CMS-0404031 and
the National Institutes of Health award No. R01 GM084211 and the Director’s
Pioneer Award.
## References
* [1] S. J. Singer and G. L. Nicolson. The fluid mosaic model of the structure of cell membranes. Science, 175:720–31, 1972.
* [2] K. Simons and E. Ikonen. Functional rafts in cell membranes. Nature, 387:569–72, 1997.
* [3] H. Gao, W. Shi, and L. B. Freund. Mechanics of receptor-mediated endocytosis. Proc Natl Acad Sci U S A, 102:9469–74, 2005.
* [4] F. R. Maxfield and T. E. McGraw. Endocytic recycling. Nat Rev Mol Cell Biol, 5:121–32, 2004.
* [5] T. D. Pollard, W. C. Earnshaw, J. Lippincott-Schwartz, and G. Johnson. Cell Biology. Saunders, 2nd edition, 2007.
* [6] P. Sens and M. S. Turner. Budded membrane microdomains as tension regulators. Phys Rev E, 73:031918, 2006.
* [7] A. F. Quest, L. Leyton, and M. Parraga. Caveolins, caveolae, and lipid rafts in cellular transport, signaling, and disease. Biochem Cell Biol, 82:129–44, 2004.
* [8] L. Hinrichsen, A. Meyerholz, S. Groos, and E. J. Ungewickell. Bending a membrane: how clathrin affects budding. Proc Natl Acad Sci U S A, 103:8715–20, 2006.
* [9] H. Cai, K. Reinisch, and S. Ferro-Novick. Coats, tethers, rabs, and snares work together to mediate the intracellular destination of a transport vesicle. Dev Cell, 12:671–82, 2007.
* [10] M. Fix, T. J. Melia, J. K. Jaiswal, J. Z. Rappoport, D. You, T. H. Sollner, J. E. Rothman, and S. M. Simon. Imaging single membrane fusion events mediated by snare proteins. Proc Natl Acad Sci U S A, 101:7311–6, 2004.
* [11] H. T. McMahon and J. L. Gallop. Membrane curvature and mechanisms of dynamic cell membrane remodelling. Nature, 438:590–6, 2005.
* [12] B. Antonny. Membrane deformation by protein coats. Curr Opin Cell Biol, 18:386–94, 2006.
* [13] J. S. Bonifacino and B. S. Glick. The mechanisms of vesicle budding and fusion. Cell, 116:153–66, 2004.
* [14] C. J. Merrifield, D. Perrais, and D. Zenisek. Coupling between clathrin-coated-pit invagination, cortactin recruitment, and membrane scission observed in live cells. Cell, 121:593–606, 2005.
* [15] K. Tsujita, S. Suetsugu, N. Sasaki, M. Furutani, T. Oikawa, and T. Takenawa. Coordination between the actin cytoskeleton and membrane deformation by a novel membrane tubulation domain of pch proteins is involved in endocytosis. J Cell Biol, 172:269–79, 2006.
* [16] D. Yarar, C. M. Waterman-Storer, and S. L. Schmid. A dynamic actin cytoskeleton functions at multiple stages of clathrin-mediated endocytosis. Mol Biol Cell, 16:964–75, 2005.
* [17] S. Mukherjee, T. T. Soe, and F. R. Maxfield. Endocytic sorting of lipid analogues differing solely in the chemistry of their hydrophobic tails. J Cell Biol, 144:1271–84, 1999.
* [18] S. Mukherjee and F. R. Maxfield. Role of membrane organization and membrane domains in endocytic lipid trafficking. Traffic, 1:203–11, 2000.
* [19] T. Harder, P. Scheiffele, P. Verkade, and K. Simons. Lipid domain structure of the plasma membrane revealed by patching of membrane components. J Cell Biol, 141:929–42, 1998.
* [20] K. Simons and E. Ikonen. How cells handle cholesterol. Science, 290:1721–6, 2000.
* [21] G. van Meer and H. Sprong. Membrane lipids and vesicular traffic. Curr Opin Cell Biol, 16:373–8, 2004.
* [22] K. D’Hondt, A. Heese-Peck, and H. Riezman. Protein and lipid requirements for endocytosis. Annu Rev Genet, 34:255–295, 2000.
* [23] S. L. Veatch and S. L. Keller. Separation of liquid phases in giant vesicles of ternary mixtures of phospholipids and cholesterol. Biophys J, 85:3074–83, 2003.
* [24] T. Baumgart, S. T. Hess, and W. W. Webb. Imaging coexisting fluid domains in biomembrane models coupling curvature and line tension. Nature, 425:821–4, 2003.
* [25] E. J. Shimshick and H. McConnell. Lateral phase separations in binary mixtures of cholesterol and phospholipids. Biochem Biophys Res Commun, 53:446–451, 1973.
* [26] K. Simons and W. L. Vaz. Model systems, lipid rafts, and cell membranes. Annu Rev Biophys Biomol Struct, 33:269–95, 2004.
* [27] A. Schlegel, D. Volonte, J. A. Engelman, F. Galbiati, P. Mehta, X. L. Zhang, P. E. Scherer, and M. P. Lisanti. Crowded little caves: structure and function of caveolae. Cell Signal, 10:457–63, 1998.
* [28] N. Chazal and D. Gerlier. Virus entry, assembly, budding, and membrane rafts. Microbiol Mol Biol Rev, 67:226–37, 2003.
* [29] S. Mayor and M. Rao. Rafts: scale-dependent, active lipid organization at the cell surface. Traffic, 5:231–40, 2004.
* [30] C. Dietrich, L. A. Bagatolli, Z. N. Volovyk, N. L. Thompson, M. Levi, K. Jacobson, and E. Gratton. Lipid rafts reconstituted in model membranes. Biophys J, 80:1417–28, 2001.
* [31] H. Park, Y. M. Go, P. L. St John, M. C. Maland, M. P. Lisanti, D. R. Abrahamson, and H. Jo. Plasma membrane cholesterol is a key molecule in shear stress-dependent activation of extracellular signal-regulated kinase. J Biol Chem, 273:32304–11, 1998.
* [32] J. B. Helms and C. Zurzolo. Lipids as targeting signals: lipid rafts and intracellular trafficking. Traffic, 5:247–54, 2004.
* [33] H. A. Lucero and P. W. Robbins. Lipid rafts-protein association and the regulation of protein activity. Arch Biochem Biophys, 426:208–24, 2004.
* [34] K. Gaus, E. Gratton, E. P. Kable, A. S. Jones, I. Gelissen, L. Kritharides, and W. Jessup. Visualizing lipid structure and raft domains in living cells with two-photon microscopy. Proc Natl Acad Sci U S A, 100:15554–9, 2003.
* [35] R. Lipowsky. Budding of membranes induced by intramembrane domains. J Phys II France, 2:1825–1840, 1992.
* [36] M. S. Turner, P. Sens, and N. D. Socci. Nonequilibrium raftlike membrane domains under continuous recycling. Phys Rev Lett, 95:168301, 2005.
* [37] W. Rawicz, K. C. Olbrich, T. McIntosh, D. Needham, and E. Evans. Effect of chain length and unsaturation on elasticity of lipid bilayers. Biophys J, 79:328–39, 2000.
* [38] T. Baumgart, A. T. Hammond, P. Sengupta, S. T. Hess, D. A. Holowka, B. A. Baird, and W. W. Webb. Large-scale fluid/fluid phase separation of proteins and lipids in giant plasma membrane vesicles. Proc Natl Acad Sci U S A, 104:3165–70, 2007.
* [39] P. I. Kuzmin, S. A. Akimov, Y. A. Chizmadzhev, J. Zimmerberg, and F. S. Cohen. Line tension and interaction energies of membrane rafts calculated from lipid splay and tilt. Biophys J, 88:1120–33, 2005.
* [40] J. L. Harden, F. C. Mackintosh, and P. D. Olmsted. Budding and domain shape transformations in mixed lipid films and bilayer membranes. Phys Rev E, 72:011903, 2005.
* [41] T. Taniguchi. Shape deformation and phase separation dynamics of two-component vesicles. Phys Rev Lett, 76:4444–4447, 1996.
* [42] T. S. Ursell, W. S. Klug, and R. Phillips. Morphology and interactions between lipid domains. submitted to PNAS, 2009.
* [43] P. Cicuta, S. L. Keller, and S. L. Veatch. Diffusion of liquid domains in lipid bilayer membranes. J Phys Chem B, 111:3328–31, 2007.
* [44] A. J. Bray. Theory of phase-ordering kinetics. Adv Phys, 51:481–587, 2002.
* [45] M. Seul, N. Y. Morgan, and C. Sire. Domain coarsening in a two-dimensional binary mixture: Growth dynamics and spatial correlations. Phys Rev Lett, 73:2284–2287, 1994.
* [46] L. Foret. A simple mechanism of raft formation in two-component fluid membranes. Europhys Lett, 71:508–514, 2005.
* [47] M. Yanagisawa, M. Imai, T. Masui, S. Komura, and T. Ohta. Growth dynamics of domains in ternary fluid vesicles. Biophys J, 92:115–25, 2007.
* [48] S. H. Hansen, K. Sandvig, and B. van Deurs. Internalization efficiency of the transferrin receptor. Exp Cell Res, 199:19–28, 1992.
* [49] A. R. Bausch, M. J. Bowick, A. Cacciuto, A. D. Dinsmore, M. F. Hsu, D. R. Nelson, M. G. Nikolaides, A. Travesset, and D. A. Weitz. Grain boundary scars and spherical crystallography. Science, 299:1716–8, 2003.
* [50] M. Bowick, A. Cacciuto, D. R. Nelson, and A. Travesset. Crystalline order on a sphere and the generalized thomson problem. Phys Rev Lett, 89:185502, 2002.
* [51] T. Erber and G. Hockney. Complex systems: Equilibrium configurations of n equal charges on a sphere ($2<n<112$). Volume 98 of Advances in Chemical Physics. John Wiley and Sons, Inc., 1997.
* [52] U. Seifert, K. Berndl, and R. Lipowsky. Shape transformations of vesicles: Phase diagram for spontaneous- curvature and bilayer-coupling models. Phys Rev A, 44:1182–1202, 1991.
* [53] F. Julicher and R. Lipowsky. Shape transformations of vesicles with intramembrane domains. Phys Rev E, 53:2670–2683, 1996.
* [54] P. Sens and M. S. Turner. Theoretical model for the formation of caveolae and similar membrane invaginations. Biophys J, 86:2049–57, 2004.
* [55] W. T. Gozdz and G. Gompper. Shape transformations of two-component membranes under weak tension. Europhys Lett, 55:587–593, 2001.
* [56] J. M. Allain and M. Ben Amar. Budding and fission of a multiphase vesicle. Eur Phys J E, 20:409–20, 2006.
* [57] B. Widom. Line tension and the shape of a sessile drop. J Phys Chem, 99:2803–2806, 1995.
* [58] E. Evans and D. Needham. Giant vesicle bilayers composed of mixtures of lipids, cholesterol and polypeptides. thermomechanical and (mutual) adherence properties. Faraday Discuss Chem Soc, 81:267–80, 1986.
* [59] D. Needham and E. Evans. Physical properties of surfactant bilayer membranes: thermal transitions, elasticity, rigidity, cohesion and colloidal interactions. J Phys Chem, 91:4219–4228, 1987.
* [60] J. Kas and E. Sackmann. Shape transitions and shape stability of giant phospholipid vesicles in pure water induced by area-to-volume changes. Biophys J, 60:825–44, 1991.
* [61] Z. Chen and R. P. Rand. The influence of cholesterol on phospholipid membrane curvature and bending elasticity. Biophys J, 73:267–76, 1997.
* [62] J. Henriksen, A. C. Rowat, and J. H. Ipsen. Vesicle fluctuation analysis of the effects of sterols on membrane bending rigidity. Eur Biophys J, 33:732–41, 2004.
* [63] C. E. Morris and U. Homann. Cell surface area regulation and membrane tension. J Membr Biol, 179:79–102, 2001.
* [64] G. Popescu, T. Ikeda, K. Goda, C. A. Best-Popescu, M. Laposata, S. Manley, R. R. Dasari, K. Badizadegan, and M. S. Feld. Optical measurement of cell membrane tension. Phys Rev Lett, 97:218101, 2006.
* [65] A. Tian, C. Johnson, W. Wang, and T. Baumgart. Line tension at fluid membrane domain boundaries measured by micropipette aspiration. Phys Rev Lett, 98:208102, 2007.
* [66] T. Kohyama, D. M. Kroll, and G. Gompper. Budding of crystalline domains in fluid membranes. Phys Rev E, 68:061905, 2003.
* [67] D. Raucher and M. P. Sheetz. Characteristics of a membrane reservoir buffering membrane tension. Biophys J, 77:1992–2002, 1999.
* [68] M. Laradji and P. B. Sunil Kumar. Dynamics of domain growth in self-assembled fluid vesicles. Phys Rev Lett, 93:198105, 2004.
* [69] M. Laradji and P. B. Kumar. Anomalously slow domain growth in fluid membranes with asymmetric transbilayer lipid distribution. Phys Rev E, 73:040901, 2006.
* [70] B. Hong, F. Qiu, H. Zhang, and Y. Yang. Budding dynamics of individual domains in multicomponent membranes simulated by n-varied dissipative particle dynamics. J Phys Chem B, 111:5837–5849, 2007.
* [71] G. Koster, A. Cacciuto, I. Derenyi, D. Frenkel, and M. Dogterom. Force barriers for membrane tube formation. Phys Rev Lett, 94:068101, 2005.
* [72] I. Derenyi, F. Julicher, and J. Prost. Formation and interaction of membrane tubes. Phys Rev Lett, 88:238101, 2002.
* [73] T. R. Powers, G. Huber, and R. E. Goldstein. Fluid-membrane tethers: minimal surfaces and elastic boundary layers. Phys Rev E, 65:041901, 2002.
* [74] M. I. Angelova, S. Soleau, P. Meleard, J. F. Faucon, and P. Bothorel. Preparation of giant vesicles by external ac electric fields. kinetics and applications. Progr Colloid Polym Sci, 89:127–131, 1992.
* [75] J. Pecreaux, H. G. Dobereiner, J. Prost, J. F. Joanny, and P. Bassereau. Refined contour analysis of giant unilamellar vesicles. Eur Phys J E Soft Matter, 13:277–90, 2004.
Figure 1: Morphologies of a lipid domain. a) A domain (red) lies completely
flat when the energy from line tension is small compared to the cost of
deformation from bending and membrane tension. b) For domains with a size
roughly equal to or less than the elastic decay length, a competition between
bending and phase boundary line tension results in a morphological transition
from a flat to a dimpled state. This morphology facilitates elastic
interactions between domains that slow the kinetics of coalescence
significantly. c) Line tension in domains whose size is large compared to the
elastic decay length can cause a transition to a fully budded state. Figure
2: Domain morphology and coalescence. a) A nearly fully phase-separated
vesicle, showing domains (red) flat with respect to the background curvature
of the vesicle (blue). b) At low tension, domains (red) dimple and establish a
non-zero boundary slope with respect to the curvature of the vesicle (blue).
c) Flat domains on the surface of a vesicle - coalescence is uninhibited by
elastic interactions. d) Dimpled domains on the surface of a vesicle -
coalescence is inhibited by elastic interactions between the domains, and the
domain-size distribution is stable. Directly measuring membrane tension
disturbs the domain size evolution; however, the magnitude of membrane
fluctuations [47, 75] indicates that the tension in (c) is higher than the
tension in (d). Scale bars are $10\,\mu\mbox{m}$. Figure 3: Interactions of
lipid domains as areal density increases. Left: snapshots of dimples on the
surface of GUVs. Right: corresponding potentials of mean force. a) At low
areal density, interactions are almost purely repulsive, and there is no
translational or orientational order – the domains are in a state analogous to
a gas of particles. b-f) At higher areal density, domains ‘condense’ into a
state where each domain is repelled by its neighboring domains, giving rise to
energy wells that define a lattice constant and hence translational order. The
decay envelope of these ‘ringing’ potentials indicates the length-scale over
which the motion of domains is correlated. In all plots, the blue line
indicates the fit to eqn. 1, where $\lambda_{2}$ is the order-correlation
length and $\lambda_{3}$ is the effective lattice constant. The dashed
vertical lines are the approximate minimum center-to-center distance between
domains as determined by domain size measurement (a) or one half the lattice
constant (b-f). Insets: Time-averaged Fourier transforms, showing that
mutually repulsive elastic interactions lead to (thermally smeared) hexagonal
order, except in (a) where the density is too low to order the domains. Scale
bars are $10\,\mu\mbox{m}$. Figure 4: Increase in correlated motion of dimpled
domains as a function of area occupancy. This plot shows the ratio of the
order-correlation length ($\lambda_{2}$) over the lattice constant
($\lambda_{3}$) for the vesicles in Fig. 3 (and one additional vesicle) as a
function of the total area taken up by the domains divided by the total
measurable vesicle area. The ratio $\lambda_{2}/\lambda_{3}$ quantifies how
many nearest neighbor domains (i.e. 1st, 2nd, etc.) exhibit strongly
correlated motion. Figure 5: Shapes and energies of domain budding. a) A
schematic of domain shape going from a flat domain, with area $\alpha$ and
flat radius $\rho$, through a dome shape with wrapping angle $\theta$ and
radius of curvature $R$, to a fully budded state, with an applied tension
$\tau$. b) The free energy of a budding domain as a function of line tension
($\chi$) and wrapping angle ($\theta$) for domain size $\alpha=10$. At low
line tension (before the blue line), both flat and budded morphologies are
stable, but the flat state has a lower elastic free energy and there is an
energy barrier between the two stable states. At the blue line, the free
energy difference between flat and budded is zero. Between the blue and red
lines, both morphologies are stable, but the budded state has a lower elastic
free energy. Finally, for line tensions above the red line, the energy barrier
disappears and budding is a spontaneous process. Figure 6: Equilibrium phase
diagram for domain budding as a function of dimensionless domain area and line
tension. In region ‘a’ flat and budded domains coexist, with flat domains at
lower free energy. In region ‘b’ flat and budded domains coexist, with budded
domains at lower free energy. In region ‘c’ only a single, budded phase is
stable. The line separating regions ‘a’ and ‘b’ is given by $\chi_{\mbox{\tiny
bud}}/2$ and between regions ‘b’ and ‘c’ by $\chi_{\mbox{\tiny bud}}$ (eqn.
7). Dashed lines are trajectories of increasing membrane tension (as indicated
by the arrows) at constant domain area. In all four trajectories
$\gamma=0.3\,k_{B}T/\mbox{nm}$, $\kappa_{b}=25\,k_{B}T$ and tension is varied
from $\tau=10^{-5}-10^{-2}\,k_{B}T/\mbox{nm}^{2}$; the domain areas are
$\mathcal{A}_{1}=\pi(100\mbox{nm})^{2}$,
$\mathcal{A}_{2}=\pi(250\mbox{nm})^{2}$,
$\mathcal{A}_{3}=\pi(500\mbox{nm})^{2}$, and
$\mathcal{A}_{4}=\pi(1000\mbox{nm})^{2}$. Figure 7: Size-selective domain
budding. a) Dimples on the surface of a GUV are initially arranged by their
repulsive interactions. b) and c) A slight increase in temperature decreases
membrane tension, causing smaller domains to spontaneously bud (marked with
red arrows) and wander freely on the vesicle surface, while larger domains
remain dimpled. The mean size of budding domains is $r\simeq 0.93\pm
0.18\,\mu\mbox{m}$, from which we estimate a line tension of $\gamma\simeq
0.45\,k_{B}T/\mbox{nm}$. d) Plot of the potential of mean force between the
dimpled domains in (a), moments before inducing spontaneous budding. e)
Budding stability diagram, showing solutions to $\partial G/\partial\theta=0$
with $\gamma=0.45\,k_{B}T/\mbox{nm}$. Solid red lines are stable solutions at
energy minima; dashed red lines are unstable solutions on the energy barriers.
Regions 1 and 3 are coexistence regimes, while region 2 is a spontaneous
budding regime, only stable at $|\theta|=\pi$. The blue dots indicate domain
areas with radii $r\simeq 0.93,0.93\pm 0.18\,\mu\mbox{m}$. f) The red curve
shows the free energy of budding at $\tau=1.2\times
10^{-3}\,k_{B}T/\mbox{nm}^{2}$, which is greater than zero for all domain
sizes, and hence all domains would remain flat/dimpled. The green curve shows
the free energy of budding at $\tau=2.4\times 10^{-4}\,k_{B}T/\mbox{nm}^{2}$.
Domains with radius $r\simeq 0.25-3.5\,\mu\mbox{m}$ have a negative free
energy of budding, all other sizes remain flat/dimpled. Most domains within
this size range must still overcome an energy barrier to bud, but for a small
range of domain sizes ($r\simeq 0.75-1.11\,\mu\mbox{m}$), indicated by the
blue line segment, budding is a spontaneous process. The energies are
calculated using $\kappa_{b}=25\,k_{B}T$ and $\gamma\simeq
0.45\,k_{B}T/\mbox{nm}$. In (a-c) scale bars are $10\,\mu\mbox{m}$. Figure 8:
Gallery of morphological transitions. a) Two dimpled domains (indicated by the
white arrows) interact on the surface of a vesicle, eventually coalescing to
yield a larger dimpled domain (see Fig. 9(d)). b) An equatorial view of a
dimple-to-bud transition (indicated by the red arrows). c-e) Time courses of
multiple types of morphological transitions. Arrows are color-coded and point
to before and after each transition: red arrows indicate a dimple to bud
transition, green arrows indicate a bud engulfing a dimple to form a larger
bud (see Fig. 9(j)), and yellow arrows indicate a bud recombining with a
larger dimpled domain (see Fig. 9(e)). Using video microscopy, we can put an
upper bound on the time scale of the $d\rightarrow b$, $db\rightarrow d$ and
$db\rightarrow b$ transitions at $\sim 200\pm 80\,\mbox{ms}$, $\sim 160\pm
70\,\mbox{ms}$ and $\sim 210\pm 70\,\mbox{ms}$, respectively. Scale bars are
$10\,\mu\mbox{m}$. Figure 9: Algebra of Morphology. All rows show additive
free energies of transition from an initial state on the left to a final state
on the right. a) Two flat domains coalesce via diffusion, yielding a flat
domain. b) Domains, too small to dimple, coalesce to attain a size capable of
dimpling. c) A dimple and a domain too small to dimple coalesce to yield a
larger dimpled domain. d) Two interacting dimples coalesce, yielding a
larger dimpled domain. e) A dimple and a bud coalesce to yield a larger
dimpled domain. f) Two buds coalesce to yield a dimpled domain. g) A flat
domain coalesces with a bud, yielding a dimpled domain. h) A flat domain
coalesces with a dimpled domain to yield a bud. i) Two interacting dimples
coalesce, yielding a bud. j) A bud coalesces with a dimple, yielding a
larger budded domain. k) Two budded domains coalesce to form a larger budded
domain. l) A flat domain coalesces with a bud to yield a larger budded domain.
Figure 10: Domain-nucleated spontaneous tube formation. A time series of
spontaneous tube formation, nucleated from a domain (as indicated by the white
arrow). This relatively uncommon morphology is not explained within the
context of our simple model. The lipid tube (bright) is many times longer than
its persistence length, yet perplexingly, grows from the tube tip. With
limited optical resolution, we estimate the tube diameter to be $\leq
500\,\mbox{nm}$. The scale bar is $10\,\mu\mbox{m}$.
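Eqn. 7 itself is not reproduced in these captions, so the following is only a minimal sketch, in C++, of a budding free energy of the kind summarized in Figures 5-7, assuming a generic spherical-cap ansatz (cap bending energy plus edge line tension plus the work done against membrane tension). The function name, the specific energy terms, and the parameter choices are illustrative assumptions, not the model of eqn. 7.

```cpp
#include <cmath>
#include <cstdio>

// Sketch of a budding free energy G(theta) for a domain of fixed area A, assuming a
// spherical-cap shape: cap bending energy + line tension along the edge + work done
// pulling excess area out of the tense membrane.  This generic ansatz is an assumption
// for illustration only; it is not the free energy of eqn. 7.
// Units: energies in kBT, lengths in nm, gamma in kBT/nm, tau in kBT/nm^2.
const double PI = 3.14159265358979;

double buddingFreeEnergy(double theta, double A, double kappa, double gamma, double tau) {
    double R     = std::sqrt(A / (2.0 * PI * (1.0 - std::cos(theta)))); // cap radius of curvature
    double rim   = R * std::sin(theta);                                 // radius of the domain edge
    double Ebend = 4.0 * PI * kappa * (1.0 - std::cos(theta));          // spherical-cap bending energy
    double Eline = gamma * 2.0 * PI * rim;                              // edge (line tension) energy
    double Etens = tau * (A - PI * rim * rim);                          // area drawn against tension
    return Ebend + Eline + Etens;
}

int main() {
    // Parameter values quoted in the captions: kappa_b = 25 kBT, gamma ~ 0.45 kBT/nm,
    // tau ~ 2.4e-4 kBT/nm^2, and a domain of flat radius ~ 1 micron.
    double r = 1000.0; // nm
    double A = PI * r * r;
    for (double theta = 0.1; theta < 3.15; theta += 0.5)
        std::printf("theta = %.1f rad  G = %.3e kBT\n",
                    theta, buddingFreeEnergy(theta, A, 25.0, 0.45, 2.4e-4));
    return 0;
}
```

Under this ansatz a small line tension leaves the flat state at lower energy, while a large line tension lets the shrinking edge term drive the domain toward the fully budded state, qualitatively as in Fig. 5(b).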
|
arxiv-papers
| 2009-05-09T13:22:53 |
2024-09-04T02:49:02.454536
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Tristan Ursell, Rob Phillips",
"submitter": "Tristan Ursell",
"url": "https://arxiv.org/abs/0905.1405"
}
|
0905.1425
|
Wadham College Doctor of Philosophy Trinity 2008
# Mass Determination
of New Particle States
Mario A Serna Jr
This thesis is dedicated to
my wife
for joyfully supporting me and our daughter while I studied.
Mass Determination of New Particle States
Mario Andres Serna Jr
Wadham College
Thesis submitted for the degree of Doctor of Philosophy
Trinity Term 2008
Abstract
We study theoretical and experimental facets of mass determination of new
particle states. Assuming supersymmetry, we update the quark and lepton mass
matrices at the grand unification scale accounting for threshold corrections
enhanced by large ratios of the vacuum expectation value of the two
supersymmetric Higgs fields $v_{u}/v_{d}\equiv\tan\beta$. From the hypothesis
that quark and lepton masses satisfy a classic set of relationships suggested
in some Grand Unified Theories (GUTs), we predict $\tan\beta$ needs to be
large, and the gluino’s soft mass needs to have the opposite sign to the
wino’s soft mass. Existing tools to measure the phase of the gluino’s mass at
upcoming hadron colliders require model-independent, kinematic techniques to
determine the masses of the new supersymmetric particle states. The mass
determination is made difficult because supersymmetry is likely to have a
dark-matter particle which will be invisible to the detector, and because the
reference frame and energy of the parton collisions are unknown at hadron
colliders. We discuss the current techniques to determine the mass of
invisible particles. We review the transverse mass kinematic variable $M_{T2}$
and the use of invariant-mass edges to find relationships between masses.
Next, we introduce a new technique to add additional constraints between the
masses of new particle states using $M_{T2}$ at different stages in a
symmetric decay chain. These new relationships further constrain the mass
differences between new particle states, but still leave the absolute mass
weakly determined. Next, we introduce the constrained mass variables
$M_{2C,LB}$, $M_{2C,UB}$, $M_{3C,LB}$, $M_{3C,UB}$ to provide event-by-event
lower-bounds and upper-bounds to the mass scale given mass differences. We
demonstrate mass scale determination in realistic case studies of
supersymmetry models by fitting ideal distributions to simulated data. We
conclude that the techniques introduced in this thesis have precision and
accuracy that rival or exceed the best known techniques for invisible-particle
mass-determination at hadron colliders.
Declaration
This thesis is the result of my own work, except where reference is made to
the work of others, and has not been submitted for other qualification to this
or any other university.
Mario Andres Serna Jr
###### Acknowledgements.
I would like to thank my supervisor Graham Ross for his patient guidance as I
explored many topics about which I have wanted to learn for a long time. For
the many hours of teaching, proofreading, computer time and conversations they
have provided me, I owe many thanks to my fellow students Tanya Elliot, Lara
Anderson, Babiker Hassanain, Ivo de Medeiros Varzielas, Simon Wilshin, Dean
Horton, and the friendly department post-docs James Gray, Yang-Hui He, and
David Maybury. I also need to thank Laura Serna, Alan Barr, Chris Lester, John
March Russell, Dan Tovey, and Giulia Zanderighi for many helpful comments on
my published papers which were forged into this thesis. Next I need to thank
Tilman Plehn, Fabio Maltoni and Tim Stelzer for providing us online access to
MadGraph and MadEvent tools. I would also like to thank Alex Pinder, Tom
Melina, and Elco Makker, with whom I’ve worked on fourth-year projects this
past year, for questions and conversations that have helped my work on this
thesis. I also would like to thank my mentors over the years: Scotty Johnson,
Leemon Baird, Iyad Dajani, Richard Cook, Alan Guth, Lisa Randall, Dave
Cardimona, Sanjay Krishna, Scott Dudley, and Kevin Cahill, and the many others
who have offered me valuable advice that I lack the space to mention here.
Finally I’d like to thank my family and family-in-law for their support and
engaging distractions during these past few years while I’ve been developing
this thesis. I also acknowledge support from the United States Air Force
Institute of Technology. The views expressed in this thesis are those of the
author and do not reflect the official policy or position of the United States
Air Force, Department of Defense, or the US Government.
###### Contents
1. 1 Introduction
2. 2 Mass Determination in the Standard Model and Supersymmetry
1. 2.1 Past Mass Determination and Discovery of New Particle States
2. 2.2 Dark Matter: Evidence for New, Massive, Invisible Particle States
3. 2.3 Supersymmetry: Predicting New Particle States
3. 3 Predictions from Unification and Fermion Mass Structure
1. 3.1 Supersymmetric Thresholds and GUT-Scale Mass Relations
2. 3.2 Updated fits to Yukawa matrices
4. 4 Mass Determination Toolbox at Hadron Colliders
1. 4.1 Mass Determination Challenges at Hadron Colliders
2. 4.2 Invariant Mass Edge Techniques
3. 4.3 Mass Shell Techniques
4. 4.4 Transverse Mass Techniques
5. 5 Using $M_{T2}$ with Cascade decays
1. 5.1 $M_{T2}$ and $M_{CT}$ in this context
2. 5.2 Application to symmetric decay chains
6. 6 The Variable $M_{2C}$: Direct Pair Production
1. 6.1 An improved distribution from which to determine $M_{Y}$
2. 6.2 Using $M_{T2}$ to Find $M_{2C}$ and the $\max M_{T2}$ Kink
3. 6.3 Symmetries and Dependencies of the $M_{2C}$ Distribution
4. 6.4 Application of the method: SUSY model examples
7. 7 The Variable $M_{2C}$: Significant Upstream Transverse Momentum
1. 7.1 Upper Bounds on $M_{Y}$ from Recoil against UTM
2. 7.2 Modeling and Simulation
3. 7.3 Factors for Successful Shape Fitting
4. 7.4 Estimated Performance
8. 8 The Variable $M_{3C}$: On-shell Intermediate States
1. 8.1 Introducing $M_{3C}$
2. 8.2 How to calculate $M_{3C}$
3. 8.3 Factors for Successful Shape Fitting
4. 8.4 Estimated Performance
9. 9 Discussions and Conclusions
10. A Renormalization Group Running
1. A.1 RGE Low-Energy $SU(3)_{c}\times U(1)_{EM}$ up to the Standard Model
2. A.2 RGE for Standard Model with Neutrinos up to MSSM
3. A.3 RGE for the MSSM with Neutrinos up to GUT Scale
4. A.4 Approximate Running Rules of Thumb
11. B Hierarchical Yukawa couplings and Observables
12. C Verifying $M_{T2}$ in Eq(5.4)
13. D Fitting Distributions to Data
14. E Uniqueness of Event Reconstruction
15. F Acronyms List
###### List of Figures
1. 2.1 The states of the $SU(3)$ 10 of $J^{P}=3/2^{+}$ baryons in 1962. Also shown is the $SU(3)$ 27.
2. 2.2 One Loop Contributions to the Higgs mass self energy. (a) Fermion loop. (b) Higgs loop.
3. 2.3 Direct dark matter searches showing the limits from the experiments CDMS, XENON10, and the DAMA signal. Also shown are sample dark-matter cross sections and masses predicted for several supersymmetric models and universal extra dimension models. Figures generated using http://dmtools.berkeley.edu/limitplots/.
4. 2.4 One Loop Contributions to the supersymmetric Higgs $H_{u}$ mass self energy. (a) top loop. (b) stop loop.
5. 2.5 Gauge couplings for the three non-gravitational forces as a function of energy scale for the (left) Standard Model and (right) MSSM.
6. 3.1 Updates to the top-quark mass, strong coupling constant, and bottom-quark mass are responsible for the quantitative stress of the classic GUT relation for $y_{b}/y_{\tau}$.
7. 3.2 Updates to the strange-quark mass and $V_{cb}$ are responsible for the quantitative stress of the Georgi-Jarlskog mass relations and the need to update values from Ref. [1].
8. 4.1 (Left:) Two-body decay and its associated $m_{ll}$ distribution. (Right:) Three-body decay and its associated $m_{ll}$ distribution.
9. 4.2 Cascade decay from $Z$ to $Y$ to $X$ ending in $N$ and visible particle momenta $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$.
10. 4.3 Events in which the new state $Y$ is pair produced and each $Y$ decays through a two-body decay to a massive new state $X$ and a visible particle $1$, after which $X$ decays to a massive state $N$, invisible to the detector, and a visible particle $2$. All previous decay products are grouped into the upstream transverse momentum, $k$.
11. 4.4 Events in which the new state $Y$ is pair produced and each $Y$ decays through a three-body decay to a massive state $N$, invisible to the detector, and visible particles $1$, $2$, $3$, and $4$. All previous decay products are grouped into the upstream transverse momentum, $k$.
12. 4.5 The $\max M_{T2}(\chi)$ shows a kink at the true $M_{N}$ and $M_{Y}$. For this simulation, $m_{Y}=200$ GeV, and $m_{N}=99$ GeV.
13. 5.1 Shows constraints from $\max\,M_{T2}$ used with different combinations as described in eqs(5.7,5.8,5.9) and the $\max m_{12}$ described in Eq(5.11). Intersection is at the true mass $(97\,\,{\rm{GeV}},144\,\,{\rm{GeV}},181\,\,{\rm{GeV}})$ shown by sphere. Events include ISR but otherwise ideal conditions: no background, resolution, or combinatoric error.
14. 6.1 The distribution of 30000 events in 5 GeV bins with perfect resolution and no background. The three curves represent $M_{Y}=200$ GeV (dot-dashed), $M_{Y}=150$ GeV (dotted) and $M_{Y}=100$ GeV (solid) each with $M_{-}=50$ GeV. Each distribution cuts off at the correct $M_{Y}$.
15. 6.2 The $M_{T2}(\chi)$ curves for four events with $M_{N}=50$ GeV and $M_{Y}=100$ GeV. Only the events whose curves start off at $M_{T2}(0)>M_{-}$ intersect the straight line given by $M_{T2}(\chi)-\chi=M_{-}$. The $M_{T2}$ at the intersection is $M_{2C}$ for that event.
16. 6.3 Effect of this maximally spin correlated process on the $M_{2C}$ distribution. Modeled masses are $M_{Y}=200$ GeV and $M_{N}=150$ GeV. The solid black distribution is the uncorrelated case and red dotted distribution is maximally spin correlated.
17. 6.4 Demonstrates the distribution is independent of the COM energy, angle with which the pair is produced with respect to the beam axis, and the frame of reference.
18. 6.5 $\chi^{2}$ fit of 250 events from model P1 of Ref [2] to the theoretical distributions calculated for different $M_{\chi_{2}^{o}}$ values but fixed $M_{\chi_{2}^{o}}-M_{\chi_{1}^{o}}$. The fit gives $M_{\chi_{2}^{o}}=133\pm 6$ GeV.
19. 6.6 $\chi^{2}$ fit of 3000 events from model SPS 6 of Ref [3] to the theoretical distributions calculated for different $M_{\chi_{2}^{o}}$ values but fixed $M_{\chi_{2}^{o}}-M_{\chi_{1}^{o}}$. The fit gives $M_{\chi_{2}^{o}}=221\pm 20$ GeV.
20. 7.1 Shows $g(\chi_{N})$ for the extreme event in Eq(7.2-7.4) with $M_{-}=53\,\,{\rm{GeV}}$ and $M_{N}=67.4\,\,{\rm{GeV}}$. The red dotted line has $M_{G}=150\,\,{\rm{GeV}}$ and shows an event providing a lower bound on $M_{Y}$. The blue dashed line has $M_{G}=170\,\,{\rm{GeV}}$ and shows an event with both a lower bound and an upper bound on $M_{Y}$. The black solid line shows $M_{G}=\sqrt{M_{Y}^{3}/M_{N}}$ where the lower bound equals the upper bound.
21. 7.2 The $M_{2C}$ and $M_{2C,UB}$ distributions of HERWIG events before smearing (to simulate detector resolution) is applied. The distributions’ end-points show $M_{\tilde{\chi}^{o}_{2}}\approx 120$ GeV. The top thick curve shows the net distribution, the next curve down shows the contribution of only the signal events, and the bottom dashed curve shows the contribution of only background events.
22. 7.3 We show the $M_{2C}$ and $M_{2C,UB}$ ideal distributions for five choices of $M_{\tilde{\chi}^{o}_{2}}$ assuming the HERWIG generated $m_{ll}$ and UTM distributions.
23. 7.4 Dependence of $M_{2C}$ distribution on the $m_{ll}$ distribution. Left: The $m_{ll}$ distributions. Right: The corresponding $M_{2C}$ distributions. The solid curves show the case where the $m_{ll}$ distribution when the three-body decay is dominated by the $Z$ boson channel, and the dashed curves show the case where the $m_{ll}$ distribution is taken directly from the HERWIG simulation.
24. 7.5 Left: The UTM distribution observed in the HERWIG simulation. Right: Ideal $M_{2C}$ upper bound and lower bound distribution for a range of upstream transverse momentum (UTM) values ($k_{T}=0,75,175,275,375,575\,\,{\rm{GeV}}$) where $M_{N}=70$ GeV and $M_{Y}=123$ GeV.
25. 7.6 Figure shows that even with large UTM, the distribution is independent of $k^{2}$ and the parton collision energy to the numerical accuracies as calculated from 15000 events. Shown are three distributions and their difference. (1) $k_{T}=175\,\,{\rm{GeV}}$, $k^{2}=(100\,\,{\rm{GeV}})^{2}$, $\sqrt{s}$ distributed via Eq(7.10). (2) $k_{T}=175\,\,{\rm{GeV}}$, $k^{2}=(2000\,\,{\rm{GeV}})^{2}$, $\sqrt{s}$ distributed via Eq(7.10). (3) $k_{T}=175\,\,{\rm{GeV}}$, $k^{2}=(100\,\,{\rm{GeV}})^{2}$, $\sqrt{s}=549\,\,{\rm{GeV}}$.
26. 7.7 The invariant mass of the OSSF leptons from both branches, forming a Dalitz-like wedgebox analysis. The events outside the $m_{ll}\leq 53\,\,{\rm{GeV}}$ signal rectangle provide control samples from which we estimate the background shape and magnitude. The dark events are signal, the lighter events are background.
27. 7.8 The missing transverse momentum vs $M_{2C}$ values for HERWIG data. This shows that a $\not{P}_{T}>20\,\,{\rm{GeV}}$ cut would not affect the distribution for $M_{2C}>65\,\,{\rm{GeV}}$.
28. 7.9 The result of $\chi^{2}$ fits to the data with differing assumptions for $100\,\,{\rm{fb}}^{-1}$ (left panel) and $400\,\,{\rm{fb}}^{-1}$ (right panel). The thick line with filled squares shows the final result with all cuts, resolution error, combinatorics, and backgrounds included and estimated in the shape fitting. This gives us $M_{\tilde{\chi}^{o}_{1}}=63.2\pm 4.1\,\,\,{\rm{GeV}}$ with $700$ events (signal or background) representing $100\,\,{\rm{fb}}^{-1}$. After $400\,\,{\rm{fb}}^{-1}$ this improves to $M_{\tilde{\chi}^{o}_{1}}=66.0\pm 1.8\,\,\,{\rm{GeV}}$. The error-free best case gives $M_{\tilde{\chi}^{o}_{1}}=67.0\pm 0.9\,\,\,{\rm{GeV}}$. The correct value is $M_{\tilde{\chi}^{o}_{1}}=67.4\,\,{\rm{GeV}}$.
29. 7.10 HERWIG data for $100\,\,{\rm{fb}}^{-1}$ (thick line) and the smooth ideal expectation assuming $M_{\tilde{\chi}^{o}_{1}}=70\,\,{\rm{GeV}}$ generated by Mathematica with all resolution, background, and combinatoric effects included (thin line). The $\chi^{2}$ of this curve to the HERWIG gives the solid-black square on the left frame of Fig. 7.9.
30. 8.1 Ideal $M_{3C,LB}$ and $M_{3C,UB}$ distribution for 25000 events in two cases both sharing $\Delta M_{YN}=100$ GeV and $\Delta M_{XN}=50$ GeV. The solid, thick line shows $M_{Y}=200$ GeV, and the dashed, thin line shows $M_{Y}=250$ GeV.
31. 8.2 (Left) The $M_{3C}$ distributions before (solid) and after (dashed) introducing the combinatoric ambiguity. (Right) The $M_{3C}$ distributions with and without UTM. The no UTM case ($k_{T}=0$) is shown by the solid line; the large UTM case with $k_{T}=250$ GeV is shown by the dashed line.
32. 8.3 The effect of energy resolution on the $M_{3C}$ distribution. (Left) The dotted line shows the energy resolution has washed out the sharp cut-off. (Right) $M_{3C,LB}$ with perfect energy resolution plotted against the result with realistic energy resolutions.
33. 8.4 The dependence of the $M_{3C}$ distributions on the parton collision energy. The solid line shows the collision distributed according to Eq(7.10), and the dashed line shows the collision energy fixed at $\sqrt{s}=600$ GeV.
34. 8.5 The effect of missing transverse momentum cuts on the $M_{3C}$ distributions. (Left) The $M_{3C,LB}$ result versus the $\not{P}_{T}$. (Right) The difference of the $M_{3C,UB}$ and $M_{3C,LB}$ distributions with and without the cut $\not{P}_{T}>20$ GeV. The smallest bins of $M_{3C,LB}$ are the only bins to be statistically significantly affected.
35. 8.6 Effect of a spin correlated process on the $M_{3C}$ distributions. Modeled masses are $M_{Y}=200$ GeV, $M_{X}=150$ GeV, and $M_{N}=100$ GeV. The thick black and thick blue lines show the distributions of the uncorrelated lower bound and upper bound $M_{3C}$. The dotted red lines show the distributions of the spin correlated process.
36. 8.7 Fit of ideal $M_{3C}(M_{\tilde{\chi}^{o}_{1}})$ distributions to the HERWIG generated $M_{3C}$ distributions. Includes combinatoric errors, backgrounds, energy resolution, and $\not{P}_{T}$ cuts. (Left) The observed HERWIG counts versus the expected counts for ideal $M_{\tilde{\chi}^{o}_{1}}=95$ GeV. (Right) The $\chi^{2}$ fit to ideal distributions of $M_{\tilde{\chi}^{o}_{1}}=80,85,90,95,100,105,110$ GeV. The correct mass is $M_{\tilde{\chi}^{o}_{1}}=96.0$ GeV.
37. 8.8 Combined constraint from fitting both $\max m_{ll}$ and $M_{3C}$ with the mass differences as free parameters. We parameterized the difference from the true values in the model by $\Delta M_{YN}=80.8\,\,{\rm{GeV}}+\delta\Delta M_{YN}$ and $\Delta M_{XN}=47.0\,\,{\rm{GeV}}+\delta\Delta M_{XN}$. We show the $1,2,3\sigma$ contours.
38. A.1 The impact of RG running parameter ratios with $M_{S}=500$ GeV. These ratios determine $\chi$ defined in Eq. A.34. If $M_{S}=M_{Z}$, all three are degenerate at small $\tan\beta$.
39. E.1 Shows the ellipses defined for $p_{o}$ and $q_{o}$ in Eqs(8.13-8.14) using the correct mass scale for an event that nearly saturates the $M_{3C}$ endpoint. For this event, the $M_{3C}$ lies within $1\%$ of the endpoint and reconstructs $p$ and $q$ to within $4\%$. Perfect energy resolution and correct combinatorics are assumed.
## Chapter 1 Introduction
In the mid-seventeenth century, a group of Oxford natural philosophers,
including Robert Boyle and Robert Hooke, argued for the inclusion of
experiments in the natural philosopher’s toolkit as a means of falsifying
theories [4]. This essentially marks the beginning of modern physics, which is
rooted in an interplay between creating theoretical models, developing
experimental techniques, making observations, and falsifying theories111There
are many philosophy of physics subtleties about how physics progress is made
[5] or how theories are falsified [6] which we leave for future philosophers.
Nevertheless, acknowledging the philosophical interplay between experiment and
scientific theory is an important foundational concept for a physics thesis..
This thesis is concerned with mass determination of new particle states in
these first two stages: we present a new theoretical observation leading to
predictions about particle masses and their relationships, and we develop new
experimental analysis techniques to extract the masses of new states produced
in hadron collider experiments. The remaining steps of the cycle will follow
in the next several years: New high-energy observations will begin soon at the
Large Hadron Collider (LHC)222This does not mean that the LHC will observe new
particles, but that the LHC will perform experiments with unexplored energies
and luminosities that will constrain theories., and history will record which
theories were falsified and which (if any) were chosen by nature.
No accepted theory of fundamental particle physics claims to be ‘true’; rather,
theories claim to make successful predictions within a given domain of
validity for experiments performed within a range of accuracy and precision.
The Standard Model with an extension to include neutrino masses is the best
current ‘minimal’ model. Its predictions agree with all the highest-energy
laboratory-based observations ($1.96\,\,{\rm{TeV}}$ at the Tevatron) to within
the best attainable accuracy and precision (an integrated luminosity
${\mathcal{L}}$ of about $\int dt{\mathcal{L}}\approx 4\,\,{\rm{fb}}^{-1}$)
[7] 333Roughly speaking, in particle physics the domain of validity is given
in terms of the energy of the collision, and the precision of the experiment
is dictated by the integrated luminosity $\int dt{\mathcal{L}}$. Multiplying
the $\int dt{\mathcal{L}}$ with the cross-section for a process gives the
number of events of that process one expects to occur. The larger the
integrated luminosity, the more sensitive one is potentially to processes with
small cross sections.. The Standard Model’s agreement with collider data at this level of precision requires the inclusion of quantum corrections [8, Ch 10]. The success of the
Standard Model will soon be challenged.
At about the time this thesis is submitted, the Large Hadron Collider (LHC)
will (hopefully) begin taking data at a substantially larger collision energy
of $10\,\,{\rm{TeV}}$ and soon after at $14\,\,{\rm{TeV}}$ [9]. The high
luminosity of this collider ($10^{34}\,\,{\rm{cm}}^{-2}\sec^{-1}$) [9] enables
us to potentially measure processes with much greater precision (hopefully
about $\int dt{\mathcal{L}}=300\,\,{\rm{fb}}^{-1}$ after three years). The LHC
holds promise to test many candidate theories of fundamental particle physics
that make testable claims at these new energy and precision frontiers.
The thesis is arranged in two parts. The first regards theoretical
determination of masses of new particle states, and the second regards
experimental determination of masses at hadron colliders. Each part begins
with a review of important developments relevant to placing the new results of
this thesis in context. The content is aimed at recording enough details to
enable future beginning graduate students to follow the motivations and, with
the help of the cited references, reproduce the work. A full pedagogical
explanation of quantum field theory, gauge theories, supersymmetry, grand
unified theories (GUTs) and renormalization would require several thousand
pages reproducing easily accessed lessons found in current textbooks. Instead
we present summaries of key features and refer the reader to the author’s
favored textbooks or review articles on the subjects for further details.
Part I of this thesis contains Chapters 2-3. Part II of this thesis contains
Chapters 4-8.
Chapter 2 outlines theoretical approaches to mass determination that motivate
the principles behind our own work. Exact symmetry, broken symmetry, and fine-
tuning of radiative corrections form the pillars of past successes in
predicting mass properties of new particle states. By mass properties, we mean
both the magnitude and any CP violating phase that cannot be absorbed into
redefinition of the respective field operators. We highlight a few examples of
the powers of each pillar through the discoveries of the past century. We
continue with a discussion of what the future likely holds for predicting
masses of new particle states. The observation of dark-matter suggests that
nature has a stable, massive, neutral particle that has not yet been
discovered. What is the underlying theoretical origin of this dark matter? The
answer on which we focus is supersymmetry. SUSY, short for supersymmetry,
relates the couplings of current fermions and bosons to the couplings of new
bosons and fermions known as superpartners. The masses of the new SUSY
particles reflect the origin of supersymmetry breaking. Many theories provide
plausible origins of supersymmetry breaking but are beyond the scope of this
thesis. We review selected elements of SUSY related to why one might believe
it has something to do with nature and to key elements needed for the
predictions in the following chapter.
Chapter 3, which is drawn from work published by the author and his supervisor
in [10], presents a series of arguments using supersymmetry, unification, and
precision measurements of low-energy observables to suggest some mass
relationships and phases of the yet undiscovered superpartner masses444The
phase of a Majorana fermion mass contributes to the degree to which a particle
and its antiparticle have distinguishable interactions. The convention is to
remove the phase from the mass by redefining the fields such as to make the
mass positive. This redefinition sometimes transfers the CP violating phase to
the particle’s interactions.. In the early 1980s, Georgi and Jarlskog [11]
observed that the masses of the quarks and leptons satisfied several
properties at the grand unification scale. Because these masses cover six
orders of magnitude, these relationships are very surprising. We discover that
with updated experimental observations, the Georgi-Jarlskog mass relationships
can now only be satisfied for a specific class of quantum corrections enhanced
by the ratio of the vacuum expectation value of the two supersymmetric Higgs
$v_{u}/v_{d}\equiv\tan\beta$. We predict that $\tan\beta$ must be large $(\gtrsim 20)$ and that the gluino mass has the opposite sign to the wino mass.
Chapter 4 reviews the existing toolbox of experimental techniques to determine
masses and phases of masses. If the model is known, then determining the
masses and phases can be done by fitting a complete simulation of the model to
the LHC observations. However, to discover properties of an unknown model, one
would like to study mass determination tools that are mostly model-
independent555By model-independent, we mean techniques that do not rely on the
cross section’s magnitude and apply generally to broad classes of models..
Astrophysical dark-matter observations suggest that the lightest new particle state is stable and leaves the detector unnoticed. The dark-matter’s stability suggests that the new class of particles is produced in pairs. The pair-produced invisible particles in the final state would lead to large missing transverse momentum. Properties of hadron colliders make the task of mass
determination of the dark-matter particles more difficult because we do not
know the rest frame or energy of the initial parton collision. We review
techniques based on combining kinematic edges, and others based on assuming
enough on-shell states in one or two events such that the masses can be
numerically solved. We continue with the transverse mass $M_{T}$, which forms a lower bound on the mass of a new particle that decays into a particle carrying missing transverse momentum, and which was used to measure the mass of the $W$. The transverse mass $M_{T}$ has a symmetry under boosts along the beam line. Next we review $M_{T2}$ [12][13], which generalizes this to the case where two particles are produced and each of them decays to invisible particle states.
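To make the $M_{T2}$ definition reviewed here concrete, the following is a minimal numerical sketch in C++ (in the spirit of, but not taken from, the C++ helper codes mentioned later in this thesis). The function names and the brute-force grid scan over the splitting of the missing transverse momentum are illustrative assumptions; the calculators used for the studies in Part II rely on analytic or bisection methods and are far more efficient.

```cpp
#include <algorithm>
#include <cmath>

// Transverse mass squared of one decay branch: a visible system (mass mv, transverse
// momentum pvx, pvy) paired with a hypothesised invisible particle of trial mass chi
// carrying transverse momentum (qx, qy).
double mTsq(double mv, double pvx, double pvy, double chi, double qx, double qy) {
    double ETv = std::sqrt(mv * mv + pvx * pvx + pvy * pvy);
    double ETn = std::sqrt(chi * chi + qx * qx + qy * qy);
    return mv * mv + chi * chi + 2.0 * (ETv * ETn - (pvx * qx + pvy * qy));
}

// M_T2(chi): minimise, over all splittings q1 + q2 = missing pT, the larger of the two
// branch transverse masses.  A coarse grid scan (+-1 TeV in each component) is used
// purely to illustrate the definition; it is slow and only approximately minimising.
double MT2(double mv1, double p1x, double p1y,
           double mv2, double p2x, double p2y,
           double ptmx, double ptmy, double chi) {
    const int N = 400;
    const double range = 1000.0; // GeV
    double best = 1e30;
    for (int i = 0; i <= N; ++i) {
        for (int j = 0; j <= N; ++j) {
            double q1x = -range + 2.0 * range * i / N;
            double q1y = -range + 2.0 * range * j / N;
            double q2x = ptmx - q1x, q2y = ptmy - q1y;
            double m2 = std::max(mTsq(mv1, p1x, p1y, chi, q1x, q1y),
                                 mTsq(mv2, p2x, p2y, chi, q2x, q2y));
            best = std::min(best, m2);
        }
    }
    return std::sqrt(best);
}
```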
Chapter 5 begins the original contributions of this thesis towards more
accurate and robust mass determination techniques in the presence of invisible
new particle states. We begin by introducing novel ways to use the kinematic
variable $M_{T2}$ to discover new constraints among the masses of new states.
We also discuss the relationship of $M_{T2}$ to a recent new kinematic
variable called $M_{CT}$. This work was published by the author in Ref. [14].
Chapter 6 introduces a new kinematic variable $M_{2C}(M_{-})$. Most model-
independent mass determination approaches succeed in constraining the mass
difference $M_{-}=M_{Y}-M_{N}$ between a new state $Y$ and the dark-matter particle $N$, but leave the mass scale $M_{+}=M_{Y}+M_{N}$ more poorly determined. $M_{2C}(M_{-})$ assumes the mass difference and then provides an
event-by-event lower bound on the mass scale. The end-point of the $M_{2C}$
distribution gives the mass $M_{Y}$ which is equivalent to the mass scale if
$M_{-}$ is known. In this chapter we also discover a symmetry of the $M_{2C}$
distribution which, for direct $Y$ pair production, makes the shape of the
distribution entirely independent of the unknown collision energy or rest
frame. Fitting the shape of the distribution improves our accuracy and
precision considerably. We perform some initial estimates of the performance
with several simplifying assumptions and find that with $250$ signal events we
are able to determine $M_{N}$ with a precision and accuracy of $\pm 6$ GeV for models with $(M_{Y}+M_{N})/(M_{Y}-M_{N})\approx 3$. This chapter’s results were published in Ref. [15].
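As a hedged illustration of this construction (a sketch only, not the Mathematica or C++ codes used for the results quoted in this thesis), the event-by-event lower bound can be obtained by intersecting the event’s $M_{T2}(\chi)$ curve with the line $M_{T2}(\chi)-\chi=M_{-}$. The bracketing window and the assumption of a single crossing are simplifications; MT2() is the brute-force sketch given after the Chapter 4 summary above.

```cpp
// Event-by-event lower bound M_2C: assuming the mass difference Mminus = M_Y - M_N is
// known, find the trial invisible mass chi at which M_T2(chi) - chi = Mminus; the value
// of M_T2 at that intersection is the lower bound on M_Y.  Returns 0 if the event gives
// no bound (its curve starts at or below the line) or if no crossing is bracketed.
double MT2(double, double, double, double, double, double, double, double, double);

double M2C(double mv1, double p1x, double p1y,
           double mv2, double p2x, double p2y,
           double ptmx, double ptmy, double Mminus) {
    auto f = [&](double chi) {
        return MT2(mv1, p1x, p1y, mv2, p2x, p2y, ptmx, ptmy, chi) - chi - Mminus;
    };
    if (f(0.0) <= 0.0) return 0.0;      // M_T2(0) <= M_-: no constraint from this event
    double lo = 0.0, hi = 2000.0;       // assumed bracketing window in GeV
    if (f(hi) > 0.0) return 0.0;        // crossing not bracketed within the window
    for (int it = 0; it < 60; ++it) {   // bisection on the (assumed single) crossing
        double mid = 0.5 * (lo + hi);
        if (f(mid) > 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi) + Mminus;    // = M_T2 at the intersection: lower bound on M_Y
}
```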
Chapter 7, which is based on work published by the author with Alan Barr and
Graham Ross in Ref. [16], extends the $M_{2C}$ kinematic variable in two ways.
First we discover that in the presence of large upstream transverse momentum
(UTM), we are able to bound the mass scale from above. This upper bound
is referred to as $M_{2C,UB}$. Here we perform a more realistic case study of
the performance including backgrounds, combinatorics, detector acceptance,
missing transverse momentum cuts and energy resolution. The case study uses
data provided by Dr Alan Barr, created with the HERWIG Monte Carlo generator [17,
18, 19] to simulate the experimental data. The author wrote Mathematica codes
that predict the shape of the distributions built from properties one can
measure with the detector. We find the mass $M_{N}$ by fitting the shapes of the lower-bound distribution $M_{2C}$ and the upper-bound distribution $M_{2C,UB}$. Our simulation indicates that with $700$ events and all anticipated effects taken into consideration we are able to measure $M_{N}$ to $\pm 4$ GeV for models with $(M_{Y}+M_{N})/(M_{Y}-M_{N})\approx 3$.
This indicates that the method described is as good as, if not better than,
the other known kinematic mass determination techniques.
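The shape fitting described above can be illustrated with a minimal binned $\chi^{2}$ comparison between an observed histogram and an ideal template for one trial mass. The binning, the Poisson error approximation, and the scan over trial masses are generic assumptions for illustration, not the author’s Mathematica implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal binned chi^2 between an observed M_2C (or M_2C,UB) histogram and an ideal
// template computed for one trial mass hypothesis.  Poisson errors are approximated by
// the expected counts, and empty template bins are skipped.  Scanning trial masses and
// locating the chi^2 minimum gives the fitted mass; the curvature about the minimum
// gives its uncertainty.
double binnedChi2(const std::vector<double>& observed,
                  const std::vector<double>& expected) {
    double c2 = 0.0;
    std::size_t n = std::min(observed.size(), expected.size());
    for (std::size_t i = 0; i < n; ++i) {
        if (expected[i] <= 0.0) continue;
        double d = observed[i] - expected[i];
        c2 += d * d / expected[i];
    }
    return c2;
}
```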
Chapter 8, which is based on work published by the author with Alan Barr and
Alex Pinder in Ref. [20], extends the constrained mass variables to include
two mass differences in the variable $M_{3C}$. We discuss the properties of
the $M_{3C}$ distribution. We observe that although the technique is more sensitive to energy resolution errors, we are still able to determine both the mass scale and the two mass differences as well as, if not better than, other known kinematic mass determination techniques. For SPS 1a we forecast determining the LSP mass to $\pm 2.4$ GeV with about $3600$ events.
Chapter 9 concludes the thesis. We predict the wino’s mass will have the
opposite sign of the gluino’s mass. We develop new techniques to measure the
mass of dark-matter particles produced at the LHC. Our techniques work with
only two or three new particle states, and have precision and accuracy as good
or better than other known kinematic mass determination techniques.
To facilitate identifying the original ideas contained in this thesis for
which the author is responsible, we list them here explicitly along with the
location in the thesis which elaborates on them:
* •
Chapter 3 The updated values of the strong coupling constant, top quark mass, and strange quark mass lead to quantitative disagreement with the Georgi-Jarlskog mass relationships unless one uses $\tan\beta$-enhanced threshold corrections and fixes the gluino’s mass to have the opposite sign to the wino’s mass (published in Ref. [10]).
* •
Chapter 5 The relationship between Tovey’s $M_{CT}$ and Lester, Barr, and Summers’s $M_{T2}$ variables (published in Ref. [14]).
* •
Chapter 5 Using $M_{T2}$ along intermediate stages of a symmetric cascade decay to discover new constraints among the masses (published in Ref. [14]).
* •
Chapters 6 and 7 The definition and use of the variable $M_{2C}$ to determine the mass of dark-matter-like new particle states (published in Refs. [15] and [16]).
* •
Chapter 7 The ability to obtain an event-by-event upper bound on the mass scale when the mass difference is known and the event has large upstream transverse momentum (published in Ref. [16]).
* •
Chapter 8 Complications and advantages of the logical extension to the variable $M_{3C}$ and its use as a distribution (published in Ref. [20]).
* •
A set of Mathematica Monte Carlo simulations and $M_{T2}$ calculators used to test the above contributions for $M_{2C}$ and $M_{3C}$, and some C++ codes for $M_{2C}$ and $M_{2C,UB}$, which have not yet been published. The author will be happy to share any codes related to this thesis upon request.
## Chapter 2 Mass Determination in the Standard Model and Supersymmetry
### Chapter Overview
This chapter highlights three pillars of past successful new-particle state
prediction and mass determination, and it shows how these pillars are
precedents for the toolbox and concepts employed in Chapter 3’s contributions.
The three pillars on which rest most demonstrated successful theoretical new-
particle-state predictions and mass determinations in particle physics are:
(i) Symmetries, (ii) Broken symmetries, and (iii) Fine Tuning of Radiative
Corrections. We use the narrative of these pillars to review and introduce the
Standard Model and supersymmetry.
The chapter is organized as follows. Section 2.1 gives a few historical
examples of these pillars: the positron’s mass from Lorentz symmetry and from
fine tuning, the $\Omega^{-}$’s mass from an explicitly broken flavor
symmetry, the charm quark’s mass prediction from fine tuning, and the
$W^{\pm}$ and $Z^{o}$ masses from broken $SU(2)_{L}\times U(1)_{Y}$ gauge
symmetries of the Standard Model. These historical examples give us confidence
that these three pillars hold predictive power and that future employments of
these techniques may predict new particle states and their masses. Section 2.2
introduces astrophysical observation of dark-matter which suggests that nature
has a stable, massive, neutral particle that has not yet been discovered.
Section 2.3 introduces key features of Supersymmetry, a model with promising
potential to provide a dark-matter particle and simultaneously address many
other issues in particle physics. We discuss reasons why Supersymmetry may
describe nature, observational hints like the top-quark mass, anomaly in the
muon’s magnetic moment $(g-2)_{\mu}$, and gauge coupling unification. We also
review the classic $SU(5)$ Georgi-Jarlskog mass relations and
$\tan\beta$-enhanced SUSY threshold corrections. Chapter 3 will discuss how
updated low-energy data and the Georgi-Jarlskog mass relationships may provide
a window on predicting mass relationships of the low-energy supersymmetry.
### 2.1 Past Mass Determination and Discovery of New Particle States
#### 2.1.1 Unbroken Lorentz Symmetry: Positron Mass
Our first example uses Lorentz symmetry to predict the existence and mass of the positron. Lorentz symmetry refers to the invariance of physics to the
choice of inertial frame of reference. The Lorentz group, which transforms
vectors from their expression in one inertial frame to another, is generated
by the matrix $M^{\alpha\beta}$. For a spin-1 object (single index 4-vector),
the group elements, parameterized by an antisymmetric $4\times 4$ matrix
$\theta_{\mu\nu}$, are given by
$\Lambda(\theta)=\exp\left(iM^{\mu\nu}\theta_{\mu\nu}\right)$ (2.1)
where the antisymmetric matrix of generators $M^{\mu\nu}$ satisfies
$[M^{\mu\nu},M^{\rho\sigma}]=i(g^{\mu\rho}M^{\nu\sigma}-g^{\nu\rho}M^{\mu\sigma}-g^{\mu\sigma}M^{\nu\rho}+g^{\nu\sigma}M^{\mu\rho})$
(2.2)
and $g^{\mu\nu}={\rm{diag}}(1,-1,-1,-1)$ is the Lorentz metric. We emphasize
that each entry of $M^{\mu\nu}$ is an operator (generator), whereas each entry
of $\theta_{\mu\nu}$ is a number. The generators of the
Poincar$\acute{\rm{e}}$ symmetry are the generators of the Lorentz symmetry
supplemented with generators for space-time translations $P_{\mu}$ satisfying
$[P_{\mu},P_{\nu}]=0$ and
$[M_{\mu\nu},P_{\rho}]=-i(g_{\mu\rho}P_{\nu}-g_{\nu\rho}P_{\mu}).$ (2.3)
Supersymmetry, introduced later in this thesis, is a generalization of the
Poincar$\acute{\rm{e}}$ symmetry.
In 1928, designing a theory with Lorentz symmetry was a major goal in early quantum mechanics. As Dirac described in his original paper, two problems persisted: (1) preservation of probability in quantum mechanics requires an equation that is first order in the time derivative, and (2) the presence of both positive and negative energy solutions. The Klein-Gordon equation is
$(D_{\mu}D^{\mu}-m^{2})\phi=0$ (2.4)
where $D_{\mu}=\partial_{\mu}-ieA_{\mu}$. It is invariant under Lorentz
transformations but suffers from both problems: the equation is second order
in $\partial_{t}$, and it has solutions proportional to $\exp(i\omega t)$ and
$\exp(-i\omega t)$.
Dirac’s 1928 paper on the Dirac equation [21] claims only to solve problem
(1). Because problem (2) was not solved, Dirac claimed “The resulting theory
is therefore still only an approximation”. However, the paper shows how to do
Lorentz invariant quantum mechanics of spin $1/2$ fields. Although in
different notation, Dirac’s paper showed that if one has a set of four
matrices $\gamma^{\mu}$ that satisfy
$\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}=2g^{\mu\nu}$ (2.5)
then the system of equations
$(i\gamma^{\mu}D_{\mu}-m)\psi=0$ (2.6)
transforms covariantly under Lorentz transformations if we define a new class
of transforms $\Lambda_{1/2}=\exp(i\theta_{\mu\nu}(M_{1/2})^{\mu\nu})$ where
the group is generated by
$(M_{1/2})^{\mu\nu}=\frac{i}{2}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu}).$
(2.7)
The new generators $M_{1/2}$ satisfy Eq(2.2) so they form a representation of
the Lorentz group specific to spin $1/2$. The Dirac field transforms as
$\psi^{\prime}=\Lambda_{1/2}\psi$ and the $\gamma^{\mu}$ matrices transform as
$\Lambda^{\mu}_{\
\nu}\gamma^{\nu}=\Lambda^{-1}_{1/2}\gamma^{\mu}\Lambda_{1/2}$.
Dirac interpreted the negative-energy solutions as behaving as if they have the opposite charge but the same mass in the presence of a background electromagnetic field. That a relativistic quantum mechanics can be formulated and still possesses negative-energy solutions suggests that this alternative solution, with the opposite charge and the same mass, may be a genuine prediction of relativistic quantum mechanics. Indeed, Anderson
observed the negative energy version of the electron in 1933 [22]
111Anderson’s first paper makes no reference to Dirac. However, he does
introduce the term positron and suggests renaming the electron the negatron..
The positron has the opposite charge of the electron but the same mass.
Today the Dirac equation is interpreted in terms of quantum field theory where
all energies are always considered positive. With hindsight we see that
Dirac’s motivation was partially wrong, and the Klein-Gordon equation provides
just as good an argument for antiparticles.
#### 2.1.2 Renormalization, Fine Tuning and the Positron
The next example is not the historical origin of the positron’s prediction,
but could have been if physicists in the early 1900s understood the world with
today’s effective field theory tools. The electron’s self-energy poses another
problem which effective quantum field theory and the existence of the positron
solve [23].
If we divide up the electron’s charge into two pieces222The choice of two
pieces is arbitrary and simplifies the calculations to illuminate the fine-
tuning concept being communicated., we can ask how much energy is stored in the system in the process of bringing the two halves together from infinity. At what distance apart will this ‘self energy’ of the electron equal the mass energy of the electron? This is approximately the classical electron radius333The classical electron radius is $4\times$ this value., and the answer, at around $7\times 10^{-14}$ m, is roughly ten times bigger than atomic nuclei. Electron-electron scattering can probe closer than this for
collisions with $\sqrt{s}\approx 2$ MeV. Also electrons are emitted from
nuclei during $\beta$ decay suggesting the electron must be smaller than the
nucleus. We now break the mass up into two quantities: the bare mass
$m_{e\,o}$ and the ‘self-energy’ mass $\delta m_{e}$, with the observed mass
equalling $m_{e}=m_{e\,o}+\delta m_{e}$. Phrasing self energy in terms of a
cut-off one finds
$\delta m_{e}\approx\frac{\Lambda}{4}$ (2.8)
and the cut off $\Lambda$ indicates the energy needed to probe the minimum
distance within which two pieces of the electrons mass and charge must be
contained. At the Planck scale, this requires a cancelation between the bare
mass $m_{e,o}$ and the self energy $\delta m_{e}$ to more than $22$ decimal
places to end up with the observed mass $m_{e}=0.511\,\,{\rm{MeV}}$! Fine
tuning is where large cancelations between terms are needed to give an
observable quantity. This large cancelation could be perceived as an
indication of missing physics below the scale of about $2\,\,{\rm{MeV}}$ where
we would have had a cancelation of the same size as the observable quantity
$m_{e}$.
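To see where the $22$ decimal places come from, evaluate Eq(2.8) with $\Lambda$ at the Planck scale (a rough estimate, taking $\Lambda\approx 1.2\times 10^{19}\,\,{\rm{GeV}}$): $\delta m_{e}\approx\Lambda/4\approx 3\times 10^{18}\,\,{\rm{GeV}}$, so that $\delta m_{e}/m_{e}\approx(3\times 10^{18}\,\,{\rm{GeV}})/(5.11\times 10^{-4}\,\,{\rm{GeV}})\approx 6\times 10^{21}$, and the bare mass must cancel the self energy to roughly $22$ significant digits to leave the observed $0.511\,\,{\rm{MeV}}$.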
The effective quantum field theory used to describe electromagnetism
introduces the positron with the same mass as the electron. The positron acts
to moderate this “self-energy”. In QFT, the self energy of the electron is
partly due to electron interaction with electron, but also partly due to
electron interaction with a virtual positron. This is because the two
interaction vertices $x$ and $y$ are integrated over $\int d^{4}xd^{4}y$ so
the interaction The resulting self energy, in terms of a cut off $\Lambda$, is
$\frac{\delta
m_{e}}{m_{e}}\approx\frac{\alpha}{4\pi}\ln\frac{\Lambda^{2}}{m_{e}^{2}}$ (2.9)
where $\alpha=e^{2}/4\pi$. Weisskopf and Furry were the first to discover that
quantum field theory with both positrons and electrons leads to only a
logarithmic divergence [24, 25]. Now we see that taking the cut off $\Lambda$
to the Plank scale only gives a $6\%$ correction. There is no longer a
cancelation of two large terms, an issue solved by introducing new physics
below the scale at which the low-energy effective theory became fine tuned.
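The quoted $6\%$ follows directly from evaluating Eq(2.9) at the Planck scale: $\frac{\delta m_{e}}{m_{e}}\approx\frac{\alpha}{4\pi}\ln\frac{\Lambda^{2}}{m_{e}^{2}}\approx(5.8\times 10^{-4})\times 2\ln\left(\frac{1.2\times 10^{19}\,\,{\rm{GeV}}}{5.11\times 10^{-4}\,\,{\rm{GeV}}}\right)\approx(5.8\times 10^{-4})\times 103\approx 0.06$.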
#### 2.1.3 Broken Flavor Symmetry: $\Omega^{-}$ Mass
The next example brings us to the early 1960s when Murray Gell-Mann [26] and
Yuval Ne’eman [27] 444Yuval Ne’eman, like the author of this thesis, was a
member of the military while studying for his PhD [28]. were both studying
broken $SU(3)$ flavor symmetry as a way to understand the zoo of particles
being discovered in the 1950s and 1960s. Their study led to the theoretical
determination of the mass of the $\Omega^{-}$ baryon before it was observed.
To place the $SU(3)$ flavor symmetry in context, we begin first with the
$SU(2)$ isospin symmetry. Isospin symmetry postulates that protons $p$ and
neutrons $n$ are indistinguishable if one ignores electromagnetism. Therefore,
the equations and forces governing $p$ and $n$ interactions should have a
symmetry on rotating the proton’s complex field $p$ into the neutron’s complex
field $n$ with an $SU(2)$ rotation. The symmetry is explicitly broken by the
electromagnetic force and to a lesser extent the quark masses, and a special
direction is singled out. We can label the eigenvalues along this broken
direction. The eigenvalues of $I_{3}$ isospin generator label the states along
the broken isospin axis: $I_{3}=1/2$ denotes a proton, and $I_{3}=-1/2$
denotes a neutron. Today we can trace the isospin symmetry to the presence of
up and down quarks with nearly the same mass. Isospin symmetry is broken both
by electromagnetism and by the up and down quark mass difference.
Next we wish to understand how $SU(3)_{f}$ symmetry predicted the mass of the
$\Omega^{-}$. The $SU(3)$ flavor symmetry is an extension of isospin symmetry.
It can be understood in modern language as the existence of three light
quarks. The symmetry is broken because the strange quark is considerably more
massive ( $\approx 103\pm 12\,\,{\rm{MeV}}$ ) than the up and down quarks ( $m_{u}\approx 2.7\pm 0.5$ MeV, and $m_{d}=5.7\pm 0.5$ MeV) 555Quark masses are
very difficult to define because they are not free particles. Here we quote
the current quark masses at an $\overline{MS}$ renormalization scheme scale of
$\mu=2\,\,{\rm{GeV}}$ as fit to observations in Chapter 3.. The group $SU(3)$
has isospin $SU(2)$ as a subgroup so $I_{3}$ remains a good quantum number; in
addition $SU(3)$ states are also specified by the hypercharge $Y$. Quarks and
anti-quarks are given the following charge assignments ($I_{3},Y$):
$u=(1/2,1/3)$, $d=(-1/2,1/3)$, $s=(0,-2/3)$, $\bar{u}=(-1/2,-1/3)$,
$\bar{d}=(1/2,-1/3)$, $\bar{s}=(0,2/3)$. Representations of $SU(3)$ are formed
by tensors of the fundamental and its conjugate with either symmetrized or
antisymmetrized indices and with traces between upper and lower indices
removed. Representations are named by the number of independent components
that tensor possesses and shown by bold numbers like 3, 8, 10, 27 etc.
Gell-Mann and Ne’eman did not know what representation of $SU(3)$ was the
correct one to describe the many baryons and mesons being discovered; to
describe the spin $3/2$ light baryons, they were each considering the 10 and
the 27. The representation 10 is formed by $B^{abc}$ where $a$,$b$, and $c$
are indices that run over $u$, $d$, and $s$, and where the tensor $B^{abc}$ is
symmetric on interchanges of the three indices. The states are displayed as
the red dots in Fig. 2.1 where the conserved quantum number $I_{3}$ is
plotted against hypercharge $Y$. The 27 is given by a tensor $B^{ab}_{\ \ cd}$
where $(ab)$ is symmetrized, $(cd)$ is symmetrized, and the $9$ traces
$B^{ac}_{\ \ cd}=0$ are removed. The 27 is shown by the smaller blue dots. The
particles and the observed masses as of 1962 are shown in Fig. 2.1.
Figure 2.1: The states of the $SU(3)$ 10 of $J^{P}=3/2^{+}$ baryons in 1962.
Also shown is the $SU(3)$ 27.
In July 1962, both Gell-Mann and Ne’eman went to the 11th International
Conference on High-Energy Physics at CERN. Ne’eman learned from Sulamith and
Gerson Goldhaber (a husband and wife pair) that
$K^{+}$(u$\bar{\rm{s}}$,$I_{3}=+1/2$,$Y=1$) vs $N$ (ddu,$I_{3}=-1/2$,$Y=1$)
scattering did not lead to a resonance at $(I_{3}=0,Y=2)$ with a mass range
near $1000$ MeV as one would expect if the pattern followed was that of the 27
[29]. Gell-Mann had also learned of the Goldhabers’ negative result, now known
as the Goldhaber gap. During the discussion after the next talk at the
conference, Gell-Mann and Ne’eman both planned on announcing their observation; Gell-Mann was called on first, and announced that one should discover the $\Omega^{-}$ with a mass near $1680$ MeV. The $\Omega^{-}$ was
first seen in 1964 at Brookhaven [30] with a mass of $M_{\Omega^{-}}=1686\pm
12$ MeV. Amazingly the spin of $\Omega^{-}$ was not experimentally verified
until 2006 [31].
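The size of the prediction can be recovered from the (approximately) equal spacing of the 10 implied by the broken symmetry; using rounded values for the lighter $J^{P}=3/2^{+}$ masses, $M_{\Sigma^{*}}-M_{\Delta}\approx 1385-1232\approx 153\,\,{\rm{MeV}}$ and $M_{\Xi^{*}}-M_{\Sigma^{*}}\approx 1530-1385\approx 145\,\,{\rm{MeV}}$, so that $M_{\Omega^{-}}\approx M_{\Xi^{*}}+150\,\,{\rm{MeV}}\approx 1680\,\,{\rm{MeV}}$, in line with the announced prediction.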
The broken $SU(3)_{F}$ flavor symmetry successfully predicted and explained
the masses of the $\Omega^{-}$ and many other baryons and mesons. The general
formula for the masses in terms of the broken symmetry was developed both by
Gell-Mann and by Okubo and is known as the Gell-Mann-Okubo mass formula [32].
Despite this success (and many others) the quark-model theoretical description
used here is still approximate; the strong force is only being parameterized.
The quarks here classify the hadrons, but the masses of the quarks do not add
up to the mass of the hadron. Quark masses that are thought of as being about
$1/3$ the mass of the baryon are called “constituent quark masses”. These are
not the quark masses that are most often referenced in particle physics. A
consistent definition of quark mass and explanation of how these quarks masses
relate to hadron masses will wait for the discovery of color $SU(3)_{c}$. In
$SU(3)_{c}$ the masses of the quarks are defined in terms of Chiral
Perturbation Theory ($\chi$PT) and are called “current” quark masses. We will
be using current quark masses as a basis for the arguments in Chapter 3.
Another important precedent set here is that of arranging the observed
particles in a representation of a larger group that is broken. This is the
idea behind both the Standard Model and the Grand Unified Theories to be
discussed later. The particle content and the forces are arranged into
representations of a group. When the group is broken, the particles
distinguish themselves as those we more readily observe.
#### 2.1.4 Charm Quark Mass Prediction and Fine Tuning of Radiative
Corrections
In the non-relativistic quark model, the neutral $K^{o}$ is a bound state of
$d$ and $\bar{s}$ and $\bar{K}^{o}$ is a bound state of $s$ and $\bar{d}$. The
Hamilontian for this system is given by 666In this Hamilontian, we’re
neglecting the CP violating features.
$H_{K,\bar{K}}=\left(\begin{matrix}K^{o}&\bar{K}^{o}\end{matrix}\right)\left(\begin{matrix}M^{2}_{o}&\delta^{2}\cr\delta^{2}&M^{2}_{o}\end{matrix}\right)\left(\begin{matrix}K^{o}\cr\bar{K}^{o}\end{matrix}\right).$
(2.10)
A non-zero coupling $\delta$ between the two states leads to two distinct mass
eigenstates. For $\delta\ll M_{o}$, the two mass eigenstates have almost equal
mixtures of $K^{o}$ and $\bar{K}^{o}$ and are called $K_{1}$ and $K_{2}$.
Experimentally the mass splitting between the two eigenstates is
$M_{K_{2}}-M_{K_{1}}=3.5\times 10^{-6}$ eV.
During the 1960s, a combination of chiral four-Fermi interactions with the
approximate chiral symmetry $SU(3)_{L}\times SU(3)_{R}$ of the three lightest
quarks was proving successful at describing many hadronic phenomena; however,
it predicted $\delta$ was nonzero. Let’s see why. The effective weak interaction Lagrangian that was successful in nuclear decays and Kaon decays
[33][34] was given by
${\mathcal{L}}=\frac{2\,G_{F}}{\sqrt{2}}J^{\mu}J^{\dagger}_{\mu}\ \ \
{\rm{where}}\ \ \ J^{\mu}=J^{\mu}_{(L)}+J^{\mu}_{(H)}$ (2.11)
and
$J_{(L)}^{\mu}=\bar{\nu}_{e}\gamma^{\mu}\frac{1}{2}(1-\gamma^{5})e+\bar{\nu}_{\mu}\gamma^{\mu}\frac{1}{2}(1-\gamma^{5})\mu$ (2.12)
$J_{(H)}^{\mu}=\bar{u}\gamma^{\mu}\frac{1}{2}(1-\gamma^{5})\,d\,\cos\theta_{c}+\bar{u}\gamma^{\mu}\frac{1}{2}(1-\gamma^{5})\,s\,\sin\theta_{c}.$ (2.13)
and $\theta_{c}$ is the Cabibbo angle. Using a spontaneously broken
$SU(3)_{L}\times SU(3)_{R}$ [35] one can calculate the loops connecting
$u\bar{s}$ to $\bar{u}s$. These loops are responsible for the Kaon mass
splitting and give
$M_{K_{2}}-M_{K_{1}}=\frac{\delta^{2}}{M_{K}}\approx\frac{\cos^{2}\theta_{c}\sin\theta_{c}}{4\pi^{2}}f^{2}_{K}M_{K}G_{F}(G_{F}\Lambda^{2})$
(2.14)
where $\Lambda$ is the cut off on the divergent loop and $f_{K}\approx
180\,\,{\rm{MeV}}$ [36][37][35][33]. Using this relation, the cut-off cannot
be above $2$ GeV without exceeding the observed Kaon mass splitting. There are
higher order contributions, each with higher powers of $G_{F}\Lambda^{2}$,
that may cancel to give the correct answer. Indeed Glashow, Iliopoulos, and
Maiani (GIM) [37] observe in a footnote “Of course there is no reason that one
cannot exclude _a priori_ the possibility of a cancelation in the sum of the
relevant perturbation expansion in the limit $\Lambda\rightarrow\infty$”. The
need for an extremely low cut off was a problem of fine tuning and naturalness
with respect to radiative corrections.
This resolution of the unnaturally low scale of $\Lambda$ suggested by the mass splitting $M_{K_{2}}-M_{K_{1}}$ foreshadows supersymmetry 777I have learned from Alan Barr that Maiani also sees the GIM mechanism as a precedent in favor of supersymmetry.. GIM proposed a new broken quark symmetry $SU(4)$ which required the existence of the charm quark, and also discussed its manifestation in terms of massive intermediate vector bosons $W^{+}$, $W^{-}$. They introduced a unitary matrix $U$, which would later be known as the
$CKM$ matrix. We group the quarks as $d^{i}=\{d,s,\ldots\}$ and $u^{i}=\{u,c,\ldots\}$; the matrix $U_{ij}$ then links up-quark $i$ to down-quark $j$ in the charged current
$(J_{(H)})^{\mu}=\bar{u}^{i}\gamma^{\mu}\frac{1}{2}(1-\gamma^{5})d^{j}U_{ij}.$
(2.15)
The Kaon couples to an effective neutral current formed by the exchange of a $W^{+}$ together with a $W^{-}$. In the limit of exact $SU(4)$ (all quark masses equal) the
coupling of the Kaon to the neutral current is proportional to
${\mathcal{A}}\propto\sum_{i=u,c,\ldots}U^{\dagger}_{di}U_{is}=(U^{\dagger}U)_{ds}=\delta_{ds}=0$.
The coupling $\delta^{2}$ between $K^{o}$ and $\bar{K}^{o}$ is proportional to
${\mathcal{A}}^{2}$. This means that in the limit of $SU(4)$ quark symmetry,
there would be no coupling to enable $K_{1}$ and $K_{2}$ mass splitting.
However, the observed $K_{1}$ $K_{2}$ mass splitting is non-zero, and $SU(4)$
is not an exact symmetry; it is broken by the quark masses. The mass splitting
is dominated by the mass of the new quark $m_{c}$. GIM placed a limit on
$m_{c}\lessapprox 3\,\,{\rm{GeV}}$. One might think of the proposed $SU(4)$
symmetry becoming approximately valid above scale $\Lambda\approx m_{c}$. The
new physics (in this case the charm quark) was therefore predicted and found
to lie at the scale where fine tuning could be avoided.
#### 2.1.5 The Standard Model: Broken Gauge Symmetry and the $W^{\pm}$, $Z^{o}$
Boson Masses
The Standard Model begins with a gauge symmetry $SU(3)_{c}\times
SU(2)_{L}\times U(1)_{Y}$ that gets spontaneously broken to $SU(3)_{c}\times
U(1)_{EM}$. For a more complete pedagogical introduction, the author has found
Refs. [38, 39, 40] and the PDG [41, Ch 10] useful.
The field content of the Standard Model, together with the modern extensions we
will use later, is given in Table 2.1. The index $i$ labels the three generations.
The fermions are all represented as left-handed 2-component Weyl
states888 The $(2,0)$ is the projection of Eq(2.7) onto left-handed states
with $P_{L}=1/2(1-\gamma^{5})$. In a Weyl basis, Eq(2.7) is block diagonal so
one can use just the two components that survive $P_{L}$. If $e^{c}$
transforms as a $(2,0)$ then $i\sigma^{2}(e^{c})^{*}$ transforms as $(0,2)$..
The gauge bosons do not transform covariantly under the gauge group; rather,
they transform as connections 999The gauge fields transform as connections
under gauge transformations:
$W^{\prime}_{\mu}=UW_{\mu}U^{-1}+(i/g)U\partial_{\mu}U^{-1}$..
Field | Lorentz${}^{\rm{\ref{FootnoteLorentzNotation}}}$ | $SU(3)_{c}$ | $SU(2)_{L}$ | $U(1)_{Y}$
---|---|---|---|---
$(L)^{i}$ | $(2,0)$ | $1$ | $2$ | $-1$
$(e^{c})^{i}$ | $(2,0)$ | $1$ | $1$ | $2$
$(\nu^{c})^{i}$ | $(2,0)$ | $1$ | $1$ | $0$
$(Q)^{i}$ | $(2,0)$ | $3$ | $2$ | $1/3$
$(u^{c})^{i}$ | $(2,0)$ | $\bar{3}$ | $1$ | $-4/3$
$(d^{c})^{i}$ | $(2,0)$ | $\bar{3}$ | $1$ | $2/3$
$H$ | $1$ | $1$ | $2$ | $+1$
$B_{\mu}$ | $4$ | $1$ | $1$ | $0^{{}^{\ddagger}}$
$W_{\mu}$ | $4$ | $1$ | $3^{{}^{\ddagger}}$ | $0$
$G_{\mu}$ | $4$ | $8^{{}^{\ddagger}}$ | $1$ | $0$
Table 2.1: Transformation properties of the Standard Model fields. ‡ Indicates
that the field does not transform covariantly but rather transforms as a
connection${}^{\rm{\ref{ConnectionTransform}}}$.
In 1967 [42], Weinberg set the stage with a theory of leptons consisting
of the left-handed leptons, which form $SU(2)$ doublets $L^{i}$, the right-
handed charged leptons, which form $SU(2)$ singlets $(e^{c})^{i}$, and the
Higgs field $H$, which is an $SU(2)$ doublet. Oscar Greenberg first proposed
3-color internal charges of $SU(3)_{c}$ in 1964 [43]101010Greenberg, like the
author, also has ties to the US Air Force. Greenberg served as a Lieutenant in
the USAF from 1957 to 1959. A discussion of the history of $SU(3)_{c}$ can be
found in Ref. [44].. It was not until Gross, Wilczek, and Politzer
discovered asymptotic freedom in 1973 that $SU(3)_{c}$ was
taken seriously as a theory of the strong nuclear force [45][46]. The
hypercharge in the SM is not the same hypercharge as in the previous
subsection 111111In the SM, the $d$ and $s$ (both left-handed or right-handed)
have the same hypercharge, but in flavor $SU(3)_{f}$ (2.1.3) they have
different hypercharge.. The $U(1)_{Y}$ hypercharge assignments $Y$ are
designed to satisfy $Q=\sigma^{3}/2+Y/2$, where $Q$ is the electric charge
operator and $\sigma^{3}/2$ acts on the $SU(2)_{L}$ doublets (if the field
is charged under $SU(2)_{L}$; otherwise that term is $0$).
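As a quick sanity check of the assignments in Table 2.1 (a sketch; the $T_{3}$ values of the doublet components are entered by hand and are not listed in the table), one can verify $Q=\sigma^{3}/2+Y/2$ component by component:

```python
# (name, T3 of the component, hypercharge Y, expected electric charge Q)
fields = [
    ("nu_L", +0.5, -1,    0),
    ("e_L",  -0.5, -1,   -1),
    ("e^c",   0.0, +2,   +1),
    ("u_L",  +0.5, +1/3, +2/3),
    ("d_L",  -0.5, +1/3, -1/3),
    ("u^c",   0.0, -4/3, -2/3),
    ("d^c",   0.0, +2/3, +1/3),
    ("h^+",  +0.5, +1,   +1),
    ("h^0",  -0.5, +1,    0),
]

for name, t3, y, q_expected in fields:
    q = t3 + y / 2          # Q = sigma^3/2 + Y/2
    assert abs(q - q_expected) < 1e-12, name
    print(f"{name:>4}: T3={t3:+.1f}  Y={y:+.3f}  ->  Q={q:+.3f}")
```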
The gauge fields have the standard Lagrangian
${\mathcal{L}}_{W,B,G}=\frac{-1}{4}B_{\mu\nu}B^{\mu\nu}+\frac{-1}{2}{\rm{Tr}}W_{\mu\nu}W^{\mu\nu}+\frac{-1}{2}{\rm{Tr}}G_{\mu\nu}G^{\mu\nu}$
(2.16)
where $B_{\mu\nu}=\frac{i}{g^{\prime}}[D_{\mu},D_{\nu}]$, with $D_{\mu}$ the
covariant derivative with respect to $U(1)$:
$D_{\mu}=\partial_{\mu}-ig^{\prime}B_{\mu}$. Likewise
$W_{\mu\nu}=\frac{i}{g}[D_{\mu},D_{\nu}]$ with
$D_{\mu}=\partial_{\mu}-ig\frac{\sigma^{a}}{2}W^{a}_{\mu}$ and
$G_{\mu\nu}=\frac{i}{g_{3}}[D_{\mu},D_{\nu}]$ with
$D_{\mu}=\partial_{\mu}-ig_{3}\frac{\lambda^{a}}{2}G^{a}_{\mu}$ with
$\sigma/2$ and $\lambda/2$ the generators of $SU(2)$ and $SU(3)$ gauge
symmetry respectively. The $1/g$ factors in these definitions give the
conventional normalization of the field strength tensors. At this stage,
$SU(2)$ gauge symmetry prevents terms in the Lagrangian like
$\frac{1}{2}M^{2}_{W}W_{\mu}W^{\mu}$ which would give $W$ a mass.
The leptons and quarks acquire mass through the Yukawa sector given by
${\mathcal{L}}_{Y}=Y^{e}_{ij}L^{i}(i\sigma^{2}H^{*})(e^{c})^{j}+Y^{d}_{ij}Q^{i}(i\sigma^{2}H^{*})(d^{c})^{j}+Y^{u}_{ij}Q^{i}H(u^{c})^{j}+h.c.$
(2.17)
where we have suppressed all but the flavor indices 121212Because both $H$ and
$L$ transform as 2 their contraction to form an invariant is done like
$L^{a}H^{b}\epsilon_{ab}$ where $\epsilon_{ab}$ is the antisymmetric tensor..
Because $H$ and $i\sigma_{2}H^{*}$ both transform as a 2 (as opposed to a
$\overline{\textbf{2}}$), these can be used to couple a single Higgs field to
both up-like and down-like quarks and leptons. If neutrinos have a Dirac mass
then Eq(2.17) will also have a term $Y^{\nu}_{ij}L^{i}H(\nu^{c})^{j}$. If the
right-handed neutrinos have a Majorana mass, there will also be a term
$M_{R_{jk}}(\nu^{c})^{j}(\nu^{c})^{k}$.
With these preliminaries, we can describe how the $W$ boson’s mass was
predicted. The story relies on the Higgs mechanism that enables a theory with
an exact gauge symmetry to give mass to gauge bosons in such a way that the
gauge symmetry is preserved, although hidden. The Higgs sector Lagrangian is
${\mathcal{L}}_{H}=D_{\mu}H^{\dagger}D^{\mu}H-\mu^{2}H^{\dagger}H-\lambda(H^{\dagger}H)^{2}$
(2.18)
where the covariant derivative coupling to $H$ is given by
$D_{\mu}=\partial_{\mu}-ig\frac{\sigma^{a}}{2}W^{a}_{\mu}-i\frac{g^{\prime}}{2}B_{\mu}$.
The gauge symmetry is spontaneously broken if $\mu^{2}<0$, in which case $H$
develops a vacuum expectation value (VEV) which by choice of gauge can be
chosen to lie along $\langle H\rangle=(0,v/\sqrt{2})$. The gauge bosons
receive an effective mass due to the coupling between the vacuum state of $H$
and the fluctuations of $W$ and $B$:
$\langle{\mathcal{L}}_{H}\rangle=\frac{g^{2}v^{2}}{8}\left((W^{1})_{\mu}^{2}+(W^{2})_{\mu}^{2}\right)+\frac{v^{2}}{8}\left(g(W^{3})_{\mu}-g^{\prime}B_{\mu}\right)^{2}.$
(2.19)
From this expression, one can deduce that $W^{1}_{\mu}$ and $W^{2}_{\mu}$ have a
mass of $M^{2}_{W}=g^{2}v^{2}/4$, and the linear combination
$Z_{\mu}=(g^{2}+g^{\prime 2})^{-1/2}(g(W^{3})_{\mu}-g^{\prime}B_{\mu})$ has a
mass $M^{2}_{Z}=(g^{2}+g^{\prime 2})v^{2}/4$. The massless photon $A_{\mu}$ is
given by the orthogonal combination $A_{\mu}=(g^{2}+g^{\prime
2})^{-1/2}(g^{\prime}(W^{3})_{\mu}+gB_{\mu})$. The weak mixing angle
$\theta_{W}$ is given by $\sin\theta_{W}=g^{\prime}(g^{2}+g^{\prime
2})^{-1/2}$, and the electric charge coupling $e$ is given by $e=g\sin\theta_{W}$.
Before $W^{\pm}$ or $Z^{o}$ were observed, the values of $v$ and
$\sin^{2}\theta_{W}$ could be extracted from the rates of neutral-current weak
processes and from left-right asymmetries in weak-interaction experiments
[47]. At tree level, for momenta much less than $M_{W}$, the four-Fermi
interaction can be compared to predict
$M_{W}^{2}=\sqrt{2}e^{2}/(8G_{F}\sin^{2}\theta_{W})$. By 1983 when the $W$
boson was first observed, the predicted mass including quantum corrections was
given by $M_{W}=82\pm 2.4$ GeV to be compared with the UA1 Collaboration’s
measurement $M_{W}=81\pm 5$ GeV [48]. A few months later the $Z^{o}$ boson was
observed [49]. Details of the $W$ boson’s experimental mass determination will
be discussed in Section 4.4 as it is relevant for this thesis’s contributions
in Chapters 5 \- 8.
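A minimal numerical sketch of the tree-level relation $M_{W}^{2}=\sqrt{2}e^{2}/(8G_{F}\sin^{2}\theta_{W})$ quoted above (the inputs below, a running $\alpha\approx 1/128$ and $\sin^{2}\theta_{W}\approx 0.23$, are illustrative values rather than the historical fit inputs):

```python
import math

G_F      = 1.166e-5   # Fermi constant [GeV^-2]
alpha    = 1 / 128.0  # electromagnetic coupling near the weak scale (illustrative)
sin2_thW = 0.23       # weak mixing angle (illustrative)

e2  = 4 * math.pi * alpha
# tree-level relation: M_W^2 = sqrt(2) e^2 / (8 G_F sin^2 θ_W)
M_W = math.sqrt(math.sqrt(2) * e2 / (8 * G_F * sin2_thW))
M_Z = M_W / math.sqrt(1 - sin2_thW)

print(f"M_W ≈ {M_W:.1f} GeV,  M_Z ≈ {M_Z:.1f} GeV")
```

Even this tree-level estimate lands close to the quoted prediction of $82\pm 2.4$ GeV.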
Figure 2.2: One-loop contributions to the Higgs mass self-energy. (a) Fermion
loop. (b) Higgs loop.
The Standard Model described here has one serious difficulty regarding fine
tuning of the Higgs mass radiative corrections. The Higgs field has two
contributions to the mass self-energy shown in Fig. 2.2. The physical mass
$m_{H}$ is given by
$m^{2}_{H}\approx
m_{o,H}^{2}-\sum_{f=u,d,e}\frac{{\rm{Tr}}(Y^{f}(Y^{f})^{\dagger})}{8\pi^{2}}\Lambda^{2}+\frac{\lambda}{2\pi^{2}}\Lambda^{2}$
(2.20)
where $m_{o,H}$ is the bare Higgs mass, $\Lambda$ is the cut-off energy scale,
$Y^{u}$, $Y^{d}$, $Y^{e}$ are the three Yukawa coupling matrices, and the third
term is from the Higgs loop. Assuming the physical Higgs mass $m_{H}$ is near the
electroweak scale ($\approx 100\,\,{\rm{GeV}}$)131313We use $100$ GeV as a
general electroweak scale. The current fits to Higgs radiative corrections
suggest $m_{H}=95\pm 35$ GeV [8, Ch10], but the direct search limits require
$m_{H}\gtrsim 115$ GeV. and that $\Lambda$ is at the Planck scale, these
terms need to cancel to some $34$ orders of magnitude. If instead we place the
cut off at $\Lambda\approx 1$ TeV, the correction from the Higgs-fermion
loop141414We use the fermion loop because the Higgs loop depends linearly on
the currently unknown quartic Higgs self coupling $\lambda$. is comparable to a
physical Higgs mass of $m_{H}=100$ GeV. Once again, one cannot exclude _a
priori_ the possibility of a cancelation between terms of this magnitude.
However, arguing that such a cancelation is unnatural successfully predicted
the charm quark mass in Sec 2.1.4. Arguing that such fine-tuning is unnatural
in the Higgs sector suggests new physics should be present below around $1$ TeV.
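To put numbers on the fine-tuning argument (a rough sketch following the form of Eq. (2.20), keeping only the dominant top-Yukawa piece and assuming $y_{t}\approx 1$ and $m_{H}=100$ GeV):

```python
import math

y_t  = 1.0
m_H  = 100.0     # assumed physical Higgs mass [GeV]
M_Pl = 1.2e19    # Planck scale [GeV]
Lam  = 1.0e3     # alternative cut-off of 1 TeV [GeV]

def top_loop(cutoff):
    """Quadratically divergent top-loop piece of Eq. (2.20), Tr(Y^u Y^u†) ≈ y_t^2."""
    return y_t**2 / (8 * math.pi**2) * cutoff**2

for cutoff in (M_Pl, Lam):
    dm2 = top_loop(cutoff)
    print(f"Lambda = {cutoff:.1e} GeV: |delta m_H^2| / m_H^2 ≈ {dm2 / m_H**2:.2e}")
```

With $\Lambda$ at the Planck scale the top-loop correction exceeds $(100\,\,{\rm{GeV}})^{2}$ by more than thirty orders of magnitude, while with $\Lambda\approx 1$ TeV it is of the same order as the physical mass squared.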
### 2.2 Dark Matter: Evidence for New, Massive, Invisible Particle States
Astronomical observations also point to something missing in the Standard
Model. Astronomers see evidence for ‘dark matter’ in galaxy rotation curves,
gravitational lensing, the cosmic microwave background (CMB), colliding galaxy
clusters, large-scale structure, and high-redshift supernovae. Recent
detailed reviews of the astrophysical evidence for dark matter can be found in
Bertone et al. [50] and Baer and Tata [51]. We discuss the evidence for dark
matter below.
* •
Galactic Rotation Curves
Observations of the velocity of stars normal to the galactic plane of the
Milky Way led Jan Oort [52] in 1932 to infer the need for ‘dark matter’. The
1970s saw the first systematic use of Doppler shifts to compare the rotation of
galactic centers with that of the periphery. Using a few idealized assumptions,
we can estimate how the velocity should trend with changing $r$. The $21$ cm HI
line allows rotation curves of galaxies to be measured far outside the visible
disk of a galaxy [53]. To get a general estimate of how to interpret the
observations, we assume spherical symmetry and circular orbits (a short
numerical sketch of the two limiting cases appears after this list). If the
mass responsible for the motion of a star
at a radius $r$ from the galactic center is dominated by a central mass $M$
contained in $R\ll r$, then the tangential velocity is
$v_{T}\approx\sqrt{G_{N}\,M/r}$ where $G_{N}$ is Newton’s constant. If instead
the star’s motion is governed by mass distributed with a uniform density
$\rho$ then $v_{T}=r\,\sqrt{4\,G_{N}\,\rho\,\pi/3}$. Using these two extremes,
we can estimate where most of the galaxy's mass has been enclosed by seeking the
distance $r$ beyond which the velocity begins to fall like $\sqrt{1/r}$. The
observations [53, 54] show that for no $r$ does $v_{T}$ fall as $\sqrt{1/r}$.
Typically $v_{T}$ begins to rise linearly with $r$ and then for some $r>R_{o}$
it stabilizes at $v_{T}=\rm{const.}$. In our galaxy the constant $v_{T}$ is
approximately $220{\rm{km}}/{\rm{sec}}$. The flat $v_{T}$ vs $r$ outside the
optical disk of the galaxy implies $\rho(r)\propto r^{-2}$ for the dark
matter. The density profile of the dark matter near the center is still in
dispute. The rotation curves of galaxies are among the most compelling
pieces of evidence that a non-absorbing, non-luminous source of gravitation
permeates galaxies and extends far beyond their visible disks.
* •
Galaxy clusters
In 1937 Zwicky [55] published research showing that galaxies within clusters
of galaxies were moving with velocities such that they could not have been
bound together without large amounts of ‘dark matter’. Because dark matter is
found near galaxies, a modified theory of newtonian gravity (MOND) [56] has
been shown to agree with the galactic rotation curves of galaxies. A recent
observation of two clusters of galaxies that passed through each other, known
as the bullet cluster, shows that the dark matter is separated from the
luminous matter of the galaxies[57]. The dark matter is measured through the
gravitational lensing effect on the shape of the background galaxies. The
visible ‘pressureless’ stars and galaxies pass right through each other during
the collision of the two galaxy clusters. The dark matter lags behind the
visible matter. This separation indicates that the excess gravitation observed
in galaxies and galaxy clusters cannot be accounted for by a modification of
the gravitation of the visible sector, but requires a separate ‘dark’
gravitational source not equal to the luminous matter that can lag behind and
induce the observed gravitational lensing. The bullet cluster observation is a
more direct, empirical observation of dark matter.
The dark matter seen in the rotation curves and galaxy clusters could not be
due to $H$ gas (the most likely baryonic candidate) because it would have been
observed in the $21$ cm $HI$ observations and is bounded to compose no more
than $1\%$ of the mass of the system [58]. Baryonic dark matter in the form of
MAssive Compact Halo Objects (MACHOs) is bounded to be $\lesssim 40\%$ of the
total dark matter [8, Ch22].
* •
Not Baryonic: The Anisotropy of the Cosmic Microwave Background and
Nucleosynthesis
The anisotropy of the cosmic microwave background radiation provides a strong
constraint not only on the total energy density of the universe, but also on the
dark-matter density and its baryonic part. Here is a short
explanation of why.
The anisotropy provides a snapshot of the density fluctuations in the
primordial plasma at the time the universe cools enough so that the protons
finally ‘recombine’ with the electrons to form neutral hydrogen. The initial
density fluctuations imparted by inflation follow a simple power law. The
universe’s equation-of-state shifts from radiation dominated to matter
dominated before the time of recombination. Before this transition the matter-
photon-baryon system oscillates between gravitational collapse and radiation
pressure. After matter domination, the dark matter stops oscillating and only
collapses, but the baryons and photons remain linked and oscillating. The
peaks are caused by the number of times the baryon-photon fluid collapses and
rebounds before recombination.
The first peak of the CMB anisotropy power spectrum requires the total energy
density of the universe to be very close to the critical density $\Omega=1$.
This first peak is the scale where the photon-baryon plasma is collapsing
following a dark-matter gravitational well, reaches its peak density, but
doesn’t have time to rebound under pressure before recombination. This
attaches a length scale to the angular scale observed in the CMB anisotropy in
the sky. The red-shift attaches a distance scale to the long side of an
isosceles triangle. We can determine the geometry from the angle subtended at
the vertex of this isosceles triangle with all three sides of fixed geodesic
length: the angle is larger for such a triangle on a sphere than on a flat
surface. Comparing the red-shift, angular scale, and length scale allows
one to measure the total spatial curvature of space-time [59]; the result
corresponds to a total density very close to the critical density, $\Omega=1$.
We exclude baryonic matter as being the dark matter both from direct searches
of $H$ and MACHOs and from cosmological measurement of $\Omega_{M}$ vs
$\Omega_{b}$. $\Omega_{M}$ is the fraction of the critical density that needs
to obey a cold-matter equation of state and $\Omega_{b}$ is the fraction of
the critical density that is composed of baryons. $\Omega_{\Lambda}$ is the
fraction of the critical density with a vacuum-like equation of state (dark
energy). The anisotropy of the Cosmic Microwave Background (CMB) provides a
constraint on the reduced baryon density ($h^{2}\Omega_{b}$) and the reduced
matter density ($h^{2}\Omega_{M}$).
As the dark matter collapses, the baryon-photon fluid oscillates within these
wells. The baryons however have more inertia than the photons and therefore
‘baryon-load’ the oscillation. The relative heights of the destructive and
constructive interference positions indicate how much gravitational collapse
due to dark matter occurs during a baryon-photon oscillation cycle.
The relative heights of different peaks measure the baryon loading. Therefore,
the third peak provides data about the dark matter density as opposed to the
photon-baryon density.
Using these concepts, the WMAP results give estimates of the baryonic
matter, $\Omega_{b}h^{2}=0.0224\pm 0.0009$, and the dark plus baryonic
matter, $\Omega_{M}h^{2}=0.135\pm 0.008$ [50]. The CMB value for $\Omega_{b}$
agrees with the big-bang nucleosynthesis (BBN) value
$0.018<\Omega_{b}h^{2}<0.023$. The matter abundance is found to be
$\Omega_{M}=0.25$ using $h=0.73\pm 0.03$ [8, Ch21]. Combining the CMB results,
supernova observations, and baryon acoustic oscillations (BAO) in the
large-scale galaxy distribution leads to a three-way
intersection in the $\Omega_{M}$ vs $\Omega_{\Lambda}$ plane that gives
$\Omega_{M}=0.245\pm 0.028$, $\Omega_{\Lambda}=0.757\pm 0.021$,
$\Omega_{b}=0.042\pm 0.002$ [60]. This is called the concordance model or the
$\Lambda$CDM model.
Because all these tests confirm $\Omega_{b}<<\Omega_{M}$, we see that the
cosmological observations confirm the non-baryonic dark matter observed in
galactic rotation curves.
* •
Dark Matter Direct Detection and Experimental Anomalies
First we assume a local halo density of the dark matter of
$0.3\,\,{\rm{GeV}}/\,\,{\rm{cm}}^{3}$ as extracted from $\Omega_{M}$ and
galaxy simulations. For a given mass of dark matter particle, one can now find
a number density. The motion of the earth through this background number
density creates a flux. Cryogenic experiments shielded deep underground with
high purity crystals search for interactions of this dark matter flux with
nucleons. Using the lack of a signal they place bounds on the cross section as
a function of the mass of the dark matter particle which can be seen in Fig.
2.3. This figure shows direct dark-matter limits from the experiments CDMS
(2008) [61], XENON10 (2007) [62], and the DAMA anomaly (1999) [63][64]. Also
shown are sample dark-matter cross sections and masses predicted for several
supersymmetric models [65][66] and universal extra dimension models [67]. This
shows that the direct searches have not excluded supersymmetry as a viable
source for the dark matter.
Figure 2.3: Direct dark matter searches showing the limits from the
experiments CDMS, XENON10, and the DAMA signal. Also shown are sample dark-
matter cross sections and masses predicted for several supersymmetric models
and universal extra dimension models. Figures generated using
http://dmtools.berkeley.edu/limitplots/.
Although direct searches have no confirmed positive results, there are two
anomalies of which we should be aware. The first is an excess in gamma rays
observed by the EGRET experiment [68], which points to dark-matter particle
annihilation with a dark-matter particle mass between
$50\,\,{\rm{GeV}}<M_{N}<100\,\,{\rm{GeV}}$. The uncertainties in the
background gamma ray sources could also explain this excess. The second
anomaly is an annual variation observed in the DAMA experiment [63]. The
annual variation could reflect the annual variation of the dark matter flux as
the earth orbits the sun. The faster and slower flux of dark-matter particles
triggering the process would manifest as an annual variation. Unfortunately,
CDMS did not confirm DAMA’s initial results [69][61]. This year the DAMA/LIBRA
experiment released new results which claim to confirm the earlier
result [64] at $8\,\sigma$ confidence. Until another group’s experiment
confirms the DAMA result, the claim should be approached cautiously.
* •
Dark matter Candidates
There are many models that provide possible dark-matter candidates. To name a
few possibilities that have been considered we list: $R$-parity conserving
supersymmetry, Universal Extra Dimensions (UED), axions, degenerate massive
neutrinos [70], stable black holes. In Chapter 3 we focus on Supersymmetry as
the model framework for explaining the dark-matter, but our results in
Chapters 6 \- 8 apply to any model where new particle states are pair produced
and decay to two semi-stable dark matter particles that escape the detector
unnoticed.
The pair-produced nature of dark-matter particles is a relatively common trait
of models with dark matter. For example, the UED models [71, 72, 73] can also
pair-produce dark-matter particles at a collider. The lightest Kaluza-Klein
particle (LKP) is also a dark-matter candidate. In fact UED and SUSY have very
similar hadron collider signatures [74].
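Returning to the rotation-curve discussion above, the sketch below compares the two limiting velocity profiles with the observed flat curve (the central mass and uniform density chosen here are arbitrary illustrative numbers, not fits to any data set):

```python
import math

G_N = 4.30e-6    # Newton's constant in kpc (km/s)^2 / M_sun

M_central = 1e11   # illustrative central mass [M_sun]
rho_unif  = 1e7    # illustrative uniform density [M_sun / kpc^3]
v_flat    = 220.0  # observed flat rotation speed [km/s]

print(" r [kpc]   point-mass v   uniform-rho v   observed (flat)")
for r in (2, 5, 10, 20, 40):
    v_point   = math.sqrt(G_N * M_central / r)                  # falls as 1/sqrt(r)
    v_uniform = r * math.sqrt(4 * math.pi * G_N * rho_unif / 3) # rises linearly in r
    print(f"{r:7.1f} {v_point:13.1f} {v_uniform:14.1f} {v_flat:15.1f}")
```

Neither limit stays flat at large $r$; reproducing $v_{T}\approx\rm{const}$ requires $M(r)\propto r$, i.e. the $\rho\propto r^{-2}$ dark halo described above.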
### 2.3 Supersymmetry: Predicting New Particle States
Supersymmetry will be the theoretical framework for new-particle states on
which this thesis focuses. Supersymmetry has proven a very popular and
powerful framework with many successes and unexpected benefits: Supersymmetry
provides a natural dark-matter candidate. As a fledgling speculative theory,
supersymmetry showed a preference for a heavy top-quark mass. In a close
analogy with the GIM mechanism, Supersymmetry’s minimal phenomenological
implementation eliminates a fine-tuning problem associated with the Higgs
boson in the Standard Model. Supersymmetry illuminates a coupling constant
symmetry (Grand Unification) among the three non-gravitational forces at an
energy scale around $2\times 10^{16}\,\,{\rm{GeV}}$. Supersymmetry is the only
extension of Poincar$\acute{\rm{e}}$ symmetry discussed in 2.1.1 allowed with
a graded algebra [75]. It successfully eliminates the tachyon in String Theory
through the GSO projection [76]. Last, it is a candidate explanation for the
$3\,\sigma$ deviation of the muon’s magnetic moment known as the $(g-2)_{\mu}$
anomaly [77].
These successes are exciting because they follow many of the precedents and
clues described earlier in this chapter that have successfully predicted new-
particle states in the past. SUSY, short for supersymmetry, relates the
couplings of the Standard-Model fermions and bosons to the couplings of new
bosons and fermions known as superpartners. SUSY is based on an extension of
the Poincar$\acute{\rm{e}}$ symmetry (Sec 2.1.1). In the limit of exact Supersymmetry, the
Higgs self-energy problem (Sec 2.1.5) vanishes. In analogy to the GIM
mechanism (Sec 2.1.4), the masses of the new SUSY particles reflect the
breaking of supersymmetry. There are many theories providing the origin of
supersymmetry breaking which are beyond the scope of this thesis. The belief
that nature is described by Supersymmetry follows from how SUSY connects to
the past successes in predicting new-particle states and their masses from
symmetries, broken-symmetries, and fine-tuning arguments.
Excellent introductions and detailed textbooks on Supersymmetry exist, and
there is no point in reproducing them here. Srednicki provides a very
comprehensible yet compact introduction to supersymmetry via superspace at the
end of Ref [39]. Reviews of SUSY that have proved useful in developing this
thesis are Refs [78, 79, 80, 81, 82, 83]. In this introduction, we wish to
highlight a few simple parallels to past successes in new-particle state
predictions. We review supersymmetric radiative corrections to the coupling
constants and to the Yukawa couplings. These radiative corrections can be
summarized in the renormalization group equations (RGE). The RGE have
surprising predictions for Grand Unification of the non-gravitational forces
and also show evidence of the quark-lepton mass relations suggested by Georgi
and Jarlskog, which we refer to as mass matrix unification. The mass matrix
unification predictions can be affected by potentially large corrections
enhanced by the ratio of the vacuum expectation value of the two
supersymmetric Higgs particles $v_{u}/v_{d}=\tan\beta$. Together, these
renormalization group equations and the large $\tan\beta$ enhanced corrections
provide the basis for the new contributions this thesis presents in Chapter 3.
#### 2.3.1 Supersymmetry, Radiative Effects, and the Top Quark Mass
Supersymmetry extends the Poincar$\acute{\rm{e}}$ symmetry used in 2.1.1. The
super-Poincar$\acute{\rm{e}}$ algebra involves a graded Lie algebra which has
generators that anticommute as well as generators that commute. One way of
understanding Supersymmetry is to extend four space-time coordinates to
include two complex Grassmann coordinates which transform as two-component
spinors. This extended space is called superspace. In the same way that
$P_{\mu}$ generates space-time translations, we introduce one
supercharge151515We will only consider theories with one supercharge. $Q$ that
generates translations in $\theta$ and $\bar{\theta}$. The graded super-
Poincar$\acute{\rm{e}}$ algebra includes the Poincar$\acute{\rm{e}}$
supplemented by
$\displaystyle\left[Q^{\dagger}_{\dot{a}},P^{\mu}\right]=\left[Q_{a},P^{\mu}\right]$
$\displaystyle=$ $\displaystyle 0$ (2.21)
$\displaystyle\left\\{Q_{a},Q_{b}\right\\}=\left\\{Q^{\dagger}_{\dot{a}},Q^{\dagger}_{\dot{b}}\right\\}$
$\displaystyle=$ $\displaystyle 0$ (2.22)
$\displaystyle\left[Q_{a},M^{\mu\nu}\right]$ $\displaystyle=$
$\displaystyle\frac{i}{2}({\sigma}^{\mu}\bar{\sigma}^{\nu}-{\sigma}^{\nu}\bar{\sigma}^{\mu})_{{a}}^{\ {c}}Q_{{c}}$ (2.23)
$\displaystyle\left[Q^{\dagger}_{\dot{a}},M^{\mu\nu}\right]$ $\displaystyle=$
$\displaystyle\frac{i}{2}(\bar{\sigma}^{\mu}{\sigma}^{\nu}-\bar{\sigma}^{\nu}{\sigma}^{\mu})_{\dot{a}}^{\ \dot{c}}Q^{\dagger}_{\dot{c}}$ (2.24)
$\displaystyle\left\\{Q_{a},Q^{\dagger}_{\dot{a}}\right\\}$ $\displaystyle=$
$\displaystyle 2\,\sigma^{\mu}_{a\dot{a}}P_{\mu}.$ (2.25)
where $M$ are the boost generators satisfying Eq(2.3), $\\{,\\}$ indicate
anticommutation, $a,b,\dot{a},\dot{b}$ are spinor indices,
$\sigma_{\mu}=(1,\vec{\sigma})$ and $\bar{\sigma}_{\mu}=(1,-\vec{\sigma})$.
Eq(2.21) indicates that the charge $Q$ is conserved by space-time
translations. Eq(2.22) indicates that no more than two different
spins can be connected by the repeated action of a supercharge. Eq(2.23-2.24) indicate
that $Q$ and $Q^{\dagger}$ transform as left and right handed spinors
respectively. Eq(2.25) indicates that two supercharge generators can generate
a space-time translation.
In relativistic QFT, we begin with fields $\phi(x)$ that are functions of
space-time coordinates $x^{\mu}$ and require the Lagrangian to be invariant
under a representation of the Poincar$\acute{\rm{e}}$ group acting on the
space-time and the fields. To form supersymmetric theories, we begin with
superfields $\hat{\Phi}(x,\theta,\bar{\theta})$ that are functions of space-
time coordinates $x^{\mu}$ and the anticommuting Grassmann coordinates
$\theta_{a}$ and $\bar{\theta}_{\dot{a}}$ and require the Lagrangian to be
invariant under a representation of the super-Poincar$\acute{\rm{e}}$ symmetry
acting on the superspace and the superfields. The procedure described here is
very tedious: defining a representation of the super-Poincar$\acute{\rm{e}}$
algebra, and formulating a Lagrangian that is invariant under actions of the
group involves many iterations of trial and error. Several short-cuts have
been discovered to form supersymmetric theories very quickly. These shortcuts
involve studying properties of superfields.
Supersymmetric theories can be expressed as ordinary relativistic QFT by
expressing the superfield $\hat{\Phi}(x,\theta,\bar{\theta})$ in terms of
space-time fields like $\phi(x)$ and $\psi(x)$. The superfields, which we
denote with hats, can be expanded as a Taylor series in $\theta$ and
$\bar{\theta}$. Because $\theta_{a}^{2}=\bar{\theta}_{\dot{a}}^{\,2}=0$, the
superfield expansions consist of a finite number of space-time fields
(independent of $\theta$ or $\bar{\theta}$); some of which transform as
scalars, and some transform as spinors, vectors, or higher level tensors. A
supermultiplet is the set of fields of different spin interconnected because
they are part of the same superfield. If supersymmetry were not broken, then
these fields of different spin would be indistinguishable. The fields of a
supermultiplet share the same quantum numbers (including mass) except spin.
Different members of a supermultiplet are connected by the action of the
supercharge operator $Q$ or $Q^{\dagger}$: roughly speaking
$Q|\rm{boson}\rangle=|\rm{fermion}\rangle$ and
$Q|\rm{fermion}\rangle=|\rm{boson}\rangle$. Because
$Q_{a}^{2}=(Q_{\dot{a}}^{\dagger})^{2}=0$ a supermultiplet only has fields of
two different spins. Also because it is a symmetry transformation, the fields
of different spin within a supermultiplet need to have equal numbers of
degrees of freedom 161616There are auxiliary fields in supermultiplets that,
while not dynamical, preserve the degrees of freedom when virtual states go
off mass shell.. A simple type of superfield is a chiral superfield. A chiral
superfield consists of a complex scalar field and a chiral fermion field each
with $2$ degrees of freedom. Another type of superfield is the vector
superfield $\hat{V}=\hat{V}^{\dagger}$ which consists of a vector field and a
Weyl fermion field. Remarkably, vector superfields have natural gauge
transformations: the spin-1 fields transform as connections under a gauge
transformation, whereas the superpartner spin-1/2 fields transform covariantly
in the adjoint representation of the gauge group171717 Although this seems very
unsymmetric, Wess and Bagger[81] show a supersymmetric differential geometry
with tetrads that illuminate the magic of how the spin-1 fields transform as
connections but the superpartners transform covariantly in the adjoint
representation..
The shortcuts to form Lagrangians invariant under supersymmetry
transformations are based on three observations: (1) the product of several
superfields is again a superfield, (2) the term in an expansion of a superfield
(or product of superfields) proportional to
$\theta_{a}\theta_{b}\epsilon^{ab}$ is invariant under supersymmetry
transformations up to a total derivative (called an $F$-term), and (3) the
term in an expansion of a superfield (or product of superfields) proportional
to
$\theta_{a}\theta_{b}\epsilon^{ab}\bar{\theta}_{\dot{a}}\bar{\theta}_{\dot{b}}\epsilon^{\dot{a}\dot{b}}$
is invariant under supersymmetry transformations up to a total derivative
(called a $D$-term).
These observations have made constructing theories invariant under
supersymmetry a relatively painless procedure: the $D$-term of
$\hat{\Psi}^{\dagger}e^{-\hat{V}}\hat{\Psi}$ provides supersymmetrically
invariant kinetic terms. A superpotential $\hat{W}$ governs the Yukawa
interactions among chiral superfields. The $F$-term of $\hat{W}$ gives the
supersymmetrically invariant interaction Lagrangian, and it can be found with
the following shortcuts. If we enumerate the chiral superfields in $\hat{W}$ by
$i$ as $\hat{\Psi}_{i}$, then the interactions follow from two simple
calculations. The scalar potential is given by
$\mathcal{L}\supset-\sum_{j}|\partial\hat{W}/\partial\hat{\Psi}_{j}|^{2}$,
where each superfield inside the derivative is replaced by the scalar
component of its chiral supermultiplet. The fermion interactions with the
scalars are given by
$\mathcal{L}\supset-\sum_{i,j}(\partial^{2}\hat{W}/\partial\hat{\Psi}_{i}\partial\hat{\Psi}_{j})\psi_{i}\psi_{j}$,
where the superfields in the second derivative are again replaced with the
scalar parts of the chiral supermultiplets and $\psi_{i}$ and $\psi_{j}$ are the
2-component Weyl fermions belonging to the chiral supermultiplets
$\hat{\Psi}_{i}$ and $\hat{\Psi}_{j}$. Superpotential terms must be gauge
invariant, just as one would expect for terms in the Lagrangian, and must be
holomorphic functions of the superfields181818By holomorphic we mean the
superpotential can only be formed from unconjugated superfields $\hat{\Psi}$
and not the conjugates of superfields like $\hat{\Psi}^{*}$.. Another way to
express the interaction Lagrangian is
${\mathcal{L}_{W}}=\int d\theta^{2}\,\hat{W}+\int d\bar{\theta}^{2}\,\hat{W}^{\dagger}$,
where the integrals pick out the $F$ terms of the superpotential $\hat{W}$.
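The two shortcuts can be automated. The sketch below uses a toy single-field Wess-Zumino-type superpotential (an illustrative choice, not one of the MSSM superpotentials used later) and lets sympy generate the scalar potential and the fermion bilinear coefficient directly from $\hat{W}$:

```python
import sympy as sp

phi = sp.symbols('phi')                         # scalar component of the chiral superfield
m, lam = sp.symbols('m lambda', positive=True)

# toy superpotential W = (m/2) Φ^2 + (λ/3) Φ^3, evaluated on the scalar component
W = sp.Rational(1, 2) * m * phi**2 + sp.Rational(1, 3) * lam * phi**3

# scalar potential: V = |∂W/∂Φ|^2 (single chiral superfield)
F = sp.diff(W, phi)
V = sp.Abs(F)**2

# fermion bilinear coefficient: ∂^2 W / ∂Φ^2 multiplies ψψ in the Lagrangian
fermion_mass_term = sp.diff(W, phi, 2)

print("V(phi)        =", sp.simplify(V))
print("psi-psi coeff =", sp.expand(fermion_mass_term))
```

At the minimum $\phi=0$ the scalar and the fermion both end up with mass $m$, illustrating the equal masses within an unbroken supermultiplet.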
In addition to supersymmetry preserving terms, we also need to add ‘soft’
terms which parameterize the breaking of supersymmetry. ‘Soft’ refers only to
SUSY-breaking terms which do not spoil the fine-tuning solution discussed
below.
Field | Lorentz | $SU(3)_{c}$ | $SU(2)_{L}$ | $U(1)_{Y}$
---|---|---|---|---
$(L)^{i}$ | $(2,0)$ | $1$ | $2$ | $-1$
$(\tilde{L})^{i}$ | $1$ | $1$ | $2$ | $-1$
$(e^{c})^{i}$ | $(2,0)$ | $1$ | $1$ | $2$
$(\tilde{e}^{c})^{i}$ | $1$ | $1$ | $1$ | $2$
$(\nu^{c})^{i}$ | $(2,0)$ | $1$ | $1$ | $0$
$(\tilde{\nu}^{c})^{i}$ | $1$ | $1$ | $1$ | $0$
$(Q)^{i}$ | $(2,0)$ | $3$ | $2$ | $1/3$
$(\tilde{Q})^{i}$ | $1$ | $3$ | $2$ | $1/3$
$(u^{c})^{i}$ | $(2,0)$ | $\bar{3}$ | $1$ | $-4/3$
$(\tilde{u}^{c})^{i}$ | $1$ | $\bar{3}$ | $1$ | $-4/3$
$(d^{c})^{i}$ | $(2,0)$ | $\bar{3}$ | $1$ | $2/3$
$(\tilde{d}^{c})^{i}$ | $1$ | $\bar{3}$ | $1$ | $2/3$
$H_{u}$ | $1$ | $1$ | $2$ | $+1$
$\tilde{H}_{u}$ | $(2,0)$ | $1$ | $2$ | $+1$
$H_{d}$ | $1$ | $1$ | $2$ | $-1$
$\tilde{H}_{d}$ | $(2,0)$ | $1$ | $2$ | $-1$
$B_{\mu}$ | $4$ | $1$ | $1$ | $0^{\ddagger}$
$\tilde{B}$ | $(2,0)$ | $1$ | $1$ | $0$
$W_{\mu}$ | $4$ | $1$ | $3^{\ddagger}$ | $0$
$\tilde{W}$ | $(2,0)$ | $1$ | $3$ | $0$
$g_{\mu}$ | $4$ | $8^{\ddagger}$ | $1$ | $0$
$\tilde{g}$ | $(2,0)$ | $8$ | $1$ | $0$
Table 2.2: Transformation properties of the Minimal Supersymmetric Standard
Model (MSSM) fields. Fields grouped together are part of the same
supermultiplet. ‡ Indicated fields transform as a connection rather than
covariantly${}^{\rm{\ref{ConnectionTransform2}}}$.
To form the minimal supersymmetric version of the Standard Model, known as the
MSSM, we need to identify the Standard Model fields with supermultiplets. The
resulting list of fields is given in Table 2.2. Supersymmetry cannot be
preserved in the Yukawa sector with just one Higgs field because the
superpotential which will lead to the $F$-terms in the theory must be
holomorphic so we cannot include both $\hat{H}$ and $\hat{H}^{*}$ superfields
in the same superpotential; instead the MSSM has two Higgs fields
$H_{u}=(h^{+}_{u},h^{o}_{u})$ and $H_{d}=(h^{o}_{d},h^{-}_{d})$. The neutral
component of each Higgs will acquire a vacuum expectation value (VEV):
$\langle h_{u}^{o}\rangle=v_{u}$ and $\langle h_{d}^{o}\rangle=v_{d}$. The
parameter $\tan\beta=v_{u}/v_{d}$ is the ratio of the two Higgs VEVs.
The structure of the remaining terms can be understood by studying a field
like the right-handed up quark. The $(u^{c})$ transforms as a $\bar{3}$ under
$SU(3)$, so its superpartner must also transform as a $\bar{3}$ but have spin
$0$ or $1$. No
Standard Model candidate exists that fits either option. A spin-$1$
superpartner is excluded because the fermion component of a vector superfield
transforms as a connection in the adjoint of the gauge group; not the
fundamental representation like a quark. If $(u^{c})$ is part of a chiral
superfield, then there is an undiscovered spin-0 partner. Thus the $(u^{c})$
is part of a chiral supermultiplet with a scalar partner called a right-handed
squark $\tilde{u}^{c}$ 191919The right-handed refers to which fermion it is a
partner with. The field is a Lorentz scalar..
The remaining superpartner states predicted in Table 2.2 can
be deduced by similar arguments. The superpartners are named after
their SM counterparts, with a preceding ‘s’ indicating the scalar
superpartner of a fermion, or the suffix ‘ino’ indicating the
fermionic partner of a boson: for example selectron, smuon, stop-quark,
Higgsino, photino, gluino, etc.
Figure 2.4: One-loop contributions to the supersymmetric Higgs $H_{u}$ mass
self-energy. (a) top loop. (b) stop loop.
Supersymmetry solves the fine-tuning problem of the Higgs self energy. The
superpotential describing the Yukawa sector of the MSSM is given by
$\hat{W}=Y^{e}_{ij}\hat{L}^{i}\hat{H}_{d}(\hat{e}^{c})^{j}+Y^{d}_{ij}\hat{Q}^{i}\hat{H}_{d}(\hat{d}^{c})^{j}+Y^{u}_{ij}\hat{Q}^{i}\hat{H}_{u}(\hat{u}^{c})^{j}+\mu\hat{H}_{u}\hat{H}_{d}$
(2.26)
where the fields with hats like $\hat{L}$, $\hat{H}_{u,d}$, $\hat{Q}$, etc.
are all superfields. In the limit of exact supersymmetry the fine-tuning
problem is eliminated because the resulting potential and interactions lead to
a cancelation of the quadratic divergences between the fermion and scalar
loops in Fig 2.4. The self-energy of the neutral Higgs
$H_{u}=(h_{u}^{+},h_{u}^{o})$ is now approximately given by
$m^{2}_{h^{o}_{u}}\approx|\mu_{o}|^{2}-\frac{{\rm{Tr}}(Y^{u}(Y^{u})^{\dagger})}{8\pi^{2}}\Lambda^{2}+\frac{{\rm{Tr}}(Y^{u}(Y^{u})^{\dagger})}{8\pi^{2}}\Lambda^{2}$
(2.27)
where $|\mu_{o}|^{2}$ is the modulus squared of the bare parameter in the
superpotential which must be positive, the second term comes from the fermion
loop and therefore has a minus sign, and the third term comes from the scalar
loop. Both divergent loops follow from the MSSM superpotential: the first loop
term follows from the fermion coupling $(\partial^{2}W/\partial t\,\partial
t^{c})\,tt^{c}=y_{t}h^{o}_{u}\,tt^{c}$ and the second loop term follows from the scalar
potential $|\partial W/\partial
t^{c}|^{2}=y_{t}^{2}|h^{o}_{u}|^{2}|\tilde{t}|^{2}$ and $|\partial W/\partial
t|^{2}=y_{t}^{2}|h^{o}_{u}|^{2}|\tilde{t^{c}}|^{2}$ where we assume the top-
quark dominates the process. Exact supersymmetry ensures these two
quadratically divergent loops cancel. However two issues remain and share a
common solution: supersymmetry is not exact, and the Higgs mass squared must
go negative to trigger spontaneous symmetry breaking (SSB). In the effective
theory well above the scale where all superpartners are energetically
accessible the cancelation dominates. The fine-tuning arguments in Sec. 2.1.5
suggest that if $\mu_{o}\approx 100\,\,{\rm{GeV}}$ 202020Again we choose $100$
GeV as a generic electroweak scale. this cancelation should dominate above
about $1$ TeV. In an effective theory between the scale of the top-quark mass
and the stop-quark mass only the fermion loop (Fig 2.4 a) will contribute
significantly. In this energy-scale region we neglect the scalar loop (Fig 2.4
b). With only the fermion loop contributing significantly, if $y_{t}$ is
large enough the fermion loop will overpower $\mu^{2}_{o}$ and the mass
squared $m^{2}_{h^{o}_{u}}$ can be driven negative.
In this way the need for SSB without fine tuning in the MSSM prefers a large
top-quark mass and the existence of heavier stop scalar states. Assuming
$\mu_{o}\approx 100\,\,{\rm{GeV}}$ predicts $\tilde{t}$ and $\tilde{t}^{c}$
below around $1$ TeV212121As fine tuning is an aesthetic argument, there is a
wide range of opinions on the tolerable amount of fine tuning acceptable.
There is also a wide range of values for $\mu$ that are tolerable.. This
mechanism for triggering SSB is known as radiative electroweak symmetry
breaking (REWSB). We have given only a coarse sketch of the
major features; details can be found in a recent review [84] or any of the
supersymmetry textbooks listed above.
The detailed REWSB [85] technology was developed in the early 1980s, and
predicted $M_{t}\approx 55-200$ GeV [86][87]; a prediction far outside the
general expectation of $M_{t}\approx 15-40$ GeV of the early 1980s 222222Raby
[88], Glashow [89], and others [90][91] all made top-quark mass predictions in
the range of $15-40\,\,{\rm{GeV}}$. and much closer to the observed value of
$M_{t}=170.9\pm 1.9$ GeV. If supersymmetric particles are observed at the LHC,
the large top-quark mass may be viewed as the first confirmed prediction of
supersymmetry.
The MSSM provides another independent reason to prefer a large top quark mass.
The top Yukawa coupling’s radiative corrections are dominated by the
difference between terms proportional to $g_{3}^{2}$ and $y_{t}^{2}$. If the
ratio of these two terms is fixed, then $y_{t}$ will remain fixed [92]. For
the standard model this gives a top quark mass around $120$ GeV. However for
the MSSM, assuming $\alpha_{3}=g_{3}^{2}/4\pi=0.12$ then one finds a top quark
mass around $180$ GeV assuming a moderate $\tan\beta$ [93]. The observation
that the top-quark mass lies very near this fixed point again points to
supersymmetry as a theory describing physics at scales above the electroweak
scale.
#### 2.3.2 Dark Matter and SUSY
Supersymmetry is broken in nature. The masses of the superpartners reflect
this breaking. Let’s assume the superpartner masses are at a scale that avoids
the bounds set by current searches yet still solves the fine-tuning problem. Even
in this case there are still problems that need to be resolved 232323There is
also a flavor changing neutral current (FCNC) problem not discussed here..
There are many couplings allowed by the charge assignments displayed in Table
2.2 that would immediately lead to unobserved phenomena. For example the
superpotential could contain superfield interactions
$\hat{u}^{c}\hat{u}^{c}\hat{d}^{c}$ or $\hat{Q}\hat{L}\hat{d}^{c}$ or
$\hat{L}\hat{L}\hat{e}^{c}$ or $\kappa\hat{L}\hat{H}_{u}$ where $\kappa$ is a
mass scale. Each of these interactions is invariant under the $SU(3)_{c}\times
SU(2)_{L}\times U(1)_{Y}$ charges listed in Table 2.2. These couplings, if
allowed with order $1$ coefficients, would violate the universality of the
four-Fermi decay, lead to rapid proton decay, lepton and baryon number
violation, etc.
These couplings can be avoided in several ways. We can require a global baryon
number or lepton number $U(1)$ on the superpotential; in the Standard Model
these were accidental symmetries. However, successful baryogenesis requires
baryon number violation so imposing it directly is only an approximation.
Another option is to impose an additional discrete symmetry on the Lagrangian;
a common choice is $R$-parity
$R=(-1)^{2j+3B+L}$ (2.28)
where $j$ is the spin of the particle. This gives the Standard Model particles
$R=1$ and the superpartners $R=-1$. Each interaction of the MSSM Lagrangian
conserves R-parity. The specific choice of how to remove these interactions is
more relevant for GUT model building. The different choices lead to different
predictions for proton decay lifetime.
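Eq. (2.28) is easy to verify for a few representative states (the spins, baryon numbers, and lepton numbers in this sketch are entered by hand):

```python
# R = (-1)^(2j + 3B + L)
def r_parity(j, B, L):
    return (-1) ** round(2 * j + 3 * B + L)

states = {
    "quark":     (0.5, 1/3, 0),
    "squark":    (0.0, 1/3, 0),
    "electron":  (0.5, 0, 1),
    "selectron": (0.0, 0, 1),
    "gluon":     (1.0, 0, 0),
    "gluino":    (0.5, 0, 0),
    "Higgs":     (0.0, 0, 0),
    "Higgsino":  (0.5, 0, 0),
}

for name, (j, B, L) in states.items():
    print(f"{name:>9}: R = {r_parity(j, B, L):+d}")
```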
$R$-parity, which is needed to effectively forbid these unobserved
interactions at tree level, has the unexpected benefit of also making the
lightest supersymmetric particle (LSP) stable. A stable massive particle that is
non-baryonic is exactly what is needed to provide the dark matter observed in
the galactic rotation curves of Sec. 2.2.
#### 2.3.3 Renormalization Group and the Discovered Supersymmetry Symmetries
##### Unification of $SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}$ coupling
constants
Up until now, the divergent loops have been treated with a UV cut off.
Renormalization of non-abelian gauge theories is more easily done using
dimensional regularization where the dimensions of space-time are taken to be
$d=4-2\epsilon$. The dimensionless coupling constants $g$ pick up a
dimensionfull coefficient $g\,\mu^{-\epsilon}$ where $\mu$ is an arbitrary
energy scale. The divergent term in loop diagrams is now proportional to
$1/\epsilon$, and the counter-terms can be chosen to cancel these divergent
parts. By comparison with observable quantities, all the
parameters in the theory are measured assuming a choice of $\mu$. The
couplings at one choice of $\mu$ can be related to an alternative choice of
$\mu$ by means of a set of differential equations known as the renormalization
group equations (RGE). The choice of $\mu$ is similar to the choice of the
zero of potential energy; in principle any choice will do, but in practice
some choices are easier than others. Weinberg shows how this arbitrary scale
can be related to the typical energy scale of a process [34].
The renormalization group at one loop for the coupling constant $g$ of an
$SU(N)$ gauge theory coupled to fermions and scalars is
$\frac{\partial}{\partial\log\mu}g=\frac{-g^{3}}{16\pi^{2}}\left[\frac{11}{3}C(G)-\frac{2}{3}n_{F}S(R_{F})-\frac{1}{3}n_{S}S(R_{S})\right]$
(2.29)
where $n_{F}$ is the number of 2-component fermions, $n_{S}$ is the number of
complex scalars, $S(R_{F,S})$ is the Dynkin index for the representation of
the fermions or scalars respectively. Applying this to our gauge groups in the
MSSM $C(G)=N$ for $SU(N)$ and $C(G)=0$ for $U(1)$. For the fundamental $SU(N)$
we have $S(R)=1/2$; for the adjoint $S(R)=N$; for $U(1)$ we take
$n_{F,S}\,S(R_{F,S})$ equal to $\sum(Y/2)^{2}$ summed over all the fermions or
scalars respectively. When the $SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}$ is embedded in a
larger group like $SU(5)$, the $U(1)_{Y}$ coupling $g^{\prime}$ is rescaled to
the normalization appropriate to the $SU(5)$ generator that becomes
hypercharge. This rescaling causes us to work with
$g_{1}=\sqrt{5/3}g^{\prime}$ 242424Eq(2.29) is for $g^{\prime}$ and one must
substitute the definition of $g_{1}$ to arrive at Eq(2.30)..
Figure 2.5: Gauge couplings for the three non-gravitational forces as a
function of energy scale for the (left) Standard Model and (right) MSSM.
Applying this general formula to both the Standard Model (SM) and the MSSM
leads to
$\displaystyle\frac{\partial}{\partial\log\mu}g_{i}$ $\displaystyle=$
$\displaystyle\frac{g_{i}^{3}}{16\pi^{2}}b_{i}$ (2.30)
$\displaystyle{\rm{SM}}\begin{cases}b_{1}=&n_{G}\,4/3+n_{H}\,1/10\\\
b_{2}=&-22/3+n_{G}\,4/3+n_{H}\,1/6\\\ b_{3}=&-11+n_{G}\,4/3\end{cases}\ $
$\displaystyle\ \ {\rm{MSSM}}=\begin{cases}b_{1}=&n_{G}\,2+n_{H}\,3/10\\\
b_{2}=&-6+n_{G}\,2+n_{H}\,1/2\\\ b_{3}=&-9+n_{G}\,2\end{cases}$
where $n_{G}=3$ is the number of generations and $n_{H}$ is the number of
Higgs doublets ($n_{H}=1$ in SM and $n_{H}=2$ in MSSM). A miracle is shown in
Fig. 2.5. The year 1981 saw a flurry of papers from Dimopoulos, Ibanez,
Georgi, Raby, Ross, Sakai and Wilczek who were detailing the consequences of
this miracle [94][95][96][97]. There is one degree of freedom in terms of
where to place an effective supersymmetry scale $M_{S}$ where Standard Model
RG running turns into MSSM RG running. At one-loop order, unification requires
$M_{S}\approx M_{Z}$; at two-loop order $200\,\,{\rm{GeV}}<M_{S}<1$ TeV
252525This range comes from a recent study [98] which assumes
$\alpha_{S}(M_{Z})=0.122$. If we assume $\alpha_{S}(M_{Z})=0.119$, then we
find $2\,\,{\rm{TeV}}<M_{S}<6$ TeV. Current PDG [8, Ch 10] SM global fits give
$\alpha_{S}=0.1216\pm 0.0017$..
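At one loop, Eq. (2.30) integrates to $\alpha_{i}^{-1}(\mu)=\alpha_{i}^{-1}(M_{Z})-\frac{b_{i}}{2\pi}\log(\mu/M_{Z})$. The sketch below (with approximate electroweak-scale inputs and the SUSY threshold crudely placed at $M_{Z}$, both assumptions of the sketch) reproduces the behavior shown in Fig. 2.5:

```python
import math

M_Z = 91.2                                        # [GeV]
alpha_inv_MZ = {"a1": 59.0, "a2": 29.6, "a3": 8.5}  # GUT-normalized inverse couplings (approx.)

b_SM   = {"a1": 41/10, "a2": -19/6, "a3": -7}
b_MSSM = {"a1": 33/5,  "a2": 1,     "a3": -3}

def alpha_inv(mu, b):
    """One-loop solution of Eq. (2.30): alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2π) ln(mu/M_Z)."""
    t = math.log(mu / M_Z)
    return {k: alpha_inv_MZ[k] - b[k] / (2 * math.pi) * t for k in b}

mu_GUT = 2e16
for label, b in (("SM  ", b_SM), ("MSSM", b_MSSM)):
    vals = alpha_inv(mu_GUT, b)
    print(label, {k: round(v, 1) for k, v in vals.items()})
```

With the MSSM $b_{i}$ the three inverse couplings land essentially on top of one another near $2\times 10^{16}$ GeV, while the SM values miss by a wide margin.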
The MSSM was not designed for this purpose, but the particle spectrum gives
this result effortlessly262626Very close coupling constant unification can
also occur in non-supersymmetric models. The Standard Model with six Higgs
doublets is one such example [99], but the unification occurs at too low a
scale $\approx 10^{14}$ GeV. GUT-scale gauge-bosons lead to proton decay, and
such a low scale implies proton decay at a rate in contradiction with current
experimental bounds.. A symmetry among the couplings of the three forces is
discovered through the RG equations. If the couplings unify, they may all
originate from a common grand-unified force that is spontaneously broken at
$\mu\approx 2\times 10^{16}$ GeV.
There are several possibilities for SUSY GUT theories: $SU(5)$ [100] or
$SO(10)$ or $SU(4)\times SU(2)_{L}\times SU(2)_{R}$ [101] to list just a few.
$SU(5)$, although the minimal version is now excluded experimentally, is the
prototypical example with which we work. A classic review of GUT theories can
be found in [102].
##### Georgi-Jarlskog Factors
It is truly miraculous that the three coupling constants unify (to within
current experimental errors) with two-loop running when adjusted to an $SU(5)$
grand unified gauge group and when the SUSY scale is placed in a region where
the fine-tuning arguments suggest new-particle states should exist. Let’s now
follow the unification of forces arguments to the next level. Above the
unification scale, there is no longer a distinction between $SU(3)_{c}$ and
$U(1)_{Y}$. If color and hypercharge are indistinguishable, what distinguishes
an electron from a down-quark? The Yukawa couplings, which give rise to the
quark and lepton masses, are also functions of the scale $\mu$ and RG
equations relate the low-energy values to their values at the GUT scale.
Appendix A gives details of the RG procedure used in this thesis to take the
measured low-energy parameters and use the RG equations to relate them to the
predictions at the GUT scale. Do the mass parameters also unify?
With much cruder estimates of the quark masses and the strong coupling, and
without knowledge of the top-quark mass, Georgi and Jarlskog (GJ) [11]
noticed that at the GUT scale the masses satisfied the approximate relations
272727To the best of my knowledge, the $b=\tau$ relations were first noticed
by Buras et al.[103].:
$m_{\tau}\approx m_{b}\ \ \ \ m_{\mu}\approx 3m_{s}\ \ \ \ 3m_{e}\approx
m_{d}.$ (2.31)
This is a very non-trivial result. The masses of the quarks and charged
leptons span more than $5$ orders of magnitude. The factor $3$ is
coincidentally equal to the number of colors. At the scale of the
$Z^{o}$-boson’s mass $\mu=M_{Z}$ the ratios look like $m_{\mu}\approx 2m_{s}$,
and $1.6m_{\tau}\approx m_{b}$ so the factor of three is quite miraculous.
Using this surprising observation, GJ constructed a model where this relation
followed from an $SU(5)$ theory with the second generation coupled to a Higgs
in a different representation.
In the $SU(5)$ model the fermions are arranged into a $\bf{\bar{5}}$
$(\psi_{a}^{\,i})$ and a $\bf{10}$ $(\psi^{ab\,i})$ where $a,b,c,\ldots$ are the
$SU(5)$ indices and $i,j,\ldots$ are the family indices. The particle assignments
are
$\displaystyle\psi_{a}^{i}$ $\displaystyle=$
$\displaystyle\left(\begin{matrix}d^{c}&d^{c}&d^{c}&\nu&e\end{matrix}\right)^{i}$
(2.32) $\displaystyle\psi^{ab\,j}$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left(\begin{array}[]{lllll}0&u^{c}&-u^{c}&-u&-d\\\
-u^{c}&0&u^{c}&-u&-d\\\ u^{c}&-u^{c}&0&-u&-d\\\ u&u&u&0&-e^{c}\\\
d&d&d&e^{c}&0\end{array}\right)^{j}$ (2.38)
where $c$ indicates the conjugate field. There is also a $\bf{\bar{5}}$ Higgs
field $(H_{d})_{a}$ and a $\bf{5}$ Higgs field $(H_{u})^{a}$. The key to
getting the mass relations hypothesized in Eq(2.31) is coupling only the
second generation to a $\bf{45}$ Higgs 282828In tensor notation the 45
representation is given by $(H_{d,45})^{c}_{ab}$ where $ab$ are antisymmetric
and the five traces $(H_{d,45})^{a}_{ab}=0$ are removed.. The VEVs of the
Higgs fields are $H_{u}=(0,0,0,0,v_{u})$, $H_{d}=(0,0,0,0,v_{d})$ and
$(H_{45,d})^{a}_{b5}=v_{45}(\delta^{a}_{b}-4\delta^{a4}\delta_{b4})$. The
coupling to matter that gives mass to the down-like states is
$W_{Y\,d}=Y_{d\,5\,ij}\psi^{ab\,i}\psi_{a}^{j}(H_{d})_{b}+Y_{d\,45\,ij}\psi^{ab\,i}\psi^{j}_{c}(H_{d,45})_{ab}^{c}$
(2.39)
and the coupling that gives mass to the up-like states is
$W_{Y\,u}=Y_{5u\,ij}\psi^{ab\,i}\psi^{cd\,j}(H_{u})^{e}\epsilon_{abcde}.$
(2.40)
Georgi and Jarlskog do not concern themselves with relating the neutrino
masses to the up-quark masses, so we will focus on the predictions for the
down-like masses. The six masses of both the down-like quarks and the charged
leptons may now be satisfied by arranging for
$Y_{d\,45}=\left(\begin{matrix}0&0&0\cr 0&C&0\cr 0&0&0\end{matrix}\right)\ \ \
Y_{d\,5}=\left(\begin{matrix}0&A&0\cr A&0&0\cr 0&0&B\end{matrix}\right)$
(2.41)
and fitting the three parameters $A$, $B$, and $C$. The fitting creates
the hierarchy $B>>C>>A$. Inserting these Yukawa matrices into Eq(2.39) gives
a factor of $-3C$ for the (2,2) entry of the lepton mass matrix relative to
the $(2,2)$ entry of the down-like quark mass matrix. Because $B>>C>>A$, the
equality of the (3,3) entries leads to $m_{b}/m_{\tau}\approx 1$. The (2,2)
entry dominates the mass of the second generation, so $m_{\mu}/m_{s}\approx 3$.
The determinant of the resulting mass matrix is independent of $C$ so at the
GUT scale the product of the charged lepton masses is predicted to equal the
product of the down-like quark masses.
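These ratios follow directly from the textures of Eq. (2.41). The sketch below (with arbitrary illustrative values of $A$, $B$, $C$ satisfying $B>>C>>A$, chosen only for demonstration) diagonalizes the two Yukawa matrices numerically and checks the Georgi-Jarlskog ratios and the determinant equality:

```python
import numpy as np

A, B, C = 1e-4, 1.0, 1e-2          # illustrative hierarchy B >> C >> A

Y_down   = np.array([[0, A, 0],
                     [A, C, 0],
                     [0, 0, B]])
Y_lepton = np.array([[0, A, 0],
                     [A, -3 * C, 0],    # the Georgi-Jarlskog -3 in the (2,2) entry
                     [0, 0, B]])

d_vals = np.sort(np.linalg.svd(Y_down,   compute_uv=False))   # ~ (m_d, m_s, m_b)
l_vals = np.sort(np.linalg.svd(Y_lepton, compute_uv=False))   # ~ (m_e, m_mu, m_tau)

print("m_tau/m_b :", l_vals[2] / d_vals[2])   # ≈ 1
print("m_mu/m_s  :", l_vals[1] / d_vals[1])   # ≈ 3
print("m_d/m_e   :", d_vals[0] / l_vals[0])   # ≈ 3
print("det ratio :", abs(np.linalg.det(Y_lepton) / np.linalg.det(Y_down)))  # = 1
```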
These results have been generalized to other GUT models like the Pati-Salam
model 292929In Pati-Salam the $\bf{45}$’s VEV is such that one has a factor of
$3$ and not $-3$ for the charged leptons vs the down-like quark Yukawa
coupling.. Family symmetries have been used to arrange the general structure
shown here [104]. Assessing the continued validity of the Georgi-Jarlskog mass
relations is one of the novel contributions of this thesis, presented in Chapter 3.
#### 2.3.4 $\tan\beta$ Enhanced Threshold Effects
The Appelquist-Carazzone [105] decoupling theorem indicates that particle
states heavier than the energy scales being considered can be integrated out
and decoupled from the low-energy effective theory. A good review of working
with effective theories and decoupling relations is found in Pich [106]. The
parameters we measure are in some cases in an effective theory of
$SU(3)_{c}\times U(1)_{EM}$; in other cases we measure the parameters with
global fits to the Standard Model. At the energy scale of the sparticles, we
need to match onto the MSSM effective theory. Finally at the GUT scale we need
to match onto the GUT effective theory.
As a general rule, matching conditions are needed to maintain the order of
accuracy of the results. If we are using one-loop RG running, we can use
trivial matching conditions at the interface of the two effective theories. If
we are using two-loop RG running, we should use one-loop matching conditions
at the boundaries. This is to maintain the expected order of accuracy and
precision of the results. There is an important exception to this general rule
relevant to SUSY theories with large $\tan\beta$.
At tree level the VEV of $H_{u}$ gives mass to the up-like states (t,c,u) and
the neutrinos. At tree-level the VEV of $H_{d}$ gives mass to the down-like
states (b, s, d, $\tau$, $\mu$, e). However, ‘soft’ interactions which break
supersymmetry allow the VEV of $H_{u}$ to feed into the down-like Yukawa
couplings through loop diagrams. Two such soft terms are the trilinear coupling
${\mathcal{L}}\supset y_{t}A_{t}H_{u}\tilde{t}\tilde{t}^{c}$, where $A_{t}$ is
a mass parameter, and the gluino mass term
${\mathcal{L}}\supset\tilde{g}\tilde{g}M_{3}$, where $M_{3}$ is the gluino’s
soft mass parameter.
The matching conditions for two effective theories are deduced by expressing a
common observable in terms of the two effective theories. For example the Pole
mass $M_{b}$ of the bottom quark 303030If it existed as a free state. would be
expressed as
$M_{b}\approx$ [tree-level Higgs-insertion diagram] $+$ [gluon self-energy diagram] $+\ldots$
where $H$ takes on the vacuum expectation value, $y_{b}(\mu)$ and $g_{3}(\mu)$
are the QFT parameters which depends on an unphysical choice of scale $\mu$.
The same observable expressed in MSSM involves new diagrams
$M_{b}\approx$ [SUSY Higgs-insertion diagram] $+$ [gluon self-energy diagram] $+$ [gluino-loop diagram] $+$ [stop-loop diagram] $+\ldots$
The parameters $y_{b}^{SM}(\mu),g_{3}^{SM}(\mu)$ are not equal to the
parameters $y_{b}^{MSSM}(\mu),g_{3}^{MSSM}(\mu)$. By expressing
$y_{b}^{MSSM}=y_{b}^{SM}+\delta y_{b}$ and likewise
$g_{3}^{MSSM}=g_{3}^{SM}+\delta g_{3}$, we find many common graphs which
cancel. We are left with an expression for $\delta y_{b}$ equal to the graphs
not common between the two effective theories.
$-(\delta y_{b})\,v_{d}=$ [gluino-loop diagram] $+$ [stop-loop diagram] $+\ldots$
(2.42)
The two graphs in this correction are proportional to the VEV $\langle
h^{o}_{u}\rangle=v_{u}$. However, the $y_{b}$ Yukawa coupling is the ratio of
$m_{b}$ to $\langle h^{o}_{d}\rangle=v_{d}$. This makes the correction $\delta
y_{b}/y_{b}$ due to the two loops shown proportional to
$v_{u}/v_{d}=\tan\beta$. If $\tan\beta$ were small, $\lesssim 5$, then the loop
result times $\tan\beta$ would remain small and the effect would only be
relevant at two-loop accuracy. However, when $\tan\beta\gtrsim 10$ the
factor of $\tan\beta$ makes the contribution an order of magnitude bigger and
the effect can be of the same size as the one-loop running itself.
These $\tan\beta$ enhanced SUSY threshold corrections can have a large effect
on the GUT-scale parameters. More precise observations of the low-energy
parameters have driven the Georgi-Jarlskog mass relations out of quantitative
agreement. However there is a class of $\tan\beta$ enhanced corrections that
can bring the relations back into quantitative agreement. Chapter 3 of this
thesis makes predictions for properties of the SUSY mass spectrum by updating
the GUT-scale parameters to the new low-energy observations and considering
the properties of the $\tan\beta$ enhanced SUSY threshold corrections needed to
maintain the quantitative agreement of the Georgi-Jarlskog mass relations.
### Chapter Summary
In this chapter we have introduced the ingredients of the Standard Model and
its supersymmetric extension and given examples of how symmetries, broken
symmetries, and fine-tuning arguments have successfully predicted the mass of
the positron, the $\Omega^{-}$, the charm quark, and the $W^{\pm}$ and $Z^{o}$
bosons. We have introduced astrophysical evidence indicating that a significant
fraction of the mass-energy density of the universe is in a particle type yet
to be discovered. We have introduced supersymmetry as a plausible framework
for solving the fine-tuning of the Higgs self-energy, for explaining the top
quark's large mass, and for providing a dark-matter particle. In addition we
have discussed how SUSY predicts gauge coupling unification and provides a
framework for mass-matrix unification. Last, we have introduced potentially
large corrections to the RG running of the mass matrices.
## Chapter 3 Predictions from Unification and Fermion Mass Structure
### Chapter Overview
Figure 3.1: Updates to the top-quark mass, strong coupling constant, and
bottom-quark mass are responsible for the quantitative stress of the classic
GUT relation for $y_{b}/y_{\tau}$.
Figure 3.2: Updates to the strange-quark mass and $V_{cb}$ are responsible
for the quantitative stress of the Georgi-Jarlskog mass relations and the need
to update values from Ref. [1].
Grand Unified Theories predict relationships between the GUT-scale quark and
lepton masses. Using new data in the context of the MSSM, we update the values
and uncertainties of the masses and mixing angles for the three generations at
the GUT scale. We also update fits to hierarchical patterns in the GUT-scale
Yukawa matrices. The new data shows that not all the classic GUT-scale mass
relationships remain in quantitative agreement at small to moderate
$\tan\beta$. However, at large $\tan\beta$, these discrepancies can be
eliminated by finite, $\tan\beta$-enhanced, radiative, threshold corrections
if the gluino mass has the opposite sign to the wino mass. This chapter is
based on work first published by the author and his supervisor in Ref. [10].
Explaining the origin of fermion masses and mixings remains one of the most
important goals in our attempts to go beyond the Standard Model. In this, one
very promising possibility is that there is an underlying stage of unification
relating the couplings responsible for the fermion masses. However we are
hindered by the fact that the measured masses and mixings do not directly give
the structure of the underlying Lagrangian, both because the data are
insufficient to reconstruct the full fermion mass matrices unambiguously and
because radiative corrections can obscure the underlying structure. In this
chapter we will address both these points in the context of the MSSM.
We first present an analysis of the measured mass and mixing angles continued
to the GUT scale. The analysis updates Ref [1] using the precise measurements
of fermion masses and mixing angles from the b-factories and the updated top-
quark mass from CDF and D0. The resulting data at the GUT scale allows us to
look for underlying patterns which may suggest a unified origin. We also
explore the sensitivity of these patterns to $\tan\beta$-enhanced, radiative
threshold corrections.
We next proceed to extract the underlying Yukawa coupling matrices for the
quarks and leptons. There are two difficulties in this. The first is that the
data cannot, without some assumptions, determine all elements of these
matrices. The second is that the Yukawa coupling matrices are basis dependent.
We choose to work in a basis in which the mass matrices are hierarchical in
structure with the off-diagonal elements small relative to the appropriate
combinations of on-diagonal matrix elements. Appendix B, Eq(B.1) defines this
basis more precisely. This is the basis we think is most likely to display the
structure of the underlying theory, for example that of a spontaneously broken
family symmetry in which the hierarchical structure is ordered by the (small)
order parameter breaking the symmetry. With this structure to leading order
the observed masses and mixing angles determine the mass matrix elements on
and above the diagonal, and our analysis determines these entries, again
allowing for significant $\tan\beta$ enhanced radiative corrections. The
resulting form of the mass matrices provides the “data” for developing models
of fermion masses such as those based on a broken family symmetry.
### 3.1 Supersymmetric Thresholds and GUT-Scale Mass Relations
| Low-energy parameter | Value (uncertainty in last digit(s)) | Notes and reference |
|---|---|---|
| $m_{u}(\mu_{L})/m_{d}(\mu_{L})$ | $0.45(15)$ | PDB Estimation [41] |
| $m_{s}(\mu_{L})/m_{d}(\mu_{L})$ | $19.5(1.5)$ | PDB Estimation [41] |
| $m_{u}(\mu_{L})+m_{d}(\mu_{L})$ | $\left[8.8(3.0),\ 7.6(1.6)\right]$ MeV | PDB, Quark Masses, pg 15 [41]. (Non-lattice, Lattice) |
| $Q=\sqrt{\frac{m_{s}^{2}-(m_{d}+m_{u})^{2}/4}{m_{d}^{2}-m_{u}^{2}}}$ | $22.8(4)$ | Martemyanov and Sopov [107] |
| $m_{s}(\mu_{L})$ | $\left[103(20),\ 95(20)\right]$ MeV | PDB, Quark Masses, pg 15 [41]. (Non-lattice, Lattice) |
| $m_{u}(\mu_{L})$ | $3(1)$ MeV | PDB, Quark Masses, pg 15 [41]. Non-lattice. |
| $m_{d}(\mu_{L})$ | $6.0(1.5)$ MeV | PDB, Quark Masses, pg 15 [41]. Non-lattice. |
| $m_{c}(m_{c})$ | $1.24(09)$ GeV | PDB, Quark Masses, pg 16 [41]. Non-lattice. |
| $m_{b}(m_{b})$ | $4.20(07)$ GeV | PDB, Quark Masses, pg 16,19 [41]. Non-lattice. |
| $M_{t}$ | $170.9(1.9)$ GeV | CDF & D0 [108]. Pole mass. |
| $(M_{e},M_{\mu},M_{\tau})$ | $(0.511(15),\ 105.6(3.1),\ 1777(53))$ MeV | $3\%$ uncertainty from neglecting $Y^{e}$ thresholds |
| $A$ Wolfenstein parameter | $0.818(17)$ | PDB Ch 11 Eq. 11.25 [41] |
| $\overline{\rho}$ Wolfenstein parameter | $0.221(64)$ | PDB Ch 11 Eq. 11.25 [41] |
| $\lambda$ Wolfenstein parameter | $0.2272(10)$ | PDB Ch 11 Eq. 11.25 [41] |
| $\overline{\eta}$ Wolfenstein parameter | $0.340(45)$ | PDB Ch 11 Eq. 11.25 [41] |
| $\lvert V_{CKM}\rvert$ | $\left(\begin{matrix}0.97383(24)&0.2272(10)&0.00396(09)\cr 0.2271(10)&0.97296(24)&0.04221(80)\cr 0.00814(64)&0.04161(78)&0.999100(34)\end{matrix}\right)$ | PDB Ch 11 Eq. 11.26 [41] |
| $\sin 2\beta$ from CKM | $0.687(32)$ | PDB Ch 11 Eq. 11.19 [41] |
| Jarlskog invariant | $3.08(18)\times 10^{-5}$ | PDB Ch 11 Eq. 11.26 [41] |
| $v_{Higgs}(M_{Z})$ | $246.221(20)$ GeV | Uncertainty expanded. [41] |
| $(\alpha_{EM}^{-1}(M_{Z}),\ \alpha_{s}(M_{Z}),\ \sin^{2}\theta_{W}(M_{Z}))$ | $(127.904(19),\ 0.1216(17),\ 0.23122(15))$ | PDB Sec 10.6 [41] |
Table 3.1: Low-energy observables. Masses in lower-case $m$ are
$\overline{MS}$ running masses. Capital $M$ indicates pole mass. The light
quark’s ($u$,$d$,$s$) mass are specified at a scale $\mu_{L}=2\ \mathrm{GeV}$.
$V_{CKM}$ are the Standard Model’s best fit values.
The data set used is summarized in Table 3.1. Since the fit of reference [1]
(RRRV) to the Yukawa texture was done, the measurement of the Standard-Model
parameters has improved considerably. Figs 3.1 and 3.2 highlight a few of the
changes in the data since: The top-quark mass has gone from $M_{t}=174.3\pm 5$
GeV to $M_{t}=170.9\pm 1.9$ GeV. In 2000 the Particle Data Book reported
$m_{b}(m_{b})=4.2\pm 0.2$ GeV [109] which has improved to $m_{b}(m_{b})=4.2\pm
0.07$ GeV today. In addition each higher-order QCD correction pushes down the
value of $m_{b}(M_{Z})$ at the scale of the $Z$ boson’s mass. In 1998
$m_{b}(M_{Z})=3.0\pm 0.2$ GeV [110] and today it is $m_{b}(M_{Z})=2.87\pm
0.06$ GeV [111]. The most significant shift in the data relevant to the RRRV
fit is a downward revision of the strange-quark mass at the scale $\mu_{L}=2$
GeV from $m_{s}(\mu_{L})\approx 120\pm 50$ MeV [109] to today’s value
$m_{s}(\mu_{L})=103\pm 20$ MeV. We also know the CKM unitarity triangle
parameters better today than six years ago. For example, in 2000 the Particle
Data Book reported $\sin 2\beta=0.79\pm 0.4$ [109], which has improved to $\sin
2\beta=0.69\pm 0.032$ in 2006 [41]. Figures 3.1 and 3.2 show these updates
visually. The $\sin 2\beta$ value is about $1.2\,\sigma$ off from a global fit
to all the CKM data [112]; our fits generally lock onto the global-fit data
and exhibit a $1\,\sigma$ tension for $\sin 2\beta$. Together, the improved
CKM matrix observations add stronger constraints to the textures compared to
data from several years ago.
We first consider the determination of the fundamental mass parameters at the
GUT scale in order simply to compare to GUT predictions. The starting point
for the light-quark masses at low scale is given by the $\chi^{2}$ fit to the
data of Table 3.1
$m_{u}(\mu_{L})=2.7\pm 0.5\ \mathrm{MeV}\ \ m_{d}(\mu_{L})=5.3\pm 0.5\
\mathrm{MeV}\ \ m_{s}(\mu_{L})=103\pm 12\ \mathrm{MeV}.$ (3.1)
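Eq(3.1) comes from a $\chi^{2}$ combination of the light-quark constraints in Table 3.1. As a rough illustration of how such a combination can be set up (this is a minimal sketch using only the non-lattice entries, not the exact constraint set or weighting behind Eq(3.1)):

```python
import numpy as np
from scipy.optimize import least_squares

# (value, sigma) pairs taken from Table 3.1 (non-lattice entries); treating them
# as independent Gaussian constraints is a simplifying assumption of this sketch.
data = {
    "mu_over_md": (0.45, 0.15),
    "ms_over_md": (19.5, 1.5),
    "mu_plus_md": (8.8, 3.0),      # MeV
    "Q":          (22.8, 0.4),
    "ms":         (103.0, 20.0),   # MeV
    "mu":         (3.0, 1.0),      # MeV
    "md":         (6.0, 1.5),      # MeV
}

def residuals(p):
    mu, md, ms = p
    Q = np.sqrt((ms**2 - 0.25*(mu + md)**2) / (md**2 - mu**2))
    preds = {"mu_over_md": mu/md, "ms_over_md": ms/md, "mu_plus_md": mu + md,
             "Q": Q, "ms": ms, "mu": mu, "md": md}
    return [(preds[k] - v)/s for k, (v, s) in data.items()]

fit = least_squares(residuals, x0=[3.0, 6.0, 100.0])
print("m_u, m_d, m_s at 2 GeV [MeV]:", np.round(fit.x, 1))  # lands close to Eq(3.1)
```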
Using these as input we determine the values of the mass parameters at the GUT
scale for various choices of $\tan\beta$ but not including possible
$\tan\beta$ enhanced threshold corrections. We do this using numerical
solutions to the RG equations. The one-loop and two-loop RG equations for the
gauge couplings and the Yukawa couplings in the Standard Model and in the MSSM
that we use in this study come from a number of sources [110][113][40][114]
and are detailed in Appendix A. The results are given in the first five
columns of Table 3.2. These can readily be compared to expectations in various
Grand Unified models. The classic prediction of $SU(5)$, with third-generation
down-quark and charged-lepton masses given by the coupling
$B\;\overline{5}_{f}.10_{f}.5_{H}$ (here $\overline{5}_{f}$ and $10_{f}$ refer to the
$SU(5)$ representations making up a family of quarks and leptons, while $5_{H}$
is a five-dimensional representation of Higgs scalars), is
$m_{b}(M_{X})/m_{\tau}(M_{X})=1$ [103]. This ratio is given in Table 3.2, where
it may be seen that the value agrees at a special low $\tan\beta$ value, but
for large $\tan\beta$ it is some $25\%$ smaller than the GUT prediction. (We
thank Ilja Dorsner for pointing out that the $\tan\beta$ dependence of
$m_{b}/m_{\tau}(M_{X})$ is flatter than in previous studies, e.g. ref. [115];
this change is mostly due to the higher effective SUSY scale $M_{S}$, the
higher value of $\alpha_{s}(M_{Z})$ found in global Standard-Model fits, and
the smaller top-quark mass $M_{t}$.) A similar relation between the strange
quark and the muon is untenable, and to describe the masses consistently in
$SU(5)$ Georgi and Jarlskog [11] proposed that the second-generation masses
should come instead from the coupling $C\;\overline{5}_{f}.10_{f}.45_{H}$,
leading instead to the relation $3m_{s}(M_{X})/m_{\mu}(M_{X})=1$. As may be
seen from Table 3.2, in all cases this ratio is approximately $0.69(8)$. The
prediction of Georgi and Jarlskog for the lightest generation masses follows
from the relation $Det(M^{d})/Det(M^{l})=1$. This results from the form of
their mass matrices, which is given by
$M^{d}=\left(\begin{array}{ccc}0&A^{\prime}&\\ A&C&\\ &&B\end{array}\right),\;\;M^{l}=\left(\begin{array}{ccc}0&A^{\prime}&\\ A&-3C&\\ &&B\end{array}\right)$ (3.2)
(the remaining mass matrix elements may be non-zero provided they do not
contribute significantly to the determinant), in which there is a $(1,1)$
texture zero (below we discuss an independent reason for having a $(1,1)$
texture zero) and the determinant is given by the product of the $(3,3)$,
$(1,2)$ and $(2,1)$ elements. If the $(1,2)$ and $(2,1)$ elements are also
given by $\overline{5}_{f}.10_{f}.5_{H}$ couplings, they will be the same in
the down-quark and charged-lepton mass matrices, giving rise to the equality
of the determinants. The form of Eq(3.2) may be arranged by imposing
additional continuous or discrete symmetries. One may see from Table 3.2 that
the actual value of the ratio of the determinants is quite far from unity,
disagreeing with the Georgi-Jarlskog relation.
In summary, the latest data on fermion masses, while qualitatively in agreement
with the simple GUT relations, show significant quantitative discrepancies.
However, the analysis has not, so far, included the SUSY threshold corrections
which substantially affect the GUT mass relations at large $\tan\beta$ [116].
| Parameter | $\tan\beta=1.3$ | $\tan\beta=10$ | $\tan\beta=38$ | $\tan\beta=50$ | $\tan\beta=38$ | $\tan\beta=38$ |
|---|---|---|---|---|---|---|
| Input SUSY parameters: | | | | | | |
| $\gamma_{b}$ | $0$ | $0$ | $0$ | $0$ | $-0.22$ | $+0.22$ |
| $\gamma_{d}$ | $0$ | $0$ | $0$ | $0$ | $-0.21$ | $+0.21$ |
| $\gamma_{t}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $-0.44$ |
| Corresponding GUT-scale parameters with propagated uncertainty: | | | | | | |
| $y^{t}(M_{X})$ | $6^{+1}_{-5}$ | $0.48(2)$ | $0.49(2)$ | $0.51(3)$ | $0.51(2)$ | $0.51(2)$ |
| $y^{b}(M_{X})$ | $0.0113^{+0.0002}_{-0.01}$ | $0.051(2)$ | $0.23(1)$ | $0.37(2)$ | $0.34(3)$ | $0.34(3)$ |
| $y^{\tau}(M_{X})$ | $0.0114(3)$ | $0.070(3)$ | $0.32(2)$ | $0.51(4)$ | $0.34(2)$ | $0.34(2)$ |
| $(m_{u}/m_{c})(M_{X})$ | $0.0027(6)$ | $0.0027(6)$ | $0.0027(6)$ | $0.0027(6)$ | $0.0026(6)$ | $0.0026(6)$ |
| $(m_{d}/m_{s})(M_{X})$ | $0.051(7)$ | $0.051(7)$ | $0.051(7)$ | $0.051(7)$ | $0.051(7)$ | $0.051(7)$ |
| $(m_{e}/m_{\mu})(M_{X})$ | $0.0048(2)$ | $0.0048(2)$ | $0.0048(2)$ | $0.0048(2)$ | $0.0048(2)$ | $0.0048(2)$ |
| $(m_{c}/m_{t})(M_{X})$ | $0.0009^{+0.001}_{-0.00006}$ | $0.0025(2)$ | $0.0024(2)$ | $0.0023(2)$ | $0.0023(2)$ | $0.0023(2)$ |
| $(m_{s}/m_{b})(M_{X})$ | $0.014(4)$ | $0.019(2)$ | $0.017(2)$ | $0.016(2)$ | $0.018(2)$ | $0.010(2)$ |
| $(m_{\mu}/m_{\tau})(M_{X})$ | $0.059(2)$ | $0.059(2)$ | $0.054(2)$ | $0.050(2)$ | $0.054(2)$ | $0.054(2)$ |
| $A(M_{X})$ | $0.56^{+0.34}_{-0.01}$ | $0.77(2)$ | $0.75(2)$ | $0.72(2)$ | $0.73(3)$ | $0.46(3)$ |
| $\lambda(M_{X})$ | $0.227(1)$ | $0.227(1)$ | $0.227(1)$ | $0.227(1)$ | $0.227(1)$ | $0.227(1)$ |
| $\bar{\rho}(M_{X})$ | $0.22(6)$ | $0.22(6)$ | $0.22(6)$ | $0.22(6)$ | $0.22(6)$ | $0.22(6)$ |
| $\bar{\eta}(M_{X})$ | $0.33(4)$ | $0.33(4)$ | $0.33(4)$ | $0.33(4)$ | $0.33(4)$ | $0.33(4)$ |
| $J(M_{X})\times 10^{-5}$ | $1.4^{+2.2}_{-0.2}$ | $2.6(4)$ | $2.5(4)$ | $2.3(4)$ | $2.3(4)$ | $1.0(2)$ |
| Comparison with GUT mass ratios: | | | | | | |
| $(m_{b}/m_{\tau})(M_{X})$ | $1.00^{+0.04}_{-0.4}$ | $0.73(3)$ | $0.73(3)$ | $0.73(4)$ | $1.00(4)$ | $1.00(4)$ |
| $(3m_{s}/m_{\mu})(M_{X})$ | $0.70^{+0.8}_{-0.05}$ | $0.69(8)$ | $0.69(8)$ | $0.69(8)$ | $0.9(1)$ | $0.6(1)$ |
| $(m_{d}/3m_{e})(M_{X})$ | $0.82(7)$ | $0.83(7)$ | $0.83(7)$ | $0.83(7)$ | $1.05(8)$ | $0.68(6)$ |
| $(\det Y^{d}/\det Y^{e})(M_{X})$ | $0.57^{+0.08}_{-0.26}$ | $0.42(7)$ | $0.42(7)$ | $0.42(7)$ | $0.92(14)$ | $0.39(7)$ |
Table 3.2: The mass parameters continued to the GUT-scale $M_{X}$ for various
values of $\tan\beta$ and threshold corrections $\gamma_{t,b,d}$. These are
calculated with the 2-loop gauge coupling and 2-loop Yukawa coupling RG
equations assuming an effective SUSY scale $M_{S}=500$ GeV.
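The full two-loop running behind Table 3.2 is lengthy, but its skeleton can be illustrated with the analytic one-loop running of the gauge couplings, matching the Standard Model onto the MSSM at $M_{S}=500$ GeV. The sketch below is a simplified illustration: the β-function coefficients are standard one-loop values, and the matching scale and output scales are assumptions, not the full machinery used for the table.

```python
import numpy as np

# One-loop beta-function coefficients, with g1 in GUT (SU(5)) normalization.
b_sm   = np.array([41/10, -19/6, -7])   # Standard Model
b_mssm = np.array([33/5, 1, -3])        # MSSM

MZ, MS, MX = 91.19, 500.0, 2.0e16       # GeV; M_S and the output scale are assumptions

# alpha_i(MZ) built from Table 3.1: alpha_EM^-1, sin^2(theta_W), alpha_s
a_em_inv, s2w, a_s = 127.904, 0.23122, 0.1216
alpha_inv_MZ = np.array([(3/5)*(1 - s2w)*a_em_inv, s2w*a_em_inv, 1/a_s])

def alpha_inv(mu):
    """Analytic one-loop running: SM beta functions below M_S, MSSM above."""
    out = alpha_inv_MZ - b_sm/(2*np.pi)*np.log(min(mu, MS)/MZ)
    if mu > MS:
        out = out - b_mssm/(2*np.pi)*np.log(mu/MS)
    return out

for mu in (MS, 1e10, MX):
    print(f"mu = {mu:.2e} GeV   1/alpha_i = {alpha_inv(mu).round(1)}")
# The three inverse couplings approach a common value ~25 near 2e16 GeV.
```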
A catalog of the full SUSY threshold corrections is given in [117]. The
particular finite SUSY thresholds discussed in this chapter do not decouple as
the superpartners become massive. We follow the approximation described in
Blazek, Raby, and Pokorski (BRP) for threshold corrections to the CKM elements
and down-like mass eigenstates [118]. The finite threshold corrections to
$Y^{e}$ and $Y^{u}$ are generally about 3% or smaller,
$\delta Y^{u},\ \delta Y^{d}\lesssim 0.03$ (3.3)
and will be neglected in our study. The logarithmic threshold corrections are
approximated by using the Standard-Model RG equations from $M_{Z}$ to an
effective SUSY scale $M_{S}$.
The finite, $\tan\beta$-enhanced $Y^{d}$ SUSY threshold corrections are
dominated by a sbottom-gluino loop, a stop-higgsino loop, and a stop-chargino
loop. Integrating out the SUSY particles at a scale $M_{S}$ leaves the
matching conditions at that scale for the Standard-Model Yukawa couplings:
$\delta m_{sch}\,Y^{u\,SM}=\sin\beta\,\,Y^{u}$ (3.4)
$\delta m_{sch}\,Y^{d\,SM}=\cos\beta\,\,U_{L}^{d{\dagger}}\,\left(1+{\Gamma}^{d}+V_{CKM}^{{\dagger}}\,{\Gamma}^{u}\,V_{CKM}\right)\,Y^{d}_{\mathrm{diag}}\,U_{R}^{d}$ (3.5)
$Y^{e\,SM}=\cos\beta\,\,Y^{e}.$ (3.6)
All the parameters on the right-hand side take on their MSSM values in the
$\overline{DR}$ scheme. The factor $\delta m_{sch}$ converts the quark running
masses from $\overline{MS}$ to $\overline{DR}$ scheme. Details about this
scheme conversion are listed in Appendix A.3. The $\beta$ corresponds to the
ratio of the two Higgs VEVs $v_{u}/v_{d}=\tan\beta$. The $U$ matrices
decompose the MSSM Yukawa couplings at the scale $M_{S}$:
$Y^{u}=U_{L}^{u{\dagger}}Y_{\mathrm{diag}}^{u}U_{R}^{u}$ and
$Y^{d}=U_{L}^{d{\dagger}}Y_{\mathrm{diag}}^{d}U_{R}^{d}$. The matrices
$Y_{\mathrm{diag}}^{u}$ and $Y_{\mathrm{diag}}^{d}$ are diagonal and
correspond to the mass eigenstates divided by the appropriate VEV at the scale
$M_{S}$. The CKM matrix is given by $V_{CKM}=U_{L}^{u}U_{L}^{d{\dagger}}$. The
left-hand side involves the Standard-Model Yukawa couplings. The matrices
$\Gamma^{u}$ and $\Gamma^{d}$ encode the SUSY threshold corrections.
If the squarks are diagonalized in flavor space by the same rotations that
diagonalize the quarks, the matrices $\Gamma^{u}$ and $\Gamma^{d}$ are
diagonal: $\Gamma^{d}=\mathrm{diag}(\gamma_{d},\gamma_{d},\gamma_{b}),$ $\
\Gamma^{u}=\mathrm{diag}(\gamma_{u},\gamma_{u},\gamma_{t})$. In general the
squarks are not diagonalized by the same rotations as the quarks but provided
the relative mixing angles are reasonably small the corrections to flavour
conserving masses, which are our primary concern here, will be second order in
these mixing angles. We will assume $\Gamma^{u}$ and $\Gamma^{d}$ are diagonal
in what follows.
Approximations for $\Gamma^{u}$ and $\Gamma^{d}$ based on the mass insertion
approximation are found in [119][120][121]:
$\gamma_{t}\approx y_{t}^{2}\,\mu\,A^{t}\,\frac{\tan\beta}{16\pi^{2}}\,I_{3}(m_{\tilde{t}_{1}}^{2},m_{\tilde{t}_{2}}^{2},\mu^{2})\ \sim\ y_{t}^{2}\,\frac{\tan\beta}{32\pi^{2}}\,\frac{\mu\,A^{t}}{m_{\tilde{t}}^{2}}$ (3.7)
$\gamma_{u}\approx -g_{2}^{2}\,M_{2}\,\mu\,\frac{\tan\beta}{16\pi^{2}}\,I_{3}(m_{\chi_{1}}^{2},m_{\chi_{2}}^{2},m_{\tilde{u}}^{2})\ \sim\ 0$ (3.8)
$\gamma_{b}\approx\frac{8}{3}\,g_{3}^{2}\,\frac{\tan\beta}{16\pi^{2}}\,M_{3}\,\mu\,I_{3}(m_{\tilde{b}_{1}}^{2},m_{\tilde{b}_{2}}^{2},{M_{3}}^{2})\ \sim\ \frac{4}{3}\,g_{3}^{2}\,\frac{\tan\beta}{16\pi^{2}}\,\frac{\mu\,M_{3}}{m_{\tilde{b}}^{2}}$ (3.9)
$\gamma_{d}\approx\frac{8}{3}\,g_{3}^{2}\,\frac{\tan\beta}{16\pi^{2}}\,M_{3}\,\mu\,I_{3}(m_{\tilde{d}_{1}}^{2},m_{\tilde{d}_{2}}^{2},{M_{3}}^{2})\ \sim\ \frac{4}{3}\,g_{3}^{2}\,\frac{\tan\beta}{16\pi^{2}}\,\frac{\mu\,M_{3}}{m_{\tilde{d}}^{2}}$ (3.10)
where $I_{3}$ is given by
$I_{3}(a^{2},b^{2},c^{2})=\frac{a^{2}b^{2}\log\frac{a^{2}}{b^{2}}+b^{2}c^{2}\log\frac{b^{2}}{c^{2}}+c^{2}a^{2}\log\frac{c^{2}}{a^{2}}}{(a^{2}-b^{2})(b^{2}-c^{2})(a^{2}-c^{2})}.$ (3.11)
In these expressions $\tilde{q}$ refers to the superpartner of $q$ and $\chi^{j}$
indicates the chargino mass eigenstates. $\mu$ is the coefficient of the $H^{u}H^{d}$
interaction in the superpotential, $M_{1},M_{2},M_{3}$ are the gaugino
soft-breaking masses, and $A^{t}$ is the soft top-quark trilinear coupling.
The mass insertion approximation breaks down if there is large mixing between
the mass eigenstates of the stop or the sbottom. The right-most expressions in
Eqs(3.7,3.9,3.10) assume the relevant squark mass eigenstates are nearly
degenerate and heavier than $M_{3}$ and $\mu$. These expressions (Eqs 3.7-3.10)
provide an approximate mapping from a supersymmetric spectrum to the
$\gamma_{i}$ parameters through which we parameterize the threshold
corrections; however, with the exception of Column A of Table 3.4, we do not
specify a SUSY spectrum but directly parameterize the threshold corrections
through $\gamma_{i}$.
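To give a feel for the size of these finite corrections, the following sketch evaluates Eq(3.9) with the loop function of Eq(3.11) for an assumed spectrum; the $\alpha_{s}$ value, $\mu$, $M_{3}$ and sbottom masses below are illustrative choices, not fitted values.

```python
import numpy as np

def I3(a2, b2, c2):
    """The loop integral of Eq (3.11); arguments are squared masses."""
    return (a2*b2*np.log(a2/b2) + b2*c2*np.log(b2/c2) + c2*a2*np.log(c2/a2)) \
           / ((a2 - b2)*(b2 - c2)*(a2 - c2))

# Illustrative spectrum (all values are assumptions, masses in GeV)
tan_beta = 38.0
g3_sq    = 4*np.pi*0.095          # rough alpha_s near the matching scale (assumed)
mu, M3   = 500.0, -800.0          # mu parameter and gluino soft mass (assumed)
mb1, mb2 = 750.0, 850.0           # sbottom mass eigenstates (assumed)

gamma_b = (8.0/3.0)*g3_sq*(tan_beta/(16*np.pi**2))*M3*mu*I3(mb1**2, mb2**2, M3**2)
print(f"gamma_b ~ {gamma_b:+.2f}")                 # negative for mu*M3 < 0
print(f"m_b rescaling 1/(1+gamma_b) ~ {1/(1+gamma_b):.2f}")
```

For this illustrative spectrum the correction comes out at a few tenths and negative, the size and sign needed by the fits discussed below.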
The separation between $\gamma_{b}$ and $\gamma_{d}$ is set by the lack of
degeneracy of the down-like squarks. If the squark masses for the first two
generations are not degenerate, then there will be a corresponding separation
between the (1,1) and (2,2) entries of $\Gamma^{d}$ and $\Gamma^{u}$. If the
sparticle spectrum is designed to have a large $A^{t}$ and a light stop,
$\gamma_{t}$ can be enhanced and dominate over $\gamma_{b}$. Because the charm
Yukawa coupling is so small, the scharm-higgsino loop is negligible;
$\gamma_{u}$ follows from a chargino-squark loop and is also generally small,
with values around $0.02$, because of the smaller $g_{2}$ coupling. In our
substantial correction to the first and second generations is given by
$\gamma_{d}$ [116].
As described in BRP, the threshold corrections leave $|V_{us}|$ and
$|V_{ub}/V_{cb}|$ unchanged to a good approximation. Threshold corrections in
$\Gamma^{u}$ do affect the $V_{ub}$ and $V_{cb}$ at the scale $M_{S}$ giving
$\frac{V_{ub}^{SM}-V_{ub}^{MSSM}}{V_{ub}^{MSSM}}\backsimeq\frac{V_{cb}^{SM}-V_{cb}^{MSSM}}{V_{cb}^{MSSM}}\backsimeq-\left(\gamma_{t}-\gamma_{u}\right).$
(3.12)
The threshold corrections for the down-quark masses are given approximately by
$m_{d}\backsimeq m_{d}^{0}\,(1+\gamma_{d}+\gamma_{u})^{-1},\qquad m_{s}\backsimeq m_{s}^{0}\,(1+\gamma_{d}+\gamma_{u})^{-1},\qquad m_{b}\backsimeq m_{b}^{0}\,(1+\gamma_{b}+\gamma_{t})^{-1}$
where the superscript $0$ denotes the mass without threshold corrections. Not
shown are the nonlinear effects which arise through the RG equations when the
bottom Yukawa coupling is changed by threshold effects. These are properly
included in our final results obtained by numerically solving the RG
equations.
Due to our assumption that the squark masses for the first two generations are
degenerate, the combination of the GUT relations given by $\left(\det
M^{l}/\det
M^{d}\right)\left(3\,m_{s}/m_{\mu}\right)^{2}\left(m_{b}/m_{\tau}\right)=1$ is
unaffected up to nonlinear effects. Thus we cannot simultaneously fit all
three GUT relations through the threshold corrections. A best fit requires the
threshold effects given by
$\gamma_{b}+\gamma_{t}\approx-0.22\pm 0.02$ (3.13)
$\gamma_{d}+\gamma_{u}\approx-0.21\pm 0.02,$ (3.14)
giving the results shown in the penultimate column of Table 3.2, just
consistent with the GUT predictions. The question is whether these threshold
effects are of a reasonable magnitude and, if so, what are the implications
for the SUSY spectra which determine the $\gamma_{i}?$ From Eqs(3.9,3.10), at
$\tan\beta=38$ we have
$\frac{\mu\,M_{3}}{m_{\tilde{b}}^{2}}\sim-0.5,\qquad\frac{m_{\tilde{b}}^{2}}{m_{\tilde{d}}^{2}}\sim 1.0.$
The current observation of the muon’s $(g-2)_{\mu}$ is $3.4\,\sigma$ [122]
away from the Standard-Model prediction. If SUSY is to explain the observed
deviation, one needs $\tan\beta>8$ [123] and $\mu M_{2}>0$ [77]. With this
sign we must have $\mu M_{3}$ negative and the $\widetilde{d},$
$\widetilde{s}$ squarks only lightly split from the $\widetilde{b}$ squarks.
Negative $M_{3}$ is characteristic of anomaly-mediated SUSY breaking [124] and
is discussed in [125][126][121][127]. Although we have deduced $M_{3}<0$ from
the approximate Eqs(3.9,3.10), the correlation persists in the near-exact
expression found in Eq(23) of Ref [118]. Different amounts of squark
splitting can occur in various schemes [128]. However the squark splitting can
readily be adjusted without spoiling the fit because, up to nonlinear effects,
the solution only requires the constraints implied by Eq(3.13), so we may make
$\gamma_{b}>\gamma_{d}$ and hence make $m_{\tilde{b}}^{2}<m_{\tilde{d}}^{2}$
by allowing for a small positive value for $\gamma_{t}.$ In this case $A^{t}$
must be positive.
It is of interest also to consider the threshold effects in the case that $\mu
M_{3}$ is positive. This is illustrated in the last column of Table 3.2 in
which we have reversed the sign of $\gamma_{d},$ consistent with positive $\mu
M_{3}$ , and chosen $\gamma_{b}\simeq\gamma_{d}$ as is expected for similar
down squark masses. The value of $\gamma_{t}$ is chosen to keep the equality
between $m_{b}$ and $m_{\tau}.$ One may see that the other GUT relations are
not satisfied, being driven further away by the threshold corrections.
Reducing the magnitude of $\gamma_{b}$ and $\gamma_{d}$ reduces the
discrepancy somewhat, but it is still limited by the deviation found in the
no-threshold case (the fourth column of Table 3.2).
At $\tan\beta$ near $50$ the non-linear effects are large and $b-\tau$
unification requires $\gamma_{b}+\gamma_{t}\sim-0.1$ to $-0.15.$ In this case
it is possible to have $t-b-\tau$ unification of the Yukawa couplings. For
$\mu>0,M_{3}>0$, the “Just-so” Split-Higgs solution of references [129, 130,
131, 132] can achieve this while satisfying both $b\rightarrow s\ \gamma$ and
$(g-2)_{\mu}$ constraints but only with large $\gamma_{b}$ and $\gamma_{t}$
and a large cancellation in $\gamma_{b}+\gamma_{t}$. In this case, as in the
example given above, the threshold corrections drive the masses further from
the mass relations for the first and second generations because
$\mu\,M_{3}>0$. It is possible to have $t-b-\tau$ unification with
$\mu\,M_{3}<0$, satisfying the $b\rightarrow s\ \gamma$ and $(g-2)_{\mu}$
constraints, in which the GUT predictions for the first and second generations
of quarks are acceptable. Examples include Non-Universal Gaugino Mediation
[133] and AMSB; both have some very heavy sparticle masses ($\gtrsim 4$ TeV)
[121]. Minimal AMSB with a light sparticle spectrum ($\lesssim 1$ TeV), while
satisfying $(g-2)_{\mu}$ and $b\rightarrow s\ \gamma$ constraints, requires
$\tan\beta$ less than about $30$ [77].
### 3.2 Updated fits to Yukawa matrices
We turn now to the second part of our study in which we update previous fits
to the Yukawa matrices responsible for quark and lepton masses. As discussed
above we choose to work in a basis in which the mass matrices are hierarchical
with the off-diagonal elements small relative to the appropriate combinations
of on-diagonal matrix elements defined in Eq(B.1). This is the basis we think
is most likely to display the structure of the underlying theory, for example
that of a spontaneously broken family symmetry, in which the hierarchical
structure is ordered by the (small) order parameter breaking the symmetry.
With this structure to leading order in the ratio of light to heavy quarks the
observed masses and mixing angles determine the mass matrix elements on and
above the diagonal provided the elements below the diagonal are not
anomalously large. This is the case for matrices that are nearly symmetric or
nearly Hermitian, as in models based on an $SO(10)$ GUT.
| Parameter | 2001 RRRV fit | Fit A0 | Fit B0 | Fit A1 | Fit B1 | Fit A2 | Fit B2 |
|---|---|---|---|---|---|---|---|
| $\tan\beta$ | Small | $1.3$ | $1.3$ | $38$ | $38$ | $38$ | $38$ |
| $a^{\prime}$ | ${\mathcal{O}}(1)$ | $0$ | $0$ | $0$ | $0$ | $-2.0$ | $-2.0$ |
| $\epsilon_{u}$ | $0.05$ | $0.030(1)$ | $0.030(1)$ | $0.0491(16)$ | $0.0491(15)$ | $0.0493(16)$ | $0.0493(14)$ |
| $\epsilon_{d}$ | $0.15(1)$ | $0.117(4)$ | $0.117(4)$ | $0.134(7)$ | $0.134(7)$ | $0.132(7)$ | $0.132(7)$ |
| $\lvert b^{\prime}\rvert$ | $1.0$ | $1.75(20)$ | $1.75(21)$ | $1.05(12)$ | $1.05(13)$ | $1.04(12)$ | $1.04(13)$ |
| ${\rm{arg}}(b^{\prime})$ | $90^{o}$ | $+\,93(16)^{o}$ | $-\,93(13)^{o}$ | $+\,91(16)^{o}$ | $-\,91(13)^{o}$ | $+\,93(16)^{o}$ | $-\,93(13)^{o}$ |
| $a$ | $1.31(14)$ | $2.05(14)$ | $2.05(14)$ | $2.16(23)$ | $2.16(24)$ | $1.92(21)$ | $1.92(22)$ |
| $b$ | $1.50(10)$ | $1.92(14)$ | $1.92(15)$ | $1.66(13)$ | $1.66(13)$ | $1.70(13)$ | $1.70(13)$ |
| $\lvert c\rvert$ | $0.40(2)$ | $0.85(13)$ | $2.30(20)$ | $0.78(15)$ | $2.12(36)$ | $0.83(17)$ | $2.19(38)$ |
| ${\rm{arg}}(c)$ | $-\,24(3)^{o}$ | $-\,39(18)^{o}$ | $-\,61(14)^{o}$ | $-\,43(14)^{o}$ | $-\,59(13)^{o}$ | $-\,37(25)^{o}$ | $-\,60(13)^{o}$ |
Table 3.3: Results of a $\chi^{2}$ fit of eqs(3.15,3.16) to the data in Table
3.2 in the absence of threshold corrections. We set $a^{\prime}$ as indicated
and set $c^{\prime}=d^{\prime}=d=0$ and $f=f^{\prime}=1$ at fixed values.
For convenience we fit to symmetric Yukawa coupling matrices but, as stressed
above, this is not a critical assumption as the data is insensitive to the
off-diagonal elements below the diagonal and the quality of the fit is not
changed if, for example, we use Hermitian forms. For comparison Appendix B
gives the observables in terms of Yukawa matrix entries following a general
hierarchical texture. We parameterize a set of general, symmetric Yukawa
matrices as:
$Y^{u}(M_{X})=y_{33}^{u}\left(\begin{matrix}d^{\prime}\epsilon_{u}^{4}&b^{\prime}\,\epsilon_{u}^{3}&c^{\prime}\,\epsilon_{u}^{3}\cr b^{\prime}\,\epsilon_{u}^{3}&f^{\prime}\,\epsilon_{u}^{2}&a^{\prime}\,\epsilon_{u}^{2}\cr c^{\prime}\,\epsilon_{u}^{3}&a^{\prime}\,\epsilon_{u}^{2}&1\end{matrix}\right),$ (3.15)
$Y^{d}(M_{X})=y_{33}^{d}\left(\begin{matrix}d\,\epsilon_{d}^{4}&b\,\epsilon_{d}^{3}&c\,\epsilon_{d}^{3}\cr b\,\epsilon_{d}^{3}&f\,\epsilon_{d}^{2}&a\,\epsilon_{d}^{2}\cr c\,\epsilon_{d}^{3}&a\,\epsilon_{d}^{2}&1\end{matrix}\right).$ (3.16)
Although not shown, we always choose lepton Yukawa couplings at $M_{X}$
consistent with the low-energy lepton masses. Notice that the $f$ coefficient
and $\epsilon_{d}$ are redundant (likewise in $Y^{u}$). We include $f$ to be
able to discuss the phase of the (2,2) term. We write all the entries in terms
of $\epsilon$ so that our coefficients will be ${\mathcal{O}}(1)$. We will
always select our best $\epsilon$ parameters such that $|f|=1$.
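To see how a texture of this form maps onto observables, the sketch below builds $Y^{d}$ of Eq(3.16) with the Fit B1 central values from Table 3.3 and reads off the mass ratios from its singular values. It is a rough single-point illustration (no uncertainties, $Y^{u}$ not included), so the ratios only approximately track Table 3.2.

```python
import numpy as np

# Fit B1 central values from Table 3.3 (illustrative; uncertainties ignored)
eps_d = 0.134
a, b, f, d = 2.16, 1.66, 1.0, 0.0
c = 2.12*np.exp(-1j*np.deg2rad(59.0))

Yd = np.array([[d*eps_d**4, b*eps_d**3, c*eps_d**3],
               [b*eps_d**3, f*eps_d**2, a*eps_d**2],
               [c*eps_d**3, a*eps_d**2, 1.0]])      # overall y33^d factored out

# The singular values are the Yukawa eigenvalues (largest first): (y_b, y_s, y_d)/y33^d
y_b, y_s, y_d = np.linalg.svd(Yd, compute_uv=False)
print("m_d/m_s ~", round(y_d/y_s, 3), "  m_s/m_b ~", round(y_s/y_b, 4))
# Compare with the GUT-scale ratios quoted in Table 3.2 at tan(beta) = 38.
```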
RRRV noted that all solutions, to leading order in the small expansion
parameters, only depend on two phases $\phi_{1}$ and $\phi_{2}$ given by
$\phi_{1}=(\phi_{b}^{\prime}-\phi_{f}^{\prime})-(\phi_{b}-\phi_{f})$ (3.17)
$\phi_{2}=(\phi_{c}-\phi_{a})-(\phi_{b}-\phi_{f})$ (3.18)
where $\phi_{x}$ is the phase of parameter $x$. For this reason it is
sufficient to consider only $b^{\prime}$ and $c$ as complex with all other
parameters real.
As mentioned above, the data favour a texture zero in the $(1,1)$ position.
With a symmetric form for the mass matrix of the first two families, this
leads to the phenomenologically successful Gatto-Sartori-Tonin [134] relation
$V_{us}(M_{X})\approx\left|b\epsilon_{d}-|b^{\prime}|e^{i\,\phi_{b^{\prime}}}\epsilon_{u}\right|\approx\left|\sqrt{(\frac{m_{d}}{m_{s}})_{0}}-\sqrt{(\frac{m_{u}}{m_{c}})_{0}}e^{i\,\phi_{1}}\right|.$
(3.19)
This relation gives an excellent fit to $V_{us}$ with
$\phi_{1}\approx\,\pm\,90^{o}$, and to preserve it we take $d,$ $d^{\prime}$
to be zero in our fits. As discussed above, in $SU(5)$ this texture zero leads
to the GUT relation $Det(M^{d})/Det(M^{l})=1$ which, with threshold
corrections, is in good agreement with experiment. In the case that $c$ is
small it was shown in RRRV that $\phi_{1}$ is to a good approximation the CP
violating phase $\delta$ in the Wolfenstein parameterization. A non-zero $c$
is necessary to avoid the relation $V_{ub}/V_{cb}=\sqrt{m_{u}/m_{c}}$, and with
the improvement in the data it is now necessary to have $c$ larger than was
found in RRRV (as shown in ref. [135], it is possible, in a basis with large
off-diagonal entries, to have a Hermitian pattern with the (1,1) and (1,3)
entries zero provided one carefully orchestrates cancellations among the
$Y^{u}$ and $Y^{d}$ parameters; we find this approach requires a strange-quark
mass near its upper limit). As a result the contribution to CP violation
coming from $\phi_{2}$ is at least $30\%$. The sign ambiguity in $\phi_{1}$
gives rise to an ambiguity in $c$, with the positive sign corresponding to the
larger value of $c$ seen in Tables 3.3 and 3.4.
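Numerically, Eq(3.19) can be checked directly with the GUT-scale ratios of Table 3.2; the short evaluation below, with $\phi_{1}=90^{o}$, is purely illustrative and not part of the fit.

```python
import numpy as np

md_over_ms, mu_over_mc = 0.051, 0.0027   # GUT-scale ratios from Table 3.2
phi1 = np.deg2rad(90.0)                  # the phase preferred by the fits

Vus = abs(np.sqrt(md_over_ms) - np.sqrt(mu_over_mc)*np.exp(1j*phi1))
print(round(Vus, 3))   # ~0.23, close to lambda(M_X) = 0.227 in Table 3.2
```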
| Parameter | A | B | C | B2 | C2 |
|---|---|---|---|---|---|
| $\tan\beta$ | $30$ | $38$ | $38$ | $38$ | $38$ |
| $\gamma_{b}$ | $0.20$ | $-0.22$ | $+0.22$ | $-0.22$ | $+0.22$ |
| $\gamma_{t}$ | $-0.03$ | $0$ | $-0.44$ | $0$ | $-0.44$ |
| $\gamma_{d}$ | $0.20$ | $-0.21$ | $+0.21$ | $-0.21$ | $+0.21$ |
| $a^{\prime}$ | $0$ | $0$ | $0$ | $-2$ | $-2$ |
| $\epsilon_{u}$ | $0.0495(17)$ | $0.0483(16)$ | $0.0483(18)$ | $0.0485(17)$ | $0.0485(18)$ |
| $\epsilon_{d}$ | $0.131(7)$ | $0.128(7)$ | $0.102(9)$ | $0.127(7)$ | $0.101(9)$ |
| $\lvert b^{\prime}\rvert$ | $1.04(12)$ | $1.07(12)$ | $1.07(11)$ | $1.05(12)$ | $1.06(10)$ |
| ${\rm{arg}}(b^{\prime})$ | $90(12)^{o}$ | $91(12)^{o}$ | $93(12)^{o}$ | $95(12)^{o}$ | $95(12)^{o}$ |
| $a$ | $2.17(24)$ | $2.27(26)$ | $2.30(42)$ | $2.03(24)$ | $1.89(35)$ |
| $b$ | $1.69(13)$ | $1.73(13)$ | $2.21(18)$ | $1.74(10)$ | $2.26(20)$ |
| $\lvert c\rvert$ | $0.80(16)$ | $0.86(17)$ | $1.09(33)$ | $0.81(17)$ | $1.10(35)$ |
| ${\rm{arg}}(c)$ | $-\,41(18)^{o}$ | $-\,42(19)^{o}$ | $-\,41(14)^{o}$ | $-\,53(10)^{o}$ | $-\,41(12)^{o}$ |
| $Y^{u}_{33}$ | $0.48(2)$ | $0.51(2)$ | $0.51(2)$ | $0.51(2)$ | $0.51(2)$ |
| $Y^{d}_{33}$ | $0.15(1)$ | $0.34(3)$ | $0.34(3)$ | $0.34(3)$ | $0.34(3)$ |
| $Y^{e}_{33}$ | $0.23(1)$ | $0.34(2)$ | $0.34(2)$ | $0.34(2)$ | $0.34(2)$ |
| $(m_{b}/m_{\tau})(M_{X})$ | $0.67(4)$ | $1.00(4)$ | $1.00(4)$ | $1.00(4)$ | $1.00(4)$ |
| $(3m_{s}/m_{\mu})(M_{X})$ | $0.60(3)$ | $0.9(1)$ | $0.6(1)$ | $0.9(1)$ | $0.6(1)$ |
| $(m_{d}/3m_{e})(M_{X})$ | $0.71(7)$ | $1.04(8)$ | $0.68(6)$ | $1.04(8)$ | $0.68(6)$ |
| $\left\lvert\frac{\det Y^{d}(M_{X})}{\det Y^{e}(M_{X})}\right\rvert$ | $0.3(1)$ | $0.92(14)$ | $0.4(1)$ | $0.92(14)$ | $0.4(1)$ |
Table 3.4: A $\chi^{2}$ fit of Eqs(3.15,3.16) including the SUSY threshold
effects parameterized by the specified $\gamma_{i}$.
Table 3.3 shows results from a $\chi^{2}$ fit of Eqs(3.15,3.16) to the data in
Table 3.2 in the absence of threshold corrections. The errors, indicated by the
terms in brackets, represent the widest axis of the $1\sigma$ error ellipse in
parameter space. The fits labeled ‘A’ have phases such that we obtain the
smaller-magnitude solution for $|c|$, and fits labeled ‘B’ have phases such
that we obtain the larger-magnitude solution for $|c|$. As discussed above, it
is not possible to determine unambiguously the relative contributions of the
off-diagonal elements of the up and down Yukawa matrices to the mixing angles.
In fits A2 and B2 we illustrate the uncertainty associated with this
ambiguity, allowing for $O(1)$ coefficients $a^{\prime}$. In all the examples
in Table 3.3, the mass ratios and Wolfenstein parameters are essentially the
same as in Table 3.2.
The effects of the large $\tan\beta$ threshold corrections are shown in Table
3.4. The threshold corrections depend on the details of the SUSY spectrum, and
we have displayed the effects corresponding to a variety of choices for this
spectrum. Column A corresponds to a “standard” SUGRA fit, the benchmark
Snowmass Points and Slopes (SPS) spectrum 1b of Ref. [3]. Because the spectrum
SPS 1b has large stop and sbottom squark mixing angles, the approximations
given in Eqns(3.7-3.10) break down, and the values of the corrections
$\gamma_{i}$ in Column A need to be calculated with the more complete
expressions in BRP [118]. In the Column A fit and the next two fits in
columns B and C, we set $a^{\prime}$ and $c^{\prime}$ to zero. Column B
corresponds to the fit given in the penultimate column of Table 3.2 which
agrees very well with the simple GUT predictions. It is characterized by the
“anomaly-like” spectrum with $M_{3}$ negative. Column C examines the $M_{3}$
positive case while maintaining the GUT prediction for the third generation
$m_{b}=m_{\tau}.$ It corresponds to the “Just-so” Split-Higgs solution. In the
fits A, B and C the value of the parameter $a$ is significantly larger than
that found in RRRV. This causes problems for models based on non-Abelian
family symmetries, and it is of interest to try to reduce $a$ by allowing
$a^{\prime},$ $b^{\prime}$ and $c^{\prime}$ to vary while remaining
${\mathcal{O}}(1)$ parameters. Doing this for the fits B and C leads to the
fits B2 and C2 given in Table 3.4 where it may be seen that the extent to
which $a$ can be reduced is quite limited. Accommodating this is a challenge
for the broken family-symmetry models.
Although we have included the finite corrections to match the MSSM theory onto
the Standard Model at an effective SUSY scale $M_{S}=500$ GeV, we have not
included finite corrections from matching onto a specific GUT model. Precise
threshold corrections cannot be rigorously calculated without a specific GUT
model. Here we only estimate the order of magnitude of corrections to the mass
relations in Table 3.2 from matching the MSSM values onto a GUT model at the
GUT scale. The $\tan\beta$ enhanced corrections in Eq(3.7-3.10) arise from
soft SUSY breaking interactions and are suppressed by factors of
$M_{SUSY}/M_{GUT}$ in the high-scale matching. Allowing for ${\mathcal{O}}(1)$
splitting of the mass ratios of the heavy states, one obtains corrections to
$y^{b}/y^{\tau}$ (likewise for the lighter generations) of
${\mathcal{O}}(\frac{g^{2}}{(4\pi)^{2}})$ from the $X$ and $Y$ gauge bosons
and ${\mathcal{O}}(\frac{y_{b}^{2}}{(4\pi)^{2}})$ from colored Higgs states.
Because different generations couple to different Higgs representations,
these threshold corrections will differ between the
$3m_{s}/m_{\mu}$ relation and the $m_{b}/m_{\tau}$ relation. These factors
can be enhanced when there are multiple Higgs representations. For an
$SU(5)$ SUSY GUT these corrections are of the order of $2\,\%$. Planck-scale
suppressed operators can also induce corrections to the unification scale
[136] and may have significant effects on the masses of the lighter
generations [137]. In the case that the Yukawa texture is given by a broken
family symmetry in terms of an expansion parameter $\epsilon$, one expects
model dependent corrections of order $\epsilon$ which may be significant.
### Chapter Summary
In summary, in the light of the significant improvement in the measurement of
fermion mass parameters, we have analyzed the possibility that the fermion
mass structure results from an underlying supersymmetric GUT at a very high-
scale mirroring the unification found for the gauge couplings. Use of the RG
equations to continue the mass parameters to the GUT scale shows that,
although qualitatively in agreement with the GUT predictions coming from
simple Higgs structures, there is a small quantitative discrepancy. We have
shown that these discrepancies may be eliminated by finite radiative threshold
corrections involving the supersymmetric partners of the Standard-Model
states. The required magnitude of these corrections is what is expected at
large $\tan\beta$, and the form needed corresponds to a supersymmetric
spectrum in which the gluino mass is negative with the opposite sign to the
wino mass. We have also performed a fit to the recent data to extract the
underlying Yukawa coupling matrices for the quarks and leptons. This is done
in the basis in which the mass matrices are hierarchical in structure with the
off-diagonal elements small relative to the appropriate combinations of on-
diagonal matrix elements, the basis most likely to be relevant if the fermion
mass structure is due to a spontaneously broken family symmetry. We have
explored the effect of SUSY threshold corrections for a variety of SUSY
spectra. The resulting structure has significant differences from previous
fits, and we hope it will provide the “data” for developing models of fermion
masses such as those based on a broken family symmetry.
Since this work was first published, its conclusions have been confirmed by
studies of other research groups [138]. The updated fits to the Yukawa
textures and viability of the Georgi-Jarlskog relations have been used in
numerous string theory and family symmetry models.
## Chapter 4 Mass Determination Toolbox at Hadron Colliders
### Chapter Overview
In the previous chapter, we presented arguments that predict the sign of the
gluino mass relative to the wino mass and relationships between these masses
needed to satisfy classic Georgi-Jarlskog mass relationships at the GUT scale.
This chapter and the remaining chapters discuss mostly model-independent
experimental techniques to determine the mass of a pair-produced dark-matter
particle. As a test case, we take the dark matter to be the lightest
supersymmetric particle (LSP), which we assume is the neutralino
$\tilde{\chi}^{o}_{1}$ (the four neutralinos are superpositions of the two
Higgsinos, the bino, and the neutral wino,
$\tilde{h}^{o}_{u},\tilde{h}^{o}_{d},\tilde{B},\tilde{W}^{3}$; they are
numbered from $1$, having the smallest mass, to $4$, having the largest mass).
Determining the mass of the dark-matter particle is a necessary prerequisite
to determining the entire mass spectrum of the new particle states and to
determining the sign of the gluino’s mass (given that supersymmetry is the
correct model) predicted in the previous chapter.
This chapter reviews the challenges of hadron-collider mass determination and
the mostly model-independent techniques in the literature that address them.
In the subsequent chapters of this thesis, we will improve on
these techniques and develop new techniques that can perform precision
measurements of the dark-matter’s mass with only a few assumptions about the
underlying model.
### 4.1 Mass Determination Challenges at Hadron Colliders
Measuring all the masses and phases associated with the predictions in Chapter
3 or the predictions of any of the many competing models depends on the
successful resolution of several challenges present at any hadron collider.
First to avoid selection bias and because of the large number of possible
models, we would prefer kinematic model-independent techniques to measure
parameters like the mass instead of model-dependent techniques. Kinematic
techniques are made complicated because of the possibility of producing dark
matter in the collider which would lead to new sources (beyond neutrinos) of
missing transverse momentum. Kinematic techniques are also made complex
because the reference frame and center-of-mass energy of the parton collision
are only known statistically (to the extent we understand the uncertainties
in the measured parton distribution functions). Last, our particle detector has
fundamental limitations on the shortest track that can be observed, leading to
combinatoric ambiguities in identifying decay products from new particle states
that are created and decay at the “effective” vertex of the collision.
#### 4.1.1 Kinematic versus Model-Dependent Mass Determination
Any new physics discovered will come with unknown couplings, mixing angles,
spins, interactions, and masses. One approach is to assume a model and fit the
model parameters to the observed data. However, we have hundreds of distinct
anticipated theories possessing tens of new particle states with differing
spins and couplings each with tens of parameters. Global fits to approximately
$10^{9}$ events recorded per year are an enormous amount of work, and assume
that the ‘correct model’ has been anticipated.
We would like to measure the particle properties with minimal model
assumptions. The term ‘model-independent’ technique is misleading because we
are always assuming some model. When we say model-independent, we mean that we
assume broadly applicable assumptions such as special relativity. We also try
to ensure that our techniques are largely independent of _a priori_ unknown
coupling coefficients and model parameters. For this reason our desire for
model-independence constrains our toolbox to kinematic properties of particle
production and decay.
If we are able to determine the mass of the dark-matter particle, other
properties follow more easily. Knowing the dark-matter particle’s mass, the
remaining mass spectrum follows from kinematic edges. If we know the dark-
matter’s mass and gluino’s mass, then the sign of the gluino’s mass predicted
in Chapter 3 can be determined from the distribution of the invariant mass of
jet pairs from the decay of gluino pair production [139]. Another avenue to
the sign of the gluino’s mass requires measuring the masses of the gluino and
the two stop squarks and measuring the decay width of the gluino to these
states [140, Appendix B]. In addition, the measurement of spin correlations, a
study that can contribute to spin determination, has historically relied on
knowing the masses to reconstruct the event kinematics [141].
In short, mass determination is a key part to identifying the underlying
theory which lies beyond the Standard Model. Newly discovered particles could
be Kaluza-Klein (KK) states from extra-dimensions, supersymmetric partners of
known states, technicolor hadrons, or something else that we have not
anticipated. Models predict relationships between parameters. Supersymmetry
relates the couplings of current fermions to the couplings of new bosons, and
the supersymmetric particle masses reflect the origin of supersymmetry
breaking. Masses of KK states tell us about the size of the extra dimensions.
If these parameters, such as the masses and spins, can be measured without
assuming the model, then these observations exclude or confirm specific
models. In general, mass determination of new particle states is central to
discerning what lies beyond the Standard Model.
#### 4.1.2 Dark matter: particle pairs carrying missing transverse momentum
If dark matter is produced at a hadron collider, the likely signature will be
missing transverse momentum. The lightest supersymmetric particle (LSP) or
lightest Kaluza-Klein particle (LKP) is expected to be neutral, stable, and
invisible to the particle detectors. The astrophysical dark matter appears to
be stable. Whatever symmetry makes dark matter stable and / or distinguishes
superpartners from their Standard-Model cousins will likely also require that
dark matter particles be pair produced. Therefore, events with supersymmetric
new particle states are expected to end in two LSP’s leaving the detector
unnoticed. Events with Kaluza-Klein particles are expected to end with two
LKP’s leaving the detector unnoticed. The presence of two invisible particles
prevents complete reconstruction of the kinematics of any single event and
leads to missing transverse momentum in the events.
#### 4.1.3 Reference frame unknown due to colliding hadrons
At a hadron collider, because the colliding partons each carry an unknown
fraction of their hadron’s momentum, we do not know the reference frame of the
collision. Although the individual parton momentum is unknown, the statistical
distribution of the parton’s momentum can be measured from deep inelastic
scattering [142]. The measured parton distribution functions (PDFs)
$f^{H}_{i}(x,Q)$ give the distribution of parton $i$ within the hadron $H$
carrying $x$ fraction of the hadron’s momentum when probed by a space-like
momentum probe $q^{\mu}$ with $Q^{2}=-q^{2}$. $Q$ is also the factorization
scale at which one cuts off collinear divergences. The dominant parton types
$i$ are $u$, $d$, $s$, their antiparticles $\bar{u}$, $\bar{d}$, $\bar{s}$,
and gluons $g$.
The events produced in the collider follow from a convolution of the cross
section over the parton distribution functions. If the two colliding protons
have 4-momentum $p_{1}=\sqrt{s_{LHC}}(1,0,0,1)/2$ and
$p_{2}=\sqrt{s_{LHC}}(1,0,0,-1)/2$, then for example the $u$ and $\bar{u}$
quarks colliding would have 4-momentum $x_{u}p_{1}$ and $x_{\bar{u}}p_{2}$.
The spatial momentum of the collision along the beam axis is given by
$(x_{u}-x_{\bar{u}})\sqrt{s_{LHC}}/2$, and the center-of-mass energy of the
parton collision is $\sqrt{\,x_{u}\,x_{\bar{u}}s_{LHC}}$. Because $x_{u}$ and
$x_{\bar{u}}$ are only known statistically, any individual collision has an
unknown center-of-mass energy and an unknown momentum along the beam axis.
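For a single event this works out as follows; the beam energy and momentum fractions below are arbitrary illustrative choices.

```python
import numpy as np

s_lhc = 14000.0**2            # (GeV)^2, assumed nominal LHC proton-proton energy
x_u, x_ubar = 0.15, 0.03      # hypothetical momentum fractions for one collision

sqrt_s_hat = np.sqrt(x_u*x_ubar*s_lhc)            # parton centre-of-mass energy
pz_cm      = 0.5*(x_u - x_ubar)*np.sqrt(s_lhc)    # net momentum along the beam
print(sqrt_s_hat, pz_cm)      # ~939 GeV and ~840 GeV for these fractions
```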
LHC processes are calculated by convolutions over the PDFs as in
$\sigma=\int\,dx_{u}\,dx_{\bar{u}}\,f^{p}_{u}(x_{u},Q)\,f^{p}_{\bar{u}}(x_{\bar{u}},Q)\,\sigma(x_{u}\,x_{\bar{u}}s_{LHC},Q)_{u\bar{u}\rightarrow final}$ (4.1)
where $\sigma(s,Q)_{u\bar{u}\rightarrow final}$ is the total cross section for
a given parton collision center-of-mass squared $s$, factorization scale $Q$,
and $\sqrt{s_{LHC}}$ is the center-of-mass collision energy of the protons
colliding at the LHC. The region of integration of $x_{u}$ and $x_{\bar{u}}$
is based on the kinematically allowable regions. The factorization scale $Q$
is chosen to minimize the size of the logarithms associated with regulating
collinear divergences in the cross section. See Ref [143] for a recent review
on choosing the factorization scale.
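A schematic Monte-Carlo version of the convolution in Eq(4.1) is sketched below, with a toy parton density and a toy partonic cross section standing in for the real ingredients; the functional forms, the threshold, and the sampling range are assumptions chosen only to show the structure of the calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
s_lhc = 14000.0**2                         # (GeV)^2, assumed beam energy

def f_toy(x):                              # toy stand-in for f_i^p(x, Q)
    return x**(-1.2) * (1.0 - x)**3

def sigma_hat(s_hat):                      # toy partonic cross section (arbitrary units)
    return np.where(s_hat > 500.0**2, 1.0/s_hat, 0.0)

# Sample x log-uniformly on [1e-4, 1); the Jacobian is x * ln(1e4) per axis.
n, L = 500_000, np.log(1e4)
x1 = np.exp(rng.uniform(np.log(1e-4), 0.0, n))
x2 = np.exp(rng.uniform(np.log(1e-4), 0.0, n))
weights = f_toy(x1)*f_toy(x2)*sigma_hat(x1*x2*s_lhc) * x1*x2*L**2
print("toy sigma ~", weights.mean(), "+-", weights.std()/np.sqrt(n))
```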
Calculating these input cross sections even at tree level is an arduous
process that has been largely automated. Performing these convolutions over
experimentally determined PDFs adds even further difficulties. Monte-Carlo
generators typically perform these calculations. The calculations in
subsequent chapters made use of MadGraph and MadEvent [144], HERWIG[17],
CompHEP [145].
Despite this automation, the author has found it useful to reproduce these
tools so that we can deduce and control what aspects of the observed events
are caused by what assumptions. For this purpose, parton distributions can be
downloaded from [146] (one should also look at the Les Houches Accord PDF
Interface, LHAPDF, which interfaces PDFs to different applications:
http://projects.hepforge.org/lhapdf/). Analytic cross sections
for processes can be produced by CompHEP [145].
These review studies have led to an appreciation of uncertainties in the
parton distribution functions, uncertainties about the correct $Q$ to
use (Ref [140] suggests using $Q\approx 2M_{SUSY}$ or whatever energy scale
is relevant to the particles being produced), and uncertainties in the process
of hadronization of the outgoing partons. All these uncertainties contribute
significantly to the uncertainty in the background against which new physics
must be discovered, and to uncertainties underlying model-dependent,
cross-section-dependent determinations of particle masses.
#### 4.1.4 Detector Limitations
Particle detectors have finite capabilities which limit what we can learn. We
will discuss the effects of finite energy resolution later in this thesis.
Here I focus on the lack of the traditional tracks one sees in event
representations.
The early history of particle physics shows beautiful bubble chamber tracks of
particles. Studies of pions and kaons relied on tracks left in bubble chambers
or modern digitized detectors. In the case of the $K_{S}$ the lifetime is
$\tau=0.9\times 10^{-10}$ seconds or $c\tau=2.7$ cm, or equivalently the decay
width is $\Gamma=7\times 10^{-6}$ eV. A particle with a width of $\Gamma=1$ eV
has a lifetime of $\tau=6\times 10^{-16}$ seconds with $c\tau=0.2\,\mu\rm{m}$.
The more massive known states, the $W^{\pm}$ and $Z^{o}$ bosons and the top
quark, have widths of $\Gamma_{W}=2.141\,\,{\rm{GeV}}$,
$\Gamma_{Z}=2.50\,\,{\rm{GeV}}$, and $\Gamma_{t}\approx 1.5\,\,{\rm{GeV}}$.
These decay widths give these states tracks with $c\tau\approx 0.1$ fm, or
about $1/10$ the size of an atomic nucleus.
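The width-to-track-length conversion used in these estimates is simply $c\tau=\hbar c/\Gamma$; a short numerical check of the values quoted above:

```python
# c*tau = (hbar*c) / Gamma, with hbar*c = 0.1973 GeV*fm
hbar_c = 0.1973   # GeV * fm
for name, gamma_GeV in [("K_S (7e-6 eV)", 7e-15), ("1 eV state", 1e-9),
                        ("W boson", 2.141), ("Z boson", 2.50), ("top quark", 1.5)]:
    print(f"{name}: c*tau = {hbar_c/gamma_GeV:.3g} fm")
# ~2.8e13 fm (~2.8 cm), ~2e8 fm (~0.2 micron), and ~0.1 fm respectively.
```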
Most supersymmetric states, depending on the model, have decay widths between
around $1$ MeV and $2$ GeV (this is not a universal statement: some viable
models have long-lived charged states with $c\tau\gg 1$ cm [147, 148, 149]).
These observations, while trivial, make us realize that new-physics
discoveries will most likely need to deduce the properties of the new states
from their decay products only. The fact that all decays effectively occur at
the origin creates combinatoric problems because we cannot know the order in
which the visible particles came off a decay chain, or even to which decay
chain they belong.
### 4.2 Invariant Mass Edge Techniques
We now turn to existent tools to overcome the challenges inherent in mass
determination at hadron colliders in the presence of missing transverse
momentum.
#### 4.2.1 Invariant Mass Edges
Lorentz invariant distributions are optimal for making a distribution
independent of the unknown frame of reference. Fig 4.1 shows a two-body decay
and a three-body decay.
Figure 4.1: (Left:) Two-body decay and its associated $m_{ll}$ distribution.
(Right:) Three-body decay and its associated $m_{ll}$ distribution.
The decay products from $Y$ in both cases involve a dark-matter particle $N$
and two visible states with 4-momenta $\alpha_{1}$ and $\alpha_{2}$; we will
abbreviate $\alpha=\alpha_{1}+\alpha_{2}$. The Lorentz invariant mass of these
two visible states is defined as
$m^{2}_{12}=\alpha^{2}=(\alpha_{1}+\alpha_{2})^{2}.$ (4.2)
Assuming no spin correlation between $\alpha_{1}$ and $\alpha_{2}$, the
distribution for the two-body decay looks more like a right-triangle whereas
the three-body decay case looks softer. Spin correlations between these states
may change the shape and might be used to measure spin in some cases [150].
The distribution shape can also be affected by competing processes in the
three-body decay. For example, the degree of interference with the slepton can
push the peak to a smaller value[151].
The end-point of the distribution gives information about the masses of the
new particle states. In the two-body decay case the endpoint gives
$\max
m^{2}_{12}=\frac{(M_{Y}^{2}-M_{X}^{2})(M_{X}^{2}-M_{N}^{2})}{M_{X}^{2}},$
(4.3)
and in the three-body decay case
$\max m_{12}=M_{Y}-M_{N}.$ (4.4)
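The two-body endpoint of Eq(4.3) is easy to verify with a toy phase-space Monte Carlo; the masses below are arbitrary assumptions and the decays are taken to be isotropic (no spin correlations), so this is an illustrative sketch rather than any of the analyses cited here.

```python
import numpy as np

rng = np.random.default_rng(1)
MY, MX, MN = 180.0, 130.0, 100.0        # hypothetical masses in GeV (assumptions)
n = 200_000

def random_dir(n):
    """Isotropic unit vectors (no spin correlations assumed)."""
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2*np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.stack([sin_t*np.cos(phi), sin_t*np.sin(phi), cos_t], axis=1)

def boost(p, beta):
    """Boost 4-vectors p = (E, px, py, pz) by the per-event velocity vectors beta."""
    b2 = np.sum(beta**2, axis=1, keepdims=True)
    gamma = 1.0/np.sqrt(1.0 - b2)
    bp = np.sum(beta*p[:, 1:], axis=1, keepdims=True)
    E = gamma*(p[:, :1] + bp)
    k = (gamma - 1.0)/np.where(b2 > 0, b2, 1.0)
    return np.concatenate([E, p[:, 1:] + (k*bp + gamma*p[:, :1])*beta], axis=1)

# Y rest frame: Y -> X + (massless visible 1)
E1 = (MY**2 - MX**2)/(2*MY)
n1 = random_dir(n)
p1 = np.concatenate([np.full((n, 1), E1), E1*n1], axis=1)
pX = np.concatenate([np.full((n, 1), (MY**2 + MX**2)/(2*MY)), -E1*n1], axis=1)

# X rest frame: X -> N + (massless visible 2), boosted back with X's velocity
E2 = (MX**2 - MN**2)/(2*MX)
n2 = random_dir(n)
p2 = boost(np.concatenate([np.full((n, 1), E2), E2*n2], axis=1), pX[:, 1:]/pX[:, :1])

p12 = p1 + p2
m12 = np.sqrt(np.maximum(p12[:, 0]**2 - np.sum(p12[:, 1:]**2, axis=1), 0.0))
edge = np.sqrt((MY**2 - MX**2)*(MX**2 - MN**2))/MX
print(f"MC endpoint {m12.max():.1f} GeV vs Eq(4.3) {edge:.1f} GeV")
```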
These edges in invariant mass combinations provide information about mass
differences or mass squared differences but not about the mass-scale itself.
Measuring these edges requires some model of the events creating the
distribution in order to simulate the effect of the detector's energy
resolution. For this reason, even end-point techniques, although mostly model-
independent, still require some estimate of the distribution near the end
point to get an accurate end-point measurement. Radiative corrections also
play a role in shifting this endpoint slightly [152].
These techniques have been studied to select a model if one assumes a
particular set of starting models [153]. They have also been used with
Bayesian techniques to measure the mass difference between slepton states
[154].
#### 4.2.2 Constraints from Cascade Decays
If there are many new particle states produced at the collider, then we can
use edges from cascade decays between these new states to provide constraints
between the masses. Given some luck regarding the relative masses, these
constraints may be inverted to obtain the mass scale. Fig 4.2 shows a cascade
decay from $Z$ to $Y$ to $X$ ending in $N$ and visible particle momenta
$\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$.
There are four unknown masses and potentially four linearly-independent
endpoints. In addition to Eq(4.3), we also have [155, 153]
$\max m^{2}_{32}=(M^{2}_{Z}-M^{2}_{Y})(M^{2}_{Y}-M^{2}_{X})/M_{Y}^{2},$ (4.5)
and
$\max m^{2}_{123}=\begin{cases}(M^{2}_{Z}-M^{2}_{Y})(M^{2}_{Y}-M^{2}_{N})/M_{Y}^{2}&{\rm if\ }M_{Y}^{2}<M_{N}M_{Z}\\ (M^{2}_{Z}-M^{2}_{X})(M^{2}_{X}-M^{2}_{N})/M_{X}^{2}&{\rm if\ }M_{N}M_{Z}<M_{X}^{2}\\ (M^{2}_{Z}M_{X}^{2}-M_{Y}^{2}M_{N}^{2})(M_{Y}^{2}-M_{X}^{2})/(M^{2}_{Y}M^{2}_{X})&{\rm if\ }M_{X}^{2}M_{Z}<M_{N}M_{Y}^{2}\\ (M_{Z}-M_{N})^{2}&{\rm otherwise}.\end{cases}$ (4.6)
The fourth endpoint is
$\max m^{2}_{13}=(M^{2}_{Z}-M^{2}_{Y})(M^{2}_{X}-M^{2}_{N})/M_{X}^{2}.$ (4.7)
Depending on whether we can distinguish visible particles $1$, $2$ and $3$
from each other, some of these endpoints may be obscured by combinatorics
problems. If the visible particles and the masses are such that these four can
be disentangled from combinatoric problems, and if the masses are such that
these four end-points provide independent constraints, and if there are
sufficient statistics to measure the endpoints well enough, then we can solve
for the mass spectrum.
Figure 4.2: Cascade decay from $Z$ to $Y$ to $X$ ending in $N$ and visible
particle momenta $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$.
This technique has been studied in [156, 155, 153] and subsequently by many
others. In some cases, there is more than one solution to the equations. This
degeneracy can be lifted by using the shape of these distributions [157, 158,
159]. Using the supersymmetric benchmark point SPS 1a, this technique has been
shown to determine the LSP mass to $\pm 3.4\,\,{\rm{GeV}}$ with about 500
thousand events from $300\,\,{\rm{fb}}^{-1}$. The authors of Ref. [157] are
able to determine the mass differences to
$\sigma_{M_{\tilde{\chi}^{o}_{2}}-M_{\tilde{\chi}^{o}_{1}}}=0.2\,\,{\rm{GeV}}$
and $\sigma_{M_{\tilde{l}_{R}}-M_{\tilde{\chi}^{o}_{1}}}=0.3\,\,{\rm{GeV}}$.
### 4.3 Mass Shell Techniques
What if there are not as many new states accessible at the collider, or what
if the masses arrange themselves such that the four invariant mass edges
listed above cannot be solved? There is also a series of approaches called
Mass Shell Techniques (MST) (the name MST is suggested in Ref. [160]) in which
assumptions about the topology and on-shell conditions are used to solve for
the unknown masses.
Figure 4.3: Events in which the new state $Y$ is pair produced and in which
each $Y$ decays through a two-body decay to a massive new state $X$ and a
visible state $1$, and where $X$ subsequently decays to a massive state $N$,
invisible to the detector, and a visible particle $2$. All previous decay
products are grouped into the upstream transverse momentum, $k$.
An MST was used in the early 1990s by Goldstein and Dalitz (GD) [161], and by
Kondo, Chikamatsu, and Kim (KCK) [162], as a suggested method to measure the
mass of the top quark. Top-quark pair production and decay have the topology shown in
Fig 4.3. Here $Y=t$ or $\bar{t}$, $X=W^{\pm}$, $(1)=b$, $(2)=e$ and
$N=\nu_{e}$. In each event, one measures $\alpha_{1,2}$, $\beta_{1,2}$, and
the missing transverse momentum $\not{P}_{T}=(p+q)_{T}$. The resulting mass
shell and missing momentum equations are
$\displaystyle M_{N}^{2}$ $\displaystyle=$ $\displaystyle p^{2}=q^{2},$ (4.8)
$\displaystyle M_{X}^{2}$ $\displaystyle=$
$\displaystyle(p+\alpha_{2})^{2}=(q+\beta_{2})^{2},$ (4.9) $\displaystyle
M_{Y}^{2}$ $\displaystyle=$
$\displaystyle(p+\alpha_{1}+\alpha_{2})^{2}=(q+\beta_{1}+\beta_{2})^{2},$
(4.10) $\displaystyle\not{P}_{T}$ $\displaystyle=$ $\displaystyle(p+q)_{T}.$
(4.11)
If we assume a mass for $M_{Y}$, then we have $8$ unknowns (the four
components of each of $p$ and $q$) and $8$ equations. The equations can be
reduced to the intersection of two ellipses, which have between zero and four
distinct solutions. In GD and KCK, these solutions are supplemented with
information from the cross section, and Bayes’ theorem is used to create a
likelihood function for the unknown mass of the top quark. The approach has a
systematic bias [163, 164] that must be removed by modeling [165]; details of
this bias are discussed further in Section 8.2.1.
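To make the counting concrete, the sketch below (Python with NumPy/SciPy; the event four-vectors are hypothetical placeholders, not taken from any simulation or measurement) sets up the eight mass-shell and missing-momentum conditions of Eqs. (4.8)-(4.11) for an assumed $M_{Y}$ and hands them to a generic root finder. For a real event there are between zero and four discrete solutions, and a single `fsolve` call at best finds one of them from a given starting point; for inconsistent placeholder inputs it may find none.

```python
import numpy as np
from scipy.optimize import fsolve

def minkowski_sq(v):
    """Minkowski square E^2 - |p|^2 of a 4-vector (E, px, py, pz)."""
    return v[0]**2 - np.dot(v[1:], v[1:])

def residuals(unknowns, alpha1, alpha2, beta1, beta2, ptmiss, m_y, m_x, m_n):
    """Eqs. (4.8)-(4.11): 8 conditions for the 8 unknown components of p and q."""
    p, q = unknowns[:4], unknowns[4:]
    return [
        minkowski_sq(p) - m_n**2,                    # p^2 = M_N^2
        minkowski_sq(q) - m_n**2,                    # q^2 = M_N^2
        minkowski_sq(p + alpha2) - m_x**2,           # (p + alpha_2)^2 = M_X^2
        minkowski_sq(q + beta2) - m_x**2,            # (q + beta_2)^2 = M_X^2
        minkowski_sq(p + alpha1 + alpha2) - m_y**2,  # (p + alpha_1 + alpha_2)^2 = M_Y^2
        minkowski_sq(q + beta1 + beta2) - m_y**2,    # (q + beta_1 + beta_2)^2 = M_Y^2
        p[1] + q[1] - ptmiss[0],                     # x component of missing p_T
        p[2] + q[2] - ptmiss[1],                     # y component of missing p_T
    ]

# Hypothetical measured momenta (GeV); placeholders for illustration only.
alpha1 = np.array([60.0, 30.0, 20.0, 40.0])
alpha2 = np.array([35.0, -10.0, 25.0, 15.0])
beta1  = np.array([55.0, -25.0, -15.0, 30.0])
beta2  = np.array([30.0, 12.0, -20.0, -5.0])
ptmiss = np.array([-7.0, -10.0])

guess = np.array([100.0, 0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0])
sol, info, ier, msg = fsolve(residuals, guess, full_output=True,
                             args=(alpha1, alpha2, beta1, beta2, ptmiss,
                                   173.0, 80.4, 0.0))
# For placeholder inputs there may be no exact solution (ier != 1);
# real events admit between zero and four solutions.
print("converged:", ier == 1)
```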
A modern reinvention of this MST is found in Cheng, Gunion, Han, Marandella
and McElrath (CHGMM) [166] where they assume a symmetric topology with an on-
shell intermediate state such as in Fig 4.3. This reinvention assumes
$Y=\tilde{\chi}^{o}_{2}$, $X=\tilde{l}$ and $N=\tilde{\chi}^{o}_{1}$. CHGMM
test each event’s compatibility with all values of $(M_{Y},M_{X},M_{N})$.
Using this approach, they find that with $300\,\,{\rm{fb}}^{-1}$ of events
they can determine the LSP mass to $\pm 12$ GeV. Unlike GD and KCK, CHGMM
make no reference to Bayesian techniques.
Another MST assumes a longer symmetric decay chain and requires the masses in
two events to be equal [167]. This gives enough equations to match the number
of unknowns and to solve for the masses directly. For SPS 1a after
$300\,\,{\rm{fb}}^{-1}$, they reach $\pm 2.8\,\,{\rm{GeV}}$ using $700$ events
satisfying their cuts, but have a $2.5\,\,{\rm{GeV}}$ systematic bias that
needs modeling to remove.
Finally, there is a suggestion for a hybrid MST in which one combines on-shell
conditions with the information from the many edges in cascade decays [168].
The $M_{2C}$ variable introduced in Chapter 6 is a simple example of such a
hybrid technique, and predated their suggestion.
### 4.4 Transverse Mass Techniques
#### 4.4.1 $M_{T}$ and Measuring $M_{W}$
Invariant mass distributions are Lorentz invariant. In events with one
invisible particle, like a neutrino, leaving the detector unnoticed, we only
know that particle’s transverse momentum, inferred from the missing transverse
momentum. If a parent particle $X$ decays into a visible particle with
observed 4-momentum $\alpha_{1}$ and an invisible particle with 4-momentum
$p$, what progress can be made in mass determination? There is a class of
techniques which, although not Lorentz invariant, is invariant with respect to
longitudinal boosts along the beam line. The 4-momentum
$p_{\mu}=(p_{0},p_{x},p_{y},p_{z})$ is cast in new coordinates
$\displaystyle\eta(p)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\ln\frac{p_{0}+p_{z}}{p_{0}-p_{z}}$ (4.12)
$\displaystyle E_{T}(p)$ $\displaystyle=$
$\displaystyle\sqrt{m_{p}^{2}+p^{2}_{x}+p_{y}^{2}}$ (4.13) $\displaystyle
p_{\mu}$ $\displaystyle=$
$\displaystyle(E_{T}\cosh\eta,p_{x},p_{y},E_{T}\sinh\eta)$ (4.14)
where $m_{p}^{2}=p^{2}$. The mass of $X$ in the decay
$X(\alpha_{1}+p)\rightarrow N(p)+1(\alpha_{1})$ in Fig. 4.1 is
$\displaystyle M_{X}^{2}=(p+\alpha_{1})^{2}$ $\displaystyle=$ $\displaystyle
M_{N}^{2}+M_{1}^{2}+2p\cdot\alpha_{1}$ (4.15) $\displaystyle=$ $\displaystyle
M_{N}^{2}+M_{1}^{2}+2E_{T}(p)E_{T}(\alpha_{1})\cosh(\eta(p)-\eta(\alpha_{1}))-2p_{T}\cdot(\alpha_{1})_{T}.$
(4.16)
We observe the 4-momentum $\alpha_{1}$ in the detector. From the missing
transverse momentum, we can deduce the transverse components $p_{x}$ and
$p_{y}$. The components $p_{0}$ and $p_{z}$ remain unknown. If we know the
mass of $N$, we can bound the unknown mass of $X$ from below by minimizing
with respect to the unknown rapidity $\eta(p)$. The minimum
$\displaystyle M_{X}^{2}\geq M^{2}_{T}(p,\alpha_{1})\equiv
M_{N}^{2}+M_{1}^{2}+2E_{T}(p)E_{T}(\alpha_{1})-2p_{T}\cdot(\alpha_{1})_{T}$
(4.17)
is at $\eta(p)=\eta(\alpha_{1})$ and is guaranteed not to exceed the true
mass of $X$. This lower bound is called the transverse mass $M_{T}$.
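A minimal Python sketch of Eq. (4.17), assuming a placeholder visible 4-vector and missing transverse momentum (illustrative only):

```python
import numpy as np

def transverse_mass(alpha1, ptmiss, m_n=0.0):
    """Transverse mass of Eq. (4.17) for a visible 4-vector alpha1 = (E, px, py, pz)
    and the missing transverse momentum (px, py), assigned to an invisible
    particle of mass m_n."""
    m1_sq = max(alpha1[0]**2 - np.dot(alpha1[1:], alpha1[1:]), 0.0)
    et_vis = np.sqrt(m1_sq + np.dot(alpha1[1:3], alpha1[1:3]))
    et_inv = np.sqrt(m_n**2 + np.dot(ptmiss, ptmiss))
    mt_sq = m_n**2 + m1_sq + 2.0 * (et_vis * et_inv - np.dot(alpha1[1:3], ptmiss))
    return np.sqrt(max(mt_sq, 0.0))

# Placeholder electron 4-vector and missing pT (GeV), for illustration only.
electron = np.array([45.0, 20.0, 30.0, 25.0])
ptmiss = np.array([-18.0, -32.0])
print(transverse_mass(electron, ptmiss))  # never exceeds the true parent mass
```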
This technique is used to measure the $W^{\pm}$ boson’s mass with the
identification $X=W^{\pm}$, $N=\nu_{e}$, and $(1)=e^{\pm}$. The first
observation of the $W^{\pm}$ was reported in 1983 by the UA1 Collaboration at
the Super Proton Synchrotron (SPS) collider at CERN [48]. They used $M_{T}$ to
bound $M_{W}\geq 73\,\,{\rm{GeV}}$ at the $90\%$ confidence level. Assuming
the Standard Model and fitting the data to the model with $M_{W}$ as a free
parameter gave the UA1 Collaboration the measurement $M_{W}=81\pm
5\,\,{\rm{GeV}}$, which agreed with the prediction described by Llewellyn
Smith and Wheater [47]. $M_{T}$ is still used even for the more recent 2002
D0 Collaboration model-independent measurement of $M_{W}=80.483\pm
0.084\,\,{\rm{GeV}}$ [169].
#### 4.4.2 The Stransverse Mass Variable $M_{T2}$
If a hadron collider pair-produces dark-matter particles, then there are two
sources of missing transverse momentum, each carried by a particle of the same
unknown mass. The ‘stransverse mass’ $m_{T2}$ introduced by Lester and Summers
[12, 13] adapts the transverse mass $m_{T}$ to this task. $M_{T2}$ techniques
have become widely used, with Refs. [12, 13] having more than 45 citations.
The stransverse mass is used to determine the mass difference between a parent
particle and a dark-matter candidate particle, given an assumed mass for the
dark-matter candidate, based on a topology similar to Figs 4.4 or 4.3. The
variable $m_{T2}$ accepts three inputs: $\chi_{N}$ (an assumed mass of the two
particles carrying away missing transverse momentum), $\alpha$ and $\beta$
(the visible momenta of each branch), and $\not{P}_{T}=(p+q)_{T}$ (the missing
transverse momentum two-vector). We can define $m_{T2}$ in terms of the
transverse mass of each branch, where we minimize the maximum of the two
transverse masses over the unknown split between $p$ and $q$ of the overall
missing transverse momentum:
$M^{2}_{T2}(\chi_{N},\alpha,\beta,\not{P}_{T})=\min_{p_{T}+q_{T}=\not{P}_{T}}\left[\max\left\{M^{2}_{T}(\alpha,p),M^{2}_{T}(\beta,q)\right\}\right].$
(4.18)
In this expression $\chi_{N}$ is the assumed mass of $N$, $\alpha$ and $\beta$
are the four momenta of the visible particles in the two branches, the
transverse mass is given by
$M^{2}_{T}(\alpha,p)=m^{2}_{\alpha}+\chi_{N}^{2}+2(E_{T}(p)E_{T}(\alpha)-p_{T}\cdot\alpha_{T})$
and the transverse energy of the invisible particle,
$E_{T}(p)=\sqrt{p_{T}^{2}+\chi_{N}^{2}}$, is determined from the transverse
momentum of $p$ and its assumed mass.
The $M_{T}$ here is identical to Eq(4.17) with $M_{N}$ replaced by the assumed
mass $\chi_{N}$. An analytic formula for the case with no transverse upstream
momentum $k_{T}$ can be found in the appendix of [170]. For each event, the
quantity
$M_{T2}(\chi_{N}=m_{N},\alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2},\not{P}_{T})$
gives the smallest mass for the parent particle compatible with the event’s
kinematics. Under ideal assumptions, the mass of the parent particle $Y$ is
given by the end-point of the distribution of this $m_{T2}$ parameter over a
large number of events like Figs. 4.4 or 4.3. Because a priori we do not know
$M_{N}$, we need some other mechanism to determine $M_{N}$. We use $\chi$ to
distinguish assumed values of the masses ($\chi_{Y}$, $\chi_{X}$, $\chi_{N}$)
from the true values for the masses ($m_{Y}$, $m_{X}$, $m_{N}$). Because of
this dependence on the unknown mass, we should think of $\max\,m_{T2}$ as
providing a relationship or constraint between the mass of $Y$ and the mass of
$N$. This forms a surface in the ($\chi_{Y}$, $\chi_{X}$, $\chi_{N}$) space on
which the true mass will lie. We express this relationship as
$\chi_{Y}(\chi_{N})$. (In principle this surface would be considered a
function $\chi_{Y}(\chi_{X},\chi_{N})$, but $m_{T2}$ makes no reference to
the mass of $X$, and the resulting constraints are therefore independent of
any assumed value of the mass of $X$.)
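Since no closed form is quoted here for the general case, a brute-force numerical evaluation of Eq. (4.18) can be written directly from the definition. The sketch below (Python with SciPy) is a rough illustration rather than production code, and is much slower than the analytic or community library implementations such as the one at [179]; the event four-vectors are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def mt_sq(m_vis_sq, vis_pt, chi, inv_pt):
    """Squared branch transverse mass, Eq. (4.17) with M_N replaced by chi."""
    et_vis = np.sqrt(m_vis_sq + vis_pt @ vis_pt)
    et_inv = np.sqrt(chi**2 + inv_pt @ inv_pt)
    return m_vis_sq + chi**2 + 2.0 * (et_vis * et_inv - vis_pt @ inv_pt)

def mt2(chi, alpha, beta, ptmiss):
    """Numerical M_T2 of Eq. (4.18): alpha, beta are visible 4-vectors (E,px,py,pz),
    ptmiss is the missing transverse momentum 2-vector, chi the assumed N mass."""
    m_a_sq = max(alpha[0]**2 - alpha[1:] @ alpha[1:], 0.0)
    m_b_sq = max(beta[0]**2 - beta[1:] @ beta[1:], 0.0)

    def worst_branch(p_t):
        q_t = ptmiss - p_t
        return max(mt_sq(m_a_sq, alpha[1:3], chi, p_t),
                   mt_sq(m_b_sq, beta[1:3], chi, q_t))

    # The objective is convex in p_t, so a simplex search from a few starts suffices.
    starts = [0.5 * ptmiss, np.zeros(2), ptmiss]
    best = min(minimize(worst_branch, x0, method="Nelder-Mead").fun for x0 in starts)
    return np.sqrt(max(best, 0.0))

# Placeholder event (GeV), for illustration only.
alpha = np.array([120.0, 40.0, 60.0, 50.0])
beta = np.array([90.0, -50.0, -30.0, 20.0])
ptmiss = np.array([15.0, -40.0])
print(mt2(chi=0.0, alpha=alpha, beta=beta, ptmiss=ptmiss))
```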
In addition to invariance with respect to longitudinal boosts, $M_{T2}$, in
the limit where $k_{T}=0$, is also invariant with respect to back-to-back
boosts of the parent particle pair in the transverse plane [171].
$M_{T2}$ is an ideal tool for top-mass determination at the LHC [172].
$M_{T2}$ would apply to events where both pair-produced top quarks decay to a
$b$ quark and a $W^{\pm}$ and where in both branches the $W^{\pm}$ decays
leptonically to $l$ and $\nu_{l}$. To use $M_{T2}$ to find the mass of the
parent particle, a value must be assumed for $\chi_{N}=M_{N}$. For a $\nu$ we
approximate $\chi_{N}=0$. However, even in models of new physics where new
invisible particles are nearly massless (like the gravitino studied in [173]),
we would rather not just assume the mass of the lightest supersymmetric
particle (LSP), which is needed as an input to the traditional $m_{T2}$
analysis, without measuring it in some model independent way.
#### 4.4.3 Max $M_{T2}$ Kink Technique
Figure 4.4: Events in which the new state $Y$ is pair produced and in which
each $Y$ decays through a three-body decay to a massive state $N$, invisible
to the detector, and visible particles $1$, $2$, $3$, and $4$. All previous
decay products are grouped into the upstream transverse momentum, $k$.
$M_{T2}$ assumes a mass $\chi_{N}$ for $N$. Is there any way that one can
determine $M_{N}$ from $M_{T2}$ alone? At the same time as the author was
completing work on $M_{2C}$ described in Chapter 6, Cho, Choi, Kim, and Park
(CCKP) [174, 171] published an observation on how the mass could be deduced
from $M_{T2}$ alone. If $Y$ is pair produced and each undergoes a three-body
decay (the presence of a $\geq 3$-body decay is a sufficient but not
necessary condition; two-body decays can also display kinks [175, 176]
provided the decaying particles have sufficiently large transverse boosts) to
$N$ as shown in Fig. 4.4, then a ‘kink’ in ${\tt{max}}\ m_{T2}$ [12, 13]
will occur at a position which indicates the invisible particle mass. This
‘kink’ is corroborated in Refs. [176, 177]. Fig 4.5 shows the ‘kink’ in
${\tt{max}}\ m_{T2}$ in an idealized simulation where $M_{Y}=800$ GeV and
$M_{N}=100$ GeV. In the CCKP example, they study a mAMSB scenario where a
gluino decays via a three-body decay to the LSP and quarks. In this case, they
determined the LSP’s mass to $\pm 1.7$ GeV. The difficulty in mass
determination with this technique scales as
$(M_{+}/M_{-})^{2}=(M_{Y}+M_{N})^{2}/(M_{Y}-M_{N})^{2}$. For CCKP’s example
$M_{+}/M_{-}=1.3$; we will consider examples where $M_{+}/M_{-}\approx 3$. The
$(M_{+}/M_{-})^{2}$ scaling behavior follows by propagating the error in
$M_{-}$ to the position of the intersection of the two curves that form the kink.
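To visualise the kink, one can evaluate the two limiting branches of $\max m_{T2}(\chi)$ quoted by CCKP for $k=0$ (see Sec. 6.2): below $\chi=M_{N}$ the endpoint follows $k^{*}+\sqrt{(k^{*})^{2}+\chi^{2}}$ with $k^{*}=(M_{Y}^{2}-M_{N}^{2})/2M_{Y}$, and above $\chi=M_{N}$ it follows $\chi+M_{-}$; the two branches meet at $(\chi,\max m_{T2})=(M_{N},M_{Y})$. A small Python sketch (illustrative only):

```python
import numpy as np

def max_mt2_endpoint(chi, m_y, m_n):
    """Limiting max M_T2(chi) for k = 0 as quoted by CCKP (see Sec. 6.2)."""
    k_star = (m_y**2 - m_n**2) / (2.0 * m_y)
    if chi < m_n:
        return k_star + np.sqrt(k_star**2 + chi**2)
    return chi + (m_y - m_n)

# Masses (GeV) matching the example quoted above; used only to show the kink.
m_y, m_n = 800.0, 100.0
for chi in (0.0, 50.0, 100.0, 150.0, 200.0):
    print(chi, max_mt2_endpoint(chi, m_y, m_n))
# The slope changes discontinuously at chi = M_N, where max M_T2 = M_Y.
```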
This kink is quantified by the constrained mass variable $m_{2C}$ [15, 16]
that is the subject of later chapters of this thesis. We will be applying it
to a case about $5$ or $6$ times more difficult because $M_{+}/M_{-}\approx 3$
for the neutralino studies on which we choose to focus.
Figure 4.5: The $\max M_{T2}(\chi)$ shows a kink at the true $M_{N}$ and
$M_{Y}$. For this simulation, $m_{Y}=200$ GeV, and $m_{N}=99$ GeV.
### Chapter Summary
In this chapter, we have introduced the need for model-independent mass
measurements of collider-produced dark matter. We have outlined many
approaches based on edges of invariant mass distributions, mass-shell
techniques, and transverse mass techniques. Each technique has distinct
domains of validity and requirements. In the remaining chapters of this
thesis, we introduce and test new model-independent techniques to determine
the mass of dark-matter particles. Which approach turns out to be best is likely
to depend on what scenario nature hands us, since the various techniques
involve different assumptions. Having different approaches also offers the
advantage of providing a system of redundant checks.
## Chapter 5 Using $M_{T2}$ with Cascade decays
### Chapter Overview
Recently, a paper by Tovey [178] introduced a new variable, $M_{CT}$, with the
powerful concept of extracting mass information from intermediate stages of a
symmetric decay chain. In this chapter, we compare $M_{T2}$ with $M_{CT}$. The
variable $M_{CT}$ is new and shares many similarities with $M_{T2}$, but also
differs in important ways. We briefly define $M_{CT}$ and explain when it
gives identical results to $M_{T2}$ and when it gives different results. We
comment on benefits of each variable in its intended applications. We find
that for massless visible particles $M_{CT}$ equals $M_{T2}$ in a particular
limit, but $M_{T2}$ has better properties when there is initial state
radiation (ISR) or upstream transverse momentum (UTM). (ISR and UTM both
indicate $k_{T}\neq 0$; we take ISR to mean the case when $k_{T}$ is small
compared to the energy scales involved in the collision.) We argue that
$M_{T2}$ is a more powerful
tool for extracting mass information from intermediate stages of a symmetric
decay chain. This chapter is based on work first published by the author in
Ref. [14].
### 5.1 $M_{T2}$ and $M_{CT}$ in this context
Both $M_{T2}$ and $M_{CT}$ assume a pair-produced new-particle state followed
by each branch decaying symmetrically to visible states and dark-matter
candidates which escape detection and appear as missing transverse momentum.
Fig 4.3 is the simplest example on which we can meaningfully compare the two
kinematic quantities. The figure shows two partons colliding and producing
some observed initial state radiation (ISR) or upstream transverse momentum
(UTM) with four momenta $k$ and an on-shell, pair-produced new state $Y$. On
each branch, $Y$ decays to on-shell states $X$ and $v_{1}$ with masses $M_{X}$
and $m_{v_{1}}$, and $X$ then decays to on-shell states $N$ and $v_{2}$ with
masses $M_{N}$ and $m_{v_{2}}$. The four-momenta of $v_{1}$, $v_{2}$ and $N$
are respectively $\alpha_{1}$, $\alpha_{2}$ and $p$ on one branch and
$\beta_{1}$, $\beta_{2}$ and $q$ on the other branch. The missing transverse
momentum $\not{P}_{T}$ is given by the transverse part of $p+q$.
Tovey [178] recently defined a new variable $M_{CT}$ which has many
similarities to $M_{T2}$. The variable is defined as
$\displaystyle M_{CT}^{2}(\alpha_{1},\beta_{1})$ $\displaystyle=$
$\displaystyle(E_{T}(\alpha_{1})+E_{T}(\beta_{1}))^{2}-(\alpha_{1T}-{\beta}_{1T})^{2}.$
(5.1)
Tovey’s goal is to identify another constraint between masses in the decay
chain. He observes that in the rest frame of $Y$ the momentum of the back-to-
back decay products $X$ and $v_{1}$ is given by
$\left(k_{*}(M_{Y},M_{X},m_{v_{1}})\right)^{2}=\frac{(M_{Y}^{2}-(m_{v_{1}}+M_{X})^{2})(M_{Y}^{2}-(m_{v_{1}}-M_{X})^{2})}{4M_{Y}^{2}}$
(5.2)
where $k_{*}$ is the two-body excess-momentum parameter (2BEMP). (Tovey refers
to this as the 2-body mass parameter ${\mathcal{M}}_{i}$; we feel calling this
a mass is a bit misleading, so we suggest 2BEMP.) In the absence of transverse
ISR ($k_{T}=0$) and if the visible particles are effectively massless
($m_{v_{1}}=0$), Tovey observes that $\max M_{CT}(\alpha_{1},\beta_{1})$ is
given by $2k_{*}$; this provides an equation of constraint between $M_{Y}$ and
$M_{X}$. Tovey observes that if we could do this analysis at various stages
along the symmetric decay chain, then in principle all the masses could be
determined.
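A short Python sketch of Eqs. (5.1) and (5.2) (illustrative only; the visible 4-vectors and masses are placeholders):

```python
import numpy as np

def m_ct(alpha1, beta1):
    """Contransverse mass of Eq. (5.1); inputs are 4-vectors (E, px, py, pz)."""
    def e_t(v):
        m_sq = max(v[0]**2 - np.dot(v[1:], v[1:]), 0.0)
        return np.sqrt(m_sq + np.dot(v[1:3], v[1:3]))
    diff_t = alpha1[1:3] - beta1[1:3]
    m_ct_sq = (e_t(alpha1) + e_t(beta1))**2 - np.dot(diff_t, diff_t)
    return np.sqrt(max(m_ct_sq, 0.0))

def k_star(m_y, m_x, m_v1):
    """Two-body excess-momentum parameter (2BEMP) of Eq. (5.2)."""
    num = (m_y**2 - (m_v1 + m_x)**2) * (m_y**2 - (m_v1 - m_x)**2)
    return np.sqrt(num) / (2.0 * m_y)

# For k_T = 0 and massless v_1, max M_CT over events approaches 2 k_*.
print(2.0 * k_star(m_y=181.0, m_x=144.0, m_v1=0.0))
print(m_ct(np.array([50.0, 30.0, 20.0, 30.0]),
           np.array([40.0, -25.0, -10.0, 28.0])))
```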
The big advantage of $M_{CT}$ is its computational simplicity: it is a simple
one-line formula to evaluate. Also, $M_{CT}$ is intended to be calculated only
once per event instead of at a variety of choices of the hypothetical LSP mass
$\chi$. In contrast, $M_{T2}$ is a more computationally intensive parameter to
compute, but this is aided by the use of a common repository of
community-tested C++ libraries found at [179].
How are these two variables similar? Both $M_{CT}$ and $M_{T2}$, in the limit
of $k_{T}=0$, are invariant under back-to-back boosts of the parent particles’
momenta [171]. The variable $M_{CT}$ equals $M_{T2}$ in the special case where
$\chi=0$, and the visible particles are massless
$(\alpha_{1}^{2}=\beta_{1}^{2}=0)$, and there is no transverse ISR or UTM
($k_{T}=0$)
$\displaystyle M_{CT}(\alpha_{1},\beta_{1})$ $\displaystyle=$ $\displaystyle
M_{T2}(\chi=0,\alpha_{1},\beta_{1},\not{P}_{T}=(p+q+\alpha_{2}+\beta_{2})_{T})\
\ \ {\rm{if}}\ \ \alpha_{1}^{2}=\beta_{1}^{2}=0,$ (5.3) $\displaystyle=$
$\displaystyle
\sqrt{2({\alpha_{1}}_{T}\cdot{\beta_{1}}_{T}+|{\alpha_{1}}_{T}|\,|{\beta_{1}}_{T}|)}.$
(5.4)
The $M_{CT}$ side of the equation is straightforward. The $M_{T2}$ side of
the expression can be derived analytically using the formula for $M_{T2}$
given in [170]; we also show a short proof in Appendix C. Eq(5.3) uses
$M_{T2}$ in an unconventional way: we group the observed momenta of the second
decay products into the missing transverse momentum. In this limit, both share
an endpoint of $2k_{*}=(M_{Y}^{2}-M_{X}^{2})/M_{Y}$. To the best of our
knowledge, this endpoint was first pointed out by CCKP [174]. (The endpoint
given by CCKP is violated for non-zero ISR at $\chi_{N}<M_{N}$ and
$\chi_{N}>M_{N}$.) We find it surprising that a physical relationship between
the masses follows from $M_{T2}$ evaluated at a non-physical $\chi$. In the
presence of ISR or UTM, Eq(5.3) is no longer an equality. Furthermore, in the
presence of ISR or UTM, the end point of the distribution given by either side
of Eq(5.3) exceeds $2k_{*}$. In both cases, we will need to solve a
combinatoric problem of matching visible particles to their decay order and
branch of the event, which we leave for future research.
In the case where the visible particle $v_{1}$ is massive, the two parameters
give different end-points
$\displaystyle\max M_{CT}(\alpha_{1},\beta_{1})$ $\displaystyle=$
$\displaystyle\frac{M_{Y}^{2}-M_{X}^{2}}{M_{Y}}+\frac{m_{v_{1}}^{2}}{M_{Y}},\
{\rm{and}}$ (5.5) $\displaystyle\max
M_{T2}(\chi=0,\alpha_{1},\beta_{1},\not{P}_{T}=(p+q+\alpha_{2}+\beta_{2})_{T})$
$\displaystyle=$
$\displaystyle\sqrt{m_{v_{1}}^{2}+2(k_{*}^{2}+k_{*}\sqrt{k_{*}^{2}+m^{2}_{v_{1}}})}$
(5.6)
where $k_{*}$ is given by Eq(5.2). Unfortunately, there is no new information
about the masses in these two endpoints. If we solve Eq(5.5) for $M_{X}$ and
substitute this into Eq(5.6) and (5.2), all dependence on $M_{Y}$ is
eliminated.
### 5.2 Application to symmetric decay chains
Tovey’s idea of analyzing the different steps in a symmetric decay chain to
extract the masses is powerful. Up to now, we have been analyzing both
variables in terms of the first decay products of $Y$. This restriction is
because $M_{CT}$ requires no transverse ISR to give a meaningful endpoint. If
we were to try and use $\alpha_{2}$ and $\beta_{2}$ to find a relationship
between $M_{X}$ and $M_{N}$, then we would need to consider the transverse UTM
to be $(k+\alpha_{1}+\beta_{1})_{T}$, which is unlikely to be zero.
Figure 5.1: Constraints from $\max\,M_{T2}$ used with the different
combinations described in Eqs(5.7, 5.8, 5.9), together with the $\max m_{12}$
constraint described in Eq(5.11). The intersection is at the true mass
$(97\,\,{\rm{GeV}},144\,\,{\rm{GeV}},181\,\,{\rm{GeV}})$, shown by the sphere.
Events include ISR but otherwise ideal conditions: no background, resolution,
or combinatoric error.
We suggest $M_{T2}$ is a better variable with which to implement Tovey’s idea
of analyzing the different steps in a symmetric decay chain because of its ISR
properties. With and without ISR, $M_{T2}$’s endpoint gives the correct mass
of the parent particle when we assume the correct value of the missing-energy
particle’s mass. (In principle we could plot $\max
M_{T2}(\chi_{X},\alpha_{1},\beta_{1},\not{P}_{T}=(\alpha_{2}+\beta_{2}+p+q)_{T})$
vs $\chi_{X}$ as a function of transverse ISR; the value of $\chi_{X}$ at
which the end point is independent of the ISR would give the correct value of
$M_{X}$, at which point the distribution’s end point would give the correct
$M_{Y}$. In practice we probably will not have enough statistics of ISR
events.) For this reason, $\max M_{T2}$ gives a meaningful relationship
between the masses
$(M_{Y},M_{X},M_{N})$ for all three symmetric pairings of the visible
particles across the two branches. A relationship between $M_{Y}$ and $M_{X}$
is given by
$\chi_{Y}(\chi_{X})=\max
M_{T2}(\chi_{X},\alpha_{1},\beta_{1},\not{P}_{T}=(p+q+\alpha_{2}+\beta_{2})_{T}).$
(5.7)
A relationship between $M_{X}$ and $M_{N}$ can be found by computing
$\chi_{X}(\chi_{N})=\max
M_{T2}(\chi_{N},\alpha_{2},\beta_{2},\not{P}_{T}=(p+q)_{T})$ (5.8)
where we have grouped $\alpha_{1}+\beta_{1}$ with the $k$ as a part of the
ISR. A relationship between $M_{Y}$ and $M_{N}$ can be found by using $M_{T2}$
in the traditional manner giving
$\chi_{Y}(\chi_{N})=\max
M_{T2}(\chi_{N},\alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2},\not{P}_{T}=(p+q)_{T}).$
(5.9)
Lastly, we can form a distribution from the invariant mass of the visible
particles on each branch $m^{2}_{12}=(\alpha_{1}+\alpha_{2})^{2}$ or
$m^{2}_{12}=(\beta_{1}+\beta_{2})^{2}$. The endpoint of this distribution
gives a relationship between $M_{Y}$, $M_{X}$, and $M_{N}$ given by
$\max
m^{2}_{12}=\frac{(M_{Y}^{2}-M_{X}^{2})(M_{X}^{2}-M_{N}^{2})}{M_{X}^{2}}.$
(5.10)
Solving this expression for $M_{Y}$ gives the relationship
$\chi^{2}_{Y}(\chi_{N},\chi_{X})=\frac{\chi_{X}^{2}((\max
m^{2}_{12})+\chi_{X}^{2}-\chi_{N}^{2})}{\chi_{X}^{2}-\chi_{N}^{2}}.$ (5.11)
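As a small illustration of how Eq. (5.11) is used, the Python sketch below evaluates $\chi_{Y}$ on a grid of $(\chi_{N},\chi_{X})$ values for a hypothetical measured $\max m^{2}_{12}$; here the edge value is set, for illustration only, to the one implied by the masses used in Fig 5.1.

```python
import numpy as np

def chi_y_surface(max_m12_sq, chi_x, chi_n):
    """Eq. (5.11): chi_Y implied by the m_12 edge for assumed chi_X, chi_N."""
    chi_y_sq = chi_x**2 * (max_m12_sq + chi_x**2 - chi_n**2) / (chi_x**2 - chi_n**2)
    return np.sqrt(chi_y_sq)

# Hypothetical measured edge, set to the value implied by
# (M_Y, M_X, M_N) = (181, 144, 97) GeV for illustration.
max_m12_sq = (181.0**2 - 144.0**2) * (144.0**2 - 97.0**2) / 144.0**2

for chi_n in (80.0, 97.0, 120.0):
    for chi_x in (130.0, 144.0, 160.0):
        print(chi_n, chi_x, chi_y_surface(max_m12_sq, chi_x, chi_n))
# The surface passes through the true point: chi_n = 97, chi_x = 144 gives chi_y = 181.
```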
Fig 5.1 shows the constraints from Eqs(5.7, 5.8, 5.9, 5.11) in an ideal
simulation using $(M_{Y}=181\,{\rm{GeV}}$, $M_{X}=144\,{\rm{GeV}}$,
$M_{N}=97\,{\rm{GeV}})$, 1000 events, massless visible particles, and ISR
added with an exponential distribution with a mean of $50$ GeV. These four
surfaces in principle intersect at a single point $(M_{Y},M_{X},M_{N})$, given
by the sphere in Fig 5.1. Unfortunately, all these surfaces intersect the
correct masses at shallow angles, so we have a sizable uncertainty along the
direction of the sum of the masses and tight constraints in the perpendicular
directions. In other words, the mass differences are well-determined but not
the mass scale. From here one could use a shape-fitting technique like that
described in Chapters 6 to 8 to find a constraint on the sum of the masses.
Tovey’s suggestion for extracting information from these intermediate stages
of a symmetric cascade chain clearly provides more constraints to isolate the
true mass than one would find from only using the one constraint of Eq(5.9) as
described in [174]. However, Tovey’s suggestion is more feasible using
$M_{T2}$ rather than $M_{CT}$ because the constraint surfaces derived from
$M_{T2}$ intersect the true masses even with UTM.
### Chapter summary
In summary, we have compared and contrasted $M_{CT}$ with $M_{T2}$. The
variable $M_{CT}$ is a special case of $M_{T2}$ given by Eq(5.3) when ISR can
be neglected and when the visible particles are massless. In this case, the
end-point of this distribution gives $2k_{*}$, twice the two-body excess
momentum parameter (2BEMP). If $m_{v_{1}}\neq 0$, the two distributions have
different endpoints but no new information about the masses. In the presence
of ISR the two functions are not equal; both have endpoints that exceed
$2k_{*}$. Because of its better properties in the presence of UTM or ISR,
$M_{T2}$ is a better variable for the task of extracting information for each
step in the decay chain. Extracting this information requires solving
combinatoric problems which are beyond the scope of this chapter.
## Chapter 6 The Variable $M_{2C}$: Direct Pair Production
### Chapter Overview
In this chapter, we propose an improved method for hadron-collider mass
determination of new states that decay to a massive, long-lived state like the
LSP in the MSSM. We focus on pair-produced new states which undergo three-body
decay to a pair of visible particles and the new invisible long-lived state.
Our approach is to construct a kinematic quantity which enforces all known
physical constraints on the system. The distribution of this quantity
calculated for the observed events has an endpoint that determines the mass of
the new states. However we find it much more efficient to determine the masses
by fitting to the entire distribution and not just the end point. We consider
the application of the method at the LHC for various models under the ideal
assumption of effectively direct production with minimal ISR and demonstrate
that the method can determine the masses within about $6$ GeV using only $250$
events. This implies the method is viable even for relatively rare processes
at the LHC such as neutralino pair production.
This chapter, which is based on work first published by the author and his
supervisor in Ref. [15], concentrates on mass determination involving the
production of only two new states. Our particular concern is to use the
available information as effectively as possible to reduce the number of
events needed to make an accurate determination of $M_{Y}$ and $M_{N}$. The
main new ingredient of the method proposed is that it does not rely solely on
the events close to the kinematic boundary but makes use of all the events.
Our method constrains the unobserved energy and momentum such that all the
kinematical constraints of the process are satisfied including the mass
difference, Eq(4.4), which can be accurately measured from the $ll$ spectrum
discussed in Sec 4.2.1. This increases the information that events far from
the kinematic boundary can provide about $M_{Y}$ and significantly reduces the
number of events needed to obtain a good measurement of the overall mass
scale. We develop the method for the case where $Y$ is directly pair produced
in the parton collision with minimal ISR and where each $Y$ decays via a
three-body decay to the on-shell final state $N+l^{+}+l^{-}$. Its
generalization to other processes is straightforward and is considered later
in this thesis. (We note that the on-shell intermediate case studied by CHGMM
is also improved by including the relationship measured by the edge in the
$ll$ distribution in each event’s analysis. The $Y$ decay channel with an
on-shell intermediate state $X$ has an edge in the $ll$ invariant mass
distribution which provides a good determination of the relationship $\max
m_{ll}^{2}=(M_{Y}^{2}-M_{X}^{2})(M_{X}^{2}-M_{N}^{2})/M_{X}^{2}$. This
relationship forms a surface in $M_{N}$, $M_{X}$, $M_{Y}$ space that only
intersects the allowed points of CHGMM’s Fig 3 near the actual masses. We will
investigate this case in Chapter 8.)
The chapter is structured as follows. In Section 6.1, we introduce the
$M_{2C}$ variable, whose distribution has an endpoint that gives $M_{Y}$ and
can be fitted away from the endpoint to determine $M_{Y}$ and $M_{N}$ before
we have enough events to saturate the endpoint. Section 6.2 discusses the
relationship between our distribution and the kink in $M_{T2}(\chi)$ of CCKP,
and how this relationship can be used to calculate $M_{2C}$ in a
computationally efficient manner. In Section 6.3 we then discuss symmetries
and dependencies of the $M_{2C}$ distribution. Section 6.4 estimates the
performance for a few SUSY models where we include approximate detector
resolution effects, where we expect backgrounds to be minimal, and where we
assume $k_{T}\approx 0$. Finally, we summarize the chapter’s findings.
### 6.1 An improved distribution from which to determine $M_{Y}$
We consider the event topology shown in Fig 4.4. The new state $Y$ is pair
produced. Each branch undergoes a three-body decay to the state $N$ with
4-momentum $p$ ($q$) and two visible particles $1+2$ ($3+4$) with 4-momentum
$\alpha$ ($\beta$). The invariant mass $m_{12}$ ($m_{34}$) of the particles
$1+2$ ($3+4$) will have an upper edge from which we can well-determine
$M_{-}$. Other visible particles not involved can be grouped into $V$ with
4-momentum $k$. In general we refer to any process that creates non-zero
$k_{T}$ as upstream transverse momentum (UTM). One type of UTM is initial
state radiation (ISR), which tends to be small compared to the mass scales
involved in SUSY processes. Another type of UTM would be decays of heavier
particles earlier in the decay chain. In the analysis presented in this
chapter, we tested the concepts against both $k=0$ and $k\lesssim 20$ GeV,
commensurate with what we might expect for ISR.
We adapt the concept from $M_{T2}$ of minimizing the transverse mass over the
unknown momenta to allow for the incorporation of all the available
information about the masses. To do this we form a new variable, $M_{2C}$,
which we define as the minimum mass of the second-lightest new state in the
event, $M_{Y}$, constrained to be compatible with the observed 4-momenta of
$Y$’s visible decay products, with the observed missing transverse momentum,
with the four-momenta of $Y$ and $N$ being on shell, and with the constraint
that $M_{-}=M_{Y}-M_{N}$ is given by the value determined by the end point of
the $m_{12}$ distribution. The minimization is performed over the eight relevant
unknown parameters which may be taken as the 4-momenta $p$ and $q$ of the
particle $N$. We neglect any contributions from unobserved initial state
radiation (ISR). Thus we have
$\displaystyle M_{2C}^{2}$ $\displaystyle=$
$\displaystyle\min_{p,q}(p+\alpha)^{2}$ (6.5) $\displaystyle\mathrm{subject}\
\mathrm{to}\ \mathrm{the}\ 5\ \mathrm{constraints}$
$\displaystyle(p+\alpha)^{2}=(q+\beta)^{2},$ $\displaystyle p^{2}=q^{2},$
$\displaystyle\not{P}_{T}=(p+q)_{T}$
$\displaystyle\sqrt{(p+\alpha)^{2}}-\sqrt{(p)^{2}}=M_{-}$
where $\not{P}_{T}$ is the missing transverse momentum and $(p+q)_{T}$ are the
transverse components of $p+q$. Although we can implement the minimization
numerically or by using Lagrange multipliers, we find the most computationally
efficient approach is to modify the $M_{T2}$ analytic solution from Lester and
Barr [170]. Details regarding implementing $M_{2C}$ and the relation of
$M_{2C}$ to $M_{T2}$ and the approach of CCKP are in Sec. 6.2.
Errors in the determined masses propagated from the error in the mass
difference in the limit of $k=0$ are given by
$\delta M_{Y}=\frac{\delta
M_{-}}{2}\left(1-\frac{M_{+}^{2}}{M_{-}^{2}}\right)\ \ \ \delta
M_{N}=-\frac{\delta M_{-}}{2}\left(1+\frac{M_{+}^{2}}{M_{-}^{2}}\right)$ (6.6)
where $\delta M_{-}$ is the error in the determination of the mass difference
$M_{-}$. To isolate this source of error from those introduced by low
statistics, we assume we know the correct $M_{-}$, and we should consider the
error described in Eq(6.6) as a separate uncertainty from that reported in our
initial performance estimates in Section 6.4.
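The propagation in Eq. (6.6) is easy to evaluate; a small Python sketch (the numerical inputs are placeholders, here chosen near the model-P1 masses quoted later in this chapter):

```python
def mass_errors_from_delta_mminus(delta_mminus, m_plus, m_minus):
    """Eq. (6.6): errors on M_Y and M_N induced by an error on M_- (k = 0 limit)."""
    ratio_sq = (m_plus / m_minus)**2
    delta_m_y = 0.5 * delta_mminus * (1.0 - ratio_sq)
    delta_m_n = -0.5 * delta_mminus * (1.0 + ratio_sq)
    return delta_m_y, delta_m_n

# Illustration with M_Y = 133 GeV, M_N = 69 GeV and an assumed 0.5 GeV edge error.
m_y, m_n = 133.0, 69.0
print(mass_errors_from_delta_mminus(delta_mminus=0.5,
                                    m_plus=m_y + m_n, m_minus=m_y - m_n))
```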
Because the true $p$, $q$ are in the domain over which we are minimizing,
$M_{2C}$ will always satisfy $M_{2C}\leq M_{Y}$. The equality is reached for
events with either $m_{12}$ or $m_{34}$ smaller than $M_{-},$ with
$p_{z}/p_{o}=\alpha_{z}/\alpha_{o}$, and $q_{z}/q_{o}=\beta_{z}/\beta_{o}$,
and with the transverse components of $\alpha$ parallel to the transverse
components of $\beta$. When $m_{12}=m_{34}=M_{-}$, the event only gives
information about the mass difference.
The events that approximately saturate the bound have the added benefit that
they are approximately reconstructed ($p$ and $q$ are known). We present a
proof of this in Appendix E. If $Y$ is produced near the end of a longer
cascade decay, then this reconstruction would allow us to determine the masses
of all the parent states in the event. The reconstruction of several such
events may also aid in spin-correlation studies [141].
In order to determine the distribution of $M_{2C}$ for the process shown in
Fig 4.4, we computed it for a set of events generated using the theoretical
cross section and assuming perfect detector resolution and no background.
Figure 6.1 shows the resulting distribution for three cases: $M_{Y}=200$ GeV,
$M_{Y}=150$ GeV and $M_{Y}=100$ GeV each with $M_{-}=50$ GeV. Each
distribution was built from 30000 events. Note that the minimum value of
$M_{2C}$ for an event is $M_{-}$. The three examples each have endpoints that
give the mass scale, and we are able to distinguish between different $M_{Y}$
for a given $M_{-}$. The end-point for $M_{Y}=100$ GeV is clear, while the
endpoints for $M_{Y}=150$ GeV and $M_{Y}=200$ GeV become more difficult to
observe. The shape of the distribution exhibits a surprising symmetry
discussed in Sec 6.3. We can also see that as $M_{+}/M_{-}$ becomes large, the
$M_{Y}$ determination will be hindered by the limited statistics available
near the endpoint and by backgrounds. To alleviate this, we should instead fit
to the entire
distribution. It is clear that events away from the endpoint also contain
information about the masses. For this reason we propose to fit the entire
distribution of $M_{2C}$ and compare it to the ‘ideal’ distribution that
corresponds to a given value of the masses. As we shall discuss, this allows
the determination of $M_{Y}$ with a significant reduction in the number of
events needed. This is the most important new aspect of the method proposed
here.
Figure 6.1: The distribution of 30000 events in 5 GeV bins with perfect
resolution and no background. The three curves represent $M_{Y}=200$ GeV (dot-
dashed), $M_{Y}=150$ GeV (dotted) and $M_{Y}=100$ GeV (solid) each with
$M_{-}=50$ GeV. Each distribution cuts off at the correct $M_{Y}$.
### 6.2 Using $M_{T2}$ to Find $M_{2C}$ and the $\max M_{T2}$ Kink
The calculation of $M_{2C}$ is greatly facilitated by understanding its
relation to $M_{T2}$. The variable $M_{T2}$, which was introduced by Lester
and Summers [12], is equivalent to
$\displaystyle M_{T2}^{2}(\chi)$ $\displaystyle=\min_{p,q}(p+\alpha)^{2}$
(6.7) $\displaystyle\mathrm{subject}\ \mathrm{to}\ \mathrm{the}\ 5\
\mathrm{constraints}$ $\displaystyle(p+\alpha)^{2}=(q+\beta)^{2},$ (6.8)
$\displaystyle p^{2}=q^{2}$ (6.9) $\displaystyle\not{P}_{T}=(p+q)_{T}$ (6.10)
$\displaystyle p^{2}=\chi^{2}.$ (6.11)
As is suggested in the simplified example of [175], the minimization over the
longitudinal frame of reference and center-of-mass energy is equivalent to
assuming that $p$ and $\alpha$ have equal rapidity and that $q$ and $\beta$
have equal rapidity. Implementing this, Eq(6.7) reduces to the traditional
definition of the Cambridge transverse mass. (We have only tested and verified
this equivalence numerically for events satisfying $M_{T2}(\chi=0)>M_{-}$.)
Figure 6.2: The $M_{T2}(\chi)$ curves for four events with $M_{N}=50$ GeV and
$M_{Y}=100$ GeV. Only the events whose curves start off at $M_{T2}(0)>M_{-}$
intersect the straight line given by $M_{T2}(\chi)-\chi=M_{-}$. The $M_{T2}$
at the intersection is $M_{2C}$ for that event.
By comparing $M_{T2}(\chi)$ as defined above to $M_{2C}$ defined in Eq(6.5),
we can see that they are very similar, with the exception that the
mass-difference constraint in Eq(6.5) is replaced by the constraint Eq(6.11).
$M_{2C}$ can therefore be found by scanning $M_{T2}(\chi)$ for the $\chi$
value such that the mass-difference constraint in Eq(6.5) is also satisfied.
At the value of $\chi$ that satisfies $M_{T2}(\chi)-\chi=M_{-}$, both Eq(6.5)
and Eq(6.11) are satisfied.
We can see the $M_{2C}$ and $M_{T2}$ relationship visually. Each event
provides a curve $M_{T2}(\chi)$; Fig 6.2 shows curves for four events with
$M_{N}=50$ GeV and $M_{Y}=100$ GeV. For all events $M_{T2}(\chi)$ is a
continuous function of $\chi$. CCKP point out that at $\chi>M_{N}$ and at
$k=0$ the maximum $M_{T2}(\chi)$ approaches $\chi+M_{-}$. At $\chi<M_{N}$ and
at $k=0$ the maximum $M_{T2}(\chi)$ approaches
$\max M_{T2}(\chi)=k^{*}+\sqrt{(k^{*})^{2}+\chi^{2}}$ (6.12)
where $k^{*}=(M_{Y}^{2}-M_{N}^{2})/2M_{Y}$ is the 2BEMP. This maximum occurs
for events with $\alpha^{2}=\beta^{2}=0$. Putting this together, if
$M_{T2}(\chi=0)>M_{-}$, as is true for two of the four events depicted in Fig.
6.2, then because the event’s curve is bounded above by Eq(6.12) and is
continuous, it must cross $\chi+M_{-}$ at some $\chi\leq M_{N}$. At this intersection
there is a solution to $M_{T2}(\chi)=\chi+M_{-}$ where $M_{T2}(\chi)=\min
M_{Y}|_{\mathrm{Constraints}}\equiv M_{2C}$. Equivalently
$\displaystyle M_{2C}$ $\displaystyle=M_{T2}\ \ \mathrm{at}\ \chi\
\mathrm{where}\ \ M_{T2}(\chi)=\chi+M_{-}\ \ \ \mathrm{if}\ \
M_{T2}(\chi=0)>M_{-}$ (6.13) $\displaystyle=M_{-}\ \ \ \ \mathrm{otherwise}.$
(6.14)
Assuming $k=0$, the maximum $\chi$ of such an intersection occurs for
$\chi=M_{N}$ which is why the endpoint of $M_{2C}$ occurs at the correct
$M_{Y}$ and why this corresponds to the kink of CCKP. Because Barr and Lester
have an analytic solution to $M_{T2}$ in Ref. [170] for the case $k=0$, this
definition is computationally very efficient. We will study the intersection
of these two curves when $k_{T}\neq 0$ in Chapter 7.
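A numerical sketch of Eqs. (6.13)-(6.14) in Python follows (illustrative only; it re-uses the brute-force numerical $m_{T2}$ from the sketch in Sec. 4.4.2 rather than the analytic solution of Ref. [170], and the event and $M_{-}$ below are placeholders). The root finder assumes the crossing of $M_{T2}(\chi)-\chi=M_{-}$ exists below `chi_max`, which is the situation described above for $k=0$ events with $M_{T2}(0)>M_{-}$.

```python
import numpy as np
from scipy.optimize import minimize, brentq

def mt_sq(m_vis_sq, vis_pt, chi, inv_pt):
    et_vis = np.sqrt(m_vis_sq + vis_pt @ vis_pt)
    et_inv = np.sqrt(chi**2 + inv_pt @ inv_pt)
    return m_vis_sq + chi**2 + 2.0 * (et_vis * et_inv - vis_pt @ inv_pt)

def mt2(chi, alpha, beta, ptmiss):
    """Brute-force numerical M_T2, as in the Sec. 4.4.2 sketch."""
    m_a_sq = max(alpha[0]**2 - alpha[1:] @ alpha[1:], 0.0)
    m_b_sq = max(beta[0]**2 - beta[1:] @ beta[1:], 0.0)
    def worst(p_t):
        return max(mt_sq(m_a_sq, alpha[1:3], chi, p_t),
                   mt_sq(m_b_sq, beta[1:3], chi, ptmiss - p_t))
    starts = [0.5 * ptmiss, np.zeros(2), ptmiss]
    best = min(minimize(worst, x0, method="Nelder-Mead").fun for x0 in starts)
    return np.sqrt(max(best, 0.0))

def m2c(alpha, beta, ptmiss, m_minus, chi_max=2000.0):
    """Eqs. (6.13)-(6.14): M_2C is M_T2 at the chi where M_T2(chi) - chi = M_-."""
    if mt2(0.0, alpha, beta, ptmiss) <= m_minus:
        return m_minus                                     # Eq. (6.14)
    chi_sol = brentq(lambda c: mt2(c, alpha, beta, ptmiss) - c - m_minus,
                     0.0, chi_max)                         # Eq. (6.13)
    return chi_sol + m_minus

# Placeholder event and mass difference (GeV), for illustration only.
alpha = np.array([37.4, 30.0, 0.0, 10.0])
beta = np.array([41.8, 0.0, 25.0, -15.0])
ptmiss = np.array([-60.0, -60.0])
print(m2c(alpha, beta, ptmiss, m_minus=50.0))
```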
### 6.3 Symmetries and Dependencies of the $M_{2C}$ Distribution
The transverse mass variables $M_{T}$ and $M_{T2}$ are both invariant under
longitudinal boosts. $M_{T2}$ has an additional invariance: CCKP [171] prove
that if $k_{T}=0$ then $M_{T2}$ is invariant under back-to-back boosts of the
parent particle pair in the plane perpendicular to the beam direction. This
means that for any event of the topology in Fig. 4.4 with $k_{T}=0$ and
barring spin correlation effects, the $M_{T2}$ distribution will be the same
for a fixed center-of-mass energy as it is for a mixed set of collision
energies. We now verify this argument numerically.
In order to numerically determine the distribution of $M_{2C}$ for the
processes shown in Fig 4.4, it is necessary to generate a large sample of
“ideal” events corresponding to the physical process shown in the figure. We
assume $k_{T}=0$ where we expect the back-to-back boost invariance to lead to
a symmetry of the distribution under changes in the $\sqrt{s}$ of the
collision. We assume each branch decays via an off-shell $Z^{o}$-boson as this
is what could be calculated quickly and captures the essential elements to
provide an initial estimate of our approach’s utility.
Even under these assumptions without knowing about the invariance of $M_{T2}$
under back-to-back boosts, we might expect that the shape of the distribution
depends sensitively on the parton distribution and many aspects of the
differential cross section and differential decay rates. This is not the case;
in the case of direct pair production the shape of the distribution depends
sensitively only on two properties:
(i) the shape of the $m_{12}$ (or equivalently $m_{34}$) distributions. In the
examples studied here for illustration, we calculate the $m_{12}$ distribution
assuming it is generated by a particular supersymmetric extension of the
Standard Model, but in practice we should use the measured distribution which
is accessible to accurate determination. The particular shape of $m_{12}$ does
not greatly affect the ability to determine the mass of $N$ and $Y$ so long as
we can still find the endpoint to determine $M_{Y}-M_{N}$ and use the observed
$m_{ll}$ distribution to model the shape of the $M_{2C}$ distribution.
(ii) the angular dependence of the $N$’s momenta in the rest frame of $Y$. In
the preliminary analysis presented here we assume that in the rest frame of
$\tilde{\chi}_{2}^{o}$, $\tilde{\chi}_{1}^{o}$’s momentum is distributed
uniformly over the $4\pi$ steradian directions. While this assumption is not
universally true it applies in many cases and hence is a good starting point
for analyzing the efficacy of the method.
Under what conditions is the uniform distribution true? Note that the
$\tilde{\chi}_{2}^{o}$’s spin is the only property of $\tilde{\chi}_{2}^{o}$
that can break the rotational symmetry of the decay products. For
$\tilde{\chi}_{2}^{o}$’s spin to affect the angular distribution there must be
a correlation of the spin with the momentum which requires a parity violating
coupling. Consider first the $Z^{o}$ contribution. Since we are integrating
over the lepton momentum difference ($M_{2C}$ only depends on the sum of the
two OSSF lepton momenta that follow from a decay of $Y$), the parity-violating
term in the cross section coming from the lepton-$Z^{o}$ vertex vanishes, and
a non-zero correlation requires that the parity-violating coupling be
associated with the neutralino vertex. The parity-violating effect of the
$Z^{o}$-boson neutralino vertex vanishes because the $Z^{o}$ interaction is
proportional to either
$\overline{\tilde{\chi}_{2}^{o}}\gamma^{5}\gamma^{\mu}\tilde{\chi}_{1}^{o}Z^{o}_{\mu}$
or
$\overline{\tilde{\chi}_{2}^{o}}\gamma^{\mu}\tilde{\chi}_{1}^{o}Z^{o}_{\mu}$
depending on the relative sign of the $M_{\tilde{\chi}_{2}^{o}}$ and
$M_{\tilde{\chi}_{1}^{o}}$ eigenvalues; in either case the coupling is purely
axial or purely vector. However, if the decay has a significant contribution
from an intermediate slepton, there are parity-violating couplings and there
will be spin correlations. In this case there will be angular correlations,
but it is straightforward to modify the method to take account of them.
(Studying and exploiting the neutralino spin correlations is discussed further
in Refs [180, 181, 182].)
How big an effect could spin correlations have on the shape of the $M_{2C}$
distribution? To demonstrate, we modeled a maximally spin-correlated direct
production process. Fig 6.3 shows the spin-correlated process that we consider,
and the $M_{2C}$ distributions from this process compared to the $M_{2C}$
distribution from the same topology and masses but without spin correlations.
The modeled case has perfect energy resolution, $m_{v_{1}}=0$ GeV, $M_{Y}=200$
GeV, and $M_{N}=150$ GeV. Our maximally spin-correlated process involves pair
production of $Y$ through a pseudoscalar $A$. The fermion $Y$ in both branches
decays to a complex scalar $N$ and visible fermion $v_{1}$ through a purely
chiral interaction. The production of the pseudoscalar ensures that the $Y$
and $\bar{Y}$ are in a configuration
$\sqrt{2}^{-1}(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)$. The
$Y$ then decays with $N$ preferentially aligned with the spin. The $\bar{Y}$
decays with $N^{*}$ preferentially aligned against the spin. This causes the
two sources of missing transverse momentum to be preferentially parallel and
pushes $M_{2C}$ closer to the endpoint. For this reason the spin-correlated
distribution (red dotted distribution) is above the uncorrelated distribution
(solid thick distribution) in Fig. 6.3. For the remainder of the chapter we
assume no such spin correlations are present as is true for neutralino pair
production and decay to leptons through a $Z^{o}$ boson.
Figure 6.3: Effect of this maximally spin correlated process on the $M_{2C}$
distribution. Modeled masses are $M_{Y}=200$ GeV and $M_{N}=150$ GeV. The
solid black distribution is the uncorrelated case and red dotted distribution
is maximally spin correlated.
How likely is it that there will be no spin correlations in supersymmetric LHC
events? We showed earlier that $Z^{o}$-boson dominated three-body decays lack
spin correlations. Even in the case that the slepton contribution is
significant, the correlations may still be largely absent. Because we are
concerned with a distribution, the spin correlation only threatens our
assumption if a mechanism aligns the spins of the $\tilde{\chi}_{2}^{o}$s in
the two branches. Table 6.2 shows that most of the $\tilde{\chi}_{2}^{o}$s
that we expect follow from decay chains involving a squark, which, being a
scalar, should leave the spins of the $\tilde{\chi}_{2}^{o}$s in the two
branches uncorrelated. We would then average over the spin states of
$\tilde{\chi}_{2}^{o}$ and recover the uniform angular distribution of
$\tilde{\chi}_{1}^{o}$’s momentum in $\tilde{\chi}_{2}^{o}$’s rest frame.
Once we have fixed the dependencies (i) and (ii) above, the shape of the
distribution is essentially independent of the remaining parameters. We
calculate the “ideal” distributions for $M_{2C}$ assuming that $k=0$ and that
in the rest frame of $Y$ there is an equal likelihood of $N$ going in any of
the $4\pi$ steradian directions. The observable invariant $\alpha^{2}$ is
determined according to the differential decay probability of $\chi_{2}^{o}$
to $e^{+}$ $e^{-}$ and $\chi_{1}^{o}$ through a $Z^{o}$-boson mediated three-
body decay. Analytic expressions for cross sections were obtained from the
Mathematica output options in CompHEP [145]. To illustrate the symmetry with
respect to changes in $\sqrt{s}$ and the angle of $Y$’s production, we show in
Fig 6.4 two cases:
(1) The case that the collision energy and frame of reference and angle of the
produced $Y$ with respect to the beam axis are distributed according to the
calculated cross section for the process considered in Section 6.4 in which
$\tilde{\chi}_{2}^{o}$ decays via $Z^{o}$ exchange to the three-body state
$l^{+}+l^{-}+\tilde{\chi}_{1}^{o},$ convoluted with realistic parton
distribution functions.
(2) The case that the angle of the produced $Y$ with respect to the beam axis
is arbitrarily fixed at $\theta=0.2$ radians, the azimuthal angle $\phi$ fixed
at $0$ radians, and the center-of-mass energy set to $\sqrt{s}=500$ GeV.
The left plot of Fig 6.4 shows the two distributions intentionally shifted by
0.001 to allow us to barely distinguish the two curves. On the right side of
Fig 6.4 we show the difference of the two distributions with the 2 $\sigma$
error bars within which we expect 95% of the bins to overlap $0$ if the
distributions are identical.
Analytically, this symmetry arises because both $M_{T2}$ and $M_{2C}$ are
invariant under back-to-back boosts when $k_{T}=0$. This implies that
distributions (1) and (2) above should be the same, which has been verified
numerically in Fig 6.4. In this way we have identified the mathematical origin
of the symmetry and verified it numerically. Thus the $M_{2C}$ distribution is
symmetric with respect to changes in $\sqrt{s}$ when $k_{T}=0$.
In addition to tests with $k_{T}=0$, we also tested that $k\lesssim 20$ GeV
does not change the shape of the distribution to within our numerical
uncertainties. We constructed events with $M_{Y}=150$ GeV, $M_{N}=100$ GeV,
$\sqrt{k^{2}}$ uniformly distributed between $2$ and $20$ GeV,
$|\vec{k}|/k_{o}=0.98$, and with a uniform angular distribution. We found the
$M_{2C}$ distribution agreed with the distribution shown in Fig. 6.4 within
the expected error bars after 10000 events. Scaling this down to the masses
studied in model $P1$, we trust these results remain unaffected for
$k_{T}\lesssim 20$ GeV. Introduction of cuts on missing transverse energy and
of very large UTM ($k_{T}\gtrsim M_{-}$) changes the shape of the
distribution. These effects will be studied in Chapter 7.
Figure 6.4: Demonstration that the distribution is independent of the COM
energy, the angle at which the pair is produced with respect to the beam axis,
and the frame of reference.
Inclusion of backgrounds also changes the shape. Backgrounds that we can
anticipate or measure, like di-$\tau$s or leptons from other neutralino decays
observed with different edges, can be modeled and included in the ideal shapes
used to perform the mass parameter estimation. A more complete study of the
background effects also follows in Chapter 7.
### 6.4 Application of the method: SUSY model examples
To illustrate the power of fitting the full $M_{2C}$ distribution, we now turn
to an initial estimate of our ability to measure $M_{Y}$ in a few specific
supersymmetry scenarios. Our purpose here is to show that fitting the $M_{2C}$
distribution can determine $M_{Y}$ and $M_{N}$ with very few events. We
include detector resolution effects and assume $k=0$ (equivalent to assuming
direct production), but neglect backgrounds until Chapter 7. We calculate
$M_{2C}$ for the case where the analytic $M_{T2}$ solution of Barr and Lester
can be used to speed up the calculations as described in Sec 6.2. We make the
same modeling assumptions described in Sec. 6.3.
Although fitting the $M_{2C}$ distribution could equally well be applied to
the gluino mass studied in CCKP, we explore its applications to pair-produced
$\tilde{\chi}^{o}_{2}$. We select SUSY models where $\tilde{\chi}^{o}_{2}$
decays via a three-body decay to $l^{+}+l^{-}+\tilde{\chi}^{o}_{1}$. The
four-momentum $\alpha=p_{l^{+}}+p_{l^{-}}$ is formed from the leptons in the
top branch, and the four-momentum $\beta=p_{l^{+}}+p_{l^{-}}$ from the leptons
in the bottom branch.
The production and decay cross section estimates in this section are
calculated using MadGraph/MadEvent [144] and using SUSY mass spectra inputs
from SuSpect [183]. We simulate the typical LHC detector lepton energy
resolution [9, 184] by scaling the $\alpha$ and $\beta$ four vectors by a
scalar normally distributed about $1$ with the width of
$\frac{\delta\alpha_{0}}{\alpha_{0}}=\frac{0.1}{\sqrt{\alpha_{o}(\mathrm{GeV})}}+\frac{0.003}{\alpha_{o}(\mathrm{GeV})}+0.007.$
(6.15)
The missing transverse momentum is assumed to be whatever is missing to
conserve the transverse momentum after the smearing of the leptons’ momenta. We
do not account for the greater uncertainty in missing momentum from hadrons or
from muons which do not deposit all their energy in the calorimeter and whose
energy resolution is therefore correlated to the missing momentum. Including
such effects is considered in Chapter 7. These finite resolution effects are
simulated in both the determination of the ideal distribution and in the small
sample of events that is fit to the ideal distribution to determine $M_{Y}$
and $M_{N}$. We do not expect larger energy-resolution widths to greatly affect
the results because the resolution effects are included in both the simulated
events and in the creation of the ideal curves which are then fit to the low
statistics events to estimate the mass.
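A sketch of the smearing described by Eq. (6.15) (Python/NumPy, illustrative only; the lepton-pair four-vectors below are placeholders, and the recomputed missing transverse momentum assumes only the two lepton pairs are visible, as in the $k=0$ setup of this chapter):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def smear_lepton_pair(four_vec):
    """Scale a visible 4-vector by a factor drawn from a Gaussian of mean 1 and
    width given by Eq. (6.15), with the energy in GeV."""
    e = four_vec[0]
    width = 0.1 / np.sqrt(e) + 0.003 / e + 0.007
    return four_vec * rng.normal(loc=1.0, scale=width)

# Placeholder lepton-pair 4-vectors (GeV), for illustration only.
alpha = np.array([70.0, 25.0, 35.0, 30.0])
beta = np.array([60.0, -30.0, -20.0, 25.0])
alpha_s, beta_s = smear_lepton_pair(alpha), smear_lepton_pair(beta)
ptmiss_s = -(alpha_s[1:3] + beta_s[1:3])  # recomputed from the smeared visibles
print(alpha_s, beta_s, ptmiss_s)
```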
We consider models where the three-body decay channel for
$\tilde{\chi}_{2}^{o}$ will dominate. These models must satisfy
$M_{\tilde{\chi}_{2}^{o}}-M_{\tilde{\chi}_{1}^{o}}<M_{Z}$ and must have all
slepton masses greater than $M_{\tilde{\chi}_{2}^{o}}$. The models
considered are shown in Table 6.1. The Min-Content model assumes that there
are no other SUSY particles accessible at the LHC other than
$\tilde{\chi}_{2}^{o}$ and $\tilde{\chi}_{1}^{o}$ and we place
$M_{\tilde{\chi}_{1}^{o}}$ and $M_{\tilde{\chi}_{2}^{o}}$ at the boundary of
the PDG Live exclusion limit [41]. SPS 6, P1, and $\gamma$ are models taken
from references [3], [2], and [185] respectively. Each has the
$\tilde{\chi}^{o}_{2}$ decay channel to leptons via a three-body decay
kinematically accessible. We will only show simulation results for the masses
in model P1 and SPS 6 because they have the extreme values of $M_{+}/M_{-}$
with which the performance scales. The Min-Content model and the $\gamma$
model are included to demonstrate the range of the masses and production cross
sections that one might expect.
Bisset, Kersting, Li, Moortgat, Moretti, and Xie (BKLMMX) [186] have studied
the four-lepton plus missing transverse momentum Standard-Model background for
the LHC. They included contributions from jets misidentified as leptons and
estimated about $190$ background events at ${\mathcal{L}}=300\
\mathrm{fb}^{-1}$, which is equivalent to $0.6$ fb. Their background study
made no reference to the invariant mass squared of the four leptons, so we
expect only a fraction of these to have both lepton pairs with invariant
masses less than $M_{-}$. Their analysis shows the largest source of backgrounds will
most likely be other supersymmetric states decaying to four leptons. Again, we
expect only a fraction of these to have both lepton pairs with invariant
masses within the range of interest. The background study of BKLMMX is
consistent with a study geared towards a $500$ GeV $e^{+}$ $e^{-}$ linear
collider in Ref. [187] which predicts $0.4$ fb for the standard model
contribution to 4 leptons and missing transverse momentum. Neutralino decays
to $\tau$ leptons also provide a background because the $\tau$ decays to light
leptons $l=e,\mu$ ($\Gamma_{\tau\rightarrow l\bar{\nu}_{l}}/\Gamma\approx
0.34$) cannot be distinguished from prompt leptons. The neutrinos associated
with these light leptons will be new sources of missing transverse momentum
and will therefore be a background to our analysis. The di-$\tau$ events will
only form a background when both opposite-sign same-flavor $\tau$s decay to
the same flavor of light lepton, which one expects about 6% of the time.
Model | Min Content (Ref. [41]) | SPS 6 (Ref. [3]) | P1 (Ref. [2]) | $\gamma$ (Ref. [185])
---|---|---|---|---
Definition | $\tilde{\chi}^{o}_{1}$ and $\tilde{\chi}^{o}_{2}$ are the only LHC-accessible SUSY states, with the smallest allowed masses. | Non-universal gaugino masses: $m_{o}=150$ GeV, $m_{1/2}=300$ GeV, $\tan\beta=10$, $\mathrm{sign}(\mu)=+$, $A_{o}=0$, $M_{1}=480$ GeV, $M_{2}=M_{3}=300$ GeV | mSUGRA: $m_{o}=350$ GeV, $m_{1/2}=180$ GeV, $\tan\beta=20$, $\mathrm{sign}(\mu)=+$, $A_{o}=0$ | Non-universal Higgs model: $m_{o}=330$ GeV, $m_{1/2}=240$ GeV, $\tan\beta=20$, $\mathrm{sign}(\mu)=+$, $A_{o}=0$, $H_{u}^{2}=-(242\,\mathrm{GeV})^{2}$, $H_{d}^{2}=+(373\,\mathrm{GeV})^{2}$
$M_{\tilde{\chi}^{o}_{1}}$ | $46$ GeV | $189$ GeV | $69$ GeV | $95$ GeV
$M_{\tilde{\chi}^{o}_{2}}$ | $62.4$ GeV | $219$ GeV | $133$ GeV | $178$ GeV
$M_{+}/M_{-}$ | $6.6$ | $13.6$ | $3.2$ | $3.3$

Table 6.1: Models with $\tilde{\chi}^{o}_{2}$ decaying via a three-body decay to leptons. We only show simulation results for the masses in model P1 and SPS 6 because they have the extreme values of $M_{+}/M_{-}$ with which the performance scales.

Model | $\sigma_{\tilde{\chi}^{o}_{2}\,\tilde{\chi}^{o}_{2}}$ direct | $\sigma_{\tilde{\chi}^{o}_{2}\,\tilde{\chi}^{o}_{2}}$ via $\tilde{g}$ or $\tilde{q}$ | $\mathrm{BR}_{\tilde{\chi}^{o}_{2}\rightarrow l+\bar{l}+\tilde{\chi}^{o}_{1}}$ | $\mathrm{BR}_{\tilde{\chi}^{o}_{2}\rightarrow q+\bar{q}+\tilde{\chi}^{o}_{1}}$ | Events with $4$ leptons $+E_{T}$ missing + possible extra jets, ${\mathcal{L}}=300\ \mathrm{fb}^{-1}$
---|---|---|---|---|---
Min Content | $2130$ fb | N/A | 0.067 | 0.69 | 2893
SPS 6 | $9.3$ fb | $626$ fb | 0.18 | 0.05 | 6366
P1 | $35$ fb | $12343$ fb | 0.025 | 0.66 | 2310
$\gamma$ | $17$ fb | $4141$ fb | 0.043 | 0.64 | 2347

Table 6.2: The approximate breakdown of signal events.
Figure 6.5: $\chi^{2}$ fit of 250 events from model P1 of Ref [2] to the
theoretical distributions calculated for different $M_{\chi_{2}^{o}}$ values
but fixed $M_{\chi_{2}^{o}}-M_{\chi_{1}^{o}}$. The fit gives
$M_{\chi_{2}^{o}}=133\pm 6$ GeV.
Figure 6.6: $\chi^{2}$ fit of 3000 events from model SPS 6 of Ref [3] to the
theoretical distributions calculated for different $M_{\chi_{2}^{o}}$ values
but fixed $M_{\chi_{2}^{o}}-M_{\chi_{1}^{o}}$. The fit gives
$M_{\chi_{2}^{o}}=221\pm 20$ GeV.
Table 6.2 breaks down the LHC production cross section for pair producing two
$\tilde{\chi}^{o}_{2}$ in each of these models. In the branching ratio to
leptons, we only consider $e$ and $\mu$ states as the $\tau$ will decay into a
jet and a neutrino introducing more missing transverse momentum. Direct pair
production of $\tilde{\chi}^{o}_{2}$ has a rather modest cross section;
production via a gluino or squark has a considerably larger cross section, but
will be accompanied by additional QCD jets. We do expect to be able to
distinguish QCD jets from $\tau$ jets [188]. The events with gluinos and jets
will lead to considerable $k_{T}$. In this chapter we assume $k_{T}\lesssim
20$ GeV, but we take up the case of large $k_{T}$ in the following chapter.
We now estimate how well we may be able to measure $M_{\tilde{\chi}_{1}^{o}}$
and $M_{\tilde{\chi}_{2}^{o}}$ in these models under these simplifying ideal
assumptions. Figures 6.5 and 6.6 show a $\chi^{2}$ fit (see Appendix D for
details of how $\chi^{2}$ is calculated) of the $M_{2C}$ distribution from the
observed small set of events to ‘ideal’ theoretical $M_{2C}$ distributions
parameterized by $M_{\tilde{\chi}_{2}^{o}}$. The ‘ideal’ theoretical
distributions are calculated for the observed value of $M_{-}$ using different
choices for $M_{\tilde{\chi}_{2}^{o}}$. A second-order interpolation is then
fit to these points to estimate the value of $M_{\tilde{\chi}_{2}^{o}}$. The
$1\,\sigma$ uncertainty for $M_{\tilde{\chi}_{2}^{o}}$ is taken to be the
points where the $\chi^{2}$ increases from its minimum by one.
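To make this fitting procedure concrete, the following minimal sketch (in Python; the helper `ideal_m2c_hist` is a hypothetical stand-in for the Monte Carlo ‘ideal’ distributions, and the Pearson form of $\chi^{2}$ is an illustrative choice rather than the exact definition of Appendix D) performs the $\chi^{2}$ scan, the second-order interpolation, and the $\Delta\chi^{2}=1$ uncertainty estimate described above.

```python
import numpy as np

def chi2(observed_counts, expected_counts):
    """Pearson chi^2 between a binned observation and a predicted histogram."""
    expected = np.clip(expected_counts, 1e-9, None)  # avoid division by zero
    return np.sum((observed_counts - expected) ** 2 / expected)

def fit_mass(observed_counts, trial_masses, ideal_m2c_hist):
    """Estimate the mass by interpolating chi^2(M) with a parabola.

    ideal_m2c_hist(M) -> expected bin counts for a trial mass M (hypothetical
    helper standing in for the 'ideal' Monte Carlo distributions in the text).
    Returns (best_mass, one_sigma), with the 1-sigma taken where chi^2 rises by 1.
    """
    chi2_values = np.array([chi2(observed_counts, ideal_m2c_hist(m))
                            for m in trial_masses])
    a, b, c = np.polyfit(trial_masses, chi2_values, 2)  # chi^2 ~ a M^2 + b M + c
    best_mass = -b / (2.0 * a)                          # parabola minimum
    one_sigma = 1.0 / np.sqrt(a)                        # Delta(chi^2) = 1 half-width
    return best_mass, one_sigma
```

For a parabola $\chi^{2}\approx a(M-M_{0})^{2}+\chi^{2}_{\rm min}$, the $\Delta\chi^{2}=1$ half-width is $1/\sqrt{a}$, which is what the last line computes.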
The difficulty of the mass determination from the distribution grows with the
ratio $M_{+}/M_{-}$. Figures 6.5 and 6.6 show the two extremes among the cases
we consider. For model P1 $M_{+}/M_{-}=3.2$, and for model $\gamma$
$M_{+}/M_{-}=3.3$; these two models can therefore have $M_{\tilde{\chi}_{2}^{o}}$
and $M_{\tilde{\chi}_{1}^{o}}$ determined with approximately equal accuracy
from an equal number of signal events. Figure 6.5
shows that we may be able to achieve $\pm 6$ GeV resolution after about $30\
\mathrm{fb}^{-1}$. Model SPS 6, shown in Fig 6.6, represents a much harder case
because $M_{+}/M_{-}=13.6$. In this scenario we can only achieve $\pm 20$ GeV
resolution with $3000$ events, corresponding to approximately
$150\,\mathrm{fb}^{-1}$. In addition to these uncertainties, we also need to
consider the error propagated from $\delta M_{-}$ in Eq(6.6).
### Chapter Summary
We have proposed a method to extract the masses of new pair-produced states
based on a kinematic variable, $M_{2C}$, which incorporates all the known
kinematic constraints on the observed process and whose endpoint determines
the new particle masses. However the method does not rely solely on the
endpoint but uses the full data set, comparing the observed distribution for
$M_{2C}$ with the ideal distribution that corresponds to a given mass. As a
result the number of events needed to determine the masses is very
significantly reduced, so that the method may be employed at the LHC even for
processes with electroweak production cross sections.
This chapter is an initial feasibility study of the method for several
supersymmetric models. We have made many idealized assumptions which amount to
conditions present if the new particle states are directly produced. We have
included the effect of detector resolution but not backgrounds or cuts. Our
modeling assumed that $k=0$. We demonstrated that for some of the models
studied we are able to determine the masses to within 6 GeV from only 250
events. This efficiency is encouraging; a study including more of the real-
world complications follows in Chapter 7.
The constrained mass variables we advocate here can be readily extended to
other processes. By incorporating all the known kinematical constraints, the
information away from kinematical end-points can, with some mild process-
dependent information, be used to reduce the number of events needed to get
mass measurements. We shall illustrate an extension to three on-shell states
in Chapter 8.
## Chapter 7 The Variable $M_{2C}$: Significant Upstream Transverse Momentum
### Chapter Overview
In the previous chapter, we introduced the $M_{2C}$ kinematic variable which
gives an event-by-event lower bound on the dark-matter particle’s absolute
mass given the mass difference between the dark matter candidate and its
parent. The previous chapter focused on direct pair production with minimal
initial state radiation (ISR). In this chapter, we introduce a complementary
variable $M_{2C,UB}$ which gives an event-by-event _upper_ bound on the same
absolute mass. The complementary variable is only relevant in the presence of
large upstream transverse momentum (UTM). Our study shows that the technique
presented is as good as, if not better than, other model-independent invisible-
particle mass-determination techniques in both precision and accuracy.
In this chapter, which is based on work first published by Barr, Ross, and the
author in Ref [16], we demonstrate the use of the variables $M_{2C}$ and
$M_{2C,UB}$ under LHC conditions. The variables $M_{2C}(M_{-})$ and
$M_{2C,UB}(M_{-})$ give an event-by-event lower-bound and upper-bound
respectively on the mass of $Y$ assuming the topology in Fig. 4.4 and the mass
difference $M_{-}=M_{Y}-M_{N}$. To get the mass difference, we use events
where $Y$ decays into $N$ and two visible states via a three-body decay in
which we can easily determine the mass difference from the end point of the
visible-states invariant-mass distribution, $m^{2}_{12}$. One might also
conceive of a situation with $M_{2C}$ supplementing an alternative technique
that gives a tight constraint on the mass difference but may have multiple
solutions or a weaker constraint on the mass scale [157][14]. Given this mass
difference and enough statistics, $M_{2C}$’s endpoint gives the mass of $Y$.
However the main advantage of the $M_{2C}$ method is that it does not rely
solely on the position of the endpoint but uses the additional information
contained in events which lie far from the endpoint. As a result it gives a
mass determination using significantly fewer events and is less sensitive to
energy resolution and other errors.
To illustrate the method, in this chapter we study in detail the performance
of the $M_{2C}$ constrained mass variable in a specific supersymmetric model.
We study events where each of the two branches has a decay chain that ends with
a $\tilde{\chi}^{o}_{2}$ decaying to a $\tilde{\chi}^{o}_{1}$ and a pair of
opposite-sign same-flavor (OSSF) leptons. Thus the final states of interest
contain four isolated leptons (made up of two OSSF pairs) and missing
transverse momentum. Fig 4.4 defines the four-momenta of the particle states
with $Y=\tilde{\chi}^{o}_{2}$, $N=\tilde{\chi}^{o}_{1}$, and the OSSF pairs
forming the visible particles $1-4$. Any decay products early in the decay
chains of either branch are grouped into $k$ which we generically refer to as
upstream transverse momentum (UTM). Nonzero $k$ could be the result of initial
state radiation (ISR) or decays of heavier particles further up the decay
chain. Events with four leptons and missing transverse momentum have a very
small Standard-Model background. To give a detailed illustration of the
$M_{2C}$ methods, we have chosen to analyze the benchmark point P1 from [2]
which corresponds to mSUGRA with $m_{o}=350$ GeV, $m_{1/2}=180$ GeV,
$\tan\beta=20$, ${\rm{sign}}(\mu)=+$, $A_{o}=0$. Our SUSY particle spectrum
was calculated with ISAJET [189] version 7.63. We stress that the analysis
technique employed applies generically to models involving decays to a massive
particle state that leaves the detector unnoticed.
A powerful feature of the $M_{2C}$ distribution is that, with some mild
assumptions, the shape away from the endpoint can be entirely determined from
the unknown mass scale and quantities that are measured. The ideal shape fit
against early data therefore provides an early mass estimate for the invisible
particle. This study is meant to be a guide on how to overcome difficulties in
establishing and fitting the shape: difficulties from combinatoric issues,
from differing energy resolutions for the leptons, hadrons, and missing
transverse momentum, from backgrounds, and from large upstream transverse
momentum (UTM). (Our references to UTM correspond to the Significant
Transverse Momentum (SPT) pair-production category in [176], where SPT
indicates that the relevant pair of parent particles can be seen as recoiling
against a significant transverse momentum.) As we shall discuss, UTM actually
provides surprising benefits.
The chapter is structured as follows: In Section 7.1, we review $M_{2C}$ and
introduce the new observation that, in addition to an event-by-event lower
bound on $M_{Y}$, large recoil against UTM enables one also to obtain an
event-by-event _upper_ bound on $M_{Y}$. We call this quantity $M_{2C,UB}$.
Section 7.2 describes the modeling and simulation employed. Section 7.3
discusses the implications of several effects on the shape of the distribution
including the $m_{12}$ (in our case $m_{ll}$) distribution, the UTM
distribution, the backgrounds, combinatorics, energy resolution, and missing
transverse momentum cuts. In Section 7.4, we put these factors together and
estimate the performance. We conclude the chapter with a discussion of the
performance in comparison to previous work.
### 7.1 Upper Bounds on $M_{Y}$ from Recoil against UTM
We will now review the definition of $M_{2C}$ as providing an event-by-event
lower bound on $M_{Y}$. In generalizing this framework, we find a new result
that one can also obtain an upper bound on the mass $M_{Y}$ when the two
parent particles $Y$ recoil against some large upstream transverse momentum
$k_{T}$.
#### 7.1.1 Review of the Lower Bound on $M_{Y}$
Fig 4.4 gives the relevant topology and the momentum assignments. In one branch,
the visible particles $1$ and $2$ and the invisible particle $N$ carry momenta
$\alpha_{1}$, $\alpha_{2}$ (which we group into $\alpha=\alpha_{1}+\alpha_{2}$)
and $p$, respectively; in the other branch the corresponding momenta are
$\beta_{1}$, $\beta_{2}$ (grouped into $\beta=\beta_{1}+\beta_{2}$) and $q$.
We assume that the parent particle $Y$ is the same in both branches so
$(p+\alpha)^{2}=(q+\beta)^{2}$. Any earlier decay products of either branch
are grouped into the upstream transverse momentum (UTM) four-vector,
$k$.
In the previous chapter we showed how to find an event-by-event lower bound on
the true masses $M_{N}$ and $M_{Y}$. We assume that the mass difference
$M_{-}=M_{Y}-M_{N}$ can be accurately measured from the invariant mass edges
${\tt{max}}\ m_{12}$ or ${\tt{max}}\ m_{34}$. For each event, the variable
$M_{2C}$ is the minimum value of the mass of $Y$ (the second lightest state)
after minimizing over the unknown division of the missing transverse momentum
$\not{P}_{T}$ between the two dark-matter particles $N$ as described in
Eq(6.5-6.5).
One way of calculating $M_{2C}$ for an event is to use $M_{T2}(\chi_{N})$ [12,
13, 15], which provides a lower bound on the mass of $Y$ for an assumed mass
$\chi_{N}$ of $N$. The true mass of $Y$ lies along the line
$\chi_{Y}(\chi_{N})=M_{-}+\chi_{N}$ where we use $\chi_{Y}$ to denote the
possible masses of $Y$ and to distinguish it from the true mass of $Y$ denoted
with $M_{Y}$. In other words $M_{T2}$ provides the constraint
$\chi_{Y}(\chi_{N})\geq M_{T2}(\chi_{N})$. Thus we can see that for $\chi_{N}$
to be compatible with an event, we must have
$M_{T2}(\chi_{N})\leq\chi_{Y}(\chi_{N})=\chi_{N}+M_{-}$.
For a given event, if one assumes a mass $\chi_{N}$ for $N$, and if the
inequality $M_{T2}(\chi_{N})\leq\chi_{N}+M_{-}$ is satisfied, then there is no
contradiction, and the event is compatible with this value of $\chi_{N}$. If
however, $M_{T2}(\chi_{N})>M_{-}+\chi_{N}$, then we have a contradiction, and
the event excludes this value $\chi_{N}$ as a viable mass of $N$. Using this
observation, $M_{2C}$ can be found for each event by seeking the intersection
between $M_{T2}(\chi_{N})$ and $\chi_{N}+M_{-}$ [15]. Equivalently, the lower
bound on $M_{Y}$ is given by $M_{Y}\geq M_{-}+\chi_{N}^{o}$ where
$\chi_{N}^{o}$ is the zero of
$g(\chi_{N})=M_{T2}(\chi_{N})-\chi_{N}-M_{-}\quad{\rm{with}}\quad g^{\prime}(\chi_{N}^{o})<0.$ (7.1)
In the case $k=0$, the extreme events analyzed in CCKP [174] demonstrate that
$g(\chi)$ will only have one positive zero or no positive zeros, and the slope
at a zero will always be negative. If there are no positive zeros, the lower bound is
the trivial one given by $M_{-}$. Note that a lower bound on the value
of $M_{Y}$ corresponds to a lower bound on the value of $M_{N}$. The Appendix
in Ref. [15] shows that at the zeros of $g(\chi_{N})$ which satisfy Eq(7.1),
the momenta satisfy Eqns(6.5-6.5).
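As an illustration of the intersection procedure just described, the following minimal sketch (assuming a user-supplied callable `mT2` that evaluates $M_{T2}(\chi_{N})$ for the event; any standard $M_{T2}$ implementation could be substituted) scans $g(\chi_{N})=M_{T2}(\chi_{N})-\chi_{N}-M_{-}$ for a downward zero crossing and returns the lower bound $M_{-}+\chi_{N}^{o}$, falling back to the trivial bound $M_{-}$.

```python
import numpy as np
from scipy.optimize import brentq

def m2c_lower_bound(mT2, m_minus, chi_max=2000.0, n_grid=4000):
    """Event-by-event lower bound on M_Y: M_2C = M_- + chi_N^o, where chi_N^o
    is a zero of g(chi) = M_T2(chi) - chi - M_- with negative slope.

    mT2     : callable chi_N -> M_T2(chi_N) for this event (assumed supplied).
    m_minus : measured mass difference M_- = M_Y - M_N.
    Returns M_- if g has no positive zero (the trivial bound).
    """
    g = lambda chi: mT2(chi) - chi - m_minus
    grid = np.linspace(0.0, chi_max, n_grid)
    values = np.array([g(chi) for chi in grid])
    for lo, hi, v_lo, v_hi in zip(grid[:-1], grid[1:], values[:-1], values[1:]):
        if v_lo > 0.0 >= v_hi:               # downward sign change: g' < 0 at the zero
            chi_zero = brentq(g, lo, hi)      # refine the crossing
            return m_minus + chi_zero
    return m_minus                            # trivial lower bound
```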
#### 7.1.2 A New Upper Bound on $M_{Y}$
If there is large upstream transverse momentum (UTM) ($k_{T}\gtrapprox M_{-}$)
against which the system recoils, then we find a new result. Using the
$M_{T2}$ method to calculate $M_{2C}$ gives one the immediate ability to see
that $M_{Y}$ can also have an upper bound when requiring Eqns(6.5-6.5). This
follows because for large UTM the function $g(\chi_{N})$ may have two
zeros (there may be regions in parameter space where the function $g(\chi)$ has
more than two zeros, but we have not encountered such cases in our
simulations), which provides both an upper and a lower bound for $M_{Y}$ from a
single event. We have also found regions of parameter space where $g(\chi)$
has a single zero but $g^{\prime}(\chi_{N}^{o})>0$, corresponding to an upper
bound on the true mass $M_{N}$ (and $M_{Y}$) and only the trivial lower
bound of $M_{N}\geq 0$.
We can obtain some insight into the cases in which events with large UTM
provide upper bounds on the mass by studying a class of extreme events with
two hard jets, $j_{\alpha}$ and $j_{\beta}$, against which $Y$ recoils
($k=j_{\alpha}+j_{\beta}$). We will describe this extreme event and solve for
the regions of parameter space for which one can analytically see the
intersection points giving a lower bound and/or an upper bound. The event is
extremal in that $M_{T2}(\chi_{N})$, which gives a lower bound on $M_{Y}$,
actually gives the true value of $M_{Y}$ when one selects $\chi_{N}$ equal to
the true mass $M_{N}$.
The ideal event we consider is one where a heavier state $G$ is pair-produced on
shell at threshold. For simplicity we assume the lab frame is the collision
frame. Assume that the $G$s, initially at rest, decay into visible massless
jets $j_{\alpha}$, $j_{\beta}$ and the two $Y$ states with the decay-product
momenta $\alpha+p$ and $\beta+q$. Both jets have their momenta in the same
transverse plane along the negative $\hat{x}$-axis, and both $Y$ momenta
are directed along the $\hat{x}$-axis. Finally, in the rest frame of the two
$Y$s, both decay such that the visible decay products have their
momenta $\alpha$ and $\beta$ along the $\hat{x}$-axis and both invisible
massive states $N$ have their momenta along the negative $\hat{x}$-axis.
In the lab frame, the four-vectors are given by
$j_{\alpha}=j_{\beta}=\frac{M_{G}}{2}\left(1-\frac{M_{Y}^{2}}{M_{G}^{2}}\right)\left\{1,-1,0,0\right\}$ (7.2)
$\alpha=\beta=\frac{M_{G}}{2}\left(1-\frac{M_{N}^{2}}{M_{Y}^{2}}\right)\left\{1,1,0,0\right\}$ (7.3)
$p=q=\frac{M_{G}}{2}\left\{\left(\frac{M_{N}^{2}}{M_{Y}^{2}}+\frac{M_{Y}^{2}}{M_{G}^{2}}\right),\left(\frac{M_{N}^{2}}{M_{Y}^{2}}-\frac{M_{Y}^{2}}{M_{G}^{2}}\right),0,0\right\}.$ (7.4)
For the event given by Eqns(7.2-7.4), we can exactly calculate
$M_{T2}(\chi_{N})$:
$M^{2}_{T2}(\chi_{N})=\chi_{N}^{2}+\frac{\left(M_{N}^{2}-M_{Y}^{2}\right)\left(M_{N}^{2}M_{G}^{2}-M_{Y}^{4}\right)}{2\,M_{Y}^{4}}+\frac{\left(M_{Y}^{2}-M_{N}^{2}\right)\sqrt{4\,M_{G}^{2}\,\chi_{N}^{2}M_{Y}^{4}+\left(M_{Y}^{4}-M_{N}^{2}M_{G}^{2}\right)^{2}}}{2\,M_{Y}^{4}}.$ (7.5)
This is found by calculating the transverse mass for each branch while
assuming $\chi_{N}$ to be the mass of $N$. The value of $p_{x}$ is chosen so
that the transverse masses of the two branches are equal. Substituting this
value back into the transverse mass of either branch gives $M_{T2}(\chi_{N})$.
Fig 7.1 shows $g(\chi_{N})$, defined in Eq(7.1), for several choices of
$M_{G}$ for the process described by Eqs(7.2-7.4) with
$M_{-}=53\,\,{\rm{GeV}}$ and $M_{N}=67.4\,\,{\rm{GeV}}$. Because $G$ is the
parent of $Y$, we must have $M_{G}>M_{Y}$. If
$M_{Y}<M_{G}<2M_{Y}^{2}/(M_{N}+M_{Y})$, then $M_{T2}(\chi_{N}<M_{N})$ is
larger than $\chi_{N}+M_{-}$ up until their point of intersection at
$\chi_{N}=M_{N}$. In this case their point of intersection provides a lower
bound as illustrated by the dotted line in Fig. 7.1 for the case with
$M_{G}=150\,\,{\rm{GeV}}$. For
$2M_{Y}^{2}/(M_{N}+M_{Y})<M_{G}<\sqrt{M_{Y}^{3}/M_{N}}$ there are two
solutions
$\chi_{N,{\rm{Min}}}=M_{N}$ (7.6)
$\chi_{N,{\rm{Max}}}=\frac{(M_{N}-M_{Y})\left(-2M_{Y}^{4}+M_{N}M_{G}^{2}M_{Y}+M_{N}^{2}M_{G}^{2}\right)}{(M_{N}M_{G}+(M_{G}-2M_{Y})M_{Y})(M_{N}M_{G}+M_{Y}(M_{G}+2M_{Y}))}.$ (7.7)
When $M_{G}=\sqrt{M_{Y}^{3}/M_{N}}$, the function $g(\chi_{N})$ has only one
zero with the lower bound equalling the upper bound at $M_{N}$. The solid line
in Fig. 7.1 shows this case. Between
$\sqrt{M_{Y}^{3}/M_{N}}<M_{G}<\sqrt{6}M_{Y}^{2}/\sqrt{(M_{N}+M_{Y})(2M_{N}+M_{Y})}$
we again have two solutions but this time with
$\chi_{N,{\rm{Min}}}=\frac{(M_{N}-M_{Y})\left(-2M_{Y}^{4}+M_{N}M_{G}^{2}M_{Y}+M_{N}^{2}M_{G}^{2}\right)}{(M_{N}M_{G}+(M_{G}-2M_{Y})M_{Y})(M_{N}M_{G}+M_{Y}(M_{G}+2M_{Y}))}$ (7.8)
$\chi_{N,{\rm{Max}}}=M_{N}.$ (7.9)
The dashed line in Fig. 7.1 shows this case with $M_{G}=170\,\,{\rm{GeV}}$.
For $M_{G}$ greater than this, we have $\chi_{N,{\rm{Max}}}=M_{N}$ and
$\chi_{N,{\rm{Min}}}=0$.
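To make the extreme-event example concrete, the following minimal sketch implements Eq(7.5) and numerically locates the zeros of $g(\chi_{N})$. The masses match the example in the text ($M_{-}=53$ GeV, $M_{N}=67.4$ GeV), and for $M_{G}=150$ GeV one expects a single zero at $\chi_{N}=M_{N}$, giving the lower bound; this is an illustrative check, not the author's code.

```python
import numpy as np
from scipy.optimize import brentq

M_MINUS, M_N = 53.0, 67.4        # GeV, values used in the text's example
M_Y = M_N + M_MINUS              # 120.4 GeV

def mT2_extreme(chi_N, M_G):
    """Analytic M_T2(chi_N) for the extreme event of Eqns(7.2-7.4), per Eq(7.5)."""
    m2 = (chi_N**2
          + (M_N**2 - M_Y**2) * (M_N**2 * M_G**2 - M_Y**4) / (2.0 * M_Y**4)
          + (M_Y**2 - M_N**2)
          * np.sqrt(4.0 * M_G**2 * chi_N**2 * M_Y**4 + (M_Y**4 - M_N**2 * M_G**2)**2)
          / (2.0 * M_Y**4))
    return np.sqrt(m2)

def zeros_of_g(M_G, chi_max=200.0, n_grid=2000):
    """Return all zeros of g(chi) = M_T2(chi) - chi - M_- found by a grid scan."""
    g = lambda chi: mT2_extreme(chi, M_G) - chi - M_MINUS
    grid = np.linspace(0.0, chi_max, n_grid)
    vals = np.array([g(c) for c in grid])
    crossings = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    return [brentq(g, grid[i], grid[i + 1]) for i in crossings]

print(zeros_of_g(M_G=150.0))     # expect a single zero near chi_N = M_N = 67.4 GeV
```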
Figure 7.1: Shows $g(\chi_{N})$ for the extreme event in Eq(7.2-7.4) with
$M_{-}=53\,\,{\rm{GeV}}$ and $M_{N}=67.4\,\,{\rm{GeV}}$. The red dotted line
has $M_{G}=150\,\,{\rm{GeV}}$ and shows an event providing a lower bound on
$M_{Y}$. The blue dashed line has $M_{G}=170\,\,{\rm{GeV}}$ and shows an event
with both a lower bound and an upper-bound on $M_{Y}$. The black solid line
shows $M_{G}=\sqrt{M_{Y}^{3}/M_{N}}$ where the lower bound equals the upper
bound.
This example illustrates how $M_{2C}$ can provide both a lower-bound and an
upper-bound on the true mass for those events with large UTM. The upper-bound
distribution provides extra information that can also be used to improve early
mass determination, and in what follows we will refer to the upper bound as
$M_{2C,UB}$. We now move on to discuss modeling and simulation of this new
observation.
### 7.2 Modeling and Simulation
As a specific example of the application of the $M_{2C}$ method, we have
chosen a supersymmetric mSUGRA model with $m_{o}=350$ GeV, $m_{1/2}=180$ GeV,
$\tan\beta=20$, ${\rm{sign}}(\mu)=+$, $A_{o}=0$ (this is model P1 from
[2], which we also used in [15]). The spectrum used in the simulation has
$M_{\tilde{\chi}^{o}_{1}}=67.4\,\,{\rm{GeV}}$ and
$M_{\tilde{\chi}^{o}_{2}}=120.0\,\,{\rm{GeV}}$. We have employed two
simulation packages. One is a Mathematica code that creates the ‘ideal’
distributions based only on very simple assumptions and input data. The second
is HERWIG [17, 18, 19] which simulates events based on a SUSY spectrum, MSSM
cross sections, decay chains, and appropriate parton distribution functions.
If the simple Mathematica simulator predicts ‘ideal’ shapes that agree with
the HERWIG generator, then we have reason to believe that all the relevant factors
relating to the shape are identified in the simple Mathematica simulation.
This is an important check in validating the benefits of fitting the $M_{2C}$
and $M_{2C,UB}$ distribution shape as a method to measure the mass of new
invisible particles produced at hadron colliders.
#### 7.2.1 Generation of “Ideal” Distributions
Our ‘ideal’ distributions are produced from a home-grown Monte Carlo event
generator written in Mathematica. This generator serves to ensure that we
understand the origin of the distribution shape. It also ensures that we have
control over measuring the parameters needed to determine the mass without
knowing the full model, coupling coefficients, or parton distribution
functions. We also use this simulation to determine which properties the
ideal distributions depend on.
The simulator is used to create events satisfying the topology shown in Fig
4.4 for a set of specified masses. We take as given the previously measured
mass difference
$M_{\tilde{\chi}^{o}_{2}}-M_{\tilde{\chi}^{o}_{1}}=52.6\,\,{\rm{GeV}}$, which
we use in this chapter’s simulations. We neglect the finite widths of the particle
states as most are in the sub-GeV range for the model we are considering. We
neglect spin correlations between the two branches. We perform the simulations
in the center-of-mass frame because $M_{2C}$ and $M_{2C,UB}$ are transverse
observables and are invariant under longitudinal boosts. The collision energy
$\sqrt{s}$ is distributed according to the normalized distribution
$\rho(\sqrt{s})=12\,M_{\tilde{\chi}^{o}_{2}}^{2}\frac{\sqrt{s-4\,M_{\tilde{\chi}^{o}_{2}}^{2}}}{s^{2}}$
(7.10)
unless otherwise specified. The $\tilde{\chi}^{o}_{2}$ is produced with a
uniform angular distribution, and all subsequent decays have uniform angular
distribution in the rest frame of the parent. The UTM is simulated by assigning
the four-vector $k$ a fixed transverse momentum $k_{T}$ with
$k^{2}=(100\,\,{\rm{GeV}})^{2}$ (unless otherwise specified), and boosting the
other four-vectors of the event such that the total transverse momentum is
zero. As we will show, these simple
assumptions capture the important elements of the process. Being relatively
model independent, they provide a means of determining the mass for various
production mechanisms. If one were to assume detailed knowledge of the
production process, it would be possible to obtain a better mass determination
by using a more complete simulation like HERWIG to provide the ‘ideal’
distributions against which one compares with the data. Here we concentrate on
the more model independent simulation to demonstrate that it predicts the
$M_{2C}$ and $M_{2C,UB}$ distributions well-enough to perform the mass
determination that we demonstrate in this case-study.
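As an aside on how the collision energy can be drawn from Eq(7.10), the following minimal sketch (an illustrative Python stand-in, not the author's Mathematica code) uses inverse-transform sampling; integrating Eq(7.10) gives the cumulative distribution $F(\sqrt{s})=(1-4M_{\tilde{\chi}^{o}_{2}}^{2}/s)^{3/2}$, which can be inverted in closed form.

```python
import numpy as np

def sample_sqrt_s(m_chi2, size=1, rng=np.random.default_rng()):
    """Draw collision energies sqrt(s) from the normalized density of Eq(7.10),
    rho(sqrt(s)) = 12 M^2 sqrt(s - 4 M^2) / s^2  for sqrt(s) > 2M,
    using inverse-transform sampling: the CDF is F(x) = (1 - 4 M^2 / x^2)^(3/2),
    so x = 2M / sqrt(1 - u^(2/3)) for u uniform on (0, 1).
    """
    u = rng.uniform(0.0, 1.0, size)
    return 2.0 * m_chi2 / np.sqrt(1.0 - u ** (2.0 / 3.0))

# Example: pair-production threshold at 2 * 120 GeV for the chi_2^0 of this model.
energies = sample_sqrt_s(120.0, size=5)
```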
#### 7.2.2 HERWIG “Data”
Figure 7.2: The $M_{2C}$ and $M_{2C,UB}$ distributions of HERWIG events before
smearing (to simulate detector resolution) is applied. The distributions’ end-
points show $M_{\tilde{\chi}^{o}_{2}}\approx 120$ GeV. The top thick curve
shows the net distribution, the next curve down shows the contribution of only
the signal events, and the bottom dashed curve shows the contribution of only
background events.
In order to obtain a more realistic estimate of the problems associated with
collision data, we generate samples of unweighted inclusive supersymmetric
particle pair production, using the HERWIG Monte Carlo program with LHC beam
conditions. These samples produce a more realistic simulation of the event
structure that would be obtained for the supersymmetric model studied here,
including the (leading order) cross sections and parton distributions. They
include all supersymmetric processes and so contain the relevant background
processes as well as the particular decay chain that we wish to study. Figure
7.2 shows the $M_{2C}$ and $M_{2C,UB}$ distributions of a sample of HERWIG
generated signal and background events.
Charged leptons ($e^{\pm}$ and $\mu^{\pm}$) produced in the decay of heavy
objects (SUSY particles and $W$ and $Z$ bosons) were selected for further study
provided they satisfied basic selection criteria on transverse momentum
($p_{T}>10$ GeV) and pseudorapidity ($|\eta|<2.5$). Leptons coming from hadron
decays are usually contained within hadronic jets and so can be experimentally
rejected with high efficiency using energy or track isolation criteria. This
latter category of leptons was therefore not used in this study. The
acceptance criterion used for the hadronic final state was $|\eta|<5$. The
detector energy resolution functions used are described in Section 7.3.2.
### 7.3 Factors for Successful Shape Fitting
There are several factors that control or affect the shape of the $M_{2C}$ and
$M_{2C,UB}$ distributions. We divide the factors into those that affect the
in-principle distribution and the factors that affect the observation of the
distribution by the detector, such as energy resolution and selection cuts.
The in-principle distribution of these events is influenced by the presence or
absence of spin-correlations between the branches, the $m_{ll}$ distribution
of the visible particles, any significant upstream transverse momentum (UTM)
against which the system is recoiling (_e.g._ gluinos or squarks decaying
further up the decay chain), and background coming from other new-physics
processes or the Standard Model. As all these processes effectively occur at
the interaction vertex, there are some combinatoric ambiguities. These are the
factors that influence the in-principle distribution of events that impinges
on the particle detector.
The actual distribution recorded by the detector will depend on further
factors. Some factors we are able to regulate – for example cuts on the
missing transverse momentum. Other factors depend on how well we understand
the detector’s operation – such as the energy resolution and particle
identification.
Where the effect of such factors is significant, for example for the $m_{12}$,
$k_{T}$, and background distributions, our approach has been to model their
effect on the ideal distributions by using appropriate information from the
‘data’, much as one would do in a real LHC experiment. For the present our
‘data’ are provided by HERWIG, rather than LHC events, but the principle is
the same.
#### 7.3.1 Factors Affecting the In-principle Distribution
* •
Mass Difference and Mass Scale
The end-points of the $M_{2C}$ and $M_{2C,UB}$ distributions give the mass of
$\tilde{\chi}^{o}_{2}$. Therefore the mass scale, $M_{\tilde{\chi}^{o}_{2}}$,
is a dominant factor in the shape of the ‘ideal’ distribution. This is the
reason we can use these distributions to determine the mass scale. Fig 7.3
shows the $M_{2C}$ and $M_{2C,UB}$ distributions for five choices of
$M_{\tilde{\chi}^{o}_{2}}$ assuming the HERWIG generated $m_{ll}$ and UTM
distributions.
Figure 7.3: We show the $M_{2C}$ and $M_{2C,UB}$ ideal distributions for five
choices of $M_{\tilde{\chi}^{o}_{2}}$ assuming the HERWIG generated $m_{ll}$
and UTM distributions.
How does the shape change with mass scale? The shape is typically sharply
peaked at $M_{2C}=M_{-}$ followed by a tail that ends at the mass of
$M_{\tilde{\chi}^{o}_{2}}$. The peak at $M_{-}$ is due to events that are
compatible with $M_{\tilde{\chi}^{o}_{1}}=0$. We say these events give the
trivial constraint. Because we bin the data, the height of the first bin
depends on the bin size. As
$M_{+}/M_{-}=(M_{\tilde{\chi}^{o}_{2}}+M_{\tilde{\chi}^{o}_{1}})/(M_{\tilde{\chi}^{o}_{2}}-M_{\tilde{\chi}^{o}_{1}})$
becomes larger, the non-trivial events are distributed over a wider range
and the endpoint becomes less clear. In general, if all other things are equal,
the larger the mass, the more events in the first bin and the longer and
flatter the tail.
The distribution also depends on the mass difference $M_{-}$ which we assume
has been determined. We expect that experimentally one should be able to read
off the mass difference from the $m_{ll}$ kinematic end-point with very high
precision. Gjelsten, Miller, and Osland estimate this edge can be measured to
better than $0.08\,\,{\rm{GeV}}$ [157, 159] using many different channels that
lead to the same edge, and after modeling energy resolution and background.
Errors in the mass determination propagated from the error in the mass
difference in the limit of $k_{T}=0$ are given approximately by
$\delta M_{\tilde{\chi}^{o}_{2}}=\frac{\delta
M_{-}}{2}\left(1-\frac{M_{+}^{2}}{M_{-}^{2}}\right)\ \ \ \delta
M_{\tilde{\chi}^{o}_{1}}=-\frac{\delta
M_{-}}{2}\left(1+\frac{M_{+}^{2}}{M_{-}^{2}}\right)$ (7.11)
where $\delta M_{-}$ is the error in the determination of the mass difference
$M_{-}$. An error in $M_{-}$ will lead to an $M_{2C}$ distribution with a
shape and endpoint above or below the true mass in the direction indicated by
Eq(7.11).
To isolate this source of error from the uncertainty in the fit, we assume
that the mass difference is known exactly in our stated results. In our case
an uncertainty of $\delta M_{-}=0.08\,\,{\rm{GeV}}$ would lead to an
additional $\delta M_{\tilde{\chi}^{o}_{1}}=\pm 0.5\,\,{\rm{GeV}}$ to be added
in quadrature to the error from fitting.
* •
Spin Correlations Between Branches
The potential impact of spin correlations between branches on $M_{2C}$ was
studied in Sec 6.3. There are also no spin correlations if the
$\tilde{\chi}^{o}_{2}$ parents are part of a longer decay chain which involves
a scalar at some stage, as is the case in most of the events in the model we
study here. In the simple Mathematica simulations, we have assumed no spin
dependence in the production of the hypothetical ideal distribution.
* •
Input $m_{12}$ Distributions
The $m_{ll}$ distribution affects the $M_{2C}$ distribution. Fig 7.4 shows two
$m_{ll}$ distributions and the corresponding $M_{2C}$ distributions with
$k_{T}=0$ (no UTM). The solid lines show the case where the three-body decay
from $\tilde{\chi}^{o}_{2}$ to $\tilde{\chi}^{o}_{1}$ is completely dominated
by a $Z$ boson. The dashed line shows the case where the $m_{ll}$ distribution
is extracted from the ‘realistic’ HERWIG simulation. We can see that the
$M_{2C}$ distribution is affected most strongly in the first several non-zero
bins. If we were to determine the mass only from the shape of these first
several bins, using only the $Z$ contribution for the $m_{ll}$ distribution, we
would estimate the mass to be about $4$ GeV below the true mass. This can
be understood because the shape change of the $m_{ll}$ distribution
effectively took events out of the first bin and spread them over the larger
bins, simulating the effect of a smaller mass.
Figure 7.4: Dependence of $M_{2C}$ distribution on the $m_{ll}$ distribution.
Left: The $m_{ll}$ distributions. Right: The corresponding $M_{2C}$
distributions. The solid curves show the case where the three-body decay is
dominated by the $Z$ boson channel, and the
dashed curves show the case where the $m_{ll}$ distribution is taken directly
from the HERWIG simulation.
* •
Input Upstream Transverse Momentum Distribution
As we discussed in Section 7.1, if there is a large upstream transverse
momentum (UTM) against which the two $\tilde{\chi}^{o}_{2}$’s recoil, then we
have both an upper and lower bound on the mass scale. The left frame of Fig.
7.5 shows the UTM distribution observed in the ‘realistic’ HERWIG data. The
right frame of Fig. 7.5 shows the $M_{2C}$ and $M_{2C,UB}$ distributions for
fixed UTM ($k_{T}$) of $0$, $75$, $175$, $275$, $375$, and $575\
\,\,{\rm{GeV}}$ all with $k^{2}=(100\,\,{\rm{GeV}})^{2}$. As we discuss under
the next bullet, we also find the distribution is not sensitive to the value
of $k^{2}$. For $k_{T}>275\,\,{\rm{GeV}}$, these curves begin to approach a
common shape. These are ideal $M_{2C}$ upper and lower bound distributions
where $M_{N}=70\,\,{\rm{GeV}}$ and $M_{Y}=123\,\,{\rm{GeV}}$. Notice that
there is no upper-bound curve for the case with zero $k_{T}$ UTM. The UTM
makes the distribution have a sharper endpoint and thereby makes the mass
easier to determine. This is equivalent to having a sharper kink in
${\tt{max}}\ M_{T2}$ in the presence of large UTM [176].
How do we determine $k_{T}$ from the data? Because we demand exactly four
leptons (two OSSF pairs), we assume all other activity, basically the hadronic
activity, in the detector is UTM. The shape used in the ‘ideal’ distribution
is a superposition of the different fixed UTM distributions, shown on right
frame of Fig. 7.5, weighted by the observed UTM distribution, shown on the
left frame of Fig. 7.5. Equivalently, we obtain the ideal distribution by
selecting $k_{T}$ in the Mathematica Monte Carlo according to the observed UTM
distribution.
Figure 7.5: Left: The UTM distribution observed in the HERWIG simulation.
Right: Ideal $M_{2C}$ upper bound and lower bound distribution for a range of
upstream transverse momentum (UTM) values
($k_{T}=0,75,175,275,375,575\,\,{\rm{GeV}}$) where $M_{N}=70$ GeV and
$M_{Y}=123$ GeV.
* •
Shape Largely Independent of Parton Distributions and Collision Energy
In the limit where there is no UTM, $M_{2C}$ is invariant under back-to-back
boosts of the parent particles; therefore, $M_{2C}$ is also insensitive to
changes in the parton distribution functions.
How much of this invariance survives in the presence of large UTM? The answer
is that the distribution remains largely independent of the parton collision
energy and of the mass $k^{2}$, as shown numerically in Fig 7.6.
the left frame, we show three distributions and in the right frame their
difference with $2\ \sigma$ error bars calculated from $15000$ events. The
first distribution assumes $k_{T}=175\,\,{\rm{GeV}}$,
$k^{2}=(100\,\,{\rm{GeV}})^{2}$, $\sqrt{s}$ distributed via Eq(7.10). The
second distribution assumes $k_{T}=175\,\,{\rm{GeV}}$,
$k^{2}=(2000\,\,{\rm{GeV}})^{2}$, $\sqrt{s}$ distributed via Eq(7.10). The
third distribution assumes $k_{T}=175\,\,{\rm{GeV}}$,
$k^{2}=(100\,\,{\rm{GeV}})^{2}$, and a fixed collision energy of
$\sqrt{s}=549\,\,{\rm{GeV}}$.
Figure 7.6: Figure shows that even with large UTM, the distribution is
independent of $k^{2}$ and the parton collision energy to the numerical
accuracies as calculated from 15000 events. Shown are three distributions and
their difference. (1) $k_{T}=175\,\,{\rm{GeV}}$,
$k^{2}=(100\,\,{\rm{GeV}})^{2}$, $\sqrt{s}$ distributed via Eq(7.10). (2)
$k_{T}=175\,\,{\rm{GeV}}$, $k^{2}=(2000\,\,{\rm{GeV}})^{2}$, $\sqrt{s}$
distributed via Eq(7.10). (3) $k_{T}=175\,\,{\rm{GeV}}$,
$k^{2}=(100\,\,{\rm{GeV}})^{2}$, $\sqrt{s}=549\,\,{\rm{GeV}}$.
* •
Backgrounds
Figure 7.7: The invariant masses of the OSSF lepton pairs from the two branches,
forming a Dalitz-like wedgebox analysis. The events outside the $m_{ll}\leq
53\,\,{\rm{GeV}}$ signal rectangle provide control samples from which we
estimate the background shape and magnitude. The dark events are signal, the
lighter events are background.
Backgrounds affect the shape and, if not corrected for, could provide a
systematic error in the estimated mass. In Section 7.4 we will see that the
position of the minimum $\chi^{2}$ in a fit to $M_{\tilde{\chi}^{o}_{1}}$ is
barely affected by the background. The main effect of the background is to
shift the parabola up, giving a worse fit. To improve the fit, we may be able
to estimate the $M_{2C}$ and $M_{2C,UB}$ distribution and magnitude of the
background from the data itself. We first discuss the sources of background,
and then we describe a generic technique using a Dalitz-like wedgebox analysis
to estimate a background model which gives approximately the correct shape and
magnitude of the background.
One reason we study the four-lepton plus missing transverse momentum channel
is the very low Standard-Model background [187, 186]. A previous
study [186] estimates about $120$ Standard-Model four-lepton events (two OSSF
pairs) for $100\,\,\,{\rm{fb}}^{-1}$ with a $\not{P}_{T}>20$ GeV cut. They
suggest that we can further reduce the Standard-Model background by requiring
several hadronic jets. Because we expect very little direct
$\tilde{\chi}^{o}_{2}$ pair production, this would have very little effect on
the number of signal events. Also, because $Z^{o}$s are a part of the
intermediate states of these background processes, very few of these events
will have $m_{ll}$ significantly different from $M_{Z}$.
What is the source of these Standard-Model backgrounds? About $60\%$ is from
$Z$-pair production events with no invisible decay products, in which the
missing transverse momentum can only arise from experimental particle
identification and resolution errors. This implies that a slightly stronger
$\not{P}_{T}$ cut could further eliminate this background. Another $40\%$ is
due to $t,\bar{t},Z$ production. Not explicitly discussed in their study, but
representing another possible source of backgrounds, are events containing
heavy hadrons which decay leptonically. If we assume $b$-quark hadrons decay
to isolated leptons with a branching ratio of $0.01$, then LHC $t,\bar{t}$
production will lead to about $10$ events passing these cuts for
$100\,\,\,{\rm{fb}}^{-1}$ where both OSSF lepton pairs have $m_{ll}<M_{Z}$.
Tau decays also provide a background for our specific process of interest. The
process $\tilde{\chi}^{o}_{2}\rightarrow\tau^{+}\tau^{-}\tilde{\chi}^{o}_{1}$
will be misidentified as $e^{-}e^{+}$ or $\mu^{+}\mu^{-}$ about $3\%$ of the
time ($6\%$ total). Because the $\tau$ decays introduce new sources of missing
transverse momentum ($\nu_{\tau}$), these events will distort the $M_{2C}$
calculation. This suggests that the dominant background to the
$\tilde{\chi}^{o}_{2},\tilde{\chi}^{o}_{2}\rightarrow 4l+\not{P}_{T}+$ hadrons
will be from other SUSY processes.
We now create a crude background model from which we estimate the magnitude
and distribution of the background using the ‘true’ HERWIG data as a guide. We
follow the suggestion of Ref. [160, 186] and use a wedgebox analysis plotting
the invariant mass $m_{ee}$ against $m_{\mu\mu}$ to supplement our knowledge
of the background events mixed in with our signal events. This wedgebox
analysis, seen in Fig. 7.7 for our HERWIG simulation, shows patterns that tell
us about the other SUSY states present. The presence of the strips along $91$ GeV
indicates that particle states are being created that decay to two leptons via
an on-shell $Z$. The observation that the intensity changes above and below
$m_{\mu\mu}=53$ GeV shows that many of the states produced have one branch
that decays via a $\tilde{\chi}^{o}_{2}$ while the other branch decays via an
on-shell $Z$. The lack of events immediately above and to the right of the
$(53\,\,{\rm{GeV}},53\,\,{\rm{GeV}})$ coordinate but below and to the left of
the $(91\,\,{\rm{GeV}},91\,\,{\rm{GeV}})$ coordinate suggests that symmetric
processes are not responsible for this background.
We also see the density of events in the block above $53\,\,{\rm{GeV}}$ and to
the right of $53\,\,{\rm{GeV}}$ suggests a cascade decay with an endpoint near
enough to $91\,\,{\rm{GeV}}$ that it is not distinguishable from $M_{Z}$.
Following this line of thinking, we model the background with a guess of an
asymmetric set of events where one branch has new states $G$, $X$ and $N$ with
masses such that the $m_{ll}$ endpoint is
${\tt{max}}\ m^{2}_{ll}({\rm{odd}\
\rm{branch}})=\frac{(M_{G}^{2}-M_{X}^{2})(M_{X}^{2}-M_{N}^{2})}{M_{X}^{2}}=(85\,\,{\rm{GeV}})^{2}$
(7.12)
and the other branch is our $\tilde{\chi}^{o}_{2}$ decay. The masses one
chooses to satisfy this edge did not prove important so long as the mass
differences were reasonably sized; we tried several different mass triplets
ending with the LSP, and all gave similar answers.
We now describe the background model used in our fits. One branch starts with
a massive state with $M_{G}=160\,\,{\rm{GeV}}$ which decays to a lepton and a
new state with $M_{X}=120\,\,{\rm{GeV}}$ which in turn decays to a lepton and the
LSP. The second branch has our signal decay with the $\tilde{\chi}^{o}_{2}$
decaying to $\tilde{\chi}^{o}_{1}$ and two leptons via a three-body decay. We
added UTM consistent with that observed in the events.
By matching the number of events seen outside the $m_{ll}<53\,\,{\rm{GeV}}$
region, we estimate the number of the events within the signal cuts that are
due to backgrounds. We estimate that a fraction $0.33$ of the events with both
OSSF pairs satisfying $m_{ll}<53\,\,{\rm{GeV}}$ are background events. The model
also gives a reasonable distribution for these events. Inspecting the actual
HERWIG results showed that the actual fraction of background events was $0.4$.
If we let the fraction be free and minimize the $\chi^{2}$ with respect to the
background fraction, we find a minimum at $0.3$.
Our background model is simplistic and does not represent the actual
processes, but it does a good job of accounting for the magnitude and the
shape of the background mixed into our signal distribution. Most of the HERWIG
background events came from $W$ bosons and charginos, which introduce extra
sources of missing transverse momentum. Nevertheless, the shape fits very
accurately, and the performance is discussed in Section 7.4. It is encouraging that our
estimate of the background shape and magnitude is relatively insensitive to
details of the full spectrum. Even ignoring the background, as we will see in
Section 7.4, still leads to a minimum $\chi^{2}$ at the correct mass.
* •
Combinatoric Ambiguities
If we assume that the full cascade effectively occurs at the primary vertex
(no displaced vertices), then the combinatoric question is a property of the
ideal distribution produced in the collisions. There are no combinatoric
issues if the two opposite-sign same-flavor lepton pairs are each different
flavors (_e.g._ the four leptons are $e^{+}$, $e^{-}$, $\mu^{+}$ and
$\mu^{-}$). However, if all four leptons are the same flavor, we have found
that we can still identify unique branch assignments 90% of the time. The
unique identification comes from the observation that both pairs must have an
invariant mass $m_{ll}$ less than the value of the ${\tt{max}}\ m_{ll}$ edge.
In $90\%$ of these events, there is only one combination that satisfies this
requirement. This allows one to use $95\%$ of the four-lepton events without
ambiguity: the first $50\%$ are identified from the two OSSF pairs being of
different flavors, and $90\%$ of the remaining events can be identified by
requiring that both OSSF pairs in an event’s combinatoric pairing satisfy
$m_{ll,\mathrm{Event}}<{\tt{max}}\ _{\mathrm{All}\ \mathrm{Events}}\ m_{ll}$.
The events which remain ambiguous have two possible assignments, both of which
are included with a weight of $0.5$ in the distribution. A short sketch of this
pairing logic follows this list.
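The following minimal sketch (illustrative only; the helper `m_ll`, which returns the invariant mass of a lepton pair, is an assumed input) applies the pairing logic just described: all opposite-sign same-flavor partitions of the four leptons are enumerated, only pairings with both $m_{ll}$ values below the measured edge are kept, and an event with two surviving pairings contributes each with weight $0.5$.

```python
from itertools import permutations

def _is_ossf(a, b):
    """Opposite-sign, same-flavor check for two lepton dicts."""
    return a['flavor'] == b['flavor'] and a['charge'] * b['charge'] < 0

def ossf_pairings(leptons):
    """Enumerate the ways to split four leptons into two OSSF pairs.

    Each lepton is a dict like {'flavor': 'e', 'charge': +1}.
    Returns a list of ((i, j), (k, l)) index pairings.
    """
    pairings = []
    for (i, j, k, l) in permutations(range(4)):
        if i < j and k < l and i < k:            # count each partition once
            if _is_ossf(leptons[i], leptons[j]) and _is_ossf(leptons[k], leptons[l]):
                pairings.append(((i, j), (k, l)))
    return pairings

def accepted_pairings(leptons, m_ll, m_ll_edge):
    """Keep pairings where both pairs lie below the measured max m_ll edge.

    m_ll(i, j) is an assumed helper returning the invariant mass of leptons i, j.
    A single survivor is used with weight 1; two survivors each get weight 0.5.
    """
    survivors = [p for p in ossf_pairings(leptons)
                 if m_ll(*p[0]) < m_ll_edge and m_ll(*p[1]) < m_ll_edge]
    weight = 1.0 if len(survivors) == 1 else 0.5
    return [(p, weight) for p in survivors]
```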
#### 7.3.2 Factors Affecting Distribution Recorded by the Detector
As just described, the ‘ideal’ in-principle distribution is created from the
observed $m_{ll}$ distribution and the observed UTM distribution. We include
combinatoric effects from events with four leptons of like flavors. Lastly, we
can estimate the magnitude of background events and their $M_{2C}$ and
$M_{2C,UB}$ shape. We now modify the in-principle distribution to simulate the
effects of the particle detector to form our final ‘ideal’ distribution that
includes all anticipated effects. The two main effects on the $M_{2C}$ and
$M_{2C,UB}$ distributions are the energy resolution and the $\not{P}_{T}$
cuts.
* •
Shape Dependence on Energy Resolution
Energy resolution causes the $M_{2C}$ and $M_{2C,UB}$ distributions to be
smeared. Here we assume the angular resolution is negligible. For both the
Mathematica Monte Carlo model and the HERWIG events we simulate the detector’s
energy resolution by scaling the four vectors for electrons, muons, and
hadrons by
$\frac{\delta E_{e}}{E_{e}}=\frac{0.1}{\sqrt{E_{e}}}+\frac{0.003}{E_{e}}+0.007$ (7.13)
$\frac{\delta E_{\mu}}{E_{\mu}}=0.03$ (7.14)
$\frac{\delta E_{H}}{E_{H}}=\frac{0.58}{\sqrt{E_{H}}}+\frac{0.018}{E_{H}}+0.025$ (7.15)
respectively [190][9]. The muon energy resolution is different because muons
are typically not contained by the calorimeter. (A sketch of this smearing
follows this list.) A more detailed detector simulation is of course possible,
but since we do not know the true behavior of any LHC detector until the device
begins taking data, a more sophisticated treatment would be of limited value
here. In practice the dependence of the
ideal distribution shapes on the missing transverse momentum resolution should
reflect the actual estimated uncertainty of the missing transverse momentum of
the observed events.
Smearing of the distributions decreases the area difference between two
normalized distributions, thereby decreasing the precision with which one can
determine the mass from a given number of signal events. This expanded
uncertainty can be seen in Section 7.4.
The $M_{2C}$ calculations depend on the mass difference, the four-momenta of
the four leptons, and the missing transverse momentum. As the lepton energy
resolution is very tight, the missing transverse momentum’s energy resolution
is dominated by the hadronic energy resolution. We model the energy resolution
of the UTM as a hadronic jet. This significantly increases the uncertainty in
the missing transverse momentum because hadrons have about five times the
energy resolution error.
In our Mathematica model, we represent the UTM as a single four-vector $k$,
but in reality it will be the sum of many four-vectors. Because we apply the
energy resolution smearing to $k$, if $k$ is small the simple Mathematica
model will have a smaller missing transverse momentum resolution error.
However, an event with almost $0$ UTM could have a large missing-momentum
resolution error if it has a lot of hadronic jets whose transverse momenta
mostly cancel. Fig 7.5 shows that most of the time we have considerable
hadronic UTM, so this effect is a minor correction on our results.
* •
Shape Dependence on Missing Transverse Momentum Cuts
A key distinguishing feature of these events is missing transverse momentum.
To eliminate the large number of Standard-Model events with four leptons and
no $\not{P}_{T}$, we will need to cut on this parameter. Fig. 7.8 shows
the HERWIG simulation’s missing transverse momentum versus $M_{2C}$. A
non-trivial $M_{2C}$ requires substantial $\not{P}_{T}$. Small $\not{P}_{T}$
of less than about $20\,\,{\rm{GeV}}$ only affects the $M_{2C}$ shape below
about $65\,\,{\rm{GeV}}$. The shape of the $M_{2C}<65\,\,{\rm{GeV}}$ region
would therefore require a higher-fidelity model from which to train the shapes.
Instead, we simply choose not to fit bins with $M_{2C}<65\,\,{\rm{GeV}}$.
All events near the end of the $M_{2C,UB}$ distribution require significant
$\not{P}_{T}$; therefore, $\not{P}_{T}$ cuts will not affect the part of this
distribution which we fit. The number of events with no non-trivial upper
bound will also be affected by $\not{P}_{T}$ cuts. We only fit the
$M_{2C,UB}$ distribution up to about $233\,\,{\rm{GeV}}$.
Figure 7.8: The missing transverse momentum vs $M_{2C}$ values for HERWIG
data. This shows that a $\not{P}_{T}>20\,\,{\rm{GeV}}$ cut would not affect
the distribution for $M_{2C}>65\,\,{\rm{GeV}}$.
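As referenced in the energy-resolution item above, here is a minimal sketch of the smearing of Eqns(7.13-7.15); the Gaussian form of the fluctuation and the simple species labels are illustrative assumptions, since the text specifies only the fractional widths and that the four-vectors are scaled.

```python
import numpy as np

def fractional_resolution(energy, species):
    """Fractional energy resolution delta(E)/E per Eqns(7.13-7.15).

    species: 'e' (electron), 'mu' (muon), or 'had' (hadronic object).
    Energies are in GeV.
    """
    if species == 'e':
        return 0.1 / np.sqrt(energy) + 0.003 / energy + 0.007
    if species == 'mu':
        return 0.03
    if species == 'had':
        return 0.58 / np.sqrt(energy) + 0.018 / energy + 0.025
    raise ValueError(f"unknown species: {species}")

def smear_four_vector(p4, species, rng=np.random.default_rng()):
    """Scale a four-vector (E, px, py, pz) by a Gaussian energy fluctuation.

    The Gaussian form is an assumption made here for illustration; the text
    only specifies the fractional widths and that the four-vectors are scaled.
    """
    p4 = np.asarray(p4, dtype=float)
    scale = 1.0 + fractional_resolution(p4[0], species) * rng.standard_normal()
    return p4 * scale
```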
### 7.4 Estimated Performance
Figure 7.9: The result of $\chi^{2}$ fits to the data with differing
assumptions for $100\,\,{\rm{fb}}^{-1}$ (left panel) and
$400\,\,{\rm{fb}}^{-1}$ (right panel). The thick line with filled squares
shows the final result with all cuts, resolution error, combinatorics, and
backgrounds included and estimated in the shape fitting. This gives us
$M_{\tilde{\chi}^{o}_{1}}=63.2\pm 4.1\,\,\,{\rm{GeV}}$ with $700$ events
(signal or background) representing $100\,\,{\rm{fb}}^{-1}$. After
$400\,\,{\rm{fb}}^{-1}$ this improves to $M_{\tilde{\chi}^{o}_{1}}=66.0\pm
1.8\,\,\,{\rm{GeV}}$. The error-free best case gives
$M_{\tilde{\chi}^{o}_{1}}=67.0\pm 0.9\,\,\,{\rm{GeV}}$. The correct value is
$M_{\tilde{\chi}^{o}_{1}}=67.4\,\,{\rm{GeV}}$.
Determining the mass based on the shape of the distribution enables one to use
all the events and not just those near the end point. We fit both upper-bound
and lower-bound shapes to the data as described in Appendix D. As one
expects, fitting the lower-bound shape more tightly constrains the mass from
below and fitting the upper-bound shape more tightly constrains the mass from
above; combining the two gives approximately even uncertainties. We calculate
ideal distributions assuming $M_{\tilde{\chi}^{o}_{1}}$ at the five values $50$,
$60$, $70$, $80$, and $90$ GeV. We then fit a quadratic interpolation through
the points. Our uncertainties are based on the value where $\chi^{2}$
increases by $1$ from the minimum of this interpolation. This uncertainty
estimate agrees with about $2/3$ of the results falling within that range
after repeated runs. Our uncertainty estimates do not include the error
propagated from the uncertainty in the mass difference (see Eq(7.11)).
We present results for an early LHC run, about $100\ {\rm{fb}}^{-1}$, and for
the longest likely LHC run before an upgrade, about $400\ {\rm{fb}}^{-1}$.
After about $100\ {\rm{fb}}^{-1}$, we have $700$ events (about $400$ signal
and $300$ background). After $400\ {\rm{fb}}^{-1}$, we have about $2700$
events (about $1600$ signal and $1100$ background). Only $4$ events out of
$1600$ are from direct pair production. Most of our signal events come at
the end of different decay chains starting from gluinos or squarks. The
upstream decay products produce significant UTM against which the two
$\tilde{\chi}^{o}_{2}$ parent particles recoil.
First, consider the ideal case. After $400\,\,\,{\rm{fb}}^{-1}$, using only signal
events and no energy resolution, the $\chi^{2}$ fits to the predicted shapes
give $M_{\tilde{\chi}^{o}_{1}}=67.0\pm 0.9\,\,\,{\rm{GeV}}$ (filled circles in
Fig. 7.9). This mass determination can practically be read off from the
endpoints seen in Fig. 7.2; the $M_{2C}$ endpoint is near $120\,\,{\rm{GeV}}$
and subtracting the mass difference gives
$M_{\tilde{\chi}^{o}_{1}}=120\,\,{\rm{GeV}}-M_{-}=67\,\,{\rm{GeV}}$. We now
explore how well we can do with fewer events and after incorporating the
effects listed in Section 7.3.
How does background affect the fit? If we ignore the existence of background
in our sample, and we fit all the events to the signal-only shapes, then we
find a poor fit shown as the empty circle curve in Fig. 7.9. By poor fit, we
mean the $\chi^{2}$ is substantially larger than the $72$ bins being compared
($36$ bins from each of the upper-bound and lower-bound distributions). Despite
this worse fit, the shape fits still give a very accurate mass estimate:
$M_{\tilde{\chi}^{o}_{1}}=65.4\pm 1.8\,\,{\rm{GeV}}$ after $100\
{\rm{fb}}^{-1}$ and $M_{\tilde{\chi}^{o}_{1}}=67.4\pm 0.9\,\,{\rm{GeV}}$ after
$400\ {\rm{fb}}^{-1}$. At this stage, we still assume perfect energy
resolution and no missing transverse momentum cut.
Next, if we create a background model as described in Section 7.3, we are able
to improve the $\chi^{2}$ fit to nearly $1$ per bin; the mass estimate remains
about the same, but the uncertainty increases by about $20\%$. We find a small
systematic shift (smaller than the uncertainty) in our mass prediction as we
increase the fraction of the shape due to the background model vs the signal
model. As we increased our fraction of background, we found the mass estimate
was shifted down from $66.5$ at $0\%$ background to $65.6$ when we were at
$60\%$ background. The best $\chi^{2}$ fit occurs with $30\%$ background,
which is very close to the $33\%$ we use from the estimate, but farther from
the true background fraction of about $40\%$. With $400\,\,{\rm{fb}}^{-1}$ of
data, the systematic errors are all but eliminated with the endpoint
dominating the mass estimate. These fits are shown as the triangles with
dashed lines and give $M_{\tilde{\chi}^{o}_{1}}=65.1\pm 2.4\,\,{\rm{GeV}}$
which after the full run becomes $M_{\tilde{\chi}^{o}_{1}}=67.3\pm
1.1\,\,{\rm{GeV}}$.
Including energy resolution as described in Section 7.3 leads to a large increase
in the uncertainty. The dashed line with empty square markers shows the
$\chi^{2}$ fit when we include both a background model and the effect of
energy resolution. These fits give $M_{\tilde{\chi}^{o}_{1}}=63.0\pm
3.6\,\,{\rm{GeV}}$, which after the full run becomes
$M_{\tilde{\chi}^{o}_{1}}=66.5\pm 1.6\,\,{\rm{GeV}}$.
The final shape factor that we account for is the set of cuts associated with
the missing transverse momentum. After we apply cuts requiring
$\not{P}_{T}>20\,\,{\rm{GeV}}$ and fit only $M_{2C}>65\,\,{\rm{GeV}}$, we have
our final result, shown by the thick lines with filled squares. This includes
all cuts, resolution error, combinatorics, and backgrounds. We find
$M_{\tilde{\chi}^{o}_{1}}=63.2\pm 4.1\,\,\,{\rm{GeV}}$ with $700$ events
(signal or background) representing $100\,\,{\rm{fb}}^{-1}$, and after
$400\,\,{\rm{fb}}^{-1}$ this improves to $M_{\tilde{\chi}^{o}_{1}}=66.0\pm
1.8\,\,\,{\rm{GeV}}$. The true mass on which the HERWIG simulation is based is
$M_{\tilde{\chi}^{o}_{1}}=67.4\,\,{\rm{GeV}}$, so all the estimates are within
about $1\ \sigma$ of the true mass.
Figure 7.10: HERWIG data for $100\,\,{\rm{fb}}^{-1}$ (thick line) and the
smooth ideal expectation assuming $M_{\tilde{\chi}^{o}_{1}}=70\,\,{\rm{GeV}}$
generated by Mathematica with all resolution, background, and combinatoric
effects included (thin line). The $\chi^{2}$ of this curve relative to the HERWIG data gives
the solid-black square on the left frame of Fig. 7.9.
Fig 7.10 shows the ideal curve expected if
$M_{\tilde{\chi}^{o}_{1}}=70\,\,{\rm{GeV}}$ including all effects from energy
resolution, background, combinatorics, and $\not{P}_{T}$ cuts. The $\chi^{2}$
corresponds to the solid square on the left panel of Fig. 7.9.
The error in mass determination obtained with limited statistics can be
estimated using Poisson statistics. In our studies we find that, as one would
expect, increasing the number of events by a factor of four brings down our
error by about a factor of two. This means that one could expect $\pm
8\,\,{\rm{GeV}}$ after about $25\,\,{\rm{fb}}^{-1}$, which represents $100$
signal events and $75$ background events.
### Chapter Summary
Despite adding some of the complicating effects one would encounter with real
data, we have discovered other factors which demonstrate that one could obtain
an even better precision than estimated in Chapter 6. There, we used only our
simple Mathematica model and neglected most sources of realistic uncertainty.
We assumed all the events could be modeled as being direct production without
spin-correlations. With these simplifications, we argued the mass could be
determined to $\pm 6\,\,{\rm{GeV}}$ using $250$ signal events.
In this chapter, we performed a case study to show that the relevant
$M_{2C}$ and $M_{2C,UB}$ shapes can be successfully determined from the mass
difference $M_{-}$, the observed $m_{ll}$ distribution, and the observed
upstream transverse momentum (UTM) distribution. We included and accounted
for many realistic effects: we modeled the large energy-resolution error of
hadronic jets, and we included the effects of backgrounds, $\not{P}_{T}$
cuts, and combinatorics. Our signal and backgrounds were generated with
HERWIG. We discussed how a Dalitz-like plot can estimate the background
fraction and shape. Observed inputs were used in a simple model, which makes
no reference to parton distribution functions, cross sections, or other
model-dependent factors that we are not likely to know early on, to determine
the ideal distribution shapes.
Despite these extra sources of uncertainty, we found a final mass
determination of $\pm 4.1\,\,{\rm{GeV}}$ with about $400$ signal events which
is still better than the appropriately scaled result from RS [15]. The sources
of the mass determination improvement are twofold: (1) the prediction and
fitting of upper-bound distribution, and (2) the sharper end-point in the
presence of large UTM. Under equivalent circumstances, the sharper endpoint is
enough to give a factor of $2$ improvement in the uncertainty over the direct
production case assumed in [15]. Fitting the upper bound tends to improve the
determination by an additional factor of $\sqrt{2}$. This improvement is then
used to fight the large hadronic-jet energy resolution and background
uncertainty.
Mass determination using $M_{2C}$ and $M_{2C,UB}$ applies to many other
processes. We have focused on cases where the mass difference is given by the
end-point of an $m_{ll}$ distribution involving a three-body decay. If there
is not a three-body decay, then the mass difference may be found by applying
other mass determination techniques like the mass shell techniques (MST) [166,
167, 168] or edges in cascade decays [156, 158, 159] or $M_{T2}$ at different
stages in symmetric decay chains [14].
How does our method’s performance compare to previous mass determination
methods? Firstly, this technique is more robust than the ${\tt{max}}\ M_{T2}$
‘kink’ because in fitting to the shape of the distribution, it does not rely
entirely on identification of the events near the kinematic boundary. One can view
$M_{2C}$ and $M_{2C,UB}$ as variables that event-by-event quantify the ‘kink’.
Other than the ‘kink’ technique, the previous techniques surveyed in Chapter 4
apply to cases where there is no three-body decay from which to measure the
mass difference directly. However, each of those techniques still constrains
the mass difference with great accuracy. The technique of [156, 157, 158, 159]
which uses edges from cascade decays determines the LSP mass to $\pm
3.4\,\,{\rm{GeV}}$ with about 500 thousand events from
$300\,\,{\rm{fb}}^{-1}$. The approach of [167] assumes a pair of symmetric
decay chains and assumes two events have the same structure. They reach $\pm
2.8\,\,{\rm{GeV}}$ using $700$ signal events after $300\,\,{\rm{fb}}^{-1}$,
but have a $2.5\,\,{\rm{GeV}}$ systematic bias that needs modeling to remove.
By comparison, adjusting to $700$ signal events we achieve $\pm
2.9\,\,{\rm{GeV}}$ without a systematic bias after propagating an error of
$0.08\,\,{\rm{GeV}}$ in the mass difference and with all discussed effects.
Uncertainty calculations differ amongst groups: some use repeated trials with
new sets of Monte Carlo data, and others use $\chi^{2}$. Without a direct
comparison under like circumstances, the optimal method is not clear; but it is
clear that fitting the $M_{2C}$ and $M_{2C,UB}$ distributions can determine
the mass of invisible particles at least as well as, if not better than, the
other known methods in both accuracy and precision.
In summary, we have developed a mass determination technique, based on the
constrained mass variables, which is able to determine the mass of a dark-
matter particle state produced at the LHC in events with large missing
transverse momentum. The $M_{2C}$ method, which bounds the mass from below,
was supplemented by a new distribution $M_{2C,UB}$ which bounds the mass from
above in events with large upstream transverse momentum. A particular
advantage of the method is that it also obtains substantial information from
events away from the end point allowing for a significant reduction in the
error. The shape of the distribution away from the end-point can be determined
without detailed knowledge of the underlying model, and as such, can provide
an early estimate of the mass. Once the underlying process and model
generating the event has been identified the structure away from the end-point
can be improved using, for example, HERWIG to produce the process dependent
shape. We performed a case-study simulation under LHC conditions to
demonstrate that mass-determination by fitting the $M_{2C}$ and $M_{2C,UB}$
distributions survives anticipated complications. With this fitting procedure
it is possible to get an early measurement of the mass: with just $400$ signal
events in our case study we found we would determine
$M_{\tilde{\chi}^{o}_{1}}=63.2\pm 4.1\,\,{\rm{GeV}}$. The ultimate accuracy
obtainable by this method is $M_{\tilde{\chi}^{o}_{1}}=66.0\pm
1.8\,\,{\rm{GeV}}$. We
conclude that this technique’s precision is as good as, if not better than,
the best existing techniques.
## Chapter 8 The Variable $M_{3C}$: On-shell Intermediate States
### Chapter Overview
The main concept of the constrained mass variable $M_{2C}$ [15, 16] is that,
after studying several kinematic quantities, we may have determined the mass
difference between two particle states very well but not the mass itself. We
then incorporate these additional constraints in the analysis of the events.
We check each event to test the lower and upper bounds on the mass scale that
still satisfy the mass difference and the on-shell conditions for the assumed
topology. Because the domain over which we are minimizing contains the true
value of the mass, the end-points of the lower-bound and upper-bound
distributions give the true mass.
The subject of this chapter is extending the constrained mass variable to the
case with three new on-shell states as depicted in Fig 4.3. The constrained
mass variable for this case will be called $M_{3C}$. We structure the chapter
around a case study of the benchmark point SPS 1a [3]. In this study, the
three new states are identified as $Y=\tilde{\chi}^{o}_{2}$, $X=\tilde{l}$ and
$N=\tilde{\chi}^{o}_{1}$. The visible particles leaving each branch are all
opposite-sign same-flavor (OSSF) leptons ($\mu$ or $e$). This allows us to
identify any hadronic activity as upstream transverse momentum.
The chapter is structured as follows: Sec. 8.1 introduces the definition of
$M_{3C}$ and how to calculate it. Sec. 8.3 discusses the dependence of
$M_{3C}$ on complications from combinatorics, large upstream transverse
momentum (UTM), $\not{P}_{T}$ cuts, parton distributions, and energy
resolution. Sec. 8.4 applies the $M_{3C}$ variables to HERWIG data from the
benchmark supersymmetry spectrum SPS 1a. Finally, we summarize the chapter's
contributions.
### 8.1 Introducing $M_{3C}$
We will now introduce the definition of $M_{3C}$, how to calculate it, and its
relationship to previous mass shell techniques.
#### 8.1.1 Definition of $M_{3C}$
The constrained mass variables $M_{3C,LB}$ and $M_{3C,UB}$ are the lower and
upper bounds on the mass of the third lightest new particle state in the
symmetric decay chain. The variable applies to the symmetric, on-shell
intermediate-state topology of Fig. 4.3, which depicts two partons that
collide and produce some observed upstream transverse momentum (UTM) with
four-momentum $k$ and an on-shell, pair-produced new state $Y$. On each
branch, $Y$ decays to an on-shell intermediate particle state $X$ and a
visible particle $v_{1}$ with masses $M_{X}$ and $m_{v_{1}}$. Then $X$ decays
to the dark-matter particle $N$ and a visible particle $v_{2}$ with masses
$M_{N}$ and $m_{v_{2}}$. The four-momenta of $v_{1}$, $v_{2}$ and $N$ are
respectively $\alpha_{1}$, $\alpha_{2}$ and $p$ on one branch and $\beta_{1}$,
$\beta_{2}$ and $q$ on the other branch. The missing transverse momentum
$\not{P}_{T}$ is given by the transverse part of $p+q$.
We initially assume that we have measured the mass differences from other
techniques. For an on-shell intermediate state, there is no single end-point
that gives the mass difference. The short decay chain gives a kinematic
endpoint $\max m_{12}$ described in Eq(4.3) that constrains a combination of
the squared mass differences. Unless two of the states are nearly degenerate,
the line with constant mass differences lies very close to the surface given
by Eq(4.3). The two mass differences are often tightly constrained in other
methods. The mass differences are constrained to within $0.3$ GeV from
studying long cascade decay chains where one combines constraints from several
endpoints of different invariant mass combinations [157]. The concepts from
Chapter 5 also provide another technique to determine the mass differences.
After initially assuming that we know the mass difference, we show that our
technique can also find the mass differences. The $M_{3C}$ distribution shape
is a function of both the mass scale and the mass differences. We can
constrain both the mass differences and the mass scale by fitting the $\max
m_{12}$ edge constraint and the ideal $M_{3C}(M_{N},\Delta M_{YN},\Delta
M_{XN})$ distribution shapes to the observed $M_{3C}(\Delta M_{YN},\Delta
M_{XN})$. To find all three parameters from this fit, we will take $M_{N}$,
$\Delta M_{YN}$, and $\Delta M_{XN}$ as independent variables.
For this first phase of the analysis, let us assume the mass differences are
given. For each event, the variable $M_{3C,LB}$ is the minimum value of the
mass of $Y$ (the third lightest state) after minimizing over the unknown
division of the missing transverse momentum $\not{P}_{T}$ between the two
dark-matter particles $N$:
$m^{2}_{3C,LB}(\Delta M_{YN},\Delta M_{XN})=\min_{p,q}\ (p+\alpha_{1}+\alpha_{2})^{2}$ (8.1)

constrained to

$(p+q)_{T}=\not{P}_{T}$ (8.2)

$\sqrt{(\alpha_{1}+\alpha_{2}+p)^{2}}-\sqrt{p^{2}}=\Delta M_{YN}$ (8.3)

$\sqrt{(\alpha_{2}+p)^{2}}-\sqrt{p^{2}}=\Delta M_{XN}$ (8.4)

$(\alpha_{1}+\alpha_{2}+p)^{2}=(\beta_{1}+\beta_{2}+q)^{2}$ (8.5)

$(\alpha_{2}+p)^{2}=(\beta_{2}+q)^{2}$ (8.6)

$p^{2}=q^{2}$ (8.7)
where $\Delta M_{YN}=M_{Y}-M_{N}$ and $\Delta M_{XN}=M_{X}-M_{N}$. There are
eight unknowns in the four momenta of $p$ and $q$ and seven equations of
constraint. Likewise we define $M_{3C,UB}$ as the maximum value of $M_{Y}$
compatible with the same constraints. We discuss how to numerically implement
this minimization and maximization in Sec. 8.2. Because the true $p$ and $q$
are within the domain over which we are minimizing (or maximizing), the
minimum (maximum) is guaranteed to be less than (greater than) or equal to
$M_{Y}$.
Figure 8.1: Ideal $M_{3C,LB}$ and $M_{3C,UB}$ distribution for 25000 events
in two cases both sharing $\Delta M_{YN}=100$ GeV and $\Delta M_{XN}=50$ GeV.
The solid, thick line shows $M_{Y}=200$ GeV, and the dashed, thin line shows
$M_{Y}=250$ GeV.
Figure 8.1 shows ideal $M_{3C,LB}$ and $M_{3C,UB}$ distributions for
$25000$ events in two cases, both sharing $\Delta M_{YN}=100$ GeV and $\Delta
M_{XN}=50$ GeV. The dashed line represents the distributions from events with
$M_{Y}=250$ GeV, and the solid line represents the distributions from events
with $M_{Y}=200$ GeV. One can clearly see sharp end-points in both the
upper-bound and lower-bound distributions that give the value of $M_{Y}$. The
upper-bound distribution is shown in red.
We expect an event to better constrain the mass scale if we are given
additional information about that event. In comparison to $M_{2C}$ where $Y$
decays directly to $N$, the on-shell intermediate state has an additional
state $X$. The extra state $X$ and the information about its mass difference
$\Delta M_{XN}$ enable $M_{3C}$ to place an event-by-event bound on $M_{Y}$
that is stronger than in the case of $M_{2C}$. We will see that this stronger bound is
partially offset by greater sensitivity to errors in momentum measurements.
The variable $M_{3C}$, like the other variables we have discussed ($M_{2C}$,
$M_{T2}$, $M_{T}$ and $M_{CT}$), is invariant under longitudinal boosts of its
input parameters. We can understand this because all the constraint equations
are invariant under longitudinal boosts. The unknown $p$ and $q$ are minimized
over all possible values fitting the constraints, so changing the frame of
reference will not change the extrema of the Lorentz-invariant quantity
$(p+\alpha)^{2}$.
### 8.2 How to calculate $M_{3C}$
To find $M_{3C}$, we observe that if we assume the masses of $Y$, $X$, and $N$
to be $(\chi_{Y},\chi_{X},\chi_{N})$ (we again use $\chi$ to distinguish
hypothetical masses $(\chi_{Y},\chi_{X},\chi_{N})$ from the true masses
$(M_{Y},M_{X},M_{N})$) with the given mass differences, then there are eight
constraints
$(p+q)_{T}=\not{P}_{T}$ (8.8)

$(\alpha_{1}+\alpha_{2}+p)^{2}=(\beta_{1}+\beta_{2}+q)^{2}=\chi^{2}_{Y}$ (8.9)

$(\alpha_{2}+p)^{2}=(\beta_{2}+q)^{2}=\chi^{2}_{X}=(\chi_{Y}-\Delta M_{YN}+\Delta M_{XN})^{2}$ (8.10)

$p^{2}=q^{2}=\chi_{N}^{2}=(\chi_{Y}-\Delta M_{YN})^{2}$ (8.11)
and eight unknowns, $p_{\mu}$ and $q_{\mu}$. The spatial momenta $\vec{p}$ and
$\vec{q}$ can be found as linear functions of the $0^{\rm{th}}$ component of
$p$ and $q$ by solving the matrix equation
$\left(\begin{matrix}
1&0&0&1&0&0\cr
0&1&0&0&1&0\cr
-2\alpha_{x}&-2\alpha_{y}&-2\alpha_{z}&0&0&0\cr
0&0&0&-2\beta_{x}&-2\beta_{y}&-2\beta_{z}\cr
-2(\alpha_{2})_{x}&-2(\alpha_{2})_{y}&-2(\alpha_{2})_{z}&0&0&0\cr
0&0&0&-2(\beta_{2})_{x}&-2(\beta_{2})_{y}&-2(\beta_{2})_{z}
\end{matrix}\right)
\left(\begin{matrix}p_{x}\cr p_{y}\cr p_{z}\cr q_{x}\cr q_{y}\cr q_{z}\end{matrix}\right)
=
\left(\begin{matrix}
-(k+\alpha+\beta)_{x}\cr
-(k+\alpha+\beta)_{y}\cr
-2\alpha_{o}p_{o}+(\chi_{Y}^{2}-\chi_{N}^{2})-\alpha^{2}\cr
-2\beta_{o}q_{o}+(\chi_{Y}^{2}-\chi_{N}^{2})-\beta^{2}\cr
-2(\alpha_{2})_{o}p_{o}+(\chi_{X}^{2}-\chi_{N}^{2})-(\alpha_{2})^{2}\cr
-2(\beta_{2})_{o}q_{o}+(\chi_{X}^{2}-\chi_{N}^{2})-(\beta_{2})^{2}
\end{matrix}\right)$ (8.12)
where $\alpha=\alpha_{1}+\alpha_{2}$ and $\beta=\beta_{1}+\beta_{2}$. We
substitute $\vec{p}$ and $\vec{q}$ into the on-shell constraints
$\displaystyle p_{o}^{2}-(\vec{p}(p_{o},q_{o}))^{2}=\chi_{N}^{2}$ (8.13)
$\displaystyle q_{o}^{2}-(\vec{q}(p_{o},q_{o}))^{2}=\chi_{N}^{2}$ (8.14)
giving two quadratic equations for $p_{o}$ and $q_{o}$. These give four
complex solutions for the pair $p_{o}$ and $q_{o}$. We test each event for
compatibility with a hypothetical triplet of masses
$(\chi_{Y},\chi_{X},\chi_{N})=(\chi_{Y},\chi_{Y}-\Delta M_{YN}+\Delta
M_{XN},\chi_{Y}-\Delta M_{YN})$. If there are any purely real physical
solutions with $p_{o}>0$ and $q_{o}>0$, then we consider the mass triplet
$(\chi_{Y},\chi_{X},\chi_{N})$ viable.
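As a concrete illustration of this feasibility test, the following Python sketch (the thesis itself uses Mathematica) builds the linear system of Eq(8.12) and then searches numerically for real, positive-energy solutions of the on-shell conditions Eqs(8.13)-(8.14). The four-vector layout, the helper names, and the use of a generic root finder in place of solving the coupled quadratics exactly are illustrative assumptions, not the implementation used for the results quoted below.

```python
import numpy as np
from scipy.optimize import fsolve

def msq(v):
    """Minkowski square of a four-vector stored as (E, px, py, pz)."""
    return v[0]**2 - np.dot(v[1:], v[1:])

def spatial_momenta(p0, q0, a, a2, b, b2, k, chiY, chiX, chiN):
    """Solve the linear system of Eq(8.12) for (px,py,pz,qx,qy,qz)
    at fixed energies p0, q0.  Here a = alpha1+alpha2, b = beta1+beta2."""
    M = np.zeros((6, 6))
    M[0, 0] = M[0, 3] = 1.0          # (p+q)_x constraint
    M[1, 1] = M[1, 4] = 1.0          # (p+q)_y constraint
    M[2, 0:3] = -2.0 * a[1:]
    M[3, 3:6] = -2.0 * b[1:]
    M[4, 0:3] = -2.0 * a2[1:]
    M[5, 3:6] = -2.0 * b2[1:]
    tot = k + a + b                  # missing pT = -(k + alpha + beta)_T
    rhs = np.array([
        -tot[1], -tot[2],
        -2*a[0]*p0  + (chiY**2 - chiN**2) - msq(a),
        -2*b[0]*q0  + (chiY**2 - chiN**2) - msq(b),
        -2*a2[0]*p0 + (chiX**2 - chiN**2) - msq(a2),
        -2*b2[0]*q0 + (chiX**2 - chiN**2) - msq(b2)])
    return np.linalg.solve(M, rhs)

def has_physical_solution(chiY, a1, a2, b1, b2, k, dM_YN, dM_XN):
    """Return True if the hypothesis (chiY, chiY-dM_YN+dM_XN, chiY-dM_YN)
    admits real p, q with p0, q0 > 0 satisfying Eqs(8.8)-(8.11)."""
    chiX, chiN = chiY - dM_YN + dM_XN, chiY - dM_YN
    a, b = a1 + a2, b1 + b2

    def residual(x):
        p0, q0 = x
        s = spatial_momenta(p0, q0, a, a2, b, b2, k, chiY, chiX, chiN)
        return [p0**2 - np.dot(s[:3], s[:3]) - chiN**2,
                q0**2 - np.dot(s[3:], s[3:]) - chiN**2]

    # a generic root finder from a few starting guesses stands in for the
    # exact (four-root) solution of the coupled quadratics
    for guess in [(chiN + 1.0, chiN + 1.0), (chiN + 100.0, chiN + 100.0)]:
        sol, _, ok, _ = fsolve(residual, guess, full_output=True)
        if ok == 1 and np.max(np.abs(residual(sol))) < 1e-6 and min(sol) > 0:
            return True
    return False
```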
As we scan $\chi_{Y}$, a solution begins to exist at a value less than or
equal to $M_{Y}$ and then sometimes ceases to be a solution above $M_{Y}$.
Sometimes there are multiple islands of solutions. To find the $M_{3C}$, we
can test each bin starting at $\chi_{Y}=\Delta M_{YN}$ along the path
parameterized by $\chi_{Y}$ and the mass differences to find the first bin
where at least one physical solution exists. This is the lower bound value of
$M_{3C}$ for the event.
We proceed likewise for the upper bound: we begin testing at the largest
conceivable mass scale we expect for the $Y$ particle state. If a solution
exists, we declare the $M_{3C,UB}$ trivial. If no solution exists, then we
search downward in mass scale until a solution exists.
A faster algorithm involves a bisection search for a solution within the
window that starts at $\Delta M_{YN}$ and ends at our highest conceivable
mass. We then use a binary search algorithm to find at what $\chi_{Y}$ the
solution first appears, giving $M_{3C,LB}$, or at what $\chi_{Y}$ the solution
disappears, giving $M_{3C,UB}$. There are rare events with multiple islands of
solutions. This occurs in about $0.01\%$ of the events with $0$ UTM and in
about $0.1\%$ for $k_{T}=250$ GeV. In our algorithm we neglect windows of
solutions narrower than $15$ GeV. We report the lower edge of the lower-mass
island as the lower bound and the upper edge of the higher-mass island as the
upper bound. Because of the presence of islands, we are not guaranteed that
solutions exist everywhere between $M_{3C,LB}$ and $M_{3C,UB}$. With the
inclusion of energy-resolution errors and background events, we also find
cases where there are no solutions anywhere along the path being
parameterized. If there are no solutions anywhere in the domain, we set
$M_{3C,LB}$ to the largest conceivable mass scale, and we set
$M_{3C,UB}=\Delta M_{YN}$.
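A minimal sketch of this bisection procedure (Python, assuming a single solution window and ignoring the rare multiple-island events discussed above; `feasible` wraps the solution test sketched earlier for one event) might look as follows:

```python
import numpy as np

def m3c_bounds(feasible, dM_YN, chi_max=2000.0, tol=0.5, coarse_step=15.0):
    """Event-by-event M3C lower/upper bounds by coarse scan plus bisection.

    feasible(chiY) -> bool tests whether a real, physical solution exists.
    Windows narrower than `coarse_step` are ignored, mirroring the 15 GeV
    threshold quoted in the text."""
    # coarse scan for any feasible mass hypothesis along the path
    grid = np.arange(dM_YN, chi_max + coarse_step, coarse_step)
    seeds = [x for x in grid if feasible(x)]
    if not seeds:                       # no solution anywhere on the path
        return chi_max, dM_YN           # convention used in the text

    # lower bound: onset of solutions between dM_YN and the first feasible point
    lo, hi = dM_YN, seeds[0]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    m3c_lb = hi

    # upper bound: where solutions disappear (trivial if feasible at chi_max)
    if feasible(chi_max):
        return m3c_lb, chi_max
    lo, hi = seeds[-1], chi_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return m3c_lb, lo

# usage sketch for one event:
# feasible = lambda chiY: has_physical_solution(chiY, a1, a2, b1, b2, k, dM_YN, dM_XN)
# m_lb, m_ub = m3c_bounds(feasible, dM_YN)
```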
#### 8.2.1 Comparison to other Mass Shell Techniques
The variable $M_{3C}$ is a hybrid mass shell technique [168]. In Chapter 4 we
reviewed other mass shell techniques that measure the mass in the case of
three new states. Cheng, Gunion, Han, Marandella, McElrath (CGHMM) [166]
describe counting solutions at assumed values for the masses of $Y$, $X$, and
$N$. By incorporating a minimization or maximization, we enhance CGHMM's
approach because we have a variable whose value changes slightly with slight
changes of the inputs instead of the binary on-off that CGHMM has with the
existence of a solution (I am grateful to Chris Lester for pointing out to me
the importance of this feature). We also incorporate knowledge of the added
information from other measurements which accurately determine the mass
differences. Finally, the quantity $M_{3C}$ can form a distribution whose
shape tells us information about the masses. Because for most events there is
only one ‘turn-on’ point below $M_{Y}$, the distribution $M_{3C,LB}$ is very
similar to the derivative of Fig. 8 of CGHMM [166] to the left of their peak,
and $M_{3C,UB}$ is similar to the negative of the derivative to the right of
their peak. They differ in that there may be multiple windows of solutions;
CGHMM's Fig. 8 is not exactly along the line of fixed mass differences; and
the effects of backgrounds and energy resolution are dealt with differently.
We also hope to show that using the distribution's shape enables us to exploit
the essentially negligible dependence of the distributions on the unknown
collision energies and to incorporate the dependence on UTM directly. This
diminishes the dependence of the measurement on the unknown model while still
allowing us to exploit the majority of the distribution shape in the mass
determination.
After studying previous MSTs, we were tempted to use Bayes' theorem with a
parton distribution function as a likelihood function, as was done by
Goldstein and Dalitz [161] and Kondo, Chikamatsu, Kim [162] (GDKCK). They used
the parton distribution function to weight the different mass estimates of the
top-quark mass ($M_{Y}$ in our topology). We found that such a weighting leads
to a prediction for $M_{Y}$ much smaller than the true value. This can be
understood because the parton distributions make collisions with smaller
center-of-mass energies (small $x$) more likely; therefore the posterior will
prefer smaller values of $M_{Y}$, which are only possible for smaller values
of $x$. Only if one includes the cross-section for production, i.e. the
likelihood of the event existing at all, in the Bayes likelihood function do
we have the appropriate factor that suppresses small values of $x$ and
therefore small values of $M_{Y}$. This balance then leads to the maximum
likelihood (in the limit of infinite data) occurring at the correct $M_{Y}$.
Unfortunately, inclusion of the magnitude of the cross section introduces
large model dependence. In the case of the top-quark mass determination, the
GDKCK technique gives reasonable results. This is because they were not
scanning the mass scale, but rather scanning $\chi_{Y}$ (the top-quark mass)
while assuming $\chi_{N}=M_{N}=0$ and $\chi_{X}=M_{X}=M_{W}$. The likelihood
of solutions as one scans $\chi_{Y}$ rapidly goes to zero below the true
top-quark mass $M_{top}$. The parton distribution suppresses the likelihood
above the true $M_{top}$. The net result gives the maximum likelihood near the
true top-quark mass but suffers from a systematic bias [163, 164] that must be
removed by modeling [165].
### 8.3 Factors for Successful Shape Fitting
One major advantage of using the $M_{3C}$ distribution (just as the $M_{2C}$
distribution) is that the bulk of the relevant events are used to determine
the mass and not just those near the endpoint. To make the approach mostly
model independent, we study on what factors the distributions shape depends.
We show that there is a strong dependence on upstream transverse momentum
(UTM) which can be measured and used as an input and therefore does not
increase the model dependency. We show there is no numerically significant
dependence on the collision energy which is distributed according to the
parton distribution functions. This makes the distribution shape independent
of the production cross section and the details of what happens upstream from
the part of the decay chain that we are studying. We model these effects with
a simple Mathematica Monte Carlo event generator assuming $M_{Y}=200$ GeV,
$M_{X}=150$ GeV, and $M_{N}=100$ GeV.
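The factors are examined in turn in the bullet points below. As a rough Python analogue of this simple generator (the thesis generator is written in Mathematica; the helper names and structure here are illustrative), one might write:

```python
import numpy as np

MY, MX, MN = 200.0, 150.0, 100.0          # GeV, masses used in the text

def rand_dir(rng):
    """Isotropic unit vector."""
    cth = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sth = np.sqrt(1.0 - cth**2)
    return np.array([sth * np.cos(phi), sth * np.sin(phi), cth])

def two_body(M, m1, m2, rng):
    """Rest-frame four-vectors (E, px, py, pz) for M -> m1 + m2, uniform angles."""
    p = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    n = rand_dir(rng)
    return (np.concatenate(([np.sqrt(p**2 + m1**2)],  p * n)),
            np.concatenate(([np.sqrt(p**2 + m2**2)], -p * n)))

def boost(p4, beta):
    """Boost a rest-frame four-vector into the frame where the parent has velocity beta."""
    b2 = np.dot(beta, beta)
    if b2 < 1e-16:
        return p4.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = np.dot(beta, p4[1:])
    E = gamma * (p4[0] + bp)
    vec = p4[1:] + ((gamma - 1.0) * bp / b2 + gamma * p4[0]) * beta
    return np.concatenate(([E], vec))

def decay_branch(pY, rng):
    """Y -> X v1, X -> N v2 with massless visibles and no spin correlations."""
    pX, v1 = two_body(MY, MX, 0.0, rng)
    pX, v1 = boost(pX, pY[1:] / pY[0]), boost(v1, pY[1:] / pY[0])
    pN, v2 = two_body(MX, MN, 0.0, rng)
    pN, v2 = boost(pN, pX[1:] / pX[0]), boost(v2, pX[1:] / pX[0])
    return v1, v2, pN

def generate_event(sqrt_s=600.0, rng=None):
    """Back-to-back Y pair in the parton COM frame (k_T = 0)."""
    rng = rng or np.random.default_rng()
    E = sqrt_s / 2.0
    n = rand_dir(rng)
    pY = np.concatenate(([E],  np.sqrt(E**2 - MY**2) * n))
    pYbar = np.concatenate(([E], -np.sqrt(E**2 - MY**2) * n))
    branch_a, branch_b = decay_branch(pY, rng), decay_branch(pYbar, rng)
    miss_pt = branch_a[2][1:3] + branch_b[2][1:3]   # transverse momenta of the two N's
    return branch_a, branch_b, miss_pt
```

Each branch returns the two visible four-vectors and the invisible $N$ momentum, from which $\not{P}_{T}$ is reconstructed; a nonzero $k_{T}$ would then be simulated by boosting the whole event in the transverse plane.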
* •
Effect of Combinatorics Ambiguities
Just as in the topology in Fig 4.4 studied earlier, where
$\tilde{\chi}^{o}_{2}$ decays via a three body decay, the branch assignments
can be determined by either distinct OSSF pairs or by studying which OSSF
pairs have both $m_{12}\leq\max m_{12}$. In $90\%$ of the events, there is
only one combination that satisfies this requirement. This allows us to know
the branch assignment in $95\%$ of the four-lepton events without ambiguity.
Unlike the three-body decay case, the order of the two leptons on each branch
matters. The intermediate mass $M_{X}^{2}=(\alpha_{2}+p)^{2}$ depends on
$\alpha_{2}$ and does not depend on $\alpha_{1}$. To resolve this ambiguity we
consider the four combinations that preserve the branch assignment but differ
in their ordering. The $M_{3C,LB}$ for the event is the minimum of these
combinations. Likewise the $M_{3C,UB}$ is the maximum of these combinations.
As one expects, Fig. 8.2 (Left) shows how the combinatorics ambiguity degrades
the sharpness of the cut-off at the true mass. Not all applications share this
ambiguity; for example in top-quark mass determination (pair produced with
$Y=top$, $X=W^{\pm}$, $N=\nu$) the $b$-quark-jet marks $\alpha_{1}$ and the
lepton marks $\alpha_{2}$.
Figure 8.2: (Left) The $M_{3C}$ distributions before (solid) and after
(dashed) introducing the combinatoric ambiguity. (Right) The $M_{3C}$
distributions with and without UTM. The no UTM case ($k_{T}=0$) is shown by
the solid line; the large UTM case with $k_{T}=250$ GeV is shown by the dashed
line.
* •
Effect of Large Upstream Transverse Momentum
In a similar behavior to $M_{2C}$, the distributions of the variable $M_{3C}$
show a strong dependence on large upstream transverse momentum (UTM). In our
case study this is identified as the combination of all the hadronic activity.
Fig 8.2 (Right) shows the stronger upper-bound cut-off in the presence of
large UTM. Unlike $M_{2C}$, in $M_{3C}$ with $k_{T}=0$ we still have events
with non-trivial upper bound values.
We also tested the distribution for different values of $k^{2}$. In Fig 8.2
(Right) we fixed $k^{2}=(100\,\,{\rm{GeV}})^{2}$. We also performed
simulations with $k^{2}=(500\,\,{\rm{GeV}})^{2}$ and found the difference of
the two $M_{3C}$ distributions consistent with zero after 15000 events. In
other words, the distribution depends mostly on $k_{x}$ and $k_{y}$ and
appears independent of $k_{0}$.
* •
The Effects of Detector Energy Resolution
Figure 8.3: The effect of energy resolution on the $M_{3C}$ distribution.
(Left) The dotted line shows the energy resolution has washed out the sharp
cut-off. (Right) $M_{3C,LB}$ with perfect energy resolution plotted against
the result with realistic energy resolutions.
Compared to $M_{2C}$, the information about the extra states gives a stronger
set of bounds. Unfortunately, the solution is also more sensitive to momentum
measurement error. We model the finite energy resolution using Eqs(7.13-7.15).
The hadronic energy resolution is larger than the leptonic energy resolution
which will increase the uncertainty in the missing transverse momentum.
Fig. 8.3 shows the effect of realistic leptonic energy resolution on the
$M_{3C}$ distribution while keeping $k_{T}=0$. On the left we show the
distribution with realistic energy resolution (dashed line) compared to the
one with perfect energy resolution (solid line). The energy resolution washes
out the sharp cut-off. On the right we show $M_{3C,LB}$ with perfect energy
resolution plotted against the result with realistic energy resolution. This
shows that the cut-off is strongly washed out because the events with
$M_{3C}$ closer to the true value of $M_{Y}$ ($200$ GeV in this case) are more
sensitive to energy resolution than the events with $M_{3C}$ closer to
$\Delta M_{YN}$. The peak in the upper-bound distribution at $M_{3C,UB}=100$
GeV comes from events that no longer have solutions after smearing the
four-momenta.
Because the energy resolution affects the distribution shape, its correct
modeling is important. In the actual LHC events the $\not{P}_{T}$ energy
resolution will depend on the hadronic activity in the events being
considered. Two events with the same $k_{T}=0$ may have drastically different
$\not{P}_{T}$ resolutions. Modeling the actual detector’s energy resolution
for the events used is important to predict the set of ideal distribution
shapes which are compared against the low-statistics observed data.
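A rough sketch of such smearing is shown below; the resolution values used here are purely illustrative assumptions, not the parameterizations of Eqs(7.13-7.15) used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def smear(p4, rel_sigma):
    """Scale a (massless) four-vector by a Gaussian energy factor; direction unchanged."""
    return max(rng.normal(1.0, rel_sigma), 0.0) * p4

def smear_event(leptons, jets):
    """Illustrative resolutions only: roughly 1% for leptons and a
    stochastic-plus-constant term for jets (both assumed values)."""
    leptons = [smear(l, 0.01) for l in leptons]
    jets = [smear(j, np.hypot(0.5 / np.sqrt(j[0]), 0.03)) for j in jets]
    vis = sum(leptons) + sum(jets)          # total smeared visible four-momentum
    miss_pt = -vis[1:3]                     # recomputed missing transverse momentum
    return leptons, jets, miss_pt
```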
* •
Parton Distribution Function Dependence
For a mostly model-independent mass determination technique, we would like to
have a distribution that is independent of the specific production mechanism
of the assumed event topology. The parton distributions determine the center-
of-mass energy $\sqrt{s}$ of the hard collisions; but the cross section
depends on model-dependent couplings and parameters. The events we consider
may come from production of different initial states (gluons or squarks) but
end in the assumed decay topology. The $M_{3C}$ distribution, like the
$M_{2C}$ distribution, shows very little dependence on the underlying
collision parameters or circumstances.
Fig 8.4 (Left) shows the dependence of the $M_{3C}$ distributions on the
parton collision energy. The solid line shows the $M_{3C}$ distributions of
events with collision energy $\sqrt{s}$ distributed according to Eq(7.10), and
the dashed line shows the $M_{3C}$ distributions of events with fixed
$\sqrt{s}=600$ GeV. Fig 8.4 (Right) shows the difference of these two
distributions with $2\sigma$ error bars as calculated from $15000$ events. The
two distributions are equal to within this numerical precision.
Figure 8.4: The dependence of the $M_{3C}$ distributions on the parton
collision energy. The solid line shows the collision distributed according to
Eq(7.10), and the dashed line shows the collision energy fixed at
$\sqrt{s}=600$ GeV.
* •
Effects of $\not{P}_{T}$ Cuts
As described in [187, 186, 15, 16], the Standard Model backgrounds with four
leptons and missing transverse momentum are very strongly suppressed after a
missing transverse momentum cut. This requires an analysis of the effect a
$\not{P}_{T}>20$ GeV cut will have on the distribution shape. Fig.
8.5 shows that the effect of this cut is dominantly on the smallest $M_{3C}$
bins. On the left we see the $M_{3C,LB}$ result versus the $\not{P}_{T}$.
Unlike the $M_{2C}$ case in Fig. 7.8, the $M_{3C}$ solutions in Fig. 8.5 do
not correlate with the $\not{P}_{T}$. The right side of the figure shows the
difference between the $M_{3C,UB}$ and $M_{3C,LB}$ distributions with and
without the cut $\not{P}_{T}>20$ GeV. The smallest bins of $M_{3C,LB}$ are the
only bins to be statistically significantly affected. The left side suggests
this lack of dependence on $\not{P}_{T}$ cuts is somewhat accidental and is
due to the nearly uniform distribution of $M_{3C}$ solutions being removed by
the cut. The stronger dependence of the smallest $M_{3C}$ bins on the
$\not{P}_{T}$ cut means we can either model the effect or exclude the first
bins (about $10$ GeV worth) from the distribution used to predict the mass. We
will choose the latter because we will find that the background events also
congregate in these first several bins.
Figure 8.5: The effect of missing transverse momentum cuts on the $M_{3C}$
distributions. (Left) The $M_{3C,LB}$ result versus the $\not{P}_{T}$. (Right)
The difference of the $M_{3C,UB}$ and $M_{3C,LB}$ distributions with and
without the cut $\not{P}_{T}>20$ GeV. The smallest bins of $M_{3C,LB}$ are the
only bins to be statistically significantly affected.
* •
Spin correlations
In our simulation to produce the ideal curves, we assumed each decay was
uncorrelated with the spin in the rest frame of the decaying particle. Spin
correlations at production may affect this; however, such spin correlations
are washed out when each branch of our assumed topology is at the end of a
longer decay chain. These upstream decays are the source of considerable UTM.
Some spin correlation information can be easily taken into account. The
$m_{12}$ (or $m_{34}$) distribution’s shape is sensitive to the spin
correlations along the decay chain [150]. The observed $m_{12}$ (or $m_{34}$)
distribution can be used as an input to producing the ideal distribution
shape. In this way spin correlations along the decay chain can be taken into
account in the simulations of the ideal distributions.
Spin correlations between the two branches can also affect the distribution
shape. To demonstrate this, we modeled a strongly spin-correlated direct
production process. Fig 8.6 (Left) shows the spin-correlated process that we
consider. Fig 8.6 (Right) shows the $M_{3C}$ upper and lower bound
distributions from this process compared to the $M_{3C}$ distribution from the
same topology and masses but without spin correlations. We compare
distributions with perfect energy resolution, $m_{v_{1}}=m_{v_{2}}=0$ GeV,
$M_{Y}=200$ GeV, $M_{X}=150$ GeV, and $M_{N}=100$ GeV. Our maximally spin-
correlated process involves pair production of $Y$ through a pseudoscalar $A$.
The fermion $Y$ in both branches decays to a complex scalar $X$ and visible
fermion $v_{1}$ through a purely chiral interaction. The scalar $X$ then
decays to the dark-matter particle $N$ and another visible particle $v_{2}$.
The production of the pseudoscalar ensures that the $Y$ and $\bar{Y}$ are in a
configuration
$\sqrt{2}^{\,-1}(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)$. The
particle $Y$ then decays with $X$ preferentially aligned with the spin. The
$\bar{Y}$ decays with $X^{*}$ preferentially aligned against the spin. Because
$X$ is a scalar, the particle $N$ decays uniformly in all directions in the
rest frame of $X$. The correlated directions of $X$ cause the two sources of
missing transverse momentum to be preferentially parallel. The resulting
greater magnitude of missing transverse momentum increases the number of cases
where $M_{3C}$ has a solution closer to the endpoint. For this reason the
spin-correlated distribution (red dotted distribution) lies above the
uncorrelated distribution (thick black lower-bound distribution and thick blue
upper-bound distribution). The upper-bound distribution is statistically
identical after 25000 events. The lower-bound distribution clearly has been
changed, but not as much as the $M_{2C}$ distribution in Fig. 6.3. This is due
to the subsequent decay of the $X$ particle, which lessens the likelihood that
the two $N$s will be parallel. For the remainder of the chapter we assume no
such spin correlations are present.
Figure 8.6: Effect of a spin correlated process on the $M_{3C}$ distributions.
Modeled masses are $M_{Y}=200$ GeV, $M_{X}=150$ GeV, and $M_{N}=100$ GeV. The
thick black and thick blue lines show the distributions of the uncorrelated
lower bound and upper bound $M_{3C}$. The dotted red lines show the
distributions of the spin correlated process.
* •
Backgrounds
The Standard Model backgrounds for four-leptons and missing transverse
momentum are studied in [187, 186]. In the two previous chapters we summarized
the SM backgrounds and the dominance of SUSY backgrounds for this channel (see
also [15][16]). As was mentioned earlier, the Standard Model backgrounds for
four leptons with missing transverse momentum are very strongly suppressed
after a missing transverse momentum cut.
To improve the quality of the fit, a model for the backgrounds can be created
based on assumptions about the origin of the events and a wedge-box analysis
like those described in Bisset et al. [160] and references therein. We
constructed such a model in the previous chapter and found that the
distribution shape isolated the correct mass of $\tilde{\chi}^{o}_{1}$ to
within $1$ GeV both with and without the background model. Although the
quality of the fit without modeling the backgrounds is decreased, we find that
the mass of the LSP associated with the best fit is tolerant to unknown
backgrounds. In the SPS 1a example studied in the next section, SUSY
background events form about $12\%$ of the events. We again see less than a
$1$ GeV shift in $M_{\tilde{\chi}^{o}_{1}}$ with versus without the background
events. As such, we do not try to model the background in this $M_{3C}$ study.
### 8.4 Estimated Performance
With an understanding of the factors affecting the shapes of the $M_{3C}$
distributions, we combine all the influences together and consider the mass
determination performance. We follow the same modeling and simulation
procedures used in Sec. 7.2 except now we include an on-shell intermediate
state and calculate $M_{3C}$. We use HERWIG [17, 18, 19] to generate events
according to the SPS 1a model [3]. This is an mSUGRA model with $m_{o}=100$
GeV, $m_{1/2}=250$ GeV, $A_{o}=-100$ GeV, $\tan\beta=10$, and sign$(\mu)=+$.
We initially assume the mass differences $\Delta M_{YN}=80.8$ GeV and $\Delta
M_{XN}=47.0$ GeV have been previously measured and take them as exact. We
later show how the distribution shape with the $m_{ll}$ endpoint also solves
for the two mass differences.
Like $M_{2C}$, the $M_{3C}$ distributions can be predicted well from
observations. When we are determining masses based on distribution shapes, the
larger the area difference between two distributions representing different
masses, the more accurately and precisely we will be able to tell the
difference. Unfortunately, the $M_{3C}$ distribution is sensitive to the
finite momentum-resolution errors and combinatoric errors, which decrease the
large area difference between the distributions of two different masses shown
in Fig. 8.1.
Just as in Chapter 7, we model the distribution shape with a simple
Mathematica Monte Carlo event generator and compare the predicted
distribution shapes against the HERWIG data modeling the benchmark point SPS
1a. We again use the observed UTM as an input to the Mathematica-simulated
ideal distributions. By modeling with Mathematica, which does not use the
SUSY cross sections, and comparing to more realistic HERWIG-generated data, we
hope to test that we understand the major dependencies of the shape of the
$M_{3C}$ distributions. The Mathematica event generator produces events
assuming a uniform angular distribution of the parents in the COM frame and
that the parent particles decay with a uniform angular distribution in their
rest frames. The particles are all taken to be on shell. $k_{T}>0$ is
simulated by boosting the event in the transverse plane to give a specified
$k_{T}$.
Figure 8.7 shows the performance. The left side of Fig. 8.7 shows the $M_{3C}$
lower bound and upper bound counts per $5$ GeV bin from the HERWIG generated
data, and it shows the predicted ideal counts calculated with Mathematica
using the observed UTM distribution and assuming $M_{\tilde{\chi}^{o}_{1}}=95$
GeV. The upper bound and lower bound show very close agreement. The background
events are shown in dotted lines and are seen accumulating in the first few
bins. These are the same bins dominantly affected by $\not{P}_{T}$ cuts. For
this reason we excluded these first two bins from the distribution fit. The
right side of Fig 8.7 shows the $\chi^{2}$ fit of the HERWIG simulated data
$M_{3C}$ distribution to the ideal $M_{3C}(M_{\tilde{\chi}^{o}_{1}})$
distribution with $M_{\tilde{\chi}^{o}_{1}}$ taken as the independent
variable. Ideal distribution shapes are calculated at values of
$M_{\tilde{\chi}^{o}_{1}}=80,85,90,95,100,105,110$ GeV. The $\chi^{2}$ fitting
procedure is described in more detail in Appendix D. All effects discussed in
this chapter are included: combinatoric errors, SUSY backgrounds, energy
resolution, and $\not{P}_{T}$ cuts. Our ideal curves were based on the
Mathematica simulations with $25000$ events per ideal curve. Despite the
presence of backgrounds, the $\chi^{2}$ is not much above $1$ per bin.
The particular fit shown in Fig. 8.7 gives $M_{\tilde{\chi}^{o}_{1}}=98.6\pm
2.2$ GeV, where we measure the uncertainty using the positions at which
$\chi^{2}$ increases by one from its minimum. We repeat the fitting procedure
on nine sets, each with $\approx 100\,\,{\rm{fb}}^{-1}$ of HERWIG data
($\approx 1400$ events per set). The mean and standard deviation of these nine
fits give $M_{\tilde{\chi}^{o}_{1}}=96.8\pm 3.7$ GeV. After
$300\,\,{\rm{fb}}^{-1}$ one should expect a $\sqrt{3}$ improvement in the
uncertainty, giving $\pm 2.2\,\,{\rm{GeV}}$. The correct mass is
$M_{\tilde{\chi}^{o}_{1}}=96.05$ GeV.
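A minimal sketch of the $\Delta\chi^{2}=1$ prescription used above is given below; the binned counts, templates, and trial masses are hypothetical placeholders, and the full procedure (including the excluded first bins and the background treatment) is described in Appendix D.

```python
import numpy as np

def chi2(observed, template):
    """Chi^2 between observed counts and a template scaled to the same total
    number of events; bins with very small expectation are dropped."""
    t = template * observed.sum() / template.sum()
    mask = t > 4.0
    return np.sum((observed[mask] - t[mask])**2 / t[mask])

def fit_mass(observed, templates, trial_masses):
    """templates[i] holds the ideal binned M3C counts for trial_masses[i].
    Returns the best-fit mass and the Delta(chi^2)=1 uncertainty from a
    parabolic fit to chi^2(mass)."""
    chi2s = np.array([chi2(observed, t) for t in templates])
    a, b, c = np.polyfit(trial_masses, chi2s, 2)   # chi2 ~ a m^2 + b m + c
    m_best = -b / (2.0 * a)
    sigma = 1.0 / np.sqrt(a)                       # shift giving Delta chi^2 = 1
    return m_best, sigma
```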
Figure 8.7: Fit of ideal $M_{3C}(M_{\tilde{\chi}^{o}_{1}})$ distributions to
the HERWIG generated $M_{3C}$ distributions. Includes combinatoric errors,
backgrounds, energy resolution, and $\not{P}_{T}$ cuts. (Left) The observed
HERWIG counts versus the expected counts for ideal
$M_{\tilde{\chi}^{o}_{1}}=95$ GeV. (Right) The $\chi^{2}$ fit to ideal
distributions of $M_{\tilde{\chi}^{o}_{1}}=80,85,90,95,100,105,110$ GeV. The
correct mass is $M_{\tilde{\chi}^{o}_{1}}=96.0$ GeV.
Figure 8.8: Combined constraint from fitting both $\max m_{ll}$ and $M_{3C}$
with the mass differences as free parameters. We parameterized the difference
from the true values in the model by $\Delta
M_{YN}=80.8\,\,{\rm{GeV}}+\delta\Delta M_{YN}$ and $\Delta
M_{XN}=47.0\,\,{\rm{GeV}}+\delta\Delta M_{XN}$. We show the $1,2,3\sigma$
contours.
Our technique also enables a combined fit to both the mass differences and the
mass scale. The $m_{ll}$ endpoint in Eq(4.3) constrains a relationship between
the three masses. Gjelsten, Miller, and Osland estimate this edge can be
measured to better than $0.08\,\,{\rm{GeV}}$ [157, 159] using many different
channels that lead to the same edge, and after modeling energy resolution and
background. In the next several paragraphs we show that by combining this edge
with the fits to the $M_{3C}$ upper bound and lower bound distribution shapes,
we can constrain all three masses.
We first numerically calculated the effect of errors in the mass differences.
We used $300\,\,{\rm{fb}}^{-1}$ of events (about $3600$ signal and $450$
background) including all the effects discussed. We parameterize the error
from the correct mass differences in the model by the variables $\delta\Delta
M_{YN}$ and $\delta\Delta M_{XN}$, so that the mass differences are given by
$\Delta M_{YN}=80.8\,\,{\rm{GeV}}+\delta\Delta M_{YN}$ and $\Delta
M_{XN}=47.0\,\,{\rm{GeV}}+\delta\Delta M_{XN}$. We calculated the
$\chi^{2}_{M_{3C}}$ at $8$ points surrounding the correct mass difference by
amounts $\delta\Delta M_{YN}=\pm 1\,\,{\rm{GeV}}$ and $\delta\Delta M_{XN}=\pm
1\,\,{\rm{GeV}}$. The minimum $\chi^{2}_{M_{3C}}$ at each of the $9$ points
gives the value of $M_{\tilde{\chi}^{o}_{1}}$ for each mass difference
assumed. The position of the minima can be parameterized by a quadratic near
the true mass difference. The resulting fit
$M_{\tilde{\chi}^{o}_{1}}=96.4+1.9\,(\delta\Delta
M_{XN})^{2}+2.5\,\delta\Delta M_{YN}\,\delta\Delta M_{XN}+3.2\delta\Delta
M_{XN}-3.8\,(\delta\Delta M_{YN})^{2}-8.3\,\delta\Delta M_{YN}$ (8.15)
shows in units of GeV how the mass $M_{\tilde{\chi}^{o}_{1}}$ is affected by
small errors in the mass difference.
The $\chi^{2}_{M_{3C}}$ at these $9$ different values for the mass difference
provides another constraint on the mass differences. Fitting the
$\chi^{2}_{M_{3C}}$ to a general quadratic near the true mass difference gives
$\chi^{2}_{M_{3C}}=162+38\,(\delta\Delta M_{XN})^{2}-8\,\delta\Delta
M_{YN}\delta\Delta M_{XN}-5\,\delta\Delta M_{XN}-25\,\delta\Delta M_{YN}.$
(8.16)
The $\chi^{2}_{M_{3C}}$ described by Eq(8.16) shows a sloping valley. The
sides of the valley constrain $\delta\Delta M_{XN}$ as seen by the large
positive coefficient of $(\delta\Delta M_{XN})^{2}$. The valley slopes
downward along $\delta\Delta M_{YN}$ as can be seen by the large negative
coefficient of $\delta\Delta M_{YN}$ which leaves this axis unbounded within
the region studied.
The unconstrained direction along $\Delta M_{YN}$ can be constrained by the
mass relationships given by the endpoint $\max m_{ll}$ or by $M_{T2}$ as
described in Chapter 5. Here we work with $\max m_{ll}$ to provide this
constraint. We calculate the $\chi^{2}_{\max m_{ll}}$ using $\delta\max
m_{ll}=0.08\,\,{\rm{GeV}}$, and Eq(4.3) with $M_{Y}=\Delta M_{YN}+\delta\Delta
M_{YN}+M_{\tilde{\chi}^{o}_{1}}$ and $M_{X}=\Delta M_{XN}+\delta\Delta
M_{XN}+M_{\tilde{\chi}^{o}_{1}}$ where we use $M_{\tilde{\chi}^{o}_{1}}$ from
Eq(8.15). This $\chi^{2}_{\max m_{ll}}$ constrains a diagonal path in
$(\delta\Delta M_{YN},\delta\Delta M_{XN})$. The value of the $\chi^{2}_{\max
m_{ll}}$ at the minimum is a constant along this path. Combining
$\chi^{2}_{M_{3C}}$ with $\chi^{2}_{\max m_{ll}}$ leads to a minimum at
$\delta\Delta M_{YN}=0.18\,\,{\rm{GeV}}$ and $\delta\Delta
M_{XN}=0.25\,\,{\rm{GeV}}$ where $M_{\tilde{\chi}^{o}_{1}}=95.7\,\,{\rm{GeV}}$
as shown in Fig. 8.8. We have shown the contours where $\chi^{2}$ increases
from its minimum by $1$,$2$ and $3$. The uncertainty in the mass differences
around this minimum is about $\pm 0.2\,\,{\rm{GeV}}$. The bias from the true
mass differences is due to the unconstrained $\chi^{2}_{M_{3C}}$ along
$\delta\Delta M_{YN}$. We can use modeling to back out the unbiased mass
differences. Propagating the effects of uncertainty in the mass differences,
we estimate a final performance of $M_{\tilde{\chi}^{o}_{1}}=96.4\pm 2.4$ GeV
after $300\,\,{\rm{fb}}^{-1}$ with about $3600$ signal events amid $450$
background events. We find the mass differences (without bias correction) of
$M_{\tilde{\chi}^{o}_{2}}-M_{\tilde{\chi}^{o}_{1}}=81.0\pm 0.2$ GeV and
$M_{\tilde{l}_{R}}-M_{\tilde{\chi}^{o}_{1}}=44.3\pm 0.2$ GeV. This is to be
compared to the HERWIG values of $M_{\tilde{\chi}^{o}_{1}}=96.0$ GeV,
$M_{\tilde{\chi}^{o}_{2}}-M_{\tilde{\chi}^{o}_{1}}=80.8$ GeV, and
$M_{\tilde{l}_{R}}-M_{\tilde{\chi}^{o}_{1}}=44.3$ GeV.
How does this performance compare to other techniques? Because SPS 1a is
commonly used as a test case, we can approximately compare performance with
two different groups. The technique of [156, 157, 158, 159] which uses edges
from cascade decays determines the LSP mass to $\pm 3.4\,\,{\rm{GeV}}$ with
about 500 thousand events from $300\,\,{\rm{fb}}^{-1}$. The approach of CEGHM
[167] assumes a pair of symmetric decay chains and assumes two events have the
same structure. They reach $\pm 2.8\,\,{\rm{GeV}}$ using $700$ signal events
after $300\,\,{\rm{fb}}^{-1}$, but have a $2.5\,\,{\rm{GeV}}$ systematic bias
that needs modeling to remove. Both techniques also constrain the mass
differences. By comparison we find $\pm 3.7$ GeV after $100\,\,{\rm{fb}}^{-1}$
($1200$ signal, $150$ background) and estimate $\pm 2.4$ GeV after
$300\,\,{\rm{fb}}^{-1}$ ($3600$ signal, $450$ background) and propagating
reasonable uncertainties in the mass differences. The uncertainty calculations
differ amongst groups. Some groups estimate the uncertainty from repeated
trials, and others use the amount one can change the mass before $\chi^{2}$
increases by one. Without a careful comparison under like circumstances by the
same research group, the optimal method is not clear. What is clear is that
fitting the $M_{3C,LB}$ and $M_{3C,UB}$ distributions determines the mass of
invisible particles at least as well as, if not better than, the other known
methods in both accuracy and precision.
### Chapter Summary
In this chapter, we have extended the constrained mass variable to the case
with three new particle states. We assume events with a symmetric, on-shell
intermediate-state topology shown in Fig. 4.3. We can either assume that we
have measured the mass difference between these new states through other
techniques, or combine our technique with the $\max m_{ll}$ edge to find both
mass differences and the mass scale. The new constrained mass variables
associated with events with these three new particle states are called
$M_{3C,LB}$ and $M_{3C,UB}$, and they represent an event-by-event lower bound
and upper bound (respectively) on the mass of the third lightest state
possible while maintaining the constraints described in Eqs(8.2-8.7). We have
shown that most of the $M_{2C}$ distribution properties described in the
previous chapter carry through to $M_{3C}$. The additional particle state and
mass difference enable a tighter event-by-event bound on the true mass. The
$M_{3C}$ distribution is more sensitive than the $M_{2C}$ distribution to the
momentum and energy resolution errors. Studying the performance on the SPS 1a
benchmark point, we find that, despite the degradation from energy resolution,
we are able to determine $M_{\tilde{\chi}^{o}_{1}}$ with at least the same
precision and accuracy as, and possibly better than, that found by using
cascade decays or other MSTs.
## Chapter 9 Discussions and Conclusions
In this thesis, we have started in Chapter 2 with a study of principles that
in the past have successfully predicted the existence and the mass of new
particle states. We showed astrophysical evidence for dark matter and
discussed properties that make supersymmetry an attractive theory for the
dark-matter candidate. We discussed the grand unification of the three gauge
couplings, and we introduced the mass-matrix unification suggested by Georgi
and Jarlskog. In Chapter 3 we showed how the mass unification is being
quantitatively challenged by tighter constraints on the parameters, but that
the mass unification hypothesis leads to a favored class of $\tan\beta$
enhanced threshold corrections where $\tan\beta$ is large and the sign of the
gluino mass is opposite that of the wino mass. Chapter 4 turns to the task of
measuring the masses of new particle states if they are produced with dark
matter at hadron colliders. We observed that determining the sign of $M_{3}$,
determining spin, and determining the entire sparticle mass spectrum are all
facilitated by a model-independent measurement of the mass of the dark-matter
particle. We discussed the current techniques to constrain the masses of the
new particle states. We developed ideas of how to use $M_{T2}$ in Chapter 5,
which enables us to find new constraints between particles in symmetric
cascade decays, and we argued that $M_{T2}$ is a better variable to extract
this information than $M_{CT}$. Chapters 6 and 7 discussed the constrained
mass variables $M_{2C}$ and $M_{2C,UB}$, which use a previously measured mass
difference to find the maximum and minimum mass of the dark-matter particle on
an event-by-event basis. This technique benefits from $M_{2C}$ and
$M_{2C,UB}$ distributions that can be determined from experimental observables
and the unknown dark-matter mass. The distribution shape can be fit to the
bulk of the data to estimate the mass of the dark-matter particle more
accurately than end-point techniques. Chapter 8 discussed the benefits and
drawbacks of the constrained mass variable $M_{3C}$. Unlike the
edges-from-cascade-decays method [156, 157, 158, 159] and the mass-shell
technique of CEGHM [167], each of which requires the existence of four new
particle states, the constrained mass variables require only two new particle
states for $M_{2C}$ and three new particle states for $M_{3C}$. Although proper
comparisons to other techniques are difficult, it is clear that the
constrained mass variables can determine the mass of invisible particles at
least as well if not better than the other known model-independent methods in
both accuracy and precision.
From here what are the directions for continued research? The origin of the
Yukawa mass matrices remains unknown. The Georgi-Jarlskog relations appear to
hint at a solution, but only decrease the number of unknowns slightly. The
fits to the GUT-scale Yukawa parameters are a starting point for more research
on explaining this structure. There are relatively few studies on how to
measure the sign of the gluino mass parameter relative to the wino mass
parameter; there are even fewer that do not make use of simplifying
assumptions about the underlying supersymmetry model. The results of Chapter 5
need to be extended to include the effects of combinatoric ambiguities. In
Chapters 7-8 there are several distribution properties which should be better
studied. We would like to understand the physics of the invariance of the
constrained mass variable distributions to changes in $\sqrt{s}$ and choice of
$k^{2}$ when $k_{T}\neq 0$. In Chapter 8 it would also be nice to classify the
conditions which determine how many islands of solutions exist for $M_{3C}$.
The $M_{3C}$ method could be tested on top-quark data to remeasure the masses
of the $t$, $W$ and $\nu$. In Chapters 6-8 we have considered two constrained
mass variables $M_{2C}$ and $M_{3C}$ where the decay chains are symmetric, but
we may find greater statistics for asymmetric events. For example in many
models one expects more events where $\tilde{\chi}^{o}_{2}$ is produced with
$\tilde{\chi}^{o}_{1}$ or $\tilde{\chi}^{\pm}$. The concepts of the
constrained mass variable could be extended to deal with these asymmetric
event topologies or to cases with both LSPs and neutrinos.
In conclusion, the thesis contributed to predictions of mass properties of new
particle states that may be discovered at the LHC and developed the tools to
make precision mass measurements that may help falsify or validate this
prediction and many others. Modern physics is rooted in an interplay between
creating theoretical models, developing experimental techniques, making
observations, and falsifying theories. The LHC's results will complete this
long-awaited cycle by making experimental observations to constrain the
growing balloon of theories, of which this thesis is a small part.
## Appendix A Renormalization Group Running
This appendix details our use of the Renormalization Group Equations (RGEs) to
relate the reported values of observables at low-energy scales to their values
at the GUT scale. The one-loop and two-loop RG equations for the gauge
couplings and the Yukawa couplings in the Standard Model and in the MSSM that
we use in this thesis come from a number of sources [113, 40, 114, 110]. We
solve the system using the internal differential-equation solver in
Mathematica 6.0 on a basic Intel laptop. We considered using SoftSUSY or
SuSpect to decrease the likelihood of programming errors on our part, but
neither met our needs sufficiently: SoftSUSY [191] has only real Yukawa
couplings, and SuSpect [183] focuses primarily on the third-generation RG
evolution.
We assume a model for the neutrino spectrum that does not impact the running
of the quark or charged lepton parameters below the GUT scale. This assumption
is not trivial; there are neutrino models where $Y^{\nu}$ will affect the
ratio $y_{b}/y_{\tau}$ by as much as 15% [192]. Furthermore due to unitarity
constraints, the UV completion of the effective dimension-five neutrino mass
operators must take place before the GUT scale [193]. One way to have a UV
completion below the GUT scale while making the RGE running independent of the
specific model is to set-up a non-degenerate right-handed neutrino spectrum.
One then constructs the couplings such that large Yukawa couplings in
$Y^{\nu}$ are integrated out with the heaviest right-handed neutrino mass
which is at or above the GUT scale. One can then have the remaining right-
handed neutrinos considerably below the GUT scale which couple to much smaller
values $Y^{\nu}$ that do not affect the running of $Y^{e}$ or $Y^{u}$. The
rules for running the neutrino parameters and decoupling the right-handed
neutrinos for the $U_{MNS}$ are found in Refs. [113][194] [195] [196] [197].
For our work finding the quark and lepton masses and mixings at the GUT scale,
we assume the effects of the neutrinos on the RGE equations effectively
decouple at or above the GUT scale.
Our running involves three models at different energy scales. Between $M_{X}$
and an effective SUSY scale $M_{S}$, we assume the MSSM. Between $M_{S}$ and
$M_{Z}$, we assume the Standard Model. Below $M_{Z}$, we assume QCD and QED.
We do not assume unification; rather, the low-energy values in Table 3.1 and
Eq(3.1) provide our boundary conditions. The uncertainty in the final digit(s)
of each parameter is listed in parentheses. For example, $123.4\pm 0.2$ is
written as $123.4(2)$.
First, we use QCD and QED low-energy effective theories to run the quark and
lepton masses from the scales at which the particle data book reports their
values to $M_{Z}$. The $V_{CKM}$ mixing angles are set at this scale. Next, we
run using the Standard-Model parameters, including the Higgs VEV and Higgs
self coupling from $M_{Z}$ to an effective SUSY scale $M_{S}$. At an effective
SUSY scale $M_{S}$, we match the Standard Model Yukawa couplings using the
running Higgs VEV $v(M_{S})$ onto MSSM Yukawa couplings. In the matching
procedure, we convert the gauge coupling constants from $\overline{MS}$ to
$\overline{DR}$ as described in [191], and we apply approximate supersymmetric
threshold corrections as parameterized in Chapter 3. Details of the RG
equations and the matchings are described in the following sections.
### A.1 RGE Low-Energy $SU(3)_{c}\times U(1)_{EM}$ up to the Standard Model
Below $M_{Z}$ we use the QCD and QED RG equations to move the parameters to
the scales at which the Particle Data Book reports them. We run the
light-quark masses between $\mu=M_{Z}$ and $\mu_{L}=2$ GeV with the following
factors:
$\eta_{u,d,s}=(1.75\pm 0.03)+19.67\left(\alpha_{s}^{(5)}(M_{Z})-0.118\right)$ (A.1)

$\eta_{c}=(2.11\pm 0.03)+41.13\left(\alpha_{s}^{(5)}(M_{Z})-0.118\right)$ (A.2)

$\eta_{b}=(1.46\pm 0.02)+8.14\left(\alpha_{s}^{(5)}(M_{Z})-0.118\right),$ (A.3)
which are calculated with the four-loop RG equations using Chetyrkin's RUNDEC
software for Mathematica [198]. These factors are used as
$m_{b}(m_{b})\,/\eta_{b}=m_{b}(M_{Z})$. The uncertainty in the first term is
an estimate of the theoretical error from neglecting the five-loop running,
given by
$\delta\eta/\eta\approx\exp({\mathcal{O}}(1)\langle\alpha_{s}\rangle^{5}\log\frac{M_{Z}}{\mu_{L}})-1$.
RUNDEC also converts the top quark’s pole mass to an $\overline{MS}$ running
mass, and applies the small threshold corrections from decoupling each of the
quarks. The different $\eta$ factors take the parameters from a six-quark
effective model at $\mu=M_{Z}$ to their respective scales.
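As a simple illustration of how these running factors are applied (the $m_{b}(m_{b})$ input below is an illustrative value, not the thesis input from Table 3.1):

```python
def eta_factors(alpha_s_MZ=0.118):
    """Central values of the running factors of Eqs(A.1)-(A.3),
    linearized around alpha_s(M_Z) = 0.118."""
    d = alpha_s_MZ - 0.118
    return {"uds": 1.75 + 19.67 * d, "c": 2.11 + 41.13 * d, "b": 1.46 + 8.14 * d}

eta = eta_factors()
m_b_mb = 4.20                       # GeV, illustrative m_b(m_b) input (assumed)
m_b_MZ = m_b_mb / eta["b"]          # relation quoted above: m_b(m_b)/eta_b = m_b(M_Z)
print(f"m_b(M_Z) ~ {m_b_MZ:.2f} GeV")
```

The result, roughly $2.9$ GeV, is consistent with the $m_{b}$ entry of Table A.1.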
The leptons can be converted from their pole masses into running masses at
$M_{Z}$ by similar running factors:
$\eta_{e}=1.046\ \ \ \eta_{\mu}=1.027\ \ \ \eta_{\tau}=1.0167.$ (A.4)
We did not incorporate the lepton masses as error sources.
The input values in Table 3.1 lead to the values at $M_{Z}$ in Table A.1 where
we have propagated uncertainty from both the strong coupling constant
uncertainty and the uncertainty in the mass at the starting scale. These
values for the parameters at $M_{Z}$ agree with those calculated recently
elsewhere in the literature [199].
| Parameter | Value at scale $\mu=M_{Z}$ |
| --- | --- |
| $m_{t}$ | $169.6(2.3)$ GeV |
| $m_{c}$ | $599(50)$ MeV |
| $m_{u}$ | $1.54(50)$ MeV |
| $m_{b}$ | $2.87(6)$ GeV |
| $m_{s}$ | $57(12)$ MeV |
| $m_{d}$ | $3.1(5)$ MeV |

Table A.1: The $\overline{MS}$ values for the running quark masses at $M_{Z}$.
The gauge coupling constants $g_{1}$ and $g_{2}$ are determined from $e_{EM}$,
$\sin^{2}\theta_{W}$ at the scale $M_{Z}$ by
${g_{1}^{2}(M_{Z})}=\frac{5}{3}\frac{{e^{2}_{EM}(M_{Z})}}{\cos^{2}\theta_{W}}\
\ \ {g^{2}_{2}(M_{Z})}=\frac{{e^{2}_{EM}(M_{Z})}}{\sin^{2}\theta_{W}}.$ (A.5)
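Eq(A.5) can be evaluated directly; for example, with illustrative electroweak inputs (not the thesis values from Table 3.1):

```python
import numpy as np

def gauge_couplings(alpha_em_MZ=1.0 / 127.9, sin2_thetaW=0.2312):
    """Eq(A.5): SU(5)-normalized g1 and g2 at M_Z from e_EM and sin^2(theta_W).
    The numerical defaults here are illustrative, typical values."""
    e2 = 4.0 * np.pi * alpha_em_MZ
    g1 = np.sqrt((5.0 / 3.0) * e2 / (1.0 - sin2_thetaW))   # cos^2 = 1 - sin^2
    g2 = np.sqrt(e2 / sin2_thetaW)
    return g1, g2

print(gauge_couplings())   # roughly (0.46, 0.65)
```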
### A.2 RGE for Standard Model with Neutrinos up to MSSM
We found that using the two-loop versus the one-loop RG equations for the
gauge couplings produced a shift in the low-energy parameters of more than
$1\sigma$. For this reason, we always use the two-loop RG equations for the
gauge couplings. For the Yukawa couplings we find that the two-loop versus the
one-loop RG equations shift the low-energy parameters by much less than the
experimental uncertainty. Therefore, we perform our minimizations using the
one-loop RG equations for the Yukawa couplings and check the final fits
against the two-loop RG equations.
We reproduce here for reference the two-loop gauge-coupling RG equations and
the one-loop Yukawa-coupling RG equations, as this level of detail is
sufficient to reproduce our results. We define $t=\log M$. The equations are
compiled from [196, 113, 110].
$\displaystyle\frac{d}{dt}g_{1}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\frac{41}{10}g_{1}^{3}$ (A.6)
$\displaystyle+\frac{g_{1}^{3}}{(16\pi^{2})^{2}}\left(\frac{199}{50}g_{1}^{2}+\frac{27}{15}g_{2}^{2}+\frac{44}{5}g_{3}^{2}-\frac{17}{10}{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})-\frac{1}{2}{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})-\frac{3}{2}{\rm{Tr}}(Y^{e}Y^{e\,{\dagger}})\right)$
$\displaystyle\frac{d}{dt}g_{2}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\frac{-19}{6}g_{2}^{3}$ (A.7)
$\displaystyle+\frac{g_{2}^{3}}{(16\pi^{2})^{2}}\left(\frac{9}{10}\,g_{1}^{2}+\frac{35}{6}\,g_{2}^{2}+12\,g_{3}^{2}-\frac{3}{2}{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})-\frac{3}{2}{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})-\frac{1}{2}{\rm{Tr}}(Y^{e}Y^{e\,{\dagger}})\right)$
$\displaystyle\frac{d}{dt}g_{3}$ $\displaystyle=$
$\displaystyle\frac{-7}{16\pi^{2}}g_{3}^{3}$ (A.8)
$\displaystyle+\frac{g_{3}^{3}}{(16\pi^{2})^{2}}\left(\frac{11}{10}\,g_{1}^{2}+\frac{9}{2}\,g_{2}^{2}-26g_{3}^{2}-2{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})-2{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})\right)$
$T={\rm{Tr}}(3Y^{u}Y^{u{\dagger}}+3Y^{d}Y^{d{\dagger}}+Y^{e}Y^{e{\dagger}}+Y^{\nu}Y^{\nu{\dagger}})$
(A.9) $\displaystyle\frac{d}{dt}Y^{u}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{u}+\textbf{I}\,T+\frac{3}{2}\,Y^{u}\,Y^{u{\dagger}}-\frac{3}{2}\,Y^{d}\,Y^{d\,{\dagger}}\right).Y^{u}$
(A.10) $\displaystyle\frac{d}{dt}Y^{\nu}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{\nu}+\textbf{I}\,T+\frac{3}{2}Y^{\nu}Y^{\nu\,{\dagger}}-\frac{3}{2}Y^{e}\,Y^{e\,{\dagger}}\right).Y^{\nu}$
(A.11) $\displaystyle\frac{d}{dt}Y^{d}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{d}+\textbf{I}\,T+\frac{3}{2}\,Y^{d}\,Y^{d\,{\dagger}}-\frac{3}{2}\,Y^{u}\,Y^{u{\dagger}}\right).Y^{d}$
(A.12) $\displaystyle\frac{d}{dt}Y^{e}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{e}+\textbf{I}\,T+\frac{3}{2}\,Y^{e}\,Y^{e\,{\dagger}}-\frac{3}{2}\,Y^{\nu}\,Y^{\nu\,{\dagger}}\right).Y^{e}$
(A.13)
where
$\displaystyle\begin{matrix}G^{u}=\frac{17}{20}\,g_{1}^{2}+\frac{9}{4}\,g_{2}^{2}+8\,g_{3}^{2}&G^{d}=\frac{1}{4}\,g_{1}^{2}+\frac{9}{4}\,g_{2}^{2}+8\,g_{3}^{2}\cr
G^{e}=\frac{9}{4}\,g_{1}^{2}+\frac{9}{4}\,g_{2}^{2}&G^{\nu}=\frac{9}{20}\,g_{1}^{2}+\frac{9}{4}\,g_{2}^{2}.\end{matrix}$
(A.14)
The Higgs self-interaction $\lambda$ (following the convention
$-\lambda(H^{\dagger}H)^{2}\subset{\mathcal{L}}$) obeys the RG equation:
$\displaystyle 16\pi^{2}\frac{d}{dt}\lambda$ $\displaystyle=$ $\displaystyle
12\lambda^{2}-(\frac{9}{5}g_{1}^{2}+9g_{2}^{2})\lambda+\frac{9}{4}(\frac{3}{25}g_{1}^{4}+\frac{2}{5}g_{1}^{2}g_{2}^{2}+g_{2}^{4})+4T\,\lambda$
(A.16)
$\displaystyle-4{\rm{Tr}}(3\,Y^{u}Y^{u{\dagger}}Y^{u}Y^{u{\dagger}}+3\,Y^{d}Y^{d{\dagger}}Y^{d}Y^{d{\dagger}}+Y^{e}Y^{e{\dagger}}Y^{e}Y^{e{\dagger}}).$
Between $M_{Z}$ and $M_{S}$, we run the Standard-Model VEV using [40]:
$16\pi^{2}\frac{d}{dt}v=v\left(\frac{9}{4}\left(\frac{1}{5}g_{1}^{2}+g_{2}^{2}\right)-T\right).$ (A.17)
The effective neutrino mass operator coupling $\kappa$ and the heavy Majorana mass matrix $M_{RR}$ run according to
$\displaystyle 16\pi^{2}\frac{d}{dt}\kappa=\left(-\frac{3}{2}Y^{e}Y^{e\,{\dagger}}\kappa-\frac{3}{2}\kappa Y^{e}Y^{e\,{\dagger}}+\frac{1}{2}Y^{\nu}Y^{\nu\,{\dagger}}\kappa+\frac{1}{2}\kappa Y^{\nu}Y^{\nu\,{\dagger}}\right)+\textbf{I}\left({\rm{Tr}}(2\,Y^{\nu}Y^{\nu{\dagger}}+2\,Y^{e}Y^{e{\dagger}}+6Y^{u}Y^{u\,{\dagger}}+6Y^{d}Y^{d\,{\dagger}})-3g_{2}^{2}+4\lambda\right)\kappa$
$\frac{d}{dt}M_{RR}=\frac{1}{16\pi^{2}}\left((Y^{\nu{\dagger}}\,Y^{\nu})^{T}\,M_{RR}+M_{RR}\,Y^{\nu{\dagger}}\,Y^{\nu}\right).$ (A.18)
We normalize $g_{1}$ to the $SU(5)$ convention:
$g_{1}^{2}=\frac{3}{5}g_{Y}^{2}$.
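The sketch below shows how this running can be integrated numerically, keeping only the one-loop terms of Eqs. (A.6)-(A.8) and dropping the Yukawa traces for brevity; the actual fits use the full two-loop gauge equations, and the starting values here are illustrative assumptions rather than the Table A.1 inputs.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop SM beta-function coefficients for (g1, g2, g3), SU(5) normalization
b = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])

def beta(t, g):
    # dg_i/dt = b_i g_i^3 / (16 pi^2), with t = log(mu); one-loop only
    return b * g**3 / (16.0 * np.pi**2)

# Illustrative starting values at mu = M_Z (assumed, not the thesis inputs)
MZ, MGUT = 91.19, 2.0e16
g0 = np.array([0.46, 0.65, 1.22])

sol = solve_ivp(beta, [np.log(MZ), np.log(MGUT)], g0, rtol=1e-8, atol=1e-10)
print("one-loop g_i at the GUT scale:", sol.y[:, -1])
```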
### A.3 RGE for the MSSM with Neutrinos up to GUT Scale
The MSSM RG equations are in the $\overline{DR}$ scheme. To convert
$\overline{MS}$ running quark masses to $\overline{DR}$ masses at the same
scale, we use [111]
$\delta
m_{sch}=\frac{m_{\overline{DR}}}{m_{\overline{MS}}}=\left(1-\frac{1}{3}\frac{g_{3}^{2}}{4\pi^{2}}-\frac{29}{72}(\frac{g_{3}^{2}}{4\pi^{2}})^{2}\right).$
(A.19)
No corrections were used for switching lepton running masses from one scheme
to the other. The gauge coupling constants are related via [191]
$\displaystyle\alpha_{3\,\overline{DR}}^{-1}=\alpha_{3\,\overline{MS}}^{-1}+\frac{3}{12\pi}$
(A.20)
$\displaystyle\alpha_{2\,\overline{DR}}^{-1}=\alpha_{2\,\overline{MS}}^{-1}+\frac{2}{12\pi}$
(A.21)
where $\alpha_{3}=g_{3}^{2}/4\pi$ and $\alpha_{2}=g_{2}^{2}/4\pi$. We define
$t=\log M$. The RG equations are:
$\displaystyle\frac{d}{dt}g_{1}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\frac{33}{5}g_{1}^{3}$ (A.22)
$\displaystyle+\frac{g_{1}^{3}}{(16\pi^{2})^{2}}\left(\frac{199}{25}g_{1}^{2}+\frac{27}{5}g_{2}^{2}+\frac{88}{5}g_{3}^{2}-\frac{26}{5}{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})-\frac{14}{5}{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})-\frac{18}{5}{\rm{Tr}}(Y^{e}Y^{e\,{\dagger}})\right)$
$\displaystyle\frac{d}{dt}g_{2}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}g_{2}^{3}$ (A.23)
$\displaystyle+\frac{g_{2}^{3}}{(16\pi^{2})^{2}}\left(\frac{9}{5}\,g_{1}^{2}+25\,g_{2}^{2}+24\,g_{3}^{2}-6{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})-6{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})-2{\rm{Tr}}(Y^{e}Y^{e\,{\dagger}})\right)$
$\displaystyle\frac{d}{dt}g_{3}$ $\displaystyle=$
$\displaystyle\frac{-3}{16\pi^{2}}g_{3}^{3}$ (A.24)
$\displaystyle+\frac{g_{3}^{3}}{(16\pi^{2})^{2}}\left(\frac{11}{5}\,g_{1}^{2}+9\,g_{2}^{2}+14\,g_{3}^{2}-4{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})-4{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})\right)$
$\displaystyle\frac{d}{dt}Y^{u}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{u}+\textbf{I}\,3\,{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})+\textbf{I}\,{\rm{Tr}}(Y^{\nu}Y^{\nu\,{\dagger}})+3\,Y^{u}\,Y^{u{\dagger}}+Y^{d}\,Y^{d\,{\dagger}}\right).Y^{u}$
(A.25) $\displaystyle\frac{d}{dt}Y^{\nu}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{\nu}+\textbf{I}\,3\,{\rm{Tr}}(Y^{u}Y^{u\,{\dagger}})+\textbf{I}\,{\rm{Tr}}(Y^{\nu}Y^{\nu\,{\dagger}})+3Y^{\nu}Y^{\nu\,{\dagger}}+Y^{e}\,Y^{e\,{\dagger}}\right).Y^{\nu}$
(A.26) $\displaystyle\frac{d}{dt}Y^{d}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{d}+\textbf{I}\,3\,{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})+\textbf{I}\,{\rm{Tr}}(Y^{e}Y^{e\,{\dagger}})+3\,Y^{d}\,Y^{d\,{\dagger}}+Y^{u}\,Y^{u{\dagger}}\right).Y^{d}$
(A.27) $\displaystyle\frac{d}{dt}Y^{e}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\left(-\textbf{I}\,G^{e}+\textbf{I}\,3\,{\rm{Tr}}(Y^{d}Y^{d\,{\dagger}})+\textbf{I}\,{\rm{Tr}}(Y^{e}Y^{e\,{\dagger}})+3\,Y^{e}\,Y^{e\,{\dagger}}+Y^{\nu}\,Y^{\nu\,{\dagger}}\right).Y^{e}$
(A.28)
where
$\displaystyle\begin{matrix}G^{u}=\frac{13}{15}\,g_{1}^{2}+3\,g_{2}^{2}+\frac{16}{3}\,g_{3}^{2}&G^{d}=\frac{17}{15}\,g_{1}^{2}+3\,g_{2}^{2}+\frac{16}{3}\,g_{3}^{2}\cr&\cr
G^{e}=\frac{9}{5}\,g_{1}^{2}+3\,g_{2}^{2}&G^{\nu}=\frac{3}{5}\,g_{1}^{2}+3\,g_{2}^{2}.\end{matrix}$
(A.29)
In the MSSM, the corresponding equations for the effective neutrino mass operator coupling $\kappa$ and the heavy Majorana mass matrix $M_{RR}$ are
$\displaystyle 16\pi^{2}\frac{d}{dt}\kappa=\left(Y^{e}Y^{e\,{\dagger}}\kappa+\kappa Y^{e}Y^{e\,{\dagger}}+Y^{\nu}Y^{\nu\,{\dagger}}\kappa+\kappa Y^{\nu}Y^{\nu\,{\dagger}}\right)$
$\displaystyle+\textbf{I}\left(\,2\,{\rm{Tr}}(Y^{\nu}Y^{\nu{\dagger}}+6Y^{u}Y^{u\,{\dagger}})-\frac{6}{5}g_{1}^{2}-6g_{2}^{2}\right)\kappa$
(A.30)
$\frac{d}{dt}M_{RR}=\frac{1}{16\pi^{2}}\left(2\,(Y^{\nu{\dagger}}\,Y^{\nu})^{T}\,M_{RR}+2\,M_{RR}\,Y^{\nu{\dagger}}\,Y^{\nu}\right)$
(A.31)
We normalize $g_{1}$ to the $SU(5)$ convention:
$g_{1}^{2}=\frac{3}{5}g_{Y}^{2}$.
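Returning to the scheme conversions at the start of this section, the sketch below applies Eqs. (A.19)-(A.21) exactly as written above; the inputs $\alpha_{s}(M_{Z})$ and $m_{b}(M_{Z})$ are illustrative assumptions.

```python
import math

def ms_to_dr_quark_mass(m_msbar, g3):
    """Eq. (A.19): factor taking an MS-bar running quark mass to DR-bar."""
    a = g3**2 / (4.0 * math.pi**2)
    return m_msbar * (1.0 - a / 3.0 - (29.0 / 72.0) * a**2)

def alpha_dr(alpha_ms, shift):
    """Eqs. (A.20)-(A.21) as written: 1/alpha_DR = 1/alpha_MS + shift/(12 pi),
    with shift = 3 for the SU(3) coupling and shift = 2 for SU(2)."""
    return 1.0 / (1.0 / alpha_ms + shift / (12.0 * math.pi))

# Illustrative inputs (assumed): alpha_s(M_Z) ~ 0.118, m_b(M_Z) ~ 2.87 GeV
alpha3_ms = 0.118
g3 = math.sqrt(4.0 * math.pi * alpha3_ms)

print("m_b in DR-bar at M_Z :", ms_to_dr_quark_mass(2.87, g3), "GeV")
print("alpha_3 in DR-bar    :", alpha_dr(alpha3_ms, 3.0))
```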
### A.4 Approximate Running Rules of Thumb
Figure A.1: The effect of RG running on the parameter ratios of Eqs. A.32-A.33, with $M_{S}=500$ GeV.
These ratios determine $\chi$, defined in Eq. A.34. If $M_{S}=M_{Z}$, all three
curves are degenerate at small $\tan\beta$.
For $\tan\beta<10$ and $M_{S}\sim M_{Z}$, the running of the parameters from
$M_{Z}$ to $M_{X}$ obeys the following relationships, which were exploited in the
RRRV [1] study:
$\frac{\bar{\eta}_{o}(M_{X})}{\bar{\eta}_{o}(M_{Z})}\approx\frac{\bar{\rho}_{o}(M_{X})}{\bar{\rho}_{o}(M_{Z})}\approx\frac{\lambda_{o}(M_{X})}{\lambda_{o}(M_{Z})}\approx
1\ \ \ \ \frac{A_{o}(M_{X})}{A_{o}(M_{Z})}\approx\chi$ (A.32)
$\frac{(m_{u}/m_{c})_{o}(M_{X})}{(m_{u}/m_{c})_{o}(M_{Z})}\approx\frac{(m_{d}/m_{s})_{o}(M_{X})}{(m_{d}/m_{s})_{o}(M_{Z})}\approx
1\ \ \ \ \frac{(m_{s}/m_{b})_{o}(M_{X})}{(m_{s}/m_{b})_{o}(M_{Z})}\approx\chi\
\ \ \
\frac{(m_{c}/m_{t})_{o}(M_{X})}{(m_{c}/m_{t})_{o}(M_{Z})}\approx\chi^{3}$
(A.33)
where
$\chi\approx\exp\left(\int_{t_{o}}^{t_{f}}\frac{-y_{t}^{2}}{16\pi^{2}}\,dt\right)$
(A.34)
and $t_{o}=\log M_{Z}$ and $t_{f}=\log M_{X}$. Fig. A.1 shows the two-loop
results, reflecting how the running of these ratios changes when we take
$M_{S}=500$ GeV and larger values of $\tan\beta$. Had we plotted the
$M_{S}=M_{Z}$ case, the curves in Fig. A.1 would have been completely
degenerate at small $\tan\beta$. As a consistency check, we compare our
results to Fusaoka and Koide [110], who find $\chi\approx 0.851$ at
$\tan\beta=10$ with their choice of $m_{t}(M_{Z})=180$ GeV. Our code also gives
$\chi=0.851$ at $\tan\beta=10$ if we select $M_{S}=M_{Z}$ and omit the
conversion to the $\overline{DR}$ scheme.
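As a rough illustration of Eq. A.34, the sketch below evaluates $\chi$ for an assumed constant top Yukawa coupling between $M_{Z}$ and $M_{X}$; in the actual calculation $y_{t}(t)$ is obtained from the coupled RGEs above, so the number printed here is only indicative.

```python
import numpy as np
from scipy.integrate import quad

MZ, MX = 91.19, 2.0e16   # GeV; M_X is an illustrative GUT-scale value

def yt(t):
    # Illustrative assumption: constant y_t ~ 1.0 between M_Z and M_X.
    # In the fits, y_t(t) comes from solving the Yukawa RGEs.
    return 1.0

# Eq. (A.34): chi = exp( integral of -y_t(t)^2 / (16 pi^2) dt )
val, _ = quad(lambda t: -yt(t)**2 / (16.0 * np.pi**2), np.log(MZ), np.log(MX))
print("chi ~", np.exp(val))   # roughly 0.8 with this constant-y_t assumption
```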
## Appendix B Hierarchical Yukawa couplings and Observables
In this appendix, we find the masses and unitary matrices associated with a
general hierarchical Yukawa coupling matrix. We use this expansion to find
general expressions for $V_{CKM}$ to the order needed for comparison to
current experiments. Consider a general hierarchical Yukawa matrix
$Y^{u}=y_{t}\left(\begin{matrix}Y^{u}_{11}&Y^{u}_{12}&Y^{u}_{13}\cr
Y^{u}_{21}&Y^{u}_{22}&Y^{u}_{23}\cr
Y^{u}_{31}&Y^{u}_{32}&1\end{matrix}\right)\ \ \approx\ \
Y^{u}=y_{t}\left(\begin{matrix}{\mathcal{O}}(\epsilon^{4})&{\mathcal{O}}(\epsilon^{3})&{\mathcal{O}}(\epsilon^{3})\cr{\mathcal{O}}(\epsilon^{3})&{\mathcal{O}}(\epsilon^{2})&{\mathcal{O}}(\epsilon^{2})\cr{\mathcal{O}}(\epsilon^{3})&{\mathcal{O}}(\epsilon^{2})&1\end{matrix}\right)$
(B.1)
where $\epsilon$ is a small parameter. Diagonalization of
$Y^{u}\,Y^{u{\dagger}}$ leads to the diagonal matrix from
$U^{u}_{L}\,Y^{u}\,Y^{u{\dagger}}\,U^{u{\dagger}}_{L}=|D_{u}|^{2}$ where
$Y^{u}=U_{L}^{u{\dagger}}D_{u}U^{u}_{R}$. The matrix $|D^{u}|^{2}$ gives the
square of the mass eigenstates.
$\displaystyle|D^{u}_{3}/y_{t}|^{2}$ $\displaystyle=$ $\displaystyle
1+Y^{u}_{23}\,Y^{u*}_{23}+Y^{u}_{32}\,Y^{u*}_{32}+{\mathcal{O}(\epsilon^{5})}$
(B.2) $\displaystyle|D^{u}_{2}/y_{t}|^{2}$ $\displaystyle=$ $\displaystyle
Y^{u}_{22}\,Y^{u*}_{22}+Y^{u}_{12}\,Y^{u*}_{12}+Y^{u}_{21}\,Y^{u*}_{21}-Y^{u*}_{22}\,Y^{u}_{23}\,Y^{u}_{32}-Y^{u}_{22}\,Y^{u*}_{23}\,Y^{u*}_{32}+{\mathcal{O}(\epsilon^{7})}$
(B.3) $\displaystyle|D^{u}_{1}/y_{t}|^{2}$ $\displaystyle=$ $\displaystyle
Y^{u}_{11}\,Y^{u*}_{11}-\frac{Y^{u*}_{11}\,Y^{u}_{12}\,Y^{u}_{21}}{Y^{u}_{22}}-\frac{Y^{u}_{11}\,Y^{u*}_{12}\,Y^{u*}_{21}}{Y^{u*}_{22}}+\frac{Y^{u}_{12}\,Y^{u*}_{12}\,Y^{u}_{21}\,Y^{u*}_{21}}{Y^{u}_{22}\,Y^{u*}_{22}}+{\mathcal{O}(\epsilon^{10})}$
(B.4)
The resulting approximate expressions for $U_{L}^{u{\dagger}}$ are
$\displaystyle(U_{L}^{u{\dagger}})^{13}=Y^{u}_{13}+Y^{u}_{12}\,Y^{u*}_{32}+{\mathcal{O}}(\epsilon^{6})$
(B.5)
$\displaystyle(U_{L}^{u{\dagger}})^{23}=Y^{u}_{23}+Y^{u}_{22}\,Y^{u*}_{32}+{\mathcal{O}}(\epsilon^{6})$
(B.6)
$\displaystyle(U_{L}^{u{\dagger}})^{33}=1-\frac{1}{2}Y^{u}_{23}Y^{u*}_{23}+{\mathcal{O}}(\epsilon^{6})$
(B.7) $\displaystyle(U_{L}^{u{\dagger}})^{12}$ $\displaystyle=$
$\displaystyle\frac{Y^{u}_{12}}{Y^{u}_{22}}$
$\displaystyle-\frac{{Y^{u}_{12}}^{2}\,Y^{u*}_{12}}{2\,{Y^{u}_{22}}^{2}\,Y^{u*}_{22}}-\frac{Y^{u}_{12}\,Y^{u}_{21}\,Y^{u*}_{21}}{{Y^{u}_{22}}^{2}\,Y^{u*}_{22}}+\frac{Y^{u}_{11}\,Y^{u*}_{21}}{Y^{u}_{22}\,Y^{u*}_{22}}-\frac{Y^{u}_{13}\,Y^{u}_{32}}{Y^{u}_{22}}+\frac{Y^{u}_{12}\,Y^{u}_{23}\,Y^{u}_{32}}{{Y^{u}_{22}}^{2}}+{\mathcal{O}}(\epsilon^{5})$
$\displaystyle(U_{L}^{u{\dagger}})^{22}$ $\displaystyle=$ $\displaystyle
1-\frac{1}{2}\left(|Y^{u}_{23}|^{2}+\left|\frac{Y^{u}_{12}}{Y^{u}_{22}}\right|^{2}\right)+{\mathcal{O}}(\epsilon^{4})$
(B.9) $\displaystyle(U_{L}^{u{\dagger}})^{32}$ $\displaystyle=$
$\displaystyle-Y^{u*}_{23}-Y^{u*}_{22}\,Y^{u}_{32}-\frac{Y^{u}_{12}\,Y^{u*}_{13}}{Y^{u}_{22}}+\frac{Y^{u}_{12}\,Y^{u*}_{12}\,Y^{u*}_{23}}{2\,Y^{u}_{22}\,Y^{u*}_{22}}+{\mathcal{O}}(\epsilon^{5})$
(B.10) $\displaystyle(U_{L}^{u{\dagger}})^{11}$ $\displaystyle=$
$\displaystyle
1-\frac{1}{2}\left|\frac{Y^{u}_{12}}{Y^{u}_{22}}\right|^{2}+{\mathcal{O}}(\epsilon^{4})$
(B.11) $\displaystyle(U_{L}^{u{\dagger}})^{21}$ $\displaystyle=$
$\displaystyle-\frac{Y^{u*}_{12}}{Y^{u*}_{22}}$
$\displaystyle+\frac{Y^{u}_{12}\,{Y^{u*}_{12}}^{2}}{2\,Y^{u}_{22}\,{Y^{u*}_{22}}^{2}}+\frac{Y^{u*}_{12}\,Y^{u}_{21}\,Y^{u*}_{21}}{Y^{u}_{22}\,{Y^{u*}_{22}}^{2}}-\frac{Y^{u*}_{11}\,Y^{u}_{21}}{Y^{u}_{22}\,Y^{u*}_{22}}+\frac{Y^{u*}_{13}\,Y^{u*}_{32}}{Y^{u*}_{22}}-\frac{Y^{u*}_{12}\,Y^{u*}_{23}\,Y^{u*}_{32}}{{Y^{u*}_{22}}^{2}}+{\mathcal{O}}(\epsilon^{5})$
$\displaystyle(U_{L}^{u{\dagger}})^{31}$ $\displaystyle=$
$\displaystyle-Y^{u*}_{13}+\frac{Y^{u*}_{12}\,Y^{u*}_{23}}{Y^{u*}_{22}}+{\mathcal{O}}(\epsilon^{5}).$
(B.13)
These results have been checked numerically, including the phases.
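A minimal numerical cross-check of these mass formulas, assuming a random hierarchical texture of the form of Eq. B.1 with $\epsilon=0.05$: the exact eigenvalues of $Y\,Y^{\dagger}$ are compared with the approximations of Eqs. (B.2) and (B.3).

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.05

def rnd(scale):
    # random complex entry with the indicated magnitude
    return scale * (rng.normal() + 1j * rng.normal()) / np.sqrt(2.0)

# Hierarchical Yukawa texture of the form of Eq. (B.1), with y_t factored out
Y = np.array([[rnd(eps**4), rnd(eps**3), rnd(eps**3)],
              [rnd(eps**3), rnd(eps**2), rnd(eps**2)],
              [rnd(eps**3), rnd(eps**2), 1.0]])

# Exact squared mass eigenvalues of Y Y^dagger, heaviest first
exact = np.sort(np.linalg.eigvalsh(Y @ Y.conj().T))[::-1]

# Approximations of Eqs. (B.2) and (B.3)
D3_sq = 1.0 + abs(Y[1, 2])**2 + abs(Y[2, 1])**2
D2_sq = (abs(Y[1, 1])**2 + abs(Y[0, 1])**2 + abs(Y[1, 0])**2
         - 2.0 * (Y[1, 1].conj() * Y[1, 2] * Y[2, 1]).real)

print("exact  |D3|^2, |D2|^2:", exact[0], exact[1])
print("approx |D3|^2, |D2|^2:", D3_sq, D2_sq)
```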
If we assume the $Y^{d}$ Yukawa matrix follows the same form as Eq. B.1, we
can obtain parallel expressions for $U^{d}_{L}$. The CKM matrix is then given
by $V_{CKM}=U^{u}_{L}U^{d{\dagger}}_{L}$, and we obtain the following expressions
for the individual components:
$\displaystyle V_{us}$ $\displaystyle=$
$\displaystyle\frac{Y^{d}_{12}}{Y^{d}_{22}}-\frac{Y^{u}_{12}}{Y^{u}_{22}}+$
$\displaystyle\frac{-{Y^{d}_{12}}^{2}\,Y^{d*}_{12}}{2\,{Y^{d}_{22}}^{2}\,Y^{d*}_{22}}-\frac{Y^{d}_{12}\,Y^{d}_{21}\,Y^{d*}_{21}}{{Y^{d}_{22}}^{2}\,Y^{d*}_{22}}+\frac{Y^{d}_{11}\,Y^{d*}_{21}}{Y^{d}_{22}\,Y^{d*}_{22}}-\frac{Y^{d}_{13}\,Y^{d}_{32}}{Y^{d}_{22}}+\frac{Y^{d}_{12}\,Y^{d}_{23}\,Y^{d}_{32}}{{Y^{d}_{22}}^{2}}+\frac{Y^{d}_{12}\,Y^{d*}_{12}\,Y^{u}_{12}}{2\,Y^{d}_{22}\,Y^{d*}_{22}\,Y^{u}_{22}}$
$\displaystyle+\frac{{Y^{u}_{12}}^{2}\,Y^{u*}_{12}}{2\,{Y^{u}_{22}}^{2}\,Y^{u*}_{22}}+\frac{Y^{u}_{12}\,Y^{u}_{21}\,Y^{u*}_{21}}{{Y^{u}_{22}}^{2}\,Y^{u*}_{22}}-\frac{Y^{d}_{12}\,Y^{u}_{12}\,Y^{u*}_{12}}{2\,Y^{d}_{22}\,Y^{u}_{22}\,Y^{u*}_{22}}-\frac{Y^{u}_{11}\,Y^{u*}_{21}}{Y^{u}_{22}\,Y^{u*}_{22}}+\frac{Y^{u}_{13}\,Y^{u}_{32}}{Y^{u}_{22}}-\frac{Y^{u}_{12}\,Y^{u}_{23}\,Y^{u}_{32}}{{Y^{u}_{22}}^{2}}+{\mathcal{O}(\epsilon^{5})}$
$\displaystyle V_{cd}$ $\displaystyle=$
$\displaystyle-\frac{Y^{d*}_{12}}{Y^{d*}_{22}}+\frac{Y^{u*}_{12}}{Y^{u*}_{22}}$
$\displaystyle-\frac{Y^{u}_{12}\,{Y^{u*}_{12}}^{2}}{2\,Y^{u}_{22}\,{Y^{u*}_{22}}^{2}}-\frac{Y^{u*}_{12}\,Y^{u}_{21}\,Y^{u*}_{21}}{Y^{u}_{22}\,{Y^{u*}_{22}}^{2}}+\frac{Y^{d*}_{12}\,Y^{u}_{12}\,Y^{u*}_{12}}{2\,Y^{d*}_{22}\,Y^{u}_{22}\,Y^{u*}_{22}}+\frac{Y^{u*}_{11}\,Y^{u}_{21}}{Y^{u}_{22}\,Y^{u*}_{22}}-\frac{Y^{u*}_{13}\,Y^{u*}_{32}}{Y^{u*}_{22}}+\frac{Y^{u*}_{12}\,Y^{u*}_{23}\,Y^{u*}_{32}}{{Y^{u*}_{22}}^{2}}$
$\displaystyle\frac{Y^{d}_{12}\,{Y^{d*}_{12}}^{2}}{2\,Y^{d}_{22}\,{Y^{d*}_{22}}^{2}}+\frac{Y^{d*}_{12}\,Y^{d}_{21}\,Y^{d*}_{21}}{Y^{d}_{22}\,{Y^{d*}_{22}}^{2}}-\frac{Y^{d*}_{11}\,Y^{d}_{21}}{Y^{d}_{22}\,Y^{d*}_{22}}+\frac{Y^{d*}_{13}\,Y^{d*}_{32}}{Y^{d*}_{22}}-\frac{Y^{d*}_{12}\,Y^{d*}_{23}\,Y^{d*}_{32}}{{Y^{d*}_{22}}^{2}}-\frac{Y^{d}_{12}\,Y^{d*}_{12}\,Y^{u*}_{12}}{2\,Y^{d}_{22}\,Y^{d*}_{22}\,Y^{u*}_{22}}+{\mathcal{O}(\epsilon^{5})}$
$\displaystyle V_{ub}$ $\displaystyle=$ $\displaystyle
Y^{d}_{13}-Y^{u}_{13}-\frac{Y^{d}_{23}\,Y^{u}_{12}}{Y^{u}_{22}}+\frac{Y^{u}_{12}\,Y^{u}_{23}}{Y^{u}_{22}}+{\mathcal{O}(\epsilon^{5})}$
(B.16) $\displaystyle V_{td}$ $\displaystyle=$
$\displaystyle-Y^{d*}_{13}+Y^{u*}_{13}+\frac{Y^{d*}_{12}\,Y^{d*}_{23}}{Y^{d*}_{22}}-\frac{Y^{d*}_{12}\,Y^{u*}_{23}}{Y^{d*}_{22}}+{\mathcal{O}(\epsilon^{5})}$
(B.17) $\displaystyle V_{cb}$ $\displaystyle=$
$\displaystyle-Y^{u}_{23}+Y^{d}_{23}+{\mathcal{O}(\epsilon^{4})}$ (B.18)
$\displaystyle V_{ts}$ $\displaystyle=$ $\displaystyle
Y^{u*}_{23}-Y^{d*}_{23}+{\mathcal{O}(\epsilon^{4})}$ (B.19)
In the above equations, we have included enough terms to achieve experimental
accuracy for the Yukawa couplings.
For completeness, we include the diagonal entries
$\displaystyle V_{ud}$ $\displaystyle=$ $\displaystyle
1-\frac{Y^{d}_{12}\,Y^{d*}_{12}}{2\,Y^{d}_{22}\,Y^{d*}_{22}}+\frac{Y^{d*}_{12}\,Y^{u}_{12}}{Y^{d*}_{22}\,Y^{u}_{22}}-\frac{Y^{u}_{12}\,Y^{u*}_{12}}{2\,Y^{u}_{22}\,Y^{u*}_{22}}$
(B.20) $\displaystyle V_{cs}$ $\displaystyle=$ $\displaystyle
1-\frac{Y^{d}_{12}\,Y^{d*}_{12}}{2\,Y^{d}_{22}\,Y^{d*}_{22}}+\frac{Y^{d}_{12}\,Y^{u*}_{12}}{Y^{d}_{22}\,Y^{u*}_{22}}-\frac{Y^{u}_{12}\,Y^{u*}_{12}}{2\,Y^{u}_{22}\,Y^{u*}_{22}}$
(B.21) $\displaystyle V_{tb}$ $\displaystyle=$ $\displaystyle
1+Y^{d}_{23}\,Y^{u*}_{23}-\frac{Y^{d}_{23}\,Y^{d*}_{23}}{2}-\frac{Y^{u}_{23}\,Y^{u*}_{23}}{2}$
(B.22)
To express the Wolfenstein $A$ and $\lambda$ parameters to experimental accuracy,
one needs the lengthy expression for $V_{us}$. Because the Wolfenstein
parameters are redundant with the $V_{CKM}$ elements on which they are
based, we omit explicit expressions for $A$ and $\lambda$. The
Wolfenstein parameters $\bar{\rho}+i\,\bar{\eta}$ provide a phase-convention-independent
expression for the CP violation and can be written compactly:
$\displaystyle\bar{\rho}+i\,\bar{\eta}$ $\displaystyle=$
$\displaystyle-\frac{Y^{d*}_{13}-Y^{u*}_{13}-\frac{Y^{d*}_{23}\,Y^{u*}_{12}}{Y^{u*}_{22}}+\frac{Y^{u*}_{12}\,Y^{u*}_{23}}{Y^{u*}_{22}}}{\left(-\frac{Y^{d*}_{12}}{Y^{d*}_{22}}+\frac{Y^{u*}_{12}}{Y^{u*}_{22}}\right)\,\left(Y^{d*}_{23}-Y^{u*}_{23}\right)}.$
(B.23)
The Jarlskog CP invariant is given by
$\displaystyle J_{CP}$ $\displaystyle=$
$\displaystyle{\rm{Im}}\left(-\frac{\left(\frac{Y^{d*}_{12}}{Y^{d*}_{22}}-\frac{Y^{u*}_{12}}{Y^{u*}_{22}}\right)\,\left(-\left(Y^{d}_{23}\,Y^{u}_{12}\right)+Y^{d}_{13}\,Y^{u}_{22}-Y^{u}_{13}\,Y^{u}_{22}+Y^{u}_{12}\,Y^{u}_{23}\right)\,\left(Y^{d*}_{23}-Y^{u*}_{23}\right)}{Y^{u}_{22}}\right).$
(B.24)
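These CKM expressions can be cross-checked numerically in the same spirit. The sketch below builds random hierarchical up- and down-type textures of the Eq. B.1 form (an illustrative assumption), diagonalizes them exactly, and compares the magnitudes of $V_{cb}$ and $V_{us}$ with the leading terms of Eqs. (B.18) and (B.14); the magnitudes are insensitive to the arbitrary eigenvector phases.

```python
import numpy as np

rng = np.random.default_rng(7)
eps = 0.05

def hier():
    # random hierarchical texture of the Eq. (B.1) form (overall Yukawa factored out)
    r = lambda s: s * (rng.normal() + 1j * rng.normal()) / np.sqrt(2.0)
    return np.array([[r(eps**4), r(eps**3), r(eps**3)],
                     [r(eps**3), r(eps**2), r(eps**2)],
                     [r(eps**3), r(eps**2), 1.0]])

Yu, Yd = hier(), hier()

def UL(Y):
    # U_L with U_L (Y Y^dag) U_L^dag diagonal, generations ordered light -> heavy
    _, V = np.linalg.eigh(Y @ Y.conj().T)   # eigenvalues ascending
    return V.conj().T

Vckm = UL(Yu) @ UL(Yd).conj().T             # V_CKM = U^u_L U^{d dagger}_L

print("|V_cb| numeric  :", abs(Vckm[1, 2]))
print("|V_cb| Eq.(B.18):", abs(Yd[1, 2] - Yu[1, 2]))
print("|V_us| numeric  :", abs(Vckm[0, 1]))
print("|V_us| leading  :", abs(Yd[0, 1] / Yd[1, 1] - Yu[0, 1] / Yu[1, 1]))
```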
## Appendix C Verifying $M_{T2}$ in Eq(5.4)
We derived the $M_{T2}$ side of Eq(5.4) by following the analytic solution
given by Barr and Lester in [170]. In this appendix, we outline how to verify
that $M_{T2}$ is indeed given by
$M_{T2}(\chi=0,\alpha,\beta,\not{P}_{T}=-\alpha_{T}-\beta_{T})=2({\alpha}_{T}\cdot{\beta}_{T}+|{\alpha}_{T}|\,|{\beta}_{T}|)$
(C.1)
when $\alpha^{2}=\beta^{2}=0$, $p^{2}=q^{2}=\chi^{2}=0$, and $g_{T}=0$. To
do this we note that $M_{T2}$ can also be defined as the minimum of
$(\alpha+p)^{2}$ over $p$ and $q$ subject to the conditions
$p^{2}=q^{2}=\chi^{2}$ (on-shell dark-matter particle states),
$(\alpha+p)^{2}=(\beta+q)^{2}$ (equal on-shell parent-particle masses), and
$(\alpha+\beta+p+q+g)_{T}=0$ (conservation of transverse momentum) [15].
The solution which gives Eq(C.1) has $p_{T}=-\beta_{T}$ and
$q_{T}=-\alpha_{T}$, with the rapidity of $p$ ($q$) equal to the rapidity of
$\alpha$ ($\beta$). We now verify that this solution satisfies all of these
constraints. Transverse momentum conservation is satisfied trivially:
$(\alpha+\beta+p+q)_{T}=(\alpha+\beta-\alpha-\beta)_{T}=0$. The constraint that
the parent particles be on-shell can be verified with
$2|\alpha_{T}||p_{T}|-2\vec{p}_{T}\cdot\vec{\alpha}_{T}=2|\beta_{T}||q_{T}|-2\vec{q}_{T}\cdot\vec{\beta}_{T}=2|\alpha_{T}||\beta_{T}|+2\vec{\alpha}_{T}\cdot\vec{\beta}_{T}$.
Now all that remains is to show that the parent particle’s mass is at a
minimum with respect to ways in which one splits up $p$ and $q$ to satisfy
$p_{T}+q_{T}=\not{P}_{T}$ while satisfying the above constraints. We take $p$
and $q$ to be a small deviation from the stated solution
${p}_{T}=-{\beta}_{T}+{\delta}_{T}$ and ${q}_{T}=-\alpha_{T}-{\delta}_{T}$
where $\delta_{T}$ is the small deviation in the transverse plane. We keep $p$
and $q$ on shell at $\chi=0$. The components $p_{o}$, $p_{z}$, $q_{o}$, $q_{z}$ are
maintained at their minimum by keeping the rapidities of $p$ and $q$ equal to
those of $\alpha$ and $\beta$. The condition $(p+\alpha)^{2}=(q+\beta)^{2}$ is
satisfied for a curve of values for $\delta_{T}$. The deviation tangent to
this curve near $|\delta_{T}|=0$ is given by
$\delta_{T}(\lambda)=\lambda\,\hat{z}\times(\alpha_{T}|\beta_{T}|+\beta_{T}|\alpha_{T}|)$
where $\times$ is a cross product, $\hat{z}$ denotes the beam direction, and
we parameterized the magnitude by the scalar $\lambda$. Finally, we can
substitute $p$ and $q$ with the deviation $\delta_{T}(\lambda)$ back into the
expression $(\alpha+p)^{2}$ and verify that
$2({\alpha}_{T}\cdot{\beta}_{T}+|{\alpha}_{T}|\,|{\beta}_{T}|)$ at $\lambda=0$
is indeed the minimum.
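The same verification can be carried out numerically. The sketch below minimizes the larger of the two transverse masses over the splitting of the missing transverse momentum for a random massless event with $\chi=0$ and $g_{T}=0$, and compares the minimum with the right-hand side of Eq(C.1); the event kinematics are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Two massless visible transverse momenta (illustrative random event, GeV)
aT = rng.normal(size=2) * 50.0
bT = rng.normal(size=2) * 50.0
ptmiss = -(aT + bT)                      # no upstream momentum, g_T = 0

def mT_sq(vT, iT):
    # transverse mass squared for a massless visible and massless invisible (chi = 0)
    return 2.0 * (np.linalg.norm(vT) * np.linalg.norm(iT) - vT @ iT)

def objective(pT):
    qT = ptmiss - pT
    return max(mT_sq(aT, pT), mT_sq(bT, qT))

# Minimize over the splitting of the missing transverse momentum
res = minimize(objective, x0=0.5 * ptmiss, method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10})

analytic = 2.0 * (aT @ bT + np.linalg.norm(aT) * np.linalg.norm(bT))
print("numerical minimum :", res.fun)
print("Eq.(C.1) value    :", analytic)
```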
## Appendix D Fitting Distributions to Data
In order to determine $M_{N}$ (which is $M_{\tilde{\chi}^{o}_{1}}$ in our case
studies), we perform a $\chi^{2}$ fit between ideal constrained mass-variable
distributions and the HERWIG data or other low-statistics data. Because the
mass difference is taken as given, it does not matter whether we work with $M_{N}$ or
$M_{Y}$ as the independent variable.
First we illustrate the procedure when only one distribution is being fit,
as in Chapter 6. We work with the constrained mass variable $M_{2C}$ using
$M_{Y}$ as the independent variable. To do this we define a $\chi^{2}$
statistic by computing the number of events, $C_{j}$, in a given range $j$
(bin $j$) of $M_{2C}$. The variable $N$ is the total number of events in the
bins that will be fit. Assuming a Poisson distribution, we assign an
uncertainty, $\sigma_{j}$, to each bin $j$ given by
$\sigma_{j}^{2}=\frac{1}{2}\left(N\,f({M_{2C}}_{j},M_{Y})+C_{j}\right).$ (D.1)
Here $f(M_{2C},M_{Y})$ is the normalized distribution of ideal events, and the
second term has been added to ensure that bins with very few events, where
Poisson statistics does not apply (by this we mean that
$N\,f({M_{2C}}_{j},M_{Y})$ is less than about 5 and therefore has a large
percent error when used as a predictor of the number of counts $C_{j}$),
receive a reasonable weighting. Then $\chi^{2}$ is given by
$\chi^{2}(M_{Y})=\sum_{\mathrm{bin}\
j}\left(\frac{C_{j}-N\,f({M_{2C}}_{j},M_{Y})}{\sigma_{j}}\right)^{2}.$ (D.2)
The value of $M_{Y}$ that minimizes $\chi^{2}(M_{Y})$ is our estimate of $M_{Y}$.
The amount by which $M_{Y}$ changes for an increase of $\chi^{2}$ by one gives our
$1\,\sigma$ uncertainty, $\delta M_{Y}$, on $M_{Y}$ [200]. As justification for
this, we use ten different random-number seeds to generate ten distinct groups of
250 events and check that the $M_{Y}$ estimates for the ten sets are
distributed with about 2/3 within $\delta M_{Y}$ of the true $M_{Y}$, as one
would expect for $1\,\sigma$ error bars. One might worry that, with our
definition of $\chi^{2}$, the value of $\chi^{2}$ per degree of freedom is
less than one. However, this is an artifact of the bins with very
few or zero events not being adequately described by Poisson statistics; if
we remove them, we do obtain a reasonable $\chi^{2}$ per degree of freedom, and the
determination of $M_{Y}$ using this reduced set gives similar results.
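A minimal sketch of this single-distribution fit is given below; the helper names (`f_ideal`, `fit_MY`) are chosen for illustration. It implements the bin uncertainties of Eq. (D.1) and the $\chi^{2}$ of Eq. (D.2) on a grid of trial $M_{Y}$ values and reads off the $\Delta\chi^{2}=1$ interval.

```python
import numpy as np

def chi2(MY, counts, bin_centers, f_ideal):
    """Eq. (D.2) with the bin uncertainties of Eq. (D.1).

    counts      : observed counts C_j in each M_2C bin (numpy array)
    bin_centers : the M_2C value assigned to each bin
    f_ideal     : callable f(M_2C, M_Y) giving the normalized ideal distribution
    """
    N = counts.sum()
    pred = N * np.array([f_ideal(m, MY) for m in bin_centers])
    sigma_sq = 0.5 * (pred + counts)            # Eq. (D.1)
    return np.sum((counts - pred) ** 2 / sigma_sq)

def fit_MY(counts, bin_centers, f_ideal, MY_grid):
    """Scan trial M_Y values; the minimum is the estimate and the
    Delta(chi^2) = 1 points give the 1-sigma interval."""
    chi = np.array([chi2(MY, counts, bin_centers, f_ideal) for MY in MY_grid])
    i = int(np.argmin(chi))
    inside = MY_grid[chi <= chi[i] + 1.0]
    return MY_grid[i], (inside.min(), inside.max())
```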
In Chapters 7 and 8 we determine the mass with two distributions (the upper
and the lower bound distributions). We treat these two distributions
separately and add the resulting $\chi^{2}$ to form a final $\chi^{2}$. In
these chapters we choose $M_{\tilde{\chi}^{o}_{1}}$ as the independent
variable. We illustrate with $M_{2C,LB}$ and $M_{2C,UB}$, but the procedure is
the same for $M_{3C}$.
First we update the definitions for this case. We define $N_{LB}$ as the
number of $M_{2C}$ events in the region to be fit, and likewise $N_{UB}$ as
the number of $M_{2C,UB}$ events in the region to be fit. The $M_{2C}$ values of the
events are grouped into bins, with $C_{j}$ counts in a given range $j$. The function
$f_{LB}(M_{2C},M_{\tilde{\chi}^{o}_{1}})$ is the normalized $M_{2C}$
distribution of ideal events expected in bin $j$, calculated with an assumed
$M_{\tilde{\chi}^{o}_{1}}$, the measured $M_{-}$, the observed $m_{ll}$
distribution, the observed UTM distribution, and the appropriate detector
simulation. We likewise define the upper-bound distribution to be
$f_{UB}(M_{2C,UB},M_{\tilde{\chi}^{o}_{1}})$. We also define the background
distributions for the lower-bound and upper-bound cases to be
$f_{B,LB}(M_{2C})$ and $f_{B,UB}(M_{2C,UB})$, and $\lambda$ to be the fraction of the
total events that we estimate to come from background.
Again, we assign an uncertainty, $\sigma_{j}$, to each bin $j$ given by
$\sigma_{LB,j}^{2}(M_{\tilde{\chi}^{o}_{1}})=\frac{1}{2}\left(N_{LB}\,((1-\lambda)f_{LB}({M_{2C}}_{j},M_{\tilde{\chi}^{o}_{1}})+\lambda
f_{B,LB}({M_{2C}}_{j}))+C_{j}\right),$ (D.3)
and likewise for the upper-bound distribution. The second term has been added
to ensure an appropriate weighting of bins with very few events, one that does not
bias the fit towards or away from this end-point. In bins with few counts,
normal Poisson statistics does not apply (by this we mean that
$N\,f({M_{2C}}_{j},M_{\tilde{\chi}^{o}_{1}})$ is less than about 5 and
therefore has a large percent error when used as a predictor of the number of
counts $C_{j}$).
The $\chi^{2}$ is given by
$\displaystyle\chi^{2}(M_{\tilde{\chi}^{o}_{1}})$ $\displaystyle=$
$\displaystyle\sum_{\mathrm{bin}\
j}\left(\frac{C_{j}-N_{LB}\,(1-\lambda)\,f_{LB}({M_{2C}}_{j},M_{\tilde{\chi}^{o}_{1}})-N_{LB}\,\lambda\,f_{B,LB}({M_{2C}}_{j},M_{\tilde{\chi}^{o}_{1}})}{\sigma_{LB,j}}\right)^{2}$
$\displaystyle+\sum_{\mathrm{bin}\
j}\left(\frac{C_{UB,j}-N_{UB}\,(1-\lambda)\,f_{UB}({M_{2C,UB}}_{j},M_{\tilde{\chi}^{o}_{1}})-N_{UB}\,\lambda\,f_{B,UB}({M_{2C,UB}}_{j},M_{\tilde{\chi}^{o}_{1}})}{\sigma_{UB,j}}\right)^{2}.$
We calculate ideal distributions for
$M_{\tilde{\chi}^{o}_{1}}=50,60,70,80,90\,\,{\rm{GeV}}$ and fit a quadratic
interpolant through the resulting $\chi^{2}$ points. The minimum
$\chi^{2}(M_{\tilde{\chi}^{o}_{1}})$ of the interpolant is our estimate of
$M_{\tilde{\chi}^{o}_{1}}$. The amount by which $M_{\tilde{\chi}^{o}_{1}}$ changes for
an increase in $\chi^{2}$ by one gives our $1\,\sigma$ uncertainty, $\delta
M_{\tilde{\chi}^{o}_{1}}$, on $M_{\tilde{\chi}^{o}_{1}}$ [200].
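A sketch of this interpolation step, with illustrative (assumed) $\chi^{2}$ values at the five mass points; a least-squares parabola stands in for the quadratic interpolant, and the $\Delta\chi^{2}=1$ rule gives the quoted uncertainty.

```python
import numpy as np

# chi^2 evaluated at the ideal-distribution mass points (illustrative values)
masses = np.array([50.0, 60.0, 70.0, 80.0, 90.0])     # GeV
chi2_vals = np.array([24.0, 12.5, 8.2, 11.0, 21.5])   # assumed, for illustration

# Least-squares parabola through the points
a, b, c = np.polyfit(masses, chi2_vals, 2)

m_best = -b / (2.0 * a)                       # minimum of the parabola
chi2_min = np.polyval([a, b, c], m_best)
dm = np.sqrt(1.0 / a)                         # a * dm^2 = 1  <=>  Delta(chi^2) = 1

print(f"M ~ {m_best:.1f} +/- {dm:.1f} GeV (chi^2_min = {chi2_min:.2f})")
```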
The $M_{3C}$ fits in Chapter 8 were performed following this same procedure.
In this case we choose $M_{\tilde{\chi}^{o}_{1}}=80,85,90,95,100,105,110$ GeV.
For the $M_{3C}$ studies we did not create a background model and therefore
fixed the background parameter to $\lambda=0$. The studies involving nine
different choices of $(\Delta M_{YN},\Delta M_{XN})$ were done using
$M_{\tilde{\chi}^{o}_{1}}=90,100,110$ GeV.
There are some subtleties for which we should check. If the bin size is not large
enough, artificially large variations in the distribution sometimes bias
the fits to place the endpoint near a fluctuation. The bin size should be made
large enough that this does not happen, which can be checked by testing
whether the results are invariant under small changes in the bin size.
Because of the larger number of lower-bound events, the optimal bin
size may differ between the upper-bound and lower-bound distributions.
## Appendix E Uniqueness of Event Reconstruction
In Chapters 6 and 8 we claim that events near an endpoint of the $M_{2C}$
and $M_{3C}$ distributions (events that nearly saturate the bound) are nearly
reconstructed. This appendix offers a proof of this claim. To prove uniqueness,
we need to establish that as the $M_{3C}$ or $M_{2C}$ of an event (lower bound or
upper bound) approaches the endpoint of the distribution, the solutions with
different values of $q$ and $p$ approach a common solution.
Figure E.1: The ellipses defined for $p_{o}$ and $q_{o}$ in
Eqs(8.13-8.14), using the correct mass scale, for an event that nearly saturates
the $M_{3C}$ endpoint. For this event, $M_{3C}$ lies within $1\%$ of the
endpoint and reconstructs $p$ and $q$ to within $4\%$. Perfect resolution
and correct combinatorics are assumed.
We begin with $M_{3C}$. Sec. 8.2 shows that there are at most four solutions
given $M_{N}$, $M_{X}$ and $M_{Y}$ formed by the intersection of two ellipses
in $(p_{o},q_{o})$ defined by Eqs(8.13-8.14) as shown in Fig E.1. Consider the
case that an event has a lower bound $M_{3C}$ near $M_{Y}$. We are guaranteed
that a solution occurs at the true mass scale when we choose the correct
combinatoric assignments. As one varies the mass scale downward, the two
ellipses drift and change shape and size so that four solutions become two
solutions and eventually, at the value of $M_{3C}$ for the event, become one
single solution. When the disconnection of the two ellipses occurs near the
true mass scale, the value of $M_{3C}$ will be near the endpoint. The unique
solutions for $p$ and $q$ given at $M_{3C}$ are nearly degenerate with the
true values of $p$ and $q$ found when one uses the true masses to solve for
$p$ and $q$. The closer $M_{3C}$ is to the endpoint, the closer the two
ellipses are to intersecting at a single point when the true masses are used,
and the closer we are to a unique reconstruction. The example pictured in Fig. E.1 shows
an event with $M_{3C}$ within $1\%$ of the endpoint and where $p$ and $q$
are reconstructed to within $4\%$. This shows that $M_{3C}$ events
near the endpoint nearly reconstruct the true values of $p$ and $q$. If there are combinatoric
ambiguities, one needs to test all combinatoric possibilities; if the minimum
combinatoric option has a lower bound at the end-point, the above arguments
follow unchanged. The above arguments can be repeated to show that $M_{3C,UB}$ near
the endpoint also reconstructs the correct $p$ and $q$.
Next we turn to $M_{2C}$. For every event the lower-bounds satisfy
$M_{2C}(M_{Y}-M_{N})\leq M_{3C}(M_{Y}-M_{N},M_{X}-M_{N})$. With $M_{2C}$ the
propagator $(p+\alpha_{2})^{2}$, which we can equate with $\chi_{X}^{2}$, is
not fixed. The kinematically allowed values for $\chi_{X}$ are
$M_{N}^{2}<\chi^{2}_{X}<M_{Y}^{2}$ assuming the visible states $\alpha_{1}$
and $\alpha_{2}$ are massless. Eq(8.12) shows that $\vec{p}$ and $\vec{q}$
solutions are linear in $\chi^{2}_{X}$ with no terms dependent on $\chi_{X}$
alone or other powers of $\chi_{X}$. Including $\chi_{X}^{2}$ as a free
parameter in Eqs(8.13-8.14) leads to two ellipsoids (or hyperboloids) in the
space $(p_{o},q_{o},\chi_{X}^{2})$. We will assume without loss of generality
that these are ellipsoids. Again, at the true mass scale we are guaranteed the
two ellipsoids intersect at an ellipse. Now as one varies the mass scale the
two ellipsoids drift and change shape and size. The $M_{2C}$ value then
corresponds to the mass scale where the two ellipsoids are in contact at one
point. As we select events with a value of $M_{2C}$ that approaches the true
mass scale, the intersection of the two ellipsoids shrinks to a point, giving a
unique reconstruction of $p$ and $q$. The combinatoric ambiguities for
$M_{2C}$ are avoided by selecting events with two distinct OSSF pairs. Events
that saturate the upper bound of $M_{2C}$ also reconstruct $p$ and $q$ by the
same logic as above.
## Appendix F Acronyms List
* •
$\Lambda$CDM $\Lambda$(Cosmological Constant) Cold Dark Matter
* •
CKM Cabibbo-Kobayashi-Maskawa
* •
CMB Cosmic Microwave Background
* •
$\overline{DR}$ Dimensional Reduction (Renormalization Scheme)
* •
GIM Glashow-Iliopoulos-Maiani
* •
GJ Georgi-Jarlskog
* •
GUT Grand Unified Theory
* •
ISR Initial State Radiation
* •
KK Kaluza Klein
* •
LHC Large Hadron Collider
* •
LKP Lightest Kaluza Klein particle
* •
LSP Lightest supersymmetric particle
* •
$\overline{MS}$ Minimal Subtraction (Renormalization Scheme)
* •
MACHO MAssive Compact Halo Objects
* •
OSSF Opposite-Sign Same Flavor
* •
QFT Quantum Field Theory
* •
REWSB Radiative Electroweak Symmetry Breaking
* •
RG Renormalization Group
* •
RGE Renormalization Group Equations
* •
SM Standard Model
* •
SSB Spontaneous Symmetry Breaking
* •
SUSY Supersymmetry
* •
UED Universal Extra Dimensions
* •
UTM Upstream Transverse Momentum
* •
VEV Vacuum Expectation Value
## References
* [1] R. G. Roberts, A. Romanino, G. G. Ross, and L. Velasco-Sevilla, Precision test of a fermion mass texture, Nucl. Phys. B615 (2001) 358–384, [hep-ph/0104088].
* [2] W. Vandelli, Prospects for the detection of chargino-neutralino direct production with ATLAS detector at the LHC. PhD dissertation, Universita Degli Studi Di Pavia, Dipartimento Di Fisica, 2006.
* [3] B. C. Allanach et. al., The snowmass points and slopes: Benchmarks for susy searches, hep-ph/0202233.
* [4] R. Boyle, A Free Enquiry into the Vulgarly Received Notion of Nature. Cambridge University Press, 1685. Introduction of 1996 edition.
* [5] F. Wilczek, From “not wrong” to (maybe) right, Nature 428 (2004) 261.
* [6] G. D’Agostini, From observations to hypotheses: Probabilistic reasoning versus falsificationism and its statistical variations, physics/0412148.
* [7] CDF Collaboration, Luminosity web page, http://www-cdf.fnal.gov/~konigsb/lum_official_page.html (June, 2008).
* [8] W.-M. Yao, Review of Particle Physics, Journal of Physics G 33 (2008) 1+.
* [9] ATLAS Collaboration, ATLAS computing: Technical design report, CERN, ATLAS-TDR-017, CERN-LHCC-2005-022 (2005).
* [10] G. Ross and M. Serna, Unification and Fermion Mass Structure, Phys. Lett. B664 (2008) 97–102, [0704.1248].
* [11] H. Georgi and C. Jarlskog, A new lepton - quark mass relation in a unified theory, Phys. Lett. B86 (1979) 297–300.
* [12] C. G. Lester and D. J. Summers, Measuring masses of semi-invisibly decaying particles pair produced at hadron colliders, Phys. Lett. B463 (1999) 99–103, [hep-ph/9906349].
* [13] A. Barr, C. Lester, and P. Stephens, m(T2): The truth behind the glamour, J. Phys. G29 (2003) 2343–2363, [hep-ph/0304226].
* [14] M. Serna, A short comparison between $m_{T2}$ and $m_{CT}$, JHEP 06 (2008) 004, [0804.3344].
* [15] G. G. Ross and M. Serna, Mass Determination of New States at Hadron Colliders, Phys. Lett. B665 (2008) 212–218, [0712.0943].
* [16] A. J. Barr, G. G. Ross, and M. Serna, The Precision Determination of Invisible-Particle Masses at the LHC, Phys. Rev. D78 (2008) 056006, [0806.3224].
* [17] G. Corcella et. al., HERWIG 6.5 release note, hep-ph/0210213.
* [18] S. Moretti, K. Odagiri, P. Richardson, M. H. Seymour, and B. R. Webber, Implementation of supersymmetric processes in the HERWIG event generator, JHEP 04 (2002) 028, [hep-ph/0204123].
* [19] G. Marchesini et. al., HERWIG: A Monte Carlo event generator for simulating hadron emission reactions with interfering gluons. Version 5.1 - April 1991, Comput. Phys. Commun. 67 (1992) 465–508.
* [20] A. J. Barr, A. Pinder, and M. Serna, Precision Determination of Invisible-Particle Masses at the CERN LHC: II, Phys. Rev. D79 (2009) 074005, [0811.2138].
* [21] P. A. M. Dirac, The Quantum theory of electron, Proc. Roy. Soc. Lond. A117 (1928) 610–624.
* [22] C. D. Anderson, The positive electron, Phys. Rev. 43 (Mar, 1933) 491–494.
* [23] E. Lane, Harvard’s Nima Arkani-Hamed Ponders New Universes, Different Dimensions, AAAS: Advancing Science Serving Society (May, 2005).
* [24] J. D. Jackson, Weisskopf tribute, Cern Courier (Dec, 2002).
* [25] V. F. Weisskopf, On the self-energy and the electromagnetic field of the electron, Phys. Rev. 56 (Jul, 1939) 72–85.
* [26] M. Gell-Mann, The Eightfold Way: A Theory of strong interaction symmetry, . CTSL-20.
* [27] Y. Ne’eman, Derivation of strong interactions from a gauge invariance, Nucl. Phys. 26 (1961) 222–229.
* [28] T. Ne’eman, In remembrance of Yuval Ne’eman (1925-2006), Physicsaplus (2006). http://physicsaplus.org.il.
* [29] G. Fraser, ed., The particle century. Bristol, UK: IOP (1998) 232 p.
* [30] V. E. Barnes, P. L. Connolly, D. J. Crennell, B. B. Culwick, W. C. Delaney, W. B. Fowler, P. E. Hagerty, E. L. Hart, N. Horwitz, P. V. C. Hough, J. E. Jensen, J. K. Kopp, K. W. Lai, J. Leitner, J. L. Lloyd, G. W. London, T. W. Morris, Y. Oren, R. B. Palmer, A. G. Prodell, D. Radojičić, D. C. Rahm, C. R. Richardson, N. P. Samios, J. R. Sanford, R. P. Shutt, and J. R. Smith, Observation of a hyperon with strangeness minus three, Phys. Rev. Lett. 12 (Feb, 1964) 204–206.
* [31] BABAR Collaboration, B. Aubert et. al., Measurement of the spin of the Omega- hyperon at BABAR, Phys. Rev. Lett. 97 (2006) 112001, [hep-ex/0606039].
* [32] S. Okubo, Note on unitary symmetry in strong interactions, Prog. Theor. Phys. 27 (1962) 949–966.
* [33] S. Weinberg, Quantum Theory of Fields. Cambridge University Press, 1996.
* [34] T.-P. Cheng and L.-F. Li, Gauge theory of elementary particle physics. Oxford University Press, 1984.
* [35] M. K. Gaillard and B. W. Lee, Rare Decay Modes of the K-Mesons in Gauge Theories, Phys. Rev. D10 (1974) 897.
* [36] T. D. Lee, ANALYSIS OF DIVERGENCES IN A NEUTRAL SPIN 1 MESON THEORY WITH PARITY NONCONSERVING INTERACTIONS, Nuovo Cim. A59 (1969) 579–598.
* [37] S. L. Glashow, J. Iliopoulos, and L. Maiani, Weak Interactions with Lepton-Hadron Symmetry, Phys. Rev. D2 (1970) 1285–1292.
* [38] V. I. Borodulin, R. N. Rogalev, and S. R. Slabospitsky, CORE: COmpendium of RElations: Version 2.1, hep-ph/9507456.
* [39] M. Srednicki, Quantum field theory, . Cambridge, UK: Univ. Pr. (2007) 641 p.
* [40] P. Ramond, Journeys beyond the standard model, . Reading, Mass., Perseus Books, 1999.
* [41] W.-M. Yao, Review of Particle Physics, Journal of Physics G 33 (2006) 1+.
* [42] S. Weinberg, A Model of Leptons, Phys. Rev. Lett. 19 (1967) 1264–1266.
* [43] O. W. Greenberg, Spin and Unitary Spin Independence in a Paraquark Model of Baryons and Mesons, Phys. Rev. Lett. 13 (1964) 598–602.
* [44] O. W. Greenberg, The color charge degree of freedom in particle physics, 0805.0289.
* [45] D. J. Gross and F. Wilczek, Asymptotically Free Gauge Theories. 1, Phys. Rev. D8 (1973) 3633–3652.
* [46] H. D. Politzer, RELIABLE PERTURBATIVE RESULTS FOR STRONG INTERACTIONS?, Phys. Rev. Lett. 30 (1973) 1346–1349.
* [47] C. H. Llewellyn Smith and J. F. Wheater, Electroweak Radiative Corrections and the Value of sin**2- Theta-W, Phys. Lett. B105 (1981) 486.
* [48] UA1 Collaboration, G. Arnison et. al., Experimental observation of isolated large transverse energy electrons with associated missing energy at s**(1/2) = 540-GeV, Phys. Lett. B122 (1983) 103–116.
* [49] UA1 Collaboration, G. Arnison et. al., Experimental observation of lepton pairs of invariant mass around 95-GeV/c**2 at the CERN SPS collider, Phys. Lett. B126 (1983) 398–410.
* [50] G. Bertone, D. Hooper, and J. Silk, Particle dark matter: Evidence, candidates and constraints, Phys. Rept. 405 (2005) 279–390, [hep-ph/0404175].
* [51] H. Baer and X. Tata, Dark matter and the LHC, 0805.1905.
* [52] J. Oort, The force exerted by the stellar system in the direction perpendicular to the galactic plane and some related problems, Bulletin of the Astronomical Institutes of the Netherlands VI (1932) 249.
* [53] Y. Sofue and V. Rubin, Rotation Curves of Spiral Galaxies, Annual Reviews Astronomy and Astrophysics 39 (2001) 137–174, [arXiv:astro-ph/0010594].
* [54] D. Russeil, O. Garrido, P. Amram, and M. Marcelin, Rotation curve of our Galaxy and field galaxies, in Dark Matter in Galaxies (S. Ryder, D. Pisano, M. Walker, and K. Freeman, eds.), vol. 220 of IAU Symposium, pp. 211–+, July, 2004.
* [55] F. Zwicky, On the masses of nebulae and of clusters of nebulae, The Astrophysical Journal 86 (1937) 217.
* [56] R. Bottema, J. L. G. Pestana, B. Rothberg, and R. H. Sanders, MOND rotation curves for spiral galaxies with Cepheid- based distances, Astron. Astrophys. 393 (2002) 453–460, [astro-ph/0207469].
* [57] D. Clowe et. al., A direct empirical proof of the existence of dark matter, Astrophys. J. 648 (2006) L109–L113, [astro-ph/0608407].
* [58] D. Spergel, Particle Dark Matter, astro-ph/9603026.
* [59] J. B. Hartle, An introduction to Einstein’s general relativity, . San Francisco, USA: Addison-Wesley (2003) 582 p.
* [60] J. Frieman, M. Turner, and D. Huterer, Dark Energy and the Accelerating Universe, 0803.0982.
* [61] CDMS Collaboration, Z. Ahmed et. al., A Search for WIMPs with the First Five-Tower Data from CDMS, 0802.3530.
* [62] XENON Collaboration, J. Angle et. al., First Results from the XENON10 Dark Matter Experiment at the Gran Sasso National Laboratory, Phys. Rev. Lett. 100 (2008) 021303, [0706.0039].
* [63] DAMA Collaboration, R. Bernabei et. al., On a further search for a yearly modulation of the rate in particle dark matter direct search, Phys. Lett. B450 (1999) 448–455.
* [64] DAMA Collaboration, R. Bernabei et. al., First results from DAMA/LIBRA and the combined results with DAMA/NaI, 0804.2741.
* [65] H. Baer, A. Mustafayev, E.-K. Park, and X. Tata, Target dark matter detection rates in models with a well- tempered neutralino, JCAP 0701 (2007) 017, [hep-ph/0611387].
* [66] J. R. Ellis, K. A. Olive, Y. Santoso, and V. C. Spanos, Update on the direct detection of supersymmetric dark matter, Phys. Rev. D71 (2005) 095007, [hep-ph/0502001].
* [67] S. Arrenberg, L. Baudis, K. Kong, K. T. Matchev, and J. Yoo, Kaluza-Klein Dark Matter: Direct Detection vis-a-vis LHC, 0805.4210.
* [68] W. de Boer, C. Sander, V. Zhukov, A. V. Gladyshev, and D. I. Kazakov, EGRET excess of diffuse galactic gamma rays as tracer of dark matter, Astron. Astrophys. 444 (2005) 51, [astro-ph/0508617].
* [69] CDMS Collaboration, D. S. Akerib et. al., First results from the cryogenic dark matter search in the Soudan Underground Lab, Phys. Rev. Lett. 93 (2004) 211301, [astro-ph/0405033].
* [70] M. Tegmark, Cosmological neutrino bounds for non-cosmologists, Phys. Scripta T121 (2005) 153–155, [hep-ph/0503257].
* [71] N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, The hierarchy problem and new dimensions at a millimeter, Phys. Lett. B429 (1998) 263–272, [hep-ph/9803315].
* [72] T. Appelquist, H.-C. Cheng, and B. A. Dobrescu, Bounds on universal extra dimensions, Phys. Rev. D64 (2001) 035002, [hep-ph/0012100].
* [73] G. Servant and T. M. P. Tait, Is the lightest Kaluza-Klein particle a viable dark matter candidate?, Nucl. Phys. B650 (2003) 391–419, [hep-ph/0206071].
* [74] M. Battaglia, A. Datta, A. De Roeck, K. Kong, and K. T. Matchev, Contrasting supersymmetry and universal extra dimensions at the CLIC multi-TeV e+ e- collider, JHEP 07 (2005) 033, [hep-ph/0502041].
* [75] R. Haag, J. T. Lopuszanski, and M. Sohnius, All Possible Generators of Supersymmetries of the s Matrix, Nucl. Phys. B88 (1975) 257.
* [76] F. Gliozzi, J. Scherk, and D. I. Olive, Supersymmetry, Supergravity Theories and the Dual Spinor Model, Nucl. Phys. B122 (1977) 253–290.
* [77] D. Stockinger, The muon magnetic moment and supersymmetry, hep-ph/0609168.
* [78] S. P. Martin, A supersymmetry primer, hep-ph/9709356.
* [79] H. Baer and X. Tata, Weak scale supersymmetry: From superfields to scattering events, . Cambridge, UK: Univ. Pr. (2006) 537 p.
* [80] S. J. Gates, M. T. Grisaru, M. Rocek, and W. Siegel, Superspace, or one thousand and one lessons in supersymmetry, Front. Phys. 58 (1983) 1–548, [hep-th/0108200].
* [81] J. Wess and J. Bagger, Supersymmetry and supergravity, . Princeton, USA: Univ. Pr. (1992) 259 p.
* [82] K. E. Cahill, Elements of supersymmetry, hep-ph/9907295.
* [83] I. J. R. Aitchison, Supersymmetry and the MSSM: An elementary introduction, hep-ph/0505105.
* [84] L. E. Ibanez and G. G. Ross, Supersymmetric Higgs and radiative electroweak breaking, Comptes Rendus Physique 8 (2007) 1013–1028, [hep-ph/0702046].
* [85] L. E. Ibanez and G. G. Ross, $SU(2)_{L}\times U(1)$ symmetry breaking as a radiative effect of supersymmetry breaking in guts, Phys. Lett. B110 (1982) 215–220.
* [86] K. Inoue, A. Kakuto, H. Komatsu, and S. Takeshita, Aspects of Grand Unified Models with Softly Broken Supersymmetry, Prog. Theor. Phys. 68 (1982) 927.
* [87] L. Alvarez-Gaume, J. Polchinski, and M. B. Wise, Minimal Low-Energy Supergravity, Nucl. Phys. B221 (1983) 495.
* [88] S. Raby, SOME THOUGHTS ON THE MASS OF THE TOP QUARK, Nucl. Phys. B187 (1981) 446.
* [89] S. L. Glashow, WHERE IS THE TOP QUARK?, Phys. Rev. Lett. 45 (1980) 1914.
* [90] K. T. Mahanthappa and M. A. Sher, THE MASS OF THE TOP QUARK IN SU(5), Phys. Lett. B86 (1979) 294.
* [91] T. Yanagida, HORIZONTAL SYMMETRY AND MASS OF THE TOP QUARK, Phys. Rev. D20 (1979) 2986.
* [92] B. Pendleton and G. G. Ross, Mass and mixing angle predictions from infrared fixed points, Phys. Lett. B98 (1981) 291.
* [93] W. A. Bardeen, M. Carena, S. Pokorski, and C. E. M. Wagner, Infrared fixed point solution for the top quark mass and unification of couplings in the mssm, Phys. Lett. B320 (1994) 110–116, [hep-ph/9309293].
* [94] S. Dimopoulos, S. Raby, and F. Wilczek, Supersymmetry and the scale of unification, Phys. Rev. D 24 (Sep, 1981) 1681–1683.
* [95] L. E. Ibanez and G. G. Ross, Low-Energy Predictions in Supersymmetric Grand Unified Theories, Phys. Lett. B105 (1981) 439.
* [96] S. Dimopoulos and H. Georgi, Softly Broken Supersymmetry and SU(5), Nucl. Phys. B193 (1981) 150.
* [97] N. Sakai, Naturalness in Supersymmetric Guts, Zeit. Phys. C11 (1981) 153.
* [98] W. de Boer and C. Sander, Global electroweak fits and gauge coupling unification, Phys. Lett. B585 (2004) 276–286, [hep-ph/0307049].
* [99] S. Willenbrock, Triplicated trinification, Phys. Lett. B561 (2003) 130–134, [hep-ph/0302168].
* [100] H. Georgi and S. L. Glashow, Unity of all elementary particle forces, Phys. Rev. Lett. 32 (1974) 438–441.
* [101] J. C. Pati and A. Salam, Lepton number as the fourth color, Phys. Rev. D10 (1974) 275–289.
* [102] G. G. Ross, GRAND UNIFIED THEORIES, . Reading, Usa: Benjamin/cummings ( 1984) 497 P. ( Frontiers In Physics, 60).
* [103] A. J. Buras, J. R. Ellis, M. K. Gaillard, and D. V. Nanopoulos, Aspects of the Grand Unification of Strong, Weak and Electromagnetic Interactions, Nucl. Phys. B135 (1978) 66–92.
* [104] S. F. de Medeiros Varzielas and G. G. Ross, SU(3) family symmetry and bi-tri maximial mixing, Nuclear Physics B 733 (2006) 31 – 47, [hep-ph/0507176].
* [105] T. Appelquist and J. Carazzone, Infrared Singularities and Massive Fields, Phys. Rev. D11 (1975) 2856.
* [106] A. Pich, Effective field theory, hep-ph/9806303.
* [107] B. V. Martemyanov and V. S. Sopov, Light quark mass ratio from dalitz plot of $\eta\to\pi^{+}\pi^{-}\pi^{0}$ decay, Phys. Rev. D71 (2005) 017501, [hep-ph/0502023].
* [108] T. E. W. Group, A combination of cdf and d0 results on the mass of the top quark, hep-ex/0703034.
* [109] D. Groom, Review of Particle Physics, The European Physical Journal C15 (2000) 1+.
* [110] H. Fusaoka and Y. Koide, Updated estimate of running quark masses, Phys. Rev. D57 (1998) 3986–4001, [hep-ph/9712201].
* [111] H. Baer, J. Ferrandis, K. Melnikov, and X. Tata, Relating bottom quark mass in dr-bar and ms-bar regularization schemes, Phys. Rev. D66 (2002) 074007, [hep-ph/0207126].
* [112] V. Lubicz, Lattice qcd, flavor physics and the unitarity triangle analysis, hep-ph/0702204.
* [113] P. H. Chankowski and S. Pokorski, Quantum corrections to neutrino masses and mixing angles, Int. J. Mod. Phys. A17 (2002) 575–614, [hep-ph/0110249].
* [114] V. D. Barger, M. S. Berger, and P. Ohmann, Supersymmetric grand unified theories: Two loop evolution of gauge and yukawa couplings, Phys. Rev. D47 (1993) 1093–1113, [hep-ph/9209232].
* [115] S. M. Barr and I. Dorsner, Atmospheric neutrino mixing and b - tau unification, Phys. Lett. B556 (2003) 185–191, [hep-ph/0211346].
* [116] J. L. Diaz-Cruz, H. Murayama, and A. Pierce, Can supersymmetric loops correct the fermion mass relations in su(5)?, Phys. Rev. D65 (2002) 075011, [hep-ph/0012275].
* [117] D. M. Pierce, J. A. Bagger, K. T. Matchev, and R.-j. Zhang, Precision corrections in the minimal supersymmetric standard model, Nucl. Phys. B491 (1997) 3–67, [hep-ph/9606211].
* [118] T. Blazek, S. Raby, and S. Pokorski, Finite supersymmetric threshold corrections to ckm matrix elements in the large tan beta regime, Phys. Rev. D52 (1995) 4151–4158, [hep-ph/9504364].
* [119] M. Carena, D. Garcia, U. Nierste, and C. E. M. Wagner, Effective lagrangian for the anti-t b h+ interaction in the mssm and charged higgs phenomenology, Nucl. Phys. B577 (2000) 88–120, [hep-ph/9912516].
* [120] M. Carena and H. E. Haber, Higgs boson theory and phenomenology. ((v)), Prog. Part. Nucl. Phys. 50 (2003) 63–152, [hep-ph/0208209].
* [121] K. Tobe and J. D. Wells, Revisiting top-bottom-tau yukawa unification in supersymmetric grand unified theories, Nucl. Phys. B663 (2003) 123–140, [hep-ph/0301015].
* [122] K. Hagiwara, A. D. Martin, D. Nomura, and T. Teubner, Improved predictions for g-2 of the muon and alpha(qed)(m(z)**2), hep-ph/0611102.
* [123] L. L. Everett, G. L. Kane, S. Rigolin, and L.-T. Wang, Implications of muon g-2 for supersymmetry and for discovering superpartners directly, Phys. Rev. Lett. 86 (2001) 3484–3487, [hep-ph/0102145].
* [124] L. Randall and R. Sundrum, Out of this world supersymmetry breaking, Nucl. Phys. B557 (1999) 79–118, [hep-th/9810155].
* [125] L. J. Hall, R. Rattazzi, and U. Sarid, The top quark mass in supersymmetric so(10) unification, Phys. Rev. D50 (1994) 7048–7065, [hep-ph/9306309].
* [126] S. Komine and M. Yamaguchi, Bottom-tau unification in susy su(5) gut and constraints from b –¿ s gamma and muon g-2, Phys. Rev. D65 (2002) 075013, [hep-ph/0110032].
* [127] C. Pallis, b - tau unification and sfermion mass non-universality, Nucl. Phys. B678 (2004) 398–426, [hep-ph/0304047].
* [128] M. R. Ramage and G. G. Ross, Soft susy breaking and family symmetry, JHEP 08 (2005) 031, [hep-ph/0307389].
* [129] S. F. King and M. Oliveira, Yukawa unification as a window into the soft supersymmetry breaking lagrangian, Phys. Rev. D63 (2001) 015010, [hep-ph/0008183].
* [130] T. Blazek, R. Dermisek, and S. Raby, Predictions for higgs and susy spectra from so(10) yukawa unification with mu ¿ 0, Phys. Rev. Lett. 88 (2002) 111804, [hep-ph/0107097].
* [131] T. Blazek, R. Dermisek, and S. Raby, Yukawa unification in so(10), Phys. Rev. D65 (2002) 115004, [hep-ph/0201081].
* [132] D. Auto, H. Baer, C. Balazs, A. Belyaev, J. Ferrandis, and X. Tata, Yukawa coupling unification in supersymmetric models, JHEP 06 (2003) 023, [hep-ph/0302155].
* [133] C. Balazs and R. Dermisek, Yukawa coupling unification and non-universal gaugino mediation of supersymmetry breaking, JHEP 06 (2003) 024, [hep-ph/0303161].
* [134] R. Gatto, G. Sartori, and M. Tonin, Weak selfmasses, cabibbo angle, and broken su(2) x su(2), Phys. Lett. B28 (1968) 128–130.
* [135] K. Matsuda and H. Nishiura, Can four-zero-texture mass matrix model reproduce the quark and lepton mixing angles and cp violating phases?, Phys. Rev. D74 (2006) 033014, [hep-ph/0606142].
* [136] C. T. Hill, Are There Significant Gravitational Corrections to the Unification Scale?, Phys. Lett. B135 (1984) 47.
* [137] J. R. Ellis and M. K. Gaillard, Fermion Masses and Higgs Representations in SU(5), Phys. Lett. B88 (1979) 315.
* [138] S. Antusch and M. Spinrath, Quark and lepton masses at the GUT scale including SUSY threshold corrections, 0804.0717.
* [139] S. Mrenna, G. L. Kane, and L.-T. Wang, Measuring gaugino soft phases and the LSP mass at Fermilab, Phys. Lett. B483 (2000) 175–183, [hep-ph/9910477].
* [140] Baer and Tata, Supersymmetry Phenomenology. Cambridge University Press, 1996.
* [141] D0 Collaboration, B. Abbott et. al., Spin correlation in $t\bar{t}$ production from $p\bar{p}$ collisions at $\sqrt{s}=1.8$ TeV, Phys. Rev. Lett. 85 (2000) 256–261, [hep-ex/0002058].
* [142] R. Devenish and A. Cooper-Sarkar, Deep inelastic scattering, . Oxford, UK: Univ. Pr. (2004) 403 p.
* [143] F. Maltoni, T. McElmurry, R. Putman, and S. Willenbrock, Choosing the factorization scale in perturbative QCD, hep-ph/0703156.
* [144] J. Alwall et. al., Madgraph/madevent v4: The new web generation, JHEP 09 (2007) 028, [arXiv:0706.2334 [hep-ph]].
* [145] CompHEP Collaboration, E. Boos et. al., Comphep 4.4: Automatic computations from lagrangians to events, Nucl. Instrum. Meth. A534 (2004) 250–259, [hep-ph/0403113].
* [146] The Durham HEP Databases, Parton Distribution Functions, . http://durpdg.dur.ac.uk/HEPDATA/PDF.
* [147] J. L. Feng, T. Moroi, L. Randall, M. Strassler, and S.-f. Su, Discovering supersymmetry at the Tevatron in Wino LSP scenarios, Phys. Rev. Lett. 83 (1999) 1731–1734, [hep-ph/9904250].
* [148] N. V. Krasnikov, SUSY model with R-parity violation, longlived charged slepton and quasistable matter, JETP Lett. 63 (1996) 503–509, [hep-ph/9602270].
* [149] K. Jedamzik, The cosmic 6Li and 7Li problems and BBN with long-lived charged massive particles, Phys. Rev. D77 (2008) 063524, [0707.2070].
* [150] C. Athanasiou, C. G. Lester, J. M. Smillie, and B. R. Webber, Distinguishing spins in decay chains at the Large Hadron Collider, JHEP 08 (2006) 055, [hep-ph/0605286].
* [151] D. J. Phalen and A. Pierce, Sfermion interference in neutralino decays at the LHC, Phys. Rev. D76 (2007) 075002, [arXiv:0705.1366 [hep-ph]].
* [152] M. Drees, W. Hollik, and Q. Xu, One-loop calculations of the decay of the next-to-lightest neutralino in the mssm, JHEP 02 (2007) 032, [hep-ph/0610267].
* [153] B. C. Allanach, C. G. Lester, M. A. Parker, and B. R. Webber, Measuring sparticle masses in non-universal string inspired models at the LHC, JHEP 09 (2000) 004, [hep-ph/0007009].
* [154] B. C. Allanach, J. P. Conlon, and C. G. Lester, Measuring Smuon-Selectron Mass Splitting at the CERN LHC and Patterns of Supersymmetry Breaking, Phys. Rev. D77 (2008) 076006, [0801.3666].
* [155] C. Lester, Model independent sparticle mass measurements at ATLAS. PhD dissertation, University of Cambridge, Department of Physics, December, 2001. CERN-THESIS-2004-003.
* [156] H. Bachacou, I. Hinchliffe, and F. E. Paige, Measurements of masses in sugra models at lhc, Phys. Rev. D62 (2000) 015009, [hep-ph/9907518].
* [157] B. K. Gjelsten, D. J. Miller, and P. Osland, Measurement of SUSY masses via cascade decays for SPS 1a, JHEP 12 (2004) 003, [hep-ph/0410303].
* [158] C. G. Lester, Constrained invariant mass distributions in cascade decays: The shape of the ’m(qll)-threshold’ and similar distributions, Phys. Lett. B655 (2007) 39–44, [hep-ph/0603171].
* [159] B. K. Gjelsten, D. J. Miller, P. Osland, and A. R. Raklev, Mass determination in cascade decays using shape formulas, AIP Conf. Proc. 903 (2007) 257–260, [hep-ph/0611259].
* [160] M. Bisset, N. Kersting, and R. Lu, Improving SUSY Spectrum Determinations at the LHC with Wedgebox and Hidden Threshold Techniques, 0806.2492.
* [161] G. R. Goldstein, K. Sliwa, and R. H. Dalitz, Observing top-quark production at the Fermilab Tevatron, Phys. Rev. D47 (1993) 967–972.
* [162] K. Kondo, T. Chikamatsu, and S. H. Kim, Dynamical likelihood method for reconstruction of events with missing momentum. 3: Analysis of a CDF high p(T) e mu event as t anti-t production, J. Phys. Soc. Jap. 62 (1993) 1177–1182.
* [163] R. Raja, On measuring the top quark mass using the dilepton decay modes, ECONF C960625 (1996) STC122, [hep-ex/9609016].
* [164] R. Raja, Remark on the errors associated with the Dalitz-Goldstein method, Phys. Rev. D56 (1997) 7465–7465.
* [165] O. Brandt, Measurement of the mass of the top quark in dilepton final states with the D0 detector, . FERMILAB-MASTERS-2006-03.
* [166] H.-C. Cheng, J. F. Gunion, Z. Han, G. Marandella, and B. McElrath, Mass Determination in SUSY-like Events with Missing Energy, JHEP 12 (2007) 076, [0707.0030].
* [167] H.-C. Cheng, D. Engelhardt, J. F. Gunion, Z. Han, and B. McElrath, Accurate Mass Determinations in Decay Chains with Missing Energy, Phys. Rev. Lett. 100 (2008) 252001, [0802.4290].
* [168] M. M. Nojiri, G. Polesello, and D. R. Tovey, A hybrid method for determining SUSY particle masses at the LHC with fully identified cascade decays, JHEP 05 (2008) 014, [0712.2718].
* [169] D0 Collaboration, V. M. Abazov et. al., Improved $W$ boson mass measurement with the DØ detector, Phys. Rev. D66 (2002) 012001, [hep-ex/0204014].
* [170] C. Lester and A. Barr, MTGEN : Mass scale measurements in pair-production at colliders, JHEP 12 (2007) 102, [0708.1028].
* [171] W. S. Cho, K. Choi, Y. G. Kim, and C. B. Park, Measuring superparticle masses at hadron collider using the transverse mass kink, JHEP 02 (2008) 035, [0711.4526].
* [172] W. S. Cho, K. Choi, Y. G. Kim, and C. B. Park, Measuring the top quark mass with $m_{T2}$ at the LHC, 0804.2185.
* [173] K. Hamaguchi, E. Nakamura, and S. Shirai, A Measurement of Neutralino Mass at the LHC in Light Gravitino Scenarios, 0805.2502.
* [174] W. S. Cho, K. Choi, Y. G. Kim, and C. B. Park, Gluino Stransverse Mass, Phys. Rev. Lett. 100 (2008) 171801, [0709.0288].
* [175] B. Gripaios, Transverse observables and mass determination at hadron colliders, arXiv:0709.2740 [hep-ph].
* [176] A. J. Barr, B. Gripaios, and C. G. Lester, Weighing Wimps with Kinks at Colliders: Invisible Particle Mass Measurements from Endpoints, JHEP 02 (2008) 014, [0711.4008].
* [177] M. M. Nojiri, Y. Shimizu, S. Okada, and K. Kawagoe, Inclusive transverse mass analysis for squark and gluino mass determination, 0802.2412.
* [178] D. R. Tovey, On measuring the masses of pair-produced semi-invisibly decaying particles at hadron colliders, JHEP 04 (2008) 034, [0802.2879].
* [179] Atlas $m_{T2}$ wiki c++ library, https://twiki.cern.ch/twiki//bin/view/Atlas/StransverseMassLibrary.
* [180] G. A. Moortgat-Pick, H. Fraas, A. Bartl, and W. Majerotto, Polarization and spin effects in neutralino production and decay, Eur. Phys. J. C9 (1999) 521–534, [hep-ph/9903220].
* [181] G. A. Moortgat-Pick and H. Fraas, Implications of cp and cpt for production and decay of majorana fermions, hep-ph/0012229.
* [182] S. Y. Choi, B. C. Chung, J. Kalinowski, Y. G. Kim, and K. Rolbiecki, Analysis of the neutralino system in three-body leptonic decays of neutralinos, Eur. Phys. J. C46 (2006) 511–520, [hep-ph/0504122].
* [183] A. Djouadi, J.-L. Kneur, and G. Moultaka, Suspect: A fortran code for the supersymmetric and higgs particle spectrum in the mssm, hep-ph/0211331.
* [184] CMS Collaboration, CMS physics: Technical Design Report. CERN Report No: CERN-LHCC-2006-001; CMS-TDR-008-1, 2006.
* [185] A. De Roeck et. al., Supersymmetric benchmarks with non-universal scalar masses or gravitino dark matter, Eur. Phys. J. C49 (2007) 1041–1066, [hep-ph/0508198].
* [186] M. Bisset, N. Kersting, J. Li, F. Moortgat, and Q. Xie, Pair-produced heavy particle topologies: Mssm neutralino properties at the lhc from gluino / squark cascade decays, Eur. Phys. J. C45 (2006) 477–492, [hep-ph/0501157].
* [187] D. K. Ghosh, R. M. Godbole, and S. Raychaudhuri, Signals for r-parity-violating supersymmetry at a 500-gev e+ e- collider, hep-ph/9904233.
* [188] J. Tanaka, Discovery potential of the Standard Model Higgs at the LHC, Nuclear Physics B Proceedings Supplements 144 (July, 2005) 341–348.
* [189] F. E. Paige, S. D. Protopopescu, H. Baer, and X. Tata, ISAJET 7.69: A Monte Carlo event generator for p p, anti-p p, and e+ e- reactions, hep-ph/0312045.
* [190] ATLAS Collaboration, S. Akhmadalev et. al., Hadron energy reconstruction for the ATLAS calorimetry in the framework of the non-parametrical method, Nucl. Instrum. Meth. A480 (2002) 508–523, [hep-ex/0104002].
* [191] B. C. Allanach, Softsusy: A c++ program for calculating supersymmetric spectra, Comput. Phys. Commun. 143 (2002) 305–331, [hep-ph/0104145].
* [192] B. Bajc, G. Senjanovic, and F. Vissani, b - tau unification and large atmospheric mixing: A case for non-canonical see-saw, Phys. Rev. Lett. 90 (2003) 051802, [hep-ph/0210207].
* [193] B. A. Campbell and D. W. Maybury, Triviality and the supersymmetric see-saw, hep-ph/0605144.
* [194] A. Dighe, S. Goswami, and W. Rodejohann, Corrections to tri-bimaximal neutrino mixing: Renormalization and planck scale effects, hep-ph/0612328.
* [195] F. Vissani and A. Y. Smirnov, Neutrino masses and b - tau unification in the supersymmetric standard model, Phys. Lett. B341 (1994) 173–180, [hep-ph/9405399].
* [196] S. Antusch, J. Kersten, M. Lindner, and M. Ratz, Neutrino mass matrix running for non-degenerate see-saw scales, Phys. Lett. B538 (2002) 87–95, [hep-ph/0203233].
* [197] G. K. Leontaris, S. Lola, and G. G. Ross, Heavy neutrino threshold effects in low-energy phenomenology, Nucl. Phys. B454 (1995) 25–44, [hep-ph/9505402].
* [198] K. G. Chetyrkin, J. H. Kuhn, and M. Steinhauser, Rundec: A mathematica package for running and decoupling of the strong coupling and quark masses, Comput. Phys. Commun. 133 (2000) 43–65, [hep-ph/0004189].
* [199] I. Dorsner, P. F. Perez, and G. Rodrigo, Fermion masses and the uv cutoff of the minimal realistic su(5), hep-ph/0607208.
* [200] P. Bevington and K. Robinson, Data Reduction and Error Analysis in the Physics Sciences. McGraw Hill, second edition ed., 1992.
|
arxiv-papers
| 2009-05-09T18:37:48 |
2024-09-04T02:49:02.471924
|
{
"license": "Public Domain",
"authors": "Mario Serna",
"submitter": "Mario A. Serna Jr",
"url": "https://arxiv.org/abs/0905.1425"
}
|
0905.1489
|
# Cyclic theory for commutative differential graded algebras and s–cohomology.
Dan Burghelea Dept. of Mathematics, The Ohio State University, 231 West
Avenue, Columbus, OH 43210, USA. burghele@mps.ohio-state.edu
###### Abstract
In this paper one considers three homotopy functors on the category of manifolds, $hH^{\ast},cH^{\ast},sH^{\ast},$ and parallels them with three other homotopy functors on the category of connected commutative differential graded algebras, $HH^{\ast},CH^{\ast},SH^{\ast}.$ If $P$ is a smooth 1-connected manifold and the algebra is the de-Rham algebra of $P,$ the two families of functors agree, but in general they do not. The functors $HH^{\ast}$ and $CH^{\ast}$ can also be derived as the Hochschild resp. cyclic homology of a commutative differential graded algebra, but this is not the way they are introduced here. The third, $SH^{\ast},$ although inspired by negative cyclic homology, cannot be identified with any sort of cyclic homology of any algebra. The functor $sH^{\ast}$ might play some role in topology. Important tools in the construction of the functors $HH^{\ast},CH^{\ast}$ and $SH^{\ast},$ in addition to the linear algebra suggested by cyclic theory, are Sullivan's minimal model theorem and the “free loop” construction described in this paper.
(Dedicated to A. Connes on his 60th birthday)
###### Contents
1. 1 Introduction
2. 2 Mixed complexes, a formalism inspired from Connes’ cyclic theory
3. 3 Mixed commutative differential graded algebras
4. 4 De-Rham Theory in the presence of a smooth vector field
5. 5 The free loop space and s-cohomology
6. 6 The free loop construction for CDGA
7. 7 Minimal models and the proof of Theorem 3
## 1 Introduction
This paper deals with commutative graded algebras and the results are of
significance in “commutative” geometry/topology. However they were inspired
largely by the linear algebra underlying Connes’ cyclic theory. The
topological results formulated here, Theorem 2 and Theorem 3, were first
established as a consequence of the identification of the cohomology resp.
$S^{1}-$equivariant cohomology of the free loop spaces of a 1-connected smooth
manifold with the Hochschild resp. cyclic homology of its de-Rham algebra, cf.
[J], [BFG], [B2]. In this paper this identification is circumvented. Still,
the results illustrate the powerful influence of Connes’ mathematics in areas
outside the non commutative geometry.
In this paper, inspired by the relationship between Hochschild, cyclic and negative cyclic homology of a unital algebra, one considers two systems of graded vector space valued homotopy functors, $hH^{\ast},cH^{\ast},sH^{\ast}$ and $HH^{\ast},CH^{\ast},SH^{\ast},$ and investigates their relationship. The first three functors are defined on the category of smooth manifolds and smooth maps via the free loop space $P^{S^{1}}$ of a smooth manifold $P,$ which is a smooth $S^{1}$-manifold of infinite dimension. The next three functors are defined on the category of connected commutative differential graded algebras via an algebraic analogue of the “free loop” construction and via Sullivan's minimal model theorem, Theorem 1. The relationship between them is suggested by the general-nonsense diagram Fig 2 in section 2.
When applied to the De–Rham algebra of a 1-connected smooth manifold the last
three functors take the same values as the first three. This is not the case
when the smooth manifold is not 1-connected; the exact relationship will be
addressed in a future work.
The first three functors are based on a formalism (manipulation with
differential forms) which can be considered for any smooth (finite or infinite
dimensional) manifold $M$ and any smooth vector field $X$ on $M.$ However it
seems to be of relevance when the vector field $X$ comes from a smooth
$S^{1}-$action on $M.$ This is of mild interest if the manifold is of finite
dimension but more interesting when the manifold is of infinite dimension. In
particular, it is quite interesting when $M=P^{S^{1}},$ the free loop space of
$P,$ and the action is the canonical $S^{1}-$action on $P^{S^{1}}.$
Manipulation with differential forms on $P^{S^{1}}$ leads to the graded vector spaces $hH^{\ast}(P)$, $cH^{\ast}(P),$ $sH^{\ast}(P),$ the first two being the cohomology resp. the $S^{1}-$equivariant cohomology of $P^{S^{1}},$ while $sH^{\ast}$ is a new homotopy functor, referred to here as s–cohomology. This functor was first introduced in [B1], [B2] but so far not seriously investigated. The functor $sH^{\ast}$ relates, at least in the case of a 1-connected manifold $P$, the Waldhausen algebraic $K-$theory of $P$ and the Atiyah–Hirzebruch complex $K-$theory (based on complex vector bundles) of $P.$ It has a rather simple description in terms of infinite sequences of smooth invariant differential forms on $P^{S^{1}}.$
The additional structures on $P^{S^{1}},$ the power maps $\psi_{k}$
$k=1,2,\cdots,$ and the involution $\tau=\psi_{-1},$ provide endomorphisms of
$hH^{\ast}(P)$, $cH^{\ast}(P),sH^{\ast}(P)$ whose eigenvalues and eigenspaces
are interesting issues. They are clarified only when $P$ is 1-connected. This
is done in view of the relationship with the functors
$HH^{\ast},CH^{\ast},SH^{\ast}.$
It might be only a coincidence, but it is still an appealing observation, that the symmetric resp. antisymmetric part of $sH^{\ast}(P)$ with respect to the canonical involution $\tau$ calculates, for a 1-connected manifold $P$ and in the stability range, the vector space $Hom(\pi_{\ast}(H/\textit{Diff}(P)),\ \kappa),$ $\kappa=\mathbb{R},\mathbb{C};$ the symmetric part when $\dim P$ is even, the antisymmetric part when $\dim P$ is odd, cf. [Bu], [B1]. Here $H/\textit{Diff}(P)$ denotes the (homotopy) quotient of the space of homotopy equivalences of $P$ by the group of diffeomorphisms with the $C^{\infty}-$topology.
The functors $HH^{\ast},CH^{\ast},SH^{\ast}$ are the algebraic version of
$hH^{\ast},cH^{\ast},sH^{\ast}$ and are defined on the category of
(homologically connected) commutative differential graded algebras. Their
definition uses the “free loop” construction, an algebraic analogue of the
free loop space, described in this paper only for free connected commutative
differential graded algebras $(\Lambda[V],d_{V}).$ A priori these functors are
defined only for free connected commutative differential graded algebras.
Since they are homotopy functors they extend to all connected commutative
differential graded algebras via Sullivan minimal model theorem, Theorem 1.
Using the definition presented here one can take full advantage of the simple
form the algebraic analogue of the power maps take on the free loop
construction. As a consequence one obtains a simple description of the
eigenvalues and eigenspaces of the endomorphisms induced from the algebraic
power maps on $HH^{\ast}$ and $CH^{\ast}$ and implicitly understand their
additional structure.
The extension of the results of Sullivan–Vigué, cf. [VS], to incorporate
$S^{1}-$action and the power maps in the minimal model of $P^{S^{1}}$
summarized in Section 7 leads finally to results about
$hH^{\ast},cH^{\ast},sH^{\ast}$ when $P$ is 1-connected, cf. Theorem 3.
In addition to the algebraic definition of $HH^{\ast},CH^{\ast},SH^{\ast}$
this paper contains the proof of the homotopy invariance of $sH^{\ast}.$
## 2 Mixed complexes, a formalism inspired from Connes’ cyclic theory
A mixed complex $(C^{\ast},\delta^{\ast},\beta_{\ast})$ consists of a graded vector space $C^{\ast}$ ($\ast$ a nonnegative integer) and linear maps $\delta^{\ast}:C^{\ast}\to C^{\ast+1},$ $\beta_{\ast+1}:C^{\ast+1}\to C^{\ast}$ which satisfy:
$\delta^{\ast+1}\cdot\delta^{\ast}=0,\qquad \beta_{\ast}\cdot\beta_{\ast+1}=0,\qquad \beta_{\ast+1}\cdot\delta^{\ast}+\delta^{\ast-1}\cdot\beta_{\ast}=0.$
When there is no risk of confusion the index $\ast$ will be dropped and we write $(C,\delta,\beta)$ instead of $(C^{\ast},\delta^{\ast},\beta_{\ast})$. Using the terminology of [B2], [BV2], a mixed complex can be viewed either as a cochain complex $(C^{\ast},\delta^{\ast})$ equipped with an algebraic $S^{1}-$action $\beta_{\ast}$, or as a chain complex $(C^{\ast},\beta_{\ast})$ equipped with an algebraic $S^{1}-$action $\delta^{\ast}.$
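For illustration, a minimal example, anticipating section 4: take
$C^{0}=\kappa,\qquad C^{1}=\kappa,\qquad C^{r}=0\ (r\geq 2),\qquad \delta=0,\qquad \beta_{1}=id:C^{1}\to C^{0}.$
The three relations hold trivially; the cochain complex $(C^{\ast},\delta^{\ast})$ has cohomology $\kappa$ in degrees $0$ and $1$, while the chain complex $(C^{\ast},\beta_{\ast})$ is acyclic. This is the mixed complex of rotation-invariant differential forms on the circle, with $\beta$ the contraction along the rotation vector field.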
To a mixed complex $(C^{\ast},\delta^{\ast},\beta_{\ast})$ one associates a
number of cochain, chain and $2-$periodic cochain complexes, and then their
cohomologies, homologies and $2-$periodic cohomologies111 We will use the word
”homology” for a functor derived from a chain complex and “cohomology” for one
derived from a cochain complex. The $2-$periodic chain and cochain complexes
can be identified., as follows.
First denote by
${}^{+}C^{r}:=\prod_{k\geq 0}C^{r-2k},\qquad {}^{-}C^{r}:=\prod_{k\geq 0}C^{r+2k},$ (1)
$\mathbb{P}C^{2r+1}:=\prod_{k\geq 0}C^{2k+1},\qquad \mathbb{P}C^{2r}:=\prod_{k\geq 0}C^{2k}\quad\text{for any }r,$
$PC^{2r+1}:=\bigoplus_{k\geq 0}C^{2k+1},\qquad PC^{2r}:=\bigoplus_{k\geq 0}C^{2k}\quad\text{for any }r.$
Since our vector spaces are $\mathbb{Z}_{\geq 0}$-graded the direct product
${}^{+}C^{r}$ involves only finitely many factors.
Next introduce
${}^{+}D^{r}_{\beta}(\omega_{r},\omega_{r-2},\cdots):=(\delta\omega_{r},\ \delta\omega_{r-2}+\beta\omega_{r},\ \cdots)$ (2)
${}^{+}D^{\delta}_{r}(\omega_{r},\omega_{r-2},\cdots):=(\beta\omega_{r}+\delta\omega_{r-2},\ \beta\omega_{r-2}+\delta\omega_{r-4},\ \cdots)$
${}^{-}D^{r}_{\beta}(\cdots,\omega_{r+2},\omega_{r}):=(\cdots,\ \beta\omega_{r+4}+\delta\omega_{r+2},\ \beta\omega_{r+2}+\delta\omega_{r})$
${}^{-}D^{\delta}_{r}(\cdots,\omega_{r+2},\omega_{r}):=(\cdots,\ \delta\omega_{r}+\beta\omega_{r+2},\ \beta\omega_{r})$
$D^{2r}(\cdots,\omega_{2r+2},\omega_{2r},\cdots,\omega_{0}):=(\cdots,\ \delta\omega_{2r}+\beta\omega_{2r+2},\ \cdots)$
$D^{2r+1}(\cdots,\omega_{2r+3},\omega_{2r+1},\cdots,\omega_{1}):=(\cdots,\ \delta\omega_{2k+1}+\beta\omega_{2k+3},\ \cdots).$
Finally consider the cochain complexes
$\mathcal{C}:=(C^{\ast},\delta^{\ast}),\ \
^{+}\mathcal{C}_{\beta}:=(^{+}C^{\ast},^{+}D^{\ast}_{\beta}),\ \
^{-}\mathcal{C}_{\beta}:=(^{-}C^{\ast},^{-}D^{\ast}_{\beta}),$
the chain complexes
$\mathcal{H}:=(C^{\ast},\beta_{\ast}),\ \ ^{+}\mathcal{H}^{\delta}:=(^{+}C^{\ast},^{+}D^{\delta}_{\ast}),\ \ ^{-}\mathcal{H}^{\delta}:=(^{-}C^{\ast},^{-}D^{\delta}_{\ast})$
and the $2-$periodic cochain complexes (here $(\mathbb{P}C^{\ast},D^{\ast})$ is regarded as a cochain complex with $D^{\ast}$ obtained from the degree $+1$ derivation $\delta$ perturbed by the degree $-1$ derivation $\beta$; the same complex can be regarded as a chain complex with $D^{\ast}$ obtained from the degree $-1$ derivation $\beta$ perturbed by the degree $+1$ derivation $\delta$; the cohomology of the first is the same as the homology of the second)
$P\mathcal{C}:=(PC^{\ast},D^{\ast}),\ \
\mathbb{P}\mathcal{C}:=(\mathbb{P}C^{\ast},D^{\ast})$
whose cohomology, homology, $2-$periodic cohomology are denoted by
$H^{\ast}:=H^{\ast}(C,\delta),\qquad {}^{+}H^{\ast}_{\beta}:={}^{+}H^{\ast}_{\beta}(C,\delta,\beta),\qquad {}^{-}H^{\ast}_{\beta}:={}^{-}H^{\ast}_{\beta}(C,\delta,\beta),$
$H_{\ast}:=H_{\ast}(C,\beta),\qquad {}^{+}H^{\delta}_{\ast}:={}^{+}H^{\delta}_{\ast}(C,\delta,\beta),\qquad {}^{-}H^{\delta}_{\ast}:={}^{-}H^{\delta}_{\ast}(C,\delta,\beta),$
$PH^{\ast}:=PH^{\ast}(C,\delta,\beta),\qquad \mathbb{P}H^{\ast}:=\mathbb{P}H^{\ast}(C,\delta,\beta).$
In this paper the chain complexes $\mathcal{H},^{\pm}{\mathcal{H}}^{\delta},$
will only be used to derive conclusions about the cochain complexes
$\mathcal{C},^{\pm}{\mathcal{C}}_{\beta},\ \mathbb{P}\mathcal{C}.$
The obvious inclusions and projections lead to the following commutative
diagrams of short exact sequences
$0\to{}^{-}\mathcal{H}^{\delta}_{\ast}\to\mathbb{P}\mathcal{C}^{\ast}\to{}^{+}\mathcal{H}^{\delta}_{\ast-2}\to 0,\qquad 0\to\mathcal{H}_{\ast}\to{}^{+}\mathcal{H}^{\delta}_{\ast}\to{}^{+}\mathcal{H}^{\delta}_{\ast-2}\to 0,$
$0\to{}^{+}\mathcal{C}^{\ast-2}_{\beta}\overset{i^{\ast-2}}{\to}{}^{+}\mathcal{C}^{\ast}_{\beta}\overset{I^{\ast}}{\to}\mathcal{C}^{\ast}\to 0,\qquad 0\to{}^{+}\mathcal{C}^{\ast-2}_{\beta}\overset{I^{\ast-2}}{\to}\mathbb{P}\mathcal{C}^{\ast}\to{}^{-}\mathcal{C}^{\ast}_{\beta}\to 0.$
They give rise to the following commutative diagram of long exact sequences
$\cdots\to{}^{-}H_{r}^{\delta}\overset{h_{r}}{\to}\mathbb{P}H(r)\to{}^{+}H_{r-2}^{\delta}\to{}^{-}H_{r-1}^{\delta}\overset{h_{r-1}}{\to}\cdots$
$\cdots\to H_{r}\overset{J_{r}}{\to}{}^{+}H_{r}^{\delta}\overset{S_{r}}{\to}{}^{+}H_{r-2}^{\delta}\overset{B_{r-2}}{\to}H_{r-1}\to\cdots$
Fig 1.
$\cdots\to{}^{+}H^{r-2}_{\beta}\overset{S^{r-2}}{\to}{}^{+}H^{r}_{\beta}\overset{J^{r}}{\to}H^{r}\overset{B^{r}}{\to}{}^{+}H^{r-1}_{\beta}\to\cdots$
$\cdots\to{}^{+}H^{r-2}_{\beta}\overset{\mathbb{I}^{r-2}}{\to}\mathbb{P}H^{r}\overset{\mathbb{J}^{r}}{\to}{}^{-}H^{r}_{\beta}\overset{\mathbb{B}^{r}}{\to}{}^{+}H^{r-1}_{\beta}\to\cdots$
Fig 2.
and
${}^{+}H^{r-2}_{\beta}\overset{S^{r-2}}{\to}{}^{+}H^{r}_{\beta}\overset{\mathbb{I}^{r}}{\to}\mathbb{P}H^{r}=\mathbb{P}H^{r+2},\qquad \mathbb{I}^{r}\circ S^{r-2}=\mathbb{I}^{r-2},$
so that the maps $\mathbb{I}^{r+2k}$ induce a map $\underset{\underset{S}{\rightarrow}}{\lim}{}^{+}H^{r+2k}_{\beta}\to\mathbb{P}H^{r}.$
The diagram (Fig1) is the one familiar in the homological algebra of
Hochschild versus cyclic homologies, cf [Lo]. The diagram Fig 2 is the one we
will use in this paper.
Note that the Hochschild, cyclic, periodic cyclic, and negative cyclic homology of an associative unital algebra $A$, as defined in [Lo], are $H_{\ast},\ ^{+}H^{\delta}_{\ast},$ $\mathbb{P}H^{\ast},\ ^{-}H^{\delta}_{\ast}$ of the Hochschild mixed complex with $C^{r}:=A^{\otimes(r+1)}$, $\beta$ the Hochschild boundary, and $\delta^{r}=(1-\tau_{r+1})\cdot s_{r}\cdot(1+\tau_{r}+\cdots+\tau^{r}_{r})$ where $\tau_{r}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{r})=(a_{r}\otimes a_{0}\otimes\cdots\otimes a_{r-1})$ and $s_{r}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{r})=(1\otimes a_{0}\otimes a_{1}\otimes\cdots\otimes a_{r}).$
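For instance, in the lowest degrees these formulas read
$\beta_{1}(a_{0}\otimes a_{1})=a_{0}a_{1}-a_{1}a_{0},\qquad \delta^{0}(a_{0})=(1-\tau_{1})s_{0}(a_{0})=1\otimes a_{0}-a_{0}\otimes 1,$
and one checks directly that $\beta_{1}\cdot\delta^{0}=0,$ since $\beta_{1}(1\otimes a_{0})=a_{0}-a_{0}=0=\beta_{1}(a_{0}\otimes 1),$ in agreement with the compatibility relation of a mixed complex.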
A morphism
$f:(C^{\ast}_{1},\delta^{\ast}_{1},\beta^{1}_{\ast})\to(C^{\ast}_{2},\delta^{\ast}_{2},\beta^{2}_{\ast})$
is a degree preserving linear map which intertwines $\delta^{\prime}$s and
$\beta^{\prime}$s. It induces degree preserving linear maps between any of the
homologies /cohomologies defined above. The following elementary observations
will be used below.
###### Proposition 1.
Let $(C,\delta,\beta)$ be a mixed cochain complex.
1.$PH^{r}=\underset{\underset{S}{\rightarrow}}{\lim}^{+}H^{r+2k}_{\beta},$
where $S^{k+2r}:^{+}H^{k+2r}_{\beta}\to^{+}H^{k+2r+2}_{\beta}$ is induced by
the inclusion
${}^{+}\mathcal{C}^{\ast}_{\beta}\to^{+}\mathcal{C}^{\ast+2}_{\beta}.$
2\. The following is an exact sequence
$0\to\underset{\underset{S}{\leftarrow}}{\lim{}^{\prime}}\ {}^{+}H^{\delta}_{r-1+2k}\to\mathbb{P}H^{r}\to\underset{\underset{S}{\leftarrow}}{\lim}\ {}^{+}H^{\delta}_{r+2k}\to 0,$
with $S_{k+2r}:^{+}H_{k+2r}^{\delta}\to^{+}H_{k+2r-2}^{\delta}$ induced by the
projection
${}^{+}\mathcal{H}^{\delta}_{\ast}\to^{+}\mathcal{H}^{\delta}_{\ast-2}.$
Let
$f^{\ast}:(C^{\ast}_{1},\delta^{\ast}_{1},\beta^{1}_{\ast})\to(C^{\ast}_{2},\delta^{\ast}_{2},\beta^{2}_{\ast})$
be a morphism of mixed complexes.
3\. If $H^{\ast}(f)$ is an isomorphism then so is ${}^{+}H^{\ast}_{\beta}(f)$
and $PH^{\ast}(f).$
4\. If $H_{\ast}(f)$ is an isomorphism then so is ${}^{+}H^{\delta}_{\ast}(f)$
and $\mathbb{P}H^{\ast}(f).$
5\. If $H^{\ast}(f)$ and $H_{\ast}(f)$ are both isomorphisms then, in addition
to the conclusions in (3) and (4), ${}^{-}H^{\ast}_{\beta}(f)$ is an
isomorphism.
###### Proof.
(1): Recall that a direct sequence of cochain complexes
$\mathcal{C}^{\ast}_{0}\overset{i_{0}}{\rightarrow}\mathcal{C}^{\ast}_{1}\overset{i_{1}}{\rightarrow}\mathcal{C}^{\ast}_{2}\overset{i_{2}}{\rightarrow}\cdots$
induces, by passing to cohomology, the direct sequence
$H^{\ast}(\mathcal{C}^{\ast}_{0})\overset{H(i_{0})}{\rightarrow}H^{\ast}(\mathcal{C}^{\ast}_{1})\overset{H(i_{1})}{\rightarrow}H^{\ast}(\mathcal{C}^{\ast}_{2})\overset{i_{2}}{\rightarrow}\cdots$
and that
$H^{j}(\underset{\rightarrow}{\lim}\mathcal{C}^{\ast}_{i})=\underset{\rightarrow}{\lim}H^{j}(\mathcal{C}^{\ast}_{i})$
for any $j.$
(2): Recall that an inverse sequence of chain complexes
$\mathcal{H}_{\ast}^{0}\overset{p_{0}}{\leftarrow}\mathcal{H}_{\ast}^{1}\overset{p_{1}}{\leftarrow}\mathcal{H}_{\ast}^{2}\overset{p_{2}}{\leftarrow}\cdots$
induces, by passing to homology, the sequence
$H_{\ast}(\mathcal{H}^{\ast}_{0})\overset{H(p_{0})}{\leftarrow}H_{\ast}(\mathcal{H}^{\ast}_{1})\overset{H(p_{1})}{\leftarrow}H_{\ast}(\mathcal{H}^{\ast}_{2})\overset{p_{2}}{\leftarrow}\cdots$
and the following short exact sequence cf.[Lo] 5.1.9.
$0\to\underset{\leftarrow}{\lim{}^{\prime}}\,H_{j-1}(\mathcal{H}^{i}_{\ast})\to H_{j}(\underset{\leftarrow}{\lim}\,\mathcal{H}^{i}_{\ast})\to\underset{\leftarrow}{\lim}\,H_{j}(\mathcal{H}^{i}_{\ast})\to 0$
for any $j.$
Item (3) follows by induction on degree from the naturality of the first exact
sequence in the diagram Fig 2 and (1).
Item (4) follows by induction from the naturality of the second exact sequence
of the diagram Fig 1 and from (2).
Item (5) follows from the naturality of the second exact sequence in diagram
Fig 2 and from (3) and (4). ∎
The mixed complex $(C^{\ast},\delta^{\ast},\beta_{\ast})$ is called
$\beta-$acyclic if $\beta_{1}$ is surjective and
$\ker(\beta_{r})=\text{im}(\beta_{r+1}).$ If so consider the diagram whose
rows are short exact sequences of cochain complexes
$0\to(\text{Im}(\beta)^{\ast},\delta^{\ast})\to(C^{\ast},\delta^{\ast})\overset{\beta}{\to}((\text{Im}(\beta))^{\ast-1},\delta^{\ast-1})\to 0,$
$0\to(^{+}C^{\ast-2}_{\beta},{}^{+}D^{\ast-2}_{\beta})\overset{i^{\ast-2}}{\to}(^{+}C^{\ast}_{\beta},{}^{+}D^{\ast}_{\beta})\to(C^{\ast},\delta^{\ast})\to 0.$
Each row induces a long exact sequence, and a simple inspection of the boundary maps in these long exact sequences allows one to construct linear maps $\theta^{r}$ and to verify that the diagram below is commutative.
$\cdots\to H^{r-2}(\text{Im}(\beta),\delta)\to H^{r}(\text{Im}(\beta),\delta)\to H^{r}(C,\delta)\to H^{r-1}(\text{Im}(\beta),\delta)\to\cdots$
$\cdots\to{}^{+}H^{r-2}_{\beta}(C,\delta,\beta)\to{}^{+}H^{r}_{\beta}(C,\delta,\beta)\to H^{r}(C,\delta)\to{}^{+}H^{r-1}_{\beta}(C,\delta,\beta)\to\cdots$
(the rows being compared via the maps $\theta^{r-2},\ \theta^{r},\ id,\ \theta^{r-1}$).
As a consequence one verifies by induction on degree that the inclusion
$j:(\text{Im}\beta,\delta)\to(^{+}C,^{+}D_{\beta})$ induces an isomorphism
$H^{\ast}(\text{Im}(\beta),\delta)\to^{+}H^{\ast}_{\beta}(C,\delta,\beta).$
Mixed complexes with power maps and involution
A collection of degree zero (degree preserving) linear maps $\Psi_{k},\ k=1,2,\cdots,$ and $\tau:=\Psi_{-1},$ which satisfy
1. (i)
$\Psi_{k}\circ\delta=\delta\circ\Psi_{k},$
2. (ii)
$\Psi_{k}\circ\beta=k\beta\circ\Psi_{k},$
3. (iii)
$\Psi_{k}\circ\Psi_{r}=\Psi_{r}\circ\Psi_{k}=\Psi_{kr},\ \ \Psi_{1}=id$
will be referred to as “power maps and involution”, or simply “power maps”, $\Psi_{k},k=-1,1,2,\cdots$ (we use the notation $\tau$ for $\Psi_{-1}$ to emphasize that it is an involution and to suggest consistency with other familiar involutions in homological algebra and topology). They provide the morphisms of cochain complexes
$\displaystyle\Psi_{k}:\mathcal{C}\to$ $\displaystyle\mathcal{C},$
$\displaystyle{}^{\pm}\Psi_{k}:^{\pm}\mathcal{C}_{\beta}\to$
$\displaystyle{}^{\pm}\mathcal{C}_{\beta}$
$\displaystyle\mathbb{P}\Psi_{k}:\mathbb{P}\mathcal{C}\to$
$\displaystyle\mathbb{P}\mathcal{C}$
defined as follows
${}^{+}\Psi_{k}^{r}(\omega_{r},\omega_{r-2},\cdots)=(\Psi_{k}^{r}(\omega_{r}),\ \tfrac{1}{k}\Psi_{k}^{r-2}(\omega_{r-2}),\ \tfrac{1}{k^{2}}\Psi_{k}^{r-4}(\omega_{r-4}),\ \cdots)$
${}^{-}\Psi_{k}^{r}(\cdots,\omega_{r+2},\omega_{r}):=(\cdots,\ k\Psi_{k}^{r+2}(\omega_{r+2}),\ \Psi_{k}^{r}(\omega_{r}))$
and
$\mathbb{P}\Psi_{k}^{2r}(\cdots,\omega_{2r+2},\omega_{2r},\omega_{2r-2},\cdots,\omega_{0})=(\cdots,\ k\Psi_{k}^{2r+2}(\omega_{2r+2}),\ \Psi_{k}^{2r}(\omega_{2r}),\ \tfrac{1}{k}\Psi_{k}^{2r-2}(\omega_{2r-2}),\ \cdots,\ \tfrac{1}{k^{r}}\Psi_{k}^{0}(\omega_{0}))$
$\mathbb{P}\Psi_{k}^{2r+1}(\cdots,\omega_{2r+3},\omega_{2r+1},\omega_{2r-1},\cdots,\omega_{1})=(\cdots,\ k\Psi_{k}^{2r+3}(\omega_{2r+3}),\ \Psi_{k}^{2r+1}(\omega_{2r+1}),\ \tfrac{1}{k}\Psi_{k}^{2r-1}(\omega_{2r-1}),\ \cdots,\ \tfrac{1}{k^{r}}\Psi_{k}^{1}(\omega_{1}))$
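The normalizing powers of $k$ are forced by the relations (i) and (ii); for instance, in the component of degree $r-2$ of ${}^{+}\mathcal{C}_{\beta}$ one checks
$\delta\big(\tfrac{1}{k}\Psi_{k}(\omega_{r-2})\big)+\beta\big(\Psi_{k}(\omega_{r})\big)=\tfrac{1}{k}\Psi_{k}(\delta\omega_{r-2})+\tfrac{1}{k}\Psi_{k}(\beta\omega_{r})=\tfrac{1}{k}\Psi_{k}(\delta\omega_{r-2}+\beta\omega_{r}),$
so ${}^{+}\Psi_{k}$ commutes with ${}^{+}D_{\beta}$ and is indeed a morphism of cochain complexes; the verification for ${}^{-}\Psi_{k}$ and $\mathbb{P}\Psi_{k}$ is identical.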
Consequently they induce the endomorphisms,
$\underline{\Psi}_{k}^{\ast}:H^{\ast}\to H^{\ast}$
$\underline{{}^{\pm}\Psi}_{k}^{\ast}:^{\pm}H^{\ast}_{\beta}\to^{\pm}H^{\ast}_{\beta}$
$\underline{\mathbb{P}\Psi^{\ast}}_{k}:\mathbb{P}H^{\ast}\to\mathbb{P}H^{\ast}$
Note that in the diagram (Fig 2): $\mathbb{J}^{\ast},$ $J^{\ast}$ and the vertical arrows intertwine the endomorphisms induced by $\Psi_{k};$ $\mathbb{B}^{\ast}$ resp. $B^{\ast}$ intertwine $k(^{-}\underline{\Psi}_{k})$ resp. $k\underline{\Psi}_{k}$ with ${}^{+}\underline{\Psi}_{k};$ and $\mathbb{I}^{\ast-2}$ resp. $S^{\ast-2}$ intertwine ${}^{+}\underline{\Psi}_{k}$ with $k\underline{\mathbb{P}\Psi}_{k}$ resp. $k(^{+}\underline{\Psi}_{k}).$
The above elementary linear algebra will be applied to CDGA’s in the next
sections.
## 3 Mixed commutative differential graded algebras
Let $\kappa$ be a field of characteristic zero (for example
$\mathbb{Q},\mathbb{R},\mathbb{C}$).
###### Definition 1.
1. (i)
A commutative graded algebra abbreviated CGA, is an associative unital
augmentable graded algebra $\mathcal{A}^{\ast},$ (the augmentation is not part
of the data) which is commutative in the graded sense, i.e.
$a_{1}\cdot a_{2}=(-1)^{r_{1}r_{2}}a_{2}\cdot a_{1},\ \
a_{i}\in\mathcal{A}^{r_{i}},i=1,2.$
2. (ii)
An exterior differential
$d^{\ast}_{\mathcal{A}}:\mathcal{A}^{\ast}\to\mathcal{A}^{\ast+1},$ is a
degree +1-linear map which satisfies
$d(a_{1}\cdot a_{2})=d(a_{1})\cdot a_{2}+(-1)^{r_{1}}a_{1}\cdot
d(a_{2}),a_{1}\in\mathcal{A}^{r_{1}},\ \
d^{*+1}_{\mathcal{A}}d^{*}_{\mathcal{A}}=0.$
3. (iii)
An interior differential
$\beta^{\mathcal{A}}_{\ast}:\mathcal{A}^{\ast}\to\mathcal{A}^{\ast-1}$ is a
degree -1 linear map which satisfies
$\beta(a_{1}\cdot a_{2})=\beta(a_{1})\cdot
a_{2}+(-1)^{r_{1}}a_{1}\cdot\beta(a_{2}),a_{1}\in\mathcal{A}^{r_{1}},\ \
\beta_{\ast-1}^{\mathcal{A}}\beta_{\ast}^{\mathcal{A}}=0.$
4. (iv)
The exterior and interior differentials $d^{\ast}$ and $\beta_{\ast}$ are
compatible if
$d^{\ast-1}\cdot\beta_{\ast}+\beta_{\ast+1}\cdot d^{\ast}=0.$
5. (v)
A pair $(\mathcal{A}^{\ast},d^{\ast}),\ \mathcal{A}^{\ast}$ a CGA and
$d^{\ast}$ exterior differential, is called CDGA and a triple
$(\mathcal{A}^{\ast},d^{\ast},\beta_{\ast}),\ \mathcal{A}^{\ast}$ a CGA,
$d^{\ast}$ exterior differential and $\beta_{\ast}$ interior differential,
with $d^{\ast}$ and $\beta_{\ast}$ compatible, is called a mixed CDGA.
A mixed CDGA is a mixed cochain complex.
A degree preserving linear map $f^{\ast}:\mathcal{A}^{\ast}\to\mathcal{B}^{\ast}$ is a morphism of CGA’s, resp. CDGA’s, resp. mixed CDGA’s if it is a unit preserving graded algebra homomorphism and intertwines the $d^{\prime}$s and the $\beta^{\prime}$s when these are part of the structure.
We will consider the categories of CGA’s, CDGA’s and mixed CDGA’s. In all
these three categories there is a canonical tensor product and in the category
of CDGA’s a well defined concept of homotopy between two morphisms (cf. [Lo], [Ha]): let $k(t,dt)$ be the free commutative graded algebra generated by the symbol $t$ of degree zero and $dt$ of degree one, equipped with the differential $d(t)=dt.$ A morphism $F:(\mathcal{A},d_{\mathcal{A}})\to(\mathcal{B},d_{\mathcal{B}})\otimes_{k}(k(t,dt),d)$ is called an elementary homotopy from $f$ to $g$, $f,g:(\mathcal{A},d_{\mathcal{A}})\to(\mathcal{B},d_{\mathcal{B}}),$ if $\rho_{0}\cdot F=f$ and $\rho_{1}\cdot F=g,$ where
$\rho_{0}(a\otimes p(t))=p(0)a,\quad \rho_{0}(a\otimes p(t)dt)=0,\qquad \rho_{1}(a\otimes p(t))=p(1)a,\quad \rho_{1}(a\otimes p(t)dt)=0.$
Homotopy is the equivalence relation generated by elementary homotopy. The category of mixed
CDGA’s is a subcategory of mixed cochain complexes and all definitions and
considerations in section 2 can be applied.
For a (commutative) differential graded algebra
$(\mathcal{A}^{\ast},d^{\ast}_{\mathcal{A}}),$ the graded vector space
$H^{\ast}(\mathcal{A}^{\ast},d^{\ast})=Ker(d^{\ast})/{Im(d^{\ast-1})}$ is a
commutative graded algebra whose multiplication is induced by the
multiplication in $\mathcal{A}^{\ast}.$ A morphism
$f=f^{\ast}:(\mathcal{A}^{\ast},d^{\ast}_{\mathcal{A}})\to(\mathcal{B}^{\ast},d^{\ast}_{\mathcal{B}})$
induces a degree preserving linear map,
$H^{\ast}(f):H^{\ast}(\mathcal{A}^{\ast},d_{\mathcal{A}}^{\ast})\to
H^{\ast}(\mathcal{B}^{\ast},d_{\mathcal{B}}^{\ast}),$ which is an algebra
homomorphism.
###### Definition 2.
A morphism of CDGA’s $f,$ with $H^{k}(f)$ an isomorphism for every $k,$ is called a quasi isomorphism.
The CDGA $(\mathcal{A},d_{\mathcal{A}})$ is called homologically connected if $H^{0}(\mathcal{A},d_{\mathcal{A}})=\kappa$, and homologically 1-connected if it is homologically connected and $H^{1}(\mathcal{A},d_{\mathcal{A}})=0.$
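For instance, for the de-Rham algebra of a connected smooth manifold $M$ one has
$H^{0}(\Omega^{\ast}(M),d)=\kappa,$
so $(\Omega^{\ast}(M),d)$ is homologically connected; it is homologically 1-connected precisely when $H^{1}(M;\mathbb{R})=0,$ in particular when $M$ is simply connected.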
The full subcategory of homologically connected CDGA’s will be denoted by
c–CDGA. For all practical purposes (related to geometry and topology) it
suffices to consider only c-CDGA’ s.
###### Definition 3.
1\. The CDGA $(\mathcal{A},d)$ is called free if $\mathcal{A}=\Lambda[V],$
where $V=\sum_{i\geq 0}V^{i}$ is a graded vector space and $\Lambda[V]$
denotes the free commutative graded algebra generated by $V.$ If in addition $V^{0}=0,$ it is called a free connected commutative differential graded algebra, abbreviated fc-CDGA.
2\. The CDGA $(\mathcal{A},d)$ is called minimal if it is a fc-CDGA and in
addition
i. $d(V)\subset\Lambda^{+}[V]\cdot\Lambda^{+}[V],$ with $\Lambda^{+}[V]$ the
ideal generated by $V$,
ii. $V^{1}=\oplus_{\alpha\in I}V_{\alpha}$ with $I$ a well ordered set and
$d(V_{\beta})\subset\Lambda[\oplus_{\alpha<\beta}V_{\alpha}]$ (the set $I$ and
its order are not part of the data)
###### Observation 1.
If $(\Lambda[V],d_{V})$ is minimal and 1-connected, then $V^{1}=0$ and, for
$v\in V^{i},$ $d_{V}(v)$is a linear combination of products of elements
$v_{j}\in V^{j}$ with $j<i$. In particular for $v\in V^{2}$ one has $dv=0.$
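A classical example: up to isomorphism, the minimal model of the de-Rham algebra of $S^{2}$ is
$(\Lambda[x,y],d),\qquad \deg x=2,\ \deg y=3,\qquad dx=0,\ dy=x^{2},$
whose cohomology is $\kappa$ in degrees $0$ and $2$ (generated by $1$ and the class of $x$), matching $H^{\ast}(S^{2}).$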
The interest of minimal algebras comes from the following result [L], [Ha].
###### Theorem 1.
(D. Sullivan)
1\. A quasi isomorphism between two minimal CDGA’s is an isomorphism.
2\. For any homologically connected CDGA $(\mathcal{A},d_{\mathcal{A}})$ there exists a quasi isomorphism $\theta:(\Lambda[V],d_{V})\to(\mathcal{A},d_{\mathcal{A}})$ with $(\Lambda[V],d_{V})$ minimal. Such a $\theta$ will be called a minimal model of $(\mathcal{A},d_{\mathcal{A}}).$
3\. Given a morphism $f:(\mathcal{A},d_{\mathcal{A}})\to(\mathcal{B},d_{\mathcal{B}})$ and minimal models $\theta_{A}:(\Lambda[V_{A}],d_{V_{A}})\to(A,d_{A})$ and $\theta_{B}:(\Lambda[V_{B}],d_{V_{B}})\to(B,d_{B}),$ there exist morphisms $f^{\prime}:(\Lambda[V_{A}],d_{V_{A}})\to(\Lambda[V_{B}],d_{V_{B}})$ such that $f\cdot\theta_{A}$ and $\theta_{B}\cdot f^{\prime}$ are homotopic; moreover, any two such $f^{\prime}$ are homotopic.
We can therefore consider the homotopy category of c–CDGA’s, whose morphisms are homotopy classes of morphisms of CDGA’s. By the above theorem the full subcategory of fc-CDGA’s is a skeleton, and therefore any homotopy functor a priori defined on fc–CDGA’s admits extensions to homotopy functors defined on the full homotopy category of c–CDGA’s, and all these extensions are isomorphic as functors. In particular, any statement about a homotopy functor on the category of c-CDGA’s need only be verified for fc–CDGA’s.
Precisely, for any c–CDGA $(\mathcal{A},d_{\mathcal{A}}),$ choose a minimal
model
$\theta_{\mathcal{A}}:(\Lambda[V_{\mathcal{A}}],d_{V_{\mathcal{A}}})\to(\mathcal{A},d_{\mathcal{A}})$
and for any $f:(\mathcal{A},d_{\mathcal{A}})\to(\mathcal{B},d_{\mathcal{B}})$
choose a morphism
$f^{\prime}:(\Lambda[V_{\mathcal{A}}],d_{V_{\mathcal{A}}})\to(\Lambda[V_{\mathcal{B}}],d_{V_{\mathcal{B}}})$
so that $\theta_{B}\cdot f^{\prime}$ and $f\cdot\theta_{A}$ are homotopic.
Define the value of the functor on $(\mathcal{A},d_{\mathcal{A}})$ to be the
value on $(\Lambda[V_{\mathcal{A}}],d_{V_{\mathcal{A}}})$ and the value on a
morphism $f:(\mathcal{A},d_{\mathcal{A}})\to(\mathcal{B},d_{\mathcal{B}})$ to
be the value on the morphism
$f^{\prime}:(\Lambda[V_{\mathcal{A}}],d_{V_{\mathcal{A}}})\to(\Lambda[V_{\mathcal{B}}],d_{V_{\mathcal{B}}}).$
There are two natural examples of mixed CDGA’s; one is provided by a smooth
manifold equipped with a smooth vector field, the other by a construction
referred to as ”the free loop”, considered first by Sullivan-Vigué. The free
loop construction applies directly only to a fc-CDGA but in view of Theorem 1
can be indirectly used for any c–CDGA.
The first will lead to (the de–Rham version of) a new homotopy functor defined on the category of possibly infinite dimensional manifolds (hence on the homotopy category of all countable CW complexes), the s-cohomology, and to its relationship with other familiar homotopy functors, cf. section 4 below. (This functor was called string cohomology in [B2] and [B4] because of its unifying role explained below, cf. Observation 2; the name “string homology” was afterwards used by Sullivan and his school to designate the homology and equivariant homology of the free loop space of a closed manifold when endowed with the additional structures induced by intersection theory and the Pontrjagin product on the chains of based pointed loops, cf. [CS].) The second leads to simple definitions of three homotopy functors defined on the full category of c–CDGA’s (via the minimal model theorem) with values in graded vector spaces endowed with a weight decomposition, cf. section 5 below. Their properties lead to interesting results about the cohomology of the free loop space of 1-connected spaces.
## 4 De-Rham Theory in the presence of a smooth vector field
Let $M$ be a smooth manifold, possibly of infinite dimension. In the latter case the manifold is modeled on a good Frechet space (a Frechet space with countable base which admits a smooth partition of unity; note that if a Frechet space $V$ is good then the space of smooth maps $C^{\infty}(S^{1},V),$ equipped with the $C^{\infty}-$topology, is also good), for which the differential calculus can be performed as expected.
Consider the CDGA of differential forms $\Omega^{\ast}(M)$ with exterior differential $d^{\ast}:\Omega^{\ast}(M)\to\Omega^{\ast+1}(M)$ and interior differential $i^{X}_{\ast}:\Omega^{\ast}(M)\to\Omega^{\ast-1}(M),$ the contraction along the vector field $X.$ They are not compatible. However, we can consider the Lie derivative $L_{X}:=d\cdot i^{X}+i^{X}\cdot d$ and define $\Omega_{X}(M):=\{\omega\in\Omega(M)\,|\,L_{X}\omega=0\};$ $\Omega_{X}(M)$ consists of the smooth forms invariant under the flow induced by $X.$ The graded vector space $\Omega^{\ast}_{X}(M)$ is a commutative graded algebra, a sub algebra of $\Omega^{\ast}(M),$ and the restrictions of $d^{\ast}$ and of $i^{X}_{\ast}$ leave $\Omega^{\ast}_{X}(M)$ invariant and are compatible. Consequently $(\Omega^{\ast}_{X}(M),d^{\ast},i^{X}_{\ast})$ is a mixed CDGA.
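Indeed, on $\Omega^{\ast}_{X}(M)$ the compatibility condition (iv) of Definition 1 is exactly the Cartan homotopy formula:
$d^{\ast-1}\cdot i^{X}_{\ast}+i^{X}_{\ast+1}\cdot d^{\ast}=L_{X}=0\quad\text{on}\ \ \Omega^{\ast}_{X}(M),$
while on all of $\Omega^{\ast}(M)$ the same expression equals $L_{X},$ which is nonzero in general.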
Denote by
1. (i)
$H^{\ast}_{X}(M):=H^{\ast}(\Omega^{\ast}_{X},d^{\ast}),$
2. (ii)
${}^{\pm}H^{\ast}_{X}(M):=^{\pm}H^{\ast}(\Omega^{\ast}_{X},d^{\ast},i^{X}_{\ast}),$
3. (iii)
$PH^{\ast}_{X}(M):=PH^{\ast}(\Omega^{\ast}_{X},d^{\ast},i^{X}_{\ast}),$
4. (iv)
$\mathbb{P}H^{\ast}_{X}(M):=\mathbb{P}H^{\ast}(\Omega^{\ast}_{X},d^{\ast},i^{X}_{\ast}).$
The diagram Fig 2 becomes
$\cdots\to{}^{+}H^{r}_{X}(M)\overset{S^{r}}{\to}{}^{+}H^{r+2}_{X}(M)\overset{J^{r+2}}{\to}H^{r+2}(M)\overset{B^{r+2}}{\to}{}^{+}H^{r+1}_{X}(M)\to\cdots$
$\cdots\to{}^{+}H^{r}_{X}(M)\overset{\mathbb{I}^{r}}{\to}\mathbb{P}H^{r+2}(M)\overset{\mathbb{J}^{r+2}}{\to}{}^{-}H^{r+2}_{X}(M)\overset{\mathbb{B}^{r+2}}{\to}{}^{+}H^{r+1}_{X}(M)\to\cdots$
Fig 3
The above diagram becomes more interesting if the vector field $X$ is induced
from an $S^{1}$ action $\mu:S^{1}\times M\to M$ (i.e. if $x\in M$ then $X(x)$
is the tangent to the orbits through $x$). In this section we will explore
particular cases of this diagram.
Observe that since $\mu$ is a smooth action, the subset $F$ of fixed points is
a smooth sub manifold. For any $x\in F$ denote by $\rho_{x}:S^{1}\times
T_{x}(M)\to T_{x}(M)$ the linearization of the action at $x$ which is a linear
representation. The inclusion $F\subset M$ induces the morphism
$r^{\ast}:(\Omega^{\ast}_{X}(M),d^{\ast},i^{X}_{\ast})\to(\Omega^{\ast}(F),d^{\ast},0).$
For a linear representation $\rho:S^{1}\times V\to V$ on a good Frechet space
denote by $V^{f}$ the fixed point set and by $X$ the vector field associated
to $\rho$ when regarded as a smooth action.
###### Definition 4.
A linear representation $\rho:S^{1}\times V\to V$ on the good Frechet space is
good if the following conditions hold:
a. $V^{f},$ the fixed point set, is a good Frechet space,
b. The map $r^{\ast}:\Omega^{\ast}(V)\to\Omega^{\ast}(V^{f})$ induced by the
inclusion is surjective,
c. $(\Omega^{\ast}_{X}(V,V^{f}),i^{X}_{\ast}),$ with $\Omega^{\ast}_{X}(V,V^{f}):=\ker r^{\ast},$ is acyclic.
We have:
###### Proposition 2.
1\. Any representation on a finite dimensional vector space is good.
2\. If $V$ is a good Frechet space then the regular representation
$\rho:S^{1}\times C^{\infty}(S^{1},V)\to C^{\infty}(S^{1},V)$, with
$C^{\infty}(S^{1},V),$ the Frechet space of smooth functions, is good.
For a proof consult the Appendix of [B1]. The proof is based on an explicit formula for $i^{X}$ in the case of an irreducible $S^{1}-$representation and on writing the elements of $C^{\infty}(S^{1},V)$ as Fourier series.
###### Definition 5.
A smooth action $\mu:S^{1}\times M\to M$ is good if its linearization at any
fixed point is a good representation.
Then a smooth action on any finite dimensional manifold is good and so is the
canonical smooth action of $S^{1}$ on $P^{S^{1}},$ the smooth manifold of
smooth maps from $S^{1}$ to $P$ where $P$ is any smooth Frechet manifold (in
particular a finite dimensional manifold). In view of the definitions above
observe the following.
###### Proposition 3.
If $\tilde{M}=(M,\mu)$ is a smooth $S^{1}-$manifold and $X$ is the associated
vector field, then:
1\. $H^{\ast}_{X}(M)=H^{\ast}(M),$
2\. ${}^{+}H^{\ast}_{X}(M)=H^{\ast}_{S^{1}}(\tilde{M}),$ and under this identification $S:H^{\ast}_{S^{1}}(\tilde{M})\to H^{\ast+2}_{S^{1}}(\tilde{M})$ identifies with multiplication by $u\in H^{2}_{S^{1}}(pt),$ the generator of the equivariant cohomology of the one point space,
3\.
$PH^{\ast}_{X}(M)=\underset{\underset{S}{\rightarrow}}{\lim}H^{\ast+2k}_{S^{1}}(\tilde{M}).$
If the action is good then:
4\. $\mathbb{P}H^{\ast}_{X}(M)=K^{\ast}(F)\ $ where
$\displaystyle K^{r}(F)=$ $\displaystyle\prod_{k}H^{2k}(F)\ \text{if}\ \ r\ \
even$ $\displaystyle K^{r}(F)=$ $\displaystyle\prod_{k}H^{2k+1}(F)\ \text{if}\
\ r\ \ odd.$
If $M$ is a closed $n-$dimensional manifold then:
5\. ${}^{-}H^{\ast}_{X}(M)=H^{S^{1}}_{n-1-\ast}(\tilde{M},\mathcal{O}_{M}),$ with $H^{S^{1}}_{\ast}(\tilde{M},\mathcal{O}_{M})$ the equivariant homology with coefficients in the orientation bundle $\mathcal{O}_{M}$ of $M.$ (Recall that $H^{S^{1}}_{\ast}(M,\mathcal{O}_{M})=H_{\ast}(M//S^{1},\mathbb{O}_{M})$ where $M//S^{1}$ is the homotopy quotient of this action. This equivariant homology can be derived from invariant currents in the same way as equivariant cohomology is derived from invariant forms, cf. [AB86]. The complex of invariant currents (with coefficients in the orientation bundle) contains the complex $(\Omega^{n-\ast}_{X}(M,\mathcal{O}_{M}),\partial_{n-\ast})$ as a quasi isomorphic sub complex.)
###### Proof.
1\. The verification is standard since $S^{1}$ is compact and connected; one constructs $av^{\ast}:(\Omega^{\ast}(M),d^{\ast})\to(\Omega_{X}^{\ast},d^{\ast})$ by $S^{1}-$averaging, using the compactness of $S^{1}.$ The homomorphism induced in cohomology by $av^{\ast}$ is obviously surjective. To check that it is injective one has to show that any closed $k-$form $\omega$ which becomes exact after applying $av$ is already exact, precisely that $\int_{c}\omega=0$ for any smooth $k-$cycle $c.$ Indeed, since the connectedness of $S^{1}$ implies $\int_{c}\omega=\int_{\mu(-\theta,c)}\omega$ for every $\theta\in S^{1},$ one has:
$\displaystyle\int_{c}\omega=1/2\pi\int_{S^{1}}(\int_{c}\omega)d\theta=$
$\displaystyle
1/2\pi\int_{S^{1}}(\int_{\mu(-\theta,c)}\omega)d\theta=1/2\pi\int_{S^{1}}(\int_{c}\mu_{\theta}^{\ast}(\omega))d\theta=$
$\displaystyle\int_{c}(1/2\pi\int_{S^{1}}\mu_{\theta}^{\ast}(\omega)d\theta)=\int_{c}av^{\ast}(\omega)=0.$
Here $\mu_{\theta}$ denotes the diffeomorphism $\mu(\theta,\cdot):M\to M.$
2\. Looking at the definition in section 2 one recognizes one of the most
familiar definition of equivariant cohomology using invariant differential
forms cf. [AB86].
3\. The proof is a straightforward consequence of Proposition 1 in section 2
and (2) above.
4\. Let $F$ be the smooth sub manifold of the fixed points of $\mu.$ Clearly
$r^{\ast}:(\Omega^{\ast}_{X}(M),d^{\ast},i^{X}_{\ast})\to(\Omega^{\ast}(F),d^{\ast}_{F},0)$
is a morphism of mixed CDGA, hence of mixed complexes. If the smooth action is
good then the above morphism induces an isomorphism in homology
$H_{\ast}(\Omega^{\ast}_{X},i^{X}_{\ast})\to H_{\ast}(\Omega^{\ast}(F),0).$ To
check this we have to show that $(\ker r^{\ast},i^{X}_{\ast})$, with $\ker r^{\ast}:=\{\omega\in\Omega_{X}^{\ast}(M)\,|\,\omega|_{F}=0\},$ is acyclic. This follows (by $S^{1}-$averaging) from the acyclicity of the chain complex $(\Omega^{\ast}(M,F),i^{X}_{\ast}),$ which in turn can be derived using the linearity of $i^{X}$ with respect to functions. Indeed, using a “partition of unity” argument it suffices to verify this acyclicity locally. For points outside $F$ the acyclicity follows from the acyclicity of the complex
$\cdots\to\Lambda^{\ast+1}(V)\overset{i^{e}}{\to}\Lambda^{\ast}(V)\overset{i^{e}}{\to}\Lambda^{\ast-1}(V)\to\cdots,$
where $V$ is a Frechet space, $\Lambda^{k}(V)$ is the space of skew symmetric $k-$linear maps from $V$ to $\kappa=\mathbb{R},\mathbb{C},$ and $e\in V\setminus 0.$ For points $x\in F$ this follows from the fact that the
linearization of the action at $x$ is a good representation, as stated in
Proposition 2.
5\. If $\tilde{M}$ is a finite dimensional smooth $S^{1}-$manifold we can
equip $M$ with an invariant Riemannian metric $g$ and consider
$\star:\Omega^{\ast}(M)\to\Omega^{n-\ast}(M;\mathcal{O}_{M})$ the Hodge star
operator. Denote by $\omega\in\Omega^{1}(M)$ the $1-$form corresponding to $X$
w.r. to the metric $g$, by
$e_{\omega}:\Omega^{\ast}(M;\mathcal{O}_{M})\to\Omega^{\ast+1}(M;\mathcal{O}_{M})$
the exterior multiplication with $\omega$ and by
$\partial_{\ast}:\Omega^{\ast}(M;\mathcal{O}_{M})\to\Omega^{\ast-1}(M;\mathcal{O}_{M})$
the formal adjoint of $d^{\ast-1}$ w.r. to $g$ i.e.
$\partial_{\ast}=\pm\star\cdot\ d^{n-\ast}\cdot\ \star^{-1}.$ Note that
$e_{\omega}=\pm\star\cdot\ i^{X}\cdot\star^{-1}.$ All these operators leave
$\Omega_{X}$ invariant since $g$ is invariant. Clearly
$(\Omega^{\ast}_{X}(M;\mathcal{O}_{M}),e^{\ast}_{\omega},\partial_{\ast})$ is
a mixed cochain complex and we have
${}^{-}H^{\ast}_{i^{X}}(\Omega_{X}(M),d,i^{X})=^{+}H^{\partial}_{n-\ast}(\Omega_{X}(M;\mathcal{O}_{M}),e^{\omega},\partial).$
The equivariant homology of $\tilde{M}$ with coefficients in the orientation
bundle can be calculated from the complex of invariant currents which, if $M$
closed, contains the complex
$(\Omega^{n-\ast}_{X}(M,\mathcal{O}_{M}),\partial_{n-\ast})$ as a quasi
isomorphic sub complex. As a consequence we have
$\displaystyle H_{n-\ast}(\Omega_{X}(M;\mathcal{O}_{M}),\partial)=$
$\displaystyle H_{\ast}(M;\mathcal{O}_{M})$
$\displaystyle{}^{+}H^{e_{\omega}}_{n-\ast}(\Omega_{X}(M;\mathcal{O}_{M}),e_{\omega},\partial)=$
$\displaystyle H_{\ast}^{S^{1}}(M;\mathcal{O}_{M})$
(cf. section 2 for notations). ∎
As a consequence of Proposition 3 (1)-(4) for any smooth $S^{1}$ manifold with
good $S^{1}-$ action the second long exact sequence in diagram Fig 3 becomes
$\cdots\to H^{r-2}_{S^{1}}(\tilde{M})\overset{\mathbb{I}^{r-2}}{\to}K^{r}(F)\overset{\mathbb{J}^{r}}{\to}{}^{-}H^{r}_{S^{1}}(\tilde{M})\overset{\mathbb{B}^{r}}{\to}H^{r-1}_{S^{1}}(\tilde{M})\to\cdots,$
the map $\mathbb{I}^{r-2}$ factoring through $\underset{\underset{S}{\rightarrow}}{\lim}H^{r+2k}_{S^{1}}(\tilde{M}).$
Fig 4
The sequence above is obviously natural in the sense that an $S^{1}-$equivariant smooth map $f:\tilde{M}\to\tilde{N}$ induces a commutative diagram whose rows are the above exact sequence Fig 4 for $\tilde{M}$ and $\tilde{N}.$ Then, if $f$ and its restriction to the fixed point sets induce isomorphisms in cohomology, $f$ induces isomorphisms in $H^{\ast}_{S^{1}}$ and $K^{\ast},$ and then in all other types of equivariant cohomologies ${}^{-}H^{\ast}_{S^{1}},\ PH^{\ast}_{S^{1}},\ \mathbb{P}H^{\ast}_{S^{1}}.$
If $\tilde{M}$ is a compact smooth $S^{1}-$ manifold in view of Proposition 3
(5) one identifies ${}^{-}H^{r}_{S^{1}}(M)$ to
$H^{S^{1}}_{n-r}(M;\mathcal{O}_{M})$ and in view of this identification write
$Pd_{n-r}.$ instead of $\mathbb{B}^{r}.$ The long exact sequence becomes
$\cdots\to K^{r}(F)\to H_{n-r}^{S^{1}}(\tilde{M};\mathcal{O}_{M})\overset{Pd_{n-r}}{\to}H^{r-1}_{S^{1}}(\tilde{M})\to K^{r+1}(F)\to\cdots.$
Fig 5
In case that the fixed point set is empty we conclude that
$Pd_{n-r}:{H}_{n-r}^{S^{1}}(\tilde{M},\mathcal{O}_{M})\to
H^{r-1}_{S^{1}}(\tilde{M})$
is an isomorphism. In this case the orbit space $M/S^{1}$ is a
$\mathbb{Q}-$homological manifold of dimension $(n-1)$, hence
$\displaystyle H_{n-r}^{S^{1}}(\tilde{M};\mathcal{O}_{M})=$ $\displaystyle
H_{n-r}(M/S^{1};\mathcal{O}_{M/S^{1}})$ $\displaystyle
H_{S^{1}}^{r-1}(\tilde{M};\mathcal{O}_{M})=$ $\displaystyle
H^{r-1}(M/S^{1};\mathcal{O}_{M/S^{1}})$
and $Pd_{\ast}$ is nothing but the Poincaré duality isomorphism for
$\mathbb{Q}-$ homology manifolds. In general the long exact sequence Fig 5
measures the failure of the Poincaré duality map, $Pd_{\ast},$ to be an
isomorphism.
## 5 The free loop space and s-cohomology
A more interesting example is provided by the $S^{1}-$manifold $\tilde{P^{S^{1}}}:=(P^{S^{1}},\mu).$ Here $P^{S^{1}}$ denotes the smooth manifold of smooth maps from $S^{1}$ to $P$ modeled on the Frechet space $C^{\infty}(S^{1},V)$ where $V$ is the model for $P$ (a finite or infinite dimensional Frechet space), cf. [B1]. This smooth manifold is equipped with the
canonical smooth $S^{1}-$action $\mu:S^{1}\times P^{S^{1}}\to P^{S^{1}}$
defined by
$\mu(\theta,\alpha)(\theta^{\prime})=\alpha(\theta+\theta^{\prime}),\ \
\alpha:S^{1}\to P,\ \ \theta,\ \theta^{\prime}\in S^{1}=\mathbb{R}/2\pi.$
The fixed point set of the action $\mu$ consists of the constant maps, hence identifies with $P.$ This action is the restriction of the canonical action of $O(2),$ the group of isometries of $S^{1},$ to the subgroup of orientation preserving isometries, identified with $S^{1}$ itself. For any $x\in P$ viewed as a fixed point of $\mu$ the linearization of the action is the regular representation of $S^{1}$ on $V=T_{x}(P).$ In view of Proposition 2 the action $\mu$ is good. The space $P^{S^{1}}$ is also equipped with the natural maps $\psi_{k},\ k=1,2,\cdots,$ the geometric power maps, and with the involution $\tau,$ defined by
$\displaystyle\psi_{k}(\alpha)(\theta)=\alpha(k\theta)$
$\displaystyle\tau(\alpha)(\theta)=\alpha(-\theta)$
with $\alpha\in P^{S^{1}}$, and $\theta\in S^{1}.$
The involution $\tau$ is the restriction of the action of $O(2)$ to the reflection $\theta\to-\theta$ of $S^{1}.$ Then $(\Omega^{\ast}_{X}(P^{S^{1}}),d^{\ast},i^{X}_{\ast})$ is a mixed CDGA, hence a mixed complex, with power maps $\Psi_{k}$ and involution $\tau$ induced from $\psi_{k}$ and $\tau.$
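The relation (ii) of section 2 can be checked directly for $\Psi_{k}=\psi_{k}^{\ast}$: the vector field of the action is $X(\alpha)=\alpha^{\prime},$ and since $\psi_{k}(\alpha)(\theta)=\alpha(k\theta)$ one has $X(\psi_{k}(\alpha))=k\,d\psi_{k}(X(\alpha)),$ hence
$\psi_{k}^{\ast}(i^{X}\omega)=k\,i^{X}(\psi_{k}^{\ast}\omega),\qquad\text{i.e.}\qquad \Psi_{k}\circ i^{X}_{\ast}=k\,i^{X}_{\ast}\circ\Psi_{k},$
while $\Psi_{k}$ commutes with $d^{\ast}$ and preserves $\Omega^{\ast}_{X}(P^{S^{1}}).$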
Suppose $f:P_{1}\to P_{2}$ is a smooth map. It induces a smooth equivariant
map $f^{S^{1}}:{P_{1}^{S^{1}}}\to{P_{2}^{S^{1}}}$ whose restriction to the
fixed points set is exactly $f.$ If $f$ is a homotopy equivalence then so is
$f^{S^{1}}.$
Introduce the notation
$\displaystyle hH^{\ast}(P):=H^{\ast}(P^{S^{1}}),$ $\displaystyle
cH^{\ast}(P):=H^{\ast}_{S^{1}}(\tilde{P^{S^{1}}}),$ $\displaystyle
sH^{\ast}(P):=^{-}H^{\ast}_{S^{1}}(\tilde{P^{S^{1}}}).$
The assignments $P\rightsquigarrow hH^{\ast}(P),$ $P\rightsquigarrow cH^{\ast}(P),$ $P\rightsquigarrow sH^{\ast}(P)$ are functors (the notations $hH^{\ast},cH^{\ast}$ are motivated by the Hochschild resp. cyclic homology interpretation of these functors, while $sH^{\ast}$ is an abbreviation of string cohomology) with the property that $hH^{\ast}(f),cH^{\ast}(f),sH^{\ast}(f)$ are isomorphisms if $f$ is a homotopy equivalence; hence they are all homotopy functors. They are related by the commutative diagram below. This diagram is the same as the diagram (Fig 3) applied to $\tilde{M}=\tilde{P^{S^{1}}}$ with the specifications provided by Proposition 3.
$\cdots\to cH^{r-2}(P)\overset{S^{r-2}}{\to}cH^{r}(P)\overset{J^{r}}{\to}hH^{r}(P)\overset{B^{r}}{\to}cH^{r-1}(P)\to\cdots$
$\cdots\to cH^{r-2}(P)\overset{\mathbb{I}^{r-2}}{\to}K^{r}(P)\overset{\mathbb{J}^{r}}{\to}sH^{r}(P)\overset{\mathbb{B}^{r}}{\to}cH^{r-1}(P)\to\cdots$
Fig 6
where the map $\mathbb{I}^{r}$ factors through the natural map $cH^{r}(P)\to\underset{\rightarrow}{\lim}\ cH^{r+2k}(P),$ with $\underset{\rightarrow}{\lim}\ cH^{r+2k}(P)=\underset{\rightarrow}{\lim}\{\cdots\to cH^{r+2k}(P)\overset{S}{\rightarrow}cH^{r+2k+2}(P)\to\cdots\}.$
The linear map $cH^{\ast}(P)=H^{\ast}_{S^{1}}(\tilde{P^{S^{1}}})\overset{\mathbb{I}^{r}}{\rightarrow}K^{\ast}$ factors through $\underset{\rightarrow}{\lim}\ cH^{r+2k}(P),$ which depends only on the fundamental group of $P.$ Indeed, it is shown in [B3] that if $P(1)$ (the notation for the first stage Postnikov term of $P$) is a smooth manifold (possibly of infinite dimension) which has the homotopy type of $K(\pi,1)$ and $p(1):P\to P(1)$ is a smooth map inducing an isomorphism of fundamental groups, then
$\underset{\rightarrow}{\lim}H^{r+2k}_{S^{1}}(\tilde{P^{S^{1}}})\to\underset{\rightarrow}{\lim}H^{r+2k}_{S^{1}}(\tilde{P(1)^{S^{1}}})$
is an isomorphism, cf. [B3]. Then, if one denotes by $\overline{cH^{\ast}}(M):=\text{coker}(cH^{\ast}(M)\to cH^{\ast}(pt))$ and $\overline{K^{\ast}}(M):=\text{coker}(K^{\ast}(M)\to K^{\ast}(pt))$ (clearly $K^{r}(pt)=H^{r}_{S^{1}}(pt)=\kappa$ resp. $0$ if $r$ is even resp. odd), one obtains
###### Theorem 2.
If $P$ is a 1-connected smooth manifold then we have the following short exact
sequence:
$0\to\overline{K}^{r}(P)\otimes\kappa\overset{\mathbb{J}^{r}}{\rightarrow}{sH}^{r}(P)\overset{\mathbb{B}^{r}}{\rightarrow}\overline{c}H^{r-1}(P)\to
0$
where $\kappa=\mathbb{R}$ or $\mathbb{C}.$
###### Observation 2.
The vector space $K^{\ast}(P)$ can be identified via the Chern character with the Atiyah–Hirzebruch (complex) $K$-theory tensored with the field $\kappa=\mathbb{R}$ or $\mathbb{C},$ depending on what sort of differential forms one considers (real or complex valued). When $P$ is 1-connected, $\overline{c}H^{\ast}(P)$ identifies with $\text{Hom}(\tilde{A}_{\ast}(P),\kappa)$ where $\tilde{A}_{\ast}(P)$ denotes the reduced Waldhausen algebraic $K$-theory (often referred to as $A$-theory), cf. [B2]. From this perspective $sH^{\ast}$ unifies topological (Atiyah–Hirzebruch) $K$-theory and Waldhausen algebraic $K$-theory.
###### Observation 3.
In view of the definition of ${}^{-}H^{\ast}_{\beta}(C^{\ast},\delta^{\ast},\beta_{\ast}),$ cf. section 2, observe that $sH^{\ast}(P)$ is represented by infinite sequences of invariant differential forms on $P^{S^{1}},$ rather than eventually finite sequences. (Indeed, $sH^{\ast}(P)$ is the cohomology of the cochain complex $({}^{-}C^{\ast},{}^{-}D^{\ast})$ with ${}^{-}C^{r}=\prod_{k\geq 0}\Omega^{r+2k}_{inv}(P^{S^{1}})$ and ${}^{-}D^{r}(\cdots,\omega_{r+2},\omega_{r})=(\cdots,(i^{X}\omega_{r+2}+d\omega_{r})),$ cf. section 2.) If instead of “infinite sequences” we had considered “eventually finite sequences,” the outcome would have been different for infinite dimensional manifolds. The difference between “infinite sequences” and “eventually finite sequences” exists only for infinite dimensional manifolds, which $P^{S^{1}}$ always is.
The power maps $\psi_{k}$ induce the endomorphisms $h\Psi_{k},$ $c\Psi_{k},$ $s\Psi_{k}$ and $K\Psi_{k}$ on $hH^{\ast},cH^{\ast},sH^{\ast},$ and $K^{\ast}.$
In general only the $K\Psi_{k}$ are easy to describe. Precisely, if $r$ is even then $K^{r}=\prod_{i\geq 0}H^{2i}(P)$ and if $r$ is odd then $K^{r}=\prod_{i\geq 0}H^{2i+1}(P),$ and in both cases $K\Psi_{k}=\prod_{i\geq 0}k^{i-r}Id.$
The symmetric part with respect to the involution $c\Psi_{-1},$ i.e. the eigenspace corresponding to the eigenvalue $+1,$ identifies with $H^{\ast}_{O(2)}(P^{S^{1}}),$ the equivariant cohomology for the canonical $O(2)$-action.
However, if $P$ is 1-connected, in view of section 6, one can describe both the eigenvalues and the eigenspaces of the power maps $h\Psi_{k}$ and $c\Psi_{k}$ and hence of $s\Psi_{k}.$ We have:
###### Theorem 3.
Let $P$ be a 1-connected manifold.
1\. All eigenvalues of the endomorphisms $h\Psi_{k}$ and $c\Psi_{k}$ are $k^{r},\ r=0,1,2,\cdots,$ and the eigenspaces corresponding to $k^{r}$ are independent of $k$ provided $k\geq 2.$
2\. Denote these eigenspaces by $hH^{\ast}(P)(r)$ and $cH^{\ast}(P)(r).$ Then $hH^{\ast}(0)=H^{\ast}(P;\kappa),\ cH^{\ast}(0)=H^{\ast+1}(P;\kappa),$ and $hH^{r}(p)=cH^{r}(p)=0,\ p\geq r+1.$
3\. If $\sum_{i}\dim\pi_{i}(P)\otimes\kappa<\infty,$ where $\kappa$ is the field of real or complex numbers, and $\sum_{i}\dim(H^{i}(P))<\infty,$ then for any $r\geq 0$ one has
$\sum_{i}\dim hH^{i}(P)(r)<\infty,\ \ \sum_{i}\dim cH^{i}(P)(r)<\infty.$
If $P$ is “formal” in the sense of rational homotopy theory (i.e. for each connected component of $P$ the de Rham algebra and the cohomology algebra equipped with the differential $0$ are homotopy equivalent, cf. section 6; a projective complex algebraic variety, or more generally a closed Kähler manifold, is formal, cf. [DGMS]), then the Euler–Poincaré characteristics
$\chi^{h}(\lambda):=\sum_{i,r}\dim hH^{i}(r)\lambda^{r}$
and
$\chi^{c}(\lambda):=\sum_{i,r}\dim cH^{i}(r)\lambda^{r}$
can be explicitly calculated in terms of the numbers $\dim H^{i}(P),$ cf. [B].
The explicit formulae are quite complicated. They require the results of P. Hanlon [H] about the eigenspaces of Adams operations in Hochschild and cyclic homology, as well as the identification of $hH^{\ast}(P)$ resp. $cH^{\ast}(P)$ with the Hochschild resp. cyclic homology of the graded algebra $H^{\ast}(P).$ These are not discussed in this paper, but the reader can consult [BFG] and [B4] for precise statements.
The functor $\overline{sH}^{r}(P)$ is of particular interest in geometric topology. In the case $P$ is 1-connected, it calculates, in a range of degrees, the homotopy groups of the (homotopy) quotient space of homotopy equivalences by the group of diffeomorphisms [B1], [B4].
## 6 The free loop construction for CDGA
The “free loop” construction associates to a free connected CDGA $(\Lambda[V],d_{V})$ a mixed CDGA $(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V}),$ endowed with power maps $\Psi_{k}$ and an involution $\tau,$ defined as follows; a small worked example is given after the definitions (i)-(v) below.
1. (i)
Let $\overline{V}=\oplus_{i\geq 0}\overline{V}^{i}$ with
$\overline{V}^{i}:=V^{i+1}$ and let $\Lambda[V\oplus\overline{V}]$ be the
commutative graded algebra generated by $V\oplus\overline{V}.$
2. (ii)
Let $i^{V}:\Lambda[V\oplus\overline{V}]\to\Lambda[V\oplus\overline{V}]$ be the unique internal differential (of degree $-1$) which extends $i^{V}(v)=\overline{v}$ and $i^{V}(\overline{v})=0.$
3. (iii)
Let $\delta_{V}:\Lambda[V\oplus\overline{V}]\to\Lambda[V\oplus\overline{V}]$ be the unique external differential (of degree $+1$) which extends $\delta_{V}(v)=d_{V}(v)$ and $\delta_{V}(\overline{v})=-i^{V}(d_{V}(v)).$
4. (iv)
Let
$\Psi_{k}:(\Lambda[V\oplus\overline{V}],\delta_{V})\to(\Lambda[V\oplus\overline{V}],\delta_{V}),\ k=-1,1,2,\cdots$
be the unique morphisms of CDGA which extend $\Psi_{k}(v)=v,\ \Psi_{k}(\overline{v})=k\overline{v}.$ We put $\tau:=\Psi_{-1}.$ The maps $\Psi_{k},$ $k\geq 1,$ are called the power maps and $\tau$ the canonical involution. One has
$\Psi_{k}\cdot\Psi_{r}=\Psi_{kr},\qquad\Psi_{k}\cdot i^{V}=k\,i^{V}\cdot\Psi_{k}.$
5. (v)
Let $\Lambda^{+}[V\oplus\overline{V}]$ be the ideal of $\Lambda[V\oplus\overline{V}]$ generated by $V\oplus\overline{V},$ equivalently the kernel of the augmentation which vanishes on $V\oplus\overline{V}.$
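For orientation, here is a small worked example of this construction; it is our illustration and is not taken from the references. Take $V$ with a single generator $x$ of degree $3$ and $d_{V}=0$ (so $(\Lambda[V],d_{V})$ is a minimal model of $S^{3}$ over $\kappa$). Then $\overline{V}$ is spanned by $\overline{x}$ of degree $2$ and
$\Lambda[V\oplus\overline{V}]=\Lambda(x)\otimes\kappa[\overline{x}],\qquad i^{V}(x)=\overline{x},\quad i^{V}(\overline{x})=0,\qquad\delta_{V}(x)=0,\quad\delta_{V}(\overline{x})=-i^{V}(d_{V}(x))=0.$
The power maps act by $\Psi_{k}(\overline{x}^{\,r})=k^{r}\overline{x}^{\,r}$ and $\Psi_{k}(x\overline{x}^{\,r})=k^{r}x\overline{x}^{\,r},$ so the eigenspace of $\Psi_{k}$ with eigenvalue $k^{r}$ is spanned by $\overline{x}^{\,r}$ and $x\overline{x}^{\,r},$ independently of $k\geq 2,$ in agreement with Observation 4 below.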
Note that:
###### Observation 4.
1\. $(\text{Im}(i^{V}),\delta_{V},0)$ is a mixed sub complex of $(\Lambda^{+}[V\oplus\overline{V}],\delta_{V},i^{V})\subset(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V}).$
2\. $\Psi_{k}$, $k=-1,1,2,\cdots$ leave $(\Lambda^{+}[V\oplus\overline{V}],\delta_{V},i^{V})$ and $(\text{Im}(i^{V}),\delta_{V})$ invariant and have $k^{r},$ $r=0,1,2,\cdots,$ as eigenvalues; these are all the eigenvalues.
For $k\geq 2$ the eigenspace of $\Psi_{k}:\Lambda[V\oplus\overline{V}]\to\Lambda[V\oplus\overline{V}]$ corresponding to the eigenvalue $k^{r}$ is exactly $\Lambda[V]\otimes\overline{V}^{\otimes r},$ resp. $\Lambda^{+}[V\oplus\overline{V}]\cap\Lambda[V]\otimes\overline{V}^{\otimes r},$ resp. $\text{Im}(i^{V})(r)=\text{Im}(i^{V})\cap\Lambda[V]\otimes\overline{V}^{\otimes r},$ hence independent of $k.$ Each such eigenspace is $\delta_{V}$-invariant.
3\. The mixed complex $(\Lambda^{+}[V\oplus\overline{V}],\delta_{V},i^{V})$ is $i^{V}$-acyclic.
4\. We have the decomposition
$(\Lambda[V\oplus\overline{V}],\delta_{V})=\bigoplus_{r\geq
1}(\Lambda[V]\otimes\overline{V}^{\otimes r},\delta_{V})$
and the analogous decompositions for $(\Lambda^{+}[V\oplus\overline{V}],\delta_{V})$ and $(\text{Im}(i^{V}),\delta_{V}),$ referred to from now on as the weight decompositions.
Consider the complex $(\Lambda[V]\otimes\overline{V}^{\otimes r},\delta_{V})$
and the filtration provided by $\Lambda[V]\otimes F_{p}(\overline{V}^{\otimes
r})$ with $F_{p}(\overline{V}^{\otimes r})$ the span of elements in
$\overline{V}^{\otimes r}$ of total degree $\leq p.$
For a graded vector space $W=\oplus_{i}W^{i}$ set $\dim W:=\sum_{i}\dim W^{i}.$
###### Observation 5.
1\. $(\Lambda[V]\otimes F_{p}(\overline{V}^{\otimes r}),\delta_{V})$ is a sub complex of $(\Lambda[V]\otimes\overline{V}^{\otimes r},\delta_{V}).$
2\. If $(\Lambda[V],d_{V})$ is minimal and 1-connected then, by Observation 1, $\delta(F_{p}(\overline{V}^{\otimes r}))\subset\Lambda[V]\otimes F_{p-1}(\overline{V}^{\otimes r})$ and then
$({\Lambda[V]\otimes F_{p}(\overline{V}^{\otimes r})}/{\Lambda[V]\otimes F_{p-1}(\overline{V}^{\otimes r})},\ \delta_{V})=(\Lambda[V],d_{V})\otimes{F_{p}(\overline{V}^{\otimes r})}/{F_{p-1}(\overline{V}^{\otimes r})}.$
3\. $\sum_{p}\dim(F_{p}(\overline{V}^{\otimes r})/F_{p-1}(\overline{V}^{\otimes r}))=\dim(\overline{V}^{\otimes r})=(\dim V)^{r}.$
If $f:(\Lambda[V],d_{V})\to(\Lambda[W],d_{W})$ is a morphism of CDGAs then it induces
$\tilde{f}:(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V})\to(\Lambda[W\oplus\overline{W}],\delta_{W},i^{W})$
which intertwines the $\Psi_{k}$'s and hence preserves the weight decompositions.
We introduce the notation $HH^{\ast},CH^{\ast},PH^{\ast}$:
$\displaystyle HH^{\ast}(\Lambda[V],d_{V}):=H^{\ast}(\Lambda[V\oplus\overline{V}],\delta_{V}),$
$\displaystyle CH^{\ast}(\Lambda[V],d_{V}):={}^{+}H^{\ast}_{i^{V}}(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V}),$
$\displaystyle PH^{\ast}(\Lambda[V],d_{V}):=PH^{\ast}_{i^{V}}(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V}),$
and for a morphism $f$ denote by $HH(f),CH(f),PH(f)$ the linear maps induced by $\tilde{f}.$ The assignments $HH^{\ast},CH^{\ast},PH^{\ast}$ provide functors from the category of fc–CDGA’s to graded vector spaces. They come equipped with the operations $H\Psi_{k},C\Psi_{k}$ etc. induced from $\Psi_{k}.$ Since $HH^{\ast}(f),CH^{\ast}(f),PH^{\ast}(f)$ are isomorphisms when $f$ is a quasi isomorphism, these functors, as shown in section 3, extend to the category of c–CDGA’s. We have the following result.
###### Theorem 4.
Let $(\mathcal{A},d_{\mathcal{A}})$ be a connected CDGA.
1\. All eigenvalues of the endomorphisms $H\Psi_{k}$ and $C\Psi_{k}$ are
$k^{r},r=0,1,2\cdots,$ and their eigenspaces are independent of $k$ provided
$k\geq 2.$ One denotes them by $HH(\mathcal{A},d_{\mathcal{A}})(r),$ and
$CH(\mathcal{A},d_{\mathcal{A}})(r).$
2\.
$\displaystyle HH^{\ast}(\mathcal{A},d_{\mathcal{A}})(0)=H^{\ast}(\mathcal{A},d_{\mathcal{A}}),\qquad CH^{\ast}(\mathcal{A},d_{\mathcal{A}})(0)=H^{\ast+1}(\mathcal{A},d_{\mathcal{A}}),$
$\displaystyle HH^{r}(\mathcal{A},d_{\mathcal{A}})(p)=CH^{r}(\mathcal{A},d_{\mathcal{A}})(p)=0,\ \ p\geq r+1.$
3\. Suppose $(\mathcal{A},d_{\mathcal{A}})$ is 1-connected with minimal model
$(\Lambda[V],d_{V}).$ If $\sum_{i}\dim V^{i}<\infty$ and $\sum_{i}\dim
H^{i}(\mathcal{A},d_{\mathcal{A}})<\infty$ then for any $r\geq 0$ one has
$\sum_{i}\dim HH^{i}(\mathcal{A},d_{\mathcal{A}})(r)<\infty,\ \ \sum_{i}\dim
CH^{i}(\mathcal{A},d_{\mathcal{A}})(r)<\infty.$
###### Proof.
It suffices to check the statements for
$(\mathcal{A},d_{\mathcal{A}})=(\Lambda[V],d_{V})$ minimal. Items 1) and 2)
are immediate consequences of Observation 4.
Item 3) follows from Observation 5. Indeed, for a fixed $r$ one has
$\sum_{i}\dim H^{i}(\Lambda[V]\otimes\overline{V}^{\otimes r},\delta_{V})\leq\sum_{i,p}\dim H^{i}({\Lambda[V]\otimes F_{p}(\overline{V}^{\otimes r})}/{\Lambda[V]\otimes F_{p-1}(\overline{V}^{\otimes r})},\ \delta_{V})=(\dim V)^{r}\cdot\sum_{i}\dim H^{i}(\Lambda[V],d_{V}).$
∎
In addition to $\chi(\mathcal{A},d_{\mathcal{A}}):=\sum(-1)^{i}\dim H^{i}(\mathcal{A},d_{\mathcal{A}})$ one can consider
$\chi^{H}(\mathcal{A},d_{\mathcal{A}})(r):=\sum(-1)^{i}\dim HH^{i}(\mathcal{A},d_{\mathcal{A}})(r)$ and $\chi^{C}(\mathcal{A},d_{\mathcal{A}})(r):=\sum(-1)^{i}\dim CH^{i}(\mathcal{A},d_{\mathcal{A}})(r),$
and then the power series in $\lambda$,
$\chi^{H}(\mathcal{A},d_{\mathcal{A}})(\lambda):=\sum\chi^{H}(\mathcal{A},d_{\mathcal{A}})(r)\lambda^{r},\qquad\chi^{C}(\mathcal{A},d_{\mathcal{A}})(\lambda):=\sum\chi^{C}(\mathcal{A},d_{\mathcal{A}})(r)\lambda^{r}.$
Theorem 4 (3) implies that for $(\mathcal{A},d_{\mathcal{A}})$ 1-connected
with $\sum_{i}\dim V^{i}<\infty$ and $\sum_{i}\dim
H^{i}(\mathcal{A},d_{\mathcal{A}})<\infty$ the partial Euler–Poincaré
characteristics $\chi^{H}(\mathcal{A},d_{\mathcal{A}})(r)$ and
$\chi^{C}(\mathcal{A},d_{\mathcal{A}})(r)$ and therefore the power series
$\chi^{H}(\mathcal{A},d_{\mathcal{A}})(\lambda)$ and
$\chi^{C}(\mathcal{A},d_{\mathcal{A}})(\lambda)$ are well defined. The results of Hanlon [H] permit the explicit calculation of $\chi^{H}(\lambda)$ and $\chi^{C}(\lambda)$ in terms of $\dim H^{i}(\mathcal{A},d_{\mathcal{A}})$ if $(\mathcal{A},d_{\mathcal{A}})$ is 1-connected and formal, i.e. there exists a quasi isomorphism $(\Lambda[V],d)\to(H^{\ast}(\Lambda[V],d),0),$ with $(\Lambda[V],d)$ a minimal model of $(\mathcal{A},d_{\mathcal{A}}).$
We want to define an algebraic analogue of the functor $sH^{\ast}$ on the
category of cCDGA’s. Recall that for a morphism
$f^{\ast}:(C^{\ast}_{1},d^{\ast}_{1})\to(C^{\ast}_{2},d^{\ast}_{2})$ the
“mapping cone” $\textit{Cone}(f^{\ast})$ is the cochain complex with
components $C^{\ast}_{f}=C^{\ast}_{2}\oplus C^{\ast+1}_{1}$ and with
$d^{\ast}_{f}=\begin{pmatrix}d^{\ast}_{2}&f^{\ast+1}\\\
0&-d^{\ast+1}_{1}\end{pmatrix}.$
Notice that, when $f^{\ast}$ is injective, the morphism
$\textit{Cone}(f^{\ast})\to C^{\ast}_{2}/f^{\ast}(C^{\ast}_{1})$ defined by
the composition $C^{\ast}_{2}\oplus C^{\ast+1}_{1}\rightarrow
C^{\ast}_{2}\rightarrow C^{\ast}_{2}/f^{\ast}(C^{\ast}_{1})$ is a quasi
isomorphism.
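As a quick check (ours, not part of the original argument), $d^{\ast}_{f}$ squares to zero precisely because $f^{\ast}$ commutes with the differentials:
$\begin{pmatrix}d^{\ast}_{2}&f^{\ast+1}\\\ 0&-d^{\ast+1}_{1}\end{pmatrix}^{2}=\begin{pmatrix}(d_{2})^{2}&d_{2}f-fd_{1}\\\ 0&(d_{1})^{2}\end{pmatrix}=0.$
Moreover the evident short exact sequence $0\to C^{\ast}_{2}\to\textit{Cone}(f^{\ast})\to C^{\ast+1}_{1}\to 0$ gives, after passing to cohomology, the long exact sequence used below.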
We will consider the composition
$\underline{I}^{\ast-2}:{}^{+}\mathcal{C}^{\ast-2}_{i^{V}}(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V})\overset{I^{\ast-2}}{\rightarrow}\mathbb{P}\mathcal{C}^{\ast}(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V})\overset{\mathbb{P}^{\ast}(p)}{\rightarrow}\mathbb{P}\mathcal{C}^{\ast}(\Lambda[V],d_{V},0)$
with the first arrow provided by the natural transformation $I^{\ast-2}:{}^{+}C^{\ast-2}_{\beta}\to\mathbb{P}C^{\ast}$ described in section 2, applied to the mixed complex $(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V}),$ and the second induced by the projection on the zero weight component of $(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V}).$
The mapping cone $\textit{Cone}(\underline{I}^{\ast-2})$ is functorial when regarded on the category of fc–CDGA’s. Define
$SH^{\ast}(\Lambda[V],d_{V}):=H^{\ast}(\textit{Cone}(\underline{I}^{\ast-2})).$
The assignment $(\Lambda[V],d_{V})\rightsquigarrow SH^{\ast}(\Lambda[V],d_{V})$ is a homotopy functor.
Consider two commutative diagrams. The first compares the cones of $i^{\ast-2}$ and of $\underline{I}^{\ast-2}$: its rows are
${}^{+}\mathcal{C}^{\ast-2}_{i^{V}}(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V})\overset{i^{\ast-2}}{\rightarrow}{}^{+}\mathcal{C}^{\ast}_{i^{V}}(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V})\to\textit{Cone}(i^{\ast-2})$
and
${}^{+}\mathcal{C}^{\ast-2}_{i^{V}}(\Lambda[V\oplus\overline{V}],\delta_{V},i^{V})\overset{\underline{I}^{\ast-2}}{\rightarrow}\mathbb{P}\mathcal{C}^{\ast}(\Lambda[V],d_{V},0)\to\textit{Cone}(\underline{I}^{\ast-2}),$
with the identity as the left vertical arrow. The second has the row
${}^{+}\mathcal{C}^{\ast-2}_{i^{V}}\overset{i^{\ast-2}}{\rightarrow}{}^{+}\mathcal{C}^{\ast}_{i^{V}}\to\textit{Cone}(i^{\ast-2})$
over the row
${}^{+}\mathcal{C}^{\ast-2}_{i^{V}}\overset{i^{\ast-2}}{\rightarrow}{}^{+}\mathcal{C}^{\ast}_{i^{V}}\to C^{\ast}(\Lambda[V\oplus\overline{V}],\delta_{V}),$
with the identity on the first two columns and with the last vertical arrow in the second diagram a quasi isomorphism as noticed above.
The long exact sequence induced by passing to cohomology in the first diagram, combined with the identifications implied by the second diagram, leads to the commutative diagram
Fig 7: the top row is the long exact sequence
$\cdots\to CH^{r-2}\overset{S^{r-2}}{\rightarrow}CH^{r}\overset{J^{r}}{\rightarrow}HH^{r}\overset{B^{r}}{\rightarrow}CH^{r-1}\to\cdots$
and the bottom row is the long exact sequence
$\cdots\to CH^{r-2}\overset{S^{r-2}}{\rightarrow}K^{r}\overset{\mathbb{J}^{r}}{\rightarrow}SH^{r}\overset{\mathbb{B}^{r}}{\rightarrow}CH^{r-1}\to\cdots,$
connected by the identity on $CH^{r-2}$ and $CH^{r-1}$ and by $T^{r}:CH^{r}\to K^{r},$ together with the map $CH^{r}\overset{\mathbb{I}^{r}}{\rightarrow}PH^{r},$
with $PH^{r}=\underset{\rightarrow}{\lim}CH^{r+2k}$ and $K^{r}:=K^{r}(\Lambda[V],d_{V})$ given by $\prod_{k}H^{2k}(\Lambda[V],d_{V})$ resp. $\prod_{k}H^{2k+1}(\Lambda[V],d_{V})$ if $r$ is even resp. odd. It is immediate that Theorem 2 remains true with $sH^{\ast},cH^{\ast}$ replaced by $SH^{\ast},CH^{\ast},$ as follows easily from diagram Fig 7. The diagram Fig 7 should be compared with diagram Fig 6. This explains why $SH^{\ast}(\Lambda[V],d_{V})$ will be regarded as the algebraic analogue of $sH^{\ast}(P).$
It is natural to ask if the functors $HH^{\ast},CH^{\ast},SH^{\ast}$ applied to $(\Omega^{\ast}(P),d^{\ast})$ calculate $hH^{\ast},cH^{\ast},sH^{\ast}$ applied to $P$ and if the diagram Fig 7 identifies with the diagram Fig 6. The answer is in general no, but it is yes if $P$ is 1-connected. The minimal model theory, discussed in the next section, permits one to identify $HH^{\ast}(\Omega^{\ast}(P),d^{\ast})$ and $CH^{\ast}(\Omega^{\ast}(P),d^{\ast})$ with $hH^{\ast}(P)$ and $cH^{\ast}(P),$ then $SH^{\ast}(\Omega^{\ast}(P),d^{\ast})$ with $sH^{\ast}(P),$ and actually the diagram Fig 6 with the diagram Fig 7, when $P$ is 1-connected.
## 7 Minimal models and the proof of Theorem 3
Observe that if $(\mathcal{A}^{\ast},d^{\ast},\beta_{\ast})$ is a mixed CDGA equipped with the power maps and involution $\Psi_{k},k=-1,1,2,\cdots,$ then the commutative square stating that the natural map ${}^{+}H^{\ast}_{\beta}(\mathcal{A}^{\ast},d^{\ast},\beta_{\ast})\to H^{\ast}(\mathcal{A}^{\ast},d^{\ast})$ intertwines the endomorphisms ${}^{+}\underline{\Psi}_{k}$ can be derived by passing to cohomology in the commutative square of CDGA’s whose rows are a morphism $(\mathcal{A}^{\ast}\otimes\Lambda[u],\mathcal{D}[u])\to(\mathcal{A}^{\ast},d^{\ast})$ and whose columns are $\Psi_{k}[u]$ and $\Psi_{k},$ where $\Lambda[u]$ is the free commutative graded algebra generated by the symbol $u$ of degree 2, $\mathcal{D}[u](a\otimes u^{r})=d(a)\otimes u^{r}+\beta(a)\otimes u^{r+1}$ and $\Psi_{k}[u](a\otimes u^{r})=1/k^{r}\,\Psi_{k}(a)\otimes u^{r}.$
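For completeness we note (our verification, assuming the standard mixed complex relations $d^{2}=0,$ $\beta^{2}=0,$ $d\beta+\beta d=0$) that $\mathcal{D}[u]$ is indeed a differential:
$\mathcal{D}[u]^{2}(a\otimes u^{r})=d^{2}(a)\otimes u^{r}+(\beta d+d\beta)(a)\otimes u^{r+1}+\beta^{2}(a)\otimes u^{r+2}=0.$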
For $P$ 1-connected and $(\Lambda[V],d_{V})$ a minimal model of $(\Omega^{\ast}(P),d^{\ast})$ we want to establish the existence of a homotopy commutative diagram whose corners are
$\displaystyle A=(\Omega_{X}(P^{S^{1}})\otimes\Lambda[u],\mathcal{D}[u]),\qquad B=(\Omega_{X}(P^{S^{1}}),d),$
$\displaystyle C=(\Lambda[V\oplus\overline{V}]\otimes\Lambda[u],\delta[u]),\qquad D=(\Lambda[V\oplus\overline{V}],\delta),$
in which the quasi isomorphisms $\tilde{\theta}$ (between $C$ and $A$) and $\theta$ (between $D$ and $B$) intertwine, up to homotopy, the power maps $\Psi_{k}[u],\Psi_{k}$ on $C,D$ with $\Psi^{P}_{k}[u],\Psi^{P}_{k}$ on $A,B,$ and where
$\displaystyle\mathcal{D}[u](\omega\otimes u^{r})=d(\omega)\otimes u^{r}+i^{X}(\omega)\otimes u^{r+1},$
$\displaystyle\delta[u](a\otimes u^{r})=\delta(a)\otimes u^{r}+i^{V}(a)\otimes u^{r+1}.$
The existence of the quasi isomorphism $\theta$ was established in [SV]. The existence of the quasi isomorphism $\tilde{\theta}$ and the homotopy commutativity of the top square were established in [BV], and the homotopy commutativity of the side squares was verified in [BFG]. The right side square resp. left side square in this diagram provide identifications of $HH^{\ast}(\Lambda[V],d_{V})$ with $hH^{\ast}(P)$ resp. of $CH^{\ast}(\Lambda[V],d_{V})$ with $cH^{\ast}(P).$ These identifications are compatible with all natural transformations defined above and with the endomorphisms induced by the algebraic resp. geometric power maps. In particular one derives Theorem 3 from Theorem 4. It is tedious but straightforward to derive, under the hypothesis of 1-connectivity for $P,$ the identification of the diagram Fig 6 for $P$ with the diagram Fig 7 for $(\Omega^{\ast}(P),d^{\ast}).$
## References
* [AB86] M. Atiyah and R. Bott, _The moment map and equivariant cohomology_, Topology 23 (1986), 1-28.
* [Bu] D. Burghelea, _The rational homotopy groups of Diff(M) and Homeo(M) in the stability range_, Lecture Notes in Math. 763, Springer-Verlag, 1978, 604-626.
* [B1] D. Burghelea, _The free loop space, Part 1: Algebraic topology_, Contemporary Mathematics 96 (1989), 59-85.
* [B2] D. Burghelea, _Cyclic homology and K-theory of spaces I_, Contemporary Mathematics 55 (1986), 89-115.
* [B3] D. Burghelea, _A localization theorem for functional $S^{1}$-spaces_, Math. Ann. 282 (1988), 513-527.
* [B4] D. Burghelea, _Free loop spaces, power maps and K-theory_, Contemporary Mathematics 199 (1996), 35-58.
* [BF] D. Burghelea and Z. Fiedorowicz, _Cyclic homology and K-theory of spaces II_, Topology 25 (1986), 303-317.
* [BFG] D. Burghelea, Z. Fiedorowicz and W. Gajda, _Adams operations in Hochschild and cyclic homology of the de Rham algebra and free loop spaces_, K-Theory 4 (1991), 269-287.
* [BV2] D. Burghelea and M. Vigué-Poirrier, _Cyclic homology of commutative algebras I_, Lecture Notes in Math. 1318, Springer-Verlag, 1985, 51-72.
* [VB] D. Burghelea and M. Vigué-Poirrier, _A model for cyclic homology and algebraic K-theory of spaces_, J. Differential Geom. 22 (1985), 243-253.
* [CS] M. Chas and D. Sullivan, _String topology_, arXiv:math/9911159, 1999.
* [G] T. Goodwillie, _Cyclic homology, derivations and the free loop space_, Topology 24 (1985), 187-215.
* [DGMS] P. Deligne, Ph. Griffiths, J. Morgan and D. Sullivan, _Real homotopy theory of Kähler manifolds_, Inventiones Math. 29 (1975), 245-274.
* [Ha] S. Halperin, _Lectures on minimal models_, Mém. Soc. Math. France no. 9/10, 1983.
* [H] P. Hanlon, _The action of $S_n$ on the components of the Hodge decomposition of Hochschild homology_, Michigan Math. J. 37 (1990), 105-124.
* [J] J. D. S. Jones, _Cyclic homology and equivariant cohomology_, Invent. Math. 87 (1987), 403-423.
* [JP] J. D. S. Jones and S. B. Petrack, _The fixed point theorem in equivariant cohomology_, preprint, Warwick, 1987.
* [L] D. Lehmann, _Théorie homotopique des formes différentielles_, Astérisque 45, S.M.F., 1977.
* [Lo] J. L. Loday, _Cyclic Homology_, Grundlehren der mathematischen Wissenschaften 301, Springer-Verlag, 1991.
* [VS] M. Vigué-Poirrier and D. Sullivan, _The homology theory of the closed geodesic problem_, J. Differential Geom. 11 (1976), 633-644.
|
arxiv-papers
| 2009-05-10T16:32:57 |
2024-09-04T02:49:02.514285
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Dan Burghelea",
"submitter": "Dan Burghelea",
"url": "https://arxiv.org/abs/0905.1489"
}
|
0905.1562
|
# Partial wave analysis of $J/\psi\rightarrow p\bar{p}\pi^{0}$
M. Ablikim1, J. Z. Bai1, Y. Bai1, Y. Ban11, X. Cai1, H. F. Chen16, H. S.
Chen1, H. X. Chen1, J. C. Chen1, Jin Chen1, X. D. Chen5, Y. B. Chen1, Y. P.
Chu1, Y. S. Dai18, Z. Y. Deng1, S. X. Du1a, J. Fang1, C. D. Fu1, C. S. Gao1,
Y. N. Gao14, S. D. Gu1, Y. T. Gu4, Y. N. Guo1, Z. J. Guo15b, F. A. Harris15,
K. L. He1, M. He12, Y. K. Heng1, H. M. Hu1, T. Hu1, G. S. Huang1c, X. T.
Huang12, Y. P. Huang1, X. B. Ji1, X. S. Jiang1, J. B. Jiao12, D. P. Jin1, S.
Jin1, G. Li1, H. B. Li1, J. Li1, L. Li1, R. Y. Li1, S. M. Li1, W. D. Li1, W.
G. Li1, X. L. Li1, X. N. Li1, X. Q. Li10, Y. F. Liang13, B. J. Liu1d, C. X.
Liu1, Fang Liu1, Feng Liu6, H. M. Liu1, J. P. Liu17, H. B. Liu4e, J. Liu1, Q.
Liu15, R. G. Liu1, S. Liu8, Z. A. Liu1, F. Lu1, G. R. Lu5, J. G. Lu1, C. L.
Luo9, F. C. Ma8, H. L. Ma2, Q. M. Ma1, M. Q. A. Malik1, Z. P. Mao1, X. H. Mo1,
J. Nie1, S. L. Olsen15, R. G. Ping1, N. D. Qi1, J. F. Qiu1, G. Rong1, X. D.
Ruan4, L. Y. Shan1, L. Shang1, C. P. Shen15, X. Y. Shen1, H. Y. Sheng1, H. S.
Sun1, S. S. Sun1, Y. Z. Sun1, Z. J. Sun1, X. Tang1, J. P. Tian14, G. L. Tong1,
G. S. Varner15, X. Wan1, J. X. Wang1, L. Wang1, L. L. Wang1, L. S. Wang1, P.
Wang1, P. L. Wang1, Y. F. Wang1, Z. Wang1, Z. Y. Wang1, C. L. Wei1, D. H.
Wei3, N. Wu1, X. M. Xia1, G. F. Xu1, X. P. Xu6, Y. Xu10, M. L. Yan16, H. X.
Yang1, M. Yang1, Y. X. Yang3, M. H. Ye2, Y. X. Ye16, C. X. Yu10, C. Z. Yuan1,
Y. Yuan1, Y. Zeng7, B. X. Zhang1, B. Y. Zhang1, C. C. Zhang1, D. H. Zhang1, F.
Zhang14f, H. Q. Zhang1, H. Y. Zhang1, J. W. Zhang1, J. Y. Zhang1, X. Y.
Zhang12, Y. Y. Zhang13, Z. X. Zhang11, Z. P. Zhang16, D. X. Zhao1, J. W.
Zhao1, M. G. Zhao1, P. P. Zhao1, Z. G. Zhao16, B. Zheng1, H. Q. Zheng11, J. P.
Zheng1, Z. P. Zheng1, B. Zhong9 L. Zhou1, K. J. Zhu1, Q. M. Zhu1, X. W. Zhu1,
Y. S. Zhu1, Z. A. Zhu1, Z. L. Zhu3, B. A. Zhuang1, B. S. Zou1
(BES Collaboration)
1 Institute of High Energy Physics, Beijing 100049, People’s Republic of China
2 China Center for Advanced Science and Technology(CCAST), Beijing 100080,
People’s Republic of China
3 Guangxi Normal University, Guilin 541004, People’s Republic of China
4 Guangxi University, Nanning 530004, People’s Republic of China
5 Henan Normal University, Xinxiang 453002, People’s Republic of China
6 Huazhong Normal University, Wuhan 430079, People’s Republic of China
7 Hunan University, Changsha 410082, People’s Republic of China
8 Liaoning University, Shenyang 110036, People’s Republic of China
9 Nanjing Normal University, Nanjing 210097, People’s Republic of China
10 Nankai University, Tianjin 300071, People’s Republic of China
11 Peking University, Beijing 100871, People’s Republic of China
12 Shandong University, Jinan 250100, People’s Republic of China
13 Sichuan University, Chengdu 610064, People’s Republic of China
14 Tsinghua University, Beijing 100084, People’s Republic of China
15 University of Hawaii, Honolulu, HI 96822, USA
16 University of Science and Technology of China, Hefei 230026, People’s
Republic of China
17 Wuhan University, Wuhan 430072, People’s Republic of China
18 Zhejiang University, Hangzhou 310028, People’s Republic of China
a Currently at: Zhengzhou University, Zhengzhou 450001, People’s Republic of
China
b Currently at: Johns Hopkins University, Baltimore, MD 21218, USA
c Currently at: University of Oklahoma, Norman, Oklahoma 73019, USA
d Currently at: University of Hong Kong, Pok Fu Lam Road, Hong Kong
e Currently at: Graduate University of Chinese Academy of Sciences, Beijing
100049, People’s Republic of China
f Currently at: Harbin Institute of Technology, Harbin 150001, People’s
Republic of China
###### Abstract
Using a sample of 58 million $J/\psi$ events collected with the BESII detector
at the BEPC, more than 100,000 $J/\psi\rightarrow p\bar{p}\pi^{0}$ events are
selected, and a detailed partial wave analysis is performed. The branching
fraction is determined to be $Br(J/\psi\rightarrow p\bar{p}\pi^{0})=(1.33\pm
0.02\pm 0.11)\times 10^{-3}$. A long-sought ‘missing’ $N^{*}$, first observed
in $J/\psi\rightarrow p\bar{n}\pi^{-}$, is observed in this decay too, with
mass and width of $2040_{-4}^{+3}\pm 25$ MeV/c2 and $230_{-8}^{+8}\pm 52$
MeV/c2, respectively. Its spin-parity favors $\frac{3}{2}^{+}$. The masses,
widths, and spin-parities of other $N^{*}$ states are obtained as well.
###### pacs:
13.25.Gv, 12.38.Qk, 14.20.Gk, 14.40.Cs
## I Introduction
Studies of mesons and searches for glueballs, hybrids, and multiquark states
have been active fields of research since the early days of elementary
particle physics. However, our knowledge of baryon spectroscopy has been poor
due to the complexity of the three quark system and the large number of states
expected.
As pointed out by N. Isgur isgur in 2000, nucleons are the basic building
blocks of our world and the simplest system in which the three colors of QCD
neutralize into colorless objects and the essential non-abelian character of
QCD is manifest, while baryons are sufficiently complex to reveal physics
hidden from us in the mesons. The understanding of the internal quark-gluon
structure of baryons is one of the most important tasks in both particle and
nuclear physics, and the systematic study of baryon spectroscopy, including
production and decay rates, will provide important information in
understanding the nature of QCD in the confinement domain.
In recent years, interest in baryon spectroscopy has revived. For heavy
baryons containing a charm or bottom quark, new exciting results have been
obtained since the experimental evidence for the first charmed baryon
$\Sigma_{c}^{++}$ was reported by BNL bnl in 1975 in the reaction
$\nu_{\mu}p\to\mu^{-}\Lambda\pi^{+}\pi^{-}\pi^{+}\pi^{-}$. Many charmed
baryons have been observed in recent years in CLEO, the two B-factories, the
Fermilab photo-production experiment, FOCUS, and SELEX cb1 ; cb2 ; cb3 ; cb4 ;
cb5 . Only a few baryons with beauty have been discovered so far. Earlier
results on beauty baryons were from CERN ISR and LEP bb1 experiments, while
new beauty baryons are from CDF and D0 at the Tevatron bb2 ; bb3 ; cb5 . Most
information on light-quark baryons comes from $\pi N$ or $KN$ elastic or
charge exchange scattering, but new results are being added from photo- and
electro-production experiments at JLab and the ELSA, GRAAL, SPRING8, and MAMI
experiments, as well as $J/\psi$ and $\psi(2S)$ decays at BES. However, up to
now, the available experimental information is still inadequate and our
knowledge on $N^{*}$ resonances is poor. Even for the well-established lowest
excited states, $N(1440)$, $N(1535)$, etc., their properties, such as masses,
widths, decay branching fractions, and spin-parity assignments, still have
large experimental uncertainties pdg . Another outstanding problem is that the quark model predicts a substantial number of $N^{*}$ states around 2.0 GeV/c2 scaw ; nig ; scaw2 , but some of these, the ‘missing’ $N^{*}$ states, have not been observed experimentally.
$J/\psi$ decays provide a good laboratory for studying not only excited baryon
states, but also excited hyperons, such as $\Lambda^{*}$, $\Sigma^{*}$, and
$\Xi^{*}$ states. All $N^{*}$ decay channels which are presently under
investigation in photo- and electro-production experiments can also be studied
in $J/\psi$ decays. Furthermore, for $J/\psi\to N\bar{N}\pi$ and
$N\bar{N}\pi\pi$ decays, the $N\pi(\bar{N}\pi)$ and $N\pi\pi(\bar{N}\pi\pi)$
systems are expected to be dominantly isospin 1/2 because the isospin-conserving three-gluon annihilation of the constituent c-quarks dominates over the isospin-violating decays via an intermediate photon for baryonic final states. This makes the study of $N^{*}$ resonances from $J/\psi$ decays less
complicated, compared with $\pi N$ and $\gamma N$ experiments which have
states that are a mixture of isospin 1/2 and 3/2.
$N^{*}$ production in $J/\psi\to p\bar{p}\eta$ was studied using a partial
wave analysis (PWA) with $7.8\times 10^{6}J/\psi$ BESI events plb75 . Two
$N^{*}$ resonances were observed with masses and widths of $M=1530\pm 10$ MeV,
$\Gamma=95\pm 25$ MeV and $M=1647\pm 20$MeV, $\Gamma=145^{+80}_{-45}$ MeV, and
spin-parities favoring $J^{P}=\frac{1}{2}^{-}$. In a recent analysis of
$J/\psi\rightarrow p\bar{n}\pi^{-}+c.c.$ xbji , a ‘missing’ $N^{*}$ at around
2.0 GeV/c2 named $N_{x}(2065)$ was observed, based on $5.8\times 10^{7}J/\psi$
events collected with BESII at the Beijing Electron Positron Collider (BEPC).
The mass and width for this state are determined to be $2065\pm 3_{-30}^{+15}$
MeV/c2 and $175\pm 12\pm 40$ MeV/c2, respectively, from a simple Breit-Wigner
fit. In this paper, the results of a partial wave analysis of
$J/\psi\rightarrow p\bar{p}\pi^{0}$ are presented, based on the same event
sample.
## II Detector and data samples
The upgraded Beijing Spectrometer detector is a large solid-angle magnetic spectrometer which is described in detail in Ref. bes2 . The momenta of
charged particles are determined by a 40-layer cylindrical main drift
chamber(MDC) which has a momentum resolution of
$\sigma_{p}/p=1.78\%\sqrt{1+p^{2}}$ (p in GeV/c). Particle identification is
accomplished by specific ionization ($dE/dx$) measurements in the drift
chamber and time-of-flight (TOF) information in a barrel-like array of 48
scintillation counters. The $dE/dx$ resolution is $\sigma_{dE/dx}=8.0\%$; the
TOF resolution for Bhabha events is $\sigma_{TOF}=180$ ps. A 12-radiation-
length barrel shower counter (BSC) comprised of gas tubes interleaved with
lead sheets is radially outside of the time-of-flight counters. The BSC
measures the energy and direction of photons with resolutions of
$\sigma_{E}/E\simeq 21\%/\sqrt{E}$ ($E$ in GeV), $\sigma_{\phi}=7.9$ mrad, and
$\sigma_{z}=2.3$ cm. Outside of the solenoidal coil, which provides a 0.4
Tesla magnetic field over the tracking volume, is an iron flux return that is
instrumented with three double layers of counters that identify muons of
momenta greater than 0.5 GeV/c.
In this analysis, a GEANT3-based Monte Carlo (MC) program, with detailed consideration of detector performance, is used. The consistency between data
and MC has been carefully checked in many high-purity physics channels, and
the agreement is reasonable. More details on this comparison can be found in
Ref. simbes .
## III Event selection
The decay $J/\psi\rightarrow p\bar{p}\pi^{0}$ with $\pi^{0}\to\gamma\gamma$
contains two charged tracks and two photons. The first level of event
selection for $J/\psi\to p\bar{p}\pi^{0}$ candidate events requires two
charged tracks with total charge zero. Each charged track, reconstructed using
MDC information, is required to be well fitted to a three-dimensional helix,
be in the polar angle region $|\cos\theta_{MDC}|<0.8$, and have the point of
closest approach of the track to the beam axis to be within 1.5 cm radially
and within 15 cm from the center of the interaction region along the beam
line. More than two photons per candidate event are allowed because of the
possibility of fake photons coming from interactions of the charged tracks in
the detector, from $\bar{p}$ annihilation, or from electronic noise in the
shower counter. A neutral cluster is considered to be a photon candidate when
the energy deposited in the BSC is greater than 50 MeV, the angle between the nearest charged track and the cluster is greater than $10^{\circ}$, and the angle between the cluster development direction in the BSC and the photon emission direction is less than $23^{\circ}$. Because of the large number of fake photons from
$\bar{p}$ annihilation, we further require the angle between the $\bar{p}$ and
the nearest neutral cluster be greater than $20^{\circ}$. Figures 1 (a) and
(b) show the distributions of the angles $\theta_{\gamma p}$ and
$\theta_{\gamma\bar{p}}$ between the $p$ or $\bar{p}$ and the nearest neutral
cluster for $J/\psi\rightarrow p\bar{p}\pi^{0}$ MC simulation; most of the
fake photons from $\bar{p}$ annihilation accumulate at small angles.
Figure 1: Distributions of (a) $\theta_{\gamma p}$ and (b)
$\theta_{\gamma\bar{p}}$ in $J/\psi\rightarrow p\bar{p}\pi^{0}$ MC simulation.
$\theta_{\gamma p}$ and $\theta_{\gamma\bar{p}}$ are the angles between the
$p$ or $\bar{p}$ and the nearest neutral cluster.
To identify the proton and antiproton, the combined TOF and $dE/dx$
information is used. For each charged track in an event, the particle
identification (PID) $\chi^{2}_{PID}(i)$ is determined using:
$\displaystyle\chi_{TOF}(i)=\frac{TOF_{measured}-TOF_{expected}(i)}{\sigma_{TOF}(i)}$
$\displaystyle\chi_{dE/dx}(i)=\frac{dE/dx_{measured}-dE/dx_{expected}(i)}{\sigma_{dE/dx}(i)}$
$\displaystyle\chi^{2}_{PID}(i)=\chi^{2}_{dE/dx}(i)+\chi^{2}_{TOF}(i),$
where $i$ corresponds to the particle hypothesis. A charged track is
identified as a proton if $\chi^{2}_{PID}$ for the proton hypothesis is less
than those for the $\pi$ or $K$ hypotheses. For the channel studied, one
charged track must be identified as a proton and the other as an antiproton.
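For illustration only, here is a minimal sketch of this combined TOF and $dE/dx$ selection; the function and variable names are ours and are placeholders, not part of the BESII analysis software.

```python
HYPOTHESES = ("pi", "K", "p")

def chi2_pid(tof_meas, tof_exp, sigma_tof, dedx_meas, dedx_exp, sigma_dedx):
    """Combined chi^2_PID = chi^2_dE/dx + chi^2_TOF for one particle hypothesis."""
    chi_tof = (tof_meas - tof_exp) / sigma_tof
    chi_dedx = (dedx_meas - dedx_exp) / sigma_dedx
    return chi_tof ** 2 + chi_dedx ** 2

def identify(track):
    """Return the hypothesis (pi, K or p) with the smallest chi^2_PID.

    `track` is a dict with measured TOF and dE/dx, their resolutions, and
    the expected values for each hypothesis."""
    chi2 = {h: chi2_pid(track["tof"], track["tof_exp"][h], track["sigma_tof"],
                        track["dedx"], track["dedx_exp"][h], track["sigma_dedx"])
            for h in HYPOTHESES}
    return min(chi2, key=chi2.get)

# A track is kept as a proton candidate when identify(track) == "p".
```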
The selected events are subjected to a 4-C kinematic fit under the
$J/\psi\rightarrow p\bar{p}\gamma\gamma$ hypothesis. When there are more than
two photons in a candidate event, all combinations are tried, and the
combination with the smallest 4-C fit $\chi^{2}$ is retained.
In order to reduce contamination from back-to-back decays, such as $J/\psi\rightarrow p\bar{p}$ etc., the angle between the two charged tracks, $\theta_{2chrg}$, is required to be less than $175^{\circ}$. Figures 2 (a) and (b) show
the distributions of $P^{2}_{t\gamma}$ for simulated $J/\psi\rightarrow
p\bar{p}\pi^{0}$ and $J/\psi\rightarrow\gamma p\bar{p}$ events, respectively.
Selected data events are shown in Fig. 2 (a). Here, the variable
$P^{2}_{t\gamma}$ is defined as:
$P^{2}_{t\gamma}=4|\vec{P}_{miss}|^{2}\sin^{2}\theta_{\gamma}/2$ where
$\vec{P}_{miss}$ is the missing momentum in the event determined using the two
charged particles, and $\theta_{\gamma}$ the angle between $\vec{P}_{miss}$
and the higher energy photon. By requiring $P_{t\gamma}^{2}>$ 0.003 GeV2/c2,
background from $J/\psi\rightarrow\gamma p\bar{p}$ is effectively reduced.
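A short sketch (ours) of this requirement, reading the definition as $4|\vec{P}_{miss}|^{2}\sin^{2}(\theta_{\gamma}/2)$ and taking the $J/\psi$ to be at rest, which is an approximation:

```python
import numpy as np

def pt_gamma_sq(p_proton, p_antiproton, p_gamma):
    """P^2_{t,gamma} = 4 |P_miss|^2 sin^2(theta_gamma/2), all momenta in GeV/c.

    P_miss is computed from the two charged tracks only (J/psi assumed at rest);
    theta_gamma is the angle between P_miss and the higher-energy photon."""
    p_miss = -(np.asarray(p_proton) + np.asarray(p_antiproton))
    cos_theta = np.dot(p_miss, p_gamma) / (np.linalg.norm(p_miss) * np.linalg.norm(p_gamma))
    sin2_half = 0.5 * (1.0 - cos_theta)   # sin^2(theta/2) = (1 - cos theta) / 2
    return 4.0 * np.dot(p_miss, p_miss) * sin2_half

# Event selection: keep the event if pt_gamma_sq(...) > 0.003  (GeV/c)^2.
```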
The $\gamma\gamma$ invariant mass spectrum after the above selection criteria
is shown in Fig. 3, where $\pi^{0}$ and $\eta$ signals can be seen clearly. To
select $J/\psi\to p\bar{p}\pi^{0}$ events, $|M_{\gamma\gamma}-0.135|<0.03$
GeV$/c^{2}$ is required. Figures 4 (a) and (b) show the invariant mass spectra
of $M_{p\pi^{0}}$ and $M_{\bar{p}\pi^{0}}$, respectively, and clear $N^{*}$
peaks are seen at around 1.5 GeV/c2 and 1.7 GeV/c2. The Dalitz plot of this
decay is shown in Fig. 5, and some $N^{*}$ bands are also evident. Both the
mass spectra and Dalitz plot exhibit an asymmetry for $m_{p\pi^{0}}$ and
$m_{\bar{p}\pi^{0}}$, which is mainly caused by different detection
efficiencies for the proton and antiproton. The re-normalized $M_{p\pi^{0}}$
and $M_{\bar{p}\pi^{0}}$ invariant mass spectra after efficiency corrections
are shown as the solid histogram and crosses, respectively, in Fig. 6, and the
agreement is better.
Figure 2: $P_{t\gamma}^{2}$ distributions. For (a), crosses are data, and the
histogram is MC simulation of $J/\psi\rightarrow p\bar{p}\pi^{0}$. (b)
Distribution for simulated $J/\psi\rightarrow\gamma p\bar{p}$ events.
$P^{2}_{t\gamma}=4|\vec{P}_{miss}|^{2}\sin^{2}\theta_{\gamma}/2$ where
$\vec{P}_{miss}$ is the missing momentum in the event determined using the two
charged particles, and $\theta_{\gamma}$ is the angle between $\vec{P}_{miss}$
and the higher energy photon. Figure 3: The $\gamma\gamma$ invariant mass
spectrum of $J/\psi\rightarrow p\bar{p}\gamma\gamma$ candidates.
Figure 4: The invariant mass spectra of (a) $M_{p\pi^{0}}$ and (b)
$M_{\bar{p}\pi^{0}}$ for $J/\psi\rightarrow p\bar{p}\pi^{0}$ candidate events,
where the circles with error bars are the background events estimated from
$\pi^{0}$ sideband events, and the black dots with error bars are those from
simulated $J/\psi\rightarrow p\bar{p}\pi^{0}\pi^{0}$ events passing the
selection criteria. Figure 5: Dalitz plot of $J/\psi\rightarrow
p\bar{p}\pi^{0}$ candidates. Figure 6: The re-normalized invariant mass
spectra of $M_{p\pi^{0}}$ and $M_{\bar{p}\pi^{0}}$ after correction for
detection efficiency, where the histogram is $M_{p\pi^{0}}$ and the crosses
are $M_{\bar{p}\pi^{0}}$.
Other possible $J/\psi\to p\bar{p}\pi^{0}$ backgrounds are studied using MC
simulation and data. Decay channels that have similar final states as
$J/\psi\to p\bar{p}\pi^{0}$ are simulated, and $J/\psi\to
p\bar{p}\pi^{0}\pi^{0}$ is found to be the main background channel. Surviving
$J/\psi\rightarrow p\bar{p}\pi^{0}\pi^{0}$ events, passing all requirements
described above, are plotted as black dots in Fig. 4. The invariant mass
distribution of this background can be described approximately by phase space.
The $\pi^{0}$ sideband, defined by $0.2<(M_{\gamma\gamma}-0.135)<0.2278$ GeV$/c^{2}$, is used to estimate the background from non-$\pi^{0}$ final states, such as $J/\psi\rightarrow\gamma p\bar{p}$, etc. The circles in Fig.
4 show the contribution from $\pi^{0}$ sideband events. In the partial wave
analysis, described below, two kinds of background are considered, $\pi^{0}$
sideband background and a non-interfering phase space background to account
for the background from $J/\psi\rightarrow p\bar{p}\pi^{0}\pi^{0}$.
## IV Partial wave analysis
A partial wave analysis (PWA) is performed to study the $N^{*}$ states in this
decay. The sequential decay process can be described by
$J/\psi\to\bar{p}N^{*}(p\bar{N}^{*})$, $N^{*}(\bar{N}^{*})\to
p\pi^{0}(\bar{p}\pi^{0})$. The amplitudes are constructed using the
relativistic covariant tensor amplitude formalism wrjs ; whl , and the maximum
likelihood method is used in the fit.
### IV.1 Introduction to PWA
The basic procedure for the partial wave analysis is the standard maximum
likelihood method:
(1) Construct the amplitude $A_{j}$ for the $j$-th possible partial wave in
$J/\psi\rightarrow p\bar{N}_{X},\bar{N}_{X}\rightarrow\bar{p}\pi^{0}$ or
$J/\psi\rightarrow\bar{p}N_{X},N_{X}\rightarrow p\pi^{0}$ as:
$\displaystyle A_{j}=A_{prod-X}^{j}(BW)_{X}A_{decay-X},$ (1)
where $A_{prod-X}^{j}$ is the amplitude which describes the production of the
intermediate resonance $N_{X}$, $BW_{X}$ is the Breit-Wigner propagator of
$N_{X}$, and $A_{decay-X}$ is the decay amplitude of $N_{X}$. The
corresponding term for the $\bar{N}_{X}$ is obtained by charge conjugation
with a negative sign due to negative C-parity of $J/\psi$.
(2) The total transition probability, $\omega$, for each event is obtained
from the linear combination of these partial wave amplitudes $A_{j}$ as
$\omega=|\Sigma_{j}c_{j}A_{j}|^{2}$, where the $c_{j}$ parameters are to be
determined by fitting the data.
(3) The differential cross section is given by:
$\displaystyle\frac{d\sigma}{d\Phi}=|\Sigma_{j}c_{j}A_{j}|^{2}+F_{bg},$ (2)
where, $F_{bg}$ is the background function, which includes $\pi^{0}$ sideband
background and non-interfering phase space background.
(4) Maximize the following log likelihood function $\ln\mathcal{L}$ to obtain the $c_{j}$ parameters, as well as the masses and widths of the resonances:
$\displaystyle\ln{\cal L}=\sum\limits_{k=1}^{n}\ln\frac{\omega(\xi_{k})}{\int d\xi\,\omega(\xi)\epsilon(\xi)},$ (3)
where $\xi_{k}$ is the energy-momentum of the final state of the $k$-th observed event, $\omega(\xi)$ is the probability to generate the combination $\xi$, and $\epsilon(\xi)$ is the detection efficiency for the combination $\xi$.
As is usually done, rather than maximizing $\mathcal{L}$, $\mathcal{S}=-\ln\mathcal{L}$ is minimized.
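The following is a minimal sketch (ours, not the BES fitting code) of how Eq. (3) is typically evaluated in practice: the normalization integral $\int d\xi\,\omega(\xi)\epsilon(\xi)$ is approximated by an average of $\omega$ over accepted phase-space Monte Carlo events, and, as described later in the text, $\pi^{0}$-sideband events enter with negative weights; the non-interfering phase-space background term $F_{bg}$ is omitted here for brevity.

```python
import numpy as np

def neg_log_likelihood(c, amps_data, amps_mc, weights_data):
    """S = -ln L for one set of complex production coefficients c_j.

    amps_data    : complex array (n_data, n_waves), A_j evaluated on data events
    amps_mc      : complex array (n_mc, n_waves), A_j evaluated on accepted
                   phase-space MC, so mean(|sum_j c_j A_j|^2) approximates
                   the normalization integral (up to an irrelevant constant)
    weights_data : +1 for candidate events, -1 for pi0-sideband events
    """
    omega_data = np.abs(amps_data @ c) ** 2      # |sum_j c_j A_j(xi_k)|^2
    norm = np.mean(np.abs(amps_mc @ c) ** 2)
    return -np.sum(weights_data * np.log(omega_data / norm))
```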
For the construction of partial wave amplitudes, we assume the effective
Lagrangian approach mbnc ; mgoet with the Rarita-Schwinger formalism wrjs ;
cfncs ; suc3 ; suc2 . In this approach, there are three basic elements for
constructing amplitudes: the spin wave functions for particles, the
propagators, and the effective vertex couplings. The amplitude can then be
written out by Feynman rules for tree diagrams.
For example, for
$J/\psi\rightarrow\bar{N}N^{*}(\frac{3}{2}^{+})\rightarrow\bar{N}(\kappa_{1},s_{1})N(\kappa_{2},s_{2})\pi(\kappa_{3})$,
the amplitude can be constructed as:
$A_{\frac{3}{2}^{+}}=\bar{u}(\kappa_{2},s_{2})\kappa_{2\mu}P_{3/2}^{\mu\nu}(c_{1}g_{\nu\lambda}+c_{2}\kappa_{1\nu}\gamma_{\lambda}+c_{3}\kappa_{1\nu}\kappa_{1\lambda})\gamma_{5}\upsilon(\kappa_{1},s_{1})\psi^{\lambda},$ (4)
where $u(\kappa_{2},s_{2})$ and $\upsilon(\kappa_{1},s_{1})$ are
$\frac{1}{2}$-spinor wave functions for $N$ and $\bar{N}$, respectively;
$\psi^{\lambda}$ is the spin-1 wave function, $i.e.$, the polarization vector
for $J/\psi$. The $c_{1}$, $c_{2}$, and $c_{3}$ terms correspond to three
possible couplings for the $J/\psi\rightarrow\bar{N}N^{*}(\frac{3}{2}^{+})$
vertex. They can be taken as constant parameters or as smoothly varying vertex
form factors. The spin $\frac{3}{2}^{+}$ propagator $P_{3/2+}^{\mu\nu}$ for
$N^{*}(\frac{3}{2}^{+})$ is:
$P_{3/2+}^{\mu\nu}=\frac{\gamma\cdot p+M_{N^{*}}}{M_{N^{*}}^{2}-p^{2}+iM_{N^{*}}\Gamma_{N^{*}}}\left[g^{\mu\nu}-\frac{1}{3}\gamma^{\mu}\gamma^{\nu}-\frac{2p^{\mu}p^{\nu}}{3M_{N^{*}}^{2}}+\frac{p^{\mu}\gamma^{\nu}-p^{\nu}\gamma^{\mu}}{3M_{N^{*}}}\right],$ (5)
with $p=\kappa_{2}+\kappa_{3}$. Other partial wave amplitudes can be
constructed similarly wrjs ; whl .
The possible intermediate resonances are listed in Table 1. Of these states,
only a few are (well) established states, while $N_{x}(1885)$ is one of the
‘missing’ $N^{*}$ states predicted by the quark model and not yet
experimentally observed. $N_{x}(2065)$ is also a long-sought ‘missing’
$N^{*}$, which was observed recently by BES xbji .
For the lowest lying $N^{*}$ states, $N(1440)$, $N(1520)$, and $N(1535)$, Breit-Wigner propagators with phase-space-dependent widths are used:
$BW_{X}(s)=\frac{m\Gamma(s)}{s-m^{2}+im\Gamma(s)},$ (6)
where $s$ is the invariant mass-squared. The phase space dependent widths can
be written as tpv181 :
$\displaystyle\Gamma_{N(1440)}(s)=\Gamma_{N(1440)}\left(0.7\frac{B_{1}(q_{\pi N})\rho_{\pi N}(s)}{B_{1}(q_{\pi N}^{N^{*}})\rho_{\pi N}(M_{N^{*}}^{2})}+0.3\frac{B_{1}(q_{\pi\Delta})\rho_{\pi\Delta}(s)}{B_{1}(q_{\pi\Delta}^{N^{*}})\rho_{\pi\Delta}(M_{N^{*}}^{2})}\right),$ (7)
$\displaystyle\Gamma_{N(1520)}(s)=\Gamma_{N(1520)}\frac{B_{2}(q_{\pi N})\rho_{\pi N}(s)}{B_{2}(q_{\pi N}^{N^{*}})\rho_{\pi N}(M_{N^{*}}^{2})},$ (8)
$\displaystyle\Gamma_{N(1535)}(s)=\Gamma_{N(1535)}\left(0.5\frac{\rho_{\pi N}(s)}{\rho_{\pi N}(M_{N^{*}}^{2})}+0.5\frac{\rho_{\eta N}(s)}{\rho_{\eta N}(M_{N^{*}}^{2})}\right),$ (9)
where $B_{l}(q)$ ($l=1,2$) is the standard Blatt-Weisskopf barrier factor suc3
; suc2 for the decay with orbital angular momentum $L$ and $\rho_{\pi N}(s)$,
$\rho_{\pi\Delta}(s)$, and $\rho_{\eta N}(s)$ are the phase space factors for
$\pi N$, $\pi\Delta$, and $\eta N$ final states, respectively.
$\rho_{XY}(s)=\frac{2q_{XY}(s)}{\sqrt{s}},$ (10)
$\displaystyle
q_{XY}(s)=\frac{\sqrt{(s-(M_{Y}+M_{X})^{2})(s-(M_{Y}-M_{X})^{2})}}{(2\sqrt{s})},$
(11)
where $X$ is $\pi$ or $\eta$, $Y$ is $N$ or $\Delta$, and $q_{XY}(s)$ is the momentum of $X$ or $Y$ in the center-of-mass (CMS) system of $XY$. For other resonances, constant-width Breit-Wigner propagators are used.
As described in Ref. liangwh , the form factors are introduced to take into
account the nuclear structure. We have tried different form factors, given in
Ref. liangwh , in the analysis and find that for $J=\frac{1}{2}$ resonances,
the form factor preferred in fitting is
$F_{N}(s_{\pi N})=\frac{\Lambda_{N}^{4}}{\Lambda_{N}^{4}+(s_{\pi
N}-m_{N^{*}}^{2})^{2}},$ (12)
where $s_{\pi N}$ is the invariant mass squared of the $\pi N$ system, and for $J=\frac{3}{2}$ or $\frac{5}{2}$ states, the preferred form factor is
$F_{N}(s_{\pi N})=e^{\frac{-|s_{\pi N}-m_{N^{*}}^{2}|}{\Lambda^{2}}}.$ (13)
Therefore, the above form factors are used in this analysis.
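As a numerical illustration (our sketch, not the analysis code), the propagator of Eq. (6) with the running width of Eq. (9) and the form factors of Eqs. (12) and (13) could be coded as below; only the $N(1535)$ width is implemented because, unlike Eqs. (7) and (8), it involves no Blatt-Weisskopf factors, and the particle masses are nominal values.

```python
import numpy as np

M_N, M_PI, M_ETA = 0.938, 0.135, 0.548   # GeV/c^2, nominal masses

def q(s, m_x, m_y):
    """Breakup momentum of Eq. (11); clipped to zero below threshold."""
    arg = (s - (m_y + m_x) ** 2) * (s - (m_y - m_x) ** 2)
    return np.sqrt(np.maximum(arg, 0.0)) / (2.0 * np.sqrt(s))

def rho(s, m_x, m_y):
    """Two-body phase space factor of Eq. (10)."""
    return 2.0 * q(s, m_x, m_y) / np.sqrt(s)

def gamma_n1535(s, m0=1.535, gamma0=0.150):
    """Phase-space-dependent width of Eq. (9): equal pi N and eta N shares."""
    return gamma0 * (0.5 * rho(s, M_PI, M_N) / rho(m0 ** 2, M_PI, M_N)
                     + 0.5 * rho(s, M_ETA, M_N) / rho(m0 ** 2, M_ETA, M_N))

def breit_wigner(s, m0=1.535):
    """Breit-Wigner propagator of Eq. (6) with the running width."""
    g = gamma_n1535(s, m0)
    return m0 * g / (s - m0 ** 2 + 1j * m0 * g)

def form_factor_spin_half(s_piN, m_star, lam):
    """Form factor of Eq. (12), used for J = 1/2 resonances."""
    return lam ** 4 / (lam ** 4 + (s_piN - m_star ** 2) ** 2)

def form_factor_high_spin(s_piN, m_star, lam):
    """Form factor of Eq. (13), used for J = 3/2 and 5/2 resonances."""
    return np.exp(-np.abs(s_piN - m_star ** 2) / lam ** 2)
```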
In the log likelihood calculation, $\pi^{0}$ sideband background events are
given negative weights; the sideband events then cancel background in the
selected candidate sample. The $J/\psi\to p\bar{p}\pi^{0}\pi^{0}$ background
is described by a non-interfering phase space term, and the amount of this
background is floated in the fit.
Table 1: Resonances considered in the PWA analysis.
Resonance | Mass(MeV) | Width(MeV) | $J^{P}$ | C.L.
---|---|---|---|---
$N(940)$ | 940 | 0 | $\frac{1}{2}^{+}$ | off-shell
$N(1440)$ | 1440 | 350 | $\frac{1}{2}^{+}$ | ****
$N(1520)$ | 1520 | 125 | $\frac{3}{2}^{-}$ | ****
$N(1535)$ | 1535 | 150 | $\frac{1}{2}^{-}$ | ****
$N(1650)$ | 1650 | 150 | $\frac{1}{2}^{-}$ | ****
$N(1675)$ | 1675 | 145 | $\frac{5}{2}^{-}$ | ****
$N(1680)$ | 1680 | 130 | $\frac{5}{2}^{+}$ | ****
$N(1700)$ | 1700 | 100 | $\frac{3}{2}^{-}$ | ***
$N(1710)$ | 1710 | 100 | $\frac{1}{2}^{+}$ | ***
$N(1720)$ | 1720 | 150 | $\frac{3}{2}^{+}$ | ****
$N_{x}(1885)$ | 1885 | 160 | $\frac{3}{2}^{-}$ | ‘missing’ $N^{*}$
$N(1900)$ | 1900 | 498 | $\frac{3}{2}^{+}$ | **
$N(2000)$ | 2000 | 300 | $\frac{5}{2}^{+}$ | **
$N_{x}(2065)$ | 2065 | 150 | $\frac{3}{2}^{+}$ | ‘missing’ $N^{*}$
$N(2080)$ | 2080 | 270 | $\frac{3}{2}^{-}$ | **
$N(2090)$ | 2090 | 300 | $\frac{1}{2}^{-}$ | *
$N(2100)$ | 2100 | 260 | $\frac{1}{2}^{+}$ | *
Note: **** Existence is certain, and properties are at least fairly well explored.
*** Existence ranges from very likely to certain, but further confirmation is desirable and/or quantum numbers, branching fractions, etc. are not well determined.
** Evidence of existence is only fair.
* Evidence of existence is poor.
### IV.2 PWA results
Well established states, such as $N(1440)$, $N(1520)$, $N(1535)$, $N(1650)$,
$N(1675)$, $N(1680)$ are included in this partial wave analysis. According to
the framework of soft $\pi$ meson theory larfd , the off-shell decay process
is also needed in this decay, and therefore $N(940)$ ($M=940$ MeV/c2, $\Gamma$
= 0.0 MeV/c2) is also included. Fig. 7 shows the Feynman diagram for this
process.
Figure 7: Feynman diagrams of $J/\psi\rightarrow p\bar{p}\pi^{0}$ for the off-
shell decay process.
#### IV.2.1 Resonances in the 1.7 GeV/c2 mass region
In the $M=1.7$ GeV/c2 mass region, three resonances
$N(1700)$($\frac{3}{2}^{-}$), $N(1710)$($\frac{1}{2}^{+}$), and
$N(1720)$($\frac{3}{2}^{+}$) pdg are supposed to decay into
$p\pi(\bar{p}\pi)$ final states. According to the Particle Data Group (PDG08)
pdg , only $N(1720)$ is a well established state. We now study whether these
three states are needed in $J/\psi\to p\bar{p}\pi^{0}$. This is investigated
for two cases, first assuming no $N^{*}$ states in the high mass region ($>$
1.8 GeV/c2), and second assuming $N_{x}(2065)$, $N(2080)$, and $N(2100)$
states in the high mass region. With no $N^{*}$ states in the $M>$ 1.8 GeV/c2
mass region, the PWA shows that the significances of $N(1700)$ and $N(1720)$
are 3.2$\sigma$ ($\Delta S=11$) and 0.8$\sigma$ ($\Delta S=3$), and their
fractions are 0.3% and 6%, respectively; only $N(1710)$ is significant. When
$N_{x}(2065)$, $N(2080)$, and $N(2100)$ are included, the $N(1710)$ makes the
log likelihood value $S$ better by 65, which corresponds to a significance
much larger than 5$\sigma$. However, neither the $N(1700)$ nor the $N(1720)$
is significant. We conclude that the $N(1710)$ should be included in the PWA.
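To make the conversion between $\Delta S$ and a quoted significance concrete, a rough sketch (ours) is given below; by Wilks' theorem $2\Delta S$ is asymptotically $\chi^{2}$ distributed, but the effective number of degrees of freedom depends on how many couplings (and possibly the mass and width) are released for the added resonance, so `ndf` is an assumption rather than the collaboration's exact prescription.

```python
from scipy.stats import chi2, norm

def significance(delta_s, ndf):
    """Gaussian significance of a change delta_s in S = -ln(L),
    treating 2*delta_s as chi^2 distributed with ndf degrees of freedom."""
    p_value = chi2.sf(2.0 * delta_s, ndf)
    return norm.isf(p_value)   # one-sided z-score; inf if p underflows

# Example: significance(11, 4) is about 3.5 sigma, while significance(65, 4)
# is far above 5 sigma.
```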
#### IV.2.2 $N_{x}(2065)$
The $N_{x}(2065)$, a long-sought ‘missing’ $N^{*}$ predicted by the quark
model, was observed in $J/\psi\rightarrow p\bar{n}\pi^{-}+c.c.$ xbji with a
mass of $2065\pm 3_{-30}^{+15}$ MeV/c2 and a width of $175\pm 12\pm 40$
MeV/c2, determined from a simple Breit-Wigner fit. We investigate the need for
the $N_{x}(2065)$ in $J/\psi\to p\bar{p}\pi^{0}$. Including the $N(1440)$,
$N(1520)$, $N(1535)$, $N(1650)$, $N(1675)$, $N(1680)$, $N(1710)$ and the off-
shell decay in the PWA fit, different $N_{x}(2065)$ spin-parities ($J^{P}$)
and different combinations of high mass resonances are tried. If there are no other resonances in the high mass region, the log likelihood value improves by 288, which corresponds to a significance of greater than $5\sigma$, when a $\frac{3}{2}^{+}$ $N_{x}(2065)$ is added. Thus, the $N_{x}(2065)$ is definitely needed in this case, and its mass and width are optimized to be $M=2057_{-6}^{+4}$ MeV/c2 and $\Gamma=220_{-12}^{+11}$ MeV/c2.
The significance and spin-parity of $N_{x}(2065)$ are further checked under the following four hypotheses (A, B, C, and D) for the high mass resonances. Case A has $N(2080)$ and $N(2100)$ included, case B $N(2080)$ and $N(2000)$, case C $N(2000)$, $N(2080)$, and $N(2100)$, and case D $N(2080)$, $N(2090)$, and $N(2100)$. The changes of the log likelihood values ($\Delta S$), the corresponding significances, and the fractions of $N_{x}(2065)$ when a $\frac{3}{2}^{+}$ $N_{x}(2065)$ is added in the four cases are listed in Table 2. The log likelihood values become better by 58 to 126 when $N_{x}(2065)$ is included. Therefore, $N_{x}(2065)$ is needed in all cases. The differences of log likelihood values for different $N_{x}(2065)$ $J^{P}$ assignments for the four combinations are listed in Table 3. The assignment of $J^{P}=\frac{3}{2}^{+}$ gives the best log likelihood value except for the cases where there is large interference. A spin-parity of $\frac{3}{2}^{+}$ is favored for $N_{x}(2065)$.
Table 2: Changes of log likelihood values ($\Delta S$), the corresponding significances, and the fractions of $N_{x}(2065)$, when $N_{x}(2065)$ is added in the four cases.
Case | $\Delta S$ | significance | fraction (%)
---|---|---|---
A | 126 | $\gg$ 5$\sigma$ | 23
B | 158 | $\gg$ 5$\sigma$ | 24
C | 79 | $\gg$ 5$\sigma$ | 16
D | 58 | $\gg$ 5$\sigma$ | 22
Table 3: Comparison of log likelihood values for different $J^{P}$ assignments for $N_{x}(2065)$.
$J^{P}$ | $\frac{1}{2}^{+}$ | $\frac{1}{2}^{-}$ | $\frac{3}{2}^{+}$ | $\frac{3}{2}^{-}$ | $\frac{5}{2}^{+}$ | $\frac{5}{2}^{-}$
---|---|---|---|---|---|---
A | 85.8 | 49.3 | 0.0 | -32.2 (1) | -36.9 (2) | 34.1
B | 5.0 | 68.5 | 0.0 | 54.3 | -12.1 (3) | 6.3
C | 98.1 | 39.8 | 0.0 | 85.6 | 76.1 | 14.4
D | 44.2 | 45.2 | 0.0 | 25.0 | 36.2 | 38.0
(1) 780% interference between $N_{x}(2065)$ and $N(2080)$. (2) 529% interference between $N(1680)$ and $N(2000)$. (3) 860% interference between $N(1680)$ and $N(2000)$.
#### IV.2.3 Other resonances in high mass region
In addition to the observed resonances, $N(2000)$, $N(2080)$, $N(2090)$ and
$N(2100)$, as well as the $N_{x}(2065)$, there is another possible ‘missing’
$N^{*}$ state, $N_{x}(1885)$, which is predicted by theory but not yet
observed.
a) $N_{x}(1885)$
In the $p(\bar{p})\pi^{0}$ invariant mass spectrum, shown in Fig. 4, no
obvious peak is seen near 1.89 GeV/c2. We study whether this state is needed
in the partial wave analysis for the four cases. The significances are
$1.3\sigma$ ($\Delta S=3.0$), $3.2\sigma$ ($\Delta S=8.8$), $3.4\sigma$
($\Delta S=9.7$), and greater than $5\sigma$ ($\Delta S=28.0$) in cases A, B,
C, and D, respectively, when a $N_{x}(1885)$ is included. Thus, the
statistical significance is larger than 5$\sigma$ only in case D. In our final
fit, $N_{x}(1885)$ is not included. However, the difference between including and
not including it will be taken as a systematic error.
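The quoted significances can be reproduced approximately from the $\Delta S$ values via Wilks' theorem. The sketch below is our own illustration rather than part of the BES analysis: it assumes that adding a resonance introduces four extra free parameters and that the resulting $p$-value is converted to a two-sided Gaussian significance. Both conventions are our assumptions, but with them the numbers quoted above for cases A, B, and C are recovered.

```python
# Illustrative only: convert a log-likelihood improvement Delta S into a Gaussian
# significance via Wilks' theorem. The choice of ndf = 4 extra parameters per
# added resonance and the two-sided conversion are assumptions, not BES choices.
from scipy.stats import chi2, norm

def significance(delta_s, ndf=4):
    p = chi2.sf(2.0 * delta_s, ndf)   # p-value of the likelihood-ratio test
    return norm.isf(p / 2.0)          # two-sided Gaussian equivalent

for ds in (3.0, 8.8, 9.7, 28.0):
    print(f"Delta S = {ds:5.1f}  ->  {significance(ds):.1f} sigma")
# prints about 1.3, 3.2, 3.4 and 6.7 sigma, in line with the 1.3, 3.2, 3.4 and
# >5 sigma quoted above for cases A-D
```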
b) $N(2000)$, $N(2080)$, $N(2090)$, and $N(2100)$
We next study whether $N(2000)$, $N(2080)$, $N(2090)$ and $N(2100)$ are all
significant in the decay. First, we add $N(2000)$, $N(2080)$, $N(2090)$, and
$N(2100)$ one at a time with $N(940)$, $N(1440)$, $N(1520)$, $N(1535)$,
$N(1650)$, $N(1675)$, $N(1680)$, $N(1710)$, and $N_{x}(2065)$ already
included. The log likelihood values improve by 28, 137, 69, and 73,
respectively, which indicates that the $N(2080)$ is the most significant; all of the
significances are larger than $5\sigma$. Second, we include $N_{x}(2065)$ and
$N(2080)$ in the high mass region and add the other three states $N(2000)$,
$N(2090)$, and $N(2100)$ one at a time. The significances of the $N(2100)$
($\Delta S=38$) and $N(2090)$ ($\Delta S=30$) are much larger than $5\sigma$,
while that of the $N(2000)$ is $3.9\sigma$ ($\Delta S=14$). Third, we include
$N_{x}(2065)$, $N(2080)$, and $N(2100)$ in the high mass region and test
whether $N(2000)$ and $N(2090)$ are needed again. When they are included, the
significances are larger than $5\sigma$ ($\Delta S=23$) for the $N(2000)$ and
$2.7\sigma$ ($\Delta S=7$) for the $N(2090)$.
Due to the complexity of the high mass $N^{*}$ states and the limitation of
our data, we are not able to draw firm conclusions on the high mass region. In
the final fit, we include $N_{x}(2065)$, $N(2080)$, and $N(2100)$ and take the
differences between fits with and without $N(2000)$ and $N(2090)$ as systematic errors.
#### IV.2.4 The best results up to now
We summarize the results we have so far:
(1) For the three resonances in the $M=1.7$ GeV/c2 mass region ($N(1700)$,
$N(1710)$, and $N(1720)$), only $N(1710)$ is significant.
(2) The $N_{x}(2065)$ is definitely needed in all cases, and its spin-parity
favors $\frac{3}{2}^{+}$.
(3) $N_{x}(1885)$ is not significant and therefore is not included in the
final fit.
(4) For other resonances in the high mass region, $N(2080)$ and $N(2100)$ are
both needed in all cases tried, but the other two states $N(2000)$ and
$N(2090)$ are not very significant and so are not included in the final fit.
Therefore, we consider $N(940)$, $N(1440)$, $N(1520)$, $N(1535)$, $N(1650)$,
$N(1675)$, $N(1680)$, $N(1710)$, $N_{x}(2065)$, $N(2080)$, and $N(2100)$ in
the fit.
Table 4 lists the optimized masses and widths for some $N^{*}$ resonances; the
others are fixed to those from PDG08. Here, only statistical errors are
indicated. The fractions of these states are also listed.
The $M_{p\pi^{0}}$ and $M_{\bar{p}\pi^{0}}$ invariant mass spectra and the
angular distributions after the optimization are shown in Figs. 8 (a) and (b)
and Fig. 9, respectively. In Figs. 8 and 9, the crosses are data and the
histograms are the PWA fit projections. The PWA fit describes the data
reasonably well.
Table 4: Optimized masses and widths, as well as fractions. Errors shown are statistical only. Resonance | Mass (MeV/c2) | Width (MeV/c2) | $J^{P}$ | Fraction ($\%$)
---|---|---|---|---
$N(1440)$ | $1455_{-7}^{+2}$ | $316_{-6}^{+5}$ | $\frac{1}{2}^{+}$ | 16.37
$N(1520)$ | $1513_{-4}^{+3}$ | $127_{-8}^{+7}$ | $\frac{3}{2}^{-}$ | 7.96
$N(1535)$ | $1537_{-6}^{+2}$ | $135_{-8}^{+8}$ | $\frac{1}{2}^{-}$ | 7.58
$N(1650)$ | $1650_{-6}^{+3}$ | $145_{-10}^{+5}$ | $\frac{1}{2}^{-}$ | 9.06
$N(1710)$ | $1715_{-2}^{+2}$ | $95_{-1}^{+2}$ | $\frac{1}{2}^{+}$ | 25.33
$N_{x}(2065)$ | $2040_{-4}^{+3}$ | $230_{-8}^{+8}$ | $\frac{3}{2}^{+}$ | 23.39
Figure 8: The $p\pi^{0}$ and $\bar{p}\pi^{0}$ invariant mass spectra after
optimization of masses and widths. Plot (a) is $M_{p\pi^{0}}$, and plot (b) is
$M_{\bar{p}\pi^{0}}$, where the crosses are data and histograms are fit
results.
Figure 9: Distributions of (a) the cosine of the sum of the $p$ and $\bar{p}$
momenta, (b) cosine of the momentum of the ${p\pi^{0}}$ system in the
$p\bar{p}$ CMS, (c) cosine of the momentum of the $p\bar{p}$ system in the
$p\pi^{0}$ CMS, and (d) cosine of the momentum of $p\bar{p}$ in the
$\bar{p}\pi^{0}$ CMS. The crosses are data and histograms are the fit results.
#### IV.2.5 $N_{x}(1885)$ significance with optimized $N^{*}$ states
In the analysis above, the $N_{x}(1885)$ was not found to be significant. Here
its significance is redetermined using the optimized masses and widths for the
$N^{*}$’s, and it is still only 1.2$\sigma$ ($\Delta S=2.7$). Therefore, we
have the same conclusion: the $N_{x}(1885)$ is not needed.
#### IV.2.6 $N(1900)$
In PDG08 pdg , there is an $N(1900)$ ($\frac{3}{2}^{+}$) state near the
$N_{x}(2065)$. Our previous results show that if there is only one
$\frac{3}{2}^{+}$ state in this region, the mass and width are optimized at
$M=2057_{-6}^{+4}$ MeV/c2 and $\Gamma=220_{-12}^{+11}$ MeV/c2, which are
consistent with those of $N_{x}(2065)$. If $N(1900)$ is also included in this
analysis, i.e. there are two $\frac{3}{2}^{+}$ states in this region, we find
that the second $\frac{3}{2}^{+}$ state also has a statistical significance
much larger than 5$\sigma$ ($\Delta S=49$). However, the interference between
$N(1900)$ and $N_{x}(2065)$ is about 80%. This analysis does not exclude the
possibility that there are two $\frac{3}{2}^{+}$ states in this region.
#### IV.2.7 Search for additional $N^{*}$ and $\Delta^{*}$ resonances
Besides the contributions from the well-established $N^{*}$ resonances, there
could be smaller contributions from other $N^{*}$ resonances and even
$\Delta^{*}$ resonances from isospin violating virtual photon production.
What might be expected for the isospin violating decay? For the
$J/\psi\rightarrow p\bar{p}$ decay, the isospin violating fraction can be
estimated using the PDG $J/\psi$ leptonic branching fraction and the proton
electromagnetic form factor $F_{p}(q^{2})$ baub to be
$B(J/\psi\rightarrow\gamma^{*}\rightarrow p\bar{p})=B(J/\psi\rightarrow
l^{+}l^{-})\times|F_{p}(M_{J/\psi}^{2})|^{2}=2.4\times 10^{-5}$. The total
$J/\psi\rightarrow p\bar{p}$ branching fraction is $2.2\times 10^{-3}$ pdg .
This means that the fraction of $J/\psi\rightarrow p\bar{p}$ decays that proceed
through a virtual photon, $J/\psi\rightarrow\gamma^{*}\rightarrow p\bar{p}$, is close to 1.1%.
For the non-strange channel, the ratio of photon couplings to isospin 1 and
isospin 0 is 9:1, so the isospin violating part is about 1% for this channel.
For the $J/\psi\rightarrow p\bar{p}\pi^{0}$ decay, one would expect a similar
isospin violating fraction.
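As a check of the arithmetic in this estimate, the following short sketch (added here for illustration, using only the numbers quoted in the text) reproduces the $\sim$1% isospin-violating fraction:

```python
# Worked arithmetic for the isospin-violation estimate quoted above.
B_via_photon = 2.4e-5   # B(J/psi -> gamma* -> p pbar) estimated from F_p(q^2)
B_ppbar      = 2.2e-3   # total B(J/psi -> p pbar) from PDG08
frac_photon  = B_via_photon / B_ppbar
print(f"virtual-photon fraction: {frac_photon:.1%}")        # ~1.1%

# The photon couples to isospin 1 and isospin 0 in the ratio 9:1 for the
# non-strange channel, so the isospin-violating part is ~9/10 of the above.
print(f"isospin-violating part:  {0.9 * frac_photon:.1%}")  # ~1.0%
```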
If we add an extra state with different possible spin-parities
($J^{P}=\frac{1}{2}^{\pm},\frac{3}{2}^{\pm},\frac{5}{2}^{\pm}$) in the large
mass (1.65 GeV/c2 to 1.95 GeV/c2) region with widths from 0.05 GeV/c2 to 0.20
GeV/c2 and re-optimize, we find that no additional $N^{*}$’s or $\Delta^{*}$’s
with a statistical significance greater than 5$\sigma$ are required.
#### IV.2.8 Search for $\rho(2150)$
A resonance with mass 2149 MeV/c2 and $J^{P}=1^{-}$ is listed in PDG08 pdg
with the decay $\rho(2150)\rightarrow p\bar{p}$. Here, we test whether there
is evidence for this decay in our sample. The significance of this resonance
is less than 3$\sigma$ when we vary the width of this state in the fit from
200 to 660 MeV/c2. Therefore, our data do not require this state. Figure 10
shows the $p\bar{p}$ invariant mass spectrum, and there is no clear structure
near 2149 MeV/c2.
Figure 10: The $p\bar{p}$ invariant mass spectrum. Crosses are data and the
histogram is the PWA fit after the optimization of masses and widths.
## V Branching fraction of $J/\psi\to p\bar{p}\pi^{0}$
The branching fraction of $J/\psi\to p\bar{p}\pi^{0}$ is obtained by fitting
the $\pi^{0}$ signal (see Fig. 3) with a $\pi^{0}$ shape obtained from
$J/\psi\rightarrow p\bar{p}\pi^{0}$ MC simulation and a polynomial background.
The numbers of fitted signal and background events are 11,166 and 691,
respectively. The efficiency of $J/\psi\to p\bar{p}\pi^{0}$ is determined to
be 13.77% by MC simulation with all intermediate $N^{*}$ states being
included. Figures 11 (a) and (b) show the $p$ and $\bar{p}$ momentum
distributions, where the histograms are MC simulation of $J/\psi\rightarrow
p\bar{p}\pi^{0}$ using the $J^{P}$’s and fractions of $N^{*}$ states obtained
from our partial wave analysis, and the crosses are data. There is good
agreement between data and MC simulation.
The branching fraction is determined to be:
$Br(J/\psi\rightarrow p\bar{p}\pi^{0})=(1.33\pm 0.02~{}(stat.))\times
10^{-3},$ (14)
which is higher than that in PDG08 pdg ($(1.09\pm 0.09)\times 10^{-3}$).
Figure 11: Momentum distributions of $p$ and $\bar{p}$ in $J/\psi\rightarrow
p\bar{p}\pi^{0}$, where histograms are $J/\psi\rightarrow p\bar{p}\pi^{0}$ MC
simulation using the spin parities and fractions of $N^{*}$ states obtained
from our partial wave analysis and crosses are data.
## VI Systematic errors
The systematic errors for the masses and widths of $N^{*}$ states mainly
originate from the difference between data and MC simulation, the influence of
the interference between $N(2100)$ and other states, uncertainty of the
background, the form-factors, and the influence of high mass states, as well
as the differences when small components are included or not.
(1) Two different MDC wire resolution simulation models are used to estimate
the systematic error from the data/MC difference.
(2) In this analysis, the interference between $N(2100)$($\frac{1}{2}^{+}$)
and the low mass region states such as $N(940)$($\frac{1}{2}^{+}$) and
$N(1440)$($\frac{1}{2}^{+}$) can be very large, even larger than 50%. We fix
the fraction of $N(2100)$ to be less than 10% to reduce the interference and
then compare its impact on other resonances. The biggest differences for the
masses, widths, and fractions of the other resonances between fixing the
fraction of $N(2100)$ and floating its fraction are considered as systematic
errors.
(3) Two kinds of backgrounds are considered in the partial wave analysis,
$\pi^{0}$ sideband and non-interfering phase space. We increase the number of
background events by 10%, and take the changes of the optimized masses and
widths as systematic errors.
(4) Equations (12) and (13) are the form factors used in this analysis, where
$\Lambda$ is 2.0 for $N^{*}$ states with $J^{P}=\frac{1}{2}$ and $\Lambda$ is
1.2 for those with $J^{P}=\frac{3}{2}$ and $\frac{5}{2}$. Other form factors
have also been tried; however, their log likelihood values are much worse than
those from the form factors used here. We also vary the $\Lambda$ values from
2.0 and 1.2 to 1.5. The biggest differences are taken as the form factor
systematic errors.
(5) The effect of using different combinations of states in the high mass
region on the masses and widths of other resonances was investigated above
(see Table 2), and the differences are also taken as systematic errors.
Table 5 shows the summary of the systematic errors for the masses and widths,
and the total systematic errors are the sum of each source added in
quadrature.
Table 5: Summary of the systematic errors for masses and widths of $N^{*}$ resonances (MeV/c2). Systematic error | $N(1440)$ | $N(1520)$ | $N(1535)$ | $N(1650)$ | $N(1710)$ | $N_{x}(2065)$
---|---|---|---|---|---|---
| $\Delta M$ | $\Delta\Gamma$ | $\Delta M$ | $\Delta\Gamma$ | $\Delta M$ | $\Delta\Gamma$ | $\Delta M$ | $\Delta\Gamma$ | $\Delta M$ | $\Delta\Gamma$ | $\Delta M$ | $\Delta\Gamma$
Data/MC comparison | 3 | 14 | 2 | 13 | 2 | 11 | 4 | 12 | 1 | 12 | 1 | 19
Interference | 12 | 25 | 2 | 23 | 3 | 22 | 25 | 15 | 15 | 2 | 10 | 20
Background uncertainty | 18 | 51 | 11 | 23 | 6 | 28 | 2 | 8 | 4 | 10 | 5 | 15
Different form-factors | 12 | 25 | 2 | 5 | 8 | 1 | 3 | 5 | 15 | 22 | 20 | 14
Different combinations in high mass region | 35 | 21 | 7 | 12 | 5 | 10 | 5 | 23 | 20 | 35 | 10 | 39
Total | 43 | 67 | 13 | 37 | 12 | 39 | 26 | 31 | 29 | 44 | 25 | 52
Table 6: Systematic errors for the branching fraction $B(J/\psi\to\pi^{0}p\bar{p})$ from different sources. Sys. error source | Systematic error (%)
---|---
Wire resolution | 2.18
Photon efficiency | 4.00
Particle ID | 4.00
Mass spectrum fitting | 1.93
Number of $J/\psi$ events | 4.72
Total | 7.93
Table 7: Summary of the optimized results for the $N^{*}$ states. Resonance | Mass (MeV/c2) | Width (MeV/c2) | $J^{P}$ | Fraction (%) | Branching fraction ($\times 10^{-4}$)
---|---|---|---|---|---
$N(1440)$ | $1455_{-7}^{+2}\pm 43$ | $316_{-6}^{+5}\pm 67$ | $\frac{1}{2}^{+}$ | 9.74$\sim$25.93 | 1.33$\sim$3.54
$N(1520)$ | $1513_{-4}^{+3}\pm 13$ | $127_{-8}^{+7}\pm 37$ | $\frac{3}{2}^{-}$ | 2.38$\sim$10.92 | 0.34$\sim$1.54
$N(1535)$ | $1537_{-6}^{+2}\pm 12$ | $135_{-8}^{+8}\pm 39$ | $\frac{1}{2}^{-}$ | 6.83$\sim$15.58 | 0.92$\sim$2.10
$N(1650)$ | $1650_{-6}^{+3}\pm 26$ | $145_{-10}^{+5}\pm 31$ | $\frac{1}{2}^{-}$ | 6.89$\sim$27.94 | 0.91$\sim$3.71
$N(1710)$ | $1715_{-2}^{+2}\pm 29$ | $95_{-1}^{+2}\pm 44$ | $\frac{1}{2}^{+}$ | 4.17$\sim$30.10 | 0.54$\sim$3.86
$N_{x}(2065)$ | $2040_{-4}^{+3}\pm 25$ | $230_{-8}^{+8}\pm 52$ | $\frac{3}{2}^{+}$ | 7.11$\sim$24.29 | 0.91$\sim$3.11
The systematic errors for the branching fraction $B(J/\psi\to\pi^{0}p\bar{p})$
mainly originate from the data/MC discrepancy for the tracking efficiency,
photon efficiency, particle ID efficiency, fitting region used, the background
uncertainty, and the uncertainty in the number of $J/\psi$ events.
(1) The systematic error from MDC tracking and the kinematic fit, 2.18%, is
estimated by using different MDC wire resolution simulation models.
(2) The photon detection efficiency has been studied using
$J/\psi\rightarrow\rho\pi$ smli . The efficiency difference between data and
MC simulation is about 2% for each photon. So 4% is taken as the systematic
error for two photons in this decay.
(3) A clean $J/\psi\rightarrow p\bar{p}\pi^{+}\pi^{-}$ sample is used to study
the error from proton identification. The error from the proton PID is about
2%. So the total error from PID is taken as 4% in this decay.
(4) The $\pi^{0}$ fitting range is changed from 0.04 - 0.3 GeV/c2 to 0.04 -
0.33 GeV/c2, and the difference, 1.28%, is taken to be the systematic error
from the fitting range. To estimate the uncertainty from the background shape,
we change the background shape from a 3rd-order polynomial to other functions.
The biggest change, 1.44%, is taken as the systematic error.
(5) The total number of $J/\psi$ events determined from inclusive 4-prong
hadrons is $(57.70\pm 2.72)\times 10^{6}$ fangss . The uncertainty is 4.72%.
Table 6 lists the different sources of systematic errors for the branching
fraction of $J/\psi\rightarrow p\bar{p}\pi^{0}$. The total systematic error is
the sum of each error added in quadrature.
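For concreteness, the quadrature combination is $\sigma_{\rm tot}=\sqrt{\sum_{i}\sigma_{i}^{2}}$. The short sketch below (added here for illustration) reproduces the totals of Tables 5 and 6 from their individual entries, using the Table 6 values as the worked example:

```python
import math

def combine(errors):
    """Add independent systematic errors in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

# Table 6 entries (%): wire resolution, photon efficiency, particle ID,
# mass spectrum fitting, number of J/psi events
print(f"{combine([2.18, 4.00, 4.00, 1.93, 4.72]):.2f}%")  # ~7.92%, i.e. the
                                                          # 7.93% total of Table 6
# The same rule applied to each column of Table 5 gives its 'Total' row,
# e.g. the N(1440) mass errors (MeV/c2):
print(round(combine([3, 12, 18, 12, 35])))                # 43
```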
## VII Summary
Based on 11,166 $J/\psi\rightarrow p\bar{p}\pi^{0}$ candidates from $5.8\times
10^{7}$ BESII $J/\psi$ events, a partial wave amplitude analysis is performed.
A long-sought ‘missing’ $N^{*}$, which was observed first by BESII in
$J/\psi\to p\bar{n}\pi^{-}+c.c.$, is also observed in this decay with mass and
width of $2040_{-4}^{+3}\pm 25$ MeV/c2 and $230_{-8}^{+8}\pm 52$ MeV/c2,
respectively. The mass and width obtained here are consistent with those from
$J/\psi\to p\bar{n}\pi^{-}+c.c.$ within errors. Its spin-parity favors
$\frac{3}{2}^{+}$. The masses and widths of other $N^{*}$ resonances in the
low mass region are also obtained and listed in Table 7, where the first
errors are statistical and the second are systematic. The ranges for the
fractions of $N^{*}$ states, and thus the branching fractions, are given too.
From this analysis, we find that the fractions of each $N^{*}$ state depend
largely on the $N^{*}$’s used in the high mass region, the form factors, and
Breit-Wigner parameterizations, as well as the background. We also determine
the $J/\psi\to p\bar{p}\pi^{0}$ branching fraction to be $Br(J/\psi\rightarrow
p\bar{p}\pi^{0})=(1.33\pm 0.02\pm 0.11)\times 10^{-3}$, where the efficiency
used includes the intermediate $N^{*}$ and $\bar{N}^{*}$ states obtained in
our partial wave analysis.
## VIII Acknowledgments
The BES collaboration thanks the staff of BEPC and computing center for their
hard efforts. This work is supported in part by the National Natural Science
Foundation of China under contracts Nos. 10491300, 10225524, 10225525,
10425523, 10625524, 10521003, 10821063, 10825524, the Chinese Academy of
Sciences under contract No. KJ 95T-03, the 100 Talents Program of CAS under
Contract Nos. U-11, U-24, U-25, and the Knowledge Innovation Project of CAS
under Contract Nos. U-602, U-34 (IHEP), the National Natural Science
Foundation of China under Contract Nos. 10775077, 10225522 (Tsinghua
University), and the Department of Energy under Contract No. DE-FG02-04ER41291
(U. Hawaii).
## References
* (1) N. Isgur, nucl-th/0007008 (2000).
* (2) E. G. Cazzoli, Phys. Rev. Lett. 34, 1125 (1975).
* (3) S. E. Csorna et al., Phys. Rev. Lett., 86, 4243 (2001).
* (4) B. Aubert (BABAR Collaboration), Phys. Rev. D 72, 052006 (2005).
B. Aubert (BABAR Collaboration), hep-ex/0607042.
B. Aubert (BABAR Collaboration), Phys. Rev. Lett., 97, 232001 (2006).
B. Aubert (BABAR Collaboration), Phys. Rev. D 74, 011103 (2006).
B. Aubert (BABAR Collaboration), hep-ex/0607086.
B. Aubert (BABAR Collaboration), Phys. Rev. Lett., 98, 012001 (2007).
B. Aubert (BABAR Collaboration), Phys. Rev. D 77, 012002 (2009).
* (5) K. Abe (BELLE Collaboration), Phys. Rev. Lett., 98, 262001 (2007).
R. Chistov (BELLE Collaboration), Phys. Rev. Lett., 97, 162001 (2006).
R. Mizuk (BELLE Collaboration), Phys. Rev. Lett., 94, 122002 (2005).
* (6) M. Iori et al., hep-ex/0701021.
* (7) C. Amsler et al., Phys. Lett. B 667, 1 (2008).
* (8) G. Bari, et al., Nuovo Cim., A 104, 1787 (1991).
G. Bari, et al., Nuovo Cim., A 104, 571 (1991).
* (9) T. Aaltonen (CDF Collaboration), Phys. Rev. Lett., 99, 202001 (2007).
T. Aaltonen (CDF Collaboration), Phys. Rev. Lett., 99, 052002 (2007).
* (10) V. M. Abazov (D0 Collaboration), Phys. Rev. Lett., 99, 052001 (2007).
V. M. Abazov (D0 Collaboration), arXiv:0808.4142.
* (11) B. Aubert et al., Phys. Rev. D73, 012005 (2006).
* (12) C. Amsler et al., Physics Lett. B667, 1 (2008).
* (13) S. Capstick and W. Roberts, Prog. Part. Nucl. Phys. 45, S241 (2000).
* (14) N. Isgur and G. Karl, Phys. Rev. D19, 2653 (1979).
* (15) S. Capstick and W. Roberts, Phys. Rev. D47, 1994 (1993).
* (16) J. Z. Bai et al. (BES Collaboration), Phys. Lett. B510, 75 (2001).
* (17) M. Ablikim et al. (BES Collaboration), Phys. Rev. Lett. 97, 062001 (2006).
* (16) J. Z. Bai et al. (BES Collaboration), Nucl. Instrum. Meth. A458, 627 (2001).
* (19) M. Ablikim et al. (BES Collaboration), Nucl. Instrum. Meth. A552, 344 (2005).
* (20) W. Rarita and J. Schwinger, Phys. Rev. 60, 61 (1941).
* (21) W. H. Liang, P. N. Shen, J. X. Wang and B. S. Zou, J. Phys. G28, 333 (2002).
* (22) S. U. Chung, Phys. Rev. D48, 1225 (1993).
* (23) M. Benmerrouche, N. C. Mukhopadhyay and J. F. Zhang, Phys. Rev. Lett. 77, 4716 (1996); Phys. Rev. D51, 3237 (1995).
* (24) M. G. Olsson and E. T. Osypowski, Nucl. Phys. B87, 399 (1975); Phys. Rev. D17, 174 (1978); M. G. Olsson, E. T. Osypowski and E. H. Monsay, Phys. Rev. D17, 2938 (1978).
* (25) C. Fronsdal, Nuovo Cimento Suppl. 9, 416 (1958); R. E. Behrends and C. Fronsdal, Phys. Rev. 106, 345 (1957).
* (26) S. U. Chung, Spin Formalisms, CERN Yellow Report 71-8 (1971); S. U. Chung, Phys. Rev. D48, 1225 (1993); J. J. Zhu and T. N. Ruan, Communi. Theor. Phys. 32, 293, 435 (1999).
* (27) L. Adler and R. F. Dashen, Current Algebra and Application to Particle Physics (Benjamin, New York, 1968); B. W. Lee, Chiral Dynamics (Gordon and Breach, New York, 1972).
* (28) T. P. Vrana, S. A. Dytman and T. S. H. Lee, Phys. Rept. 328, 181 (2000).
* (29) Liang Wei-hong, Ph.D. thesis, Institute of High Energy Physics, Chinese Academy of Sciences, 2002 (in Chinese); G. Penner and U. Mosel, Phys. Rev. C66, 055211 (2002); W. H. Liang et al., Eur. Phys. J. A21, 487 (2004).
* (30) S. M. Li et al., HEP $\&$ NP 28, 859 (2004) (in Chinese).
* (31) Fang S.S. et al., HEP $\&$ NP 27, 277 (2003) (in Chinese).
|
arxiv-papers
| 2009-05-11T07:39:23 |
2024-09-04T02:49:02.522199
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "BES Collaboration",
"submitter": "Shumin Li",
"url": "https://arxiv.org/abs/0905.1562"
}
|
0905.1591
|
# Veech groups, irrational billiards and stable abelian differentials
Ferrán Valdez Max Planck Institut für Mathematik Vivatsgasse 7. 53111, Bonn,
Germany. ferran@mpim-bonn.mpg.de
###### Abstract.
We describe Veech groups of flat surfaces arising from irrational angled
polygonal billiards or irreducible stable abelian differentials. For
irrational polygonal billiards, we prove that these groups are non-discrete
subgroups of $\rm SO(2,\mathbf{R})$ and we calculate their rank.
## 1\. Introduction
The Veech group of a flat surface is the group of derivatives of orientation-
preserving affine homeomorphisms. If the surface is compact, Veech groups are
discrete subgroups of $\mathbf{SL}(2,\mathbf{R})$ that can be related to the
geodesic flow on the surface [7]. Our main goal is to describe Veech groups
arising from non-compact flat surfaces associated to billiards on an
irrational angled polygon. Nevertheless, in this article we will not discuss
dynamical aspects of geodesics. More precisely,
###### Theorem 1.1.
Let $P\subset\mathbf{R}^{2}$ be a simply connected polygon with interior
angles $\\{\lambda_{j}\pi\\}_{j=1}^{N}$, $S(P)$ the flat surface obtained from
$P$ via the Katok-Zemljakov construction and $G(S)$ the Veech group of $S(P)$.
Suppose there exists $\lambda_{j}\in\mathbf{R}\setminus\mathbf{Q}$ for some
$j=1,\ldots,N$. Then, $G(S)<\mathbf{SO}(2,\mathbf{R})$ and the group generated
by the rotations
(1.1) $R(S)=<\begin{pmatrix}\cos(2\lambda_{j}\pi)&-\sin(2\lambda_{j}\pi)\\\
\sin(2\lambda_{j}\pi)&\cos(2\lambda_{j}\pi)\end{pmatrix}\mid j=1,\ldots,N>$
has maximal rank in $G(S)$.
The surface $S(P)$ has infinite genus and only one end. A topological surface
satisfying these two conditions is called a _Loch Ness monster_ [6].
After a suggestion from M. Möller, we consider Veech groups arising from
stable abelian differentials at $\partial\Omega\overline{M_{g}}$, the boundary
of the Deligne-Mumford compactification of the Hodge bundle $\Omega M_{g}$
(see §5, [1] for a definition). On this boundary, the notion of Veech group
makes sense only for stable abelian differentials defined on an irreducible stable
curve, which we call irreducible stable abelian differentials. In this direction, we prove the following:
###### Proposition 1.2.
Let $(X,\omega)\in\partial\rm\Omega\overline{\mathcal{M}_{g}}$ be an
irreducible stable Abelian differential of genus $g$. Suppose that there
exists at least one node in $X$ where the 1–form $\omega$ has a pole. Let
$\\{r_{j},-r_{j}\\}_{j=1}^{k}$ be the set of all residues of $\omega$ and
define
(1.2) $\ N:=<\\{\begin{pmatrix}1&s\\\ 0&t\end{pmatrix}\mid
t\in\mathbf{R}^{+},s\in\mathbf{R}\\},-Id>.$
Let $G(X)=G(X,\omega)$ be the Veech group of $(X,\omega)$. Then,
1. (1)
If there exist $i\neq j$ such that $r_{i}/r_{j}\notin\mathbf{R}$, then $G(X)$
is finite.
2. (2)
If all residues of $\omega$, as vectors in $\mathbf{C}\simeq\mathbf{R}^{2}$,
are parallel, then $G(X)<N$ is either conjugate to a discrete subgroup or
equal to $N$.
Recently, Hubert and Schmithüsen [3] have shown the existence of a countable
family of _infinite area_ origamis whose Veech groups are infinitely generated
subgroups of $\mathbf{SL}(2,\mathbf{Z})$. These origamis arise as
$\mathbf{Z}$-covers of (finite area) genus 2 origamis. Motivated by this work,
in the last section of this article we construct, for each $n\in\mathbf{N}$,
an uncountable family of flat surfaces $\mathcal{S}_{n}=\\{S_{i}\\}_{i\in I}$
such that each $S_{i}$ is homeomorphic to the Loch Ness monster and the Veech
group $G(S_{i})<\mathbf{SO}(2,\mathbf{R})$ is infinitely generated.
This article is organized as follows. We introduce the notion of _tame_ flat
surface and extend the definition of some classical geometric invariants
(saddle connections, Veech groups) to the non-compact realm in Section 2.
Loosely speaking, _tame_ flat surfaces present a discrete set of
singularities, which are either of finite or infinite angle. We briefly recall
the Katok-Zemljakov construction, the notion of stable Abelian differential
and define Veech groups for irreducible nodal flat surfaces. Section 3 deals
with the proof of Theorem 1.1 and Section 4 with the proof of Proposition 1.2.
Finally, Section 5 presents the construction of the family of flat surfaces
$\mathcal{S}_{n}$ mentioned above.
Acknowledgments. This article was written during a stay of the author at the
Max Planck Institut für Mathematik in Bonn. The author wishes to express his
gratitude to the administration and staff of the MPI for the wonderful working
facilities and the atmosphere. The author acknowledges support from the
Sonderforschungsbereich/Transregio 45 and the ANR Symplexe. The author thanks
M. Möller and M. Bainbridge for valuable discussions.
## 2\. Preliminaries
Non-compact flat surfaces. Let $(S,\omega)$ be a pair formed by a connected
Riemann surface $S$ and a holomorphic 1–form $\omega$ on $S$ which is not
identically zero. Denote by $Z(\omega)\subset S$ the zero locus of the form
$\omega$. Local integration of this form endows $S\setminus Z(\omega)$ with an
atlas whose transition functions are translations of $\mathbf{C}$. The
pullback of the standard translation invariant flat metric on the complex
plane defines a flat metric $d$ on $S\setminus Z(\omega)$. Let $\widehat{S}$
be the metric completion of $S$. Each point in $Z(\omega)$ has a neighborhood
isometric to the neighborhood of $0\in\mathbf{C}$ with the metric induced by
the 1–form $z^{k}dz$ for some $k>1$ (which is a cyclic finite branched
covering of $\mathbf{C}$). The points in $Z(\omega)$ are called finite angle
singularities. Note that there is a natural embedding of $S$ into
$\widehat{S}$.
###### Definition 2.1.
A point $p\in\widehat{S}$ is called an _infinite angle singularity_ , if there
exists a radius $\epsilon>0$ such that the punctured neighborhood:
(2.3) $\\{z\in\widehat{S}\mid 0<d(z,p)<\epsilon\\}$
is isometric to the infinite cyclic covering of
$\epsilon\mathbf{D}^{*}=\\{w\in\mathbf{C}^{*}\mid 0<\mid w\mid<\epsilon\\}$.
We denote by $Y_{\infty}(\omega)$ the set of infinite angle singularities of
$\widehat{S}$.
###### Definition 2.2.
The pair $(S,\omega)$ is called a _tame_ flat surface if $\hat{S}\setminus
S=Y_{\infty}(\omega)$.
One can easily check that flat surfaces arising from irrational polygons or
stable abelian differentials are tame.
###### Definition 2.3.
A _singular geodesic_ of $S=(S,\omega)$ is an open geodesic segment in the
flat metric $d$ whose image under the natural embedding
$S\hookrightarrow\widehat{S}$ issues from a singularity of $\widehat{S}$,
contains no point of $Y(\omega)$ in its interior and is not properly contained
in some other geodesic segment. A _saddle connection_ is a finite length
singular geodesic.
To each saddle connection we can associate a _holonomy vector_ : we ’develop’
the saddle connection in the plane by using local coordinates of the flat
structure. The difference vector defined by the planar line segment is the
holonomy vector. Two saddle connections are _parallel_ , if their
corresponding holonomy vectors are linearly dependent.
Let $\mathrm{Aff}_{+}(S)$ be the group of affine orientation preserving
homeomorphisms of the flat surface $S$ (by definition $S$ comes with a
distinguished 1–form $\omega$). Consider the map
(2.4)
$\mathrm{Aff}_{+}(S)\overset{D}{\longrightarrow}\mathbf{GL}_{+}(2,\mathbf{R})$
that associates to every $\phi\in\mathrm{Aff}_{+}(S)$ its (constant) Jacobian
derivative $D\phi$.
###### Definition 2.4.
Let $S$ be a flat surface. We call $G(S)=D(\mathrm{Aff}_{+}(S))$ the _Veech
group_ of $S$.
The Katok-Zemljakov construction. In the following paragraph we recall briefly
the definition of this construction. For details see [6] and references
within.
Let $P_{0}$ denote the polygon $P$ deprived of its vertices. The
identification of two disjoint copies of $P_{0}$ along “common sides” defines
a Euclidean structure on the $N$-punctured sphere. We denote it by
$\mathbf{S}^{2}(P)$. This punctured surface is naturally covered by $S(P_{0})$, the
_minimal translation surface_ corresponding to $P$. We denote the projection
of this covering by $\pi:S(P_{0})\longrightarrow\SS^{2}(P)$. Call a vertex of
$P$ _rational_ , if the corresponding interior angle is commensurable with
$\pi$. When the set of rational vertices of $P$ is not empty, the translation
surface $S(P_{0})$ can be locally compactified by adding points ”above”
rational vertices of $P$. The result of this local compactification is a flat
surface with finite angle singularities that we denote by $S(P)$. If the set
of rational vertices of $P$ is empty, we set $S(P)=S(P_{0})$. In both cases,
$S(P)$ is called the flat surface obtained from the polygon $P$ via the
_Katok-Zemljakov construction_. Remark that, in the case of rational polygons,
some authors give a different definition (see [5] or [2]).
Stable Abelian differentials. We recall briefly the notion of stable Abelian
differential, following Bainbridge [1].
A _nodal Riemann surface_ $X$ is a finite type Riemann surface, _i.e._ with
finitely generated fundamental group, that has finitely many cusps which have
been identified pairwise to form nodes. A connected component of a nodal
Riemann surface $X$ with its nodes removed is called a _part_ of $X$, and the
closure of a part of $X$ is an _irreducible component_. The genus of a nodal
Riemann surface is the topological genus of the non-singular Riemann surface
obtained by replacing each node in $X$ with an annulus. A _stable Riemann
surface_ is a connected nodal Riemann surface for which each part has negative
Euler characteristic. A _stable Abelian differential_ $\omega$ on a stable
Riemann surface $X$ is a holomorphic 1–form on $X$ minus its nodes such that
its restriction to each part of $X$ has at worst simple poles at the cusps and
such that, at two cusps which have been identified to form a node, the differential has
opposite residues, if any. Nodes at which $\omega$ presents a pole are called
_polar nodes_.
Veech groups on stable Abelian differentials. Let $(X,\omega)$ be a stable
Abelian differential. We denote by ${\rm Aff_{+}}(X,\omega)$ the group of
affine orientation preserving homeomorphisms of $X$. The Jacobian derivative
$D\phi$ of an affine homeomorphism $\phi$ is constant on each irreducible
component of $(X,\omega)$. In general, there is no canonical derivation
morphism from the affine group of a stable Abelian differential onto
$\mathbf{GL}_{+}(2,\mathbf{R})$. Consider, for example, the genus 2 stable
Abelian differential given by the following figure:
Figure 1.
We avoid this situation by restricting ourselves to irreducible Riemann
surfaces.
###### Definition 2.5.
Let $X=(X,\omega)$ be an _irreducible_ stable Abelian differential. We call
$G(X)=D({\rm Aff_{+}}(X,\omega))$ the _Veech group_ of $X$.
Abelian differentials close to a stable Abelian differential $\rm(X,\omega)$
with a polar node develop very long cylinders which are pinched off to form a
node in the limit (see §5.3 [1]). In the following figure we depict a genus
two stable abelian differential with two nodes (with residues $\rm\pm 1$ and
$\rm\pm(1+i)$) and two double zeroes:
Figure 2.
When considering the flat metric, every stable Abelian differential deprived
of its polar nodes is a complete metric space. In the context of stable Abelian
differentials, we call a _singular geodesic_ every geodesic segment that
issues from a zero or a non-polar node of $\omega$, contains no such zero or
non-polar node in its interior and is not properly contained in some other
geodesic segment. As before, finite length singular geodesics will be called
_saddle connections_.
_Decomposition of stable Abelian differentials with polar nodes_. Suppose that
$(X,\omega)$ has polar nodes with residues $r_{1},\ldots,r_{k}$. Every $r_{j}$
defines a direction $\theta(r_{j})\in\mathbf{R}/\mathbf{Z}$ for which
$(X,\omega)$ presents a set of disjoint infinite area cylinders
$C_{1,j},\ldots,C_{n(j),j}$ foliated by closed geodesics parallel to
$\theta(r_{j})$ and whose length is $\mid r_{j}\mid$. Denote by $C_{j}$ the
closure in $(X,\omega)$ of $\cup_{i=1}^{n(j)}C_{i,j}$ and
$C=\cup_{j=1}^{k}C_{j}$. We define
(2.5) $X^{\prime}:=X\setminus C$
The Veech group of $(X,\omega)$ acts linearly on the set of residues of
$\omega$ and leaves the decomposition $X=X^{\prime}\sqcup C$ invariant.
## 3\. Proof of Theorem 1.1
First, we prove that the matrix group $R(S)$ defined in (1.1) is a subgroup of
$G(S)$. Then, we prove that $G(S)<\mathbf{SO}(2,\mathbf{R})$ and, finally,
that ${\rm Rank}(G(S))={\rm Rank}(R(S))$.
(i) The locally Euclidean structure on the $N$-punctured sphere
$\mathbf{S}^{2}(P)$ gives rise to the holonomy representation:
(3.6) $\rm hol:\pi_{1}(\mathbf{S}^{2}(P))\longrightarrow
Isom_{+}(\mathbf{R}^{2})$
Let $B_{j}$ be a simple loop in $\mathbf{S}^{2}(P)$ around the missing vertex
of $P$ whose interior angle is $\lambda_{j}\pi$, $\rm j=1,\ldots,N$. Suppose
that $B_{j}\cap B_{i}=*$, for $\rm i\neq j$. Then, $\\{B_{j}\\}_{j=1}^{N}$
generates $\pi_{1}(\mathbf{S}^{2}(P),*)$. Given an isometry $\varphi\in\rm
Isom_{+}(\mathbf{R}^{2})$, we denote its derivative by $D\circ\varphi$. A
direct calculation in local coordinates shows that $\rm hol(B_{j})$ is affine
and that $M_{j}=D\circ\rm hol(B_{j})$ is given by:
(3.7) $M_{j}=\begin{pmatrix}\cos(2\lambda_{j}\pi)&-\sin(2\lambda_{j}\pi)\\\
\sin(2\lambda_{j}\pi)&\cos(2\lambda_{j}\pi)\end{pmatrix}\hskip
28.45274ptj=1,\ldots,N.$
Since $G(S(P_{0}))=G(S(P))$, we conclude that $R(S)$ is a subgroup of $G(S)$.
(ii) We claim that the length of every saddle connection in $S(P)$ is bounded
below by some constant $c=c(P)>0$. Indeed, consider the folding map
$f:\mathbf{S}^{2}(P)\longrightarrow P$ which is 2-1 except along the boundary of $P$.
The projection $f\circ\pi:S(P_{0})\longrightarrow P$ maps every saddle
connection $\gamma\subset S(P_{0})$ onto a _generalized diagonal_ of the
billiard game on $P$ (see [4] for a precise definition). The length of
$\gamma$ is bounded below by the length of the generalized diagonal
$f\circ\pi(\gamma)$. The length of any generalized diagonal of the billiard
table $P$ is bounded below by some positive constant $c$ depending only on
$P$. This proves our claim. The constant $c$ is realized by a generalized
diagonal. Therefore, we can choose a holonomy vector $v$ of minimal length.
Given that $R(S)<G(S)$, the $G(S)$-orbit of $v$ is dense in the circle of
radius $|v|$ centered at the origin. This forces the Veech group $G(S)$ to lie
in $\mathbf{SO}(2,\mathbf{R})$.
(iii) Suppose that there exists an affine homeomorphism $\varphi\in{\rm
Aff}_{+}(S)$ such that $D\varphi$ is an infinite order element of
$\mathbf{SO}(2,\mathbf{R})/R(S)$. Let $\gamma_{0}$ be a fixed saddle
connection. Then $\\{f\circ\pi\circ\varphi^{k}(\gamma_{0})\\}_{k\in\mathbf{Z}}$ is an
infinite set of generalized diagonals of bounded length. But this is a
contradiction, for the set of generalized diagonals of bounded length on a
polygonal billiard is always finite [4].
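The argument in step (ii) uses the standard fact that the orbit of a nonzero vector under rotation by an irrational multiple of $2\pi$ is dense in the circle of radius $|v|$. The following short numerical sketch (added here purely for illustration; it is not part of the original proof) shows the largest gap between orbit points shrinking as more iterates are taken:

```python
import numpy as np

# Rotation angle 2*pi*lambda with lambda irrational (sqrt(2) as an example).
theta = (2 * np.pi * np.sqrt(2)) % (2 * np.pi)

for n in (10, 100, 1000, 10000):
    angles = np.sort((np.arange(1, n + 1) * theta) % (2 * np.pi))
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    print(n, gaps.max())   # the largest gap tends to 0, so the orbit is dense
```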
## 4\. Proof of Proposition 1.2
The Veech group of the irreducible stable Abelian differential $(X,\omega)$
acts linearly on the (finite) set of residues of $\omega$. Therefore, if not
all residues are parallel, $G(X)$ must be finite.
Suppose now that all residues are parallel to the horizontal direction. Then
$G(X)<N$. If every holonomy vector of $(X,\omega)$ is horizontal we claim that
$G(X)=N$. Indeed, in this situation $X^{\prime}$ defined in (2.5) is empty and
the horizontal geodesic flow decomposes $X$ into finitely many cylinders with
horizontal boundaries. This allows to define, for every $g\in N$, an
orientation preserving affine homeomorphism of $X$ whose differential is
exactly $g$. On the other hand, if at least one holonomy vector fails to be
horizontal, then $G(X)<N$ is discrete, for the set of holonomy vectors of any
stable Abelian differential is discrete.
$\square$
Remark. Veech groups of irreducible stable Abelian differentials in
$\partial\Omega\overline{\mathcal{M}_{g}}$ without polar nodes are as
”complicated” as Veech groups of flat surfaces in $\Omega\mathcal{M}_{g}$ with
marked points. More precisely, a nodal Riemann surface $X$ has a
_normalization_ $f:S(X)\longrightarrow X$ defined by separating the two
branches passing through each node of $X$. For every node $p$, denote
$\\{p_{+},p_{-}\\}:=f^{-1}(p)$. Then, if the stable Abelian differential
$(X,\omega)$ has no polar nodes, we have the equality:
(4.8) ${\rm Aff}_{+}(X,\omega)=\\{\phi\in{\rm Aff}_{+}(S(X),\omega)\mid\phi(p_{+})=\phi(p_{-}),\ \forall\ p\ \text{node of }X\\}.$
## 5\. Infinitely generated Veech groups in $\mathbf{SO}(2,\mathbf{R})$
Fix $n\in\mathbf{N}$. Consider an unbounded sequence of real numbers
(5.9) $\rm x_{0}=0<x_{1}<x_{2}<\ldots<x_{j}<...$
such that $\rm x_{j+1}-x_{j}>1$ for all $j$. The segments of straight line
joining the point $\rm(x_{j},x_{j}^{2n})$ to $\rm(x_{j+1},x_{j+1}^{2n})$ and
$\rm(-x_{j},x_{j}^{2n})$ to $\rm(-x_{j+1},x_{j+1}^{2n})$, $j\geq 0$, define a
polygonal line $\partial P$ in $\mathbf{C}$. Let $int(P)$ be the connected
component of $\mathbf{C}\setminus\partial P$ intersecting the positive
imaginary axis $\rm Im(z)>0$. We define $P=\partial P\cup int(P)$. We call $P$
the _unbounded polygon_ defined by the sequence (5.9). Remark that $P$ is
symmetric with respect to the imaginary axis. For each $\rm j\geq 0$, let
$\rm\lambda_{j}\pi$ be the interior angle of $P$ at the vertex
$\rm(x_{j},x_{j}^{2n})$.
###### Definition 5.1.
We say that a sequence of real numbers $\rm\\{\mu_{j}\\}_{j\geq 0}$ is _free
of resonances_ if and only if for every finite subset
$\rm\\{\mu_{j_{1}},\ldots\mu_{j_{N}}\\}$ the kernel of the group morphism
$\mathbf{Z}^{N}\longrightarrow\mathbf{C}$ defined by
$\rm(n_{1},\ldots,n_{N})\longrightarrow exp(2\pi
i(\sum_{k=1}^{N}n_{k}\mu_{j_{k}}))$
is trivial.
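For illustration (an example we add here; it is not taken from the original), the condition is equivalent to requiring that $1,\mu_{j_{1}},\ldots,\mu_{j_{N}}$ be linearly independent over $\mathbf{Q}$ for every finite subset. A concrete resonance-free sequence is $\mu_{j}=\sqrt{p_{j}}\ (\mathrm{mod}\ 1)$, where $p_{j}$ denotes the $j$-th prime: a relation $\sum_{k=1}^{N}n_{k}\mu_{j_{k}}\in\mathbf{Z}$ would force a rational linear dependence among $1,\sqrt{p_{j_{1}}},\ldots,\sqrt{p_{j_{N}}}$, which is impossible because square roots of distinct primes are linearly independent over $\mathbf{Q}$.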
There are uncountably many choices $\\{x_{j}^{i}\\}_{j\geq 0}$, $i\in I$, for
(5.9), such that the sequence $\\{\lambda_{j}^{i}\\}_{j\geq 0}$ defining the
interior angles of $P=P_{i}$ is free of resonances. For each $i\in I$, denote
by $\mathbf{S}^{2}(P_{i})$ the identification of two vertexless copies of $P_{i}$ along
“common sides”. The Katok-Zemljakov construction described in Section 2 can be
applied to the unbounded polygon $P_{i}$. The result is a flat covering
$S_{i}\longrightarrow\mathbf{S}^{2}(P_{i})$.
###### Lemma 5.2.
The flat surface $S_{i}$ is homeomorphic to the Loch Ness monster. The Veech
group $G(S_{i})<\mathbf{SO}(2,\mathbf{R})$ is infinitely generated and
contains the infinite rank group generated by the matrices
(5.10) $\begin{pmatrix}\cos(2\lambda_{j}^{i}\pi)&-\sin(2\lambda_{j}^{i}\pi)\\\
\sin(2\lambda_{j}^{i}\pi)&\cos(2\lambda_{j}^{i}\pi)\end{pmatrix}\rm\hskip
28.45274ptj\geq 0.$
###### Proof.
Every flat surface obtained via the Katok-Zemljakov construction from a
(bounded) polygon whose angles are all irrational multiples of $\pi$, is
homeomorphic to a Loch-Ness monster. This is proved in Theorem 1 (Case (A)
and absence of resonances) in [6]. The same conclusion can be drawn for the
unbounded polygons $P_{i}$ after replacing in the proof of Theorem 1 [_Ibid._]
_polygon P_ and surface $X(P)$ by _unbounded polygon_ $P_{i}$ and $S_{i}$,
respectively.
In §3, steps (i) and (ii) made use of the boundedness of $P$ to ensure that
the length of every saddle connection in $S(P)$ was bounded below by a
constant depending only on $P$. For unbounded polygons, this is ensured by the
condition $\rm x_{j+1}-x_{j}>1$, for all $j$, on the sequence (5.9). It
follows that, for every $i\in I$, $G(S_{i})<\mathbf{SO}(2,\mathbf{R})$ and
that this Veech group contains the group generated by the matrices (5.10).
This matrix group is infinitely generated, for the sequence
$\\{\lambda_{j}^{i}\\}_{j\geq 0}$ is free of resonances, for every $i\in I$.
$\square$
## References
* [1] M. Bainbridge. _Euler characteristics of Teichmüller curves in genus two_. Geom. Topol. 11 (2007), 1887–2073.
* [2] E. Gutkin and S. Troubetzkoy _Directional flows and strong recurrence for polygonal billiards_. International Conference on Dynamical Systems (Montevideo, 1995), 21–45, Pitman Res. Notes Math. Ser., 362, Longman, Harlow, 1996. _Geom. Dedicata_ 125 (2007), 39–46.
* [3] P. Hubert and G. Schmithüsen. _Infinite translation surfaces with infinitely generated Veech groups_. Preprint. http://www.cmi.univ-mrs.fr/ hubert/articles/hub-schmithuesen.pdf
* [4] A. B. Katok, _The growth rate for the number of singular and periodic orbits for a polygonal billiard_. Comm. Math. Phys. 111 (1987), no. 1, 151–160.
* [5] H. Masur and S. Tabachnikov. _Rational billiards and flat structures_. Handbook of dynamical systems. Vol. 1A, 1015–1089, North Holland. Amsterdam 2002.
* [6] J.F. Valdez. _Infinite genus surfaces and irrational polygonal billiards_. To appear in Geom. Dedicata.
* [7] W.A. Veech. _Teichmüller curves in the moduli space, Eisenstein series and applications to triangular billiards_. Inventiones mathematicae, 97, 1989, 553–583.
|
arxiv-papers
| 2009-05-11T10:12:00 |
2024-09-04T02:49:02.528254
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ferran Valdez",
"submitter": "Ferran Valdez",
"url": "https://arxiv.org/abs/0905.1591"
}
|
0905.1693
|
# Diverse protostellar evolutionary states in the young cluster AFGL961
Jonathan P. Williams1, Rita K. Mann1, Christopher N. Beaumont1, Jonathan J.
Swift1, Joseph D. Adams2, Joe Hora3, Marc Kassis4, Elizabeth A. Lada5, Carlos
G. Román-Zúñiga6 1 Institute for Astronomy, University of Hawaii, 2680
Woodlawn Drive, Honolulu, HI 96822; jpw@ifa.hawaii.edu 2 Cornell University,
Department of Radiophysics Space Research, Ithaca NY 14853 3 Harvard-
Smithsonian, CfA, 60 Garden St., MS 65, Cambridge, MA 02138 4 Keck
Observatory 65-1120 Mamalahoa Hwy, Kamuela, HI 96743 5 Department of
Astronomy, University of Florida, Gainesville, FL 32611 6 Centro Astronómico
Hispano Alemán, Camino Bajo de Huétor 50, Granada 18008, Spain
###### Abstract
We present arcsecond resolution mid-infrared and millimeter observations of
the center of the young stellar cluster AFGL961 in the Rosette molecular
cloud. Within 0.2 pc of each other, we find an early B star embedded in a
dense core, a neighboring star of similar luminosity with no millimeter
counterpart, a protostar that has cleared out a cavity in the circumcluster
envelope, and two massive, dense cores with no infrared counterparts. An
outflow emanates from one of these cores, indicating a deeply embedded
protostar, but the other is starless, bound, and appears to be collapsing. The
diversity of states implies either that protostellar evolution is faster in
clusters than in isolation or that clusters form via quasi-static rather than
dynamic collapse. The existence of a pre-stellar core at the cluster center
shows that that some star formation continues after and in close proximity to
massive, ionizing stars.
circumstellar matter – stars: formation — stars: pre-main sequence — ISM:
structure
## 1 Introduction
The dominant mode of star formation is in groups. In this sense, understanding
cluster formation is a prerequisite for understanding the origin of most
stars, including all massive stars and, in all likelihood, our Sun. Some major
unanswered questions include the role of global and local processes, the
formation time-scale, and whether high mass stars shut down further cluster
growth.
Long wavelength, mid-infrared to millimeter, observations are required to
image the youngest, most deeply embedded protostars and their nascent cores.
The study of dense stellar groups is hampered by the low resolution at these
wavelengths and, consequently, less is known about their origins and evolution
than closer, isolated star forming systems. Technological developments
including large format mid-infrared arrays and sensitive millimeter
interferometers provide a new view of the most embedded regions of young
clusters. We report here observations of young protostars and molecular cores
in the AFGL961 cluster and the constraints that these place on models of
cluster formation.
AFGL961 owes its name to an Air Force infrared sky survey carried out on a
rocket-borne telescope in the early 1970s. The resolution of these data was
low and Cohen (1973) was the first to give a precise, ground-based, position.
Due to its high luminosity and location in the nearby Rosette molecular cloud,
AFGL961 has been the subject of many subsequent studies.
Bally & Predmore (1983) found a weak point-like 5 GHz source with the VLA and
interpreted it as a compact HII region around a B3 star. Based on the
definition of Zinnecker & Yorke (2007), AFGL961 can therefore be considered as
a (borderline) massive star forming region. Near-infrared imaging by Lenzen et
al. (1984) revealed a double source with the easternmost, redder component
coinciding with the VLA source. They also found a third object about 30”
further west near the most prominent optical emission, a fan-shaped reflection
nebula. Castelaz et al. (1985) confirmed the double source and showed that
both members are pre-main-sequence with the luminosity of B stars. Aspin
(1998) presented a wide field image of shocked H2 emission showing numerous
stellar sources and bow shock structures. Román-Zúñiga et al. (2008) mapped
the entire Rosette molecular cloud in the near-infrared and identified 10
embedded clusters of which AFGL961 is the brightest and the most heavily
obscured by nebulosity. This obscuration, however, prevented a detailed study
of the fainter and most embedded sources.
AFGL961 lies in a massive clump along a broad ridge of molecular emission
extending away from the Rosette nebula (Blitz & Thaddeus, 1980). Lada &
Gautier (1982) mapped a bipolar outflow from the cluster, most recently imaged
in CO 3–2 by Dent et al. (2009) who show a high velocity, collimated flow
extending over $5^{\prime}$.
Distance estimates to the Rosette cloud range from $1.4-1.7$ kpc (Ogura &
Ishida, 1981; Perez et al., 1987; Hensberge et al., 2000) and we adopt 1.6 kpc
here for consistency with most previous work. AFGL961 has an infrared
luminosity of $11400\,L_{\odot}$ (Cox et al., 1990) and is, by far, the
brightest embedded cluster in the cloud.
A large-scale view of AFGL961 is shown in Figure 1. The image is from the
observations of Román-Zúñiga et al. (2008) and shows the embedded protostars
and associated nebulosity at 2.2 $\mu$m. The contours show the emission at 850
$\mu$m from the dusty cluster envelope and were produced from archival SCUBA
data. The flux per beam at this wavelength is a direct measure of column
density where we have assumed a uniform temperature $T=20$ K and dust opacity
$\kappa_{\nu}=0.1(\nu/1200\,{\rm GHz})$ cm2 g-1 (Hildebrand, 1983). The
contours are in units of g cm-2 to best compare to theoretical calculations
and numerical simulations (see §4). The total flux of the map is 10 Jy which
converts to a mass $M=90\,M_{\odot}$. Three bright stars dominate the
luminosity of the cluster at this wavelength and through the mid-infrared
(Poulton et al., 2008). The dust peak is slightly elongated and offset from
the central double source and the western source at the center of the optical
fan-shaped nebulosity lies near the edge of the envelope.
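The 10 Jy to $90\,M_{\odot}$ conversion follows from the standard optically thin relation $M=F_{\nu}d^{2}/[\kappa_{\nu}B_{\nu}(T)]$. The sketch below is our own numerical check rather than part of the original paper; it uses the stated $T=20$ K, the Hildebrand-style opacity, and $d=1.6$ kpc, and it assumes that the quoted $\kappa_{\nu}$ is already referenced to the total (gas plus dust) mass.

```python
import numpy as np

# cgs constants and unit conversions
h, k, c = 6.626e-27, 1.381e-16, 2.998e10
Jy, pc, Msun = 1e-23, 3.086e18, 1.989e33

nu    = c / 850e-4              # 850 um expressed in Hz
T     = 20.0                    # assumed dust temperature (K)
kappa = 0.1 * (nu / 1.2e12)     # kappa_nu = 0.1 (nu / 1200 GHz) cm^2 g^-1
F_nu  = 10.0 * Jy               # total SCUBA flux of the envelope
d     = 1.6e3 * pc              # adopted distance of 1.6 kpc

B_nu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))   # Planck function
M = F_nu * d**2 / (kappa * B_nu)                           # optically thin mass
print(M / Msun)   # ~86 Msun, i.e. ~90 Msun at the precision quoted in the text
```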
We have gathered data on AFGL961 over a period of several years with the
Infrared Telescope Facility111 The Infrared Telescope Facility, is operated by
the University of Hawaii under Cooperative Agreement no. NCC 5-538 with the
National Aeronautics and Space Administration, Science Mission Directorate,
Planetary Astronomy Program. (IRTF), Submillimeter Array222 The Submillimeter
Array is a joint project between the Smithsonian Astrophysical Observatory and
the Academia Sinica Institute of Astronomy and Astrophysics and is funded by
the Smithsonian Institution and the Academia Sinica. (SMA), and James Clerk
Maxwell Telescope333 The James Clerk Maxwell Telescope is operated by The
Joint Astronomy Centre on behalf of the Science and Technology Facilities
Council of the United Kingdom, the Netherlands Organisation for Scientific
Research, and the National Research Council of Canada. (JCMT) at the Mauna Kea
Observatory. Our observations, detailed in §2, are centered on the white box
in Figure 1. The data, presented in §3, reveal a surprising diversity of
evolutionary states at the cluster center, including the discovery of a Class
0 and starless core neighboring the known infrared sources. We discuss the
implications of this work for understanding cluster formation in §4 and
conclude in §5.
## 2 Observations
We used the MIRSI camera on the IRTF (Kassis et al., 2008) to map the mid-
infrared emission on December 12th 2003. Images were taken in the M, N, and Q
filters centered at 4.9, 10.4, and 20.9 $\mu$m, respectively, using a
$5^{\prime\prime}\times 5^{\prime\prime}$ dither. The skies were clear, cold
and dry, and images were diffraction limited at $0.5^{\prime\prime}$,
$0.9^{\prime\prime}$, and $1.7^{\prime\prime}$, respectively. Sky subtraction was
obtained by chopping 30” North-South and nodding 40” East-West. This kept the
cluster on the array at all times and maximized the sensitivity in the
registered and co-added maps. Calibration was performed by observations of
$\alpha$ Canis Minor (Procyon) bracketing the AFGL961 observations. The final
maps were made using an IDL reduction pipeline written by support astronomer
E. Volquardsen. Three sources were detected (Figure 2) and all have
counterparts in the 2MASS catalog. We therefore used the latter to define the
astrometry.
The SMA observations were carried out under dry and stable skies on 13th
December 2005 in the compact configuration ($20-70$ m baselines) and in a
partial track on 4th February 2006 in the extended configuration ($70-240$ m
baselines). A mosaic of two Nyquist-spaced pointings was made to ensure
approximately uniform coverage over the MIRSI field. The receiver was tuned to
place the H2CO $3_{12}-2_{11}$ line at 225.7 GHz in the upper sideband and DCN
3–2 at 217.2 GHz in the lower sideband with a resolution of 0.81 MHz (1.1
${\rm km~{}s^{-1}}$) per channel. The time dependence of the gains was
measured via observations of 0530+135 interleaved with the source, the shape
of the passband was measured by deep observations of 3C454.3 and 3C111, and
observations of Uranus were used to set the flux scale. The visibilities were
calibrated with the MIR software package and maps were then produced using
standard MIRIAD routines. A continuum image centered at 1400 $\mu$m was
produced by combining the 2 GHz lower and upper sidebands from the compact and
extended configuration datasets and inverting with natural weighting. The
resulting resolution and noise are $3.1^{\prime\prime}\times 2.8^{\prime\prime}$
and 3 mJy beam-1. We also produced a
higher resolution map, $1.4^{\prime\prime}\times 0.9^{\prime\prime}$, at the
expense of increased noise, by
inverting with higher weights on the longer baselines. In this paper, we use
the former to measure the distribution and masses of the cores, and the latter
to measure core locations and determine their association (or lack thereof)
with mid-infrared sources. The lines were undetected on the long baselines and
maps were made using the compact configuration data only using natural
weighting. The resolution and noise in the H2CO map are
$3.7^{\prime\prime}\times 3.4^{\prime\prime}$ and 0.17 K. The corresponding numbers
for the DCN map are $3.8^{\prime\prime}\times 3.6^{\prime\prime}$ and 0.08 K.
As interferometers are unable to measure the flux at small spatial
frequencies, we used Receiver A on the JCMT to observe the same H2CO line as
the SMA observations. We mapped a $3^{\prime}\times 3^{\prime}$ region
centered on the peak of the SMA map on 19th March 2008. The map was made using
the on-the-fly mapping mode with a nearby reference position that was first
verified to be free of emission. Data reduction was carried out with the
STARLINK package. The JCMT map and SMA dirty map were weighted by their
respective beam areas, combined in the image plane, and cleaned with a
similarly combined beam following the method detailed in Stanimirovic (2002).
The resulting map retains the high resolution in the interferometer map and,
as the signal-to-noise ratio was very high in the JCMT data, the noise,
0.17 K per channel, is similar to that of the SMA data.
## 3 Results
### 3.1 Morphology
The MIRSI maps are shown in Figure 2. The three infrared sources first seen by
Lenzen et al. (1984) are detected in each filter. To avoid confusion,
particularly for the varied “western” nomenclature, we label them AFGL961A, B
and C in order of infrared luminosity. The positions and fluxes of each source
are listed in Table 1. The fluxes of each source sharply rise with wavelength
indicating that they are deeply embedded. The infrared spectral energy
distributions (SEDs) are discussed in more detail in §4; we focus here on the
morphological differences. AFGL961C is slightly elongated in the N-band image
and the emission at Q-band is very extended. Aspin (1998) found H2 bow shock
features on either side of the star and Li et al. (2008) present a detailed
study of these features. The position angle of the elongated structure in the
Q-band image lines up with these and the extended mid-infrared emission is
likely to be from hot dust filling a cavity that the star has blown out around
itself.
The differences between the sources are even more striking at 1400 $\mu$m.
Figure 3 shows the SMA continuum map in relation to the infrared sources. As
for the SCUBA map in Figure 1, the contour units are converted from flux per
beam to a mass surface density assuming $T=20$ K and a Hildebrand (1983) dust
opacity, $\kappa=0.018$ cm2 g-1. Three prominent sources, strung out along a
filament, are detected and labeled SMA1–3. Their positions and fluxes are
listed in Table 2. There is a negligible
$0.2^{\prime\prime}$ offset between the peak of the SMA1
core and AFGL961A. The $1.6^{\prime\prime}$ offset between
SMA2 and AFGL961B is significant, however. A closeup of the region with the
higher resolution continuum map is discussed in §3.2 and more clearly shows
the distinction between these two objects.
We therefore consider SMA1 to be the dusty envelope around the pre-main-
sequence B star AFGL961A and suggest that SMA2 is a distinct source in the
cluster, a dense core that lacks an infrared counterpart at the limits of
detection in the MIRSI map. SMA3 is the brightest source in the 1400 $\mu$m
map and also lacks an infrared counterpart. Both these sources are also
undetected in 2MASS images and the deeper JHK observations of Román-Zúñiga et
al. (2008).
The MIRSI data show that any embedded object in SMA2 or SMA3 is more than 500
times fainter than AFGL961A from 4.9-20.9 $\mu$m but this does not rule out a
solar mass protostar. For the case of the relatively isolated SMA3 core, we
can place far more stringent limits on the infrared luminosity from the
Spitzer observations of Poulton et al. (2008). We plot the SMA continuum map
in contours over the IRAC 3.6 $\mu$m image in the lower panel of Figure 3. No
infrared source is apparent at the position of SMA3 in this image or in the
other IRAC bands. Comparing with faint stars in the cluster and taking into
account the point spread function and extended nebulosity, we estimate the
limits on the flux of any embedded object in SMA3 to be 0.4, 0.8, 1.2, and 2
mJy at 3.6, 4.5, 5.8, and 8.0 $\mu$m respectively. Many of the low mass,
moderately embedded protostars in Perseus (Jørgensen et al., 2006) would have
been detected at this level. SMA2 is lost in the glare around AFGL961A,B of
the IRAC images so these limits do not apply in that case.
The Q-band data are critical for ruling out very deeply embedded protostars.
Based on the SED models of Robitaille et al. (2006), we estimate that the most
luminous object consistent with our MIRSI upper limits in SMA2 and SMA3 is a
$300\,L_{\odot}$ protostar behind $\gtrsim 100$ magnitudes of visual
extinction. Unfortunately the Spitzer MIPS 24 $\mu$m image is heavily
saturated and the SMA2 and SMA3 cores lie in the point spread function wings
of AFGL961A,B. de Wit et al. (2009) recently imaged the AFGL961A,B pair at
24.5 $\mu$m with the 8 m Subaru telescope. Their map also shows no source
within SMA2 but the field-of-view is too small to include SMA3. The resolution
of these data is higher than our MIRSI Q-band image but the sensitivity does
not appear to be significantly greater.
At the other extreme of the MIRSI-SMA comparison, the bright infrared source
AFGL961C is not detected in the 1400 $\mu$m map. This indicates a lack of
compact, cool dust around it and, despite the similarities of the mid-infrared
spectral slopes, places it at a more advanced evolutionary state than AFGL961A
which is fully embedded in a cold molecular core.
The millimeter fluxes can be converted directly to a mass assuming a
temperature and dust grain opacity. Masses are listed in Table 2, computed with
the same prescription as the envelope mass calculation in §1. The three cores
have similar fluxes and, hence, similar inferred masses. However, we might
expect SMA1, which contains a B star, to be hotter than SMA2 and 3 and its
mass to be proportionately lower than listed here.
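The conversion referred to here is the standard optically thin relation $M=S_{\nu}d^{2}/(\kappa_{\nu}B_{\nu}(T))$. A short sketch follows; the 1.6 kpc distance is an assumed representative value for the Rosette region (it is not quoted in this excerpt), and $\kappa=0.018$ cm2 g-1 is the opacity used for the surface-density contours earlier in this section, so the result is only an order-of-magnitude check against Table 2.

```python
import numpy as np

# Optically thin dust-mass estimate, M = S_nu * d^2 / (kappa_nu * B_nu(T)),
# with T = 20 K and kappa = 0.018 cm^2 g^-1 (per gram of gas) from the text.
# The 1.6 kpc distance is an assumed value for the Rosette region.
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16      # cgs constants
M_sun, pc = 1.989e33, 3.086e18

def planck(nu, T):
    return 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def dust_mass_msun(S_mjy, d_pc=1600.0, T=20.0, kappa=0.018, lam_um=1400.0):
    """Gas+dust mass (Msun) from an optically thin millimeter flux S (mJy)."""
    S_cgs = S_mjy * 1e-26                       # mJy -> erg s^-1 cm^-2 Hz^-1
    nu = c / (lam_um * 1e-4)                    # observing frequency in Hz
    return S_cgs * (d_pc * pc)**2 / (kappa * planck(nu, T)) / M_sun

# ~7 Msun for the 215 mJy SMA1 core (same order as the 6.3 Msun in Table 2,
# which quotes a slightly different opacity), and ~0.3 Msun for the 9 mJy
# 3-sigma limit toward AFGL961B discussed in Section 4.1.
print(dust_mass_msun(215.0), dust_mass_msun(9.0))
```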
The principal finding from our examination of the IRTF and SMA maps is the
detection of 5 distinct pre- or proto-stellar sources within the central
regions of the cluster. AFGL961A is embedded in a dense core, AFGL961B and C
lack a similarly sized cold dusty envelope, and there are two moderately
massive cores, SMA2 and 3, that lack bright infrared sources.
### 3.2 Dynamics
Our SMA data provide spectroscopic information at $\sim 1$ ${\rm km~{}s^{-1}}$
resolution and allow us to study the dynamics of the dusty cores detected in
the continuum. We chose a tuning that placed 3–2 rotational transitions of
H2CO in the upper sideband and DCN in the lower sideband. The former molecule
is abundant in protostellar envelopes and a good tracer of infall motions
(Mardones et al., 1997), the latter, as a deuterated species, is well suited
for pinpointing the cold, potentially pre-stellar gas (Stark et al., 1999) and
measuring the systemic motions of the core. Figure 4 shows the integrated
intensity of H2CO $3_{12}-2_{11}$ and DCN 3–2 overlaid on the continuum. Both
lines follow the same filamentary structure as the dust. The H2CO map shows a
very strong peak toward SMA2 and a weaker peak toward SMA3. The small offsets
between the line and dust peaks are likely due to the high opacity of this
line. The DCN emission is weaker but more closely follows the dust morphology.
There is a single peak toward SMA2 but enhanced emission toward SMA1 and 3 is
also evident.
The close correspondence between the DCN and continuum maps shows that we can
use the former to estimate the velocity dispersion, $\sigma$, of the SMA1–3
cores. The central velocities, linewidths, $\Delta v=2.355\sigma$, and virial
masses, $M_{\rm vir}=3R\sigma^{2}/G$, are listed in Table 2. The latter
assumes spherical cores with an inverse square density profile (Bertoldi &
McKee, 1992). At the resolution of these data, it is hard to determine with
much certainty where the cores end and the filament begins from the continuum
map in Figure 3 but, depending on the intensity threshold used to define their
limits, we estimate core radii $\sim 3-4^{\prime\prime}$ from their projected
area indicating that the cores are barely resolved with deconvolved sizes
$\lesssim 5000$ AU. We find that the
virial masses of SMA1 and 3 exceed the measured masses by about a factor of
two indicating an approximate balance between kinetic and potential energy in
these two cores. The large virial mass of SMA2, due to its high linewidth,
indicates that this core is unbound.
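As a concrete check of the virial numbers in Table 2, the sketch below evaluates $\sigma=\Delta v/2.355$ and $M_{\rm vir}=3R\sigma^{2}/G$ with the $R=5000$ AU assumed in the table footnote and the DCN linewidths listed there.

```python
import numpy as np

# Virial masses from the DCN linewidths, M_vir = 3 R sigma^2 / G with
# sigma = dv / 2.355 and R = 5000 AU, as assumed in Table 2.
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13     # cgs constants

def virial_mass_msun(dv_kms, R_AU=5000.0):
    sigma = dv_kms * 1e5 / 2.355                # FWHM -> 1-sigma, in cm/s
    return 3.0 * R_AU * AU * sigma**2 / G / M_sun

for name, dv in [("SMA1", 2.1), ("SMA2", 4.3), ("SMA3", 1.8)]:
    print(name, round(virial_mass_msun(dv), 1))   # ~13, ~56, ~9.8 Msun, as in Table 2
```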
The intense H2CO emission toward SMA2 is likely due to grain mantle sputtering
or evaporation and signposts an embedded protostar. Indeed, inspection of the
spectra and channel maps reveals an outflow centered on this core with line
wings discernible from $4-22$ ${\rm km~{}s^{-1}}$. Figure 5 zooms in on this
region and shows the most intense emission from the blue and redshifted sides
of the line. The two moment maps are offset from each other and on opposite
sides of the continuum emission. The discovery of this outflow explains the
high DCN linewidth, why the core is unbound, and confirms that SMA2 is a
physically distinct object from AFGL961B.
SMA3 also lacks a detectable infrared source but has a much smaller linewidth
and appears to be bound. The H2CO data does not show any evidence for an
outflow but the spectrum toward the core is asymmetric and shows a small dip
at the velocity of the DCN line (Figure 6). This is indicative of red-shifted
self-absorption and a signature of infall motion. Unfortunately the velocity
resolution of these data is poor and the self-absorption is seen in only one
spectral channel. We were consequently unable to successfully fit the spectra
using the radiative transfer models of De Vries & Myers (2005). As there is
some ambiguity in modeling the interferometric data alone without knowledge of
the larger scale emission, we observed the same line with the JCMT and made a
fully sampled H2CO datacube. The integrated intensity is shown as dotted
contours in Figure 4. The combined JCMT+SMA map shows the large-scale cluster
envelope emission, similar to the SCUBA map in Figure 1, and the individual
pre- and proto-stellar cores are not apparent. No self-absorption is seen in
the combined spectra across the cluster and toward SMA3 in particular. Hence
there is no evidence from these data for large scale collapse onto the
cluster.
## 4 Discussion
### 4.1 Evolutionary states
Our IRTF images from $5-20$ $\mu$m show the most deeply embedded, luminous
stars in the cluster. The sensitivity of these images is much lower than the
Spitzer data but the resolution is higher and the fidelity of the bright
sources better. We do not find any new sources that are not seen in existing
near-infrared images but we show that the three known members, AFGL961A, B,
and C, each have rising spectral indices in the mid-infrared. The SEDs are
plotted in Figure 7. We have fit the near- and mid-infrared data using the
precomputed models of Robitaille et al. (2006, 2007). As the model SEDs are
noisy at millimeter wavelengths we have not fit the SMA data point but show
the interpolation between model and 1400 $\mu$m flux. The model parameters
include a central source, disk, and envelope. In each case, most fits show the
sources are embedded under more than 20 magnitudes of extinction. The precise
amount is not well constrained (although background sources are ruled out).
Similarly, a wide range of disk and envelope parameters can fit the data and
their masses are not well determined. However, all the fits show that the
envelope dominates, typically by more than a factor of 100. The best
constrained parameters are the total luminosity and stellar mass of the
source. These are tabulated in Table 3 for the best fit and the mean and
standard deviation of the top 100 fits, weighted by the inverse of $\chi^{2}$.
The three sources are all massive stars but the morphological differences and
comparison with the millimeter map shows that they are in quite different
evolutionary states.
AFGL961A,B have been considered a binary system based on their close proximity
to each other (e.g. Aspin, 1998). However, their properties at 1400 $\mu$m are
quite different: source A is embedded in the dense $6\,M_{\odot}$ SMA1 core
but there is no millimeter emission centered on source B. Using the higher
resolution continuum map (Figure 5) we can place a $3\sigma$ limit on the 1400
$\mu$m flux toward source B of 9 mJy ($0.3\,M_{\odot}$) at $\sim 2000$ AU
scales. AFGL961A,B did not form from a common core, therefore, and are not a
binary pair but simply neighboring protostars, in different evolutionary
states, in a dense cluster.
AFGL961C is different in its own way. It is a point source in the M-band
image, noticeably elongated at N-band, and a large cavity of hot dust is seen
in the Q-band image. This star is clearing out its surrounding material, as
also shown by the shocked H2 emission image of Aspin (1998). Li et al. (2008)
postulated that the hourglass shape of the infrared nebula is due to polar
winds from a very young protostar punching a hole in its surrounding core. No
core emission is detected at 1400 $\mu$m, however, and the same
$0.3\,M_{\odot}$ limit at $\sim 2000$ AU scales applies as for source B.
The ratio of stellar to envelope mass, $M_{*}/M_{\rm env}$, is an effective
measure of protostellar evolutionary state (Andre et al., 1993). There is
certainly some uncertainty in determining each of these quantities but it is
clear that there is a large range in the ratio. $M_{*}/M_{\rm env}\approx 2$
for AFGL961A but is greater than 30 for AFGL961B and greater than 15 for
AFGL961C.
The outflow toward SMA2 indicates an embedded protostar and the non-detection
in the infrared limits the bolometric luminosity to less than $300\,L_{\odot}$.
Depending on its age, this constrains the stellar mass to no more than
$6\,M_{\odot}$ and probably much lower (Siess et al., 2000). Therefore
$M_{*}/M_{\rm env}\lesssim 1$ and
this is a Class 0 object. Given its youth and the bipolar H2CO structure we
see toward it, we further suggest that this source is the driving source of
the energetic CO outflow from the cluster (Lada & Gautier, 1982; Dent et al.,
2009) and not the more luminous infrared sources.
SMA3 is far enough offset from the bright cluster center for the Spitzer/IRAC
data to provide the most stringent limits on any embedded object and it
appears to be truly starless. As it is gravitationally bound and very dense
with a free-fall time, $t_{\rm ff}\sim 10^{4}$ yr, it is likely to be on the
brink of star formation. The interferometric H2CO spectrum shows evidence that
the core is collapsing but higher spectral resolution data are required to
confirm this, and to allow modeling and a determination of the infall speed.
The difference between the peak and self-absorption dip is an approximate
measure and suggests $v_{\rm in}\sim 1$ ${\rm km~{}s^{-1}}$. Adding in the
short spacing information from the JCMT swamps the self-absorption showing
that any inward motions are on small scales. That is, the core is collapsing
on itself rather than growing through the accretion of envelope material. With
a mass of $6\,M_{\odot}$, SMA3 is more massive than starless cores in isolated
star forming regions such as Taurus (Shirley et al., 2000). Presumably it will
form a correspondingly more massive star but probably not comparable to
AFGL961A.
A mix of early evolutionary states in clusters is not uncommon. The first core
identified as a Class 0 object, VLA1623 in $\rho$ Ophiuchus, sits on the edge
of a small group of pre-stellar cores (Andre et al., 1993). Williams & Myers
(1999) found a collapsing starless core adjacent to a Class I protostar in the
Serpens cluster and Swift & Welch (2008) found a range of young stellar
objects, from Class 0 to III, in L1551. These are all low mass objects in low
mass star forming regions. Radio observations of the W3 and W75 N massive star
forming regions show a range of HII region morphologies and spectral indices
indicating different evolutionary states (Tieftrunk et al., 1997; Shepherd et
al., 2004, respectively). In particular, Shepherd et al. (2004) were able to
estimate an age spread of at least 1 Myr between a cluster of five early B
stars based on Strömgren sphere expansion. The detection of cold, dense pre-
stellar cores and young lower mass protostars requires observations at
millimeter wavelengths. Interferometer maps by Hunter et al. (2006) and Rodón
et al. (2008) show tight groups of dusty cores in NGC6334 I and W3 IRS5
respectively. Core separations are even smaller and masses higher than we have
found here in the lower luminosity AFGL961 cluster. The cores have a range of
infrared properties, some power outflows, and they likely span a range of
evolutionary states. There does not appear, however, to be a clear counterpart
to the pre-stellar core, AFGL961-SMA3, with its combination of low limit to
the luminosity of any embedded source and lack of outflow.
### 4.2 Dynamic or equilibrium cluster formation?
Theoretical models of cluster formation divide into two camps: a global
collapse of a massive molecular clump or piecemeal growth from the formation
of individual protostars in a more local process. The former occurs on short,
dynamical time-scales (Bate et al., 2003) but the latter is more gradual and
the cluster forming clump is in quasi-equilibrium (Tan et al., 2006).
Based on the SCUBA image in Figure 1, the cluster envelope has a mass
$M\approx 90\,M_{\odot}$ within a radius $R\approx 0.25$ pc. The free-fall
timescale for this region, based on the inferred average density, $\rho\approx
10^{-19}$ g cm-3, is $t_{\rm ff}=[3\pi/(32G\rho)]^{1/2}\approx 0.2$ Myr. Note
that material on larger scales, for example in the surrounding CO clump
(Williams et al., 1995), would have a longer collapse timescale.
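The free-fall estimate follows directly from the quoted mass and radius; a small sketch, assuming a uniform sphere for the mean density, is:

```python
import numpy as np

# Free-fall time, t_ff = (3*pi / (32*G*rho))**0.5, with rho from
# M ~ 90 Msun inside R ~ 0.25 pc (uniform-sphere assumption).
G, M_sun, pc = 6.674e-8, 1.989e33, 3.086e18     # cgs constants

def t_ff_Myr(M_msun, R_pc):
    rho = M_msun * M_sun / (4.0 / 3.0 * np.pi * (R_pc * pc)**3)
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / 3.156e13   # seconds -> Myr

print(t_ff_Myr(90.0, 0.25))         # ~0.2 Myr, as quoted for the cluster envelope
# The same relation applied to the ~6 Msun SMA3 core on few x 10^3 AU scales
# (an assumed radius) gives t_ff of order 10^4 yr, the value used in Section 4.1.
print(t_ff_Myr(6.0, 2500.0 * 1.496e13 / pc) * 1e6)   # ~10^4 yr
```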
Our observations of a filamentary structure at high surface densities,
$\Sigma\approx 0.1-1$ g cm-2, in a roughly spherical envelope at
$\Sigma\approx 0.025-0.25$ g cm-2 are quite similar to the numerical
simulations by Bate et al. (2003) and Bonnell et al. (2003). In particular,
the initial conditions of Bate et al. (2003) with a $M=50\,M_{\odot},R=0.19$
pc clump are closest to the properties of AFGL961. In these simulations, the
clump collapses on the free-fall timescale, fragments, and produces stars in
two short bursts of about $0.02$ Myr duration spread by about 0.2 Myr. The
addition of radiative feedback (Bate, 2009) does not change these numbers
significantly.
Kurosawa et al. (2004) calculated the radiative transfer for the Bate et al.
(2003) simulation and showed that the infrared classification of young stellar
objects varied from Class 0 to III. Under this scenario, a protostar’s
evolutionary state is dependent more on its dynamical history than its age and
we would interpret the lack of circumstellar material around AFGL961B,C as due
to their ejection from the dense star forming filament.
Evans et al. (2009) show that circumstellar material around isolated low mass
stars in nearby star forming regions is lost in about 0.5 Myr. If the same
timescale applies to the more luminous sources in the crowded environs of
AFGL961, then sources B and C are older than the cluster free-fall time
suggesting a quasi-equilibrium collapse. In this scenario, evolution
correlates with age and we would infer that the massive star AFGL961A formed
after B and C, and that collapse continued even around this compact HII region
to form dense cores SMA 2 and 3.
These observations alone cannot distinguish between dynamic or equilibrium
models of cluster formation. However, similar high resolution mid-infrared
through millimeter observations of other young clusters of varied luminosity
and protostellar density will add to the classification statistics. In this
way, we might hope to decipher the relative effect of environment and time on
protostellar evolution.
## 5 Conclusions
We have carried out a high resolution mid-infrared and millimeter study of the
central region of the young stellar cluster AFGL961. Our observations show the
most deeply embedded bright protostars in the cluster and the filamentary
distribution of the highest column density gas. We find five sources within
0.2 pc of each other, each with distinct properties unlike the others. The
brightest infrared source is an early B star that powers a compact HII region
and lies within a dense molecular core. The core mass is a substantial
fraction of the stellar mass indicative of a Class I protostar. The two other
infrared sources are not detected at millimeter wavelengths and the ratio of
circumstellar to stellar mass is very low, suggesting that these sources are
most similar to Class II objects. Further, one has cleared out a cavity in the
circumcluster envelope. The SMA data also reveal two millimeter cores with no
infrared counterparts. One is a strong source of line emission, drives an
outflow, and has the characteristics of a deeply embedded Class 0 protostar.
The other core is massive, starless, and appears to be collapsing.
The dense mixture of diverse protostellar evolutionary states suggests either
that circumstellar matter is removed rapidly through dynamical interactions
with other cluster members or that clusters build up gradually over several
free-fall timescales. Regardless of its history, however, the discovery of a
massive, collapsing core shows that AFGL961 continues to grow even after the
birth of a massive star.
J.P.W. acknowledges the NSF for support through grants AST-0324328 and
AST-0607710 and BHO for his support of the NSF. We thank Matthew Bate and Ian
Bonnell for comments. This publication makes use of data products from the Two
Micron All Sky Survey, which is a joint project of the University of
Massachusetts and the Infrared Processing and Analysis Center/California
Institute of Technology, funded by the National Aeronautics and Space
Administration and the National Science Foundation, and the facilities of the
Canadian Astronomy Data Centre operated by the National Research Council of
Canada with the support of the Canadian Space Agency.
## References
* Andre et al. (1993) Andre, P., Ward-Thompson, D., & Barsony, M. 1993, ApJ, 406, 122
* Aspin (1998) Aspin, C. 1998, A&A, 335, 1040
* Bally & Predmore (1983) Bally, J., & Predmore, R. 1983, ApJ, 265, 778
* Bate et al. (2003) Bate, M. R., Bonnell, I. A., & Bromm, V. 2003, MNRAS, 339, 577
* Bate (2009) Bate, M. R. 2009, MNRAS, 392, 1363
* Bertoldi & McKee (1992) Bertoldi, F., & McKee, C. F. 1992, ApJ, 395, 140
* Blitz & Thaddeus (1980) Blitz, L., & Thaddeus, P. 1980, ApJ, 241, 676
* Bonnell et al. (2003) Bonnell, I. A., Bate, M. R., & Vine, S. G. 2003, MNRAS, 343, 413
* Castelaz et al. (1985) Castelaz, M. W., Grasdalen, G. L., Hackwell, J. A., Capps, R. W., & Thompson, D. 1985, AJ, 90, 1113
* Cohen (1973) Cohen, M. 1973, ApJ, 185, L75
* Cox et al. (1990) Cox, P., Deharveng, L., & Leene, A. 1990, A&A, 230, 181
* Dent et al. (2009) Dent, W. R. F., et al. 2009, arXiv:0902.4138
* De Vries & Myers (2005) De Vries, C. H., & Myers, P. C. 2005, ApJ, 620, 800
* de Wit et al. (2009) de Wit, W. J., et al. 2009, A&A, 494, 157
* Evans et al. (2009) Evans, N. J., et al. 2009, ApJS, 181, 321
* Hildebrand (1983) Hildebrand, R. H. 1983, QJRAS, 24, 267
* Hensberge et al. (2000) Hensberge, H., Pavlovski, K., & Verschueren, W. 2000, A&A, 358, 553
* Hunter et al. (2006) Hunter, T. R., Brogan, C. L., Megeath, S. T., Menten, K. M., Beuther, H., & Thorwirth, S. 2006, ApJ, 649, 888
* Jørgensen et al. (2006) Jørgensen, J. K., et al. 2006, ApJ, 645, 1246
* Kassis et al. (2008) Kassis, M., Adams, J. D., Hora, J. L., Deutsch, L. K., & Tollestrup, E. V. 2008, PASP, 120, 1271
* Kurosawa et al. (2004) Kurosawa, R., Harries, T. J., Bate, M. R., & Symington, N. H. 2004, MNRAS, 351, 1134
* Lada & Gautier (1982) Lada, C. J., & Gautier, T. N., III 1982, ApJ, 261, 161
* Lenzen et al. (1984) Lenzen, R., Hodapp, K.-W., & Reddmann, T. 1984, A&A, 137, 365
* Li et al. (2008) Li, J. Z., Smith, M. D., Gredel, R., Davis, C. J., & Rector, T. A. 2008, ApJ, 679, L101
* Mardones et al. (1997) Mardones, D., Myers, P. C., Tafalla, M., Wilner, D. J., Bachiller, R., & Garay, G. 1997, ApJ, 489, 719
* Ogura & Ishida (1981) Ogura, K., & Ishida, K. 1981, PASJ, 33, 149
* Perez et al. (1987) Perez, M. R., The, P. S., & Westerlund, B. E. 1987, PASP, 99, 1050
* Poulton et al. (2008) Poulton, C. J., Robitaille, T. P., Greaves, J. S., Bonnell, I. A., Williams, J. P., & Heyer, M. H. 2008, MNRAS, 384, 1249
* Robitaille et al. (2006) Robitaille, T. P., Whitney, B. A., Indebetouw, R., Wood, K., & Denzmore, P. 2006, ApJS, 167, 256
* Robitaille et al. (2007) Robitaille, T. P., Whitney, B. A., Indebetouw, R., & Wood, K. 2007, ApJS, 169, 328
* Rodón et al. (2008) Rodón, J. A., Beuther, H., Megeath, S. T., & van der Tak, F. F. S. 2008, A&A, 490, 213
* Román-Zúñiga et al. (2008) Román-Zúñiga, C. G., Elston, R., Ferreira, B., & Lada, E. A. 2008, ApJ, 672, 861
* Shepherd et al. (2004) Shepherd, D. S., Kurtz, S. E., & Testi, L. 2004, ApJ, 601, 952
* Shirley et al. (2000) Shirley, Y. L., Evans, N. J., II, Rawlings, J. M. C., & Gregersen, E. M. 2000, ApJS, 131, 249
* Siess et al. (2000) Siess, L., Dufour, E., & Forestini, M. 2000, A&A, 358, 593
* Stark et al. (1999) Stark, R., van der Tak, F. F. S., & van Dishoeck, E. F. 1999, ApJ, 521, L67
* Stanimirovic (2002) Stanimirovic, S. 2002, Single-Dish Radio Astronomy: Techniques and Applications, 278, 375
* Swift & Welch (2008) Swift, J. J., & Welch, W. J. 2008, ApJS, 174, 202
* Tan et al. (2006) Tan, J. C., Krumholz, M. R., & McKee, C. F. 2006, ApJ, 641, L121
* Tieftrunk et al. (1997) Tieftrunk, A. R., Gaume, R. A., Claussen, M. J., Wilson, T. L., & Johnston, K. J. 1997, A&A, 318, 931
* Williams et al. (1995) Williams, J. P., Blitz, L., & Stark, A. A. 1995, ApJ, 451, 252
* Williams & Myers (1999) Williams, J. P., & Myers, P. C. 1999, ApJ, 518, L37
* Zinnecker & Yorke (2007) Zinnecker, H., & Yorke, H. W. 2007, ARA&A, 45, 481
Table 1: Infrared sources. Offsets are relative to $\alpha_{2000}=06^{\rm h}34^{\rm m}37.74^{\rm s}$, $\delta_{2000}=04^{\circ}\,12^{\prime}\,44.2^{\prime\prime}$.
Source | $\Delta\alpha$ (′′) | $\Delta\delta$ (′′) | $J$ (Jy) | $H$ (Jy) | $K$ (Jy) | $M$ (Jy) | $N$ (Jy) | $Q$ (Jy)
---|---|---|---|---|---|---|---|---
AFGL961A | 0.0 | 0.0 | 0.006 | 0.023 | 0.009 | 18.7 | 37.2 | 159
AFGL961B | $-4.9$ | $-1.7$ | 0.026 | 0.147 | 0.029 | 3.98 | 12.3 | 74.0
AFGL961C | $-30.8$ | 1.3 | 0.539 | 0.598 | 0.085 | 0.55 | 2.05 | 17.0
Table 2: Millimeter cores. $M_{\rm core}$ assumes $T=20$ K and $\kappa_{\nu}=0.19$ cm2 g-1; $M_{\rm vir}$ assumes $R=5000$ AU.
Source | $\Delta\alpha$ (′′) | $\Delta\delta$ (′′) | $S_{\rm 1400}$ (mJy) | $M_{\rm core}$ ($M_{\odot}$) | $v_{\rm lsr}$ (${\rm km~{}s^{-1}}$) | $\Delta v$ (${\rm km~{}s^{-1}}$) | $M_{\rm vir}$ ($M_{\odot}$)
---|---|---|---|---|---|---|---
SMA1 | 0.2 | 0.0 | 215 | 6.3 | 15.0 | 2.1 | 13
SMA2 | $-5.0$ | $-0.1$ | 184 | 5.4 | 14.2 | 4.3 | 56
SMA3 | $-16.8$ | 4.0 | 210 | 6.2 | 12.6 | 1.8 | 9.8
Table 3: SED fits.
Source | $M_{\ast}$ Best ($M_{\odot}$) | $M_{\ast}$ Mean ($M_{\odot}$) | $L$ Best ($L_{\odot}$) | $L$ Mean ($L_{\odot}$)
---|---|---|---|---
AFGL961A | 11.0 | $11.3\pm 1.8$ | 4600 | $6000\pm 2500$
AFGL961B | 9.1 | $9.1\pm 1.3$ | 1200 | $2500\pm 1300$
AFGL961C | 5.3 | $6.4\pm 1.1$ | 370 | $480\pm 200$
Figure 1: Large scale view of the AFGL961 cluster. The background is the
K-band image from Román-Zúñiga et al. (2008) on a log scale showing the
embedded stars and associated nebulosity. The contours of the SCUBA 850 $\mu$m
emission from the cold, dusty cluster envelope show surface densities at
$\Sigma=0.025\times(1,2,3...)$ g cm-2. The rectangle outlines the region shown
in the MIRSI images in Figure 2.
Figure 2: MIRSI maps of the center of AFGL961 at M, N, and Q bands
($4.9,10.4$, and $20.9\,\hbox{$\mu$m}$ respectively). For each band, the scale
is logarithmic with a dynamic range of 300 and the contour levels are at 1, 3,
9, 27, 81% of the peak intensity. The axes are arcsecond offsets from
AFGL961A.
Figure 3: The 1400 $\mu$m continuum emission from the SMA data showing the
cool dust condensations in the cluster. The top panel overlays the MIRSI
M-band image in logarithmic contours on the 1400 $\mu$m continuum map. The
scale ranges linearly from 5 to 75 mJy beam-1, corresponding to surface
densities $\Sigma=0.05$ to 0.75 g cm-2. The three prominent millimeter peaks
are labeled SMA1–3 and their properties listed in Table 2. The bottom panel
overlays the 1400 $\mu$m continuum map in contours on the Spitzer 3.6 $\mu$m
map in log scale. The contours are at surface densities
$\Sigma=0.1\times(1,2,3...)$ g cm-2. The stars show the locations of the MIRSI
sources AFGL961A,B,C. The Spitzer image is saturated at the positions of these
sources but provides stringent limits on the luminosity of any embedded source
in SMA3. The $3.1^{\prime\prime}\times 2.8^{\prime\prime}$ SMA beam is shown in the lower left
corner.
Figure 4: Integrated intensity maps of H2CO $3_{12}-2_{11}$ (top panel) and
DCN $3-2$ (bottom panel). The background image in each case is the 1400 $\mu$m
continuum map. The velocity range of integration was 8 to 20 ${\rm
km~{}s^{-1}}$. For the H2CO map, the SMA data are shown in solid contours
beginning at and in increments of 0.4 K ${\rm km~{}s^{-1}}$. Dashed contours
show negative levels. The combined JCMT+SMA map is shown in dotted contours at
4 K ${\rm km~{}s^{-1}}$. The DCN contours are at 0.2 K ${\rm km~{}s^{-1}}$.
The stars show the locations of AFGL961A, B, and C. The resolution of the
integrated intensity maps is shown in the lower left corner of each panel.
Figure 5: Red and blueshifted H2CO emission revealing a molecular outflow from
an (undetected) embedded protostar in SMA2. The background image is the high
resolution ($1.4^{\prime\prime}\times 0.9^{\prime\prime}$) 1400 $\mu$m continuum map and ranges
from 15 to 30 mJy beam-1. The solid or red contours show the intensity
integrated over $13-25$ ${\rm km~{}s^{-1}}$ and the dashed or blue contours
show the intensity integrated over $0-13$ ${\rm km~{}s^{-1}}$. Contour levels
are 30, 40, 50,… K ${\rm km~{}s^{-1}}$, and show only the most intense
emission above the cloud background. The stars show the location of AFGL961A,
B and the crosses locate the peak of the millimeter cores SMA1, 2.
Figure 6: Spectra toward SMA3 showing infall at small scales, possibly due to
core collapse. The thick solid line is the SMA H2CO spectrum and shows a
slight dip at the same velocity as the peak of the DCN spectrum shown as a
thin solid line. The dotted line is the combined JCMT+SMA H2CO spectrum which
is more symmetric and does not show any self-absorption features.
Figure 7: Spectral energy distributions and best fit model for the three
infrared sources AFGL961A,B,C. The dotted line linearly interpolates between
the model at 70 $\mu$m and SMA data point at 1400 $\mu$m.
|
arxiv-papers
| 2009-05-11T20:12:12 |
2024-09-04T02:49:02.533772
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jonathan P. Williams, Rita K. Mann, Christopher N. Beaumont, Jonathan\n J. Swift, Joseph D. Adams, Joe Hora, Marc Kassis, Elizabeth A. Lada, Carlos\n G. Roman-Zuniga",
"submitter": "Jonathan Williams",
"url": "https://arxiv.org/abs/0905.1693"
}
|
0905.1765
|
# Screening in gated bilayer graphene
L.A. Falkovsky L.D. Landau Institute for Theoretical Physics, Moscow 117334,
Russia Institute of the High Pressure Physics, Troitsk 142190, Russia
###### Abstract
The tight-binding model of a graphene bilayer is used to find the gap between
the conduction and valence bands as a function of both the gate voltage and
the doping by donors or acceptors. The total Hartree energy is minimized and
an equation for the gap is obtained. This equation shows that the ratio of the
gap to the chemical potential is determined only by the screening constant.
Thus the gap is strictly proportional to the gate voltage or the carrier
concentration in the absence of donors or acceptors. In the opposite case,
where donors or acceptors are present, the gap behaves asymmetrically on the
electron and hole sides of the gate bias.
###### pacs:
73.20.At, 73.21.Ac, 73.43.-f, 81.05.Uw
Bilayer graphene has attracted much interest partly due to the opening of a
tunable gap in its electronic spectrum by an external electrostatic field.
Such a phenomenon was predicted by McCann and Fal’ko McF and can be observed
in optical studies controlled by applying a gate bias OBS ; ZBF ; KHM ; LHJ ;
ECNM ; NC . In Refs. Mc ; MAF , within the self-consistent Hartree
approximation, the gap was derived as a near-linear function of the carrier
concentration injected in the bilayer by the gate bias. Recently, this problem
was numerically considered GLS using the density functional theory (DFT) and
including the external charge doping involved with impurities. The DFT
calculation gives the gap which is roughly half of the gap obtained in the
Hartree approximation. This disagreement was explained in Ref. GLS as a
result of both the inter- and intralayer correlations.
In this Brief Report, we study this problem within the same Hartree
approximation as in Refs. Mc ; MAF , but including the external doping. We
consider the case where the carrier concentration in the bilayer is less than
$10^{13}$ cm-2, calculating the carrier concentration on both layers. Then, we
minimize the total energy of the system and find self-consistently both the
chemical potential and the gap induced by the gate bias. Our results
completely differ from those obtained in Refs. Mc ; MAF even for the range
where the external doping is negligible. The dependence of the gap on the
carrier concentration, i. e., on the gate voltage, exhibits an asymmetry at
the electron and hole sides of the gate bias.
The graphene bilayer lattice is shown in Fig. 1. Atoms in one layer, i. e.,
$a$ and $b$ in the unit cell, are connected by solid lines, and in the other
layer, e. g., $a_{1}$ and $b_{1}$, by the dashed lines. The atom $a$ ($a_{1}$)
differs from $b$ ($b_{1}$) because it has a neighbor just below in the
adjacent layer, whereas the atom $b$ ($b_{1}$) does not.
Let us recall the main results of the Slonczewski–Weiss–McClure model SW ;
McCl . In the tight-binding model, the Bloch functions of the bilayer are
written in the form
$\displaystyle\psi_{a}=\frac{1}{\sqrt{N}}\sum_{j}e^{i{\bf
ka}_{j}}\psi_{0}({\bf a}_{j}-{\bf r})$
$\displaystyle\psi_{b}=\frac{1}{\sqrt{N}}\sum_{j}e^{i{\bf
ka}_{j}}\psi_{0}({\bf a}_{j}+{\bf a}-{\bf r})$ (1)
$\displaystyle\psi_{a1}=\frac{1}{\sqrt{N}}\sum_{j}e^{i{\bf
ka}_{j}}\psi_{0}({\bf a}_{j}+{\bf c}-{\bf r})$
$\displaystyle\psi_{b1}=\frac{1}{\sqrt{N}}\sum_{j}e^{i{\bf
ka}_{j}}\psi_{0}({\bf a}_{j}+{\bf c}+{\bf a}-{\bf r}),$
where the sums are taken over the lattice vectors ${\bf a}_{j}$ and $N$ is the
number of unit cells. Vectors ${\bf a}$ and ${\bf c}$ connect the nearest
atoms in the layer and in the neighbor layers, correspondingly.
Figure 1: Bilayer lattice
For the nearest neighbors, the effective Hamiltonian in the space of the
functions (1) contains the hopping integrals
$\gamma_{0},\gamma_{1},\gamma_{3},\gamma_{4},$ and $\Delta$ PP . The largest
of them, $\gamma_{0}$, determines the band dispersion near the $K$ point in
the Brillouin zone with a velocity parameter $v$. The parameters $\gamma_{3}$
and $\gamma_{4}$ giving a correction to the dispersion are less than
$\gamma_{0}$ by a factor of 10 (see Refs. KHM ; LHJ ). The parameters
$\gamma_{1}$ and $\Delta$ result in the displacements of the levels at $K$,
but $\Delta$ is much less than $\gamma_{1}$. Besides, there is the parameter
$U$ induced by the gate and describing the asymmetry of two layers in the
external electrostatic field. This parameter simply represents the potential
energy difference between the two layers, $2U=-edE$, where $d$ is the interlayer distance and
$E$ is the electric field induced both by the gate voltage and the external
charge dopants in the bilayer. In the simplest case, the effective Hamiltonian
can be written as
$H(\mathbf{k})=\left(\begin{array}{cccc}U&vk_{+}&\gamma_{1}&0\\ vk_{-}&U&0&0\\ \gamma_{1}&0&-U&vk_{-}\\ 0&0&vk_{+}&-U\end{array}\right),$
(2)
where the matrix elements are expanded in the momentum $k_{\pm}=\mp
ik_{x}-k_{y}$ near the $K$ points.
Figure 2: Band structure of bilayer
The Hamiltonian gives four energy bands:
$\displaystyle\varepsilon_{1,4}(q)=\pm\left(\frac{\gamma_{1}^{2}}{2}+U^{2}+q^{2}+W\right)^{1/2}\,,$
(3)
$\displaystyle\varepsilon_{2,3}(q)=\pm\left(\frac{\gamma_{1}^{2}}{2}+U^{2}+q^{2}-W\right)^{1/2}\,,$
where
$W=\left(\frac{\gamma_{1}^{4}}{4}+(\gamma_{1}^{2}+4U^{2})q^{2}\right)^{1/2}$
and we denote $q^{2}=(vk)^{2}$.
The band structure is shown in Fig. 2. The minimal value of the upper energy
$\varepsilon_{1}$ is $\sqrt{U^{2}+\gamma_{1}^{2}}$. The $\varepsilon_{2}$ band
takes the maximal value $|U|$ at $k=0$ and the minimal value
$\tilde{U}=\gamma_{1}|U|/\sqrt{\gamma_{1}^{2}+4U^{2}}$ at
$q^{2}=2U^{2}(\gamma_{1}^{2}+2U^{2})/(\gamma_{1}^{2}+4U^{2}).$ Because the
value of $U$ is much less than $\gamma_{1}$, the distinction between $U$ and
$\tilde{U}$ is small and the gap between the bands $\varepsilon_{2}$ and
$\varepsilon_{3}$ takes the value $2|U|$.
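A quick numerical check of these band extrema is straightforward; in the sketch below $\gamma_{1}=0.4$ eV is the value quoted later in the text, while $U=0.05$ eV is an illustrative assumed value of the asymmetry parameter.

```python
import numpy as np

# Numerical check of the band extrema of Eq. (3).  gamma1 = 0.4 eV as quoted
# in the text; U = 0.05 eV is an illustrative assumed value.
gamma1, U = 0.4, 0.05

def bands(q):
    """epsilon_1 and epsilon_2 of Eq. (3) (positive branches), q = v*k in eV."""
    W = np.sqrt(gamma1**4 / 4 + (gamma1**2 + 4 * U**2) * q**2)
    common = gamma1**2 / 2 + U**2 + q**2
    return np.sqrt(common + W), np.sqrt(common - W)

q = np.linspace(0.0, 0.3, 300001)
e1, e2 = bands(q)

# Minimum of the upper band is sqrt(U^2 + gamma1^2); the lower conduction band
# reaches |U| at q = 0 and its "mexican hat" minimum U_tilde slightly below |U|.
print(e1.min(), np.sqrt(U**2 + gamma1**2))                        # ~0.4031 eV
print(e2[0])                                                      # |U| = 0.05 eV
print(e2.min(), gamma1 * abs(U) / np.sqrt(gamma1**2 + 4 * U**2))  # ~0.0485 eV
```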
Figure 3: Electrostatic model; $d$ is the interlayer distance, $d_{w}$ is the
wafer thickness.
The eigenfunctions ${\mathbf{C}}$ of the Hamiltonian (2) have the form
${\mathbf{C}}=\frac{1}{C}\left(\begin{array}{c}(U-\varepsilon_{n})[(\varepsilon_{n}+U)^{2}-q^{2}]\\ -q_{-}[(\varepsilon_{n}+U)^{2}-q^{2}]\\ \gamma_{1}(U^{2}-\varepsilon_{n}^{2})\\ \gamma_{1}q_{+}(U-\varepsilon_{n})\end{array}\right),$ (4)
where the ${\mathbf{C}}$ norm squared is
$\displaystyle
C^{2}=[(\varepsilon_{n}+U)^{2}-q^{2}]^{2}[(\varepsilon_{n}-U)^{2}+q^{2}]$
$\displaystyle+\gamma_{1}^{2}(\varepsilon_{n}-U)^{2}[(\varepsilon_{n}+U)^{2}+q^{2}]\,.$
The probability to find an electron, for instance, on the first layer is
$|C_{1}|^{2}+|C_{2}|^{2}$, as seen from Eq. (4).
We assume that carriers occupy only the bands $\varepsilon_{2,3}$, so the
chemical potential $\mu$ and the gap $2|U|$ are less than the distance between
the bands $\varepsilon_{1}$ and $\varepsilon_{2}$, i. e.,
$(|\mu|,2|U|)<\gamma_{1}.$ The electron dispersion for the $\varepsilon_{2,3}$
bands can be expanded in powers of $q^{2}$:
$\varepsilon_{2,3}^{2}(q)=U^{2}-4\frac{U^{2}}{\gamma_{1}^{2}}q^{2}+\frac{q^{4}}{\gamma_{1}^{2}}.$
Then, for $q^{2}\gg 4U^{2}$, we can use the simple relations:
$\displaystyle\varepsilon_{2,3}(q)=\pm\sqrt{U^{2}+q^{4}/\gamma_{1}^{2}}\,,$
$\displaystyle|C_{1}|^{2}+|C_{2}|^{2}=q^{4}/[q^{4}+\gamma_{1}^{2}(\varepsilon_{2,3}-U)^{2}]$
(5) $\displaystyle=(\varepsilon_{2,3}+U)/2\varepsilon_{2,3}\,.$
Within this approximation, many observable effects can be evaluated
analytically for intermediate carrier concentrations,
$4U^{2}\ll\gamma_{1}\sqrt{\mu^{2}-U^{2}}\ll\gamma_{1}^{2}$, where we neglect
the effect of the “mexican hat”.
At zero temperature, for the total carrier concentration $n$ and the carrier
concentrations $n_{1,2}$ on the layers, we obtain
$\displaystyle
n=\frac{\gamma_{1}}{\pi\hbar^{2}v^{2}}\sqrt{\mu^{2}-U^{2}}=\frac{n_{0}U}{\gamma_{1}}\sqrt{x^{2}-1}\,,$
(6) $\displaystyle
n_{1,2}=\frac{\gamma_{1}}{2\pi\hbar^{2}}\int_{U}^{\mu}\sqrt{\frac{\varepsilon+U}{\varepsilon-U}}d\varepsilon$
(7)
$\displaystyle=\frac{n_{0}U}{2\gamma_{1}}[\sqrt{x^{2}-1}\pm\ln{(x+\sqrt{x^{2}-1})}]\,,$
where $n_{0}=\gamma_{1}^{2}/\pi\hbar^{2}v^{2}=1.2\times 10^{13}$ cm-2 and
$x=\mu/U$.
In order to find the chemical potential $\mu$ and the gap $2|U|$ at the given
gate voltage, we minimize the total energy containing both the energy $V$ of
the carriers and the energy $V_{f}$ of the electrostatic field. Instead of the
chemical potential, it is convenient to use the variable $x$ along with $U$.
Electrons in the $\varepsilon_{2}$ band or holes in the $\varepsilon_{3}$ band
contribute to the total energy of the system the energy
$\displaystyle V=\frac{2}{\pi\hbar^{2}v^{2}}\int\varepsilon_{2}(q)qdq=$
$\displaystyle\frac{n_{0}U^{2}}{2\gamma_{1}}[x\sqrt{x^{2}-1}+\ln{(x+\sqrt{x^{2}-1})}]\,.$
(8)
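Equation (8) can be checked by integrating the simplified dispersion of Eq. (5) numerically and comparing with the closed form; a short sketch, with illustrative assumed values of $U$ and $x=\mu/U$, is:

```python
import numpy as np
from scipy.integrate import quad

# Sanity check of Eq. (8): integrate the simplified band of Eq. (5) up to the
# Fermi level and compare with the closed form.  gamma1 = 0.4 eV as in the
# text; U = 0.02 eV and x = mu/U = 5 are illustrative assumed values.
gamma1, U, x = 0.4, 0.02, 5.0
mu = x * U

eps2 = lambda q: np.sqrt(U**2 + q**4 / gamma1**2)       # Eq. (5)
q_max = np.sqrt(gamma1 * np.sqrt(mu**2 - U**2))         # eps2(q_max) = mu

# Both sides below equal V * (pi * hbar^2 * v^2), in eV^3, so the common
# prefactor of Eq. (8) drops out of the comparison.
lhs = 2.0 * quad(lambda q: eps2(q) * q, 0.0, q_max)[0]
f = np.log(x + np.sqrt(x**2 - 1.0))
rhs = 0.5 * gamma1 * U**2 * (x * np.sqrt(x**2 - 1.0) + f)
print(lhs, rhs)      # both ~2.14e-3 eV^3: the closed form of Eq. (8) is reproduced
```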
The energy of the electrostatic field,
$V_{f}=\frac{1}{8\pi}(dE^{2}+\epsilon_{w}d_{w}E^{2}_{w})$ (9)
can be written in terms of the carrier concentrations with the help of
relations (see Fig. 3):
$4\pi e(n_{1}-n_{d}/2)=E\,\text{ and}\,4\pi e(n-n_{d})=\epsilon_{w}E_{w}\,,$
(10)
where $\epsilon_{w}$ is the dielectric constant of the wafer, the negative
(positive) $n_{d}$ is the acceptor (donor) concentration and we suppose that
the donors/acceptors are equally divided between two layers.
We seek the minimum of the total energy as a function of two variables, $U$
and $x$, under the gate bias constraint
$eV_{g}=-e^{2}dE-e^{2}d_{w}E_{w}\,.$ (11)
Excluding the Lagrange multiplier and assuming the interlayer distance to be
much less than the thickness $d_{w}$ of the dielectric wafer, we obtain the
following equation:
$4\pi
e^{2}d\left(n_{2}-\frac{n_{d}}{2}\right)\left(\frac{n_{1x}}{n_{x}}-\frac{n_{1u}}{n_{u}}\right)=\frac{V_{x}}{n_{x}}-\frac{V_{u}}{n_{u}}\,.$
(12)
Let us emphasize that this equation is invariant only with respect to a
simultaneous sign change of $n_{1,2}$ and $n_{d}$, which expresses the charge
invariance of the problem. At the fixed sign of the external doping $n_{d}$,
the gap on the electron and hole sides of the gate bias is not symmetrical.
The derivatives in Eq. (12) are calculated with the help of Eqs. (6)–(10). As
a result, Eq. (12) becomes
$\displaystyle\frac{\gamma_{1}n_{d}}{Un_{0}}=\sqrt{x^{2}-1}\pm\left\\{f(x)+\frac{xf(x)}{\Lambda[xf(x)-\sqrt{x^{2}-1}]}\right\\}$
(13)
with the function $f(x)=\ln{(x+\sqrt{x^{2}-1})}$ and the dimensionless
screening constant
$\Lambda=\frac{e^{2}\gamma_{1}d}{(\hbar v)^{2}}\,.$
For the parameters of graphene $d=3.35\,\AA\,,\gamma_{1}=0.4$ eV, and
$v=10^{8}$ cm/s, we get $\Lambda=0.447$.
First, consider an ideal undoped bilayer with $n_{d}=0$, or more precisely
$|\gamma_{1}n_{d}/Un_{0}|\ll 1$. We obtain a solution, $x_{0}=6.2784$, for only
one sign in Eq. (13); this sign determines the polarity of the layers [see Eq. (7)].
This value gives $2|U/\mu|=2/x_{0}=0.3186$ for the ratio of the gap to the
chemical potential. According to Eq. (6), the gap as a function of the carrier
concentration takes a very simple form:
$2|U/n|=\frac{2\gamma_{1}}{n_{0}\sqrt{x_{0}^{2}-1}}=1.08\times
10^{-11}\text{meV}\cdot\text{cm}^{2}\,,$ (14)
where the right-hand side does not depend on the gate bias at all, but only on
the screening constant $\Lambda$.
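A short numerical sketch reproduces the numbers quoted for the undoped case: the screening constant built from the parameters listed above, the root $x_{0}$ of Eq. (13) with $n_{d}=0$ (taking the sign that admits a solution), and the gap-to-concentration ratio of Eq. (14).

```python
import numpy as np
from scipy.optimize import brentq

# Undoped bilayer (n_d = 0): solve Eq. (13) for x0 and reproduce Eq. (14).
hbar_v = 6.582e-16 * 1e8                       # hbar*v = 6.582e-8 eV cm
gamma1, d, e2 = 0.4, 3.35e-8, 14.40e-8         # eV, cm, and e^2 in eV cm
Lam = e2 * gamma1 * d / hbar_v**2              # ~0.45, cf. 0.447 quoted in the text
n0 = gamma1**2 / (np.pi * hbar_v**2)           # ~1.2e13 cm^-2, as in Eq. (6)

f = lambda xx: np.log(xx + np.sqrt(xx**2 - 1.0))
def eq13_nd0(xx):
    s = np.sqrt(xx**2 - 1.0)
    return f(xx) + xx * f(xx) / (Lam * (xx * f(xx) - s)) - s

x0 = brentq(eq13_nd0, 1.5, 20.0)
print(Lam, n0, x0, 2.0 / x0)                   # x0 ~ 6.3, gap/mu ~ 0.32
# Gap per unit carrier concentration, Eq. (14), converted to meV cm^2:
print(2.0 * gamma1 / (n0 * np.sqrt(x0**2 - 1.0)) * 1e3)   # ~1.1e-11 meV cm^2
```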
Figure 4: The gap in units of $\gamma_{1}=0.4$ eV versus the carrier
concentration for the hole doping with concentration $n_{d}=-5\times 10^{12}$
cm-2; the positive (negative) values of $n$ correspond to the electron (hole)
conductivity.
We can compare Eq. (14) with the corresponding result of Ref. Mc :
$2|U/n|=\frac{e^{2}d}{2\epsilon_{0}}\left[1+2\Lambda\frac{|n|}{n_{0}}+\Lambda\ln\frac{n_{0}}{|n|}\right]^{-1}\,.$
(15)
Both equations are in numerical agreement at $|n|=0.1n_{0}\simeq 10^{12}$
cm-2. However, contrary to Eq. (14), Eq. (15) contains the carrier
concentration on the right-hand side, giving rise to a more rapid increase of
the gap for $|n|\ll n_{0}$. This increase also contradicts the DFT
calculations GLS .
For a bilayer with acceptors or donors, $n_{d}\neq 0$, Eq. (13) gives the
solution $w=Un_{0}/\gamma_{1}n_{d}$ as a function of $x$. Evidently, we obtain
large values of $w$ for $x$ close to $x_{0}=6.28$. In this
region of the relatively large $|U|$, we find again with the help of Eqs. (6)
and (13) the linear dependence
$\displaystyle
2|U|=2\gamma_{1}\frac{|n-n_{d}|}{n_{0}x_{0}}=1.08\,|n-n_{d}|\times
10^{-11}\text{ meV}\cdot\text{cm}^{2}.$
For the small gap, $|Un_{0}/\gamma_{1}n_{d}|<1$, we obtain different results
for the electron and hole types of conductivity. For instance, if the bilayer
contains acceptors (see Fig. 4) with concentration $n_{d}$, the gap decreases
linearly with the hole concentration and vanishes when the gate bias is not
applied and the hole concentration equals $n_{d}$. Starting from this point,
the gap increases and, thereafter, becomes again small (equal to zero in Fig.
4) at the carrier concentration corresponding to the minimal value of the dc
conductivity. Therefore, the difference observed in Ref. MLS between these
two values of carrier concentrations, at the zero bias and at the minimal
conductivity, gives directly the donor/acceptor concentration in the bilayer.
Then, when the gate bias is applied to increase the electron concentration,
the gap opens rapidly as soon as electrons appear.
We see that an asymmetry arises between the electron and hole sides of the
gate bias. This asymmetry can mimic the effect of the hopping integral
$\Delta$ in the electron spectrum CNM . In order to obtain the gap dependence
for the case of electron doping, $n_{d}>0$, the reflection transformation
$n\rightarrow-n$ has to be made in Fig. 4.
The gap in the vicinity of the minimal conductivity point in fact remains
finite for several reasons. One of them is the form of the “mexican
hat” shown in Fig. 2. Second, the trigonal warping is substantial at low
carrier concentrations. Finally, the graphene electron spectrum is unstable
with respect to the Coulomb interaction at the low momentum values. For the
graphene monolayer, as shown in Ref. Mi , logarithmic corrections appear at
small momentum. In the case of the bilayer, the electron self-energy
contains linear corrections, as can be found using perturbation
theory. Similar linear terms, resulting in a nematic order, were also
obtained in the framework of the renormalization group VY .
In conclusion, the gap $2U$ opening in the gated graphene bilayer has an
intriguing behavior as a function of carrier concentration. In the presence of
the external doping charge, i. e. donors or acceptors, this function is
asymmetric on the hole and electron sides of the gate bias and it is linear
only for the large gate bias.
This work was supported by the Russian Foundation for Basic Research (grant
No. 07-02-00571). The author is grateful to the Max Planck Institute for the
Physics of Complex Systems for hospitality in Dresden.
## References
* (1) E. McCann, V.I. Fal’ko, Phys. Rev. Lett. 96, 086805 (2006).
* (2) T. Ohta, A. Bostwick, T Seyller, K. Horn, and E. Rotenberg, Science 313, 951 (2006).
* (3) L.M. Zhang, Z.Q. Li, D.N. Basov, M.M. Foger, Z. Hao, and M.C. Martin, Phys. Rev. B 78, 235408 (2008).
* (4) A.B. Kuzmenko, E. van Heumen, D. van der Marel, P. Lerch, P. Blake, K.S. Novoselov, A.K. Geim, Phys. Rev. B 79, 115441 (2009).
* (5) Z.Q. Li, E.A. Henriksen, Z. Jiang, Z. Hao, M.C. Martin, P. Kim, H.L. Stormer, and D.N. Basov, Phys. Rev. Lett. 102, 037403 (2009).
* (6) E.V. Castro, K.S. Novoselov. S.V. Morozov, N.M.R. Peres, J.M.B. Lopes dos Santos, Johan Nilsson, F. Guinea, A.K. Geim, and A.H. Castro Neto, Phys. Rev. Lett. 99, 216802 (2007).
* (7) E.J. Nicol, J.P. Carbotte, Phys. Rev. B 77, 155409 (2008).
* (8) E. McCann, Phys. Rev. B 74, 161403(R) (2006).
* (9) E. McCann, D.S.L. Abergel, V.I. Fal’ko, Sol. St. Comm. 143, 110 (2007).
* (10) P. Gava, M. Lazzeri, A.M. Saitta, and F. Mauri, arXiv:0902.4615 (2009).
* (11) J.C. Slonczewski and P.R. Weiss, Phys. Rev. 109, 272 (1958).
* (12) J.W. McClure, Phys. Rev. 108, 612 (1957).
* (13) B. Partoens, F.M. Peeters, Phys. Rev. B 74, 075404 (2006).
* (14) E.V. Castro, K.S. Novoselov. S.V. Morozov, N.M.R. Peres, J.M.B. Lopes dos Santos, Johan Nilsson, F. Guinea, A.K. Geim, and A.H. Castro Neto, arXiv:0807.3348 (2008).
* (15) K.F. Mak, C.H. Lui, J. Shan, and T.F. Heinz, (2009).
* (16) E.G. Mishchenko, Phys. Rev. Lett. 98, 216801 (2007).
* (17) O. Vafek and K. Yang, arXiv:0906.2483 (2009).
|
arxiv-papers
| 2009-05-12T06:35:16 |
2024-09-04T02:49:02.540172
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "L.A. Falkovsky",
"submitter": "L. A. Falkovsky",
"url": "https://arxiv.org/abs/0905.1765"
}
|
0905.1816
|
# Acceleration of Relativistic Protons during the 20 January 2005 Flare and
CME
S. Masson1, K.-L. Klein1, R. Bütikofer2, E. Flückiger2, V. Kurt3, B. Yushkov3, S. Krucker4
1 Observatoire de Paris, LESIA - CNRS UMR 8109, Universités P&M Curie et Denis
Diderot, Paris, F-92195 Meudon Cédex, France email: sophie.masson@obspm.fr,
ludwig.klein@obspm.fr
2 University of Bern, Space Research & Planetary Sciences, CH-3012 Bern,
Switzerland email: rolf.buetikofer@space.unibe.ch
email: flueckiger@space.unibe.ch
3 Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University
Moscow 119991, Russia email: vgk@srd.sinp.msu.ru
4 Space Sciences Laboratory, University of California, Berkeley, CA
94720-7450, USA email : krucker@ssl.berkeley.edu
###### Abstract
The origin of relativistic solar protons during large flare/CME events has not
been uniquely identified so far. We perform a detailed comparative analysis of
the time profiles of relativistic protons detected by the worldwide network of
neutron monitors at Earth with electromagnetic signatures of particle
acceleration in the solar corona during the large particle event of 20 January
2005. The intensity-time profile of the relativistic protons derived from the
neutron monitor data indicates two successive peaks. We show that microwave,
hard X-ray and $\gamma$-ray emissions display several episodes of particle
acceleration within the impulsive flare phase. The first relativistic protons
detected at Earth are accelerated together with relativistic electrons and
with protons that produce pion decay $\gamma$-rays during the second episode.
The second peak in the relativistic proton profile at Earth is accompanied by
new signatures of particle acceleration in the corona within $\approx
1~{}R_{\odot}$ above the photosphere, revealed by hard X-ray and microwave
emissions of low intensity, and by the renewed radio emission of electron
beams and of a coronal shock wave. We discuss the observations in terms of
different scenarios of particle acceleration in the corona.
###### keywords:
Coronal mass ejections ${\cdot}$ Cosmic rays, solar ${\cdot}$ Energetic
particles, acceleration ${\cdot}$ Energetic particles, propagation ${\cdot}$
Flares
## 1 Introduction
Energy conversion in the solar corona is often accompanied by particle
acceleration, which on occasion extends to relativistic energies. When
relativistic ions or nucleons impinge on the Earth’s atmosphere, they produce
air showers of, among others, neutrons, protons and muons. The nucleons can be
detected by ground based neutron monitors, provided the primary particle at
the top of the atmosphere has an energy above some threshold, typically 450
MeV. Transient flux enhancements of relativistic solar particles are called
ground level enhancements (GLEs). They are the relativistic extension of solar
energetic particle (SEP) events. An overview of neutron monitor studies of
GLEs is given by Lop-06.
It is still an open question how the Sun accelerates particles, and more
specifically, how it can accelerate particles to the relativistic energies
which are observed during the GLE. With flares and coronal shock waves, which
both accompany large SEP events [Gopalswamy et al. (2004)], solar activity
provides candidate environments for particle acceleration. But observations
have so far not been able to show unambiguously which of them is the key
element for particle acceleration to relativistic energies.
The links of particles detected near 1 AU with their solar origin are blurred
by their propagation in interplanetary space, through scattering by the
turbulent magnetic field and reflection at large-scale magnetic structures
[Meyer, Parker, and Simpson (1956), Dröge (2000), Bieber et al. (2002), Sáiz
et al. (2008)]. Thus, comparing signatures of accelerated solar particles at
the Sun with the measurements of the relativistic particles at the Earth is
often difficult, unless particularly favourable conditions of interplanetary
propagation are met. This is why detailed timing analyses of individual events
observed under such favourable conditions are valuable means to search for
common signatures of particle acceleration in GLE time profiles and in
electromagnetic signatures at the Sun, especially at $\gamma$-ray, hard X-ray
and radio wavelengths. Such studies have been carried out in the past, and
suggested that the neutron monitor time profiles give information on coronal
acceleration processes [Akimov et al. (1996), Debrunner et al. (1997), Klein
et al. (1999), Klein et al. (2001), Miroshnichenko et al. (2005)]. However,
the timing uncertainty due to interplanetary propagation makes a clear
association of a GLE feature with a specific phase of the flare/CME event in
the corona difficult.
The GLE on 20 January 2005 displays a conspicuous and rapid increase of the
relativistic particle flux above the cosmic ray background detected by neutron
monitors. The prompt increase and the high anisotropy suggest that the time
profiles suffered little distortion by interplanetary scattering. In this
paper we report on a detailed timing analysis of relativistic protons at 1 AU
and interacting particles. Section 2.1 describes how the time profiles and
distribution functions of protons detected at the Earth are derived from a set
of neutron monitor measurements. The timing of accelerated particles in the
chromosphere and the low corona is inferred from their microwave, hard X-ray
and $\gamma$-ray emissions (Section 2.2). Metric-to-kilometric radio emission
from electron beams is used to trace the injection and propagation of
accelerated particles in the corona and interplanetary space (Section 2.3).
The combination of these analyses allows us to infer the interplanetary path
length travelled by the protons and their initial solar release time (Section
3.1), and the relationship of escaping protons with coronal acceleration
processes throughout the event (Section 3.2). Finally, we present in Section 4
a consistent scenario of coronal acceleration in different episodes with
different radiative signatures of energetic particles and different conditions
of access to interplanetary space. We discuss our findings with respect to
other work on this event, and to particle acceleration scenarios of GLEs.
## 2 Observations
The relativistic proton event that started on 20 January 2005 at 06:50 UT was
part of a large SEP event detected by a fleet of spacecraft and by the
worldwide network of neutron monitors. Exceptionally energetic particles were
detected by muon telescopes on the Earth [Ryan and the Milagro Collaboration
(2005), D’Andrea and Poirier (2005)], which are sensitive to energies above
several GeV. The event accompanied intense activity at the Sun comprising a
strong flare in soft X-rays (GOES class X7.1, 06:37-07:26 UT, peak at 07:01
UT) and H$\alpha$ (2B, N12∘ W58∘, active region NOAA 10720), as reported in
Solar Geophysical Data Comprehensive Reports for January 2005 (NOAA Solar-Terrestrial
Physics Division, http://sgd.ngdc.noaa.gov/sgd/jsp/solarfront.jsp),
along with intense hard X-ray and $\gamma$-ray emissions from high-energy
particles [Kuznetsov et al. (2008), Saldanha, Krucker, and Lin (2008), Krucker
et al. (2008)]. SOHO observed a broad and fast CME and large-scale
disturbances in EUV. Detailed presentations of this activity can be found,
e.g., in Sim-06 and Grc:al-08. The 20 January 2005 event was one of a series
of large flares and CMEs involving the same active region between 13 January
2005 and its passage across the west limb. These events left the
interplanetary medium in a highly perturbed state [Plainaki et al. (2007),
McCracken, Moraal, and Stoker (2008)]. Neutron monitors detected a Forbush
decrease starting on 17 January [Flückiger et al. (2005)].
Figure 1.: Time histories of observed and derived parameters of the
relativistic particle event on 20 January 2005: a) count rate time histories
of several neutron monitors, the count rates are normalised by the individual
maximum; b) spectral index $\gamma(t)$ of the differential directional proton
intensity spectrum (cf. Equation (2)); c) amplitude $A(t)$ (differential
directional proton intensity at 1 GV rigidity) of the protons. Insert : pitch
angle distributions; d) differential directional proton intensity at $5$ GV
rigidity. Note that the time scale in a) is different from b) to d).
### 2.1 Relativistic Protons at 1 AU
Neutron monitor time profiles of the GLE are shown in panel a) of Figure 1 and
in several other papers [Bieber et al. (2005), Simnett (2006), Flückiger et
al. (2005), Plainaki et al. (2007), Bombardieri et al. (2008), McCracken,
Moraal, and Stoker (2008)]. Here we are interested in the profile of the
primary particles outside the Earth’s magnetosphere. The primary particles
producing the GLE from the Sun are protons or occasionally also neutrons, but
neutrons can be excluded in the present GLE. As seen in the top panel of
Figure 1, the neutron monitors located in Antarctica (Sanae, McMurdo, South
Pole and Terre Adélie) start first and rise faster than the others. At these
locations the zenith angle of the Sun is usually too large to detect solar
neutrons by ground-based cosmic ray detectors such as neutron monitors or solar
neutron telescopes, and there was virtually no difference of time profiles at
high-altitude (e.g. South Pole) and sea-level (e.g. Terre Adélie) stations.
Zhu:al-05 come to the conclusion that also the solar neutron telescope at
Yangbajing (Tibet), at 4300 m above sea level, which observed the Sun at
zenith angle $52^{\circ}$, did not detect any signature of solar neutrons.
The terrestrial magnetic field determines the viewing direction and the low-
rigidity cutoff of a neutron monitor with respect to primary protons. A set of
instruments distributed on Earth can therefore be used to determine the
rigidity spectrum in a range between roughly 1 and 15 GV, and the angular
distribution of the primary relativistic particles arriving at the top of the
Earth’s magnetosphere. The excess count rate of a neutron monitor is given by
$\displaystyle\Delta N(t)=\int_{P_{\rm c}}^{\infty}S(P)\cdot J_{\|}(P,t)\cdot
F(\delta(P),t)\cdot dP$ (1)
where $P$ is the particle’s magnetic rigidity, related to the total energy $E$
by $eP=\sqrt{E^{2}-m^{2}c^{4}}$. $e$ is the elementary charge, $P_{\rm c}$ is
the low-rigidity cutoff imposed by the terrestrial magnetic field, $S(P)$ is
the rigidity response of the neutron monitor, called yield function.
$F(\delta(P),t)$ describes the angular distribution of the particle flux,
where $\delta(P)$ is the angle between the direction of maximal solar particle
flux and the asymptotic viewing direction of the neutron monitor at rigidity
$P$. It describes the pitch angle distribution if the axis of symmetry is
along the local magnetic field direction outside the geomagnetosphere. Based
on the analysis of Bue:al-06, we assume in the following that this is indeed
the case. $J_{\|}(P,t)$ is the rigidity dependent differential directional
particle intensity (hereafter this differential directional proton intensity
will be called “proton intensity”),
$\displaystyle
J_{\|}(P,t)=A(t)\left(\frac{P}{1~{}\rm{GV}}\right)^{-\gamma(t)}.$ (2)
For the present study we employed the data of 40 neutron monitors of the
worldwide network. The MAGNETOCOSMICS code developed by L. Desorgher
(http://reat.space.qinetiq.com/septimess/magcos/) has been used to
simulate the propagation of energetic protons through the Earth’s magnetic
field and to determine for each neutron monitor the cutoff rigidity and the
asymptotic directions from which the primary energetic particles impinged upon
the magnetosphere. We computed for each neutron monitor the response to an
anisotropic flux of charged particles at the top of the magnetosphere. Through
a trial-and-error procedure that minimizes the difference between the modeled
and the observed count rates for each monitor, we determined the amplitude
$A(t)$, the spectral index $\gamma(t)$, and the anisotropy of incident
particles at the boundary of the magnetosphere. We use in this method a
variable time interval. At the beginning of a GLE the count rates and the
spectrum change quickly. Later during the event the changes in the different
parameters in time are less pronounced. In addition, the count rates of some
neutron monitors showed a short pre-increase. Thus, we have chosen a time
interval of $2$ min during the initial and the main phase of the GLE, and $10$
min and $15$ min during the recovery phase. More details can be found in
Bue:al-06.
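A heavily simplified, illustrative sketch of this inversion is given below. The yield function and pitch-angle distribution are invented placeholders, and the angle to the flux axis is held fixed per station (in the real analysis both the yield functions and the rigidity-dependent asymptotic directions are station specific and come from MAGNETOCOSMICS), so the sketch only illustrates how $A$ and $\gamma$ are recovered from a set of count-rate excesses via Eqs. (1) and (2).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares

# Toy version of the GLE inversion of Eqs. (1)-(2).  S(P) and F(delta) below
# are invented stand-ins for the real, station-dependent response.
S = lambda P: P**2 / (1.0 + P**2)                 # toy yield function
F = lambda delta: np.exp(-(delta / 60.0)**2)      # toy pitch-angle distribution (deg)

def excess(A, gamma, P_c, delta):
    """Eq. (1): excess count rate of one monitor (arbitrary units); delta is
    treated as rigidity independent here, a simplification."""
    integrand = lambda P: S(P) * A * P**(-gamma) * F(delta)
    return quad(integrand, P_c, 20.0)[0]          # 20 GV upper cutoff (assumption)

# Synthetic "network": cutoff rigidities (GV) and angles to the flux axis (deg)
stations = [(0.5, 10.0), (1.0, 30.0), (2.0, 50.0), (4.0, 70.0), (8.0, 90.0)]
A_true, g_true = 3.3, 8.0
data = np.array([excess(A_true, g_true, pc, de) for pc, de in stations])

def residuals(p):
    A, g = p
    model = np.array([excess(A, g, pc, de) for pc, de in stations])
    return np.log(model) - np.log(data)           # log residuals balance the stations

fit = least_squares(residuals, x0=[1.0, 5.0])
print(fit.x)        # recovers (A, gamma) ~ (3.3, 8.0) for this noiseless toy case
```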
Panel b) of Figure 1 displays the spectral index. The initial phase, where
$\gamma=3$, is plotted with a dashed line: a hard spectrum of the solar cosmic
ray flux near Earth is expected in the first phase of the GLE simply because
the high-energy particles arrive at Earth earlier than the lower-energy
protons.
Panels c) and d) display the time profiles of the proton intensities
calculated by Equation (2) with $P=1$ GV (i.e. the amplitude $A(t)$) and $P=5$
GV. At the first peak of the impulsive proton profile, the intensity at 1 GV
obtained from our analysis is $A=3.3\times
10^{8}~{}\rm{protons~{}(m}^{2}\rm{~{}sr~{}s~{}GV)}^{-1}$, and the spectral
index $\gamma=8.0$. Using a similar method of analysis, Bombardieri et al. (2008) found
$A=2.0\times 10^{8}~{}\rm{protons~{}(m}^{2}\rm{~{}sr~{}s~{}GV)}^{-1}$ and
$\gamma=9.2$. Pln:al-07 used a different description of the pitch angle
distribution, but the index (7.6) and amplitude of the rigidity spectrum, when
corrected for the different integration times, are similar to our values.
Several different approaches providing flatter spectra were reported in the
literature. Bieber et al. (2005) and McCracken, Moraal, and Stoker (2008) compared the signals observed with a
traditional neutron monitor and with a monitor without lead shield, which is
sensitive to lower energies than the shielded one. Their independent studies
found spectral indices between 4 and 5 during the first peak of the GLE as
observed, respectively, at the South Pole and Sanae stations. Ryan and the
Milagro Collaboration (2005) derived $\gamma=6.2$ from the Durham and Mount
Washington monitors, which have different yield functions because they are
located at different altitudes. Since the Antarctic neutron monitors give
information on the lower rigidities, and the North American monitors on medium
rigidities, we suggest that the different indices reflect a spectrum that is
not a power law in rigidity, but gradually curves down to some high-rigidity
cutoff, as has been shown by Heristchi, Trottet, and Perez-Peraza (1976),
Lovell, Duldig, and Humble (1998), and Vashenyuk et al. (2005).
Hereafter we will use the proton intensity at rigidity $5$ GV, which
corresponds to a kinetic energy of $4.15$ GeV, for the comparison with the
electromagnetic emissions. Indeed, although the response function of neutron
monitors depends on atmospheric depth, for most stations its maximum is close
to $5$ GV [Clem and Dorman (2000), their Figure 3]. The time profiles of the
relativistic protons display two well-identified peaks, which is a feature
found in several other GLEs from the western solar hemisphere [Shea and Smart
(1996), McCracken, Moraal, and Stoker (2008)]. The short rise to maximum
during the first peak requires an acceleration of particles to relativistic
energies or their release within a few minutes at most.
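The rigidity-energy-velocity correspondence used here follows from $eP=pc=\sqrt{E^{2}-m^{2}c^{4}}$; a quick numerical check (a sketch using only the proton rest energy) reproduces the values quoted in this paper.

```python
import numpy as np

M_P = 0.938272   # proton rest energy [GeV]

def proton_kinematics(P_GV):
    """Kinetic energy [GeV] and speed (units of c) of a proton with rigidity P [GV]."""
    pc = P_GV                        # for a proton (charge e), eP equals pc
    E_tot = np.hypot(pc, M_P)        # total energy sqrt((pc)^2 + (m_p c^2)^2)
    return E_tot - M_P, pc / E_tot

print(proton_kinematics(5.0))   # ~(4.15 GeV, 0.98c): the 5 GV channel used below
print(proton_kinematics(1.0))   # ~(0.43 GeV, 0.73c): the 1 GV channel
```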
The inset in panel c) shows the pitch angle distribution for three
time intervals between 06:49 UT and 07:30 UT. Between 06:49 UT and 07:01 UT,
the first peak of the proton intensity time profile displays a narrow angular
distribution. These protons suffer little scattering during interplanetary
propagation. The maximum of the pitch angle distribution continues to be
field-aligned during the second peak, but an increasingly broad range of pitch
angles contributes. This suggests that particles came from the Sun throughout
the first and the second peak of the proton time profile, but that
interplanetary scattering [Sáiz et al. (2005), McCracken, Moraal, and Stoker
(2008)] or a reflecting barrier outside 1 AU [Bieber et al. (2002), Sáiz et
al. (2008)] affected the second peak.
The interplay between variations of the amplitude and the spectral index makes
the second peak appear much longer at $5$ GV than at lower rigidities. We
regard this difference, which results from minor changes in the amplitude and
spectral index, as questionable. In any case, the well-defined temporal
structure of the proton intensity enables a more detailed comparison with
coronal events than for other GLEs.
Figure 2: Time profiles of the normalised flux density at $35$ GHz (top) and
normalised count rates of hard X-rays and $\gamma$-rays (RHESSI, CORONAS/SONG)
at different energies. Each count rate is normalised to its individual maximum,
and an offset is added to separate the curves from each other. The
$35$ GHz emission (Nobeyama Radio Polarimeter, courtesy K. Shibasaki) is
synchrotron radiation; the emissions at ($50-100$) and ($550-800$) keV are
bremsstrahlung. The high-energy $\gamma$-rays are pion-decay photons from
primary protons at energies above $300$ MeV. Different episodes of particle
acceleration are distinguished by vertical stripes numbered 1 to 4.
### 2.2 High-Energy Particles in the Low Corona and Chromosphere:
$\gamma$-Ray, Hard X-Ray and Microwave Signatures
High-energy electrons and protons in the low corona and chromosphere are
revealed by their hard X-ray and $\gamma$-ray bremsstrahlung, gyrosynchrotron
microwave emission, and different types of nuclear $\gamma$ radiation. Figure
2 displays the time profiles observed by the Reuven Ramaty High Energy
Spectroscopic Imager (RHESSI) [Lin et al. (2002)] in the photon energy ranges
$50-100$ keV (green line) and $550-800$ keV (blue line), emitted by electrons
with energies of order $100$ keV and $1$ MeV, respectively. The red curve
shows CORONAS-F/SONG measurements [Kuznetsov et al. (2008)] of $\gamma$-rays
from $62-310$ MeV.
At higher altitudes in the solar atmosphere, the relativistic electrons
produced synchrotron emission in coronal loops. The Nobeyama Radio Polarimeter
[Nakajima et al. (1985)] measures whole Sun integrated flux densities at
selected frequencies in the band $1$ to $80$ GHz. In Figure 2, we plot only
the time profile at $35$ GHz (black curve), since the emission is self
absorbed at the lower frequencies. A synchrotron spectrum is cut off at the
characteristic frequency $\nu=3/2\gamma^{2}\nu_{\rm{ce}}$, where
$\nu_{\rm{ce}}$ is the electron cyclotron frequency and $\gamma$ is the
Lorentz factor. For a magnetic field of $500$ G, the $35$ GHz emission is hence
emitted by electrons with a kinetic energy of about $1.5$ MeV.
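This electron energy can be checked by inverting $\nu=\frac{3}{2}\gamma^{2}\nu_{\rm{ce}}$; the short sketch below uses standard constants only and is not part of the original analysis.

```python
import numpy as np

E_CHARGE, M_E, C = 1.602e-19, 9.109e-31, 2.998e8   # SI units

def electron_energy_MeV(nu_obs_Hz, B_gauss):
    """Kinetic energy of electrons emitting near the synchrotron cutoff nu_obs."""
    nu_ce = E_CHARGE * (B_gauss * 1e-4) / (2 * np.pi * M_E)   # electron cyclotron frequency
    gamma = np.sqrt(2 * nu_obs_Hz / (3 * nu_ce))              # from nu = (3/2) gamma^2 nu_ce
    return (gamma - 1) * M_E * C**2 / 1.602e-13               # (gamma - 1) m_e c^2 in MeV

print(electron_energy_MeV(35e9, 500))   # ~1.5 MeV for B = 500 G, as quoted above
```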
These four time profiles have common structures that reveal distinct episodes
of particle acceleration during the flare. We defined four main episodes of
particle acceleration corresponding to time intervals with a distinct peak in
one or several of these spectral ranges. These episodes are highlighted in
Figure 2 by different tones of grey shading, and are labelled from 1 to 4.
Other distinct rises of emission produced by energetic electrons occur later,
e.g. near 06:53 UT.
The hard X-ray and microwave emissions start to rise before 06:44 UT (between
06:42 UT and 06:43 UT at $35$ GHz) and display several peaks. The time
profiles of hard X-rays present one peak in each of the acceleration phases 1
and 2. The rise to the second peak hides the decrease from the first and vice
versa. Both peaks are also seen in the $35$ GHz time profile. The initial rise
of the hard X-rays is faster, and the first peak is more pronounced, in the
$50-100$ keV range than in the $550-800$ keV range. Hence relatively more
high-energy electrons are accelerated during the second acceleration episode
than during the first. This reflects the continued spectral hardening
throughout most of the event reported by Saldanha, Krucker, and Lin (2008).
Figure 3: High-energy photon emission observed by CORONAS/SONG: The top
panel displays the model photon spectra with an electron bremsstrahlung and a
pion decay component [Equation (3)]. Solid lines represent total spectra,
dashed lines the electron bremsstrahlung components for power laws with
indices 2.2 and 2.4 and an exponential rollover (curves numbered 1 and 4; cf.
Equation (4); $\epsilon_{0}=30$ MeV) or a double power law (curves 2 and 3;
cf. Equation (6); $\epsilon_{\rm{cut}}=30$ MeV). The middle panel shows the
observed count rate spectrum and fits with the same colour coding as the
model. The bottom panel shows the count rate (smoothed over 30 s) and the
photon flux at photon energies above 90 MeV.
The appearance of increasingly high particle energies during episode 2 is
clearly shown by the $\gamma$-rays above 60 MeV (Figure 2, bottom). The count
rate starts to rise between 06:43 and 06:45 UT, consistent with an early start
of acceleration phase 2, which is hidden in the electron radiation profiles by
episode 1. We use the start of the high-energy $\gamma$-ray emission, at
06:45:30 UT, as the earliest time.
The flux of $\gamma$-rays above 60 MeV can be due to bremsstrahlung emission
of energetic electrons or to the decay of neutral pions produced by protons
above $300$ MeV. To separate the two components, we fitted the response of the
SONG detector to a combined bremsstrahlung and pion decay spectrum
$F(\epsilon)=w_{\rm b}\,\epsilon^{-\gamma}\,C(\epsilon)+w_{\pi}\,\Phi(\epsilon)$
(3)
($\epsilon$ is the photon energy). The bremsstrahlung spectrum is the
combination of a power law with a function describing the steepening at high
photon energies. The steepening is either represented by a spectral rollover
$C(\epsilon)=\exp(-\epsilon/\epsilon_{0})$ (4)
or by a function that describes a gradual transition between power laws with
index $\gamma$ at low energies and $\beta\geq\gamma$ at high energies:
$C(\epsilon)=1\;{\rm if}\;\epsilon\leq\epsilon_{\rm{cut}}$ (5)
$C(\epsilon)=[\frac{1+\epsilon/\epsilon_{0}}{1+\epsilon_{\rm{cut}}/\epsilon_{0}}]^{(\gamma-\beta)}\;{\rm
if}\;\epsilon>\epsilon_{\rm{cut}}.$ (6)
Nuclear lines were neglected in the fitting procedure, because the count rate
in even the strongest one (at $2.2$ MeV) does not exceed $15$% of the
underlying continuum.
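For illustration, the model of Equations (3)-(6) can be sketched as follows; the pion-decay template $\Phi(\epsilon)$ below is a purely illustrative placeholder (the actual shape was computed by R. Murphy), and all parameter values are examples rather than the fitted ones.

```python
import numpy as np

def rollover(eps, eps0=30.0):
    """Exponential rollover of the bremsstrahlung spectrum, Eq. (4)."""
    return np.exp(-eps / eps0)

def double_power_law(eps, eps_cut=30.0, eps0=30.0, gamma=2.3, beta=3.5):
    """Gradual transition between power laws with indices gamma and beta, Eqs. (5)-(6)."""
    c = np.ones_like(eps, dtype=float)
    hi = eps > eps_cut
    c[hi] = ((1 + eps[hi] / eps0) / (1 + eps_cut / eps0))**(gamma - beta)
    return c

def photon_spectrum(eps, w_b, gamma, w_pi, pion_template, steepening):
    """Combined bremsstrahlung + pion-decay photon spectrum, Eq. (3)."""
    return w_b * eps**(-gamma) * steepening(eps) + w_pi * pion_template(eps)

# Example with an illustrative pion-decay bump peaking near 70 MeV:
eps = np.logspace(0.0, 2.5, 200)                       # photon energy [MeV]
phi = np.exp(-0.5 * (np.log(eps / 70.0) / 0.6)**2)     # placeholder for Phi(eps)
model = photon_spectrum(eps, w_b=1.0, gamma=2.3, w_pi=0.05,
                        pion_template=lambda e: np.interp(e, eps, phi),
                        steepening=rollover)
```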
The spectral shape of the pion decay component was calculated by R. Murphy
(private communication, 2007) and is similar to other commonly used forms
[Murphy, Dermer, and Ramaty (1987)]. The pions are considered to be produced
by particles (protons and 50% $\alpha$ particles) with a power law spectrum
that extends up to 1.5 GeV and an isotropic angular distribution in the
downward direction. The calculations demonstrate that the photon intensity and
the location of the spectral maximum in the energy range up to 300 MeV are
identical for power law proton spectra with index 2 and 3. Because the highest
energy interval of SONG is $150-300$ MeV, we use the pion decay component
calculated by R. Murphy with the proton index 3. The detector response to a
parallel beam of $\gamma$-rays is computed using GEANT3.21 routines, and the
spectral parameters are derived by a $\chi^{2}$ minimisation of the difference
between observed and modelled count rates in selected detector channels. A set
of model spectra and the resulting fit to the observed count rate spectrum
during the early phase of the high-energy photon emission are plotted in
Figure 3. The count rate at photon energies above $90$ MeV, smoothed over 30 s,
and the flux, which at those photon energies is largely dominated by
pion-decay photons, are also shown. We conclude that the high-energy photon profile
in Figure 4 and in Figure 2 can be attributed to high-energy protons in the
solar atmosphere, requiring $2.3\times 10^{31}$ protons above $30$ MeV during
a $1$ min interval at the peak of the emission. For comparison, Vilmer et al. (2003)
evaluated in their Table 3 that five to ten times more protons were needed for
the pion decay $\gamma$-ray emission during the 24 May 1990 event.
We can make a rough comparison of the number of protons detected during the
GLE and the number of protons required for the pion decay $\gamma$-ray
emission. The pitch angle distribution (inset of Figure 1) shows that all
protons during the first peak stream anti-sunward. If we assume that they are
injected at the Sun into a flux tube with diameter $3\times 10^{9}$ m at $1$
AU (corresponding to an angle of $1.2^{\circ}$), as derived for impulsive
electron [Buttighoffer (1998)] and ion [Mazur et al. (2000)] events, our
measured proton intensity at the peak of the GLE implies that $6\times 10^{28}$
protons with rigidities above $1$ GV are released to interplanetary space
during the $2$ min interval around the maximum of the GLE. The number increases
to $3.5\times 10^{31}$ if a greater angular range of $30^{\circ}$,
corresponding to a size of $7.8\times 10^{10}$ m at $1$ AU, is used. This
result is to be considered with caution since we use a (steep) power law as the
rigidity spectrum. As discussed earlier, the real spectrum is expected to
consist of a flatter power law at low rigidities with a steep falloff or cutoff
above some limiting rigidity. Thus, if this limiting rigidity is near or above
$1$ GV, our power-law evaluation of the proton intensity above $1$ GV will be
an overestimate. For comparison, the spectrum
that generates the pion decay $\gamma$-ray emission contains about $10^{29}$
protons above $1$ GV (kinetic energy $433$ MeV), which is within the broad
range of values estimated for the escaping protons.
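These numbers can be checked at the order-of-magnitude level with the rough sketch below; the effective solid angle of the field-aligned beam is an assumption (of order 1 sr here), so the output should only be read as an order-of-magnitude estimate.

```python
import numpy as np

def escaping_protons(A, gamma, diameter_m, dt_s, omega_sr=1.0, P_min=1.0):
    """Rough number of protons above P_min [GV] released through a flux tube of the
    given diameter at 1 AU during dt_s, for the power law J = A * P**(-gamma) of Eq. (2)."""
    J_above = A / (gamma - 1.0) * P_min**(1.0 - gamma)   # integral rigidity spectrum above P_min
    area = np.pi * (diameter_m / 2.0)**2                 # flux-tube cross section at 1 AU
    return J_above * area * omega_sr * dt_s

# Narrow flux tube (1.2 deg, ~3e9 m at 1 AU), 2 min around the GLE maximum:
print(escaping_protons(3.3e8, 8.0, 3e9, 120))      # ~4e28, order of the quoted 6e28
# Wider 30 deg channel (~7.8e10 m at 1 AU):
print(escaping_protons(3.3e8, 8.0, 7.8e10, 120))   # ~3e31, order of the quoted 3.5e31
```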
In summary, the energetic particle signatures in the low atmosphere during the
impulsive flare phase reveal several successive episodes of particle
acceleration with durations of the order of a minute. The most energetic one
starts close to 06:45:30 UT, judging from the $\gamma$-rays and the
accompanying peaks in the hard X-ray and microwave emissions of energetic to
relativistic electrons. The most energetic protons and electrons hence start
to be accelerated during this episode, more than 2 min after the first
radiative signatures of high-energy electrons in the solar atmosphere.
Figure 4: Dynamic spectrum of the metric-to-decametric radio emission during
the 20 January 2005 event, observed at the Learmonth station of the RSTN
network ($180-25$ MHz) and the WAVES spectrograph aboard the WIND spacecraft
($14-1$ MHz). The overplotted white curve is the time history of the proton
intensity at $5$ GV rigidity (kinetic energy $4.15$ GeV), shifted backward by
$216$ s (see Section 3 for the evaluation of the time shift).
### 2.3 Radio Evidence on Particle Escape to the High Corona and
Interplanetary Space
Metric-to-hectometric radio emissions of non-thermal electrons from the middle
corona to the interplanetary medium are measured respectively by the RSTN
network (http://www.ngdc.noaa.gov/stp/SOLAR/ftpsolarradio.html#spectralgraphs;
$25-180$ MHz; we use data from the Learmonth station) and the WIND/WAVES
experiment in the range 0.04-14 MHz [Bougeret et al. (1995)]. The combined
spectrum of the two instruments in the $1-180$ MHz band, corresponding roughly
to a range of heliocentric distances between $1.2$ and $10-20~R_{\odot}$, is
represented in Figure 4. Different burst groups can be distinguished.
Two groups of decametric-to-hectometric (henceforth DH) type III bursts are
seen in the WIND/WAVES spectrum between 1 and 14 MHz, respectively from 06:45
UT to 06:55 UT and 06:57 UT to 07:00 UT. Using the individual time profiles,
we find that the first group starts with a faint burst at 06:45 UT $\pm 30$ s,
followed by bright bursts starting at 06:46 UT$\pm 30$ s (see the time profile
at $14$ MHz, the central curve in Figure 5).
The two bright DH type III groups extend up to metre waves seen by RSTN. The
starting frequency is above the RSTN high-frequency limit of $180$ MHz. With
the usual assumption that metric type III bursts are emitted at the harmonic
of the electron plasma frequency, this implies that the acceleration sites are
in the low corona, where the ambient density is above $10^{8}$ cm-3. The first
metre wave burst in the RSTN spectrum, however, does not seem to continue into
the WAVES band, but bends around to form a type J burst between 35 and 25 MHz.
This is probably not an artefact of decreasing sensitivity near the low-
frequency border of the receiver, because no comparable decrease is observed
in the subsequent emissions. Hence electrons accelerated during the first
acceleration episode do not seem to get access to interplanetary space, but
propagate in closed coronal magnetic structures.
Thus, we assume that the first injection of non-thermal electron beams into
interplanetary space is traced by the first faint DH type III burst at 06:45
UT$\pm 30$ s, and is followed by a series of more intense injections
corresponding to the first bright type III group.
Shortly after the second type III group, the RSTN spectrum shows a slowly
drifting band of less intense emission. It is observed between $85$ MHz and
$180$ MHz from 07:00 UT to 07:15 UT. The frequency drift rate is
$0.1-0.2~{}\rm{MHz~{}s}^{-1}$, and the instantaneous bandwidth is about 30% of
the low-frequency limit of the burst. This is typical of a type II burst at
long metre wavelengths [Mann, Classen, and Aurass (1995)]. The burst therefore
shows the propagation of a shock wave through the corona. If it is harmonic
plasma emission, the ambient electron densities in the metric type II source
range from $10^{8}$ to $2\times 10^{7}~{}\rm{cm}^{-3}$. At lower frequencies
the WIND/WAVES spectrum presents a weak type II emission between 07:03 UT and
07:30 UT, without any harmonic relationship to the metric type II burst.
Underneath the structured type III and type II bursts the dynamic metre wave
spectrum shows a diffuse background (green in Figure 4) that reveals
gyrosynchrotron emission from mildly relativistic electrons. It starts
together with the first metric type III bursts and continues throughout the
plotted time interval. It can be identified until at least 07:50 UT in single
frequency records, with broadband fluctuations from centimetre to metre waves
that suggest repeated electron injections.
## 3 Relativistic Protons at the Earth, Coronal Acceleration, and
Interplanetary Propagation
Figure 5: Temporal evolution of the normalised flux densities in microwaves,
i.e. gyrosynchrotron emission of relativistic electrons (top; $35$ GHz;
Nobeyama Radio Polarimeter), decametric waves (middle; $14$ MHz, WIND/WAVES
radio spectrograph) emitted by unstable electron beams in the high corona, the
normalised proton intensity at Earth ($5$ GV rigidity, kinetic energy $4.15$
GeV; solid line), and the normalised proton intensity shifted by $-216$ s
(dashed line). Each flux is normalised to its individual maximum, and an offset
is added to separate the curves from each other. The time axis refers to the
time of detection at the WIND spacecraft or at Earth.
### 3.1 Interplanetary Propagation and Initial Solar Release Time
The solar wind speed measured by WIND/SWE [Ogilvie et al. (1995)] at the time
of the GLE is 800 km s-1, implying a Parker spiral of length $1.05$ AU. The
relativistic protons at 5 GV (corresponding to a velocity of $0.98c$) are then
expected to arrive $46$ s after the electromagnetic emission. But the first
relativistic protons were detected not earlier than 06:49 UT [McCracken,
Moraal, and Stoker (2008), their Figure 6], 3 to 6 minutes after the start of
the hard X-ray, microwave, and $\gamma$ radiations in the solar atmosphere.
However, since the relativistic protons were observed at Earth during a
Forbush decrease, it is clear that the interplanetary magnetic field was far
from nominal during this event. This is confirmed by the actual magnetic field
measurements [Plainaki et al. (2007), McCracken, Moraal, and Stoker (2008)].
We therefore compare the time profiles of the proton intensity at $1$ AU with
the radio emissions in the corona to attempt a more realistic evaluation of
the interplanetary path length and eventually of the solar release time of the
particles detected near and at the Earth.
In Section 2.3, we concluded that the first DH type III bursts ($14$ MHz)
reveal the first injection of non-thermal electron beams in the interplanetary
space. The time profiles of the radio emission at $14$ MHz and of the proton
intensity at $5$ GV are shown in Figure 5. We note that both time profiles
display similarly rapid initial rise phases and, broadly speaking, two peaks.
These similarities suggest a common release of the relativistic protons and
the radio emitting electron beams.
The rise phases and the first peak of the time profiles of protons and the
bright type III emission at 14 MHz coincide when the proton profile is shifted
backward by $t_{\rm{shift}}=216$ seconds. The time shift $t_{\rm{shift}}$
between the two profiles corresponds to the supplementary path length
travelled by the protons. Figure 5 displays the original (black solid line)
and the backward shifted time profile (black dashed line) of the relativistic
protons at $5$ GV. The timing would not have been changed significantly if we
had used a time profile at lower rigidity. Given the velocity of $0.98c$ and
the light travel time of 489 s from Sun to Earth, the delay of $216$ s implies
that the protons travelled a distance of about $1.38$ AU in interplanetary
space.
The delay of $216$ s gives only a lower limit of the travel time, because the
acceleration region is presumably much closer to the Sun than the $14$ MHz
source. We determine the upper limit of the path length, assuming that the
first relativistic protons are injected immediately after the acceleration of
the protons that created pions in the low solar atmosphere. Thus we evaluate
as an upper limit of the supplementary travel time of the protons with respect
to photons $t_{\rm{shift}}=4.5$ min, which induces a path length between the
acceleration site of the first relativistic protons and the Earth of
$0.98c\times(489$ s$+t_{\rm{shift}})=1.49$ AU.
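The path lengths used in this section follow directly from the proton speed and the adopted delays; a compact check (assumed constants only):

```python
AU, C = 1.496e11, 2.998e8   # m, m/s
BETA = 0.98                 # speed of a 5 GV proton in units of c
LIGHT_TRAVEL = 489.0        # Sun-Earth light travel time [s]

def path_length_AU(t_shift_s):
    """Interplanetary path length for a proton delay t_shift relative to the photons."""
    return BETA * C * (LIGHT_TRAVEL + t_shift_s) / AU

print(path_length_AU(216))        # ~1.38 AU (shift matching the first DH type III burst)
print(path_length_AU(4.5 * 60))   # ~1.49 AU (release at the pion-decay gamma-ray onset)
```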
The time interval between the onset of the second acceleration episode and the
bright $14$ MHz emission of electron beams (06:46 UT) is $30$ s. With the
density determination of Koutchmy (1994), the $14$ MHz source is located about
$4~{}R_{\odot}$ above the photosphere. Thus the electrons injected in the low
corona together with the relativistic protons have to travel $4~{}R_{\odot}$
in $30$ s. This implies an electron beam velocity of roughly $0.3c$, which is
a typical velocity for electron beams producing type III bursts in the
corona [Suzuki and Dulk (1985), Poquérusse et al. (1996)]. McCracken, Moraal, and Stoker (2008) evaluate
a longer path length of $1.76$ AU. They also assume that the injection of
relativistic protons is simultaneous with the beginning of the high-energy
$\gamma$-ray emission. The difference of $0.27$ AU is mostly due to the
definition of the onset time. These authors use a strongly smoothed version of
the $\gamma$-ray profile, from which they deduce an onset time between 06:43
and 06:44 UT. In our Figure 2, this time corresponds to the beginning of the
first acceleration episode. But at this time the pion-decay $\gamma$-ray
emission has not yet started, as shown in Figure 2 (see also Figure 4 of
Kuznetsov et al. (2006)).
As an alternative, we also checked the frequently used assumption that all
energetic particles accelerated during a flare start to be released together
with the onset of the electromagnetic emission. If we assume that the escaping
protons were accelerated and released with the first microwave emitting
electrons, their travel time would be $489$ s$+t_{\rm{shift}}=489+506$
seconds, and the interplanetary path length $1.95$ AU. While this value is not
inconsistent with particles travelling along a twisted magnetic field line
within an interplanetary CME [Larson et al. (1997)], it is in conflict with
the idea suggested by the DH type III emission that the first microwave and
hard X-ray emitting electrons have no access to interplanetary space. It is of
course also inconsistent with the timing of the pion-decay $\gamma$-ray emission,
which starts several minutes after the first hard X-ray and microwave
signatures.
### 3.2 Release of Relativistic Protons and Associated Solar Activity
Throughout the Event
Figure 4 shows the superposition onto the radio spectrum of the $5$ GV proton
profile in white, shifted backward in time by $216$ s so as to correct for the
travel path difference between protons and photons.
By construction the first peak in the proton time profile starts with the
first group of DH type III bursts. Moreover its duration is comparable with
that of the DH type III group (about $10$ min). Thus we conclude that the
proton intensity time profile at the Earth and the radio time profile of the
first DH type III group in the high corona are similar and that the first
protons are injected in the interplanetary space with the first escaping non-
thermal electrons. This injection time coincides with the second episode of
impulsive coronal particle acceleration, hence with the start of the
$\gamma$-ray emission due to protons above 300 MeV in the solar atmosphere,
and with a fresh electron injection with harder spectrum than before.
Particles accelerated during and after the second acceleration episode get
direct access to interplanetary space.
The time profile of the proton intensity shows a second peak that lasts longer
than the first. Provided the path length of the relativistic protons is the
same as during the first peak, the onset of this second peak coincides with
the second group of type III bursts seen from $1$ to $180$ MHz, and the
prolonged tail of the peak is accompanied by the type II burst between $75$
and $180$ MHz.
As noted by Pohjolainen et al. (2007), the shock wave producing this type II
emission is not the bow shock of the CME observed by SOHO/LASCO. Indeed, at
06:54 UT, a few minutes before the type II burst, the CME front is at
heliocentric distance $4.5~R_{\odot}$, where electron densities inferred from
eclipse measurements are below $2\times 10^{6}$ cm-3 [Koutchmy (1994)],
corresponding to plasma frequencies below $13$ MHz, much lower than those of
the metric type II burst.
The exciter speed of the type II burst is also very different from the CME
speed: the measured relative frequency drift rate is $-6.8\times 10^{-4}$
s-1. Such a drift is produced by an exciter that moves at a speed of roughly
500 km s-1 along a hydrostatic density gradient (electron-proton plasma,
$T=1.5\times 10^{6}$ K) at heliocentric distance $2~R_{\odot}$. This is much
lower than the speed of the CME, which Grechnev et al. (2008) estimated between
2000 and 2600 km s-1.
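The ~500 km s-1 exciter speed can be reproduced from the quoted drift rate with an isothermal hydrostatic scale height (a sketch under these assumptions, using $f\propto n_{e}^{1/2}$ so that $v\simeq 2H\,|\dot{f}|/f$):

```python
K_B, M_P, G, M_SUN, R_SUN = 1.381e-23, 1.673e-27, 6.674e-11, 1.989e30, 6.957e8  # SI

def exciter_speed_kms(rel_drift, T=1.5e6, r_rsun=2.0):
    """Radial exciter speed from the relative drift |df/dt|/f of a plasma-emission
    burst in an isothermal, hydrostatic electron-proton corona (mean mass ~ m_p/2)."""
    g = G * M_SUN / (r_rsun * R_SUN)**2        # local gravitational acceleration
    H = 2.0 * K_B * T / (M_P * g)              # density scale height
    return 2.0 * H * rel_drift / 1e3           # f ~ sqrt(n_e)  =>  v = 2 H |df/dt| / f

print(exciter_speed_kms(6.8e-4))               # ~500 km/s at 2 R_sun and T = 1.5 MK
```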
## 4 Discussion
### 4.1 Summary of the Observational Findings
From the preceding analysis we infer the following scenario:
* •
The particle acceleration in the corona deduced from the electromagnetic
signatures (Section 2.2) has several episodes in the impulsive phase, with
individual durations of the order of $1$ min.
* •
Protons with energies above $300$ MeV start to be accelerated during the
second acceleration episode within the impulsive phase, together with
electrons that have a harder spectrum than during the first episode (Section
2.2).
* •
The electrons accelerated during the first episode remain confined in closed
coronal magnetic structures, while particles accelerated during the second and
later episodes have access to the interplanetary space along open magnetic
structures (Section 2.3).
* •
The first rise of the relativistic proton profile at Earth is due to protons
that are accelerated during the second episode of the impulsive phase.
* •
A second rise of the relativistic proton profile occurs after the peak of the
hard X-ray and microwave emission. The onset of this second peak coincides
with a new acceleration of electrons in the low corona during the decay of the
microwave and hard X-ray burst, and with a fresh injection of electron beams
into interplanetary space (second group of type III bursts seen in Figure 4).
It is accompanied by shock-related radio emission in the corona at
heliocentric distances below 2.5 $R_{\odot}$.
### 4.2 Evidence for Relativistic Proton Acceleration in the Flaring Active
Region
During this GLE, relativistic protons arrive at Earth with a delay relative to
the first radiative signatures of particle acceleration that exceeds the
nominal travel time along the Parker spiral. The delay comes on the one hand
from the interplanetary path length, which is longer than the nominal Parker
spiral. But a key for understanding the timing of particle acceleration during
this event is the identification of elementary episodes within the impulsive
flare phase. The first peak of the relativistic proton time profile is related
to the impulsive phase of the flare, but to the second identified episode of
particle acceleration. Similarly to the escaping relativistic protons, the
bulk of the high-energy protons that indirectly produce high-energy
$\gamma$-rays through neutral pions is also accelerated in the second episode. The close
connection between the interacting and escaping high-energy protons is
corroborated by the finding that the number of protons required for the GLE
would also be sufficient to produce a pion decay $\gamma$-ray excess.
Release delays of GLE protons with respect to the first electromagnetic
signatures are well established [Carmichael (1962), Cliver et al. (1982)]. They
are often attributed to the trapping of the accelerated particles in closed
magnetic structures or to the acceleration in a shock wave at greater altitude
than the flaring loops [Lockwood, Debrunner, and Flückiger (1990), Kahler
(1994)]. But the delay observed on 20 January 2005, among the shortest ever
found, is consistent with a distinct acceleration episode in the impulsive
flare phase. Such delays have been regularly reported for pion-decay
$\gamma$-ray emission in other flares [Forrest et al. (1986), Debrunner et al.
(1997), Dunphy et al. (1999), Trottet et al. (2008)], and had been identified
in the SONG data of the 20 January 2005 flare by Kuznetsov et al. (2008) and
Grechnev et al. (2008).
Acceleration delays of relativistic protons on time scales of a few minutes
hence seem to be a general feature of large flares.
Imaging observations of this event at hard X-rays and $\gamma$-rays [Krucker
et al. (2008)] show the usual configuration of bright chromospheric footpoints
of coronal loops, on top of UV ribbons, together with a presumably coronal
$\gamma$-ray source. Such observations are commonly ascribed to a complex
magnetic topology implying magnetic reconnection in the low corona. So the
timing, the energetics, and the X/$\gamma$-ray source configuration during the
20 January 2005 event are consistent with a common origin of the interacting
protons and the protons producing the first GLE peak in the flaring active
region.
### 4.3 The Second Peak of the Relativistic Proton Profile
The correspondence between radio emissions and the second peak in the proton
time profile is more complex than for the first peak.
The duration of this second peak and its occurrence in the decay phase of the
hard X-ray and microwave emissions suggest a difference in the acceleration
region, the conditions of particle propagation, or both. The finding that this
second peak is accompanied by distinct metric radio emission and by some weak
microwave and hard X-ray signature can, however, be considered as a further
argument that the peak is due to a fresh injection of protons at the Sun,
rather than being due to changed conditions of interplanetary propagation as
argued by Sáiz et al. (2005). The metric radio observations suggest two
possible scenarios:
The first builds on the association of the start of the proton rise with a
new, short group of type III bursts and a faint rise of the decaying microwave
and hard X-ray time profiles. If the relativistic protons are injected
simultaneously with the non-thermal electron beams, during about 3 min (photon
arrival time 06:57 UT - 07:00 UT), the prolonged tail would be ascribed to
interplanetary scattering or reflection beyond 1 AU. Scattering requires very
different propagation conditions from the first proton peak, which Sáiz et al. (2005)
attribute to perturbations created by the first proton beam. The prolonged
presence of gyrosynchrotron radiation in the corona could also argue for
prolonged particle acceleration there, without any need to invoke
interplanetary transport for the duration of the relativistic proton release.
But we have no common feature in the timing to corroborate this idea.
An alternative scenario is based on the association with a metric type II
burst during the maximum of the second relativistic proton peak and its decay
phase, which may suggest shock acceleration of the second peak as proposed for
different reasons by McCracken, Moraal, and Stoker (2008). Pohjolainen et al. (2007)
extrapolate the type II radio band backward to a start near 06:54 UT at 600 MHz
(their Figure 13). If this is
correct, the entire second peak of the proton profile is accompanied by the
radio signature of a coronal shock. This shock is seen at an altitude below
2.5 $R_{\odot}$ and at least its radio emission, if not the shock itself, is
of short duration. Both findings are more consistent with a lateral shock than
with a radially outward driven shock in front of the CME. A lateral shock
would be expected to be quasi-perpendicular with respect to open magnetic flux
tubes in the low corona (see Figure 2 of Vainio and Khan, 2004). The initially
lateral shock would become quasi-parallel to the open field lines upon
propagating outward. It has been argued elsewhere [Tylka and Lee (2006)] that
quasi-parallel shocks are less efficient particle accelerators at high
energies than quasi-perpendicular shocks. This could explain the short
duration of this type II burst and the duration of the second proton peak.
### 4.4 Multiple Acceleration Episodes in Solar Energetic Particle Events
GLE scenarios including two components, called a prompt one and a delayed one,
had been introduced before [Torsti et al. (1996); Miroshnichenko, De Koning,
and Perez-Enriquez (2000)]. The delayed component was ascribed to acceleration
at a CME-driven shock wave. While the present analysis contains one scenario
that is consistent with this two-component injection, it also shows that the
coronal acceleration history is much more complex: there is no unique flare-
related acceleration, but the impulsive flare phase is itself structured, as
has long been known from hard X-ray observations [de Jager and de Jonge (1978)].
If a coronal shock wave accelerates relativistic protons in a later phase of
the event, it is not necessarily the bow shock of the CME which is the key
element. Clearly, detailed comparative timing analyses of GLEs and flare/CME
tracers provide relevant constraints to understand the origin of relativistic
particles at the Sun.
#### Acknowledgements
This research was supported by the French Polar Institute IPEV under grant
RAYCO, the Swiss National Science Foundation, grants 200020-105435/1 and
200020-113704/1, by the Swiss State Secretariat for Education and Research,
grant C05.0034, and by the High Altitude Research Stations Jungfraujoch and
Gornergrat. The Russian authors’ research is supported by the RFBR grant
09-02-011145-a. We thank the investigators of the following other neutron
monitor stations for the data that we used for this analysis: Alma Ata,
Apatity, Athens, Baksan, Barentsburg, Calgary, Cape Schmidt, Durham, Fort
Smith, Hermanus, Inuvik, Irkutsk, Kingston, Kiel, Larc, Lomnický $\check{\rm
S}$tít, McMurdo, Magadan, Mawson, Mexico City, Moscow, Mt. Aragats, Mt.
Washington, Nain, Nor Amberd, Norilsk, Novosibirsk, Newark, Los Cerrillos,
Oulu, Potchefstroom, Rome, Sanae, South Pole, Thule, Tibet, Tsumeb, Tixie Bay
and Yakutsk. We acknowledge the supply of radio spectral data via the RSTN web
site at NGDC, and S. White (UMd College Park) for the related software. We are
particularly grateful to the CORONAS/SONG team (Co PI K. Kudela, Institute of
Experimental Physics, Slovakia Academy of Sciences) and D. Haggerty (APL
Laurel) for the CORONAS/SONG $\gamma$-ray and ACE/EPAM electron data, and to
K. Shibasaki (Nobeyama Radio Obs.) for the radio polarimetric data and
detailed information on their quality. This research has greatly benefitted
from the project "Transport of energetic particles in the inner heliosphere"
led by W. Droege at the International Space Science Institute (ISSI) in Bern.
Comments on an early version of this manuscript by G. Trottet were highly
appreciated, as well as discussions with N. Vilmer, G. Aulanier and S. Hoang.
S.M.’s doctoral thesis research at Meudon Observatory is funded by a
fellowship of Direction Générale à l’Armement (DGA).
## References
* Akimov et al. (1996) Akimov, V.V., Ambro$\check{\rm z}$, P., Belov, A.V., Berlicki, A., Chertok, I.M., Karlický, M. et al.: 1996, Evidence for prolonged acceleration based on a detailed analysis of the long-duration solar gamma-ray flare of June 15, 1991. Sol. Phys. 166, 107 – 134.
* Bieber et al. (2002) Bieber, J.W., Dröge, W., Evenson, P.A., Pyle, R., Ruffolo, D., Pinsook, U., Tooprakai, P., Rujiwarodom, M., Khumlumlert, T., Krucker, S.: 2002, Energetic particle observations during the 2000 July 14 solar event. ApJ 567, 622 – 634.
* Bieber et al. (2005) Bieber, J., Clem, J., Evenson, P., Pyle, R., Duldig, M., Humble, J., Ruffolo, D., Rujiwarodom, M., Sáiz, A.: 2005, Largest GLE in half a century: neutron monitor observations of the January 20, 2005 event. In: Acharya, B.S., Gupta, S., Jagadeesan, P., Jain, A., Karthikeyan, S., Morris, S., Tonwar, S. (eds.) Proc. 29-th Int. Cosmic Ray Conf. 1, 237 – 240.
* Bombardieri et al. (2008) Bombardieri, D.J., Duldig, M.L., Humble, J.E., Michael, K.J.: 2008, An improved model for relativistic solar proton acceleration applied to the 2005 January 20 and earlier events. ApJ 682, 1315 – 1327.
* Bougeret et al. (1995) Bougeret, J.L., Kaiser, M.L., Kellogg, P.J., Manning, R., Goetz, K., Monson, S.J. et al.: 1995, Waves: The Radio and Plasma Wave Investigation on the Wind Spacecraft. Space Sci. Rev. 71, 231 – 263.
* Bütikofer et al. (2006) Bütikofer, R., Flückiger, E.O., Desorgher, L., Moser, M.R.: 2006, Analysis of the GLE on January 20, 2005: an update. In: 20-th European Cosmic Ray Symposium, 2006.
* Buttighoffer (1998) Buttighoffer, A.: 1998, Solar electron beams associated with radio type III bursts: propagation channels observed by Ulysses between 1 and 4 AU. A&A 335, 295 – 302.
* Carmichael (1962) Carmichael, H.: 1962, High-energy solar-particle events. Space Sci. Rev. 1, 28 – 61.
* Clem and Dorman (2000) Clem, J.M., Dorman, L.I.: 2000, Neutron monitor response functions. Space Sci. Rev. 93, 335 – 359.
* Cliver et al. (1982) Cliver, E.W., Kahler, S.W., Shea, M.A., Smart, D.F.: 1982, Injection onsets of 2 GeV protons, 1 MeV electrons, and 100 keV electrons in solar cosmic ray flares. ApJ 260, 362 – 370.
* D’Andrea and Poirier (2005) D’Andrea, C., Poirier, J.: 2005, Ground level muons coincident with the 20 January 2005 solar flare. Geophys. Res. Lett. 32, 14102.
* de Jager and de Jonge (1978) de Jager, C., de Jonge, G.: 1978, Properties of elementary flare bursts. Sol. Phys. 58, 127 – 137.
* Debrunner et al. (1997) Debrunner, H., Lockwood, J.A., Barat, C., Buetikofer, R., Dezalay, J.P., Flückiger, E.O. et al.: 1997, Energetic neutrons, protons, and gamma rays during the 1990 May 24 solar cosmic-ray event. ApJ 479, 997 – 1011.
* Dröge (2000) Dröge, W.: 2000, Particle scattering by magnetic fields. Space Sci. Rev. 93, 121 – 151.
* Dunphy et al. (1999) Dunphy, P.P., Chupp, E.L., Bertsch, D.L., Schneid, E.J., Gottesman, S.R., Kanbach, G.: 1999, Gamma-rays and neutrons as a probe of flare proton spectra: the solar flare of 11 June 1991. Sol. Phys. 187, 45 – 57.
* Flückiger et al. (2005) Flückiger, E.O., Bütikofer, R., Moser, M.R., Desorgher, L.: 2005, The cosmic ray ground level enhancement during the Forbush decrease in January 2005. In: Acharya, B.S., Gupta, S., Jagadeesan, P., Jain, A., Karthikeyan, S., Morris, S., Tonwar, S. (eds.) Proc. 29-th Int. Cosmic Ray Conf. 1, 225 – 228.
* Forrest et al. (1986) Forrest, D.J., Vestrand, W.T., Chupp, E.L., Rieger, E., Cooper, J.: 1986, Very energetic gamma-rays from the June 3, 1982 solar flare. Adv. Space Res. 6(6), 115 – 118.
* Gopalswamy et al. (2004) Gopalswamy, N., Yashiro, S., Krucker, S., Stenborg, G., Howard, R.A.: 2004, Intensity variation of large solar energetic particle events associated with coronal mass ejections. J. Geophys. Res. 109, 12105.
* Grechnev et al. (2008) Grechnev, V.V., Kurt, V.G., Chertok, I.M., Uralov, A.M., Nakajima, H., Altyntsev, A.T. et al.: 2008, An extreme solar event of 20 January 2005: properties of the flare and the origin of energetic particles. Sol. Phys. 252, 149 – 177.
* Heristchi, Trottet, and Perez-Peraza (1976) Heristchi, D., Trottet, G., Perez-Peraza, J.: 1976, Upper cutoff of high energy solar protons. Sol. Phys. 49, 151 – 175.
* Kahler (1994) Kahler, S.: 1994, Injection profiles of solar energetic particles as functions of coronal mass ejection heights. ApJ 428, 837 – 842.
* Klein et al. (1999) Klein, K.L., Chupp, E.L., Trottet, G., Magun, A., Dunphy, P.P., Rieger, E., Urpo, S.: 1999, Flare-associated energetic particles in the corona and at 1 AU. A&A 348, 271 – 285.
* Klein et al. (2001) Klein, K.L., Trottet, G., Lantos, P., Delaboudinière, J.P.: 2001, Coronal electron acceleration and relativistic proton production during the 14 July 2000 flare and CME. A&A 373, 1073 – 1082.
* Koutchmy (1994) Koutchmy, S.: 1994, Coronal physics from eclipse observations. Adv. Space Res. 14(4), 29 – 39.
* Krucker et al. (2008) Krucker, S., Hurford, G.J., MacKinnon, A.L., Shih, A.Y., Lin, R.P.: 2008, Coronal $\gamma$-ray bremsstrahlung from solar flare-accelerated electrons. ApJ 678, 63 – 66.
* Kuznetsov et al. (2006) Kuznetsov, S.N., Kurt, V.G., Yushkov, B.Y., Myagkova, I.N., Kudela, K., Kaššovicová, J., Slivka, M.: 2006, Proton acceleration during 20 January 2005 solar flare: CORONAS-F observations of high-energy $\gamma$ emission and GLE. Cont. Astron. Obs. Skalnate Pleso 36, 85 – 92.
* Kuznetsov et al. (2008) Kuznetsov, S.N., Kurt, V.G., Yushkov, B.Y., Kudela, K.: 2008, CORONAS-F satellite data on the delay between the proton acceleration on the Sun and their detection at 1 AU. In: Caballero, R., D’Olivo, J.C., Medina-Tanco, G., Nellen, L., Sánchez, F.A., Valdés-Galicia, J.F. (eds.) Proc. 30-th Int. Cosmic Ray Conf. 1, 121 – 124.
* Larson et al. (1997) Larson, D.E., Lin, R.P., McTiernan, J.M., McFadden, J.P., Ergun, R.E., McCarthy, M. et al.: 1997, Tracing the topology of the October 18-20, 1995, magnetic cloud with $\sim 0.1-10^{2}$ keV electrons. Geophys. Res. Lett. 24, 1911 – 1914.
* Lin et al. (2002) Lin, R.P., Dennis, B.R., Hurford, G.J., Smith, D.M., Zehnder, A., Harvey, P.R. et al.: 2002, The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI). Sol. Phys. 210, 3 – 32.
* Lockwood, Debrunner, and Flückiger (1990) Lockwood, J.A., Debrunner, H., Flückiger, E.O.: 1990, Indications for diffusive coronal shock acceleration of protons in selected solar cosmic ray events. J. Geophys. Res. 95, 4187 – 4201.
* Lopate (2006) Lopate, C.: 2006, Fifty years of ground level solar particle event observations. In: Gopalswamy, N., Mewaldt, R., Torsti, J. (eds.) Solar eruptions and energetic particles, AGU Monograph 165, American Geophysical Union, Washington DC, 283 – 296.
* Lovell, Duldig, and Humble (1998) Lovell, J.L., Duldig, M.L., Humble, J.E.: 1998, An extended analysis of the September 1989 cosmic ray ground level enhancement. J. Geophys. Res. 103, 23733 – 23742.
* Mann, Classen, and Aurass (1995) Mann, G., Classen, T., Aurass, H.: 1995, Characteristics of coronal shock waves and solar type II radio bursts. A&A 295, 775 – 781.
* Mazur et al. (2000) Mazur, J.E., Mason, G.M., Dwyer, J.R., Giacalone, J., Jokipii, J.R., Stone, E.C.: 2000, Interplanetary magnetic field line mixing deduced from impulsive solar flare particles. ApJ 532, 79 – 82.
* McCracken, Moraal, and Stoker (2008) McCracken, K.G., Moraal, H., Stoker, P.H.: 2008, Investigation of the multiple-component structure of the 20 January 2005 cosmic ray ground level enhancement. J. Geophys. Res. 113(A12), 12101.
* Meyer, Parker, and Simpson (1956) Meyer, P., Parker, E.N., Simpson, J.A.: 1956, Solar cosmic rays of February, 1956 and their propagation through interplanetary space. Phys. Rev. 104, 768 – 783.
* Miroshnichenko, De Koning, and Perez-Enriquez (2000) Miroshnichenko, L.I., De Koning, C.A., Perez-Enriquez, R.: 2000, Large solar event of September 29, 1989: ten years after. Space Sci. Rev. 91, 615 – 715.
* Miroshnichenko et al. (2005) Miroshnichenko, L.I., Klein, K.L., Trottet, G., Lantos, P., Vashenyuk, E.V., Balabin, Y.V.: 2005, Electron acceleration and relativistic nucleon production in the 2003 October 28 solar event. Adv. Space Res. 35(10), 1864 – 1870.
* Murphy, Dermer, and Ramaty (1987) Murphy, R.J., Dermer, C.D., Ramaty, R.: 1987, High-energy processes in solar flares. ApJS 63, 721 – 748.
* Nakajima et al. (1985) Nakajima, H., Sekiguchi, H., Sawa, M., Kai, K., Kawashima, S.: 1985, The radiometer and polarimeters at 80, 35, and 17 GHz for solar observations at Nobeyama. PASJ 37, 163 – 170.
* Ogilvie et al. (1995) Ogilvie, K.W., Chornay, D.J., Fritzenreiter, R.J., Hunsaker, F., Keller, J., Lobell, J. et al.: 1995, SWE, A Comprehensive Plasma Instrument for the Wind Spacecraft. Space Sci. Rev. 71, 55 – 77.
* Plainaki et al. (2007) Plainaki, C., Belov, A., Eroshenko, E., Mavromichalaki, H., Yanke, V.: 2007, Modeling ground level enhancements: Event of 20 January 2005. J. Geophys. Res. 112(A11), 04102.
* Pohjolainen et al. (2007) Pohjolainen, S., van Driel-Gesztelyi, L., Culhane, J.L., Manoharan, P.K., Elliott, H.A.: 2007, CME propagation characteristics from radio observations. Sol. Phys. 244, 167 – 188.
* Poquérusse et al. (1996) Poquérusse, M., Hoang, S., Bougeret, J.L., Moncuquet, M.: 1996, Ulysses-ARTEMIS radio observation of energetic flare electrons. In: Winterhalter, D., Gosling, J., Habbal, S., Kurth, W., Neugebauer, M. (eds.) Solar Wind Eight, Am. Inst. Phys., Melville, NY, 62 – 65.
* Ryan and the Milagro Collaboration (2005) Ryan, J.M., the Milagro Collaboration: 2005, Ground-level events measured with Milagro. In: Acharya, B.S., Gupta, S., Jagadeesan, P., Jain, A., Karthikeyan, S., Morris, S., Tonwar, S. (eds.) Proc. 29-th Int. Cosmic Ray Conf. 1, 245 – 248.
* Sáiz et al. (2005) Sáiz, A., Ruffolo, D., Rujiwarodom, M., Bieber, J., Clem, J., Evenson, P., Pyle, R., Duldig, M., Humble, J.: 2005, Relativistic particle Injection and interplanetary transport during the January 20, 2005 ground level enhancement. In: Acharya, B.S., Gupta, S., Jagadeesan, P., Jain, A., Karthikeyan, S., Morris, S., Tonwar, S. (eds.) Proc. 29-th Int. Cosmic Ray Conf. 1, 229 – 232.
* Sáiz et al. (2008) Sáiz, A., Ruffolo, D., Bieber, J.W., Evenson, P., Pyle, R.: 2008, Anisotropy signatures of solar energetic particle transport in a closed interplanetary magnetic loop. ApJ 672, 650 – 658.
* Saldanha, Krucker, and Lin (2008) Saldanha, R., Krucker, S., Lin, R.P.: 2008, Hard X-ray spectral evolution and production of solar energetic particle events during the January 2005 X-class flares. ApJ 673, 1169 – 1173.
* Shea and Smart (1996) Shea, M.A., Smart, D.F.: 1996, Unusual intensity-time profiles of ground-level solar proton events. In: Ramaty, R., Mandzhavidze, N., Hua, X.M. (eds.) High Energy Solar Physics, AIP Conf. Proc. 374, 131 – 139.
* Simnett (2006) Simnett, G.M.: 2006, The timing of relativistic proton acceleration in the 20 January 2005 flare. A&A 445, 715 – 724.
* Suzuki and Dulk (1985) Suzuki, S., Dulk, G.A.: 1985, Bursts of Type III and Type V. In: McLean, D., Labrum, N. (eds.) Solar Radiophysics, Cambridge University Press, Cambridge, 289 – 332.
* Torsti et al. (1996) Torsti, J., Kocharov, L.G., Vainio, R., Anttila, A., Kovaltsov, G.A.: 1996, The 1990 May 24 solar cosmic-ray event. Sol. Phys. 166, 135 – 158.
* Trottet et al. (2008) Trottet, G., Krucker, S., Lüthi, T., Magun, A.: 2008, Radio submillimeter and $\gamma$-ray observations of the 2003 October 28 solar flare. ApJ 678, 509 – 514.
* Tylka and Lee (2006) Tylka, A.J., Lee, M.A.: 2006, A model for spectral and compositional variability at high energies in large, gradual solar particle events. ApJ 646, 1319 – 1334.
* Vainio and Khan (2004) Vainio, R., Khan, J.I.: 2004, Solar energetic particle acceleration in refracting coronal shock waves. ApJ 600, 451 – 457.
* Vashenyuk et al. (2005) Vashenyuk, E.V., Balabin, Y.V., Bazilevskaya, G.A., Makhmutov, V.S., Stozhkov, Y.I., Svirzhevsky, N. S.: 2005, Solar particle event 20 January, 2005 on stratosphere and ground level observations. In: Acharya, B.S., Gupta, S., Jagadeesan, P., Jain, A., Karthikeyan, S., Morris, S., Tonwar, S. (eds.) Proc. 29-th Int. Cosmic Ray Conf. 1, 213 – 216.
* Vilmer et al. (2003) Vilmer, N., MacKinnon, A.L., Trottet, G., Barat, C.: 2003, High energy particles accelerated during the large solar flare of 1990 May 24: X/$\gamma$-ray observations. A&A 412, 865 – 874.
* Zhu et al. (2005) Zhu, F.R., Tang, Y.Q., Zhang, Y., Wang, Y.G., Lu, H., Zhang, J.L., Tan, Y.H.: 2005, A possible GLE event in association with solar flare on January 20, 2005. In: Acharya, B.S., Gupta, S., Jagadeesan, P., Jain, A., Karthikeyan, S., Morris, S., Tonwar, S. (eds.) Proc. 29-th Int. Cosmic Ray Conf. 1, 185 – 188.
|
arxiv-papers
| 2009-05-12T11:48:47 |
2024-09-04T02:49:02.545297
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "S. Masson, K.-L. Klein, R. Buetikofer, E. Flueckiger, V. Kurt, B.\n Yushkov, S. Krucker",
"submitter": "Sophie Masson",
"url": "https://arxiv.org/abs/0905.1816"
}
|
0905.1868
|
# Controllable spin-dependent transport in armchair graphene nanoribbon
structures
V. Hung Nguyen1,2 (E-mail: viet-hung.nguyen@u-psud.fr), V. Nam Do3, A.
Bournel1, V. Lien Nguyen2 and P. Dollfus1 1Institut d’Electronique
Fondamentale, UMR8622, CNRS, Universite Paris Sud, 91405 Orsay, France
2Theoretical Department, Institute of Physics, VAST, P.O. Box 429 Bo Ho, Hanoi
10000, Vietnam
3Hanoi Advanced School of Science and Technology, 1 Dai Co Viet Str., Hanoi
10000, Vietnam
###### Abstract
Using the non-equilibrium Green’s functions formalism in a tight binding
model, the spin-dependent transport in armchair graphene nanoribbon (GNR)
structures controlled by a ferromagnetic gate is investigated. Beyond the
oscillatory behavior of conductance and spin polarization with respect to the
barrier height, which can be tuned by the gate voltage, we especially analyze
the effect of width-dependent band gap and the nature of contacts. The
oscillation of spin polarization in the GNRs with a large band gap is strong
in comparison with 2D-graphene structures. Very high spin polarization (close
to $100\%$) is observed in normal-conductor/graphene/normal-conductor
junctions. Moreover, we find that the difference of electronic structure
between normal conductor and graphene generates confined states in the device
which have a strong influence on the transport quantities. It suggests that
the device should be carefully designed to obtain high controllability of spin
current.
###### pacs:
75.75.+a, 72.25.-b, 05.60.Gg, 73.43.Jn
## I introduction
Graphene, a monolayer of carbon atoms packed into a two dimensional (2D)
honeycomb lattice, has attracted a great deal of attention from both
experimental and theoretical points of view cast09 since it was isolated and
demonstrated to be stable novo04 ; novo05 . It is a basic building block for
graphitic materials of all other dimensionalities, e.g., it can be wrapped up
into 0D fullerenes, rolled into 1D nanotubes, or stacked into 3D graphite. Due
to its unique electronic properties, i.e., its conduction electrons behave as
massless Dirac fermions novo05 ; zhan05 , a lot of interesting phenomena such
as the finite conductance at zero concentration novo05 , the unusual half
integer quantum Hall effect zhan05 , and the Klein tunneling stan09 have been
observed in graphene and theoretically discussed in the framework of the
massless fermion Dirac’s model novo05 ; kats06 .
Finite-width graphene strips, which are referred to as graphene nanoribbons, have
also been studied extensively naka96 ; cres08 ; koba06 ; han007 ; quer08 . It
has been shown that GNRs of various widths can be obtained from graphene
monolayers using patterning techniques han007 . The transport properties of
the perfect GNRs are expected to depend strongly on whether they have zigzag
or armchair edges naka96 . In the framework of the nearest neighbor tight
binding (NNTB) model, the GNRs with zigzag edges are always metallic while the
armchair structures are either semiconducting or metallic depending on their
width. In the zigzag GNRs, the bands are partially flat around the Fermi
energy [$E=0$ eV], which means that the group velocity of conduction electrons
is close to zero. Their transport properties are dominated by edge states
koba06 . In the GNRs with armchair edges, the bands exhibit a finite energy
gap in the semiconducting structures or are gapless in the metallic ones
naka96 . However, ab initio studies have demonstrated that there are no truly
metallic armchair GNRs (see Ref. cres08 and references therein). Even for
the structures predicted to be metallic by the NNTB model, a small energy gap
opens, thus modifying their behavior from metallic to semiconducting. In
general, the group velocity of conduction electrons in armchair GNRs is high,
e.g., it is constant and equal to about $10^{6}$ m/s novo05 in the metallic
structures. The transport properties of ideal zigzag and armchair GNRs are thus
very different. In the current work, we focus only on ribbons with armchair
edges.
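The width dependence of the armchair-GNR gap mentioned above can be illustrated with the standard hard-wall NNTB result (a textbook sketch, not taken from this paper), in which the transverse wave vector is quantised as $k_{p}\propto p\pi/(N+1)$ for a ribbon with $N$ dimer lines across its width; the hopping value $t=2.7$ eV is an assumption, and the edge-bond corrections discussed above are not included.

```python
import numpy as np

def acgnr_gap(N, t=2.7):
    """Band gap (eV) of an armchair GNR with N dimer lines across its width, in the
    nearest-neighbour tight-binding model with hard-wall edges: the direct gap at the
    zone centre is 2*t*min_p |1 + 2*cos(p*pi/(N+1))|."""
    p = np.arange(1, N + 1)
    return 2.0 * t * np.min(np.abs(1.0 + 2.0 * np.cos(p * np.pi / (N + 1))))

for N in range(4, 14):
    print(N, round(acgnr_gap(N), 3))   # the gap vanishes whenever N = 3m + 2 (N = 5, 8, 11, ...)
```

Within this simple model the three width families $N=3m$, $3m+1$, and $3m+2$ appear clearly, the last being gapless, which is exactly the metallic/semiconducting alternation referred to above.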
Experimentally, electronic transport measurements through a graphene sheet
usually require contacts to metal electrodes, e.g., see an illustration in
Ref. huar07 . When tunneling from the metal reservoir to graphene occurs over a
large area, the contact becomes Ohmic and the area under the contact forms a
substance which is a hybrid between graphene and normal metal blan07 .
Depending on the nature of this substance, the system can be appropriately
considered as a graphene/graphene/graphene (GGG) structure or a normal-
conductor/graphene/normal-conductor (NGN) junction whose contacts can be
modeled by honeycomb or square lattices, respectively. The ballistic transport
through the NGN systems has been investigated systematically in Refs. scho07 ;
blan07 ; robi07 .
Beside their interesting electronic transport properties, due to very weak
spin orbit interaction kane05 , which leads to a long spin flip length ($\sim
1\mu m$) jozs08 , the graphene - based structures also offer a good potential
for spin-polarized electronics. Actually, graphene is not a natural
ferromagnet. However, recent works have shown that ferromagnetism and spin-
polarized states can be introduced in graphene, e. g., by doping and by defect
pere05 ; wehl08 ; yazy07 or by applying an external electric field son006 .
In particular, Haugen et al. haug08 suggested that ferromagnetic
correlations can be created in graphene by the so-called proximity effect. The
exchange splitting induced by depositing a ferromagnetic insulator $EuO$ on
the graphene sheet was then roughly estimated to be about $5$ meV. The effect
has also been demonstrated experimentally jozs08 ; hill06 . Motivated by these
features, some other works have predicted and discussed the controllability of
spin current by a ferromagnetic gate in $2D$-graphene structures yoko08 ;
vndo08 ; zou009 . The spin current was found to be an oscillatory function of
the potential barrier, which can be tuned by the gate voltage, and its
amplitude is never damped by the increase of the width and the height of the
barrier yoko08 . However, the spin polarization is not very high, e.g., its
maximum value is just about $30\%$. In addition, other spin-dependent
properties of graphene such as spin field effect transistor seme07 , spin Hall
effects kane05 ; sini06 , spin valve effects cho007 ; brey07 ; roja09 ; saff09
have also been investigated extensively. In particular, the giant
magnetoresistance has been explored theoretically in structures of armchair
saff09 and zigzag roja09 GNRs connecting two conducting electrodes, and it was
predicted to reach values as high as $100\%$ roja09 .
In this article, we are interested in the possibility of electrically tuning
the spin current in armchair GNR structures with a single ferromagnetic gate.
By investigating the physics of spin-polarized transport in these structures,
we would like to derive some simple scaling rules leading to a highly tunable
spin-polarized current. The study is focussed on the role of the
ribbon’s energy band gap and the different types of leads (either graphitic or
normal-conducting). In the NGN systems, since the strength of the device-to-
contact coupling and the device length are important parameters, their
influence on the control of spin current is also carefully investigated. The
paper is organized as follows. Section 2 is devoted to the description of the
model and main formulas based on the non-equilibrium Green’s functions
formalism (NEGF). In section 3, the numerical results are presented and
discussed. Finally, a brief summary is given in section 4.
## II model and formulation
Figure 1: (color online) Schematic illustration of the considered armchair GNR
structures with the number $M$ of carbon chains between two edges: (a)
graphitic and (b) normal-conducting leads. The latter ones are modeled by
square lattices. A ferromagnetic gate insulator is deposited to create a
spin-dependent potential barrier in the center of the device.
The considered structures consist of an armchair GNR coupled to two electrodes,
which may be described either by graphitic (Fig. 1(a)) or normal-conducting (Fig.
1(b)) leads. In the simplest approximation, the normal-conducting leads are
modeled by square lattices blan07 ; scho07 ; robi07 . A ferromagnetic gate is
assumed to create a potential barrier, which controls the Fermi level locally,
and to induce an exchange splitting in the device. To model the structures, we
use the single-band tight-binding Hamiltonian
$\hat{H}=\hat{H}_{L}+\hat{H}_{D}+\hat{H}_{R}+\hat{H}_{C}$ (1)
where $\hat{H}_{L,R}$ are the Hamiltonian of the left and right leads,
respectively; $\hat{H}_{D}$ is the Hamiltonian of the device; $\hat{H}_{C}$
describes the coupling of the device to the leads. The Hamiltonian terms in
Eq. (1) can be written as
$\displaystyle\hat{H}_{\alpha}$ $\displaystyle=$
$\displaystyle\varepsilon_{\alpha}\sum\limits_{i_{\alpha},\sigma}{c_{i_{\alpha},\sigma}^{\dagger}c_{i_{\alpha},\sigma}}-t_{L}\sum\limits_{\left\langle{i_{\alpha},j_{\alpha}}\right\rangle,\sigma}{c_{i_{\alpha},\sigma}^{\dagger}c_{j_{\alpha},\sigma}}$
$\displaystyle\hat{H}_{D}$ $\displaystyle=$
$\displaystyle\sum\limits_{i_{d},\sigma}{{\varepsilon_{i_{d},\sigma}}a_{i_{d},\sigma}^{\dagger}a_{i_{d},\sigma}}-t\sum\limits_{\left\langle{i_{d},j_{d}}\right\rangle,\sigma}{a_{i_{d},\sigma}^{\dagger}a_{j_{d},\sigma}}$
(2) $\displaystyle\hat{H}_{C}$ $\displaystyle=$ $\displaystyle-
t_{C}\sum\limits_{\alpha=\left\\{{L,R}\right\\}}{\sum\limits_{\left\langle{i_{\alpha},j_{d}}\right\rangle,\sigma}{\left({c_{i_{\alpha},\sigma}^{\dagger}a_{i_{d},\sigma}+h.c.}\right)}}$
where the operators $c_{i_{\alpha},\sigma}^{\dagger}$
($c_{i_{\alpha},\sigma}$) and $a_{i_{d},\sigma}^{\dagger}$
($a_{i_{d},\sigma}$) create (annihilate) an electron with spin $\sigma$ in the
electrode $\alpha$ and in the device region, respectively. The sums over
$\langle i,j\rangle$ are restricted to nearest-neighbor atoms. $t$,
$t_{L}$, and $t_{C}$ stand for the hopping parameters in the device, in the
leads, and at the coupling interface, respectively. $\varepsilon_{\alpha}$ is the
on-site energy of the leads, which acts as an energy shift. The device
spin-dependent on-site energy $\varepsilon_{i_{d},\sigma}$ is modulated by the
gate voltage:
$\varepsilon_{i_{d},\sigma}=\begin{cases}U_{G}-\sigma h&\text{in the gated region}\\ 0&\text{otherwise}\end{cases}$ (3)
Here, $U_{G}$ denotes the potential barrier height, $h$ is the exchange
splitting, and $\sigma=\pm 1$ labels the up/down spin states.
Since no spin-flip process is considered here, Eq. (1) can be decoupled into two
independent spin-dependent Hamiltonians $\hat{H}_{\sigma}$, and the transport is
conveniently treated within the NEGF formalism. For each spin channel $\sigma$,
the retarded Green's function is defined as
$\hat{G}_{\sigma}^{r}\left(E\right)=\left[{E+i0^{+}-\hat{H}_{D,\sigma}-\hat{\Sigma}_{L}^{r}-\hat{\Sigma}_{R}^{r}}\right]^{-1}$
(4)
where $\hat{\Sigma}_{\alpha}^{r}$ denotes the retarded self-energy matrix, which
contains the information on the electronic structure of lead $\alpha$ and on its
coupling to the device. It can be expressed as
$\hat{\Sigma}_{\alpha}^{r}=\hat{\tau}_{{\rm{D}}{\rm{,}}\alpha}\hat{g}_{\alpha}\hat{\tau}_{\alpha,D}$,
where $\hat{\tau}$ is the hopping matrix that couples the device to the leads and
$\hat{g}_{\alpha}$ is the surface Green's function of the uncoupled lead, i.e.,
of the left or right semi-infinite electrode. The surface Green's functions and
the device Green's function are calculated using the fast iterative scheme of
Ref. sanc84 and the recursive algorithm of Ref. anan08 , respectively.
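As a generic illustration of the iterative (decimation) scheme of Ref. sanc84 ,
the following Python sketch computes the surface Green's function of a
semi-infinite lead and the corresponding lead self-energy. It is not the code
used for the results reported here; the principal-layer blocks H00 and H01 and
the coupling matrix tau are placeholders that would have to be built for the
specific (graphitic or square-lattice) lead under study.

```python
import numpy as np

def surface_green_function(E, H00, H01, eta=1e-6, tol=1e-12, max_iter=200):
    """Surface Green's function g(E) of a semi-infinite periodic lead via the
    iterative decimation scheme. H00: Hamiltonian of one principal layer;
    H01: coupling from one principal layer to the next."""
    z = (E + 1j * eta) * np.eye(H00.shape[0])
    eps_s = H00.astype(complex)          # effective surface block
    eps = H00.astype(complex)            # effective bulk block
    alpha = H01.astype(complex)
    beta = H01.conj().T.astype(complex)
    for _ in range(max_iter):
        g_bulk = np.linalg.inv(z - eps)
        eps_s = eps_s + alpha @ g_bulk @ beta
        eps = eps + alpha @ g_bulk @ beta + beta @ g_bulk @ alpha
        alpha = alpha @ g_bulk @ alpha
        beta = beta @ g_bulk @ beta
        if np.abs(alpha).max() < tol and np.abs(beta).max() < tol:
            break
    return np.linalg.inv(z - eps_s)

def lead_self_energy(E, H00, H01, tau):
    """Retarded self-energy projected onto the device edge sites,
    Sigma^r = tau g tau^dagger, with tau the device-lead coupling matrix."""
    g = surface_green_function(E, H00, H01)
    return tau @ g @ tau.conj().T

# Minimal check: a single-orbital 1D lead with on-site energy 0 and hopping -t.
t = 2.66
g = surface_green_function(0.1, np.array([[0.0]]), np.array([[-t]]))
print("Im g(E = 0.1 eV) =", g[0, 0].imag)
```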
Within the model described by Eq. (1), the transport is considered to be
ballistic and the conductance through the device is calculated using the
Landauer formalism imry99 . The spin-dependent conductances
$\mathcal{G}_{\sigma}$ at the Fermi energy $E_{F}$ are obtained from the
transmission function $T_{\sigma}(E)$, such that
$\mathcal{G}_{\sigma}\left(E_{F}\right)=\frac{{e^{2}}}{h}T_{\sigma}\left(E_{F}\right)$
(5)
and
$T_{\sigma}\left(E\right)={\rm{Tr}}\left[{\hat{\Gamma}_{L}\hat{G}_{\sigma}^{r}\hat{\Gamma}_{R}\hat{G}_{\sigma}^{a}}\right]\cdot$
(6)
Here, $\hat{G}_{\sigma}^{a}$ ($\equiv\hat{G}_{\sigma}^{r{\dagger}}$) denotes
the advanced Green’s function. The tunneling rate matrix
$\hat{\Gamma}_{L\left(R\right)}$ for the left (right) lead is obtained from
$\hat{\Gamma}_{L/R}={\rm{i}}\left[{\hat{\Sigma}_{L/R}^{r}-\hat{\Sigma}_{L/R}^{a}}\right]$
(7)
where $\hat{\Sigma}_{\alpha}^{a}$ ($\equiv\hat{\Sigma}_{\alpha}^{r{\dagger}}$)
is the advanced self energy. Finally, the spin polarization is determined by
$P=\frac{{\mathcal{G}_{\uparrow}-\mathcal{G}_{\downarrow}}}{{\mathcal{G}_{\uparrow}+\mathcal{G}_{\downarrow}}}\cdot$
(8)
In addition, the local density of states (LDOS) at site $j$ can also be
directly extracted from the retarded Green's function as
${\rm{LDOS}}_{\left(j\right)}=-\frac{1}{\pi}{\mathop{\rm Im}\nolimits}G^{r}\left({j,j}\right)\cdot$ (9)
With the recursive algorithm described in Ref. anan08 , the size of the matrices
to be handled equals the number $M$ of carbon chains between the two edges, so
that the computational cost grows only linearly with the device length.
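To illustrate how Eqs. (4)-(8) fit together in practice, the following minimal
sketch evaluates the spin-resolved Landauer conductance and the polarization $P$
for a strictly one-dimensional tight-binding chain with a gated region of
on-site energy $U_{G}-\sigma h$, using the analytic surface Green's function of
a semi-infinite 1D lead. The actual calculations in this paper are for armchair
GNR lattices, so this toy model is only meant to show the structure of the
calculation, not to reproduce the results; all parameter values are illustrative.

```python
import numpy as np

t, tC = 2.66, 2.66        # eV: device/lead hopping and device-lead coupling (tC <= t)
N, NG = 60, 30            # number of device sites and of gated sites in the middle
UG, h = 0.30, 0.010       # eV: barrier height and exchange splitting
EF, eta = 0.10, 1e-9      # eV: Fermi energy and small imaginary broadening

def lead_surface_g(E):
    """Analytic retarded surface Green's function of a semi-infinite 1D chain."""
    z = E + 1j * eta
    return (z - 1j * np.sqrt(4 * t**2 - z**2)) / (2 * t**2)

def conductance(E, sigma):
    """Transmission T_sigma(E) of Eq. (6), i.e. the conductance in units of e^2/h."""
    # Device Hamiltonian: nearest-neighbor hopping -t and a spin-dependent barrier
    # U_G - sigma*h on the NG central sites (Eq. (3)).
    H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))
    i0 = (N - NG) // 2
    H[np.arange(i0, i0 + NG), np.arange(i0, i0 + NG)] = UG - sigma * h
    # Retarded lead self-energies attached to the first and last device sites.
    g = lead_surface_g(E)
    Sigma_L = np.zeros((N, N), dtype=complex); Sigma_L[0, 0] = tC**2 * g
    Sigma_R = np.zeros((N, N), dtype=complex); Sigma_R[-1, -1] = tC**2 * g
    Gr = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - Sigma_L - Sigma_R)   # Eq. (4)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)                              # Eq. (7)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gamma_L @ Gr @ Gamma_R @ Gr.conj().T).real               # Eqs. (5)-(6)

G_up, G_dn = conductance(EF, +1), conductance(EF, -1)
P = (G_up - G_dn) / (G_up + G_dn)                                            # Eq. (8)
print(f"G_up = {G_up:.4f} e^2/h, G_dn = {G_dn:.4f} e^2/h, P = {P:+.3f}")
```

Sweeping $U_{G}$ (or reducing $t_{C}$) in this sketch produces Fabry-Perot-like
oscillations of the conductance over the gated region, qualitatively similar to,
though not identical with, the Klein-tunneling oscillations discussed below.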
## III results and discussion
Using the above formalism, we investigate the spin-dependent transport in the
considered structures. Throughout the work, we set $t=2.66$ eV chic96 in the
graphitic regions and assume that $t_{L}$ is equal to $t$ and $t_{C}\leq t$ in
the NGN junctions blan07 ; robi07 . Our consideration is restricted to the
low-energy regime when $E\ll t$.
### III.1 Gate control of spin current in GGG structures
Figure 2: (color online) (a) Oscillation of the conductance vs the barrier height
$U_{G}$ in the GGG structures for different widths: $M=21$ (dashed) and $23$
(solid line). (b) Transmission coefficient calculated from Eq. (10) for different
modes $\theta_{j}$. Other parameters are: $L_{G}=42.5$ nm, $E_{F}=300$ meV and
$h=0$ meV.
As mentioned above, the gate voltage creates a potential barrier in the device.
In armchair GNRs, the electronic properties of structures with $M\neq 3n+2$ and
$M=3n+2$ ($n$ an integer) are significantly different, i.e., there is a finite
energy band gap in the former while it is negligible in the latter cres08 .
Accordingly, we display in Fig. 2(a) the conductance as a function of the barrier
height $U_{G}$ for the structures with $M=21$ (dashed) and $23$ (solid line).
Here, the gated region is assumed to be nonmagnetic, i.e., $h=0$ meV. First of
all, the results show that the conductance oscillates as a function of $U_{G}$.
This phenomenon has been observed in $2D$-graphene structures and explained as a
consequence of the well-known Klein tunneling kats06 ; yoko08 . In the framework
of the Dirac description, the conductance peaks have been demonstrated to arise
essentially from the resonance, or good matching, between electron states outside
the barrier and confined hole states inside it vndo08 ; in the present model,
these states correspond to states in the positive- and negative-energy bands,
respectively. Note that, due to the finite ribbon width, the transverse momentum
is quantized into a set of discrete values. Practically, the oscillation of the
conductance can be seen clearly from the analytical expression of the
transmission coefficient for a given transverse momentum mode $k_{y}^{j}$ (see
the calculation in Ref. klym08 ). In the low-energy limit, it can be rewritten
as
$T=\frac{{\cos^{2}\theta_{j}\sin^{2}{\phi}}}{{\cos^{2}\theta_{j}\sin^{2}{\phi}+\left({\sin\theta_{j}+\cos{\phi}}\right)^{2}\sin^{2}\left({k_{x}^{b}L_{G}}\right)}}$
(10)
where $\theta_{j}=\tan^{-1}(k_{y}^{j}/k_{x})$ and
$\sin\phi=\left[{\left({t-{\rm{v}}_{F}k_{y}^{j}}\right)\sin\left({3ak_{x}^{b}/2}\right)}\right]/\left({U_{G}-E}\right)$
with ${\rm v}_{F}=3at/2$, where $a$ is the $C-C$ bond length. $k_{x}$
($k_{y}^{j}$) denotes the longitudinal (transverse) momentum outside the barrier,
measured as the deviation of the momentum $\vec{k}$ from the zero-energy (Dirac)
point, and $k_{x}^{b}$ is the longitudinal momentum inside the barrier. The
energy dispersions outside and inside the barrier are, respectively,
$\displaystyle E$ $\displaystyle=$
$\displaystyle{\rm{v}}_{\rm{F}}\sqrt{k_{x}^{2}+k_{y}^{j2}}$ (11)
$\displaystyle E-U_{G}$ $\displaystyle=$
$\displaystyle-\sqrt{4t\left({t-{\rm{v}}_{F}k_{y}^{j}}\right)\sin^{2}\frac{3ak_{x}^{b}}{4}+{\rm{v}}_{F}^{2}k_{y}^{j2}}$
(12)
Accordingly, the transmission, and hence the conductance, reaches its maximum
(minimum) value when $k_{x}^{b}L_{G}$ is equal to $m\pi$ ($(m+1/2)\pi$) for any
integer $m$. In the limit $E\ll U_{G}\ll t$, Eq. (12) can be rewritten as
$U_{G}-E\approx{\rm v}_{F}k_{x}^{b}$, and the period of oscillation is given by
$U_{P}={\rm v}_{F}\pi/L_{G}$, which coincides with that of Refs. yoko08 ; vndo08 .
In the general case, when the relation between $U_{G}$ and ${\vec{k}}$ is
nonlinear, the period can be approximately expressed as
$U_{P}={\rm v}_{g}\pi/L_{G}$ with ${\rm v}_{g}\leq{\rm v}_{F}$. For instance,
${\rm v}_{g}\approx 0.74{\rm v}_{F}$ and $0.89{\rm v}_{F}$ for $M=21$ and $23$,
respectively, in Fig. 2(a).
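The resonance condition $k_{x}^{b}L_{G}=m\pi$ and the resulting oscillation
period can be checked numerically from the dispersion of Eq. (12) alone, as in
the short sketch below. The sketch assumes normal incidence ($k_{y}^{j}=0$) and
a C$-$C bond length $a=0.142$ nm (a value not quoted explicitly in the text);
for a given transverse mode the same procedure applies with the corresponding
$k_{y}^{j}$.

```python
import numpy as np

# Resonant barrier heights from k_x^b * L_G = m*pi, using the dispersion Eq. (12).
t = 2.66            # eV, hopping energy (value used in the text)
a = 0.142           # nm, assumed C-C bond length
vF = 3 * a * t / 2  # eV*nm, with hbar absorbed into vF
E = 0.300           # eV, Fermi energy
LG = 42.5           # nm, gate length
ky = 0.0            # 1/nm, transverse momentum of the mode (normal incidence assumed)

def kxb(UG):
    """Longitudinal momentum inside the barrier, inverted from Eq. (12)."""
    arg = ((UG - E) ** 2 - (vF * ky) ** 2) / (4 * t * (t - vF * ky))
    return (4 / (3 * a)) * np.arcsin(np.sqrt(arg)) if 0 <= arg <= 1 else np.nan

UG_grid = np.linspace(0.35, 1.20, 20000)
phase = np.array([kxb(u) * LG for u in UG_grid])
crossings = np.where(np.diff(np.floor(phase / np.pi)) == 1)[0]
resonances = UG_grid[crossings]

print("First resonant barrier heights (eV):", np.round(resonances[:5], 3))
print("Mean period (eV):", round(float(np.mean(np.diff(resonances))), 4))
print("Linearized estimate vF*pi/LG (eV):", round(vF * np.pi / LG, 4))
```

In the limit $E\ll U_{G}\ll t$ the printed period approaches the linearized
value $U_{P}={\rm v}_{F}\pi/L_{G}$, while at larger $U_{G}$ it is slightly
reduced, in line with $U_{P}={\rm v}_{g}\pi/L_{G}$ and ${\rm v}_{g}\leq{\rm v}_{F}$.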
As a consequence of their different electronic structures, Fig. 2(a) also shows
that the conductance for $M=21$ (large energy gap) oscillates much more strongly
than for $M=23$. This can be understood by considering the behavior of the
transmission coefficient for different energy gaps. As seen from Eq. (11), the
energy gap increases with $k_{y}^{j}$ (or $\theta_{j}$). Hence, from Eq. (10) and
Fig. 2(b), we see that the larger the energy gap (i.e., the larger $\theta_{j}$),
the stronger the oscillation of the transmission, which leads to the different
conductance behaviors shown in Fig. 2(a). Similarly, the oscillation of the
conductance in the structure with $M=21$ is also stronger than in the
$2D$-graphene structures (see Fig. 2(a) of Ref. yoko08 ), where the gap is
strictly zero.
In Eqs. (10)-(11), the mode $\theta_{0}=0$ (or $k_{y}^{0}=0$) corresponds
essentially to the normally incident mode, whose energy dispersion is gapless and
linear in the $2D$-graphene structures. Considering the behavior of the
transmission coefficient, we find an important feature: it is not uniformly equal
to unity but depends on the barrier height, even for $\theta=\theta_{0}$. This
differs from the prediction of the Klein paradox obtained within the Dirac
description of graphene structures kats06 , as previously discussed in Ref.
tang08 . In practice, the transmission coefficient of Eq. (10) approaches the
simplified expression (4) of Ref. kats06 only in the limit $E\ll U_{G}\ll t$.
Figure 3: (color online) (a) Spin polarization as a function of the barrier
height $U_{G}$ for the same structures as in Fig. 2(a). (b) shows an example
of the effects of the different ribbon widths on the spin polarization: $M=21$
(dashed), $27$ (dashed-dotted) and $33$ (solid line). Everywhere $L_{G}=42.5$
nm, $E_{F}=300$ meV and $h=10$ meV.
Now we investigate the behavior of the spin polarization in the ferromagnetic
gate structures. The exchange splitting $h$ is chosen to be $10$ meV, which can
be achieved experimentally jozs08 ; hill06 . Since no spin-flip process is
considered, the exchange splitting simply shifts the conductance of each spin
channel relative to the other. The spin polarization therefore behaves as an
oscillatory function of $U_{G}$, as shown in Fig. 3. Similar phenomena in the
$2D$-graphene structures have also been observed and discussed in Refs. yoko08
; vndo08 ; zou009 . It was shown that the oscillation of the spin current is not
damped as the width and height of the barrier increase and that the spin
polarization can be reversed by changing the gate voltage. The amplitude of $P$
depends primarily on the phase coherence/decoherence between the oscillations of
the two spin-dependent conductances, i.e., it reaches its maximum/minimum value
when the gate length (barrier width) $L_{G}$ is equal to a half-integer/integer
multiple of $L_{h}$, with $L_{h}={\rm v}_{g}\pi/2h$, respectively. Hence, the
gate control of the spin current can be modulated by changing $L_{G}$, which
leads to a beating behavior of $P$ similar to that shown in Fig. 5(c) of Ref.
vndo08 (see the estimate sketched after this paragraph). Furthermore, as a
consequence of the conductance behavior presented in Fig. 2(a), Fig. 3(a) also
shows that the oscillation of $P$ in the GNRs with a large energy gap is very
strong in comparison with the others. For instance, the amplitude of $P$ is
about $65\%$ for $M=21$, while it is only a few percent for $M=23$ and has a
maximum value of about $30\%$ in the $2D$-graphene structures yoko08 ; vndo08 ;
zou009 . However, since for $M\neq 3n+2$ the energy gap decreases with
increasing ribbon width, the oscillation of the conductance and of $P$ in those
structures becomes gradually weaker. In the limit of infinite width, the
transport quantities of the GNRs approach those of the $2D$-graphene structures,
where the continuum Dirac description is valid. To illustrate this point, we
display in Fig. 3(b) an example of the effect of the ribbon width on the spin
polarization in structures with $M\neq 3n+2$. Indeed, as the ribbon width
increases, the amplitude of $P$ decreases and approaches that of 2D-graphene,
i.e., it is only about $35\%$ for $M=33$.
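A rough numerical feel for the beating condition can be obtained from the
characteristic length $L_{h}={\rm v}_{g}\pi/2h$ itself; the short sketch below
evaluates it for $h=10$ meV using the ratios ${\rm v}_{g}/{\rm v}_{F}$ quoted
above and an assumed C$-$C bond length $a=0.142$ nm (not given explicitly in the
text).

```python
import numpy as np

a, t, h = 0.142, 2.66, 0.010     # nm, eV, eV
vF = 3 * a * t / 2               # eV*nm
for label, ratio in (("M = 21", 0.74), ("M = 23", 0.89)):
    Lh = ratio * vF * np.pi / (2 * h)
    # The amplitude of P is maximal when L_G is a half-integer multiple of L_h.
    print(f"{label}: L_h ~ {Lh:.0f} nm, first maxima of |P| near "
          f"L_G ~ {0.5 * Lh:.0f} and {1.5 * Lh:.0f} nm")
```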
### III.2 Effects of normal-conducting leads
In this section, we consider the spin transport in NGN junctions. First, we
focus on the possibility of obtaining a highly tunable spin current when the
graphitic leads are replaced by normal-conducting ones. Second, we analyze the
sensitivity of the transport quantities to parameters such as the device length
and the Fermi energy.
In Ref. scho07 , Schomerus compared the resistances of NGN junctions and GGG
structures and found a duality between graphitic and normal-conducting contacts.
He showed that very similar transport properties arise when the graphitic leads
are replaced by quantum wires, the differences between the results obtained in
those structures being only quantitative. On this basis, we plot in Fig. 4(a) the
conductance as a function of $U_{G}$ in the NGN junction in comparison with the
GGG structure for the case $L_{D}=51$ nm and $E_{F}=300$ meV. Since we found that
the results depend only weakly on the hopping energy in the leads, $t_{L}$ is
chosen equal to $t$ for simplicity. Qualitatively, Fig. 4(a) shows that the
oscillation of the conductance is essentially unchanged in phase and period when
the leads are changed. Quantitatively, the oscillation in the NGN junction is
stronger than in the GGG structure. This can be explained by the effect that
replacing the graphitic leads by normal-conducting ones has on the picture of
bound states in the barrier region. These states, in the framework of the Dirac
description, have been interpreted as confined hole states in the $2D$-graphene
structures vndo08 . In Fig. 4(b), we display the LDOS (right axis for graphitic
and left axis for normal-conducting leads) at the first site of the barrier
region as a function of $U_{G}$. The peaks of the LDOS occur when the Fermi
energy coincides with a bound state. Obviously, the oscillation of the LDOS (or
the quantization of bound states in the barrier) in the NGN junction is stronger
(with higher peaks) than in the GGG structure. This is the essential origin of
the different conductance behaviors shown in Fig. 4(a).
Figure 4: (color online) Comparison of conductance (a) and LDOS (b) in
different structures: graphitic (dashed) and normal-conducting (solid lines,
$t_{C}=t$) leads. Everywhere $M=21$, $L_{D}=51$ nm, $L_{G}=42.5$ nm,
$E_{F}=300$ meV and $h=0$ meV.
Now we turn to the behavior of the spin current in ferromagnetic gate NGN
junctions, i.e., $h=10$ meV. In Figs. 5(a) and 5(b), we compare the spin
polarization in the NGN junctions and in the GGG structures. Owing to the
different conductance behaviors shown in Fig. 4(a), the amplitude of $P$ in the
former structures is remarkably larger than in the latter. In particular, when
the leads are changed, it increases from $65\%$ to $81\%$ (see Fig. 5(a)) and
from $1\%$ to $50\%$ (see Fig. 5(b)) for $M=21$ and $23$, respectively. Moreover,
in the NGN junctions, the possibility of obtaining a highly tunable spin current
becomes even more striking when the strength of the device-to-contact coupling,
characterized by the hopping energy $t_{C}$, is decreased. In Figs. 5(c) and
5(d), we plot the obtained results for three cases: $t_{C}=t$ (dotted), $0.8t$
(dashed) and $0.6t$ (solid lines). The transport in these structures depends
strongly on the properties of the junctions and therefore on $t_{C}$; a smaller
$t_{C}$ corresponds to a higher contact resistance blan07 . We find that the
quantization of bound states in the barrier region becomes stronger when $t_{C}$
is decreased (not shown), which leads to a stronger oscillation of the transport
quantities with respect to the barrier height. Indeed, even in the case of
$M=23$, the spin polarization can reach a very high value of $86\%$ for
$t_{C}=0.6t$ (see Fig. 5(d)). More impressively, in the case of $M=21$, it can
approach $100\%$ when $t_{C}$ is reduced (see Fig. 5(c)). A similar feature (a
giant magnetoresistance) has also been predicted for GNR structures connecting
two conducting electrodes roja09 ; saff09 .
Figure 5: (color online) (a,b) Comparison of the spin polarization in the
different structures: graphitic (dashed) and normal-conducting (solid lines,
$t_{C}=t$) leads. (c,d) The spin polarization in the normal-conducting-lead
structures for different coupling strengths: $t_{C}=t$ (dashed), $0.8t$
(dashed-dotted) and $0.6t$ (solid lines). The ribbon widths are $M=21$ (a,c) and
23 (b,d). Other parameters are $L_{D}=51$ nm, $L_{G}=42.5$ nm, $E_{F}=300$ meV
and $h=10$ meV.
In practice, the features discussed above depend strongly on the parameters of
the NGN junctions, such as the device length and/or the Fermi energy. This
results from the fact that the charge carriers can be confined in the device by
the two normal-conductor/graphene junctions, which leads to an additional
resonance condition controlling the transport picture besides the transmission
via the bound states in the barrier. Indeed, the existence of such confined
states is demonstrated clearly by the behavior of the LDOS shown in Fig. 6(a). In
this figure, in order to eliminate the effects of bound states in the barrier, no
gate voltage is applied to the device. The energy spacing $E_{S}$ is estimated to
be about $25.5$ meV for $L_{D}=51$ nm and $12.6$ meV for $102$ nm. This indicates
that $E_{S}$ is essentially inversely proportional to the device length, as
illustrated in Fig. 6(c). It implies an unusual quantization of charge carriers
in graphene-based structures, essentially different from the case of normal
semiconductors, wherein $E_{S}\propto 1/L_{D}^{2}$, as previously discussed in
Refs. vndo08 ; milt06 . Due to such confinement, the transport quantities, such
as the conductance and the spin current (not shown), also oscillate with respect
to the Fermi energy in the considered region (see Fig. 6(b)). Therefore, when a
gate voltage is applied, bound states in the barrier and confined states in the
device coexist; together they determine the resonant transport conditions of the
structure.
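The $1/L_{D}$ scaling of $E_{S}$ quoted above is what one expects for carriers
with an approximately linear dispersion confined between the two junctions. The
crude estimate below, which assumes $E_{S}\approx\pi{\rm v}_{F}/L_{D}$ with
${\rm v}_{F}=3at/2$ and an assumed bond length $a=0.142$ nm, gives roughly 35
meV and 17 meV for $L_{D}=51$ and $102$ nm; it overestimates the values quoted
in the text (25.5 and 12.6 meV), since the lowest subband of the $M=21$ ribbon
is gapped and the group velocity near $E_{F}$ is below ${\rm v}_{F}$, but it
reproduces the $1/L_{D}$ scaling, in contrast with the $1/L_{D}^{2}$ behavior of
a parabolic band.

```python
import numpy as np

a, t = 0.142, 2.66          # nm, eV (assumed C-C bond length; hopping from the text)
vF = 3 * a * t / 2          # eV*nm
for LD in (51.0, 102.0):    # nm, device lengths used in Fig. 6
    # Level spacing of a linear (Dirac-like) band confined over a length LD.
    print(f"L_D = {LD:5.1f} nm: linear-band spacing pi*vF/L_D ~ "
          f"{1e3 * np.pi * vF / LD:5.1f} meV (a parabolic band would scale as 1/L_D^2)")
```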
Figure 6: (color online) (a) LDOS illustrating the existence of confined
states in the device and (b) conductance in the NGN junctions as a function of
the Fermi energy for different device lengths: $L_{D}=51$ nm (solid) and 102
nm (dashed lines). (c) Dependence of the energy spacing of the confined states
on the inverse of the device length. Everywhere $M=21$, $L_{G}=42.5$ nm,
$t_{C}=0.8t$ and $U_{G}=h=0$ meV.
Figure 7: (color online) (a) Conductance and (b) spin polarization $P$ in the
NGN structures as functions of the barrier height $U_{G}$ for different device
lengths: $68$ nm (solid), $74$ nm (dashed) and $79$ nm (dashed-dotted lines).
(c) Oscillation of $P$ versus the device length for different values of
$U_{G}$: $630$ meV (solid), $655$ meV (dashed-dotted) and $681$ meV (dashed
line); and (d) for different values of $E_{F}$: $250$ meV (dashed-dotted),
$300$ meV (solid) and $350$ meV (dashed line). Other parameters are $M=21$,
$L_{G}=42.5$ nm, $E_{F}=300$ meV (in (a,b,c)), $t_{C}=0.8t$, $U_{G}=630$ meV
(in (d)) and $h=10$ meV.
On the other hand, the gate controllability of the spin current is principally
due to the picture of bound states in the barrier (or Klein tunneling) vndo08 .
This raises the question of how the confined states in the device affect that
picture. As shown in Figs. 4 and 5, the replacement of the graphitic leads
(infinite $L_{D}$) by normal-conducting ones (finite $L_{D}$) does not affect the
period of the oscillation, but only its amplitude. This suggests that, in the NGN
junctions, changing the device length modulates the amplitude of the oscillation
of the conductance and spin polarization while leaving its period unchanged. To
examine this statement, we plot the conductance in Fig. 7(a) and the spin
polarization in Fig. 7(b) as functions of the barrier height for different device
lengths. From Fig. 7(a), we see that while the period is determined only by the
gate length, the oscillation of the conductance is modified by changing $L_{D}$,
i.e., it is strong for $L_{D}=68$ and $79$ nm and weak for $74$ nm. Consequently,
the amplitude of the spin polarization depends on $L_{D}$, i.e., it is about
$95\%$ for $L_{D}=68$ nm, $15\%$ for $74$ nm, and $86\%$ for $79$ nm (see Fig.
7(b)). This feature is exhibited more clearly in Fig. 7(c) by three curves of the
spin polarization versus $L_{D}$ for different barrier heights: $U_{G}=630$
(dashed), $655$ (dashed-dotted), and 681.5 (solid line) meV. We see that the spin
current oscillates and is suppressed completely at certain values of $L_{D}$.
This demonstrates that the amplitude of the spin polarization exhibited in Fig.
7(b) is also an oscillatory function of the device length. Moreover, its period
appears to be inversely proportional to the Fermi energy, i.e., it is about
$27.8$ nm for $E_{F}=250$ meV, $18.3$ nm for $300$ meV, and $13.4$ nm for $350$
meV (see Fig. 7(d)). This is nothing but a consequence of the resonant transport
due to the confined states in the device. Hence, the gate control of the spin
current in the NGN junction can be modulated not only by the gate length $L_{G}$
(see Sec. III.1) but also by the device length $L_{D}$ and/or the Fermi energy
$E_{F}$. This implies that the structure should be carefully designed to obtain a
high controllability of the spin current.
## IV conclusions
Using the NEGF method for quantum transport simulation within a tight-binding
Hamiltonian, we have considered the spin-dependent transport in single
ferromagnetic gate armchair GNR structures. The leads are modeled as either
graphitic or normal-conducting.
In the case of graphitic leads, it is shown that the conductance and the spin
current behave as oscillatory functions of the barrier height, which can be tuned
by the gate voltage. The oscillation of the spin polarization in the ribbon
structures with a large energy band gap is strong in comparison with the
$2D$-graphene structures. In particular, the study has demonstrated that a very
high spin polarization can be obtained in the NGN junctions. This results from
the fact that the quantization of bound states in the barrier (gated) region can
become very strong when normal-conducting leads are used. In these structures,
the spin polarization increases and can approach $100\%$ as the strength of the
device-to-contact coupling decreases. Moreover, we have also found confined
states in the device, induced by the two normal-conductor/graphene junctions.
This confinement is responsible for an additional resonance condition besides the
transmission via the bound states in the barrier. Therefore, the gate control of
the spin current in the NGN junctions can be modulated not only by the gate
length but also by the device length and/or the Fermi energy.
Our predictions may be helpful for designing efficient spintronic devices based
on perfect armchair GNRs. However, disorder effects, e.g., due to edge roughness,
have been observed experimentally han007 and demonstrated to affect the transport
properties quer08 of GNR structures. Further work is needed to assess their
influence on the spin-polarized properties discussed in this article.
Acknowledgements. This work was partially supported by the European Community
through the Network of Excellence NANOSIL.
## References
* (1) A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009)
* (2) K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva and A. A. Firsov, Science 306, 666 (2004)
* (3) K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos and A. A. Firsov, Nature (London), 438, 197 (2005)
* (4) Y. Zhang, Y.-M. Tan, H. L. Stormer and P. Kim, Nature (London), 438, 201 (2005)
* (5) N. Stander, B. Huard and D. Goldhaber-Gordon, Phys. Rev. Lett. 102, 026807 (2009)
* (6) M. I. Katsnelson, K. S. Novoselov and A. K. Geim Nat. Phys. 2, 620 (2006)
* (7) M. Y. Han _et al._ , Phys. Rev. Lett. 98, 206805 (2007); Z. Chen _et al._ , Physica E 40, 228 (2007); X. Li _et al._ , Science 319, 1229 (2008)
* (8) K. Nakada, M. Fujita, G. Dresselhaus and M. S. Dresselhaus, Phys. Rev. B 54, 17954 (1996)
* (9) Y. Kobayashi, K. Fukui, T. Enoki, K. Kusakabe, Phys. Rev. B 73, 125415 (2006)
* (10) A. Cresti, N. Nemec, B. Biel, G. Niebler, F. Triozon, G. Cuniberti and S. Roche, Nano Res. 1, 361 (2008)
* (11) D. Querlioz _et al._ , Appl. Phys. Lett. 92, 042108 (2008); A. Lherbier _et al._ , Phys. Rev. Lett. 100, 036803 (2008); M. Evaldsson _et al._ , Phys. Rev. B 78, 161407(R) (2008)
* (12) B. Huard, J. A. Sulpizio, N. Stander, K. Todd, B. Yang and D. Goldhaber-Gordon, Phys. Rev. Lett. 98, 236803 (2007)
* (13) M. Ya. Blanter and I. Martin, Phys. Rev. B 76, 155433 (2007)
* (14) H. Schomerus, Phys. Rev. B 76, 045433 (2007)
* (15) J. P. Robinson and H. Schomerus, Phys. Rev. B 76, 115430 (2007)
* (16) C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005)
* (17) C. Józsa, M. Popinciuc, N. Tombros, H. T. Jonkman and B. J. van Wees, Phys. Rev. Lett. 100, 236603 (2008)
* (18) E. W. Hill, A. K. Geim, K. Novoselov, F. Schedin, and P. Blake, IEEE Trans. Magn. 42, 2694 (2006)
* (19) N. M. R. Peres, F. Guinea and A. H. Castro Neto, Phys. Rev. B 72, 174406 (2005)
* (20) T. O. Wehling, K. S. Novoselov, S. V. Morozov, E. E. Vdovin, M. I. Katsnelson, A. K. Geim and A. I. Lichtenstein, Nano Lett. 8, 173 (2008)
* (21) O. V. Yazyev and L. Helm, Phys. Rev. B 75, 125408 (2007)
* (22) Y.-W. Son, M. L. Cohen and S. G. Louie, Nature 444, 347 (2006)
* (23) H. Haugen, D. Huertas-Hernando and A. Brataas, Phys. Rev. B 77, 115406 (2008)
* (24) T. Yokoyama, Phys. Rev. B 77, 073413 (2008)
* (25) V. Nam Do, V. Hung Nguyen, P. Dollfus and A. Bournel, J. Appl. Phys. 104, 063708 (2008)
* (26) J. Zou, G. Jin and Y. -Q. Ma, J. Phys.: Condens. Matter 21, 126001 (2009)
* (27) Y. G. Semenov, K. W. Kim and J. M. Zavada, Appl. Phys. Lett. 91, 153105 (2007)
* (28) N. A. Sinitsyn, J. E. Hill, H. Min, J. Sinova and A. H. MacDonald, Phys. Rev. Lett. 97, 106804 (2006)
* (29) S. Cho, Y.-F. Chen, and M. S. Fuhrer, Appl. Phys. Lett. 91, 123105 (2007)
* (30) L. Brey and H. A. Fertig, Phys. Rev. B 76, 205435 (2007)
* (31) A. Saffarzadeh and M. Ghorbani Asl, Eur. Phys. J. B 67, 239 (2009)
* (32) F. Munoz-Rojas, J. Fernandez-Rossier, and J. J. Palacios, Phys. Rev. Lett. 102, 136810 (2009)
* (33) M. P. Lopez Sancho, J. M. Lopez Sancho and J. Rubio, J. Phys. F: Met. Phys. 14, 1205 (1984)
* (34) M. P. Anantram, M. S. Lundstrom and D. E. Nikonov, IEEE Proceeding 96, 1511 (2008)
* (35) Y. Imry and R. Landauer, Rev. Mod. Phys. 71, S306 (1999)
* (36) L. Chico, V. H. Crespi, L. X. Benedict, S. G. Louie and M. L. Cohen, Phys. Rev. Lett. 76, 971 (1996)
* (37) Y. Klymenko, L. Malysheva and A. Onipko, Phys. Stat. Sol. (b) 245, 2181 (2008)
* (38) C. Tang, Y. Zheng, G. Li and L. Li, Solid State Commun. 148, 455 (2008)
* (39) J. M. Pereira, Jr., V. Mlinar, F. M. Peeters and P. Vasilopoulos, Phys. Rev. B 74, 045424 (2006)
|
arxiv-papers
| 2009-05-12T15:03:47 |
2024-09-04T02:49:02.552421
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "V. Hung Nguyen, V. Nam Do, A. Bournel, V. Lien Nguyen and P. Dollfus",
"submitter": "Hung Nguyen Viet",
"url": "https://arxiv.org/abs/0905.1868"
}
|
0905.1895
|
# “Narrow” Graphene Nanoribbons Made Easier by Partial Hydrogenation
Hongjun Xiang,1∗ Erjun Kan,2 Su-Huai Wei,1 Myung-Hwan Whangbo,2 Jinlong Yang3
1 National Renewable Energy Laboratory, Golden, Colorado 80401, USA
2Department of Chemistry, North Carolina State University
Raleigh, North Carolina 27695-8204, USA
3 Hefei National Laboratory for Physical Sciences at Microscale,
University of Science and Technology of China, Hefei, Anhui 230026, P. R.
China
e-mail: hongjun_xiang@nrel.gov
###### Abstract
It is a challenge to synthesize graphene nanoribbons (GNRs) with narrow widths
and smooth edges on a large scale. Our first-principles study of the
hydrogenation of GNRs shows that hydrogenation starts from the edges of the GNRs
and proceeds gradually toward their middle so as to maximize the number of
carbon-carbon $\pi$-$\pi$ bonds. Furthermore, partially hydrogenated wide GNRs
have electronic and magnetic properties similar to those of narrow GNRs.
Therefore, it is not necessary to directly produce narrow GNRs for realistic
applications, because partial hydrogenation can make wide GNRs “narrower”.
## Introduction
Graphene, a two-dimensional (2D) single layer of carbon atoms, is a rapidly
rising star on the horizon of materials science and condensed-matter physics.
It has attracted tremendous attention because of its unique massless Dirac
fermion-like electronic properties [1] and potential applications in
electronic devices. [2] When graphene is etched or patterned along one
specific direction, a novel quasi one-dimensional (1D) structure, a strip of
graphene of nanometers in width, can be obtained, which is referred to as a
graphene nanoribbon (GNR). The GNRs are predicted to exhibit various
remarkable properties and may be a potential elementary structure for future
carbon-based nanoelectronics. [3, 4, 5, 6, 7] Remarkably, theoretical
calculations [8, 5] predicted that quantum confinement and edge effects make
narrow GNRs (width w $<$ 10 nm) into semiconductors, which differs from
single-walled carbon nanotubes that contain some metallic species. Thus, GNRs
with narrow widths and atomically smooth edges could be used as room
temperature field effect transistors with excellent switching speed and high
carrier mobility (potentially, even ballistic transport). Indeed, Li et al.
[9] recently produced GNRs with widths below 10 nm using a chemical route and
found that all of the sub-10-nm GNRs were semiconductors and afforded graphene
field effect transistors with on-off ratios of about $10^{7}$ at room
temperature.[9, 10] Unfortunately, the yield of GNRs was low and their width
distribution was broad; widths ranged from less than 10 nm to $\sim$100 nm.
To realize the practical potential of narrow GNRs, methods for their mass
production are sorely needed. Lithographic patterning has been used to produce
wide ribbons ($>$20 nm) from graphene sheets, but the width and smoothness of
the GNRs were limited by the resolution of the lithography and etching
techniques. [11, 12] Bulk amounts of wide (20$-$300 nm) and few-layered GNRs
were synthesized by a chemical vapor deposition method.[13] Very recently, two
research groups found ways to unroll carbon nanotubes to produce nanoribbons.
Jiao et al. [14] demonstrated an approach to making GNRs with smooth edges and a
narrow width distribution (10$-$20 nm) by unzipping multiwalled carbon
nanotubes (MWCNT) through plasma etching of nanotubes partly embedded in a
polymer film, but the potential of this method for mass production is limited.
On the other hand, Kosynkin et al. [15] described a simple solution-based
oxidative process for producing a nearly 100% yield of water-soluble
nanoribbon structures by lengthwise cutting and unravelling of MWCNT side
walls. The resulting ribbons are wider (around 100$-$500 nm) and not
semiconducting, but easier to make in large amounts. Multilayered GNRs with a
width of about 100 nm were produced by lithium intercalation and exfoliation of
carbon nanotubes. [16] These experimental efforts indicate that large-scale
production of narrow GNRs is still very challenging.
In the present work, we propose that wide GNRs can be made semiconducting as
in the case of narrow GNRs by partial hydrogenation, because the hydrogenation
starts from the edges of GNRs and proceeds gradually toward the middle of the
GNRs. In this way, the difficulty of directly synthesizing narrow and smooth
GNRs can be avoided. Besides the electronic device applications, certain
partially hydrogenated GNRs with a band gap around 1.5 eV could be ideal
materials for solar cell absorbers due to the high carrier mobility of GNRs.
## Hydrogenation of 2D graphene
As a first step, we examine the hydrogenation of 2D graphene. Full
hydrogenation of graphene has been extensively studied,[17, 19, 18] but the
configuration of partial hydrogenation is not well understood. It is well
known that hydrogen atoms tend to adsorb on top of carbon atoms. To find
the lowest energy configuration of graphene with different coverages of
hydrogen atoms, we use the state-of-the-art “cluster expansion” method [20]
established in alloy theory. In essence, the total energy of an alloy
configuration is expressed in terms of a generalized Ising-like model. The
interaction parameters of the cluster expansion are obtained by mapping the
density functional total energies of tens of different configurations onto the
generalized Ising Hamiltonian. The Alloy Theoretic Automated Toolkit (ATAT)
code [21] is employed here to construct the cluster expansion Hamiltonian. The
spin-polarized density functional theory (DFT) calculations were performed on
the basis of the projector augmented wave method [22] encoded in the Vienna ab
initio simulation package [23] using the local density approximation. The
internal coordinates and the cell of the sampling configurations are fully
relaxed with a plane-wave cutoff energy of 500 eV.
In the cluster expansion process, we consider the C2T4 alloy (see Fig. 1a)
with the top sites T occupied by H atoms or vacancies. Our calculation shows
that there are four important pair interactions, as shown in Fig. 1a, and that
the three-body interactions are negligible. We can see that the interaction
between two H atoms adsorbed on the same C atom is strongly repulsive (0.340
eV). This is understandable because five-fold coordinated carbon is not
stable. In contrast, we find that the interaction parameter between two H
atoms adsorbed on opposite sides of two adjacent C atoms is large and
negative ($-0.276$ eV). The efficient strain relaxation and the absence of
dangling $\pi$ bonds in this four-fold coordinated carbon configuration
account for the stability. Using the cluster expansion Hamiltonian, we can
easily obtain the energy of a given alloy configuration and thus the ground
state structure of the partially hydrogenated graphene for a given supercell.
It is clear that the number of H atoms (n[H]) should not exceed that of C
atoms (n[C]), because otherwise some C atoms would bind more than one H
atom. When n[H]/n[C] $=1$, the ground state structure is graphane (see Fig.
1b), as found by Elias et al. [24] For n[H]/n[C] $=0.5$, the lowest energy
structure among all possible configurations with no more than 4 carbon atoms
per supercell is shown in Fig. 1c. The H atoms adsorb along a 1D zigzag
carbon chain such that each H atom has two neighboring H atoms adsorbed on the
opposite side of the adjacent C atoms. This is consistent with the fact that
the hydrogenation of neighboring C atoms from opposite sides is energetically
preferred. However, we find that the above structure is not the global ground
state structure with n[H]/n[C] $=0.5$. For example, the lowest energy
structure (Fig. 1d) among all possible configurations with no more than 8
carbon atoms per supercell has a lower energy, by 44 meV/C, than the
structure shown in Fig. 1c. This is because the number of carbon-carbon
$p_{z}$-$p_{z}$ $\pi$ bonds is increased from 0.5/C (Fig. 1c) to 0.625/C
(Fig. 1d). It is therefore expected that the global ground state of partially
hydrogenated graphene displays macroscopic phase separation between graphene
and graphane regions. To confirm this point, we consider a large supercell
($8\times 8$) with 24 adsorbed H atoms. Using the Monte Carlo annealing
technique, we find that all the H atoms form a close-packed cluster in the
ground state configuration (Fig. 1e). This shows that, in the global ground
state of partially hydrogenated graphene, phase separation into graphene and
graphane regions takes place.
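The cluster-expansion-plus-annealing workflow described above can be sketched
schematically as follows. The toy below keeps only a single attractive
nearest-neighbor pair interaction for adatoms on a small periodic lattice (the
real calculation uses the four pair interactions of Fig. 1a on the graphene
top-site lattice via the ATAT code), so it is meant only to illustrate why an
attractive effective H-H interaction drives the adatoms to cluster under Monte
Carlo annealing; the lattice, interaction value, and annealing schedule are all
illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L, nH = 16, 40                 # L x L periodic lattice of adsorption sites, nH adatoms
J = -0.1                       # eV, attractive effective pair interaction (toy value)

occ = np.zeros((L, L), dtype=int)
occ.ravel()[rng.choice(L * L, size=nH, replace=False)] = 1

def energy(o):
    """Cluster-expansion-like energy: pair interactions summed over nearest neighbors."""
    return J * np.sum(o * (np.roll(o, 1, axis=0) + np.roll(o, 1, axis=1)))

E = energy(occ)
for T in np.linspace(0.30, 0.01, 60):          # simulated annealing schedule (eV)
    for _ in range(2000):
        i = tuple(rng.integers(0, L, 2))       # pick two random sites
        j = tuple(rng.integers(0, L, 2))
        if occ[i] == occ[j]:
            continue                            # swap only an occupied with an empty site
        trial = occ.copy()
        trial[i], trial[j] = occ[j], occ[i]    # particle-conserving swap move
        dE = energy(trial) - E
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            occ, E = trial, E + dE

print("Final energy per adatom (eV):", E / nH)
print("Mean number of occupied neighbors per adatom:",
      float(np.sum(occ * (np.roll(occ, 1, 0) + np.roll(occ, -1, 0)
                          + np.roll(occ, 1, 1) + np.roll(occ, -1, 1))) / nH))
```

At low temperature the adatoms end up in a compact cluster, mimicking the
graphene/graphane phase separation found in the full calculation.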
We also calculate the barrier for an isolated H atom adsorbed on a carbon atom
of graphene to hop to the adjacent carbon atom, using the nudged elastic band
method.[25] The calculated barrier is $0.84$ eV, which is close to the value
($\sim 0.7$ eV) found for the (8,0) CNT.[26] This suggests that isolated H atoms
are relatively mobile, and that thermal annealing would result in the formation
of macroscopic H clusters. Recently, Singh et al. [27] theoretically studied the
electronic and magnetic properties of “nanoroads” of pristine graphene carved
into the electrically insulating matrix of a fully hydrogenated carbon sheet.
However, our results suggest that it would be difficult to realize such patterned
graphene nanoroads because of the tendency for phase separation into graphene and
graphane regions.
## Hydrogenation of 1D graphene nanoribbons
Now we turn to the study of hydrogenation of GNRs. There are two common types
of GNRs. One kind of the GNRs, called zigzag GNR (ZGNR), has zigzag-shaped
edges with the dangling $\sigma$ bonds passivated by hydrogen atoms. Following
conventional notation, we name the GNR shown in Fig. 2a as 8-ZGNR according to
the number of zigzag chains along the ribbon. First, we consider the
adsorption of a single H atom on 8-ZGNR using the supercell with two unit
cells. The adoption of a larger cell leads to qualitatively similar results.
To find the most stable configuration, we consider all nonequivalent possible
adsorption sites indicated in Fig. 2a. We define the adsorption energy as:
$E_{a}=-[E\mbox{(H-GNR)}-E\mbox{(H atom)}-E\mbox{(GNR)}],$ (1)
where $E\mbox{(H atom)}$, $E\mbox{(GNR)}$, and $E\mbox{(H-GNR)}$ are the total
energies of an isolated H atom, the pristine GNR, and the GNR with an adsorbed
H atom, respectively. Our calculated adsorption energies are shown in Fig. 2b.
The positive adsorption energy is a consequence of the formation of a C$-$H
bond. Remarkably, we find that the isolated H atom prefers to adsorb on the edge
carbon atom (site 1) over the other sites by at least 1.1 eV. This is because the
number of carbon-carbon $p_{z}$-$p_{z}$ $\pi$ bonds is largest in this
configuration, as in the hydrogenated graphene case. Experimentally, it was found
[28] that the atomic layer deposition of metal oxide on graphene occurs
preferentially at the edges, indicating that the chemical reactivity at the edges
of graphene is high,[29] which is consistent with our theoretical results.
Interestingly, the dependence of the adsorption energy on the distance between
the adsorption site and the edge is not monotonic: it displays an odd-even
oscillation with smaller adsorption energies at even sites; the adsorption energy
of even (odd) sites increases (decreases) with the distance, and eventually the
energy difference between adsorption at even and odd sites becomes very small.
The smallest adsorption energy, at site 2, might be due to the presence of two
rather unstable edge carbon atoms near site 2, which participate in the formation
of only one $\pi-\pi$ bond.
A second H atom will adsorb on the opposite side of the carbon atom (site 2)
adjacent to the edge carbon atom (site 1) to which the first H atom is bound,
so as to saturate the broken bond. The adsorption energy of this configuration
is 5.73 eV/(two H atoms), which is larger than the sum of the adsorption
energies of a single H atom on sites 1 and 2, indicating a cooperative
adsorption behavior. This configuration is more stable than that with two H
atoms adsorbed on two outermost edge carbon atoms, for which the adsorption
energy (4.62 eV) is about twice the adsorption energy of a single H atom on an
edge carbon atom. The third H atom is expected to adsorb on the edge carbon
atom adjacent to the carbon atom to which the second H atom is bound. If the
number of H atoms is equal to the total number of edge carbon atoms (on both
edges), all the H atoms will adsorb on the outermost carbon atoms of one edge
of the ZGNR, as shown in Fig. 2c, where the left edge is chosen without loss
of generality. When the number of H atoms is twice the total number of edge
carbon atoms, the excess H atoms will bind to the outermost carbon atoms of
the right edge, resulting in a symmetric configuration (see Fig. 2d). The
asymmetric configuration, in which all H atoms adsorb on the left side, has a
higher energy by 30 meV per edge carbon atom. Nevertheless, the asymmetric
configuration has an electronic structure similar to that of the symmetric
configuration, except for a small asymmetric splitting in the band structure.
Thus, our calculations suggest the following H adsorption scenario: the H
atoms adsorb on the outermost bare zigzag carbon chain of one edge, and then
on the outermost bare zigzag carbon chain of the other edge. This process of
alternating hydrogenation continues until no more free H atoms are available.
In the above discussion, we focused on the hydrogenation of ZGNRs. For
completeness, we now investigate the hydrogenation of the other kind of GNRs,
namely, the armchair GNR (AGNR) with armchair-shaped edges. Similar to the
case of ZGNRs, an isolated H atom also prefers to adsorb on an edge carbon
atom (see Fig. S1 for the calculated adsorption energies), as shown in Fig. 3a
for the case of 13-AGNR. A second H atom will adsorb on the adjacent
carbon atom of the edge C$-$C dimer, as expected (Fig. 3b). It is found that
four H atoms also adsorb symmetrically on the two edges of 13-AGNR (Fig. 3c).
Therefore, the H adsorption on AGNRs resembles that on ZGNRs, except that the H
atoms adsorb on AGNRs in a dimer-line-by-dimer-line manner.
To understand the electronic and magnetic properties of partially hydrogenated
GNRs, we compare, as a representative example, partially hydrogenated 8-ZGNR with
four bare zigzag carbon rows, hereafter referred to as 8-ZGNR-4, with 4-ZGNR
without adsorbed H atoms. In both cases, we find that the electronic ground
state is the antiferromagnetic (AFM) state, in which each of the two electronic
edge states is ferromagnetically ordered but the two edge states are
antiferromagnetically coupled to each other. For 8-ZGNR-4, the AFM state is
more stable than the ferromagnetic (FM) state, in which all spins are
ferromagnetically aligned, by 7.2 meV/unit cell. A similar stability difference
between the AFM and FM states is found for 4-ZGNR (i.e., 6.2 meV/unit
cell). Moreover, they have almost the same local magnetic moment ($\sim 0.10$
$\mu_{B}$). Because the local density approximation is well known to seriously
underestimate the band gap of semiconductors, we calculate the band structure
of the partially hydrogenated ZGNRs by employing the screened Heyd-Scuseria-
Ernzerhof 06 (HSE06) hybrid functional, [30, 31, 32] which was shown to give a
good band gap for many semiconductors, including ZGNRs. [33] The HSE06 band
structures calculated for 4-ZGNR and 8-ZGNR-4 in the AFM state are shown in
Figs. 4a and b, respectively. In the energy region of the band gap, the band
structure of 8-ZGNR-4 is similar to that of 4-ZGNR, and the band gap (1.44 eV)
of 8-ZGNR-4 is close to that (1.53 eV) of 4-ZGNR. Therefore, the electronic
and magnetic properties of partially hydrogenated wide GNRs are determined by
those of their graphene part, i.e., the bare zigzag carbon rows. As already
mentioned, semiconducting partially hydrogenated GNRs can be used as transistors,
and those with a small number of bare zigzag carbon rows might be used as
solar cell absorber materials: 8-ZGNR-4 (and N-ZGNR-4 with N$>4$) has a
direct band gap that is close to the optimal value ($\sim 1.5$ eV) [34] for
solar energy harvesting, as well as high carrier mobility.
Experimentally, the edges of synthesized GNRs might be rough. It is
interesting to see whether the resulting hydrogenated GNRs with rough edges
have an electronic structure similar to that of hydrogenated perfect GNRs. In
order to address this issue, we study the hydrogenation of bare 8-ZGNR with
Stone-Wales (SW) reconstructions at the edges (see Fig. 5a), which are typical
defects of bare GNRs. [35] There are four edge carbon atoms per edge: two (C
and D in Fig. 5a) belong to the 7-ring and form a triple bond with each other,
and the others (A and B in Fig. 5a) are isolated edge carbon atoms of the
6-ring. Due to the presence of dangling $\sigma$ bonds at the A and B sites, a
single H atom will first bond to the A or B site: the adsorption energy
calculation gives $E_{a}\mbox{(A)}=7.40$ eV, $E_{a}\mbox{(B)}=7.34$ eV,
$E_{a}\mbox{(C)}=4.98$ eV, and $E_{a}\mbox{(D)}=5.01$ eV. Excess H atoms will
adsorb gradually toward the inner part of the GNR. Shown in Fig. 5b is the
partially hydrogenated 8-ZGNR containing SW defects with four bare zigzag
carbon rows. For the two carbon atoms common to both the 5-ring and the 7-ring,
the two H atoms adsorb above the ribbon plane, as a result of the odd-membered
ring. We find that this partially hydrogenated 8-ZGNR has properties similar to
those of 8-ZGNR-4: it is semiconducting with a similar band gap, and the AFM
state is more stable than the FM state by 5.9 meV per ZGNR unit cell. The spin
density plot of the AFM state (Fig. 5b) shows that the magnetic moments are
also mainly due to the sp2 carbon atoms next to the sp3 carbon atoms.
Therefore, partial hydrogenation can also convert a GNR with rough edges into
one with well-defined electronic and magnetic properties similar to those of a
perfect GNR.
## Concluding remarks
In summary, we performed a comprehensive first-principles DFT study of the
hydrogenation of graphene and GNRs. Hydrogen adsorption on graphene
results in a complete phase separation between bare graphene and graphane. As
for the hydrogen adsorption on GNRs, our study reveals the following rules:
(i) Hydrogen atoms adsorb preferentially on the outermost edge carbon atoms.
(ii) Hydrogen atoms add pairwise, adsorbing on adjacent carbon atoms in a
one-up, one-down manner. (iii) The above adsorption process shifts from one
edge of a GNR to the other, and this alternating hydrogenation process
continues until there are no more free H atoms.
Our study suggests that partial hydrogenation can make wide GNRs
effectively “narrower” in their physical properties, because partially
hydrogenated wide GNRs have electronic, optical, and magnetic properties
similar to those of the narrow GNRs corresponding to their graphene parts.
Therefore, the experimental difficulty of synthesizing GNRs with narrow widths
and smooth edges could be bypassed through partial hydrogenation of wide GNRs.
Partial hydrogenation might pave the way for the application of GNRs as
transistors or as novel carbon-based solar cell absorber materials. In this
study, we considered only the adsorption of H atoms; however, the concept should
remain valid if other chemical groups are used instead.
## Acknowledgments
Work at NREL was supported by the U.S. Department of Energy, under Contract
No. DE-AC36-08GO28308, and work at NCSU by the U. S. Department of Energy,
under Grant DE-FG02-86ER45259.
## Supporting Information Available:
The adsorption energies of one H atom on different carbon sites of 13-AGNR.
This information is available free of charge via the Internet at
http://pubs.acs.org/.
## References
* [1] Castro Neto,A. H.; Guinea, F.; Peres, N. M. R.; Novoselov, K. S.; Geim, A. K. Rev. Mod. Phys. 2009, 81, 109.
* [2] Geim,A. K.; Novoselov, K. S. Nature Mater. 2007, 6, 183.
* [3] Son,Y.-W.; Cohen, M. L.; Louie, S. G. Nature (London) 2006, 444, 347.
* [4] Wakabayashi, K. Phys. Rev. B 2001, 64, 125428.
* [5] Barone, V.; Hod, O.; Scuseria, G. E. Nano Lett. 2006, 6, 2748.
* [6] Areshkin, D. A.; Gunlycke, D.; White, C. T. Nano Lett. 2007, 7, 204.
* [7] Kan,E. J.; Li, Z.; Yang, J.; Hou, J. G. J. Am. Chem. Soc. 2008, 130, 4224.
* [8] Yang, L.; Park, C. -H.; Son, Y. -W.; Cohen, M. L.; Louie, S. G. Phys. Rev. Lett. 2007, 99, 186801.
* [9] Li, X.; Wang, X.; Zhang, L.; Lee, S.; Dai, H. Science 2008, 319, 5867.
* [10] Wang,X. R.; Ouyang, Y. J.; Li, X. L.; Wang, H. L.; Guo, J.; Dai, H. J. Phys. Rev. Lett. 2008, 100, 206803.
* [11] Chen, Z.; Lin, Y.-M.; Rooks, M. J.; Avouris, P. Physica E 2007, 40, 228.
* [12] Han,M. Y.; Özyilmaz, B.; Zhang, Y.; Kim, P. Phys. Rev. Lett. 2007, 98, 206805.
* [13] Campos-Delgado, J. et al. Nano Lett. 2008, 8, 2773.
* [14] Jiao, L.; Zhang, L.; Wang, X.; Diankov, G.; Dai, H. Nature (London) 2009, 458, 877.
* [15] Kosynkin, D. V.; Higginbotham, A. L.; Sinitskii, A.; Lomeda, J. R.; Dimiev, A.; Price, B. K.; Tour, J. M. Nature (London) 2009, 458, 872.
* [16] Cano-Márquez, A. G. et al. Nano Lett. 2009, 9, 1527.
* [17] Sofo,Jorge O.; Chaudhari, Ajay S.; Barber, Greg D. Phys. Rev. B 2007, 75, 153401.
* [18] Boukhvalov,D. W.; Katsnelson, M. I.; Lichtenstein, A. I. Phys. Rev. B 2008, 77, 035427.
* [19] Lu, N.; Li, Z.; and Yang, J. arXiv:0904.4540.
* [20] Ferreira, L. G.; Wei, S.-H.; Zunger, A. Phys. Rev. B 1989, 40, 3197.
* [21] van de Walle,A.; Asta, M.; Ceder, G. CALPHAD Journal 2002, 26, 539. http://www.its.caltech.edu/~avdw/atat/.
* [22] (a) Blöchl,P. E. Phys. Rev. B 1994, 50, 17953; (b) Kresse,G.; Joubert, D. Phys. Rev. B 1999, 59, 1758.
* [23] Kresse, G.; Furthmüller, J. Comput. Mater. Sci. 1996, 6, 15; Phys. Rev. B 1996, 54, 11169.
* [24] Elias,D. C.; Nair, R. R.; Mohiuddin, T. M. G.; Morozov, S. V.; Blake, P.; Halsall, M. P.; Ferrari, A. C.; Boukhvalov, D. W.; Katsnelson, M. I.; Geim, A. K.; Novoselov, K. S. Science 2009, 323, 610.
* [25] Henkelman, G.; Uberuaga, B. P.; Jónsson, H. J. Chem. Phys. 2000, 113, 9901.
* [26] Zhang, Z.; Cho, K. Phys. Rev. B 2007, 75, 075420.
* [27] Singh,Abhishek K.; Yakobson, Boris I. Nano Lett. 2009, 9, 1540.
* [28] Wang, X.; Tabakman, Scott M.; Dai, H. J. Am. Chem. Soc. 2008, 130, 8152.
* [29] Wang, X.; Li, X.; Zhang, L.; Yoon, Y.; Weber, Peter K.; Wang, H.; Guo, J.; Dai, H. Science 2009, 324, 768.
* [30] Heyd,J.; Scuseria, G. E.; Ernzerhof, M. J. Chem. Phys. 2003, 118, 8207.
* [31] Krukau,A. V.; Vydrov, O. A.; Izmaylov, A. F.; Scuseria, G. E. J. Chem. Phys. 2006, 125, 224106.
* [32] Paier,J.; Marsman, M.; Hummer, K.; Kresse, G.; Gerber, I. C.; Ángyán, J. G. J. Chem. Phys. 2006, 124, 154709.
* [33] Hod, O.; Barone, V.; Scuseria, G. E. Phys. Rev. B 2008, 77, 035411.
* [34] Thompson, B. C., Fréchet, J. M. J. Angew. Chem., Int. Ed. 2008, 47, 58.
* [35] Huang, B.; Liu, M.; Su, N.; Wu, J.; Duan, W.; Gu, B.; Liu, F. Phys. Rev. Lett. 2009, 102, 166404.
Figure 1: (a) The $C_{2}T_{4}$ structure used in the cluster expansion
calculations. “T” refers to the top site of graphene. The important pair
interactions are indicated by arrows. The numbers (in eV) give the pair
interaction parameters of the cluster expansion expression. (b) The ground
state structure (graphane) of hydrogenated graphene with n[H]/n[C] $=1$. (c)
The lowest energy structure of hydrogenated graphene with n[H]/n[C] $=0.5$
among all possible structures with no more than four carbon atoms per cell.
(d) The lowest energy structure of hydrogenated graphene with n[H]/n[C] $=0.5$
among all possible structures with no more than eight carbon atoms per cell.
(e) The ground state structure of a very large graphene cell with 24 adsorbed
H atoms. The region enclosed by dashed lines denotes the cell.
Figure 2: (a) The structure of 8-ZGNR with numbers indicating the different
nonequivalent carbon sites. The supercell adopted in the DFT calculations is
denoted by solid horizontal lines. (b) The adsorption energy of one H atom on
the different carbon sites labeled in (a). (c) and (d) show the ground state
structures of 8-ZGNR with one and two adsorbed H atoms per edge carbon atom,
respectively.
Figure 3: Schematic illustration of the ground state structures of 13-AGNR
with (a) one, (b) two, or (c) four adsorbed H atoms per unit cell. The solid
(dashed) big circle indicates that the H atom adsorbs on the top site above
(below) the ribbon plane.
Figure 4: Electronic band structures of 4-ZGNR and partially hydrogenated
8-ZGNR with four bare zigzag carbon rows (i.e., 8-ZGNR-4) in the AFM state from
HSE06 calculations. The horizontal dashed lines denote the top of the valence
bands. The insets show the structures of 4-ZGNR and 8-ZGNR-4.
Figure 5: (a) Bare 8-ZGNR with Stone-Wales (SW) defects at the edges. A, B, C,
and D label the different edge carbon atoms. The supercell is denoted by solid
horizontal lines. (b) The partially hydrogenated 8-ZGNR containing SW defects
with four bare zigzag carbon rows and the spin density distribution in the AFM
state.
TOC graphic
|
arxiv-papers
| 2009-05-12T16:12:41 |
2024-09-04T02:49:02.557612
|
{
"license": "Public Domain",
"authors": "Hongjun Xiang, Erjun Kan, Su-Huai Wei, Myung-Hwan Whangbo, Jinlong\n Yang",
"submitter": "H. J. Xiang",
"url": "https://arxiv.org/abs/0905.1895"
}
|
0905.2002
|
On leave of absence from the Institute of Physics and Electronics, Hanoi,
Vietnam
# Exact and approximate ensemble treatments
of thermal pairing in a multilevel model
N. Quang Hung1 (nqhung@riken.jp) and N. Dinh Dang1,2 (dang@riken.jp)
1) Heavy-Ion Nuclear Physics Laboratory, RIKEN Nishina Center for
Accelerator-Based Science, 2-1 Hirosawa, Wako City, 351-0198 Saitama, Japan
2) Institute for Nuclear Science and Technique, Hanoi, Vietnam
###### Abstract
A systematic comparison is conducted for pairing properties of finite systems
at nonzero temperature as predicted by the exact solutions of the pairing
problem embedded in three principal statistical ensembles, as well as the
unprojected (FTBCS1+SCQRPA) and Lipkin-Nogami projected (FTLN1+SCQRPA)
theories that include the quasiparticle number fluctuation and coupling to
pair vibrations within the self-consistent quasiparticle random-phase
approximation. The numerical calculations are performed for the pairing gap,
total energy, heat capacity, entropy, and microcanonical temperature within
the doubly-folded equidistant multilevel pairing model. The FTLN1+SCQRPA
predictions agree best with the exact grand-canonical results. In general, all
approaches clearly show that the superfluid-normal phase transition is
smoothed out in finite systems. A novel formula is suggested for extracting
the empirical pairing gap in reasonable agreement with the exact canonical
results.
###### pacs:
21.60.-n, 21.60.Jz, 24.60.-k, 24.10.Pa, 21.10.Ma
## I INTRODUCTION
Pairing correlations are a fundamental feature responsible for the
superconducting (superfluid) properties of many-body systems ranging from very
large ones, such as neutron stars, to tiny ones, such as atomic nuclei. In
macroscopic systems such as superconductors, pairing correlations are destroyed
as the temperature $T$ increases and completely vanish at $T_{\rm c}\simeq
0.57\Delta(0)$, the critical temperature of the phase transition from the
superfluid state to the normal one. Here $\Delta(0)$ is the value of the pairing
gap at $T=0$. Recent years have witnessed a renewed interest in pairing
correlations, supported by the practical availability of exact solutions of the
pairing problem, the studies of unstable nuclei, the BCS to Bose-Einstein
condensation crossover, etc.
The superfluid properties and superfluid-normal (SN) phase transition of
infinite systems are accurately described by the Bardeen-Cooper-Schrieffer
theory BCS , where the average within the grand canonical ensemble (GCE) is
used to obtain the occupation number in the form of Fermi-Dirac distribution
for noninteracting fermions. The GCE consists of identically prepared systems
in thermal equilibrium, each of which shares its energy and number of
particles with an external heat bath. As compared to the other two principal
thermodynamic ensembles, namely the canonical ensemble (CE) and the
microcanonical one (MCE), the GCE is, perhaps, the most popular in theoretical
studies of systems at finite temperature because it is very convenient for
calculating average thermodynamic quantities 111For instance, the double-time
Green’s functions $G(t,t^{\prime})$, which are defined by using the GCE
Green , depend only on the time difference $(t-t^{\prime})$, which greatly
simplifies the derivations in many statistical physics applications.. The CE is
also in contact with the heat bath, but the particle number is the same for
all systems. The MCE is an ensemble of thermally isolated systems sharing the
same energy and particle number. In the thermodynamic limit (i.e., when the
system’s particle number $N$ and volume $V$ approach infinity while $N/V$
remains finite), fluctuations of energy and particle number vanish, so all
three ensembles yield the same average values for thermodynamic quantities.
The thermodynamic limit also works quite well for large systems, where these
fluctuations are negligible. Discrepancies between the predictions of the
three ensembles arise when thermodynamics is applied to small systems
such as atomic nuclei or nanometer-size clusters. These systems have a fixed
and not very large number of particles, and their single-particle energy
spectra are discrete, with level spacings comparable to the pairing gap. Under
these circumstances, the justification for using the GCE becomes
questionable. A number of theoretical studies have also shown that, in these
tiny systems, thermal fluctuations become so large that they smooth out the
sharp SN transition Moretto ; Goodman ; Egido ; SPA ; Zele ; MBCS ; FTBCS1 ;
AFTBCS . As a result, the pairing gap never collapses but decreases
monotonically with increasing $T$. These predictions are in qualitative
built on the eigenvalues obtained by exactly solving the pairing problem
Richardson ; EP . As a matter of fact, even at $T=$ 0, the exact pairing
solution in nuclei shows a sizable pairing energy in the region where the BCS
solution collapses EP . In the literature so far, under the pretext that a
nucleus is a system with a fixed number of particles, the thermodynamic
averages of the exact solutions of the pairing Hamiltonian are usually carried
out within the CE, and the results are compared with those obtained within
different theoretical approximations at $T\neq$ 0\. The latter, such as the
BCS, Hartree-Fock-Bogoliubov (HFB) theories, etc., as a rule, are always
derived within the GCE, where both energy and particle number fluctuate. On
the other hand, the well-known argument that the nuclear temperature should be
extracted from the MCE of thermally isolated nuclei is also quite often
debated and studied in detail Zele .
These results suggest that a thorough comparison of the predictions offered by
the exact pairing solutions averaged within three principal thermodynamic
ensembles, and those given by the recent microscopic approaches, which include
fluctuations around the thermal pairing mean field in nuclei, might be timely
and useful. This question is not new, but the answers to it have been so far
only partial. Already in the sixties Kubo Kubo drew attention to the
thermodynamic effects in very small metal particles. Later, Denton et al.
Denton used Kubo’s assumption to study the difference between the predictions
offered by the GCE and CE for the heat capacity and spin susceptibility within
a spinless equidistant level model for electrons. Very recently, the
predictions for thermodynamic quantities such as total energy, heat capacity,
entropy, and microcanonical temperature within three principal ensembles were
studied and compared in Ref. Sumaryada by using the exact solutions of an
equidistant multilevel model with constant pairing interaction parameter.
However, no results for the pairing gaps as functions of temperature were
reported there.
In the present paper, we carry out a systematic comparison of predictions for
nuclear pairing properties obtained by averaging the exact solutions in three
principal thermodynamic ensembles as well as those offered by recent
microscopic approaches to thermal pairing. For the latter, we choose the
unprojected and particle-number projected versions of the FTBCS1+SCQRPA, which
we recently developed in Ref. FTBCS1 . This approach has been rigorously
derived by using the same variational procedure as in the derivation of the
standard BCS theory, taking into account the effects of the quasiparticle-
number fluctuation (QNF) as well as the coupling to the self-consistent
quasiparticle random-phase approximation (SCQRPA).
The paper is organized as follows. The pairing Hamiltonian, its
diagonalization, ensemble treatments of the exact pairing solutions, the main
results of the FTBCS1 and FTBCS1+SCQRPA theories, as well as their particle-
number projected versions, are summarized in Sec. II. The numerical results
obtained within the doubly-folded equidistant multilevel model Richardson are
analyzed in Sec. III. The paper is summarized in the last section, where
conclusions are drawn.
## II Exact solution of pairing Hamiltonian
### II.1 Exact solution at zero temperature
We consider a system of $N$ particles with single-particle energies
$\epsilon_{j}$, which are generated by particle creation operators
$a_{jm}^{\dagger}$ on $j$th orbitals with shell degeneracies $2\Omega_{j}$
($\Omega_{j}=j+1/2$), and interacting via a monopole-pairing force with a
constant parameter $G$. This system is described by the well-known pairing
Hamiltonian
$H=\sum_{jm}\epsilon_{j}\hat{N}_{j}-G\sum_{jj^{\prime}}\hat{P}_{j}^{\dagger}\hat{P}_{j^{\prime}}~{},$
(1)
where the particle-number operator $\hat{N}_{j}$ and pairing operator
$\hat{P}_{j}$ are given as
$\hat{N}_{j}=\sum_{m}a_{jm}^{\dagger}a_{jm}~{},\hskip
14.22636pt\hat{P}_{j}^{\dagger}=\sum_{m>0}a_{jm}^{\dagger}a_{j\widetilde{m}}^{\dagger}~{},\hskip
14.22636pt\hat{P}_{j}=(\hat{P}_{j}^{\dagger})^{\dagger}~{},$ (2)
with the symbol $~{}~{}\widetilde{}~{}~{}$ denoting the time-reversal
operator, namely $a_{j\widetilde{m}}=(-)^{j-m}a_{j-m}$. For a two-component
system with $Z$ protons and $N$ neutrons, the sums in Eq. (1) run also over
all $j_{\tau}m_{\tau}$, $j^{\prime}_{\tau}m^{\prime}_{\tau}$, and $G_{\tau}$
with $\tau=(Z,N)$.
The pairing Hamiltonian (1) was solved exactly for the first time in the
sixties by Richardson Richardson . By noticing that the operators
$\hat{J}_{j}^{z}\equiv(\hat{N}_{j}-\Omega_{j})/2$,
$\hat{J}^{+}_{j}\equiv\hat{P}_{j}^{\dagger}$, and
$\hat{J}^{-}_{j}\equiv\hat{P}_{j}$ close an SU(2) algebra of angular momentum,
the authors of Ref. EP have reduced the problem of solving the Hamiltonian
(1) to its exact diagonalization in the subsets of representations, each of
which is given by a set of basis states
$|k\rangle\equiv|\\{s_{j}\\},\\{N_{j}\\}\rangle$ characterized by the partial
occupation number $N_{j}\equiv(J_{j}^{z}+\Omega_{j})/2$ and partial seniority
(the number of unpaired particles) $s_{j}\equiv\Omega_{j}-2J_{j}$ of the $j$th
single-particle orbital. Here $J_{j}(J_{j}+1)$ is the eigenvalue of the total
angular momentum operator
$(\hat{J}_{j})^{2}\equiv\hat{J}^{+}_{j}\hat{J}^{-}_{j}+\hat{J}^{z}_{j}(\hat{J}^{z}_{j}-1)$.
The partial occupation number $N_{j}$ and seniority $s_{j}$ are bound by the
constraints of angular momentum algebra $s_{j}\leq N_{j}\leq
2\Omega_{j}-s_{j}$ with $s_{j}\leq\Omega_{j}$. In the present paper, we use
this exact diagonalization method to find the eigenvalues ${\cal E}_{s}$ and
eigenstates (eigenvectors) $|s\rangle$ of Hamiltonian (1). Each $s$th
eigenstate $|s\rangle$ is fragmented over the basis states $|k\rangle$
according to $|s\rangle=\sum_{k}C_{k}^{(s)}|k\rangle$ with the total seniority
$s=\sum_{j}s_{j}$, and has the degeneracy
$d_{s}=\prod_{j}\bigg{[}\frac{(2\Omega_{j})!}{s_{j}!(2\Omega_{j}-s_{j})!}-\frac{(2\Omega_{j})!}{(s_{j}-2)!(2\Omega_{j}-s_{j}+2)!}\bigg{]}~{},$ (3)
where $(C_{k}^{(s)})^{2}$ determine the weights of the eigenvector components.
The state-dependent exact occupation number $f_{j}^{(s)}$ on the $j$th single-
particle orbital is then calculated as the average value of partial occupation
numbers $N_{j}^{(k)}$ weighted over the basis states $|k\rangle$ as
$f_{j}^{(s)}=\frac{\sum_{k}N_{j}^{(k)}(C_{k}^{(s)})^{2}}{\sum_{k}(C_{k}^{(s)})^{2}}=\sum_{k}N_{j}^{(k)}(C_{k}^{(s)})^{2}~{},\hskip
5.69054pt{\rm with}\hskip 5.69054pt\sum_{k}(C_{k}^{(s)})^{2}=1~{}.$ (4)
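To make the structure of this diagonalization concrete, the following minimal Python sketch builds and diagonalizes the seniority-zero ($s_{j}=0$) block of Hamiltonian (1) for the doubly-folded equidistant model used later in Sec. IV; the level scheme and value of $G$ are illustrative, and the sketch does not reproduce the full seniority-block treatment with broken pairs employed in the present work.

import itertools
import numpy as np

def pairing_hamiltonian_s0(omega, npair, eps, G):
    # Seniority-zero block of Hamiltonian (1) for doubly degenerate levels
    # (Omega_j = 1): basis states are the sets of levels occupied by a pair.
    basis = list(itertools.combinations(range(omega), npair))
    index = {c: i for i, c in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for i, occ in enumerate(basis):
        # diagonal part: 2 * sum_j eps_j minus G * P_j^+ P_j on each paired level
        H[i, i] = 2.0 * sum(eps[j] for j in occ) - G * len(occ)
        # off-diagonal part: -G scatters one pair from level j to an empty level jp
        for j in occ:
            for jp in set(range(omega)) - set(occ):
                k = index[tuple(sorted(set(occ) - {j} | {jp}))]
                H[k, i] -= G
    return H, basis

omega, N, G = 8, 8, 0.9                       # illustrative half-filled case
eps = np.array([j - (omega + 1) / 2.0 for j in range(1, omega + 1)])
H, basis = pairing_hamiltonian_s0(omega, N // 2, eps, G)
evals, evecs = np.linalg.eigh(H)
# state-dependent occupation numbers f_j^(s) of Eq. (4) for the lowest state
w = evecs[:, 0] ** 2
f = np.array([sum(wk for c, wk in zip(basis, w) if j in c) for j in range(omega)])
print(evals[0], f)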
### II.2 Exact solution embedded in thermodynamic ensembles
The properties of the nucleus as a system of $N$ interacting fermions at
energy ${\cal E}$ can be extracted from its level density $\rho({\cal E},N)$
Bohr
$\rho({\cal E},N)=\sum_{s,n}\delta({\cal E}-{\cal
E}_{s}^{(n)})\delta(N-n)~{},$ (5)
where ${\cal E}_{s}^{(n)}$ are the energies of the quantum states $|s\rangle$
of the $n$-particle system. Applied to the pairing problem, these energies
are the eigenvalues, which are obtained by exactly diagonalizing the pairing
Hamiltonian (1), as has been discussed in the previous section.
#### II.2.1 Grand canonical and canonical ensembles
Since each system in a GCE exchanges its energy and particle number with
the heat bath at a given temperature $T=1/\beta$, both its energy and its
particle number are allowed to fluctuate. Therefore, instead of the level
density (5), it is convenient to use the average value, which is obtained by
integrating (5) over the intervals in ${\cal E}$ and $N$. This Laplace
transform of the level density $\rho({\cal E},N)$ defines the grand partition
function $Z(\beta,\lambda)$ Bohr ,
${\cal Z}(\beta,\lambda)=\int\int_{0}^{\infty}\rho({\cal E},N)e^{-\beta({\cal
E}-\lambda N)}dNd{\cal E}=\sum_{n}e^{\beta\lambda n}Z(\beta,n)~{},$ (6)
where $Z(\beta,n)$ denotes the partition function of the CE at temperature $T$
and particle number fixed at $n$, namely
${Z}(\beta,n)=\sum_{s}p_{s}(\beta,n)~{},\hskip
14.22636ptp_{s}(\beta,n)=d_{s}^{(n)}e^{-\beta{\cal E}_{s}^{(n)}}~{}.$ (7)
Within the GCE, the chemical potential $\lambda$ should be chosen as a
function of $T$ so that the average particle number $\langle N\rangle$ of the
system always remains equal to $N$. The summations over $n$ and $s$ in Eqs.
(6) and (7) are obtained by using Eq. (5) taking into account the degeneracy
$d_{s}^{(n)}$ (3) of each $s$th state in the $n$-particle system, and carrying
out the double integration with $\delta$ functions.
The thermodynamic quantities such as total energy ${\cal E}$, heat capacity
$C$ within the GCE and CE are as usual given as
$\langle{\cal E}\rangle_{\alpha}=-\frac{{\partial{\ln}{{\bf
Z}(\beta)_{\alpha}}}}{{\partial\beta}};\hskip
14.22636ptC^{(\alpha)}=\frac{\partial\langle{\cal E}\rangle_{\alpha}}{\partial
T}~{}$ (8)
where $\alpha$=GC for the GCE and $\alpha$=C for the CE.
The thermodynamic entropy $S^{(\alpha)}_{\rm th}$ is calculated within the GCE
or CE based on the general definition of the change of entropy (by Clausius):
$dS=\beta d{\cal E}~{}.$ (9)
By using the differentials of ${\rm ln}{\bf Z}(\beta)_{\alpha}$, one obtains
$S^{(\rm GC)}_{\rm th}=\beta(\langle{\cal E}\rangle_{\rm GC}-\lambda N)+{\rm
ln}{\cal Z}(\beta,\lambda)~{},\hskip 14.22636ptS^{(\rm C)}_{\rm
th}=\beta\langle{\cal E}\rangle_{\rm C}+{\rm ln}Z(\beta,N)~{}.$ (10)
The occupation number $f_{j}$ on the $j$th single-particle orbital is obtained
as the ensemble average of the state-dependent occupation numbers
$f_{j}^{(s)}$, namely
$f_{j}^{(\rm GC)}=\frac{1}{{\cal
Z}(\beta,\lambda)}\sum_{s,n}f_{j}^{(s,n)}d_{s}^{n}e^{-\beta({\cal
E}_{s}^{(n)}-\lambda n)}~{},\hskip 14.22636ptf_{j}^{(\rm
C)}=\frac{1}{{Z}(\beta,N)}\sum_{s}f_{j}^{(s,N)}d_{s}^{N}e^{-\beta{\cal
E}_{s}^{N}}~{},$ (11)
for the GCE and CE, respectively.
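As an illustration of Eqs. (7)-(11), the canonical averages can be assembled directly from the exact eigenvalues. The Python sketch below is a minimal version of this bookkeeping, assuming arrays of eigenvalues $E_{s}$ and degeneracies $d_{s}$ such as those produced by the diagonalization sketch above; the GCE quantities then follow from Eq. (6) by additionally summing $Z(\beta,n)e^{\beta\lambda n}$ over $n$ with $\lambda$ tuned so that $\langle N\rangle=N$.

import numpy as np

def canonical_quantities(E, d, T):
    # CE partition function and averages, Eqs. (7), (8), and (10).
    # E: eigenvalues E_s of the N-particle system, d: degeneracies d_s, T in MeV.
    E = np.asarray(E, float)
    d = np.asarray(d, float)
    results = []
    for t in np.atleast_1d(T):
        beta = 1.0 / t
        w = d * np.exp(-beta * (E - E.min()))       # shifted for numerical stability
        Z = w.sum()
        p = w / Z
        Emean = (p * E).sum()
        C = beta**2 * ((p * E**2).sum() - Emean**2)  # C = dE/dT via the energy fluctuation
        S = beta * (Emean - E.min()) + np.log(Z)     # Eq. (10) rewritten with the shifted Z
        results.append((Emean, C, S))
    return np.array(results)

# example: quantities between 0.1 and 5 MeV with unit degeneracies
# table = canonical_quantities(evals, np.ones_like(evals), np.linspace(0.1, 5.0, 50))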
#### II.2.2 Microcanonical ensemble
Unlike the GCE and CE, the MCE involves no heat bath. An MCE
consists of thermodynamically isolated systems, each of which may be in a
different microstate (microscopic quantum state) $|s\rangle$, but has the same
total energy ${\cal E}$ and particle number $N$. Since the energy and particle
number of the system are fixed, one should use the level density (5) to
calculate directly the entropy by Boltzmann’s definition, namely
$S^{(\rm MC)}({\cal E})={\ln}\Omega({\cal E})~{},$ (12)
where $\Omega({\cal E})=\rho({\cal E})\Delta{\cal E}$ is the statistical
weight, i.e. the number of eigenstates of Hamiltonian (1) within a fixed
energy interval $\Delta{\cal E}$. The condition of thermal equilibrium then
leads to the standard definition of temperature within the MCE as Bohr
$\beta=\frac{\partial S^{(\rm MC)}({\cal E})}{\partial{\cal
E}}=\frac{1}{\rho({\cal E})}\frac{\partial\rho({\cal E})}{\partial{\cal
E}}~{},$ (13)
using which one can build a “thermometer” for each value of the excitation
energy ${\cal E}$ of the system.
In practical calculations, to handle the numerical derivative at the right-
hand side of Eq. (13), one needs a continuous energy dependence of $\rho({\cal
E})$ in the form of a distribution in Eq. (5). This is realized by replacing
the Dirac-$\delta$ function $\delta(x)$ at the right-hand side of Eq. (5),
where $x\equiv{\cal E}-{\cal E}_{s}$, with a nascent $\delta$-function
$\delta_{\sigma}(x)$ ($\sigma>$ 0), i.e. a function that becomes the original
$\delta(x)$ in the limit $\sigma\rightarrow$ 0 222This replacement is
equivalent to the folding procedure for the average density, discussed in Sec.
2.9.3 of the textbook Ring , taken at zero degree ($M=$ 0) of the Laguerre
polynomial $L_{M}^{1/2}(x)$.. Among the popular nascent $\delta$ functions are
the Gaussian (or normal) distribution, Breit-Wigner distribution BW , and
Lorentz (or relativistic Breit-Wigner) distribution, which are given as
$\delta_{\sigma}(x)_{\rm G}=\frac{1}{\sigma\sqrt{2\pi}}{\rm
e}^{-\frac{x^{2}}{2\sigma^{2}}}~{},\hskip 8.53581pt\delta_{\sigma}(x)_{\rm
BW}=\frac{1}{\pi}\frac{\sigma}{x^{2}+\sigma^{2}}~{},$ $\delta_{\sigma}({\cal
E}-{\cal E}_{s})_{\rm L}=\frac{1}{\pi}\frac{\sigma{\cal E}^{2}}{({\cal
E}^{2}-{\cal E}_{s}^{2})^{2}+\sigma^{2}{\cal E}^{2}}~{},$ (14)
respectively. In these distributions, $\sigma$ is a parameter, which defines
the width of the peak centered at ${\cal E}={\cal E}_{s}$. The full widths at
the half maximum (FWHM) $\Gamma$ of these distributions are
$\Gamma=2\sigma\sqrt{2{\ln}2}\simeq 2.36\sigma$ for the Gaussian distribution,
and $2\sigma$ for the Breit-Wigner and Lorentz ones. The disadvantage of such
smoothing is that the temperature extracted from Eq. (13), of course,
depends on the chosen distribution as well as on the value of the parameter
$\sigma$. It is worth mentioning that changing $\sigma$ is not equivalent to
changing $\Delta E$ for the discrete $\rho({\cal E})$ used in calculating the
statistical weight $\Omega({\cal E})$ in Eq. (12): the wings of any
distribution in Eq. (14) extend with increasing $\sigma$, whereas for the
discrete spectrum no additional levels may appear at low ${\cal E}$ when
$\Delta{\cal E}$ is enlarged.
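A minimal sketch of this procedure is given below (Python); the Gaussian choice of Eq. (14) and the grid and $\sigma$ values are illustrative assumptions, and the derivative in Eq. (13) is taken analytically from the smoothed density.

import numpy as np

def mce_temperature(E_s, d_s, E_grid, sigma):
    # rho(E) = sum_s d_s * delta_sigma(E - E_s) with the Gaussian nascent delta
    # of Eq. (14); T(E) then follows from Eq. (13).
    E_s = np.asarray(E_s, float)
    d_s = np.asarray(d_s, float)
    x = E_grid[:, None] - E_s[None, :]
    g = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    rho = (d_s * g).sum(axis=1)
    drho = (d_s * g * (-x / sigma**2)).sum(axis=1)   # analytic d rho / dE
    beta = drho / rho
    return np.where(beta > 0.0, 1.0 / beta, np.nan)  # T is undefined where beta <= 0

# example: T(E*) on a grid of excitation energies with sigma = 1 MeV
# Tmce = mce_temperature(evals, np.ones_like(evals), evals[0] + np.linspace(0.0, 30.0, 300), 1.0)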
The generalized form of Boltzmann’s entropy (12) is the definition by von
Neumann
$S=-{\rm Tr}[\rho{\rm ln}\rho]~{},$ (15)
which is the quantum-mechanical counterpart of Shannon’s entropy in
classical theory. By expressing the level density $\rho({\cal E})$ in Eq. (5)
in terms of the local densities of states $F_{k}({\cal E})$ Zele ,
$\rho({\cal E})=\sum_{k}F_{k}({\cal E})~{},\hskip 14.22636ptF_{k}({\cal
E})=\sum_{s}[C_{k}^{(s)}]^{2}\delta({\cal E}-{\cal E}_{s})~{},$ (16)
the entropy becomes
$S^{(s)}=-\sum_{k}[C_{k}^{(s)}]^{2}\ln[C_{k}^{(s)}]^{2}~{}.$ (17)
By using Eqs. (13) and (17) one can extract a quantity $T_{s}$ as the
“microcanonical temperature of each eigenstate $s$”.
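For a single exact eigenstate, Eq. (17) is simply a Shannon entropy of the eigenvector components; a short Python sketch, assuming the eigenvectors from the diagonalization sketch above, reads:

import numpy as np

def eigenstate_entropy(C):
    # S^(s) = - sum_k |C_k^(s)|^2 ln |C_k^(s)|^2, Eq. (17)
    w = np.asarray(C, float) ** 2
    w = w[w > 1e-300]                 # drop vanishing components to avoid log(0)
    return -(w * np.log(w)).sum()

# example: S_s = eigenstate_entropy(evecs[:, s])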
### II.3 Determination of pairing gaps from total energies
#### II.3.1 Ensemble-averaged pairing gaps
Although the exact solution of Hamiltonian (1) does not produce a pairing gap
per se, which is a quantity determined within the mean field, it is useful to
define an ensemble-averaged pairing gap to be closely compared with the gaps
predicted by the approximations within and beyond the mean field. In the
present paper, we define this ensemble-averaged gap $\Delta_{\alpha}$ from the
pairing energy ${\cal E}_{\rm pair}^{(\alpha)}$ of the system as follows
$\Delta_{\alpha}=\sqrt{-G{\cal E}_{\rm pair}^{(\alpha)}}~{},\hskip
14.22636pt{\cal E}_{\rm pair}^{(\alpha)}=\langle{\cal
E}\rangle_{\alpha}-\langle{\cal E}\rangle_{\alpha}^{(0)}~{},\hskip
14.22636pt\langle{\cal E}\rangle_{\alpha}^{(0)}\equiv
2\sum_{j}\Omega_{j}\big{[}\epsilon_{j}-\frac{G}{2}f_{j}^{(\alpha)}\big{]}f_{j}^{(\alpha)}~{},$
(18)
within the GCE ($\alpha={\rm GC}$), CE ($\alpha={\rm C}$), and MCE
($\alpha={\rm MC}$), where for the latter we put ${\cal E}_{\rm
pair}^{(\alpha)}\equiv{\cal E}_{\rm pair}(s)$ with $\langle{\cal
E}\rangle_{\rm MC}\equiv{\cal E}_{s}^{(N)}$. The term $\langle{\cal
E}\rangle_{\alpha}^{(0)}$ denotes the contribution from the energy
$2\sum_{j}\Omega_{j}\epsilon_{j}f_{j}^{(\alpha)}$ of the single-particle
motion described by the first term at the right-hand side of Hamiltonian (1),
and the energy $-G\sum_{j}\Omega_{j}[f_{j}^{(\alpha)}]^{2}$ of uncorrelated
single-particle configurations caused by the pairing interaction in
Hamiltonian (1). Therefore, subtracting the term $\langle{\cal
E}\rangle_{\alpha}^{(0)}$ from the total energy $\langle{\cal
E}\rangle_{\alpha}$ yields the residual that corresponds to the energy due to
pure pairing correlations. By replacing $f_{j}^{(\alpha)}$ with $v_{j}^{2}$,
one recovers from Eq. (18) the expression ${\cal E}_{\rm pair}^{(\rm
BCS)}=-\Delta^{2}_{\rm BCS}/G$ of the BCS theory. Given several definitions of
the ensemble-averaged gap existing in the literature, it is worth mentioning
that the definition (18) is very similar to that given by Eq. (52) of Ref.
Delft , whereas, even within the CE, the gap $\Delta_{\rm C}$ is different
from the canonical gap defined in Refs. Ross ; Frau , since in the latter, the
term $\langle{\cal E}\rangle_{\rm C}^{(0)}$ is taken at $G=$ 0\. The pairing
energy ${\cal E}_{\rm pair}^{(\alpha)}$ in Eq. (18) is also different from the
simple average value
$-G\sum_{jj^{\prime}}\langle{\hat{P}^{\dagger}_{j}\hat{P}_{j^{\prime}}}\rangle_{\alpha}$
of the last term of Hamiltonian (1) as the latter still contains the
uncorrelated term $-G\sum_{j}\Omega_{j}[f_{j}^{(\alpha)}]^{2}$.
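The subtraction in Eq. (18) can be spelled out in a few lines. The sketch below (Python) assumes that the ensemble-averaged total energy and the occupation numbers $f_{j}^{(\alpha)}$ of Eq. (11) are already available, and returns the gap $\Delta_{\alpha}$ together with the pairing energy; the clamp to zero is only a guard against round-off.

import numpy as np

def ensemble_gap(E_total, f, eps, Omega, G):
    # Eq. (18): subtract the uncorrelated single-particle energy <E>^(0)
    # and convert the remaining pairing energy into a gap.
    f = np.asarray(f, float)
    E0 = 2.0 * np.sum(Omega * (eps - 0.5 * G * f) * f)
    E_pair = E_total - E0
    return np.sqrt(max(-G * E_pair, 0.0)), E_pair

# example for the equidistant model, where Omega_j = 1 on every level:
# gap, Epair = ensemble_gap(Emean, f, eps, np.ones_like(eps), G)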
#### II.3.2 Empirical determination of pairing gap at finite temperature
The simplest way to empirically determine the pairing gap of a system with $N$
particles in the ground state (at $T=$ 0) is to use the so-called three-point
formula $\Delta^{(3)}(N)$, which is given by the odd-even mass difference
between the ground-state energies of the $N$-particle system and the
neighboring systems with $N\pm 1$ particles Bohr . A straightforward extension
of this formula to $T\neq$ 0 reads
$\Delta^{(3)}(\beta,N)\simeq\frac{(-1)^{N}}{2}[\langle{\cal
E}(N+1)\rangle_{\alpha}-2\langle{\cal E}(N)\rangle_{\alpha}+\langle{\cal
E}(N-1)\rangle_{\alpha}]~{},$ (19)
which was used, e.g., in Ref. Kaneko to extract the thermal pairing gaps in
tungsten and molybdenum isotopes. The four-point formula is the
arithmetic average of the three-point gaps of the neighboring systems
with $N$ and $N-1$ particles, namely
$\Delta^{(4)}(\beta,N)=\frac{1}{2}[\Delta^{(3)}(N)+\Delta^{(3)}(N-1)]~{}.$
(20)
A drawback of the gaps $\Delta^{(3)}(\beta,N)$ and $\Delta^{(4)}(\beta,N)$,
defined in this way, is that they still contain an admixture of the
contribution from uncorrelated single-particle configurations. The latter
increases with increasing $T$. Therefore, Eq. (19) and, consequently, Eq. (20)
do not hold at finite temperature. To remove this contribution so that the
experimentally extracted pairing gap is comparable with $\Delta_{\alpha}$ in
Eq. (18), we propose in the present paper an improved odd-even mass difference
formula at $T\neq$ 0 as follows. Using Eq. (18) to express the total energy
$\langle{\cal E}(N)\rangle_{\alpha}$ of the system in terms of
$\Delta_{\alpha}(N)$ and $\langle{\cal E}(N)\rangle_{\alpha}^{0}$, we obtain
$\langle{\cal E}(N)\rangle_{\alpha}=\langle{\cal
E}(N)\rangle_{\alpha}^{(0)}-\frac{\widetilde{\Delta}^{2}(\beta,N)}{G}~{}.$
(21)
where $\langle{\cal E}(N)\rangle_{\alpha}$ is the experimentally known total
energy of the system with $N$ particles at $T\neq$ 0, whereas
$\widetilde{\Delta}(\beta,N)$ is the pairing gap of this system to be
determined. Replacing $\langle{\cal E}(N)\rangle_{\alpha}$ in the definition
of the odd-even mass difference (19) with the right-hand side of Eq. (21), we
obtain a quadratic equation for the three-point
$\widetilde{\Delta}^{(3)}(\beta,N)$
$\widetilde{\Delta}^{(3)}(\beta,N)=(-1)^{N}\bigg{\\{}\frac{1}{2}[\langle{\cal
E}(N+1)\rangle_{\alpha}+\langle{\cal E}(N-1)\rangle_{\alpha}]-\langle{\cal
E}\rangle_{\alpha}^{0}+\frac{[\widetilde{\Delta}^{(3)}(\beta,N)]^{2}}{G}\bigg{\\}}~{}.$
(22)
The discriminant of this equation is equal to $G\sqrt{1-4{S^{\prime}}/{G}}$,
where
$S^{\prime}=\frac{1}{2}\big{[}\langle{\cal
E}(N+1)\rangle_{\alpha}+\langle{\cal
E}(N-1)\rangle_{\alpha}\big{]}-\langle{\cal E}(N)\rangle_{\alpha}^{(0)}~{}.$
(23)
Therefore the condition for Eq. (22) to have real solutions is $S^{\prime}\leq
G/4$. Including both cases with even and odd $N$, the positive solution of Eq.
(22) is always possible provided $S^{\prime}<$ 0, which reads
$\widetilde{\Delta}^{(3)}(\beta,N)=\frac{G}{2}\bigg{[}(-1)^{N}+\sqrt{1-4\frac{S^{\prime}}{G}}\bigg{]}~{}.$
(24)
The quantity $S^{\prime}$ differs from the conventional odd-even mass
difference shown in the square brackets of (19) by the contribution due to the
uncorrelated single-particle motion, i.e. the last sum containing $G$ in the
definition of $\langle{\cal E}\rangle_{\alpha}^{(0)}$ in Eq. (18). The latter
is zero only at $G=$ 0, which yields $\widetilde{\Delta}^{(3)}(\beta,N)=$ 0,
as can be seen from Eq. (24) as well. In this case both $S^{\prime}$ and the
expression in the square brackets of Eq. (19) vanish as they are just the
difference of the Hartree-Fock energies $\langle
E(N+1)\rangle^{(0)}_{\alpha}+\langle E(N-1)\rangle^{(0)}_{\alpha}-2\langle
E(N)\rangle^{(0)}_{\alpha}$, which is zero at $G=0$. Moreover, while the odd-
even mass difference of Eq. (19) can be positive or negative depending on
whether $N$ is even or odd, $S^{\prime}$ should be always negative as
discussed above. The gap $\widetilde{\Delta}^{(3)}(\beta,N)$ extracted from
Eq. (24) is, therefore, consistent with the result of the exact calculation at
zero temperature, where the pairing gap is zero only at $G$ = 0, and increases
with $G$ [see, e.g., Fig. 1 (a) of Ref. Sumaryada ]. As compared to the simple
finite-temperature extension of the odd-even mass difference (19), the modified
gap
$\widetilde{\Delta}^{(3)}(\beta,N)$ is closer to the ensemble-averaged gap
$\Delta_{\alpha}(N)$ (18) since it is free from the contribution of
uncorrelated single-particle configurations. In Eq. (24), the energies
$\langle{\cal E}(N+1)\rangle_{\alpha}$ and $\langle{\cal
E}(N-1)\rangle_{\alpha}$ can be extracted from experiments, whereas the
pairing interaction parameter $G$ can be obtained by fitting the experimental
values of $\Delta(T=0,N)$. The energy $\langle{\cal
E}(N)\rangle_{\alpha}^{(0)}$ remains the only model-dependent quantity being
determined in terms of the single-particle energies $\epsilon_{j}$ and single-
particle occupation numbers $f_{j}^{(\alpha)}$. As a matter of fact, since the
pairing gap $\Delta^{(3)}(\beta,N)$ of the $N$-particle system at the left-
hand side of the expression (19) is also present in the total energy
$\langle{\cal E}(N)\rangle_{\alpha}$ of the same system at the right-hand side
of (19), the former is simply extracted from the latter by using Eqs. (18) and
(21). As a result, the modified gap $\widetilde{\Delta}^{(3)}(\beta,N)$ of
the system with $N$ particles explicitly depends now on $\langle{\cal
E}(N)\rangle_{\alpha}^{(0)}$ of the same system rather than on its total
energy $\langle{\cal E}(N)\rangle$. The modified four-point gap
$\widetilde{\Delta}^{(4)}(\beta,N)$ is then obtained from the modified three-
point gaps $\widetilde{\Delta}^{(3)}(\beta,N)$ and
$\widetilde{\Delta}^{(3)}(\beta,N-1)$ by using the definition (20).
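A compact way to compare Eqs. (19), (20), and (24) numerically is sketched below (Python); here E is assumed to be an array of ensemble-averaged total energies indexed by particle number at the temperature of interest, and E0_N is the uncorrelated energy of Eq. (18) for the $N$-particle system, both obtained, e.g., from the canonical averages sketched earlier.

import numpy as np

def delta3(E, N):
    # naive finite-T extension of the three-point odd-even mass difference, Eq. (19)
    return 0.5 * (-1) ** N * (E[N + 1] - 2.0 * E[N] + E[N - 1])

def delta3_modified(E, E0_N, N, G):
    # modified three-point gap, Eqs. (23) and (24)
    Sp = 0.5 * (E[N + 1] + E[N - 1]) - E0_N
    return 0.5 * G * ((-1) ** N + np.sqrt(1.0 - 4.0 * Sp / G))

def delta4(gap3_N, gap3_Nm1):
    # four-point average, Eq. (20), from the three-point gaps for N and N-1
    return 0.5 * (gap3_N + gap3_Nm1)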
## III FTBCS1+SCQRPA and FTLN1+SCQRPA
The FTBCS1+SCQRPA includes the effects due to quasiparticle-number fluctuation
and coupling to the SCQRPA vibrations, which are neglected within the standard
BCS theory. The derivation of the FTBCS1+SCQRPA has already been given and
discussed in detail in Ref FTBCS1 . Therefore we give below only the main
results, which are necessary to follow the numerical calculations in the
present paper.
The rigorous derivation of the FTBCS1+SCQRPA theory follows the standard
variational procedure used to derive the BCS theory. By using the
Bogoliubov canonical transformation Bogo
${\alpha}_{jm}^{\dagger}=u_{j}a_{jm}^{\dagger}-v_{j}a_{j\widetilde{m}}~{},\hskip
14.22636pt{\alpha}_{j\widetilde{m}}=u_{j}a_{j\widetilde{m}}+v_{j}a_{jm}^{\dagger}~{},$
(25)
the pairing Hamiltonian (1) is expressed in terms of quasiparticle operators,
$\alpha_{jm}^{\dagger}$ and $\alpha_{jm}$, as $H_{\rm q.p.}$, whose explicit
form is given in many papers, e.g., Eqs. (3), (8) – (14) of Ref. FTBCS1 . The
$u_{j}$ and $v_{j}$ coefficients of the Bogoliubov’s transformation (25) are
determined by minimizing the GCE average value of the Hamiltonian ${\cal
H}=H_{\rm q.p.}-\lambda\hat{N}$. This leads to the equation
$\langle[{\cal H},{\cal A}_{j}^{\dagger}]\rangle_{\rm GC}=0~{},\hskip
14.22636pt{\rm where}\hskip 5.69054pt{\cal
A}_{j}^{\dagger}=\frac{1}{\sqrt{\Omega_{j}}}\sum_{m=1}^{\Omega_{j}}\alpha_{jm}^{\dagger}\alpha_{j\widetilde{m}}^{\dagger}~{}.$
(26)
The final result yields the equation for the level-dependent pairing gap
$\Delta_{j}$ for a system with an even number $N$ of particles,
$\Delta_{j}=\Delta+\delta\Delta_{j}~{},\hskip
14.22636pt\Delta=G\sum_{j^{\prime}}\Omega_{j^{\prime}}(1-2n_{j^{\prime}})u_{j^{\prime}}v_{j^{\prime}}~{},\hskip
14.22636pt\delta\Delta_{j}=2G\frac{\delta{\cal
N}_{j}^{2}}{1-2n_{j}}u_{j}v_{j}~{},$ (27)
with the QNF defined as
$\delta{\cal N}_{j}^{2}\equiv n_{j}(1-n_{j})~{},$ (28)
and the equation for the average particle number $N$,
$N=2\sum_{j}\Omega_{j}[v_{j}^{2}(1-2n_{j})+n_{j}]~{}.$ (29)
The $u_{j}$ and $v_{j}$ coefficients in Eqs. (27) and (29) are determined as
$u_{j}^{2}=\frac{1}{2}\bigg{[}1+\frac{\epsilon_{j}^{\prime}-Gv_{j}^{2}-\lambda}{E_{j}}\bigg{]}~{},\hskip
14.22636ptv_{j}^{2}=\frac{1}{2}\bigg{[}1-\frac{\epsilon_{j}^{\prime}-Gv_{j}^{2}-\lambda}{E_{j}}\bigg{]}~{},$
(30)
with the renormalized single-particle energies $\epsilon_{j}^{\prime}$,
$\epsilon_{j}^{\prime}=\epsilon_{j}+\frac{G}{\sqrt{\Omega_{j}}(1-2n_{j})}\sum_{j^{\prime}}\sqrt{\Omega_{j^{\prime}}}(u_{j^{\prime}}^{2}-v_{j}^{2})(\langle{\cal
A}^{\dagger}_{j}{\cal A}^{\dagger}_{j^{\prime}\neq j}\rangle+\langle{\cal
A}^{\dagger}_{j}{\cal A}_{j^{\prime}}\rangle)~{},$ (31)
and the quasiparticle energies
$E_{j}=\sqrt{(\epsilon_{j}^{\prime}-Gv_{j}^{2}-\lambda)^{2}+\Delta_{j}^{2}}~{}.$
(32)
For a system with an odd number of particles, the blocking effect caused by
the unpaired particle should be taken into account in Eqs. (27) and (29). In
the present paper, for simplicity, we do not consider systems with odd
particle numbers within the FTBCS1+SCQRPA and FTLN1+SCQRPA.
The pair correlators $\langle{\cal A}^{\dagger}_{j}{\cal
A}^{\dagger}_{j^{\prime}\neq j}\rangle$ and $\langle{\cal
A}^{\dagger}_{j}{\cal A}_{j^{\prime}}\rangle$ in Eq. (31) are determined by
numerically solving a set of coupled equations (47) and (48) of Ref. FTBCS1 ,
which contain the ${\cal X}_{j}^{\mu}$ and ${\cal Y}_{j}^{\mu}$ amplitudes of
the SCQRPA equations. The details of the derivation of the SCQRPA are given in
Ref. SCQRPA . The SCQRPA equations are solved self-consistently with the gap
and number equations (27) and (29). The quasiparticle occupation numbers
$n_{j}$ are then found by solving a set of equations that include coupling of
quasiparticle density operators $\alpha_{jm}^{\dagger}\alpha_{jm}$ to the
SCQRPA phonon operators. The result yields the integral equation (69) of Ref.
FTBCS1 for $n_{j}$. To compare with the level-independent gap such as the BCS
one, the level-weighted gap,
$\overline{\Delta}=\sum_{j}\Omega_{j}\Delta_{j}/\sum_{j}\Omega_{j}~{},$ (33)
is used instead of Eq. (27). The total energy $\langle{\cal E}\rangle_{\rm
FTBCS1+SCQRPA}$ is calculated by averaging the quasiparticle representation
$H_{\rm q.p.}$ of the Hamiltonian (1) within the GCE, i.e.
$\langle{\cal E}\rangle=\langle H_{\rm q.p.}\rangle_{\rm GC}~{},$ (34)
and the heat capacity $C$ is then found from Eq. (8). The thermodynamic
entropy is obtained by integrating Eq. (9)
$S_{\rm th}=\int_{0}^{T}\frac{1}{\tau}Cd\tau~{}.$ (35)
The BCS equations BCS ; Bogo are recovered from Eqs. (27) and (29) by
neglecting the QNF $\delta{\cal N}_{j}^{2}$ [i.e. $\delta\Delta_{j}=0$ in Eq.
(27)] together with the pair correlators $\langle{\cal A}^{\dagger}_{j}{\cal
A}^{\dagger}_{j^{\prime}\neq j}\rangle$ and $\langle{\cal
A}^{\dagger}_{j}{\cal A}_{j^{\prime}}\rangle$ in Eq. (31), and assuming
$n_{j}$ to have the form of the Fermi-Dirac distribution for noninteracting
quasiparticles, i.e. setting
$n_{j}=n_{j}^{\rm FD}\equiv\frac{1}{e^{\beta E_{j}}+1}~{}.$ (36)
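For reference, the BCS limit just described can be iterated to self-consistency in a few lines of Python. The picket-fence parameters below follow Sec. IV.1, while the naive mixing and the feedback used to keep $\langle N\rangle=N$ are purely illustrative choices, not the numerical scheme of Ref. FTBCS1.

import numpy as np

def ftbcs(eps, Omega, G, N, T, n_iter=3000, mix=0.3):
    # finite-temperature BCS: Eqs. (27), (29), (30), (32) with delta-Delta_j = 0
    # and Fermi-Dirac quasiparticle occupation numbers, Eq. (36)
    beta = 1.0 / T
    lam, Delta = 0.0, 1.0
    v2 = np.where(eps < np.median(eps), 1.0, 0.0)    # crude starting occupations
    for _ in range(n_iter):
        xi = eps - G * v2 - lam
        E = np.maximum(np.sqrt(xi**2 + Delta**2), 1e-9)
        nqp = 1.0 / (np.exp(beta * E) + 1.0)         # Eq. (36)
        v2_new = 0.5 * (1.0 - xi / E)
        uv = 0.5 * Delta / E                         # u_j v_j
        Delta_new = G * np.sum(Omega * (1.0 - 2.0 * nqp) * uv)            # Eq. (27)
        Npart = 2.0 * np.sum(Omega * (v2_new * (1.0 - 2.0 * nqp) + nqp))  # Eq. (29)
        lam += 0.1 * (N - Npart)                     # crude feedback to keep <N> = N
        Delta = (1.0 - mix) * Delta + mix * Delta_new
        v2 = (1.0 - mix) * v2 + mix * v2_new
    return Delta, lam

omega = 10
eps = np.array([j - (omega + 1) / 2.0 for j in range(1, omega + 1)])  # epsilon = 1 MeV
print(ftbcs(eps, np.ones(omega), G=0.9, N=10, T=0.5))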
The BCS theory is known to violate particle-number conservation. This causes a
certain quantal particle-number fluctuation around the average value determined
by Eq. (29) even at $T=$ 0. The FTBCS1+SCQRPA takes into account only the
thermal effect in terms of the QNF $\delta{\cal N}_{j}^{2}$, but does not remove
the quantal fluctuation, which is a feature inherent in the BCS wave functions.
To cure this inconsistency, a proper particle-number projection (PNP) needs to
be carried out. The Lipkin-Nogami (LN) method LN is an approximate PNP before
variation, which is widely used in nuclear structure studies because of its
simplicity. This method has been implemented into the FTBCS1+SCQRPA, and the
ensuing approach is called the FTLN1+SCQRPA (See Sec. II.C.2. of Ref. FTBCS1 ).
However, the LN method can approximately eliminate only the quantal
fluctuations due to particle-number violation within the BCS theory. These
quantal fluctuations are different from the thermal particle-number
fluctuations, which always arise from the exchange of particles between the
systems in the GCE. The LN method is, therefore, not sufficient to remove the
particle-number fluctuations within the GCE. To avoid the thermal particle-
number fluctuations, the average in the CE should be used instead.
Unfortunately, the methods of equilibrium statistical physics applied to
nuclear theories, such as the Matsubara-Green’s function and/or double-time
Green’s function techniques used to derive the BCS and QRPA
equations at finite temperature, are all based on the GCE. The complete
particle-number projection based on applying the particle-number projection
operator PNP at finite temperature to these approaches still remains a
subject under study [see, e.g., Ref. PMBCS ].
In this paper, the numerical results obtained within both particle-number
unprojected (FTBCS1+SCQRPA) and projected (FTLN1+SCQRPA) approaches will be
compared with those given by averaging the exact pairing solutions within the
principal thermodynamic ensembles.
## IV NUMERICAL RESULTS
### IV.1 Details of numerical calculations
The schematic model employed for numerical calculations consists of $N$
particles, which are distributed over $\Omega=N$ doubly-folded equidistant
levels (i.e. with the level degeneracy $2\Omega_{j}=2$). These levels, whose
energies are $\epsilon_{j}=\epsilon[j-(\Omega+1)/2]$ ($j=$ 1, …, $\Omega$),
interact via the pairing force with a constant parameter $G$. The model is
half-filled, namely, in the absence of the pairing interaction, all the lowest
$\Omega/2$ levels (with negative single-particle energies) are filled up with
$N$ particles, leaving $\Omega/2$ upper levels (with positive single-particle
energies) empty 333This model is also called Richardson’s model, picket-fence
model, ladder model, multilevel pairing model, etc. in the literature.. It is
worth mentioning that the extension of the exact solution to $T\neq$ 0 is not
possible at a large value of $\Omega=N$. For the present schematic model, the
number of eigenstates $n_{S}$, each of which is $2^{S}$-fold degenerate,
increases almost exponentially with $\Omega$
$n_{S}(\Omega)={\rm C}_{N_{\rm pair}}^{\Omega}+\sum_{S}{{\rm
C}_{S}^{\Omega}\times{\rm C}_{N_{\rm pair}-\frac{S}{2}}^{\Omega-S}}~{},$ (37)
where ${\rm C}_{n}^{m}={m!}/[n!(m-n)!]$ and $N_{\rm pair}=N/2$ is the
number of pairs distributed over $\Omega$ single-particle levels. The first
term is the seniority-zero contribution, and the sum in Eq. (37) runs over the
total seniorities $S=$ 2, 4, $\ldots,\Omega$. Therefore, at $\Omega=N=$ 16 there
are 5196627 states, which corresponds to $\sim 2.7\times 10^{13}$ elements of
the square matrix to be diagonalized. This makes the finite-temperature
extension of the exact
pairing solution practically impossible for $\Omega>$ 16 since all the
eigenvalues must be included in the partition function. Therefore, in the
present paper, we limit the calculations up to $\Omega=N=$ 14, for which there
are 73789 eigenstates. For the GCE average with respect to the system with $N$
particles and $\Omega=N$ levels, the sum over particle numbers $n$ runs from
$n_{min}=1$ to $n_{max}=2\Omega-1$ with the blocking effect caused by the odd
particle properly taken into account. The calculations are carried out by
using the level distance $\epsilon=$ 1 MeV and the pairing interaction
parameter $G=$ 0.9 MeV. With these parameters, the values of the pairing gap
obtained at $T=$ 0 are around 3, 3.5, and 4.5 MeV for $N=$ 8, 10, and
12, in qualitative agreement with the empirical systematics for realistic
nuclei Bohr .
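The growth described by Eq. (37) is easy to check numerically; in the short Python sketch below the first term is read as the seniority-zero contribution, so that the sum runs over $S=$ 2, 4, $\ldots,\Omega$ as discussed above, which reproduces the 5196627 states quoted for $\Omega=N=$ 16.

from math import comb

def n_states(omega):
    # Eq. (37) for the half-filled picket-fence model (N_pair = omega // 2)
    npair = omega // 2
    total = comb(omega, npair)                        # seniority-zero (S = 0) term
    for S in range(2, omega + 1, 2):
        total += comb(omega, S) * comb(omega - S, npair - S // 2)
    return total

print(n_states(16))   # 5196627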
### IV.2 Results within GCE, CE and BCS-based approaches
Figure 1: (Color online) Pairing gaps $\overline{\Delta}$, total energies
$\langle{\cal E}\rangle$, heat capacities $C$ and thermodynamic entropies,
obtained for $N=$ 8, 10, and 12 ($G=$ 0.9 MeV) within the FTBCS (dotted
lines), FTBCS1+SCQRPA (thin dashed lines), FTLN1+SCQRPA (thick dashed lines),
CE (dash-dotted lines), and GCE (solid lines) vs temperature $T$.
Shown in Fig. 1 are the pairing gaps, total energies, and heat capacities
obtained for $N=$ 8, 10, and 12 as functions of temperature $T$ within the
FTBCS, FTBCS1+SCQRPA, and FTLN1+SCQRPA, along with the corresponding results
obtained by embedding the exact solutions (eigenvalues) of the Hamiltonian (1)
in the CE and GCE. These latter results using the exact pairing solutions are
referred to as CE and GCE results hereafter. For the FTBCS1+SCQRPA and
FTLN1+SCQRPA the level-weighted gaps $\overline{\Delta}$ (33) are plotted in
Figs. 1 (a1) – 1 (a3).
It is seen from this figure that the GCE results are close to the CE ones for
the gaps, obtained from Eq. (18), as well as for the total energies, obtained
by using Eq. (8). Among the three systems under consideration, the largest
discrepancies between the CE and GCE results are seen in the lightest system
($N=$ 8), for which the GCE gap is slightly lower than the CE one, and
consequently, the GCE total energy is slightly higher than that obtained
within the CE. As $N$ increases, the high-$T$ values of the GCE and CE gaps
become closer, so do the corresponding total energies. Different from the BCS
results (dotted lines), which show a collapse of the gap and a spike in the
temperature dependence of the heat capacity at $T=T_{\rm c}$, no singularity
occurs in the GCE and CE results. Both GCE and CE gaps decrease monotonically
with increasing $T$ and remain finite even at $T\gg$ 5 MeV. The FTBCS1+SCQRPA
and FTLN1+SCQRPA predictions for the pairing gap are found in qualitative
agreement with the GCE and CE results [Figs. 1 (a1) – 1 (a3)]. Because of a
different definition of the pairing gap in the exact solutions within the GCE
and/or CE, where actually no mean-field gap exists (See Sec. II C1), one
cannot expect a more quantitative agreement between the predictions by the
FTLN1+SCQRPA and the GCE (CE) results. In this respect, the modified gap (24)
yields a better quantitative agreement with the GCE (CE) results, as will be
seen later in Sec. IV.4. The two BCS-based approaches differ noticeably only
at $T\leq T_{\rm c}$, where the FTLN1+SCQRPA gap, due to PNP, practically
coincides with the GCE and CE results at $T<$ 0.5 – 1 MeV. At $T>T_{\rm c}$
the predictions of the two approaches start to converge to the same value, which
decreases with increasing $T$, remaining smaller than the GCE and CE gaps.
For the same reason, at $T<T_{\rm c}$, where the total energy predicted by the
FTLN1+SCQRPA agrees very well with the GCE and CE results, the FTBCS1+SCQRPA
energy is significantly larger [Figs 1 (b1) – 1 (b3)]. At $T>T_{\rm c}$, one
finds a remarkable agreement between the energies predicted by the
FTBCS1+SCQRPA, FTLN1+SCQRPA, and that obtained within the GCE. This seems to
be a natural consequence, given the fact that the two BCS-based approaches are
derived by using the variational procedure within the GCE. The energies also
do not suffer from the difference in definitions that affects the pairing gaps,
as has been discussed above. For the heat capacities [Figs. 1 (c1) – 1 (c3)],
the spike obtained at $T=T_{\rm c}$ within the FTBCS theory is completely
smeared out within the GCE, CE as well as the FTBCS1+SCQRPA and FTLN1+SCQRPA,
where only a broad bump is seen in a large temperature region between 0 and 3
MeV. At $T<$ 1 – 1.2 MeV, the difference between the GCE and CE energies leads
to a significant discrepancy between the GCE and CE values for the heat
capacity. As the FTBCS1+SCQRPA and FTLN1+SCQRPA heat capacities are close to
the GCE values, this explains the discrepancy reported in Figs. 4 (b) and 4
(c) of Ref. FTBCS1 between these results and the predictions obtained within
the CE by the quantum Monte-Carlo calculations, which use a model Hamiltonian
with the same monopole pairing interaction. The thermodynamic entropies $S_{\rm
th}$ are shown as functions of $T$ in Figs. 1 (d1) – 1 (d3). Once again the
FTLN1+SCQRPA results for the thermodynamic entropy $S_{\rm th}$ agree very
well with $S_{\rm th}$ obtained within the GCE, whereas such good agreement is
seen for the FTBCS1+SCQRPA results only at $T<$ 1 MeV. The CE thermodynamic
entropy is significantly lower than the values obtained in all other
approaches under consideration, which are based on averaging within the GCE.
On the other hand, the BCS theory strongly overestimates the thermodynamic
entropy at $T>T_{\rm c}$.
### IV.3 Results within MCE
Figure 2: (Color online) Temperature extracted from Eq. (13) within the MCE
(dots) vs excitation energy ${\cal E}^{*}$ in comparison with the CE results
(dash-dotted line) for $N=$ 8, 10, and 12. The results in the top panels, (a)
– (c), are obtained from von Neumann’s definition of entropy, Eq. (15),
whereas those in the middle panels, (d) – (f), and bottom ones, (g) – (i), are
calculated from Boltzmann’s entropy Eq. (12) with two different values of the
energy interval $\Delta E$ in the statistical weight $\Omega$.
The values of temperature within the MCE as extracted by using Eq. (13) are
plotted in Fig. 2 along with the CE results against the excitation energy
${\cal E}^{*}$. The CE definition of the latter is ${\cal E}^{*}_{\rm
C}\equiv\langle{\cal E}(T)\rangle_{\rm C}-\langle{\cal E}(T=0)\rangle_{\rm
C}$. For comparison, the results obtained by using the entropy (17) are also
presented in the top panels [Figs. 2 (a) – 2 (c)]. They show the values of the
eigenstate temperatures $T_{s}$, which scatter widely around the heat bath
temperature (i.e. the CE result) [Figs. 2 (a) – 2 (c)]. Many of these values
are even negative. Since the eigenstate temperatures $T_{s}$ are related to
the spread of the exact eigenfunctions over the “unperturbed” basis states $k$
with the weights $[C_{k}^{(s)}]^{2}$, they need not follow the trend of the
heat bath (or canonical) temperature, which depends just on the level density
[Eqs. (12) and (13)]. In fact, with increasing energy interval $\Delta E$,
within which the levels are counted, the values of the MCE temperature
determined by Eq. (13) gradually converge to the CE values [Figs. 2 (a) – 2
(i)]. This means that thermal equilibrium within the MCE for the present
isolated pairing model can be reached only at large $N$ and for a dense
spectrum (small level spacing). Moreover, full thermalization in a system with
pure pairing remains an open question. In Refs. Zele ; Volya1 , the
pairing is a subject under question. In Ref. Zele ; Volya1 , the
thermalization of the system is characterized by the single-particle
temperature, which is obtained by fitting the occupation numbers of the
individual eigenstates to those given by the Fermi-Dirac distribution. The
numerical results in Fig. 12 of Ref. Volya1 show that the temperature extracted
from the density of states in 116Sn by using Eq. (13) agrees with the single-
particle temperature only when all the residual interactions are taken into
account. With the pure pairing interaction alone, the single-particle
temperature shows a low temperature of the whole system, whereas the MCE
temperature (13) is a hyperbola as a function of the excitation energy with a
singularity in the middle of the spectrum, where it turns negative [See also
Figs. 53 and 54 of Ref. Zele ].
Figure 3: (Color online) Temperatures [(a) – (c)], entropies [(d) – (f)], and
pairing gaps [(g) – (i)] within the MCE as functions of excitation energy
${\cal E}^{*}$ for $N=$ 8 ($G=$ 0.9 MeV) obtained by using the Gaussian,
Lorentz, and Breit-Wigner distributions from Eq. (14) for the level density at
different values of the parameter $\sigma$. The dash-dotted lines show the CE
values.
The values of MCE temperature, entropy and gap obtained by using the Gaussian,
Lorentz, and Breit-Wigner distributions from Eq. (14) are shown in Fig. 3 as
functions of excitation energy ${\cal E}^{*}$. While the fluctuating behavior
of the microcanonical temperature can be smoothed out by increasing the
parameter $\sigma$ in all three distributions, we found that only the Gaussian
distribution can simultaneously fit both the temperature and entropy [Figs. 3
(a) and 3 (d)]. The Lorentz distribution can fit only the MCE temperature to
the CE one, but fails to do so for the entropy [Figs. 3 (b) and 3 (e)],
whereas the Breit-Wigner distribution can fit the MCE temperature to the CE
value only at high excitation energies [Fig. 3 (c)]. A similar result is seen
for the pairing gaps as functions of ${\cal E}^{*}$, where the Gaussian fit
gives the best performance among the three distributions [Compare Figs. 3 (g),
3 (h), and 3 (i)]444The non-vanishing values of $T$ and $S$ at ${\cal E}^{*}=$
0 in Figs. 3 (a), 3 (c), 3 (d), 3 (f) are artifacts of the use of the
Gaussian and Breit-Wigner distribution functions (14) to smooth out the
discrete level density. These two distributions have non-zero values $\sim
1/\sigma$ at $x=$ 0, where the Lorentz distribution vanishes.. We conclude
that the Gaussian distribution should be chosen as the best one for smoothing
the level density $\rho({\cal E})$ in Eq. (5) to extract the MCE temperature.
### IV.4 Pairing gaps extracted from odd-even mass differences
Figure 4: (Color online) Pairing gaps extracted from the odd-even mass
differences as functions of $T$ for $N=$ 10 (a,c) and $N=$ 9 (b,d) ($\Omega=$
10, $G=$ 0.9 MeV). The thin solid and thick solid lines denote the gaps
$\Delta^{(i)}(\beta,N)$ ($i=$ 3, 4) from Eq. (19), and the modified gaps
$\widetilde{\Delta}^{(i)}(\beta,N)$ from Eq. (24), respectively. The dash-
dotted lines are the canonical results $\Delta^{(i)}_{\rm C}$. The upper
panels (a) and (b) show the three-point gaps ($i=$ 3), whereas the
corresponding four-point gaps ($i=$ 4) are shown in the lower panels (c) and
(d).
We extracted the pairing gaps $\Delta^{(i)}(\beta,N)$ ($i=$ 3 and 4) by using
the simple extension of the odd-even mass formula to $T\neq$ 0 in Eq. (19) as
well as the modified gaps $\widetilde{\Delta}^{(i)}(\beta,N)$ from Eq. (24),
and the canonical gaps $\Delta^{(i)}_{\rm C}$ from Eq. (18) for several
particle numbers up to $N=$ 12\. In these calculations, the blocking effect
caused by the unpaired particle in the systems with an odd particle number is
properly taken into account in constructing the basis states when
diagonalizing the pairing Hamiltonian. The results obtained for $N=$ 9 and 10
($\Omega=$ 10) are displayed in Fig. 4. First of all, the values of
$S^{\prime}$ are found to be always negative; they increase at $T>$ 1 MeV to
reach around $-$2 MeV at $T=$ 5 MeV and vanish at very high $T$.
By comparing Figs. 4 (a) and 4 (b) one can see a clear manifestation of the
parity effect Delft , which causes a reduction of the three-point gap in
the system with an odd particle number ($N=$ 9) due to the blocking effect.
For light systems such as those considered here, this reduction is rather
strong (about 1 MeV at $T=$ 0). With increasing $T$, as thermal effects weaken
the influence of the single unpaired particle, the parity effect starts to fade out
in such a way that the three-point canonical gap $\Delta^{(3)}_{\rm C}(N=9)$
slightly increases with $T$ up to $T\simeq$ 1 MeV, starting from which it is
even slightly larger than $\Delta^{(4)}_{\rm C}(N=9)$. This feature is found
in qualitative agreement with the results obtained in Fig. 25 of Ref. Delft
for ultrasmall metallic grains. It is also seen in Fig. 4 that the naive
extension of the odd-even mass formula to $T\neq$ 0, resulting in the gap
$\Delta^{(i)}(\beta,N)$ (thin solid lines), fails to match the temperature-
dependence of the canonical gap $\Delta^{(i)}_{\rm C}$ (dash-dotted lines).
The former even increases with $T$ at $T<$ 1 MeV, whereas it drastically drops
at $T>$ 1 – 1.5 MeV, resulting in a very depleted tail at $T>$ 2 MeV as
compared to the canonical gap $\Delta^{(i)}_{\rm C}$. Moreover, the three-
point gap $\Delta^{(3)}(\beta,N=9)$ even turns negative at $T>$ 2.4 MeV,
suggesting that such simple extension of the odd-even mass difference to
finite $T$ is invalid. At the same time, the modified gap
$\widetilde{\Delta}^{(i)}(\beta,N)$ (thick solid line) given by Eq. (24) is
found in much better agreement with the canonical one. At $T<$ 1.5 MeV, the
three point-gap $\widetilde{\Delta}^{(3)}(\beta,N)$ is almost the same as the
canonical gap. At higher $T$, it becomes larger (smaller) than the canonical
value for the system with an even (odd) $N$; however, the systematics of our
calculations up to $N=$ 12 show that this discrepancy
decreases with increasing particle number. The source of the discrepancy
resides in the assumption of the odd-even mass formula that the gap obtained
as the energy difference between the systems with $N+1$ and $N$ particles is
the same as that obtained from the energy difference between systems with $N$
and $N-1$ particles. This assumption does not hold for small $N$ (Cf. Ref.
cons ). The average in the four-point gap nearly eliminates this difference.
As a result, the modified four-point gaps $\widetilde{\Delta}^{(4)}(\beta,N)$
practically coincide with the canonical gaps. A natural consequence of the
average in the definition of the four-point gap (20) is that the gaps obtained
in the systems with $N$ and $N-1$ particles are now nearly the same. The
pairing gaps predicted by a number of alternative theories Moretto ; Goodman ;
Egido ; SPA ; MBCS , including those discussed in the present work, are in
closer agreement with the GCE and CE gaps than with the gap
$\Delta(\beta,N)$ from Eq. (19). Therefore, the comparison in Fig. 4 suggests
that formula (24) is a much better candidate for extracting the experimental
gap at $T\neq$ 0 than the simple odd-even mass difference (19).
## V Conclusions
In the present work, a systematic comparison is conducted for pairing
properties of finite systems at finite temperature as predicted by the exact
solutions of the pairing problem embedded in three principal statistical
ensembles, as well as by the recently developed FTBCS1 (FTLN1)+SCQRPA. The
analysis of numerical results obtained within the doubly-folded equidistant
multilevel model for the pairing gap, total energy, heat capacity, entropy,
and MCE temperature allows us to draw the following conclusions.
1) The sharp SN phase transition is indeed smoothed out in exact calculations
within all three principal ensembles. The results obtained within the GCE and
CE are very close to each other even for systems with a small number of
particles. As for the MCE, although it can also be used to study the pairing
properties of isolated systems at high excitation energies, there is a certain
ambiguity in the temperature extracted from the level density due to the
discreteness of a small-size system. This ambiguity depends on the shape and
parameter of the distribution employed to smooth the discrete level density.
We found that, in this respect, the normal (Gaussian) distribution gives the
best fit of both the temperature and the entropy to the canonical values. The
wide fluctuations of MCE temperature obtained here also indicate that thermal
equilibrium within thermally isolated pure-pairing systems might not be
reached. On the other hand, this opens an interesting perspective for studying
the behavior of phase transitions in finite systems within microcanonical
thermodynamics Gross by using the exact solutions of the pairing problem.
2) The predictions by the FTBCS1+SCQRPA and FTLN1+SCQRPA are found in
reasonable agreement with the results obtained by using the exact solutions
embedded in the GCE and CE. The best agreement is seen between the FTLN1+SCQRPA
and the GCE results. Once again, this is a robust confirmation that
quasiparticle-number fluctuation, included in these approximations, is indeed
the microscopic origin of the strong thermal fluctuations that smooth out the
sharp SN phase transition in finite systems.
3) We suggest a novel formula to extract the pairing gap at finite temperature
from the difference of total energies of even and odd systems, in which the
contribution of uncorrelated single-particle motion is subtracted. The new
formula predicts a pairing gap in much better agreement with the canonical gap
than the simple finite-temperature extension of the odd-even mass formula.
###### Acknowledgements.
Discussions with Peter Schuck (Orsay) are gratefully acknowledged. One of us
(N.Q.H.) also thanks S. Frauendorf (Notre Dame), and V. Zelevinsky (East
Lansing) for discussions and hospitality during his visit at the University of
Notre Dame and Cyclotron Institute of the Michigan State University, where a
part of this work was presented. NQH is a RIKEN Asian Program Associate. The
numerical calculations were carried out using the FORTRAN IMSL Library by
Visual Numerics on the RIKEN Super Combined Cluster (RSCC) system.
## References
* (1) J. Bardeen, L.N. Cooper, and J.R. Schrieffer, Phys. Rev. 108, 1175 (1957).
* (2) N.N. Bogoliubov and S.V. Tyablikov, Sov. Phys. Dokl. 4, 589 (1959) [Dokl. Akad. Nauk. SSSR 126, 53 (1959)]; D.N. Zubarev, Sov. Phys. Usp. 3, 320 (1960) [Usp. Fiz. Nauk 71, 71 (1960)]; R. Kubo, M. Toda, and N. Hashitsume, Statistical Physics II - Nonequilibrium Statistical Mechanics (Springer, Berlin-Heidelberg, 1985).
* (3) L.G. Moretto, Phys. Lett. B 40, 1 (1972).
* (4) A.L. Goodman, Phys. Rev. C 29, 1887 (1984).
* (5) J.L. Egido, P. Ring, S. Iwasaki, and H.J. Mang, Phys. Lett. B 154, 1 (1985).
* (6) R. Rossignoli, P. Ring and N.D. Dang, Phys. Lett. B 297, 9 (1992); N.D. Dang, P. Ring and R. Rossignoli, Phys. Rev. C 47, 606 (1993).
* (7) V. Zelevinsky, B.A. Brown, N. Frazier, and M. Horoi, Phys. Rep. 276, 85 (1996).
* (8) N. Dinh Dang and V. Zelevinsky, Phys. Rev. C 64, 064319 (2001); N.D. Dang and A. Arima, Phys. Rev. C 67, 014304 (2003); N.D. Dang and A. Arima, Phys. Rev. C 68, 014318 (2003); N.D. Dang, Nucl. Phys. A 784, 147 (2007).
* (9) N.D. Dang and N.Q. Hung, Phys. Rev. C 77, 064315 (2008).
* (10) N.Q. Hung and N.D. Dang, Phys. Rev. C 78, 064315 (2008).
* (11) R.W. Richardson, Phys. Lett. 3, 277 (1963); Ibid. 14, 325 (1965); R.W. Richardson and N. Sherman, Nucl. Phys. 52, 221 (1964).
* (12) A. Volya, B.A. Brown, and V. Zelevinsky, Phys. Lett. B 509 (2001) 37.
* (13) R. Kubo, J. Phys. Soc. Japan 17, 975 (1962).
* (14) R. Denton, B. Mühlschlegel, and D.J. Scalapino, Phys. Rev. B 8, 3589 (1973).
* (15) T. Sumaryada and A. Volya, Phys. Rev. C 76, 024319 (2007).
* (16) A. Bohr and B.R. Mottelson, Nuclear Structure I (Benjamin, NY, 1969).
* (17) P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer, Heidelberg, 2004).
* (18) J. von Delft and D.C. Ralph, Phys. Rep. 345, 61 (2001).
* (19) R. Rossignoli, N. Canoza, and P. Ring, Ann. Phys. 275, 1 (1999).
* (20) S. Frauendorf, N.K. Kuzmenko, V.M. Mikhajlov, and J.A. Sheikh, Phys. Rev. B 68, 024518 (2003).
* (21) G. Breit, Phys. Rev. 58, 506 (1940); E.P. Wigner, Phys. Rev. 70, 15 (1946); M. Danos and W. Greiner, Phys. Rev. 134, B284 (1964).
* (22) K. Kaneko and M. Hasegawa, Phys. Rev. C 72, 024307 (2005), K. Kaneko et al., Phys. Rev. C 74, 024325 (2006).
* (23) N.N. Bogoliubov, JETP 34, 58 (1958).
* (24) N.Q. Hung and N.D. Dang, Phys. Rev. C 76, 054302 (2007); 77, 029905(E) (2008).
* (25) H.J. Lipkin, Ann. Phys. 9, 272 (1960); Y. Nogami, Phys. Rev. 134, B313 (1964); H.C. Pradhan, Y. Nogami, and J. Law, Nucl. Phys. A 201, 357 (1973).
* (26) H. Olofsson, R. Bengtsson, P. Möller, Nucl. Phys. A 784, 104 (2007).
* (27) N. Dinh Dang, Phys. Rev. C 76, 064320 (2007).
* (28) A. Volya, V. Zelevinsky, and B. Alex Brown, Phys. Rev. C 65, 054312 (2002).
* (29) N. Dinh Dang, Phys. Rev. C 74, 024318 (2006).
* (30) D.H.E. Gross and J.F. Kenney, J. Chem. Phys. 112, 224111 (2005).
|
arxiv-papers
| 2009-05-13T02:21:55 |
2024-09-04T02:49:02.564478
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "N. Quang Hung and N. Dinh Dang",
"submitter": "Nguyen Quang Hung",
"url": "https://arxiv.org/abs/0905.2002"
}
|
0905.2017
|
# Light Scalar Mesons in Photon-Photon Collisions
N.N. Achasov111E-mail: achasov@math.nsc.ru and G.N. Shestakov222E-mail:
shestako@math.nsc.ru Laboratory of Theoretical Physics, S.L. Sobolev Institute
for Mathematics, 630090, Novosibirsk, Russia
###### Abstract
The light scalar mesons, discovered over forty years ago, became a challenge
for the naive quark-antiquark model from the outset. At present the nontrivial
nature of these states is no longer denied by practically anybody. Two-photon
physics has made a substantial contribution to understanding the nature of the
light scalar mesons. Recently, it has entered a new stage of high-statistics
measurements. We review the results concerning two-photon production
mechanisms of the light scalars, based on the analysis of current experimental
data.
###### pacs:
12.39.-x, 13.40.-f, 13.60.Le, 13.75.Lb
Outline
1. Introduction.
2. Special place of the light scalar mesons in the hadron world. Evidence for
their four-quark structure.
3. Light scalar mesons in the light of photon-photon collisions.
3.1. History of investigations. 3.2. Current experimental situation. 3.3.
Dynamics of the reactions $\gamma\gamma\to\pi\pi$: Born contributions and
angular distributions. 3.4. Production mechanisms of scalar resonances.
4. Analysis of high-statistics Belle data on the reactions
$\gamma\gamma\to\pi^{+}\pi^{-}$ and $\gamma\gamma\to\pi^{0}\pi^{0}$.
Manifestations of the $\sigma(600)$ and $f_{0}(980)$ resonances.
5. Production of the $a_{0}(980)$ resonance in the reaction
$\gamma\gamma\to\pi^{0}\eta$.
6. Preliminary summary.
7. Future Trends.
7.1. $f_{0}(980)$ and $a_{0}(980)$ resonances near $\gamma\gamma\to K^{+}K^{-}$
and $\gamma\gamma\to K^{0}\bar{K}^{0}$ reaction thresholds. 7.2.
$\sigma(600)$, $f_{0}(980)$, and $a_{0}(980)$ resonances in $\gamma\gamma^{*}$
collisions. 7.3. Inelasticity of $\pi\pi$ scattering and
$f_{0}(980)-a_{0}(980)$ mixing.
8. Appendix.
8.1. $\gamma\gamma\to\pi\pi$. 8.2. $\gamma\gamma\to\pi^{0}\eta$. 8.3.
$\gamma\gamma\to K\bar{K}$.
References.
1. Introduction
The scalar channels in the region up to 1 GeV became a stumbling block of QCD
because both perturbation theory and sum rules do not work in these channels.
333The point is that, in contrast to the classic vector channels, in this
region there are no solitary resonances, i.e., scalar resonances that are not
accompanied by a large background inseparable from the resonance. Particularly,
in the case of the solitary $a_{0}(980)$ and $f_{0}(980)$ resonances, the
resonance peaks in the $\phi$ $\to$ $\gamma a_{0}(980)$ $\to$ $\gamma\pi\eta$
and $\phi$ $\to$ $\gamma f_{0}(980)$ $\to$ $\gamma\pi\pi$ decays would not be
observed at all because the differential probabilities of these decays vanish
in proportion to the cube of the photon energy in the soft-photon region, as
required by gauge invariance AI89 ; AG01 ; AG02YF ; Ac03a ; Ac04YF , see
Section 2. The
principal role of the chiral background in the fate of the $\sigma(600)$
resonance was demonstrated in the linear $\sigma$ model AS94a ; AS93 ; AS94a1
; AS07 . The solitary resonance approximation is nothing more than an academic
exercise in the light scalar meson case. At the same time the question on the
nature of the light scalar mesons, $\sigma(600)$, $\kappa(800)$, $a_{0}(980)$,
and $f_{0}(980)$ PDG08 ; PDG10 , is of major importance for understanding the mechanism of chiral symmetry realization arising from confinement, and hence for understanding confinement itself.
The hunt for the light $\sigma$ and $\kappa$ mesons began as early as the sixties, and preliminary information on the light scalar mesons appeared in Particle Data Group (PDG) reviews at that time (see, for example, PDG65 ; PDG67 ; PDG69 ). The theoretical ground for the search for scalar mesons was the linear $\sigma$ model (LSM) GL60 ; Ge64 ; Le67 , which takes into account spontaneous breaking of chiral symmetry and contains pseudoscalar mesons as Goldstone bosons. The surprising thing is that ten years later it became clear that the LSM could be the low-energy realization of QCD. At the end of the sixties and the beginning of the seventies PDG67 ; PDG71 ; PDG73 the narrow light scalar resonances, the isovector $a_{0}(980)$ and the isoscalar $f_{0}(980)$, were discovered. [4] In 1977 Jaffe noted that in the MIT bag model,
which incorporates confinement phenomenologically, there exists the nonet of
the light scalar four-quark states Ja77 . He suggested also that $a_{0}(980)$
and $f_{0}(980)$ might be these states with symbolic structures:
$a^{+}_{0}(980)=u\bar{d}s\bar{s}$,
$a^{0}_{0}(980)=(us\bar{u}\bar{s}-ds\bar{d}\bar{s})/\sqrt{2}$,
$a^{-}_{0}(980)=d\bar{u}s\bar{s}$, and
$f_{0}(980)=(us\bar{u}\bar{s}+ds\bar{d}\bar{s})/\sqrt{2}$. From that time on, the $a_{0}(980)$ and $f_{0}(980)$ resonances became the beloved children of light quark spectroscopy.
As for the $\sigma$ and $\kappa$ mesons, long-standing unsuccessful attempts to prove their existence conclusively entailed general disappointment, and information on these states disappeared from the PDG reviews. One of the principal reasons against the $\sigma$ and $\kappa$ mesons was the fact that the $S$ wave phase shifts of both $\pi\pi$ and $\pi K$ scattering do not pass through $90^{0}$ at the putative resonance masses. Nevertheless, experimental and theoretical investigations of processes in which the $\sigma$ and $\kappa$ states could reveal themselves continued.
The situation changed when we showed AS94a that in the LSM there is a negative background phase in the $\pi\pi$ scattering $S$ wave amplitude with isospin $I$ = 0, which hides the $\sigma$ meson, with the result that the $\pi\pi$ $S$ wave phase shift does not pass through $90^{0}$ at the putative resonance mass. It became clear that the shielding of wide lightest scalar mesons in chiral dynamics is very natural. This idea was picked up and triggered a new wave of theoretical and experimental searches for the $\sigma$ and $\kappa$ mesons, see, for example, SS95 ; To95 ; Is96 ; HSS96 ; Is97 ; Bl98 ; Bl99 ; Is00 . As a result the light $\sigma$ resonance, since 1996, and the light $\kappa$ resonance, since 2004, have appeared in the PDG reviews PDG96 ; PDG04 .
By now there is an impressive amount of data about the light scalar mesons
PDG08 ; PDG10 ; STA08 ; Am10 . The nontrivial nature of these states is denied by practically nobody. In particular, there exist numerous pieces of evidence in favour of their four-quark structure. These are widely
covered in the literature AI89 ; Ac03a ; STA08 ; Am10 ; Mo83 ; ADS84a ; Ac90 ;
Ac98 ; Ac99 ; Ac02 ; Ac01 ; Ac03b ; Ac03c ; Ac04a ; Ac04b ; Ac06 ; Ac08a ;
Ac08b ; Ac09Bog ; AS91 ; AS99 ; DS98 ; GN99 ; Tu01 ; Tu03 ; CT02 ; AJ03 ; JW03
; AT04 ; Ma04 ; Ja05 ; Ja07 ; Ka05 ; AKS06 ; AKS08 ; CCL06 ; Bu06 ; FJS06 ;
FJS08a ; FJS08b ; FJS09 ; Na06 ; Na08 ; To07 ; KZ07 ; MPR07 ; Pe07a ; Pe07b ;
vBR07 ; By08 ; Le08 ; tH08 ; IK08 ; EFG09 ; AS09China ; AS-Q10 . They are
presented also in Sections 2–6.
One of them is the suppression of the $a_{0}(980)$ and $f_{0}(980)$ resonances
in the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ and $\gamma\gamma$ $\to$ $\pi\pi$
reactions, respectively, predicted in 1982 ADS82a ; ADS82b and confirmed by
experiment PDG08 ; PDG10 . The elucidation of the mechanisms of the $\sigma(600)$, $f_{0}(980)$, and $a_{0}(980)$ resonance production in $\gamma\gamma$ collisions and of their quark structure are intimately related problems. That is why studies of two-photon processes are an important part of light scalar meson physics.
It should be noted that the reactions of hadron production in photon-photon
collisions are measured at $e^{+}e^{-}$ colliders, i.e., the information on
the transitions $\gamma\gamma$ $\to$ hadrons is extracted from the data on the
processes $e^{+}e^{-}$ $\to$ $e^{+}e^{-}\gamma\gamma$ $\to$ $e^{+}e^{-}$
hadrons (Fig. 1). The largest statistics are obtained with the so-called “non tag” method, in which only the hadrons are detected and the scattered leptons are not. In
this case the main contribution to the cross section of $e^{+}e^{-}$ $\to$
$e^{+}e^{-}$ hadrons is provided by photons with very small virtualities.
Therefore, this method allows one to extract data on hadron production in collisions of almost real photons. The overwhelming majority of data on the
inclusive channels $\gamma\gamma$ $\to$ hadrons has been obtained with the use
of this method. If the scattered electrons are detected (which leads to a loss
of statistics), then one can investigate in addition the $Q^{2}$ dependence of
the hadron production cross sections in $\gamma\gamma^{*}(Q^{2})$ collisions,
where $\gamma$ is a real photon and $\gamma^{*}(Q^{2})$ is a photon with
virtuality $Q^{2}=(p_{1}-p^{\prime}_{1})^{2}$. [5] Detailed formulae for
experimental investigations of the reactions $e^{+}e^{-}$ $\to$ $e^{+}e^{-}$
hadrons may be found in the reviews BGMS75 ; Ko84 .
Figure 1: The two-photon process of hadron formation at $e^{+}e^{-}$
colliders; $p_{1}$, $p^{\prime}_{1}$ and $p_{2}$, $p^{\prime}_{2}$ are the
4-momenta of electrons and positrons.
Recently a qualitative leap took place in the experimental investigations of the $\gamma\gamma$ $\to$ $\pi\pi$ and $\gamma\gamma$ $\to$ $\pi^{0}\eta$ processes Mo03 ; Mo07a ; Mo07b ; Ue08 ; Ue09 , which confirmed the theoretical expectations based on the four-quark nature of the light scalar mesons ADS82a ; ADS82b . The Belle Collaboration published data on the cross sections for the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ Mo07a ; Mo07b , $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ Ue08 , and $\gamma\gamma$ $\to$ $\pi^{0}\eta$ Ue09 reactions, whose statistics are hundreds of times larger than those of all previous data. The Belle Collaboration observed for the first time clear signals of the $f_{0}(980)$ resonance in both charge channels. The previous indications of $f_{0}(980)$ production in $\gamma\gamma$ collisions Ma90 ; Bo90 ; Oe90 ; Be92 ; Bi92 ; Bara00 ; Bra00 were rather inconclusive.
In this paper we present the results of an investigation of the mechanisms of the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$, $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$, and $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reactions (see Sections 3–5), based on the analysis AS05 ; AS07 ; AS08a ; AS08b ; AS09 ; AS10a ; AS10b of the Belle data Mo03 ; Mo07a ; Mo07b ; Ue08 ; Ue09 and on our previous investigations of scalar meson physics in $\gamma\gamma$ collisions ADS82a ; ADS82b ; ADS85 ; AS88 ; AS92 ; AS94b ; AS91 . We also briefly (and sometimes critically) survey the analyses of other authors.
The joint analysis of the Belle high-statistics data on the $\gamma\gamma$
$\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ reactions is
presented and the principal dynamical mechanisms of these processes are
elucidated in the energy region up to 1.5 GeV. The analysis of the Belle high-
statistics data on the reaction $\gamma\gamma$ $\to$ $\pi^{0}\eta$ is
presented too. It is shown that the two-photon decays of the light scalar
resonances are the four-quark transitions caused by the rescatterings $\sigma$
$\to$ $\pi^{+}\pi^{-}$ $\to$ $\gamma\gamma$, $f_{0}(980)$ $\to$
$(K^{+}K^{-}+\pi^{+}\pi^{-})$ $\to$ $\gamma\gamma$, and $a_{0}(980)$ $\to$
$(K\bar{K}+\pi^{0}\eta+\pi^{0}\eta^{\prime})$ $\to$ $\gamma\gamma$ in contrast
to the two-photon decays of the classic $P$ wave tensor $q\bar{q}$ mesons $a_{2}(1320)$, $f_{2}(1270)$ and $f^{\prime}_{2}(1525)$, which are caused mainly by the direct two-quark transitions $q\bar{q}$ $\to$ $\gamma\gamma$. As for the direct coupling constants of the $\sigma(600)$, $f_{0}(980)$, and $a_{0}(980)$ resonances with the $\gamma\gamma$ system, they are small. The two-photon widths averaged over the resonance mass distributions are obtained: $\langle\Gamma_{f_{0}\to\gamma\gamma}\rangle_{\pi\pi}$ $\approx$ 0.19 keV, $\langle\Gamma_{a_{0}\to\gamma\gamma}\rangle_{\pi\eta}$ $\approx$ 0.4 keV and $\langle\Gamma_{\sigma\to\gamma\gamma}\rangle_{\pi\pi}$ $\approx$ 0.45 keV.
In Section 7, we turn to additional possibilities for investigating the $a_{0}(980)$ and $f_{0}(980)$ resonances in the reactions
$\gamma\gamma$ $\to$ $K^{+}K^{-}$ and $\gamma\gamma$ $\to$ $K^{0}\bar{K}^{0}$,
which are as yet little studied experimentally, and also to the promising
possibility of investigating the nature of the light scalars $\sigma(600)$,
$f_{0}(980)$, and $a_{0}(980)$ in $\gamma\gamma^{*}(Q^{2})$ collisions.
2\. Special place of the light scalar mesons in the hadron world. Evidences of
their four-quark structure
Even a cursory examination of the PDG reviews gives an idea of the four-quark structure of the light scalar meson nonet [6] (to be on the safe side, notice that the linear $\sigma$ model does not contradict the non-$q\bar{q}$ nature of the low lying scalars, because quantum fields can contain different virtual particles in different regions of virtuality), $\sigma(600)$, $\kappa(800)$, $a_{0}(980)$, and $f_{0}(980)$,
$\begin{array}{ccccc}a_{0}^{-}&&a_{0}^{0}/f_{0}&&a_{0}^{+}\\ &&&&\\ &\{\kappa\}&&\{\kappa\}&\\ &&&&\\ &&\sigma&&\end{array}$ (1)
inverted AS99 in comparison with the classical $P$ wave $q\bar{q}$ tensor
meson nonet $f_{2}(1270)$, $a_{2}(1320)$, $K_{2}^{\ast}(1420)$, and
$f_{2}^{\prime}(1525)$
$\begin{array}{ccccc}&&f^{\prime}_{2}&&\\ &&&&\\ &\{K^{*}_{2}\}&&\{K^{*}_{2}\}&\\ &&&&\\ a^{-}_{2}&&a^{0}_{2}/f_{2}&&a^{+}_{2}\,,\end{array}$ (2)
or also in comparison with the classical $S$ wave vector meson nonet
$\rho(770)$, $\omega(782)$, $K^{*}(892)$, and $\phi(1020)$. [7] (In Eqs. (1) and (2) the mass and the third component of the isotopic spin of the states increase bottom-up and from left to right, respectively.) In the naive quark model such a nonet cannot be understood as the $P$ wave $q\bar{q}$ nonet, but it can be easily understood as the $S$ wave $q^{2}\bar{q}^{2}$ nonet, where $\sigma(600)$ has no strange quarks, $\kappa(800)$ has one $s$ quark, and $a_{0}(980)$ and $f_{0}(980)$ have the $s\bar{s}$ pair.
The scalar mesons $a_{0}(980)$ and $f_{0}(980)$, discovered about forty years ago, became a hard problem for the naive $q\bar{q}$ model from the outset. [8] Note here a series of important experiments of the seventies in which the $f_{0}(980)$ and $a_{0}(980)$ resonances were investigated Fl72 ; Pr73 ; Hy73 ; Gr74 ; Hy75 ; Ga76 , as well as a few theoretical analyses of scalar meson properties relevant to this period Mo74 ; Fl76 ; MOS77 ; Pe77 ; Ja77 ; Es79 ; ADS79 . In the last-named paper the fine threshold phenomenon of the $a_{0}(980)-f_{0}(980)$ mixing, which breaks isotopic invariance, was discovered theoretically (see also ADS81b ). Now a rebirth of interest in the $a_{0}(980)-f_{0}(980)$ mixing is taking place, and new suggestions on searching for this phenomenon have appeared (see, for example, AS04a ; AS04b ; WZZ07 ; WZ08 and references in these papers), as well as the first indications of its manifestation in the $f_{1}(1285)$ $\to$ $\pi^{+}\pi^{-}\pi^{0}$ decay, which is measured with the help of the VES detector at IHEP in Protvino Do07 ; Dor08 ; Ni09 , and in the decays $J/\psi\to\phi f_{0}(980)\to\phi a_{0}(980)\to\phi\eta\pi$ and $\chi_{c1}\to\pi^{0}a_{0}(980)\to\pi^{0}f_{0}(980)\to\pi^{+}\pi^{-}\pi^{0}$, which are being investigated with the BESIII detector at BEPCII in China Har10 . Indeed, on the one hand, the almost exact degeneracy of the masses of the isovector $a_{0}(980)$ and isoscalar $f_{0}(980)$ states seemingly pointed to the structure $a^{+}_{0}(980)$ = $u\bar{d}$, $a^{0}_{0}(980)$ = $(u\bar{u}-d\bar{d})/\sqrt{2}$, $a^{-}_{0}(980)$ = $d\bar{u}$ and $f_{0}(980)$ = $(u\bar{u}+d\bar{d})/\sqrt{2}$, similar to the structure of the vector $\rho$ and $\omega$ or tensor $a_{2}(1320)$ and $f_{2}(1270)$ mesons, but on the other hand, the strong coupling of the $f_{0}(980)$ to the $K\bar{K}$ channel seemed to suggest a considerable admixture of the strange $s\bar{s}$ pair in the wave function of the $f_{0}(980)$.
At the beginning of the eighties it was demonstrated in a series of papers ADS80a ; ADS80b ; ADS81a ; ADS81b ; ADS81c ; ADS84a ; ADS84b that the data on the $f_{0}(980)$ and $a_{0}(980)$ resonances available at that time could be interpreted in favour of the $q^{2}\bar{q}^{2}$ model, i.e., could be explained by using coupling constants of the $f_{0}(980)$ and $a_{0}(980)$ states with pseudoscalar mesons that are superallowed by the Okubo-Zweig-Iizuka (OZI) rule, as predicted by the $q^{2}\bar{q}^{2}$ model. In particular, in these papers formulae for the scalar resonance propagators were obtained and refined, taking into account finite-width corrections in the case of strong coupling to two-particle decay channels. Later on, these formulae were used in fitting the data of a series of experiments on $f_{0}(980)$ and $a_{0}(980)$ resonance production (see, for example, Ach98a ; Ach98b ; Ach00a ; Ach00b ; Akh99a ; Akh99b ; Al02a ; Al02b ; Du03 ; Am06 ; Am07a ; Am07b ; Bo07 ; Cav07 ; Mo07a ; Mo07b ; Ue08 ; Bo08 ). Recently, it was shown that the above scalar resonance propagators satisfy the Källen-Lehmann representation in the domain of coupling constants usually used AK04 .
At the end of the eighties it was shown that the study of the radiative decays $\phi\to\gamma a_{0}\to\gamma\pi\eta$ and $\phi\to\gamma f_{0}\to\gamma\pi\pi$ can shed light on the problem of the $a_{0}(980)$ and $f_{0}(980)$ mesons AI89 . Over the next ten years, before the experiments (1998), the question was considered from different points of view BGP92 ; CIK93 ; LN94 ; Ac95 ; AGS97 ; AGS97YF ;
AGShev97 ; AGShev97MP ; AGShev97YF ; AG97 ; AG98YFa ; AcGu98 ; AG98YFb .
Now these decays have been studied not only theoretically but also
experimentally with the help of the SND Ach98a ; Ach98b ; Ach00a ; Ach00b and
CMD-2 Akh99a ; Akh99b detectors at Budker Institute of Nuclear Physics in
Novosibirsk and the KLOE detector at the DA$\Phi$NE $\phi$-factory in Frascati
Al02a ; Al02b ; Am06 ; Am07a ; Am07b ; Amb09a ; Ambr09b ; Bin08 ; Bo08 .
These experimental data gave rise to a series of theoretical investigations AG01 ; AG02YF ; Ac03a ; Ac04YF ; Ac03b ; AK03 ; AK04YF ; AK06 ; AK07a in which evidence for the four-quark nature of the $f_{0}(980)$ and $a_{0}(980)$ states was obtained. We note one clear qualitative argument. The isovector $a_{0}(980)$ resonance is produced in the radiative $\phi$ meson decay as intensively as the isoscalar $\eta^{\prime}(958)$ meson containing $\approx 66\%$ of $s\bar{s}$, which is responsible for the $\phi\approx s\bar{s}\to\gamma s\bar{s}\to\gamma\eta^{\prime}(958)$ decay. In the two-quark model, $a^{0}_{0}(980)$ = $(u\bar{u}-d\bar{d})/\sqrt{2}$, the $\phi$ $\approx$ $s\bar{s}$ $\to$ $\gamma a_{0}(980)$ decay should be suppressed by the OZI rule. So, experiment probably indicates the presence of the $s\bar{s}$ pair in the isovector $a_{0}(980)$ state, i.e., its four-quark nature.
---
Figure 2: The $K^{+}K^{-}$ loop mechanism of the radiative decays $\phi(1020)$
$\to$ $\gamma(a_{0}(980)/f_{0}(980))$.
---
Figure 3: The left and right plots illustrate the fit to the KLOE data for the
$\pi^{0}\eta$ and $\pi^{0}\pi^{0}$ mass spectra in the $\phi$ $\to$
$\gamma\pi^{0}\eta$ Al02a and $\phi$ $\to$ $\gamma\pi^{0}\pi^{0}$ Al02b
decays, respectively. See for details AK03 ; AK04YF ; AK06 ; AK07a
.
Figure 4: A new threshold phenomenon in the $\phi\to K^{+}K^{-}$ $\to$ $\gamma R$ decays. The function $|g(m)|^{2}$ = $|g_{R}(m)/g_{RK^{+}K^{-}}|^{2}$, universal in the $K^{+}K^{-}$ loop model, is drawn with the solid line. The
contributions of the imaginary and real parts of $g(m)$ are drawn with the
dashed and dotted lines, respectively.
When motivating the experimental investigations AI89 , the kaon loop model $\phi$ $\to$ $K^{+}K^{-}$ $\to$ $\gamma a_{0}(980)$ $\to$ $\gamma\pi^{0}\eta$ and $\phi$ $\to$ $K^{+}K^{-}$ $\to$ $\gamma f_{0}(980)$ $\to$ $\gamma\pi\pi$ was suggested, see Fig. 2. This model is used in the data treatment and has been ratified by experiment Ach98a ; Ach98b ; Ach00a ; Ach00b ; Akh99a ; Akh99b ; Al02a ; Al02b ; Am06 ; Am07a ; Am07b ; Amb09a ; Ambr09b ; Bo08 ; Bin08 ; DiMi08 ; SVP09 ; A-C10 , see Fig. 3. The key virtue of the kaon loop model is the built-in nontrivial threshold phenomenon, see Fig. 4. To describe the experimental mass distributions $dBR(\phi\to\gamma R\to\gamma ab;\,m)/dm\sim|g(m)|^{2}\omega(m)$ [9] (here $m$ is the invariant mass of the $ab$ state, $R=a_{0}(980)$ or $f_{0}(980)$, $ab=\pi^{0}\eta$ or $\pi^{0}\pi^{0}$, and the function $g(m)$ describes the $\phi\to\gamma[a_{0}(m)/f_{0}(m)]$ transition vertex), the function $|g(m)|^{2}$ should be smooth at $m\leq 0.99$ GeV. But gauge invariance requires that $g(m)$ be proportional to the photon energy $\omega(m)$. Stopping the rapid growth of the function $(\omega(m))^{3}$ at $\omega(990\,\mbox{MeV})$ = 29 MeV is the crucial point in describing the data. The $K^{+}K^{-}$ loop model solves this problem in an elegant way AG01 ; AG02YF ; Ac01 ; Ac03a ; Ac04YF ; Ac03b , see Fig. 4. In truth this means that the $a_{0}(980)$ and $f_{0}(980)$ resonances are seen in the radiative decays of the $\phi$ meson owing to the $K^{+}K^{-}$ intermediate state. So the mechanism of the $a_{0}(980)$ and $f_{0}(980)$ meson production in the $\phi$ radiative decays is established, at least at a physical level of proof.
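For orientation, a simple kinematic check (standard two-body kinematics, assuming $m_{\phi}\approx 1019.5$ MeV; it is not part of the cited analyses): the photon energy in the $\phi$ rest frame is
$\omega(m)=\frac{m^{2}_{\phi}-m^{2}}{2m_{\phi}}\,,\qquad\omega(990\ \mbox{MeV})\approx\frac{(1019.5)^{2}-(990)^{2}}{2\times 1019.5}\ \mbox{MeV}\approx 29\ \mbox{MeV}\,,$
so that, with $g(m)\propto\omega(m)$ as required by gauge invariance, the factor $|g(m)|^{2}\omega(m)\propto(\omega(m))^{3}$ would grow rapidly below the $K^{+}K^{-}$ threshold unless the loop dynamics tames it, which is exactly what the $K^{+}K^{-}$ loop function accomplishes.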
Both real and imaginary parts of the $\phi\to\gamma R$ amplitude are caused by
the $K^{+}K^{-}$ intermediate state. The imaginary part is caused by the real
$K^{+}K^{-}$ intermediate state while the real part is caused by the virtual
compact $K^{+}K^{-}$ intermediate state, i.e., we are dealing here with the
four-quark transition Ac01 ; Ac03a ; Ac04YF ; Ac03b . Needless to say,
radiative four-quark transitions can happen between two $q\bar{q}$ states as
well as between $q\bar{q}$ and $q^{2}\bar{q}^{2}$ states but their intensities
depend strongly on the type of the transition. A radiative four-quark transition
between two $q\bar{q}$ states requires creation and annihilation of an
additional $q\bar{q}$ pair, i.e., such a transition is forbidden according to
the OZI rule, while a radiative four-quark transition between $q\bar{q}$ and
$q^{2}\bar{q}^{2}$ states requires only creation of an additional $q\bar{q}$
pair, i.e., such a transition is allowed according to the OZI rule. The
consideration of this question from the large $N_{C}$ expansion standpoint
Ac03a ; Ac04YF supports a suppression of a radiative four-quark transition
between two $q\bar{q}$ states in comparison with a radiative four-quark
transition between $q\bar{q}$ and $q^{2}\bar{q}^{2}$ states. So, both the intensity and the mechanism of the $a_{0}(980)$ and $f_{0}(980)$ production in the radiative decays of the $\phi(1020)$ meson indicate their four-quark nature.
Note also that the absence of the decays $J/\psi\to\gamma f_{0}(980)$,
$J/\psi\to a_{0}(980)\rho$, $J/\psi\to f_{0}(980)\omega$ against a background
of the rather intensive decays into the corresponding classical $P$ wave
tensor $q\bar{q}$ resonances $J/\psi\to\gamma f_{2}(1270)$ (or even
$J/\psi\to\gamma f^{\prime}_{2}(1525)$), $J/\psi\to a_{2}(1320)\rho$,
$J/\psi\to f_{2}(1270)\omega$ also argues against the $P$ wave $q\bar{q}$ structure of the $a_{0}(980)$ and $f_{0}(980)$ states Ac98 ; Ac99 ; Ac02 ; Ac03c .
3\. Light scalars in the light of two-photon collisions
3.1. History of investigations
Experimental investigations of light scalar mesons in the $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$, $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ and $\gamma\gamma$
$\to$ $\pi^{0}\eta$ reactions at the $e^{+}e^{-}$ colliders began in the eighties and have continued up to now. In the first decade many groups, DM1, DM1/2, PLUTO, TASSO, CELLO, JADE, Crystal Ball, MARK II, DELCO, and TPC/2$\gamma$, took part in these studies. Only Crystal Ball and JADE could study the $\pi^{0}\pi^{0}$ and $\pi^{0}\eta$ channels; the others (and JADE) studied the $\pi^{+}\pi^{-}$ channel. For those who wish to read more about the contribution of this impressive period to light scalar meson physics, we can recommend the following reviews and papers: Fi81 ; Hi81 ; We81 ; ADS82a ;
ADS82b ; ADS85 ; Ed82 ; Ol83 ; Me83 ; Ko84 ; Kol85 ; KZ87 ; Ko88 ; Co85 ; Er85
; Bar85 ; KaSer ; An86 ; Jo86 ; Po86 ; BW87 ; MP87 ; MP88 ; AS88 ; AS91 ; Ch88
; PDG92 ; MPW94 .
First results on the $f_{0}(980)$ resonance production are collected in Tables
1 and 2.
It is natural that the first conclusions were of a qualitative character, and the data on the $f_{0}(980)$ $\to$ $\gamma\gamma$ decay width had large errors or were only upper bounds. Note, as a guide, that the TASSO and Crystal Ball results, see Table 2, are based on integrated luminosities of 9.24 pb-1 and 21 pb-1, respectively.
Table 1: First conclusions on the $f_{0}(980)$ production in $\gamma\gamma$ $\to$ $\pi\pi$ (see reviews Fi81 ; Hi81 ; We81 ).
Experiments | Conclusions
---|---
Crystal Ball | No significant $f_{0}(980)$
CELLO | Hint of $f_{0}(980)$
JADE | No evidence for $f_{0}(980)$
TASSO | Good fit to data book values
| for $f_{2}(1270)$ includes $f_{0}(980)$
| (3 $\sigma$ effect)
MARK II | No significant $f_{0}(980)$ signal
Table 2: First results on the $\gamma\gamma$ width of the $f_{0}(980)$ (see reviews Fi81 ; Hi81 ; Ed82 ; Ol83 ; Ko84 ; Ko88 ).
Experiments | $\Gamma_{f_{0}\to\gamma\gamma}$ [keV]
---|---
TASSO | (1.3 $\pm$ 0.4 $\pm$ 0.6)/$B(f_{0}$ $\to$ $\pi^{+}\pi^{-})$
Crystal Ball | $<0.8/B(f_{0}\to\pi\pi)$ (95% C.L.)
JADE | $<0.8$ (95% C.L.)
Kolanoski (1988) Ko88 | 0.27 $\pm$ 0.12 (average value)
Figure 5: Cross section for $\gamma\gamma$ $\to$ $\pi^{0}\eta$ as a function
of $\sqrt{s}$ for $|\cos\theta|\leq 0.9$, where $\sqrt{s}$ is the invariant
mass of $\pi^{0}\eta$ and $\theta$ is the polar angle of the produced
$\pi^{0}$ (or $\eta$) meson in the $\gamma\gamma$ center-of-mass system. The
data are from the Crystal Ball Collaboration An86 .
As for the $a_{0}(980)$ resonance, it was observed in the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction in only three experiments. The Crystal Ball group An86 collected an integrated luminosity of 110 pb-1 over two years, selected 336 events of the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction in the $a_{0}(980)$ and $a_{2}(1320)$ region, see Fig. 5, and in 1986 published the following result: $\Gamma_{a_{0}\to\gamma\gamma}B(a_{0}$ $\to$
$\pi^{0}\eta)$ = $(0.19\pm 0.07^{+0.10}_{-0.07})$ keV, where
$\Gamma_{a_{0}\to\gamma\gamma}$ is the width of the $a_{0}(980)$ $\to$
$\gamma\gamma$ decay and $B(a_{0}$ $\to$ $\pi^{0}\eta)$ is the branching ratio
of the $a_{0}(980)$ $\to$ $\pi^{0}\eta$ decay. The measured value of
$\Gamma_{a_{0}\to\gamma\gamma}B(a_{0}$ $\to$ $\pi^{0}\eta)$ characterizes the
intensity of $a_{0}(980)$ production in the channel $\gamma\gamma$ $\to$
$a_{0}(980)$ $\to$ $\pi^{0}\eta$. For the prehistory of this result see Refs. Ed82 ; Co85 ; Er85 . Four years later, the JADE group Oe90 (see also Ko88 ) obtained $\Gamma_{a_{0}\to\gamma\gamma}B(a_{0}$ $\to$ $\pi^{0}\eta)$ = $(0.28\pm 0.04\pm 0.10)$ keV based on an integrated luminosity of 149 pb-1 and 291 $\gamma\gamma$ $\to$ $\pi^{0}\eta$ events. The Crystal Ball An86 and JADE Oe90 data on the $a_{0}(980)$ $\to$ $\gamma\gamma$ decay aroused keen interest, see, for example, Bar85 ; Barn92 ; Kol85 ; Ko88 ; Ko91 ; KZ87 ; BW87 ; AS88 ; Ch88 ; PDG92 . Later on, the need for high-statistics data arose. But until very recently, there were no new experiments on the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction. According to the PDG reviews from 1992 to 2008, the
average value for $\Gamma_{a_{0}\to\gamma\gamma}B(a_{0}$ $\to$ $\pi^{0}\eta)$
= $(0.24^{+0.8}_{-0.7})$ keV PDG92 ; PDG08 . Only in 2009, the Belle
Collaboration obtained new high-statistics data on the reaction $\gamma\gamma$
$\to$ $\pi^{0}\eta$ at the KEKB $e^{+}e^{-}$ collider Ue09 . The statistics
collected in the Belle experiment is 3 orders of magnitude higher than in the
earlier Crystal Ball and JADE experiments. We present a detailed analysis of the new Belle data in Section 5. Here we only point out the value for
$\Gamma_{a_{0}\to\gamma\gamma}B(a_{0}$ $\to$ $\pi^{0}\eta)$ =
$(0.128^{+0.003+0.502}_{-0.002-0.043})$ keV obtained by the authors of the
experiment Ue09 and the average value for
$\Gamma_{a_{0}\to\gamma\gamma}B(a_{0}$ $\to$ $\pi^{0}\eta)$ =
$(0.21^{+0.8}_{-0.4})$ keV from the last PDG review PDG10 .
The JADE group Oe90 also measured the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ cross section and, having (60 $\pm$ 8) events in the $f_{0}(980)$ region (and, for comparison, (2177 $\pm$ 47) events in the $f_{2}(1270)$ region), obtained for the $f_{0}$ $\to$ $\gamma\gamma$ decay width $\Gamma_{f_{0}\to\gamma\gamma}$ = $(0.42\pm 0.06^{+0.08}_{-0.18})$ keV (which corresponds to $\Gamma_{f_{0}\to\gamma\gamma}<0.6$ keV at 95% C.L.).
In addition, in 1990 the MARK II group, in an experiment on the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ reaction with an integrated luminosity of 209 pb-1 Bo90 , and the Crystal Ball group, in 1990–1992 experiments on the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ reaction with integrated luminosities of 97 pb-1 Ma90 and 255 pb-1 Ka91 ; Bi92 , also obtained similar results for $\Gamma_{f_{0}\to\gamma\gamma}$. All data are listed together in Table 3, and
Figs. 6(a) and 6(b) illustrate the manifestations of the $f_{0}(980)$ and
$f_{2}(1270)$ resonances observed by MARK II and Crystal Ball in the cross
sections for $\gamma\gamma$ $\to$ $\pi\pi$.
Although the statistical significance of the $f_{0}(980)$ signal in the cross sections and the invariant $\pi\pi$ mass resolution left much to be desired, the existence of a shoulder in the $f_{0}(980)$ resonance region in the $\gamma\gamma$ collisions could be considered established, see Fig. 6.
Table 3: 1990–1992 data on the $\gamma\gamma$ width of the $f_{0}(980)$ (see the text).
Experiments | $\Gamma_{f_{0}\to\gamma\gamma}$ [keV]
---|---
Crystal Ball (1990) | 0.31 $\pm$ 0.14 $\pm$ 0.09
MARK II (1990) | 0.29 $\pm$ 0.07 $\pm$ 0.12
JADE (1990) | 0.42 $\pm 0.06^{+0.08}_{-0.18}$
Karch (1991) | 0.25 $\pm$ 0.10
Bienlein (1992) | 0.20 $\pm$ 0.07 $\pm$ 0.04
| $\leq$ 0.31 (90% CL)
---
Figure 6: Cross sections for $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ (a) and
$\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ (b) as functions of the invariant mass
$\sqrt{s}$ of $\pi\pi$. The data correspond to limited angular ranges of the
registration of the final pions; $\theta$ is the polar angle of the produced
$\pi$ meson in the $\gamma\gamma$ center-of-mass system.
The experiments of the eighties and early nineties showed that the two-photon widths of the scalar $f_{0}(980)$ and $a_{0}(980)$ resonances are small in comparison with those of the tensor $f_{2}(1270)$ and $a_{2}(1320)$ resonances, for which the following values were obtained: $\Gamma_{f_{2}\to\gamma\gamma}$ $\approx$ $2.6-3$ keV Ma90 ; Bo90 ; Oe90 ; Be92 (see also PDG08 ; PDG10 ) and $\Gamma_{a_{2}\to\gamma\gamma}$ $\approx$ 1 keV An86 ; Oe90 (see also PDG08 ; PDG10 ). This fact pointed to the four-
quark nature of the $f_{0}(980)$ and $a_{0}(980)$ states An86 ; Kol85 ; Ko88 ;
Ko91 ; Bar85 ; BW87 ; KZ87 ; Ch88 ; Ma90 ; Oe90 ; PDG92 ; Cah89 ; FH90 .
As mentioned above, at the beginning of the eighties it was predicted ADS82a ; ADS82b that, if the $a_{0}(980)$ and $f_{0}(980)$ mesons are taken as four-quark states, their production rates in photon-photon collisions should be suppressed by a factor of ten relative to the case where the $a_{0}(980)$ and $f_{0}(980)$ mesons are taken as two-quark $P$ wave states. The estimates obtained for the
four-quark model were ADS82a ; ADS82b
$\Gamma_{a_{0}\to\gamma\gamma}\sim\Gamma_{f_{0}\to\gamma\gamma}\sim
0.27\,\mbox{keV},$ (3)
which were supported by experiment. As for the $q\bar{q}$ model, it predicted that
$\frac{\Gamma_{0^{++}\to\gamma\gamma}}{\Gamma_{2^{++}\to\gamma\gamma}}=\frac{15}{4}\times(\mbox{corrections})\approx 1.3-5.5$ (4)
for the $P$ wave states with $J^{PC}$ = 0++ and 2++ from the same family, see,
for example, BF73 ; BGK76 ; Ja76 ; BK79 ; Bar85 ; Barn92 ; BW87 ; AS88 ; Ch88
; Ma90 ; MP90 ; Li91 ; Ko91 ; Mu96 ; Ac98 ; Penn07c ; BHS83 . The factor
$\frac{15}{4}$ is obtained in the non-relativistic quark model according to
which
$\Gamma_{0^{++}\to\gamma\gamma}=(256/3)\alpha^{2}|R^{\prime}(0)|^{2}/M^{4}$
and
$\Gamma_{2^{++}\to\gamma\gamma}=(1024/45)\alpha^{2}|R^{\prime}(0)|^{2}/M^{4}$,
where $R^{\prime}(0)$ is the derivative of the $P$ state radial wave function
with a mass $M$ at the origin. $\Gamma_{2^{++}\to\gamma\gamma}$ differs from
$\Gamma_{0^{++}\to\gamma\gamma}$ by the product of the Clebsch-Gordan spin-
orbit coefficient squared ($\frac{1}{2}$) and value of $\sin^{4}\vartheta$
averaged over the solid angle ($\frac{8}{15}$); see Ref. Ja76 for details.
This suggested that $\Gamma_{f_{0}\to\gamma\gamma}\geq 3.4$ keV and
$\Gamma_{a_{0}\to\gamma\gamma}\geq 1.3$ keV.
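As a simple consistency check, using only the nonrelativistic formulas quoted above, the ratio of the direct two-photon widths is
$\frac{\Gamma_{0^{++}\to\gamma\gamma}}{\Gamma_{2^{++}\to\gamma\gamma}}=\frac{256/3}{1024/45}=\frac{256\times 45}{3\times 1024}=\frac{15}{4}\,,\qquad\mbox{equivalently}\qquad\Gamma_{2^{++}\to\gamma\gamma}=\Gamma_{0^{++}\to\gamma\gamma}\times\frac{1}{2}\times\frac{8}{15}\,,$
which reproduces the factor $\frac{15}{4}$ in Eq. (4) before the model-dependent corrections.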
Let us also dwell on the predictions of the molecule model, in which the $a_{0}(980)$ and $f_{0}(980)$ resonances are non-relativistic bound states of the $K\bar{K}$ system WI82 ; WIs90 . Like the $q^{2}\bar{q}^{2}$ model, the molecular one explains the mass degeneracy of these states and their strong coupling to the $K\bar{K}$ channel. As in the four-quark model, in the molecular one no questions arise with the small rates $B[J/\psi$ $\to$ $a_{0}(980)\rho]/B[J/\psi$ $\to$ $a_{2}(1320)\rho$] and $B[J/\psi$ $\to$ $f_{0}(980)\omega]/B[J/\psi$ $\to$ $f_{2}(1270)\omega]$ (see details in Refs. Ac98 ; Ac02 ). However, the predictions of this model for the two-photon
widths Bar85 ; Barn92 ,
$\Gamma_{a_{0}(K\bar{K})\to\gamma\gamma}=\Gamma_{f_{0}(K\bar{K})\to\gamma\gamma}\approx
0.6\,\mbox{keV},$ (5)
are rather large and contradict the experimental data from Table 3 at the level of two standard deviations. More than that, the widths of $K\bar{K}$ molecules must be smaller (strictly speaking, much smaller) than the binding energy $\epsilon\approx 10$ MeV. Recent data PDG10 , however, contradict this: $\Gamma_{a_{0}}$ $\sim$ $(50-100)$ MeV and $\Gamma_{f_{0}}$ $\sim$ $(40-100)$ MeV. The $K\bar{K}$ molecule model also predicted AG97 ; AGS97 that $B[\phi$ $\to$ $\gamma a_{0}(980)]$ $\approx$ $B[\phi$ $\to$ $\gamma f_{0}(980)]$ $\sim$ $10^{-5}$, which contradicts experiment PDG10 . In addition, it was recently shown AK07b ; AK08 that the kaon loop model, ratified by experiment, describes production of a compact state and not an extended molecule. Finally,
experiments in which the $a_{0}(980)$ and $f_{0}(980)$ mesons were produced in
the $\pi^{-}p$ $\to$ $\pi^{0}\eta n$ Dz95 ; Al99 and $\pi^{-}p$ $\to$
$\pi^{0}\pi^{0}n$ Al95 ; Al98 ; Gu01 reactions within a broad range of four-momentum transfer squared, 0 $<$ $-t$ $<$ 1 GeV${}^{2}$, have shown that these states are compact, i.e., as compact as the two-quark $\rho$, $\omega$, $a_{2}(1320)$, $f_{2}(1270)$ and other mesons, and not extended molecular states with form factors determined by their wave functions. These experiments have left no chance for the $K\bar{K}$ molecule model. [10] A $K\bar{K}$ formation of unknown origin with the average relativistic Euclidean momentum squared $\langle k^{2}\rangle\approx 2$ GeV${}^{2}$ was considered recently and named “a $K\bar{K}$ molecule” BGL08 . Such free use of the molecule term can mislead readers, who may picture a molecule as an extended non-relativistic bound system. As for four-quark states, they are as compact as two-quark states. [11] An additional argument against the molecular model for the $a_{0}(980)$ resonance is presented in Section 5.
The Particle Data Group has given information on the average value of $\Gamma_{f_{0}\to\gamma\gamma}$ since 1992. Note that no new experimental data on $\Gamma_{f_{0}\to\gamma\gamma}$ emerged from 1992 up to 2006; nevertheless, its average value, as quoted by the PDG, evolved noticeably over this period. Based on the data in Table 3, the $\Gamma_{f_{0}\to\gamma\gamma}$
value would be (0.26 $\pm$ 0.08) keV. In 1992 PDG PDG92 obtained the average
value $\Gamma_{f_{0}\to\gamma\gamma}$ = (0.56 $\pm$ 0.11) keV combining the
JADE result (1990) Oe90 , see Table 3, with the value
$\Gamma_{f_{0}\to\gamma\gamma}$ = (0.63$\pm$0.14) keV, which was found by
Morgan and Pennington (1990) MP90 as a result of a theoretical analysis of
the MARK II (1990) Bo90 and Crystal Ball (1990) Ma90 data. In 1999 Boglione
and Pennington carried out a new theoretical analysis BP99 of the situation and halved this value, $\Gamma_{f_{0}\to\gamma\gamma}$ = (0.28 ${}^{+0.09}_{-0.13}$) keV (see also Pe99 ). The Particle Data Group noted that
the Boglione and Pennington (1999) result replaces the Morgan and Pennington
(1990) one but used both results coupled with the JADE (1990) one for
calculation of the average $f_{0}$ $\to$ $\gamma\gamma$ decay width. In this
way the value $\Gamma_{f_{0}\to\gamma\gamma}$ = (0.39 ${}^{+0.10}_{-0.13}$)
keV emerged in the PDG review (2000) PDG00 .
In 2003 preliminary high-statistics Belle data on $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ were reported. They contained a clear signal from the $f_{0}(980)$ resonance Mo03 . In 2005 our first response AS05 to these data appeared. It became clear that $\Gamma_{f_{0}\to\gamma\gamma}$ is bound to be small. In 2006 the PDG excluded the Morgan and Pennington (1990) result, $\Gamma_{f_{0}\to\gamma\gamma}$ = $(0.63\pm 0.14)$ keV, from its sample and, using only the JADE (1990) data and the Boglione and Pennington (1999) result, obtained a new guide value $\Gamma_{f_{0}\to\gamma\gamma}$ = (0.31 ${}^{+0.08}_{-0.11}$) keV PDG06 . What happened later to the average value of $\Gamma_{f_{0}\to\gamma\gamma}$, and what may yet happen to it, we describe in the following subsections.
3.2. Current experimental situation
In 2007 the Belle collaboration published data on the cross section of the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ reaction in the region of the $\pi^{+}\pi^{-}$ invariant mass, $\sqrt{s}$, from 0.8 up to 1.5 GeV, based on an integrated luminosity of 85.9 fb-1 Mo07a ; Mo07b . These data are shown in Fig. 7. Thanks to the huge statistics and high energy resolution of the Belle experiment, a clear signal of the $f_{0}(980)$ resonance was detected for the first time. Its magnitude proved to be small, which agrees qualitatively with the four-quark model prediction ADS82a ; ADS82b . The visible height of the $f_{0}(980)$ peak amounts to about 15 nb above a smooth background of about 100 nb. Its visible (effective) width proved to be about 30–35 MeV, see Fig. 7.
Figure 7: (a) The high statistics Belle data on the $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$ reaction cross section for $|\cos\theta|\leq 0.6$ Mo07b .
Plot (b) emphasizes the region of the $f_{0}(980)$ peak. Errors shown include
statistics only. They are approximately equal to 0.5%–1.5%. The $\sqrt{s}$ bin
size in the Belle experiment has been chosen to be 5 MeV, with the mass
resolution of about 2 MeV. Figure 8: (a) The data on the
$\gamma\gamma\to\pi^{+}\pi^{-}$ reaction cross section from Mark II Bo90 and CELLO Be92 , for $\sqrt{s}\leq 0.85$ GeV, and from Belle Mo07b , for 0.8 $\leq\sqrt{s}\leq 1.5$ GeV. (b) The data on the $\gamma\gamma\to\pi^{0}\pi^{0}$ reaction cross section from Crystal Ball Ma90 , for $\sqrt{s}<0.8$ GeV, and from Belle Ue08 , for 0.8 $\leq\sqrt{s}\leq 1.5$
GeV. Plots (a), for $\sqrt{s}>0.85$ GeV, and (b), for $\sqrt{s}>0.8$ GeV, show
exclusively the Belle data to emphasize the discovered miniature signals from
the $f_{0}(980)$ resonance. The theoretical curves, shown on plot (a),
correspond to the cross sections for the process
$\gamma\gamma\to\pi^{+}\pi^{-}$ for $|\cos\theta|\leq 0.6$ caused by the
electromagnetic Born contribution from the elementary one pion exchange: the
total integrated cross section $\sigma^{\mbox{\scriptsize{Born}}}$ =
$\sigma^{\mbox{\scriptsize{Born}}}_{0}$ \+
$\sigma^{\mbox{\scriptsize{Born}}}_{2}$ and the integrated cross sections
$\sigma^{\mbox{\scriptsize{Born}}}_{\lambda}$ with helicity $\lambda$ = 0 and
2.
Then the Belle collaboration published data on the cross section for the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ reaction in the region of the $\pi^{0}\pi^{0}$ invariant mass, $\sqrt{s}$, from 0.6 to 1.6 GeV, based on an integrated luminosity of 95 fb-1 Ue08 ; see also Abe07 ; Nak08 ; Ada08 . Here also a clear signal of the $f_{0}(980)$ resonance was detected for the first
time. Note that the background conditions for the manifestation of the
$f_{0}(980)$ in the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ channel are more
favourable than in the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ one.
Figures 8(a) and 8(b) illustrate a general picture of data on the cross
sections of the $\pi^{+}\pi^{-}$ and $\pi^{0}\pi^{0}$ production in photon-
photon collisions from the $\pi\pi$ threshold up to 1.5 GeV after the Belle
experiments. It is instructive to compare these results with a previous
picture illustrated by Figs. 6(a) and 6(b).
Table 4: The current data on the $f_{0}(980)\to\gamma\gamma$ decay width.
Experiments | $\Gamma_{f_{0}\to\gamma\gamma}$ [keV]
---|---
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ Belle (2007) Mo07a | $0.205^{+0.095+0.147}_{-0.083-0.117}$
$\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ Belle (2008) Ue08 | $0.286\pm 0.017^{+0.211}_{-0.070}$
PDG average value PDG08 ; PDG10 | $0.29^{+0.07}_{-0.06}$
The current information about $\Gamma_{f_{0}\to\gamma\gamma}$ is presented in
Table 4. The Belle collaboration determined $\Gamma_{f_{0}\to\gamma\gamma}$
(see Table 4) as a result of fitting the mass distributions (see Figs. 7(b)
and 8(b)) taking into account the $f_{0}(980)$ and $f_{2}(1270)$ resonance
contributions and smooth background contributions, which are a source of large
systematic errors in $\Gamma_{f_{0}\to\gamma\gamma}$ (see Refs. Mo07a ; Mo07b ; Ue08 for details).
3.3. Dynamics of the reactions $\gamma\gamma\to\pi\pi$: Born contributions and
angular distributions
To get a feeling for the magnitude of the cross sections measured in experiment, in Fig. 8(a)
the total Born cross section of the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$
process, $\sigma^{\mbox{\scriptsize{Born}}}$ =
$\sigma^{\mbox{\scriptsize{Born}}}_{0}$ \+
$\sigma^{\mbox{\scriptsize{Born}}}_{2}$, and the partial helicity ones,
$\sigma^{\mbox{\scriptsize{Born}}}_{\lambda}$, are adduced as a guide, where
$\lambda$ = 0 or 2 is the absolute value of the photon helicity difference.
These cross sections are caused by the elementary one pion exchange mechanism,
see Fig. 9. By the Low theorem [12] (according to this theorem Low54 ; GMG54 ; AbG68 , the Born contributions give the exact physical amplitude of the crossing reaction $\gamma\pi^{\pm}\to\gamma\pi^{\pm}$ close to its threshold) and chiral symmetry [13] (chiral symmetry guarantees the weakness of the $\pi\pi$ interaction at low energy), the Born contributions should dominate near the threshold region of the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ reaction. As shown in Fig. 8(a), this anticipation does not contradict the current data near threshold, but, certainly, the errors leave much to be desired. In addition, one can consider the Born contributions as a reasonable approximation to the background (non-resonance) contributions in the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ amplitudes in the whole resonance region, including that of the $f_{2}(1270)$. The Born contributions are also the basis for the construction of amplitudes that include strong final-state interactions, see, for example, Me83 ; Ly84 ; Lyt85 ; Jo86 ; MP87 ; MP88 ; MP91 ; BC88 ; DHL88 ; DH93 ; OO97 ; AS05 ; AS07 ; Pe06 .
Figure 9: The Born diagrams for $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$.
The Born contributions have the following particular features. First, $\sigma^{\mbox{\scriptsize{Born}}}$ has a maximum at $\sqrt{s}\approx 0.3$ GeV, where $\sigma^{\mbox{\scriptsize{Born}}}\approx\sigma^{\mbox{\scriptsize{Born}}}_{0}$; then $\sigma^{\mbox{\scriptsize{Born}}}_{0}$ falls with increasing $\sqrt{s}$, so that the $\sigma^{\mbox{\scriptsize{Born}}}_{2}$ contribution dominates in $\sigma^{\mbox{\scriptsize{Born}}}$ at $\sqrt{s}>0.5$ GeV, see Fig. 8(a). Second, although the $\sigma^{\mbox{\scriptsize{Born}}}_{2}$ value is approximately 80% due to the $D$ wave amplitude, its interference with the contributions of higher waves is considerable in the differential cross section $d\sigma^{\mbox{\scriptsize{Born}}}(\gamma\gamma$ $\to$ $\pi^{+}\pi^{-})/d|\cos\theta|$, compare Figs. 10(a) and 10(b). The interference, destructive in the first half of the $|\cos\theta|\leq 0.6$ interval and constructive in the second one, flattens out the $\theta$ angle distribution in this interval, and this effect increases with increasing $\sqrt{s}$, see Fig. 10(a).
Figure 10: Plots (a) and (b) show the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$
differential cross section in the Born approximation (i.e., for the elementary
one pion exchange mechanism) and its components for different values of
$\sqrt{s}/\mbox{GeV}$. The vertical straight lines $|\cos\theta|=0.6$ show the
upper boundary of the region available for measurements.
Since the first resonance with $I^{G}(J^{PC})=0^{+}(4^{++})$ has a mass near 2 GeV PDG08 , seemingly only the $S$ and $D$ wave contributions should dominate at $\sqrt{s}\leq$ 1.5 GeV, and the differential cross section of the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ process could be represented as Mo07b
$d\sigma(\gamma\gamma\to\pi^{+}\pi^{-})/d\Omega=|S+D_{0}Y^{0}_{2}|^{2}+|D_{2}Y^{2}_{2}|^{2}\,,$ (6)
where $S$, $D_{0}$, and $D_{2}$ are the $S$ and $D_{\lambda}$ wave amplitudes with helicity $\lambda$ = 0 and 2, and $Y^{m}_{J}$ are the spherical harmonics. [14] (Eq. (6) corresponds to the “untagged” situation, when the dependence on the pion azimuth $\varphi$ is not measured, which was the case in all the above experiments.) However, the above discussion shows that the smooth background contribution in the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ cross section contains higher partial waves due to the one pion exchange, so that the smooth background can imitate a large $S$ wave at $|\cos\theta|$ $\leq$ 0.6.
The one-pion exchange is absent in the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$
channel and the representation of the cross section of this reaction similar
to Eq. (6) is a good approximation at $\sqrt{s}\leq$ 1.5 GeV
$d\sigma(\gamma\gamma\to\pi^{0}\pi^{0})/d\Omega=|\widetilde{S}+\widetilde{D}_{0}Y^{0}_{2}|^{2}+|\widetilde{D}_{2}Y^{2}_{2}|^{2}\,,$
(7)
where $\widetilde{S}$, $\widetilde{D}_{0}$, and $\widetilde{D}_{2}$ are the
$S$ and $D_{\lambda}$ wave amplitudes with the helicity $\lambda$ = 0 and 2.
Nevertheless, the partial wave analysis of the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ events based on Eq. (7) is not free from difficulties, because of the relation $\sqrt{6}|Y^{2}_{2}|$ = $\sqrt{5}Y^{0}_{0}$ – $Y^{0}_{2}$, which gives no way of separating the partial waves when using only the data on the differential cross section Oe90 ; Mo07b ; Ue08 . So, the separation of the contributions with different helicities requires some additional assumptions, for example, the dominance of helicity 2 in the $f_{2}(1270)$ resonance production KrKr78 ; Kra78 ; KV81 ; Li91 , which agrees rather well with the experimental angular distribution.
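For the reader's convenience (a standard completeness remark, not specific to the analyses under discussion), the explicit spherical harmonics entering Eqs. (6) and (7) are
$Y^{0}_{0}=\frac{1}{\sqrt{4\pi}}\,,\qquad Y^{0}_{2}=\sqrt{\frac{5}{16\pi}}\,(3\cos^{2}\theta-1)\,,\qquad|Y^{2}_{2}|=\sqrt{\frac{15}{32\pi}}\,\sin^{2}\theta\,,$
so that indeed $\sqrt{5}Y^{0}_{0}-Y^{0}_{2}=3\sqrt{5/16\pi}\,\sin^{2}\theta=\sqrt{6}\,|Y^{2}_{2}|$, which is the linear dependence responsible for the ambiguity just mentioned.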
The $d\sigma(\gamma\gamma$ $\to$ $\pi^{0}\pi^{0})/d\Omega$ differential cross section in Eq. (7) is a polynomial of second degree in $z$ = $\cos^{2}\theta$, which can be expressed in terms of its roots $z_{1}$ and $z^{*}_{1}$, [15] (such a procedure is the basis for determining all solutions when carrying out partial wave analyses, see, for example, Ger69 ; Barr72 ; Pe77 ; Al98 ; Al99 ; Sad99 ; Gu01 ),
$d\sigma(\gamma\gamma\to\pi^{0}\pi^{0})/d\Omega=C(z-z_{1})(z-z^{*}_{1})\,,$
(8)
where $C$ is a real quantity. So, by fitting experimental data on the differential cross section one can determine only three independent parameters, for example, $C$, Re$z_{1}$, and Im$z_{1}$ up to a sign, and not the four one would like: $|\widetilde{S}|$, $|\widetilde{D}_{0}|$, $|\widetilde{D}_{2}|$, and $\cos\delta$ ($\delta$ is the relative phase between the $\widetilde{S}$ and $\widetilde{D}_{0}$ amplitudes).
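To make the counting explicit (an elementary expansion of Eq. (7), not a new result), substituting the harmonics quoted above and writing $z=\cos^{2}\theta$ gives
$d\sigma(\gamma\gamma\to\pi^{0}\pi^{0})/d\Omega=|\widetilde{S}|^{2}+2\sqrt{\frac{5}{16\pi}}\,(3z-1)\,|\widetilde{S}||\widetilde{D}_{0}|\cos\delta+\frac{5}{16\pi}\,(3z-1)^{2}|\widetilde{D}_{0}|^{2}+\frac{15}{32\pi}\,(1-z)^{2}|\widetilde{D}_{2}|^{2}\,,$
i.e., a quadratic in $z$ with only three measurable real coefficients, while the right-hand side contains the four real unknowns $|\widetilde{S}|$, $|\widetilde{D}_{0}|$, $|\widetilde{D}_{2}|$, and $\cos\delta$; hence the ambiguity described above.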
Figure 11: The Belle data on the angular distributions for $\gamma\gamma$
$\to$ $\pi^{0}\pi^{0}$ Ue08 . The solid lines are the approximations. The
vertical straight lines $|\cos\theta|$ = 0.8 show the upper boundary of the
region available for measurements.
In Fig. 11 the Belle data on the angular distributions in $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ are shown at three values of $\sqrt{s}$. All of them are described very well by the simple two-parameter expression $|a|^{2}$ +
$|b\,Y^{2}_{2}|^{2}$ AS08b . This suggests that the $\gamma\gamma$ $\to$
$\pi^{0}\pi^{0}$ cross section is saturated only by the $\widetilde{S}$ and
$\widetilde{D}_{2}$ partial wave contributions at $\sqrt{s}<1.5$ GeV.
3.4. Production mechanisms of scalar resonances
The expectation of the Belle data and their advent have given rise to a whole series of theoretical papers that study the dynamics of $f_{0}(980)$ and $\sigma(600)$ production in the $\gamma\gamma$ $\to$ $\pi\pi$ processes by various means and discuss the nature of these states AS05 ; AS07 ; AS08a ;
AS08b ; Ac08a ; Ac08b ; AKS08 ; AS09 ; AS10a ; Pe06 ; Pe07a ; Pe07b ; Penn07c
; Penni08 ; PMUW08 ; MNO07 ; MeNO08a ; MeNaO08b ; Men08 ; Na08 ; ORS08 ; OR08
; BKR08 ; KaV08 ; MWZZZ09 ; MNW10 ; GMM10 .
The main lesson from the analysis of the production mechanisms of the light
scalars in $\gamma\gamma$ collisions is the following Ac08a ; Ac08b .
The classical $P$ wave tensor $q\bar{q}$ mesons $f_{2}(1270)$, $a_{2}(1320)$,
and $f^{\prime}_{2}(1525)$ are produced in $\gamma\gamma$ collisions mainly due to the direct $\gamma\gamma$ $\to$ $q\bar{q}$ transitions, whereas
the light scalar mesons $\sigma(600)$, $f_{0}(980)$, and $a_{0}(980)$ are
produced by the rescatterings $\gamma\gamma\to\pi^{+}\pi^{-}\to\sigma$,
$\gamma\gamma\to K^{+}K^{-}\to f_{0}$,
$\gamma\gamma\to(K^{+}K^{-},\pi^{0}\eta)\to a_{0}$, and so on, i.e., due to
the four-quark transitions. As for the direct transitions $\gamma\gamma\to\sigma$, $\gamma\gamma\to f_{0}$, and $\gamma\gamma\to a_{0}$, they are strongly suppressed, as expected in the four-quark model.
This conclusion introduces a new seminal view of the $\gamma\gamma$ $\to$
$\pi\pi$ reaction dynamics at low energy. Let us dwell on this point.
Recall the elementary ideas about the interactions of $C$-even mesons with photons based on the quark model Ko84 ; Ko88 ; KZ87 ; PDG10 . The couplings of the $\gamma\gamma$ system with the classical $q\bar{q}$ states, to which the light pseudoscalar ($J^{PC}=0^{-+}$) and tensor ($2^{++}$) mesons belong, are proportional to the fourth power of the charges of the constituent quarks.
Only the width of the $\pi^{0}$ $\to$ $\gamma\gamma$ decay is evaluated from first principles Ad69 ; BeJ69 ; BFGM72 ; Leu98 .
$\Gamma_{\pi^{0}\to\gamma\gamma}$ is determined completely by the Adler-Bell-
Jackiw axial anomaly and in this case the theory (QCD) is in excellent
agreement with the experiment IO07 ; Ber07 . The relations between the widths
of the $\pi^{0}$ $\to$ $\gamma\gamma$, $\eta$ $\to$ $\gamma\gamma$, and
$\eta^{\prime}$ $\to$ $\gamma\gamma$ decays are obtained in the $q\bar{q}$ model taking into account the effects of the $\eta-\eta^{\prime}$ mixing and of the $SU(3)$ symmetry breaking KZ87 ; Leu98 ; Fe00 .
As for the tensor mesons, in the ideal mixing case, i.e., if
$f_{2}=(u\bar{u}+d\bar{d})/\sqrt{2}$ and $f^{\prime}_{2}=s\bar{s}$, the quark
model predicts the following relations for the coupling constant squared:
$g^{2}_{f_{2}\gamma\gamma}:g^{2}_{a_{2}\gamma\gamma}:g^{2}_{f^{\prime}_{2}\gamma\gamma}=25:9:2\,.$
(9)
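These numbers follow from elementary charge counting (a textbook estimate, assuming that the two-photon coupling of an ideally mixed $q\bar{q}$ state is proportional to the sum of the squared quark charges weighted by its flavour wave function):
$g_{f_{2}\gamma\gamma}\propto\frac{e^{2}_{u}+e^{2}_{d}}{\sqrt{2}}=\frac{5}{9\sqrt{2}}\,,\qquad g_{a_{2}\gamma\gamma}\propto\frac{e^{2}_{u}-e^{2}_{d}}{\sqrt{2}}=\frac{3}{9\sqrt{2}}\,,\qquad g_{f^{\prime}_{2}\gamma\gamma}\propto e^{2}_{s}=\frac{1}{9}\,,$
so that $g^{2}_{f_{2}\gamma\gamma}:g^{2}_{a_{2}\gamma\gamma}:g^{2}_{f^{\prime}_{2}\gamma\gamma}=\frac{25}{162}:\frac{9}{162}:\frac{2}{162}=25:9:2$.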
Though the absolute values of the two-photon widths of the tensor meson decays cannot be obtained from first principles Ko84 ; KZ87 ; BK79 ; BW87 ; BF73 ; BR76 ; Ros81 ; Be80 (see also references therein), the $q\bar{q}$ model prediction (9), which underlies the relations between the widths of the $f_{2}(1270)$ $\to$ $\gamma\gamma$, $a_{2}(1320)$ $\to$ $\gamma\gamma$, and $f^{\prime}_{2}(1525)$ $\to$ $\gamma\gamma$ decays, is used taking into account the effects of deviation from ideal mixing and of $SU(3)$ symmetry breaking Ko84 ; Po86 ; KZ87 ; PDG10 ; Alb90 ; LYS01 . Roughly
speaking, the $q\bar{q}$ model prediction (9) is borne out by experiment.
Among other things, this implies that the final state interaction effects are
small, in particular, the contributions of the $f_{2}(1270)$ $\to$
$\pi^{+}\pi^{-}$ $\to$ $\gamma\gamma$ rescattering type are small in
comparison with the contributions of the direct $q\bar{q}(2^{++})$ $\to$
$\gamma\gamma$ transitions.
The observed smallness of the $a_{0}(980)$ and $f_{0}(980)$ meson two-photon widths in comparison with those of the tensor mesons, and thus the failure of the $q\bar{q}$ model prediction (4) for the relation between the widths of the direct $0^{++}$ and $2^{++}$ $\to$ $\gamma\gamma$ transitions, point to the fact that $a_{0}(980)$ and $f_{0}(980)$ are not quark-antiquark bound states. If the $q\bar{q}$ component is practically absent in the wave functions of the light scalars, and white neutral vector meson pairs are practically absent in their $q^{2}\bar{q}^{2}$ component too, as in the MIT bag model ADS82a ; ADS82b , then the $\sigma(600)$ $\to$ $\gamma\gamma$,
$f_{0}(980)$ $\to$ $\gamma\gamma$, and $a_{0}(980)$ $\to$ $\gamma\gamma$
decays could be the four-quark transitions caused by the rescatterings
$\sigma(600)$ $\to$ $\pi^{+}\pi^{-}$ $\to$ $\gamma\gamma$, $f_{0}(980)$ $\to$
$K^{+}K^{-}$ $\to$ $\gamma\gamma$, and $a_{0}(980)$ $\to$
$(K^{+}K^{-},\pi^{0}\eta)$ $\to$ $\gamma\gamma$. Already in 1988 we considered such a scenario extensively AS88 , analyzing the Crystal Ball data An86 on the $a_{0}(980)$ resonance production in the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction; see also the discussion of the $\gamma\gamma$ $\to$ $K\bar{K}$ reaction mechanisms in Refs. AS92 ; AS94b . Fifteen years later, when the preliminary high-statistics Belle data Mo03 on the $f_{0}(980)$ resonance production in the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ reaction were reported, we studied what role the rescattering mechanisms, in particular the $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $f_{0}(980)$ $\to$ $\pi^{+}\pi^{-}$ mechanism, could play in this process AS05 . As a result we showed that precisely this mechanism gives a reasonable scale for the $f_{0}(980)$ manifestation in the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ cross sections.
Then, in the framework of the $SU(2)_{L}\times SU(2)_{R}$ linear $\sigma$ model, we showed that the $\sigma$ field is described by its four-quark component, at least in the energy (virtuality) region of the $\sigma$ resonance, and that the $\sigma(600)$ meson decay into $\gamma\gamma$ is the four-quark transition $\sigma(600)$ $\to$ $\pi^{+}\pi^{-}$ $\to$ $\gamma\gamma$ AS07 . We also emphasized that the $\sigma$ meson contribution in the $\gamma\gamma$ $\to$ $\pi\pi$ amplitudes is shielded by its strong destructive interference with the background contributions, as in the $\pi\pi$ $\to$ $\pi\pi$ amplitudes [16] (as already noted in the Introduction, the presence of the large background, which shields the $\sigma$ resonance in $\pi\pi$ $\to$ $\pi\pi$, is a consequence of chiral symmetry), i.e., the $\sigma$ meson is produced in the $\gamma\gamma$ collisions accompanied by the large chiral background due to the rescattering mechanism $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ $\to$ $(\sigma+\mbox{\it background})$ $\to$ $\pi\pi$, which results in the modest $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ cross section of about (5–10) nb in the $\sigma$ meson region, see Fig. 8(b). The details of this shielding are given in the next Section.
The above considerations about the dynamics of the $\sigma(600)$, $f_{0}(980)$, and $f_{2}(1270)$ resonance production were developed in the analysis of the final high-statistics Belle data AS08a ; AS08b on the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ reactions, to a discussion of which we now proceed.
4\. Analysis of high statistics Belle data on the reactions
$\gamma\gamma\to\pi^{+}\pi^{-}$ and $\gamma\gamma\to\pi^{0}\pi^{0}$.
Manifestations of the $\sigma(600)$ and $f_{0}(980)$ resonances
As noted above, the $S$ and $D_{\lambda=2}$ partial wave contributions dominate in the Born cross sections $\sigma^{\mbox{\scriptsize{Born}}}_{0}$ and $\sigma^{\mbox{\scriptsize{Born}}}_{2}$, respectively, in the region of interest, $\sqrt{s}<1.5$ GeV, and in this region the $\pi\pi$ interaction is also strong only in the $S$ and $D$ waves; that is why the final-state strong interaction essentially modifies these Born contributions in $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$. [17] It is reliably established by experiment that the $S$ and $D$ wave contributions dominate in the $\pi\pi$ scattering cross sections in the isospin $I$ = 0 and 2 channels at $\sqrt{s}<1.5$ GeV (see, for example, the data of Pr73 ; Hy73 ; Hy75 ; Duru73 ; Hoo77 ; Pe77 ; Al95 ;
Al98 ; Gu01 ). The partial amplitudes $\pi\pi$ $\to$ $\pi\pi$ ($T^{I}_{J}(s)$
= $\\{\eta^{I}_{J}(s)\exp[2i\delta^{I}_{J}(s)]-1\\}/[2i\rho_{\pi^{+}}(s)]$,
where $\delta^{I}_{J}(s)$ and $\eta^{I}_{J}(s)$ are the phase and inelasticity
of the $J$ wave $\pi\pi$ scattering in the channel with the isospin $I$;
$\rho_{\pi^{+}}(s)$ = $(1-4m^{2}_{\pi^{+}}/s)^{1/2}$) with $J$ = 0, 2 and $I$
= 0 reach their unitarity limits at some values of $\sqrt{s}$ in the region of
interest and demonstrate both the smooth energy dependence and the sharp
resonance oscillations. The $T^{0}_{2}(s)$ amplitude is dominated by the
$f_{2}(1270)$ resonance contribution. The $T^{0}_{0}(s)$ amplitude contains
the $\sigma_{0}(600)$ and $f_{0}(980)$ resonance contributions. The
$\sigma_{0}(600)$ resonance contribution is compensated strongly by the chiral
background near the $\pi\pi$ threshold to provide for the observed smallness
of the $\pi\pi$ scattering length $a^{0}_{0}$ and the Adler zero in
$T^{0}_{0}(s)$ at $s$ $\approx$ $m^{2}_{\pi}/2$ AS94a ; AS07 ; AK06 ; AK07a .
$|T^{0}_{0}(s)|$ reaches the unitarity limit in the 0.85–0.9 GeV region and has a narrow dip (falling practically to zero) right below the $K\bar{K}$ threshold, caused by the destructive interference of the $f_{0}(980)$ resonance contribution with the large smooth background. It is also established that the $\pi\pi$ scattering in the $I$ = 0 channel is elastic up to the $K\bar{K}$ channel threshold to a very good approximation, but directly above it the inelasticity $\eta^{0}_{0}(s)$ shows a sharp jump due to the production of the $f_{0}(980)$ resonance, which is strongly coupled to the $K\bar{K}$ channel. In addition, the inelastic $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $\pi\pi$ rescattering plays an important role in the $f_{0}(980)$ resonance region (this process was first noted in Refs. ADS82a ; ADS82b ).
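As a simple illustration of the statement about the Adler zero (a lowest-order chiral estimate added here for orientation, assuming the standard values $m_{\pi}\approx 140$ MeV and $f_{\pi}\approx 92$ MeV; it is not used in the fits described below), the tree-level $I=0$ $S$ wave amplitude is
$T^{0}_{0}(s)\simeq\frac{2s-m^{2}_{\pi}}{32\pi f^{2}_{\pi}}\,,$
which vanishes at $s=m^{2}_{\pi}/2$ and at threshold gives the small Weinberg scattering length $a^{0}_{0}\simeq 7m_{\pi}/(32\pi f^{2}_{\pi})\approx 0.16\,m^{-1}_{\pi}$.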
So, we use a model for the helicity, $M_{\lambda}$, and partial wave, $M_{\lambda J}$, amplitudes of $\gamma\gamma$ $\to$ $\pi\pi$ in which the Born charged $\pi$ and $K$ exchanges, modified by the strong final-state interactions in the $S$ and $D_{2}$ waves, and the direct transitions of the resonances into two photons are taken into account (see, in addition, AS88 ; AS92 ; AS94b ; AS05 ; AS07 ; AS08a ; AS08b ; Me83 ; AcGu98 ; PMUW08 ; OR08 ; MeNO08a ),
$M_{0}(\gamma\gamma\to\pi^{+}\pi^{-};s,\theta)=M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{0}(s,\theta)+\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)\,T_{\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}}(s)+\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)\,T_{K^{+}K^{-}\to\pi^{+}\pi^{-}}(s)+M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s),$ (10)
$M_{2}(\gamma\gamma\to\pi^{+}\pi^{-};s,\theta)=M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{2}(s,\theta)+80\pi d^{2}_{20}(\theta)M_{\gamma\gamma\to f_{2}(1270)\to\pi^{+}\pi^{-}}(s),$ (11)
$M_{0}(\gamma\gamma\to\pi^{0}\pi^{0};s,\theta)=M_{00}(\gamma\gamma\to\pi^{0}\pi^{0};s)=\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)\,T_{\pi^{+}\pi^{-}\to\pi^{0}\pi^{0}}(s)+\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)\,T_{K^{+}K^{-}\to\pi^{0}\pi^{0}}(s)+M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s),$ (12)
$M_{2}(\gamma\gamma\to\pi^{0}\pi^{0};s,\theta)=5d^{2}_{20}(\theta)M_{22}(\gamma\gamma\to\pi^{0}\pi^{0};s)=80\pi d^{2}_{20}(\theta)M_{\gamma\gamma\to f_{2}(1270)\to\pi^{0}\pi^{0}}(s)\,,$ (13)
where $d^{2}_{20}(\theta)$ = $(\sqrt{6}/4)\sin^{2}\theta$. The diagrams corresponding to the
above amplitudes are shown in Figs. 9, 12, 13, and 14.
The first terms on the right-hand sides of Eqs. (10) and (11) are the Born helicity
amplitudes of $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ corresponding to the
elementary one-pion-exchange mechanism (see Fig. 9). Their explicit forms are
given in Appendix 8.1. The terms in Eqs. (10) and (12), containing the
$T_{\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}}(s)$ = $[2T^{0}_{0}(s)+T^{2}_{0}(s)]/3$,
$T_{\pi^{+}\pi^{-}\to\pi^{0}\pi^{0}}(s)$ = $2[T^{0}_{0}(s)-T^{2}_{0}(s)]/3$,
and $T_{K^{+}K^{-}\to\pi^{+}\pi^{-}}(s)$ =
$T_{K^{+}K^{-}\to\pi^{0}\pi^{0}}(s)$ amplitudes, take into account the strong
final-state interactions in the $S$ wave. Eqs. (10) and (12) imply that
$T_{\pi^{+}\pi^{-}\to\pi\pi}(s)$ and $T_{K^{+}K^{-}\to\pi\pi}(s)$ in the loops
of the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ $\to$ $\pi\pi$ and $\gamma\gamma$
$\to$ $K^{+}K^{-}$ $\to$ $\pi\pi$ rescatterings (see Figs. 12 and 13) are on
the mass shell. Here the $\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)$
and $\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)$ functions are the amplitudes of
the triangle loop diagrams describing the transitions $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$ $\to$ ($scalar\ state\ with\ a\ mass=\sqrt{s}$) and
$\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ ($scalar\ state\ with\ a\
mass=\sqrt{s}$), in which the meson pairs $\pi^{+}\pi^{-}$ and $K^{+}K^{-}$
are produced by the electromagnetic Born sources, see Figs. 9 and 14. Their
explicit forms are given in Appendixes 8.1 and 8.3. The amplitude
$M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)$ in Eqs. (10) and
(12), caused by the direct coupling constants of the $\sigma_{0}(600)$ and
$f_{0}(980)$ with photons, and the $f_{2}(1270)$ production amplitude
$M_{\gamma\gamma\to f_{2}(1270)\to\pi^{+}\pi^{-}}(s)=M_{\gamma\gamma\to
f_{2}(1270)\to\pi^{0}\pi^{0}}(s)$ in Eqs. (11) and (13) are specified below.
Figure 12: The diagrams corresponding to the helicity amplitudes (10) and (11)
for the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ reaction. Figure 13: The
diagrams corresponding to the helicity amplitudes (12) and (13) for the
$\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ reaction. Figure 14: The Born diagrams
for $\gamma\gamma$ $\to$ $K^{+}K^{-}$.
Let us show, using the example of the $S$ wave amplitudes $M_{00}(\gamma\gamma$
$\to$ $\pi^{+}\pi^{-};s)$ and $M_{00}(\gamma\gamma$ $\to$ $\pi^{0}\pi^{0};s)$,
that the unitarity condition, i.e., the Watson theorem Wat52 for the
final-state interaction, holds in the model under consideration. First of
all, note that the $4\pi$ and $6\pi$ channel contributions are small for
$\sqrt{s}$ $<$ 1 GeV Pr73 ; Hy73 ; Gr74 and consequently
$T_{\pi^{+}\pi^{-}\to K^{+}K^{-}}(s)$ =
$e^{i\delta^{0}_{0}(s)}|T_{\pi^{+}\pi^{-}\to K^{+}K^{-}}(s)|$ and
$M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)$ = $\pm
e^{i\delta^{0}_{0}(s)}|M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)|$
for $4m_{\pi}^{2}$ $\leq$ $s$ $\leq$ $4m^{2}_{K}$ AS05 ; AS07 ; AS08a ; AK06 ;
AK07a . Taking into account that
Im$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)$ =
$\rho_{\pi^{+}}(s)M_{00}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s)$ one finds
$\displaystyle
M_{00}(\gamma\gamma\to\pi^{+}\pi^{-};s)=M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{00}(s)+$
$\displaystyle+\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)T_{\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}}(s)+$
$\displaystyle+\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)T_{K^{+}K^{-}\to\pi^{+}\pi^{-}}(s)+M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)=$
$\displaystyle=(\mbox{for\ }2m_{\pi}\leq\sqrt{s}\leq 2m_{K})=$
$\displaystyle=\frac{2}{3}e^{i\delta^{0}_{0}(s)}A(s)+\frac{1}{3}e^{i\delta^{2}_{0}(s)}B(s),$
(14) $\displaystyle
M_{00}(\gamma\gamma\to\pi^{0}\pi^{0};s)=\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)T_{\pi^{+}\pi^{-}\to\pi^{0}\pi^{0}}(s)+$
$\displaystyle+\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)T_{K^{+}K^{-}\to\pi^{0}\pi^{0}}(s)+M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)=$
$\displaystyle=(\mbox{for\ }2m_{\pi}\leq\sqrt{s}\leq 2m_{K})=$
$\displaystyle=\frac{2}{3}e^{i\delta^{0}_{0}(s)}A(s)-\frac{2}{3}e^{i\delta^{2}_{0}(s)}B(s),$
(15)
where $A(s)$ and $B(s)$ are real functions. [Footnote 18: $A(s)$ =
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{00}(s)\cos\delta^{0}_{0}(s)+(1/\rho_{\pi^{+}}(s))\mbox{Re}[\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)]\sin\delta^{0}_{0}(s)+(3/2)\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)|T_{K^{+}K^{-}\to\pi^{+}\pi^{-}}(s)|$
$\pm$ $|M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)|$ and
$B(s)$ =
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{00}(s)\cos\delta^{2}_{0}(s)+(1/\rho_{\pi^{+}}(s))\mbox{Re}[\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)]\sin\delta^{2}_{0}(s)$.]
Eqs. (14) and (15) show that, in accordance with the Watson theorem, the phases of the
$S$ wave $\gamma\gamma\to\pi\pi$ amplitudes with $I$ = 0 and 2 coincide with
the $\pi\pi$ scattering phases $\delta^{0}_{0}(s)$ and
$\delta^{2}_{0}(s)$, respectively, in the elastic region (below the $K\bar{K}$
threshold).
We use the following notations and normalizations for the $\gamma\gamma$ $\to$
$\pi\pi$ cross sections:
$\displaystyle\sigma(\gamma\gamma\to\pi^{+}\pi^{-};|\cos\theta|\leq
0.6)\equiv\sigma=\sigma_{0}+\sigma_{2},$ (16)
$\displaystyle\sigma(\gamma\gamma\to\pi^{0}\pi^{0};|\cos\theta|\leq
0.8)\equiv\tilde{\sigma}=\tilde{\sigma}_{0}+\tilde{\sigma}_{2},$ (17)
$\displaystyle\sigma_{\lambda}=\frac{\rho_{\pi^{+}}(s)}{64\pi
s}\int^{0.6}_{-0.6}|M_{\lambda}(\gamma\gamma\to\pi^{+}\pi^{-};s,\theta)|^{2}d\cos\theta,\
\ $ (18)
$\displaystyle\tilde{\sigma}_{\lambda}=\frac{\rho_{\pi^{+}}(s)}{128\pi
s}\int^{0.8}_{-0.8}|M_{\lambda}(\gamma\gamma\to\pi^{0}\pi^{0};s,\theta)|^{2}d\cos\theta.\
\ $ (19)
$\sigma_{\lambda J}$ and $\tilde{\sigma}_{\lambda J}$ denote the corresponding
partial cross sections.
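To make the normalization in Eqs. (16)–(19) concrete, here is a minimal numerical sketch (an illustration with a placeholder amplitude and assumed helper names, not the authors' code): a model helicity amplitude $M_{\lambda}(s,\theta)$ is integrated over $|\cos\theta|\leq 0.6$ with the prefactor $\rho_{\pi^{+}}(s)/(64\pi s)$ of Eq. (18), and the standard conversion $1$ GeV$^{-2}$ $\approx$ 0.3894 mb is used to quote the result in nb.

```python
# Sketch of the cross-section normalization of Eq. (18) for gamma gamma -> pi+ pi-.
# M_lambda below is a placeholder model amplitude; only the phase-space prefactor,
# the angular cut |cos(theta)| <= 0.6, and the unit conversion are illustrated.
import math

m_pi = 0.13957            # charged-pion mass, GeV
GEV2_TO_NB = 0.3894e6     # 1 GeV^-2 ~ 0.3894 mb = 0.3894e6 nb

def rho_pi(s):            # (1 - 4 m_pi^2 / s)^(1/2)
    return math.sqrt(max(0.0, 1.0 - 4.0 * m_pi**2 / s))

def d2_20(theta):         # Wigner function of Eqs. (11) and (13): (sqrt(6)/4) sin^2(theta)
    return math.sqrt(6.0) / 4.0 * math.sin(theta)**2

def sigma_lambda_nb(M_lambda, s, cos_max=0.6, n=200):
    """Eq. (18): rho/(64 pi s) * integral of |M_lambda|^2 over cos(theta), in nb."""
    dc = 2.0 * cos_max / n
    total = 0.0
    for i in range(n + 1):
        c = -cos_max + i * dc
        w = 0.5 if i in (0, n) else 1.0          # trapezoidal weights
        total += w * abs(M_lambda(s, math.acos(c)))**2 * dc
    return rho_pi(s) / (64.0 * math.pi * s) * total * GEV2_TO_NB

# Toy, dimensionless amplitude with the D-wave angular shape of Eq. (11) (placeholder only):
toy_amplitude = lambda s, theta: 80.0 * math.pi * d2_20(theta) * 1.0e-3
print(f"sigma_2(toy, s = 1 GeV^2) ~ {sigma_lambda_nb(toy_amplitude, 1.0):.1f} nb")
```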
Before fitting the data it is helpful to focus on a simplified (qualitative)
scheme of their description.
Fig. 15(a), taken from Ref. AS08a , shows the theoretical curves for the
cross section $\sigma=\sigma_{0}+\sigma^{\mbox{\scriptsize{Born}}}_{2}$ and
its components $\sigma_{0}$ and $\sigma^{\mbox{\scriptsize{Born}}}_{2}$
corresponding to the simplest variant of the above model, in which only the $S$
wave Born amplitudes $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$
$\to$ $K^{+}K^{-}$ are modified by the pion and kaon strong final-state
interactions. All higher partial waves with $\lambda$ = 0 and 2 are taken in
the Born point-like approximation AS05 ; AS08a . This modification results in
the appearance of the $f_{0}(980)$ resonance signal in $\sigma_{0}$, whose value
and shape agree very well with the Belle data, see Fig. 15(a). From comparing
the corresponding curves in Figs. 15(a) and 8(a) it follows that the $S$ wave
contribution to $\sigma(\gamma\gamma\to\pi^{+}\pi^{-};|\cos\theta|\leq 0.6)$ is
small at $\sqrt{s}>0.5$ GeV in any case. It is clear that the $f_{2}(1270)$
resonance contribution is the main element required for the description of the
Belle data on $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ in the $\sqrt{s}$ region from
0.8 up to 1.5 GeV. For describing the data only near the $f_{0}(980)$ resonance,
one can approximate the large incoherent background under the resonance,
caused by $\sigma_{2}$, by a polynomial in $\sqrt{s}$. The result of such a
fit is shown in Figs. 15(c) and 15(d) AS08a .
Figure 15: Theoretical curves in plots (a) and (b) correspond to the simplest
model, which incorporates only the Born contributions $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$, from $\pi$ exchange, and $\gamma\gamma$ $\to$ $K^{+}K^{-}$,
from $K$ exchange, modified by the strong final-state interactions in the $S$
wave. Plot (c) illustrates the description of the Belle data in the
$f_{0}(980)$ region. (d) An enlarged fragment of (c).
According to Fig. 13 and Eq. (12), taking into account the final-state interactions in
the Born $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$ $\to$
$K^{+}K^{-}$ amplitudes leads to a prediction for the $S$ wave amplitude of
the $\gamma\gamma\to\pi^{0}\pi^{0}$ reaction AS05 ; AS07 ; AS08a ; AS08b . In
Fig. 15(b) the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ cross section,
$\tilde{\sigma}_{0}$ = $\tilde{\sigma}_{00}$, evaluated in the manner outlined above,
is compared with the Crystal Ball and Belle data. [Footnote 19: Note that the
$\sqrt{s}$ step of the Crystal Ball and Belle data shown in Fig. 15(b)
is 50 MeV and 20 MeV, respectively.] Since no fitting
parameters are used in constructing $\tilde{\sigma}_{0}$, one should
accept that the agreement with the data is rather good at $\sqrt{s}\leq 0.8$
GeV, i.e., in the $\sigma(600)$ resonance region. It is also clear that at
$\sqrt{s}$ $>$ 0.8 GeV the region governed by the $f_{2}(1270)$ resonance
begins.
So, already at this stage the following emerges. First, if the direct
coupling constants of the $\sigma(600)$ and $f_{0}(980)$ with $\gamma\gamma$ are
included in the fit, their role turns out to be negligible, in agreement with the four-
quark model prediction ADS82a ; ADS82b . Second, according to Eqs. (10) and (12) the
$\sigma(600)$ $\to$ $\gamma\gamma$ and $f_{0}(980)$ $\to$ $\gamma\gamma$
decays are described by the triangle loop rescattering diagrams $Resonance$
$\to$ $(\pi^{+}\pi^{-}$, $K^{+}K^{-})$ $\to$ $\gamma\gamma$ and, consequently,
are four-quark transitions AS05 ; AS07 ; AS08a ; AS08b .
Figure 16: The structure of the $f_{0}(980)$ signal in $\sigma_{0}$. (a) The
contributions from the $\gamma\gamma\to K^{+}K^{-}\to\pi^{+}\pi^{-}$ (dashed
line), $\gamma\gamma\to\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}$ (dotted line)
rescattering amplitudes, and their sum (solid line). (b) The dashed line is
identical to the solid one in (a), the dotted and dot-dashed lines show the
$\sigma^{\mbox{\scriptsize{Born}}}_{0}$ and
$\sigma^{\mbox{\scriptsize{Born}}}_{00}$ cross sections, respectively
($\sigma^{\mbox{\scriptsize{Born}}}_{0}$ $<$
$\sigma^{\mbox{\scriptsize{Born}}}_{00}$ because of the destructive
interference between the $S$ and higher partial waves), and the solid line
corresponds to the resulting $f_{0}(980)$ signal in $\sigma_{0}$.
An interesting and important feature of the $f_{0}(980)$ signal in
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ is its complicated structure, which is
shown in Figs. 16(a) and 16(b). The $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$
$\pi\pi$ rescattering amplitude plays the decisive role, transferring the
$f_{0}(980)$ peak from the $T_{K^{+}K^{-}\to\pi\pi}(s)$ amplitude to the
$\gamma\gamma$ $\to$ $\pi\pi$ one. [Footnote 20: It provides the natural scale of the
$f_{0}(980)$ production cross section in $\gamma\gamma$ collisions AS05 .] The
maximum of the cross section $\sigma(\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$
$f_{0}(980)$ $\to$ $\pi^{+}\pi^{-})$ is controlled by the product of the ratio
of the squared coupling constants $R_{f_{0}}$ =
$g^{2}_{f_{0}K^{+}K^{-}}/g^{2}_{f_{0}\pi^{+}\pi^{-}}$ and the value
$|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(4m^{2}_{K^{+}})|^{2}$. An estimate gives
$\sigma(\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $f_{0}(980)$ $\to$
$\pi^{+}\pi^{-};|\cos\theta|\leq 0.6)\approx 0.6\times
0.62\alpha^{2}R_{f_{0}}/m^{2}_{f_{0}}\approx 8$ nb $\times R_{f_{0}}$, where
$\alpha$ = 1/137 and $m_{f_{0}}$ is the $f_{0}(980)$ mass (see the arithmetic
check after this paragraph). The $\gamma\gamma$
$\to$ $\pi^{+}\pi^{-}$ $\to$ $\pi^{+}\pi^{-}$ rescattering in its turn
transfers the narrow dip located under the $K\bar{K}$ threshold from the
$T_{\pi\pi\to\pi\pi}(s)$ amplitude, see footnote 18, to the $\gamma\gamma$
$\to$ $\pi\pi$ one. The interference of the resonant $\gamma\gamma$ $\to$
$K^{+}K^{-}$ $\to$ $\pi^{+}\pi^{-}$ amplitude with the $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$ $\to$ $\pi^{+}\pi^{-}$ amplitude [Footnote 21: note that the relative
sign between these amplitudes is reliably fixed AS05 ; AS08a .] has an
essential effect on the resulting shape of the $f_{0}(980)$ signal, as
indicated by Fig. 16(a). As for the Born contributions, their influence on the
resulting shape of the $f_{0}(980)$ signal is small, see Fig. 16(b).
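As a quick arithmetic check of the $\approx 8$ nb scale quoted above (a back-of-the-envelope estimate only; it assumes $m_{f_{0}}\approx 0.98$ GeV and uses the conversion $1$ GeV$^{-2}$ $\approx$ 0.3894 mb $=$ $0.3894\times 10^{6}$ nb): $0.6\times 0.62\,\alpha^{2}/m^{2}_{f_{0}}\approx 0.372\times(1/137)^{2}/(0.96\ \mbox{GeV}^{2})\approx 2.1\times 10^{-5}$ GeV$^{-2}$ $\approx$ 8 nb, so that indeed $\sigma(\gamma\gamma\to K^{+}K^{-}\to f_{0}(980)\to\pi^{+}\pi^{-};|\cos\theta|\leq 0.6)\approx 8$ nb $\times R_{f_{0}}$.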
One more notable fact lies in the drastic change of the $f_{0}(980)$
production amplitude $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $f_{0}(980)$ in
the $f_{0}(980)$ peak region AS05 ; AS08a , just as for the $\gamma\gamma$ $\to$
$K^{+}K^{-}$ $\to$ $a_{0}(980)$ amplitude in the $a_{0}(980)$ region AS88 ,
see Section 5. In the cross section its contribution is proportional to
$|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)|^{2}$, see Eqs. (10) and (12). The
function $|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)|^{2}$ decreases drastically
immediately below the $K^{+}K^{-}$ threshold, i.e., in the $f_{0}(980)$
resonance region, see Fig. 17. [Footnote 22: The function
$|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)|^{2}$ decreases relative to its maximum
at $\sqrt{s}$ = $2m_{K^{+}}$ $\approx$ 0.9873 GeV by 1.66, 2.23, 2.75, 3.27,
and 6.33 times at $\sqrt{s}$ = 0.98, 0.97, 0.96, 0.95, and 0.9 GeV,
respectively.] Such a behavior of the $f_{0}(980)$ two-photon production
amplitude strongly suppresses the left slope of the $f_{0}(980)$ peak defined
by the resonance amplitude $T_{K^{+}K^{-}\to\pi\pi}(s)$. That is why one
cannot approximate the $f_{0}(980)$ $\to$ $\gamma\gamma$ decay width by a
constant even in the region $m_{f_{0}}$ $-$ $\Gamma_{f_{0}}/2$ $\leq$ $\sqrt{s}$
$\leq$ $m_{f_{0}}$ + $\Gamma_{f_{0}}/2$ AS05 .
Figure 17: The solid curve shows $|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)|^{2}$
as a function of $\sqrt{s}$ (see Appendix 8.3); the dashed and dotted curves
above the $K^{+}K^{-}$ threshold correspond to the contributions of the real
and imaginary parts of $\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)$, respectively.
So, the above considerations teach us that the simplest approximations of the
$f_{0}(980)$ signal shape observed in the $\gamma\gamma\to\pi^{+}\pi^{-}$ and
$\gamma\gamma\to\pi^{0}\pi^{0}$ cross sections can give only rather limited
information on the $f_{0}(980)$ two-photon production mechanism and on the
$f_{0}(980)$ parameters.
Fortunately, the current knowledge of the dynamics of the
$T_{\pi^{+}\pi^{-}\to\pi\pi}(s)$ [$T^{0}_{0}(s)$, $T^{2}_{0}(s)$] and
$T_{K^{+}K^{-}\to\pi\pi}(s)$ strong interaction amplitudes already allows one to
make progress in understanding the signals of the light scalar mesons carried by
the data on the $\gamma\gamma\to\pi\pi$ reactions. In fitting the data we use the
model for the $T^{0}_{0}(s)$ and $T_{K^{+}K^{-}\to\pi\pi}(s)$ amplitudes which
was suggested and used for the joint analysis of the data on the
$\pi^{0}\pi^{0}$ mass spectrum in the $\phi$ $\to$ $\pi^{0}\pi^{0}\gamma$
decay, the $\pi\pi$ scattering at $2m_{\pi}<\sqrt{s}<1.6$ GeV, and the
$\pi\pi$ $\to$ $K\bar{K}$ reaction AK06 ; AK07a . The $T^{0}_{0}(s)$ model
takes into account the contributions of the $\sigma(600)$ and $f_{0}(980)$
resonances, their mixing, and the chiral background with the large negative
phase which shields the $\sigma(600)$ resonance (see additionally AS94a ; AS07
; Ac08a ; Ac08b ). Eqs. (10) and (12) transfer the effect of the chiral
shielding of the $\sigma(600)$ resonance from the $\pi\pi$ scattering into the
$\gamma\gamma$ $\to$ $\pi\pi$ amplitudes. This effect is demonstrated in Fig.
18(a) with the help of the $\pi\pi$ scattering phases
$\delta_{\mbox{\scriptsize{res}}}(s)$, $\delta^{\pi\pi}_{B}(s)$, and
$\delta^{0}_{0}(s)$ [see Eqs. (20)–(22) and (24)], and in Figs. 18(b) and
18(c) with the help of the corresponding cross sections of the $\pi\pi$ $\to$
$\pi\pi$ and $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ reactions. As seen from
Fig. 18(c), if it were not for such shielding, the $\gamma\gamma$ $\to$
$\pi^{0}\pi^{0}$ cross section near the threshold would be not $(5-10)$ nb
but approximately 100 nb, due to the $\pi^{+}\pi^{-}$ loop mechanism of the
$\sigma(600)$ $\to$ $\gamma\gamma$ decay AS07 . The decay width corresponding
to this mechanism, $\Gamma_{\sigma\to\pi^{+}\pi^{-}\to\gamma\gamma}(s)$, is
shown in Fig. 18(d); see also Eq. (64) in Appendix 8.1.
Following Refs. AK06 ; AK07a , we write
$T^{0}_{0}(s)=T^{\pi\pi}_{B}(s)+e^{2i\delta^{\pi\pi}_{B}(s)}T^{\pi\pi}_{\mbox{\scriptsize{res}}}(s)\,,$
(20)
$T^{\pi\pi}_{B}(s)=\\{\exp[2i\delta^{\pi\pi}_{B}(s)]-1\\}/[2i\rho_{\pi^{+}}(s)]\,,$
(21)
$T^{\pi\pi}_{\mbox{\scriptsize{res}}}(s)=\\{\eta^{0}_{0}(s)\exp[2i\delta_{\mbox{\scriptsize{res}}}(s)]-1\\}/[2i\rho_{\pi^{+}}(s)]\,,$
(22)
$T_{K^{+}K^{-}\to\pi^{+}\pi^{-}}(s)=e^{i[\delta^{\pi\pi}_{B}(s)+\delta^{K\bar{K}}_{B}(s)]}T^{K\bar{K}\to\pi\pi}_{\mbox{\scriptsize{res}}}(s)\,,$
(23)
where $\delta^{\pi\pi}_{B}(s)$ and $\delta^{K\bar{K}}_{B}(s)$ are the phases of
the elastic $S$ wave background in the $\pi\pi$ and $K\bar{K}$ channels with
$I$ = 0; the $\pi\pi$ scattering phase
$\delta^{0}_{0}(s)=\delta^{\pi\pi}_{B}(s)+\delta_{\mbox{\scriptsize{res}}}(s).$
(24)
The amplitudes of the $\sigma(600)$ – $f_{0}(980)$ resonance complex in Eqs.
(10), (12), (20), (22), and (23) are AK06 ; AK07a
$T^{\pi\pi}_{\mbox{\scriptsize{res}}}(s)=3\,\frac{g_{\sigma\pi^{+}\pi^{-}}\Delta_{f_{0}}(s)+g_{f_{0}\pi^{+}\pi^{-}}\Delta_{\sigma}(s)}{32\pi[D_{\sigma}(s)D_{f_{0}}(s)-\Pi^{2}_{f_{0}\sigma}(s)]}\,,$
(25) $T^{K\bar{K}\to\pi\pi}_{\mbox{\scriptsize{res}}}(s)=\frac{g_{\sigma
K^{+}K^{-}}\Delta_{f_{0}}(s)+g_{f_{0}K^{+}K^{-}}\Delta_{\sigma}(s)}{16\pi[D_{\sigma}(s)D_{f_{0}}(s)-\Pi^{2}_{f_{0}\sigma}(s)]}\,,$
(26)
$M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)=s\,e^{i\delta^{\pi\pi}_{B}(s)}\,\frac{g^{(0)}_{\sigma\gamma\gamma}\Delta_{f_{0}}(s)+g^{(0)}_{f_{0}\gamma\gamma}\Delta_{\sigma}(s)}{D_{\sigma}(s)D_{f_{0}}(s)-\Pi^{2}_{f_{0}\sigma}(s)}\,,$
(27)
where $\Delta_{f_{0}}(s)$ = $D_{f_{0}}(s)g_{\sigma\pi^{+}\pi^{-}}$ +
$\Pi_{f_{0}\sigma}(s)g_{f_{0}\pi^{+}\pi^{-}}$ and $\Delta_{\sigma}(s)$ =
$D_{\sigma}(s)g_{f_{0}\pi^{+}\pi^{-}}$ +
$\Pi_{f_{0}\sigma}(s)g_{\sigma\pi^{+}\pi^{-}}$, and $g^{(0)}_{\sigma\gamma\gamma}$
and $g^{(0)}_{f_{0}\gamma\gamma}$ are the direct coupling constants of the
$\sigma$ and $f_{0}$ resonances with the photons. We use the expressions for
the $\delta^{\pi\pi}_{B}(s)$ and $\delta^{K\bar{K}}_{B}(s)$ phases, the
propagators of the $\sigma(600)$ and $f_{0}(980)$ resonances $1/D_{\sigma}(s)$
and $1/D_{f_{0}}(s)$, and the polarization operator matrix element
$\Pi_{f_{0}\sigma}(s)$ from AK06 (see also the Appendix). The mass $m_{f_{0}}$ was left
free; the other parameters in the strong amplitudes ($m_{\sigma}$,
$g_{\sigma\pi^{+}\pi^{-}}$, $g_{f_{0}K^{+}K^{-}}$, etc.) correspond to variant
1 of Table 1 of that paper. [Footnote 23: Correcting the misprint in the sign of the
constant $C\equiv C_{f_{0}\sigma}$, we use $C_{f_{0}\sigma}=-0.047$ GeV. Notice
that our principal conclusions [the insignificance of the direct transition
$\gamma\gamma\to Light\ Scalar$ and the dominant role of the four-quark
transition $\gamma\gamma\to(\pi\pi,\,K^{+}K^{-})\to Light\ Scalar$] are
independent of the specific variant from AK06 ; AK07a .] We also put
$\eta^{2}_{0}(s)$ = 1 for all $\sqrt{s}$ under consideration and take
$\delta^{2}_{0}(s)$ from AchS03 .
Figure 18: The figure demonstrates the chiral shielding effect in the
reactions $\pi\pi$ $\to$ $\pi\pi$ and $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$.
All the plots have been taken from Ref. AS07 , dedicated to the lightest scalar in
the $SU(2)_{L}\times SU(2)_{R}$ linear $\sigma$ model.
The amplitudes of the $f_{2}(1270)$ resonance production in Eqs. (11) and (13)
are
$\displaystyle M_{\gamma\gamma\to
f_{2}(1270)\to\pi^{+}\pi^{-}}(s)=M_{\gamma\gamma\to
f_{2}(1270)\to\pi^{0}\pi^{0}}(s)=$
$\displaystyle=\frac{\sqrt{s}\,G_{2}(s)\sqrt{(2/3)\Gamma_{f_{2}\to\pi\pi}(s)/\rho_{\pi^{+}}(s)}}{m^{2}_{f_{2}}-s-i\sqrt{s}\Gamma^{\mbox{\scriptsize{tot}}}_{f_{2}}(s)}\,.\mbox{\qquad\
\ }$ (28)
The main contribution to its total width
$\Gamma^{\mbox{\scriptsize{tot}}}_{f_{2}}(s)=\Gamma_{f_{2}\to\pi\pi}(s)+\Gamma_{f_{2}\to
K\bar{K}}(s)+\Gamma_{f_{2}\to 4\pi}(s)$ is given by the $\pi\pi$ partial decay
width
$\displaystyle\Gamma_{f_{2}\to\pi\pi}(s)=\Gamma^{\mbox{\scriptsize{tot}}}_{f_{2}}(m^{2}_{f_{2}})B(f_{2}\to\pi\pi)\times$
$\displaystyle\times\frac{m^{2}_{f_{2}}}{s}\frac{q^{5}_{\pi^{+}}(s)}{q^{5}_{\pi^{+}}(m^{2}_{f_{2}})}\frac{D_{2}(q_{\pi^{+}}(m^{2}_{f_{2}})r_{f_{2}})}{D_{2}(q_{\pi^{+}}(s)r_{f_{2}})}\,,$
(29)
where $D_{2}(x)$ = $9+3x^{2}+x^{4}$, $q_{\pi^{+}}(s)$ =
$\sqrt{s}\rho_{\pi^{+}}(s)/2$, $r_{f_{2}}$ is the interaction range, and
$B(f_{2}$ $\to$ $\pi\pi)$ = 0.848 PDG08 . The small contributions of
$\Gamma_{f_{2}\to K\bar{K}}(s)$ and $\Gamma_{f_{2}\to 4\pi}(s)$ are taken to be
the same as in AS08a . The parameter $r_{f_{2}}$ Ma90 ; Bo90 ; Be92 ; Mo07a ; Ue08
; AS08a ; AS08b controls the relative shape of the $f_{2}(1270)$ resonance
wings and is very important, especially for fitting data with small errors.
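As an illustration of the parametrization (28)–(29) (a sketch under stated assumptions, not the authors' code), the energy-dependent width and the Breit-Wigner denominator can be evaluated directly; the parameter values used below are those of the combined fit quoted later in this section, and the total width is approximated by $\Gamma_{f_{2}\to\pi\pi}(s)/B(f_{2}\to\pi\pi)$.

```python
# Illustrative sketch of Eqs. (28)-(29): the energy-dependent f2(1270) -> pi pi width
# with the barrier factor D_2(x) = 9 + 3x^2 + x^4, and the modulus squared of the
# Breit-Wigner denominator. Parameters: m_f2 = 1.272 GeV, Gamma_tot(m_f2^2) = 0.196 GeV,
# r_f2 = 8.2 GeV^-1 (from the combined fit quoted below), B(f2 -> pi pi) = 0.848.
import math

m_pi, m_f2, gam_tot, r_f2, br_pipi = 0.13957, 1.272, 0.196, 8.2, 0.848
m2 = m_f2**2

def q_pi(s):                       # pion momentum: q = sqrt(s) * rho_pi(s) / 2
    return 0.5 * math.sqrt(s) * math.sqrt(1.0 - 4.0 * m_pi**2 / s)

def D2(x):                         # D-wave barrier factor
    return 9.0 + 3.0 * x**2 + x**4

def gamma_f2_pipi(s):              # Eq. (29)
    return (gam_tot * br_pipi * (m2 / s) * (q_pi(s) / q_pi(m2))**5
            * D2(q_pi(m2) * r_f2) / D2(q_pi(s) * r_f2))

def bw_denominator_mod2(s):        # |m_f2^2 - s - i sqrt(s) Gamma_tot(s)|^2, approximating
    gtot = gamma_f2_pipi(s) / br_pipi   # Gamma_tot(s) by Gamma_pipi(s) / B(f2 -> pi pi)
    return (m2 - s)**2 + s * gtot**2

for e in (1.0, 1.1, 1.2, 1.272, 1.4, 1.5):
    s = e * e
    print(f"sqrt(s) = {e:5.3f} GeV: Gamma_pipi = {gamma_f2_pipi(s):.3f} GeV, "
          f"|BW denom|^2 = {bw_denominator_mod2(s):.3f} GeV^4")
```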
The amplitude $G_{2}(s)$ in Eq. (28) describes the coupling of the
$f_{2}(1270)$ resonance with the photons,
$G_{2}(s)=\sqrt{\Gamma^{(0)}_{f_{2}\to\gamma\gamma}(s)}+i\frac{M_{22}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s)}{16\pi}\sqrt{\frac{2}{3}\rho_{\pi^{+}}(s)\Gamma_{f_{2}\to\pi\pi}(s)}\,.$
(30)
The explicit form of the $M_{22}^{\mbox{\scriptsize{Born}}}(s)$ amplitude is
in Appendix 8.1, Eq. (53). The $f_{2}(1270)$ $\to$ $\gamma\gamma$ decay width
is
$\Gamma_{f_{2}\to\gamma\gamma}(s)=|G_{2}(s)|^{2}$ (31)
and
$\Gamma^{(0)}_{f_{2}\to\gamma\gamma}(s)=\frac{m_{f_{2}}}{\sqrt{s}}\Gamma^{(0)}_{f_{2}\to\gamma\gamma}(m^{2}_{f_{2}})\frac{s^{2}}{m^{4}_{f_{2}}}$
(32)
[here the $s^{2}$ factor, as well as the $s$ factor in Eq. (27), is required by
gauge invariance]. The second term in $G_{2}(s)$ corresponds
to the rescattering $f_{2}(1270)$ $\to$ $\pi^{+}\pi^{-}$ $\to$ $\gamma\gamma$
with real pions in the intermediate state [Footnote 24: that is, it corresponds to the
imaginary part of the $f_{2}(1270)\to\pi^{+}\pi^{-}\to\gamma\gamma$ amplitude.]
and ensures the fulfillment of the Watson theorem for the $\gamma\gamma$
$\to$ $\pi\pi$ amplitude with $\lambda$ = $J$ = 2 and $I$ = 0 below the
first inelastic threshold. This term gives a small contribution, less than 6%,
to $\Gamma_{f_{2}\to\gamma\gamma}(m^{2}_{f_{2}})$. [Footnote 25: As for the real part of
the $f_{2}(1270)$ $\to$ $\pi^{+}\pi^{-}$ $\to$ $\gamma\gamma$ amplitude, its
modulus is far smaller than that of the direct transition amplitude, as
various estimates show.]
The simplest approximation (32) to the main contribution,
$\Gamma^{(0)}_{f_{2}\to\gamma\gamma}(s)$, to the $f_{2}(1270)$ $\to$
$\gamma\gamma$ decay width is completely adequate to the current state of both
theory and experiment. The parameter
$\Gamma^{(0)}_{f_{2}\to\gamma\gamma}(m^{2}_{f_{2}})$ =
$\frac{1}{5}[g^{2}_{f_{2}\gamma\gamma}/(16\pi)]m^{3}_{f_{2}}$ in Eq. (32)
effectively accumulates our lack of knowledge of the values of the amplitudes
responsible for the $f_{2}(1270)$ $\to$ $\gamma\gamma$ decay. For the
reasons given above, see Section 3, it is generally agreed that the direct quark-antiquark
transition $q\bar{q}$ $\to$ $\gamma\gamma$ dominates in the $f_{2}(1270)$
$\to$ $\gamma\gamma$ decay and that its amplitude is characterized by the
$g_{f_{2}\gamma\gamma}$ coupling constant. As shown in AS88 ; AS05 ; AS07 ;
AS08a ; AS08b ; AS09 ; AS10a ; AS10b and as we demonstrate here step by step, the
situation is quite different in the case of the light scalar mesons.
Now everything is ready for the discussion of the fits to the Belle data on
the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$ $\to$
$\pi^{0}\pi^{0}$ cross sections, which were carried out in AS08a ; AS08b .
Consider first fitting the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ cross
section only, see Fig. 19(b), which has smaller background contributions
under the $f_{0}(980)$ and $f_{2}(1270)$ resonances than the $\gamma\gamma$
$\to$ $\pi^{+}\pi^{-}$ cross section, compare Figs. 19(a) and 19(b). The solid
curve in Fig. 19(b), describing these data rather well, corresponds to the
following parameters of the model: $m_{f_{2}}$ = 1.269 GeV,
$\Gamma^{\mbox{\scriptsize{tot}}}_{f_{2}}(m^{2}_{f_{2}})$ = 0.182 GeV,
$r_{f_{2}}$ = 8.2 GeV$^{-1}$, $\Gamma_{f_{2}\to\gamma\gamma}(m_{f_{2}})$ = 3.62 keV
[$\Gamma^{(0)}_{f_{2}\to\gamma\gamma}(m_{f_{2}})$ = 3.43 keV], $m_{f_{0}}$ =
0.969 GeV, $g^{(0)}_{\sigma\gamma\gamma}$ = 0.536 GeV$^{-1}$ and
$g^{(0)}_{f_{0}\gamma\gamma}$ = 0.652 GeV$^{-1}$. [Footnote 26: The formally calculated
errors of the significant parameters of the model are negligible owing to the
high statistical accuracy of the Belle data; the model dependence of the
adjustable parameter values is the main source of their ambiguity.] The fit
indicates the smallness of the direct coupling constants
$g^{(0)}_{\sigma\gamma\gamma}$ and $g^{(0)}_{f_{0}\gamma\gamma}$:
$\Gamma^{(0)}_{\sigma\to\gamma\gamma}(m^{2}_{\sigma})$ =
$|m^{2}_{\sigma}g^{(0)}_{\sigma\gamma\gamma}|^{2}/(16\pi m_{\sigma})$ = 0.012
keV and $\Gamma^{(0)}_{f_{0}\to\gamma\gamma}(m^{2}_{f_{0}})$ =
$|m^{2}_{f_{0}}g^{(0)}_{f_{0}\gamma\gamma}|^{2}/(16\pi m_{f_{0}})$ = 0.008
keV, in accordance with the prediction ADS82a ; ADS82b . [Footnote 27: Note that
the small values of these coupling constants can be caught in the fit owing to the
interference of the
$M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)$ amplitude, see
Eqs. (10), (12), and (27), with the contributions of the dominant rescattering
mechanisms. In such a case it is not the specific values of
$g^{(0)}_{\sigma\gamma\gamma}$ and $g^{(0)}_{f_{0}\gamma\gamma}$ quoted above that are
important, but the fact of their relative smallness, corresponding to
$\Gamma^{(0)}_{\sigma\to\gamma\gamma}(m^{2}_{\sigma})$ and
$\Gamma^{(0)}_{f_{0}\to\gamma\gamma}(m^{2}_{f_{0}})$ both $\ll$ 0.1 keV. The
dominant rescattering mechanisms give the $\sigma(600)$ $\to$ $\pi^{+}\pi^{-}$
$\to$ $\gamma\gamma$ width $\approx$ (1–1.75) keV averaged over the region
$0.4<\sqrt{s}<0.5$ GeV AS07 , see Fig. 18(d), and the $f_{0}(980)$ $\to$
$K^{+}K^{-}$ $\to$ $\gamma\gamma$ width $\approx$ $(0.15-0.2)$ keV averaged
over the resonance mass distribution AS05 .]
Figure 19: Cross sections for the $\gamma\gamma\to\pi^{+}\pi^{-}$ and
$\gamma\gamma\to\pi^{0}\pi^{0}$ reactions. Only statistical errors are shown
for the Belle data Mo07b ; Ue08 . The curves in plot (a) are described in the
text and on the figure. The curves in plot (b) are the result of the fit to
the data on the $\gamma\gamma\to\pi^{0}\pi^{0}$ reaction.
But such a fit of the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ cross section
comes into conflict with the data on $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$,
see the solid curve for $\sigma$ = $\sigma_{0}$ + $\sigma_{2}$ in Fig. 19(a).
This is connected with the large Born contribution in $\sigma_{2}$ and its
strong constructive (destructive) interference with the $f_{2}(1270)$
resonance contribution at $\sqrt{s}<m_{f_{2}}$ ($\sqrt{s}>m_{f_{2}}$), both of which
are absent in $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$. We faced this challenge
in AS08a and suggested there the following solution. Matters can be improved
by introducing a common cutoff form factor $G_{\pi^{+}}(t,u)$ in
the point-like Born amplitudes $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$,
$M_{\lambda}^{\mbox{\scriptsize{Born}}}(s,\theta)\to
G_{\pi^{+}}(t,u)M_{\lambda}^{\mbox{\scriptsize{Born}}}(s,\theta)$, where $t$
and $u$ are the Mandelstam variables for the $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$ reaction. [Footnote 28: Such a natural modification of the point-like
Born contribution was discussed in connection with the data on the
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ Po86 ; Bo90 ; Be92 ; MP87 ; MP88 and
$\gamma\gamma$ $\to$ $K^{+}K^{-}(K^{0}\bar{K}^{0})$ reactions AS92 ; AS94b .]
However, only the problem of the consistent description of the Belle data on
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$ $\to$
$\pi^{0}\pi^{0}$ unambiguously indicates the need to modify the Born sector of the
model AS08a ; AS08b . To show this we use, as an example, the
expression for $G_{\pi^{+}}(t,u)$ suggested in Ref. Po86 ,
$G_{\pi^{+}}(t,u)=\frac{1}{s}\left[\frac{m^{2}_{\pi^{+}}-t}{1-(u-m^{2}_{\pi^{+}})/x^{2}_{1}}+\frac{m^{2}_{\pi^{+}}-u}{1-(t-m^{2}_{\pi^{+}})/x^{2}_{1}}\right],$
(33)
where $x_{1}$ is a free parameter.
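For illustration, the ansatz (33) can be coded in a few lines (a sketch, not the authors' code); the Mandelstam variables are expressed through $s$ and the c.m. angle $\theta$ for massless photons, and the value $x_{1}\approx 0.9$ GeV quoted in the combined fit below is used as a default.

```python
# Sketch of the cutoff form factor of Eq. (33) for gamma gamma -> pi+ pi-.
# t and u follow from s and the c.m. scattering angle theta (massless photons):
#   t = m_pi^2 - (s/2)(1 - beta cos(theta)),  u = m_pi^2 - (s/2)(1 + beta cos(theta)),
# with beta = rho_pi(s). x1 ~ 0.9 GeV is the fitted parameter quoted later in the text.
import math

m_pi = 0.13957  # GeV

def mandelstam_tu(s, cos_theta):
    beta = math.sqrt(1.0 - 4.0 * m_pi**2 / s)
    t = m_pi**2 - 0.5 * s * (1.0 - beta * cos_theta)
    u = m_pi**2 - 0.5 * s * (1.0 + beta * cos_theta)
    return t, u

def G_pi(s, cos_theta, x1=0.9):
    """Form factor of Eq. (33); in the limit x1 -> infinity it tends to the point-like case."""
    t, u = mandelstam_tu(s, cos_theta)
    return ((m_pi**2 - t) / (1.0 - (u - m_pi**2) / x1**2)
            + (m_pi**2 - u) / (1.0 - (t - m_pi**2) / x1**2)) / s

# The Born amplitudes are simply rescaled, M_Born -> G_pi * M_Born:
for e in (0.5, 1.0, 1.5):
    print(f"sqrt(s) = {e} GeV, cos(theta) = 0:  G_pi = {G_pi(e*e, 0.0):.3f}")
```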
Figure 20: Joint description of the data on the cross sections for the
reactions $\gamma\gamma\to\pi^{+}\pi^{-}$ and $\gamma\gamma\to\pi^{0}\pi^{0}$.
The shaded bands correspond to the Belle data Mo07b ; Ue08 with the
statistical and systematic errors (errors are added quadratically). The curves
are described in the text and on the figures; $\sigma^{\mbox{\scriptsize{
Born}}}_{2}(x_{1})$ in plot (a) is the Born cross section for the
$\gamma\gamma\to\pi^{+}\pi^{-}$ reaction with the inclusion of the form
factor.
This ansatz is quite acceptable in the physical region of the
$\gamma\gamma\to\pi^{+}\pi^{-}$ reaction. Note that the form factor is
introduced by changing the amplitudes of the elementary one-pion exchange,
$M_{\lambda}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s,\theta)$, to
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{\lambda}(s,\theta;x_{1})=G_{\pi^{+}}(t,u)M_{\lambda}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s,\theta)$,
which does not break the gauge invariance of the tree approximation Po86 .
Replacing $m_{\pi^{+}}$ by $m_{K^{+}}$ and $x_{1}$ by $x_{2}$ in (33), we
also obtain the form factor $G_{K^{+}}(t,u)$ for the Born amplitudes
$\gamma\gamma$ $\to$ $K^{+}K^{-}$.
The solid curves for $\sigma$ = $\sigma_{0}$ + $\sigma_{2}$ and
$\tilde{\sigma}$ = $\tilde{\sigma}_{0}+\tilde{\sigma}_{2}$ in Figs. 20(a) and
20(b) show the consistent fit of the data on the $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$ cross section in the region $0.85<\sqrt{s}<1.5$ GeV and on
the $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$ cross section in the region
$2m_{\pi}<\sqrt{s}<1.5$ GeV, with the form factors modifying the point-like
Born contributions. The obtained description is more than satisfactory to
within the Belle systematic errors, which are shown in Figs. 20(a) and 20(b) by
the shaded bands. We believe that such a fit is completely
adequate, since the statistical errors of both Belle measurements are so small that
obtaining a formally good value of $\chi^{2}$ in a combined fit
of the $\pi^{+}\pi^{-}$ and $\pi^{0}\pi^{0}$ data over wide regions of
$\sqrt{s}$ is practically impossible without taking the systematic errors into
consideration. [Footnote 29: At the same time we emphasize that the
considerable systematic errors, the sources of which are described in detail
in Refs. Mo07a ; Mo07b ; Ue08 , do not depreciate the role of the high
statistics of the data, which allows one to resolve the small local effects
connected with the $f_{0}(980)$ resonance manifestation.] The curves in Fig. 20
correspond to the following values of the parameters: $m_{f_{2}}$ = 1.272 GeV,
$\Gamma^{\mbox{\scriptsize{tot}}}_{f_{2}}(m^{2}_{f_{2}})$ = 0.196 GeV,
$r_{f_{2}}$ = 8.2 GeV$^{-1}$, $\Gamma_{f_{2}\to\gamma\gamma}(m_{f_{2}})$ = 3.83 keV
[$\Gamma^{(0)}_{f_{2}\to\gamma\gamma}(m_{f_{2}})$ = 3.76 keV], $m_{f_{0}}$ =
0.969 GeV, $g^{(0)}_{\sigma\gamma\gamma}$ = $-0.049$ GeV$^{-1}$
[$\Gamma^{(0)}_{\sigma\to\gamma\gamma}(m^{2}_{\sigma})$ negligible],
$g^{(0)}_{f_{0}\gamma\gamma}$ = 0.718 GeV$^{-1}$
[$\Gamma^{(0)}_{f_{0}\to\gamma\gamma}$ $(m^{2}_{f_{0}})$ $\approx$ 0.01 keV],
$x_{1}$ = 0.9 GeV and $x_{2}$ = 1.75 GeV. It is clear from a comparison of Figs.
19(b) and 20(b) that the form factor effect on the
$\gamma\gamma\to\pi^{0}\pi^{0}$ cross section is weak, in contrast to the
$\gamma\gamma\to\pi^{+}\pi^{-}$ one [compare Figs. 19(a) and 20(a)], in which
it is mainly the $\sigma_{2}$ contribution that is modified. We emphasize that all our
conclusions about the mechanisms of the two-photon decays (productions) of the
$\sigma(600)$ and $f_{0}(980)$ resonances remain in force. [Footnote 30: Notice that the
point-like $\omega$ and $a_{2}(1320)$ exchanges in the
$\gamma\gamma\to\pi^{0}\pi^{0}$ and $\gamma\gamma\to\pi^{+}\pi^{-}$ amplitudes,
respectively, whose contributions to the cross section (mainly in the $S$ wave)
grow rapidly with increasing energy and become comparable with the
$f_{2}(1270)$ resonance contribution even in its energy region, are not
observed experimentally. This was clarified in our paper AS92 with the
$\gamma\gamma\to\pi^{0}\eta$ example (the details are discussed below in
Section 5). The proper Reggeization of the point-like exchanges with high
spins greatly reduces the dangerous contributions. In addition, partial
cancellations between the $\omega$ and $h_{1}(1170)$ exchanges in
$\gamma\gamma\to\pi^{0}\pi^{0}$ and the $a_{2}(1320)$ and $a_{1}(1260)$
exchanges in $\gamma\gamma\to\pi^{+}\pi^{-}$ take place. As for the $\rho$
exchange in $\gamma\gamma\to\pi\pi$, its contribution is small, since
$g_{\rho\pi\gamma}^{2}\approx(1/9)g_{\omega\pi\gamma}^{2}$, and is additionally
cancelled by the $b_{1}(1235)$ one.]
Thus the physics of the two-photon decays of the light scalar mesons acquires
a rather clear outline. The mechanism of their decays into $\gamma\gamma$
does not look like the mechanism of the classic tensor $q\bar{q}$ meson
decays, which is the direct annihilation $q\bar{q}$ $\to$ $\gamma\gamma$. The
light scalar meson decays into $\gamma\gamma$ are suppressed in comparison
with the tensor meson ones. They are caused by the rescattering
mechanisms, i.e., by the four-quark transitions $\sigma(600)$ $\to$
$\pi^{+}\pi^{-}$ $\to$ $\gamma\gamma$, $f_{0}(980)$ $\to$ $K^{+}K^{-}$ $\to$
$\gamma\gamma$, $a_{0}(980)$ $\to$ $K^{+}K^{-}$ $\to$ $\gamma\gamma$, etc.
Such a picture is suggested by experiment and supports the $q^{2}\bar{q}^{2}$
nature of the light scalars. It is significant that in the scalar meson case
the desire to characterize their coupling to photons exhaustively by the
constant values $\Gamma_{0^{++}\to\gamma\gamma}(m^{2}_{0^{++}})$, by
analogy with the tensor mesons, cannot be realized, for quite a number of reasons.
First of all, it is clear that when we deal with resonances accompanied by a
fundamental background, and when the two-photon decay widths change sharply in the
resonance region because of nearby inelastic thresholds, then there is no point in
discussing the two-photon width at the resonance peak.
In this connection it is interesting to consider the cross section
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ caused only by the resonance
contributions, i.e.,
$\displaystyle\sigma_{\mbox{\scriptsize{res}}}(\gamma\gamma\to\pi^{+}\pi^{-};s)=$
$\displaystyle=[\rho_{\pi^{+}}(s)/(32\pi
s)]\left|\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s;x_{1})\,e^{2i\delta^{\pi\pi}_{B}(s)}T^{\pi\pi}_{\mbox{\scriptsize{res}
}}(s)\right.$
$\displaystyle+\left.\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})\,T_{K^{+}K^{-}\to\pi^{+}\pi^{-}}(s)+M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)\right|^{2}$
(34)
[see Eqs. (10) and (25)–(27)], where the functions
$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s;x_{1})$ and
$\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})$ are the analogs of the
$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)$ and
$\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)$ functions, constructed taking the
form factors into account, see Appendixes 8.1 and 8.3.
Figure 21: The integrand in Eq. (35) corresponding to the joint fit to the
$\gamma\gamma\to\pi^{+}\pi^{-}$ and $\gamma\gamma\to\pi^{0}\pi^{0}$ data (Fig.
20) is shown by the solid curve. The dotted and dashed curves show the
contributions from the resonant elastic $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$
$\to$ $\pi^{+}\pi^{-}$ and inelastic $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$
$\pi^{+}\pi^{-}$ rescatterings, respectively.
Fig. 21 shows the energy dependence of the
$\sigma_{\mbox{\scriptsize{res}}}(\gamma\gamma$ $\to$ $\pi^{+}\pi^{-};s)$
cross section multiplied by the factor $3s/(8\pi^{2})$. Near 1 GeV there is an
impressive peak from the $f_{0}(980)$ resonance, mainly due to the inelastic
$\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $\pi^{+}\pi^{-}$ rescattering.
Following Refs. AS88 ; AS05 ; AS08b , one determines
the $f_{0}(980)$ $\to$ $\gamma\gamma$ decay width averaged over the resonance
mass distribution in the $\pi\pi$ channel as
$\langle\Gamma_{f_{0}\to\gamma\gamma}\rangle_{\pi\pi}=\int\limits_{0.8\mbox{\,\scriptsize{GeV}}}^{1.1\mbox{\,\scriptsize{GeV}}}\frac{3s}{8\pi^{2}}\sigma_{\mbox{\scriptsize{res}}}(\gamma\gamma\to\pi^{+}\pi^{-};s)d\sqrt{s}.$
(35)
This value is an adequate functional characteristic of the coupling of the
$f_{0}(980)$ with $\gamma\gamma$. For the presented combined fit,
$\langle\Gamma_{f_{0}\to\gamma\gamma}\rangle_{\pi\pi}$ $\approx$ 0.19 keV
AS08b . Taking into account that the wide $\sigma(600)$ resonance dominates in
the region $2m_{\pi}<\sqrt{s}<0.8$ GeV, one obtains by analogy with (35)
$\langle\Gamma_{\sigma\to\gamma\gamma}\rangle_{\pi\pi}$ $\approx$ 0.45 keV
AS08b . Note that the cusp near the $\pi\pi$ threshold in the
$[3s/(8\pi^{2})]\sigma_{\mbox{\scriptsize{res}}}(\gamma\gamma$ $\to$
$\pi^{+}\pi^{-};s)$ quantity, shown in Fig. 21, is a manifestation of the
finite-width correction in the propagator of the scalar resonance. A
transparent explanation of this phenomenon is given in Appendix 8.1.
In the total $S$ wave amplitude $\gamma\gamma$ $\to$ $\pi\pi$ such a threshold
enhancement is absent, owing to the shielding of the resonance contribution in the
amplitude $T^{0}_{0}(s)$ by the chiral background, see, for example, Fig.
20(b).
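A minimal numerical sketch (not the authors' code) of how Eq. (35) can be evaluated once $\sigma_{\mbox{\scriptsize{res}}}(s)$ is available from Eq. (34); the cross section used below is a toy placeholder introduced only to make the snippet runnable, and the conversion $1$ GeV$^{-2}$ $\approx$ 0.3894 mb is assumed.

```python
# Sketch: evaluating <Gamma_{f0 -> gamma gamma}>_{pi pi} of Eq. (35) by simple quadrature.
# sigma_res(s) must come from the model amplitude of Eq. (34); here it is replaced by a
# placeholder function purely so that the snippet runs.
import math

NB_TO_GEV2 = 1.0e-6 / 0.3894       # 1 nb in GeV^-2 (since 1 GeV^-2 ~ 0.3894 mb)

def sigma_res_placeholder(s):      # placeholder shape in nb; NOT the model of Eq. (34)
    m, width = 0.98, 0.05
    return 20.0 * (m * width)**2 / ((m**2 - s)**2 + (m * width)**2)

def averaged_width_keV(sigma_res_nb, e_min=0.8, e_max=1.1, n=300):
    """Trapezoidal estimate of Eq. (35); returns the averaged width in keV."""
    de = (e_max - e_min) / n
    total = 0.0
    for i in range(n + 1):
        e = e_min + i * de
        s = e * e
        integrand = 3.0 * s / (8.0 * math.pi**2) * sigma_res_nb(s) * NB_TO_GEV2
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * integrand * de
    return total * 1.0e6           # GeV -> keV

print(f"<Gamma> ~ {averaged_width_keV(sigma_res_placeholder):.3f} keV (placeholder input)")
```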
The above examples, each in its own way, make clear the nontriviality of
accessing information about the $\sigma(600)$ and $f_{0}(980)$ decays into
$\gamma\gamma$. For instance, it is impossible to determine
$\Gamma_{\sigma\to\gamma\gamma}(m^{2}_{\sigma})$ directly from the data,
because the cross section in the $\sigma$ region is formed by both the
resonance and the compensating background. A specific dynamical model of the total
amplitude is needed for their separation; a simple Breit-Wigner parametrization is
not enough here.
As for the $f_{0}(980)$ resonance, experimenters began to take into
consideration two of the three important circumstances Mo07a ; Mo07b ; Ue08
(see also PDG08 ; PDG10 ) to which we drew attention in Ref. AS05 . Firstly,
the correction for the finite width due to the
coupling of the $f_{0}(980)$ with the $K\bar{K}$ channel was taken into account in the
$f_{0}(980)$ resonance propagator; this essentially affects the shape of the
$f_{0}(980)$ peak in the $\pi\pi$ channel. Secondly, the interference
of the $f_{0}(980)$ resonance with the background was taken into account, though in
the simplest form. But no model was constructed for the $f_{0}(980)$ $\to$ $\gamma\gamma$
decay amplitude, which was approximated simply by a constant Mo07a ; Mo07b ; Ue08 .
Fitting the data in this way, the Belle collaboration extracted the values of
$\Gamma_{f_{0}\to\gamma\gamma}(m^{2}_{f_{0}})$ presented in Table 4. But how to
understand these values needs discussion. First and foremost, one
cannot use them to determine a coupling constant $g_{f_{0}\gamma\gamma}$ in
an effective Lagrangian, i.e., a constant of the direct transition $f_{0}(980)$
$\to$ $\gamma\gamma$, because such a constant is small and does not determine
the $f_{0}(980)$ $\to$ $\gamma\gamma$ decay, as shown above. Until the model
of the $f_{0}(980)$ $\to$ $\gamma\gamma$ decay amplitude is specified, the
meaning of the $\Gamma_{f_{0}\to\gamma\gamma}(m^{2}_{f_{0}})$ values,
extracted with the help of the simplified parametrization, remains rather vague.
[Footnote 31: The above comments also apply to the $\sigma(600)$ resonance case.] In
principle the $\Gamma_{f_{0}\to\gamma\gamma}(m^{2}_{f_{0}})$ values in Table 4
can be taken as preliminary estimates of
$\langle\Gamma_{f_{0}\to\gamma\gamma}\rangle$, i.e., of the $f_{0}(980)$ $\to$
$\gamma\gamma$ decay width averaged over the hadron mass distribution AS88 ;
AS05 ; AS08b .
In the dispersion approach, the pole two-photon
widths $\Gamma_{R\to\gamma\gamma}(pole)$, $R$ = $\sigma,f_{0}$, are usually introduced
to characterize the coupling of the $\sigma(600)$ and $f_{0}(980)$ resonances to photons
(see, for example, MP88 ; MP90 ; BP99 ; Pe06 ; PMUW08 ). These widths are determined
through the moduli of the complex pole residues of the theoretically constructed
$\gamma\gamma$ $\to$ $\pi\pi$ and $\pi\pi$ $\to$ $\pi\pi$ partial amplitudes.
Based on our investigation AS07 we would like to note the
following. The residues of the above amplitudes are essentially complex and
cannot be used as coupling constants in a Hermitian effective Lagrangian.
These residues are “dressed” by the background, for they relate to the total
amplitudes. As our analysis in the $SU(2)_{L}$ $\times$ $SU(2)_{R}$ linear
$\sigma$ model AS07 indicated, the background essentially affects the
values and phases of the residues. Thus focusing on quantities of the
$\Gamma_{R\to\gamma\gamma}(pole)$ type in the dispersion approach does not help to
reveal the mechanism of the two-photon decays of the scalar mesons and so
cannot shed light on the nature of the light scalars.
5. Production of the $a_{0}(980)$ resonance in the reaction
$\gamma\gamma\to\pi^{0}\eta$
Our conclusions about the important role of the $K^{+}K^{-}$ loop mechanism in
the two-photon production of the $a_{0}(980)$ resonance and about its possible four-
quark nature AS88 ; AS91 ; AS92 were based on the analysis of the results of
the first experiments on the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction,
Crystal Ball An86 (see Fig. 5) and JADE Oe90 . Unfortunately, the large
statistical errors in these data and the rather coarse step of the
$\pi^{0}\eta$ invariant mass distribution (equal to 40 MeV in the Crystal Ball
experiment and 60 MeV in the JADE one) left many uncertainties.
As we mentioned in Subsection 3.2, the Belle Collaboration recently
obtained new data on the $\gamma\gamma\to\pi^{0}\eta$ reaction at the KEKB
$e^{+}e^{-}$ collider Ue09 , with statistics three orders of magnitude higher
than those of the preceding Crystal Ball (336 events) and JADE (291 events)
experiments.
Figure 22: The Belle Ue09 and Crystal Ball An86 data for the
$\gamma\gamma\to\pi^{0}\eta$ cross section. The average statistical error of
the Belle data is approximately $\pm 0.4$ nb, the shaded band shows the size
of their systematic error. The solid, dashed, and dotted lines correspond to
the total, helicity 0, and $S$ wave $\gamma\gamma\to\pi^{0}\eta$ cross
sections caused by the elementary $\rho$ and $\omega$ exchanges for
$|\cos\theta|\leq 0.8$.
The experiments revealed a specific feature of the
$\gamma\gamma\to\pi^{0}\eta$ cross section. It turned out to be sizable in the
region between the $a_{0}(980)$ and $a_{2}(1320)$ resonances (see Fig. 22)
[Footnote 32: the JADE data Oe90 on $\gamma\gamma\to\pi^{0}\eta$ are not normalized
and therefore are not shown in Fig. 22.], which certainly indicates the presence
of additional contributions. These contributions must be coherent with the
resonance ones, because only the two lowest, $S$ and $D_{2}$, partial waves dominate
in the $\gamma\gamma\to\pi^{0}\eta$ amplitude at invariant masses of the
$\pi^{0}\eta$ system $\sqrt{s}<1.4$ GeV Ue09 . The authors of Ref. Ue09
performed a phenomenological fit of the $\gamma\gamma\to\pi^{0}\eta$
data taking into account the interference between the resonance and background
contributions. It was found that the description of the $S$ wave requires not
only contributions from the $a_{0}(980)$ resonance and a possible heavy
$a_{0}(Y)$ resonance, but also a smooth background whose amplitude is
comparable with the amplitude of the $a_{0}(980)$ resonance at its maximum and
has a large imaginary part Ue09 . As a result, the background leads to an
almost quadrupling of the cross section near the $a_{0}(980)$ peak and to
the filling of the dip between the $a_{0}(980)$ and $a_{2}(1320)$ resonances.
The origin of such a significant background in the $S$ wave is unknown.
Meanwhile, the imaginary part of the background amplitude is due to the
contributions from the real intermediate states $\pi\eta$, $K\bar{K}$, and
$\pi\eta^{\prime}$ and, naturally, requires a distinct dynamical decoding.
In Refs. AS10a ; AS10b we have shown that the observed experimental pattern is a
result of the interplay of many dynamical factors. To analyze the data, we
significantly developed a model previously discussed in Refs. AS88 ; AS92 ;
AS09 . The basis of this model is the idea that the $a_{0}(980)$ resonance
is a suitable candidate for a four-quark state. There exist a number of
significant indications in favor of the four-quark nature of the $a_{0}(980)$;
see, for example, Refs. ADS80b ; ADS82a ; ADS82b ; ADS84a ; AS91 ; AI89 ; Ac98
; Ac03a ; AK03 ; Ac08a . The solution obtained by us for the $\gamma\gamma$
$\to$ $\pi^{0}\eta$ amplitude is in agreement with the expectations of the
chiral theory for the $\pi\eta$ scattering length, with the strong coupling of
the $a_{0}(980)$ resonance to the $\pi\eta$, $K\bar{K}$, and
$\pi\eta^{\prime}$ channels, and with the key role of the $a_{0}(980)$ $\to$
$(K\bar{K}+\pi^{0}\eta+\pi^{0}\eta^{\prime})$ $\to$ $\gamma\gamma$
rescattering mechanisms in the $a_{0}(980)$ $\to$ $\gamma\gamma$ decay. This
picture strongly favors the $q^{2}\bar{q}^{2}$ nature of the $a_{0}(980)$
resonance and is consistent with the properties of its partners, the
$\sigma_{0}(600)$ and $f_{0}(980)$ resonances, in particular with those
manifested in the $\gamma\gamma$ $\to$ $\pi\pi$ reactions. The important role
of vector exchanges in the formation of the non-resonant background in the
$\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction has also been revealed, and preliminary
information on the $\pi^{0}\eta$ $\to$ $\pi^{0}\eta$ reaction has been
obtained, in Refs. AS10a ; AS10b .
To analyze the Belle data, we constructed the helicity amplitudes
$M_{\lambda}$ and the corresponding partial amplitudes $M_{\lambda J}$ of the
$\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction, in which the electromagnetic Born
contributions from $\rho$, $\omega$, $K^{*}$, and $K$ exchanges, modified by
the form factors and by the strong elastic and inelastic final-state interactions in
the $\pi^{0}\eta$, $\pi^{0}\eta^{\prime}$, $K^{+}K^{-}$, and
$K^{0}\bar{K}^{0}$ channels, and the contributions from the direct interaction
of the resonances with photons are taken into account:
$\displaystyle
M_{0}(\gamma\gamma\to\pi^{0}\eta;s,\theta)=M^{\mbox{\scriptsize{Born}}\,V}_{0}(\gamma\gamma\to\pi^{0}\eta;s,\theta)+$
$\displaystyle+\widetilde{I}^{V}_{\pi^{0}\eta}(s)\,T_{\pi^{0}\eta\to\pi^{0}\eta}(s)+\widetilde{I}^{V}_{\pi^{0}\eta^{\prime}}(s)\,T_{\pi^{0}\eta^{\prime}\to\pi^{0}\eta}(s)+$
$\displaystyle+\left(\widetilde{I}^{K^{*+}}_{K^{+}K^{-}}(s)-\widetilde{I}^{K^{*0}}_{K^{0}\bar{K}^{0}}(s)+\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})\right)\times$
$\displaystyle\times\,T_{K^{+}K^{-}\to\pi^{0}\eta}(s)+\widetilde{M}^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s),$
(36) $\displaystyle M_{2}(\gamma\gamma\to\pi^{0}\eta;s,\theta)=M^{\mbox{
\scriptsize{Born}}\,V}_{2}(\gamma\gamma\to\pi^{0}\eta;s,\theta)+$
$\displaystyle+80\pi d^{2}_{20}(\theta)M_{\gamma\gamma\to
a_{2}(1320)\to\pi^{0}\eta}(s),$ (37)
where $\theta$ is the polar angle of the produced $\pi^{0}$ (or $\eta$) meson
in the $\gamma\gamma$ center-of-mass system. Figs. 23 and 24 show the diagrams
corresponding to these amplitudes.
Figure 23: The diagrams corresponding to the helicity amplitudes
$\gamma\gamma$ $\to$ $\pi^{0}\eta$, see Eqs. (36) and (37).
Figure 24: The Born $\rho$, $\omega$, $K^{*}$, and $K$ exchange diagrams for
$\gamma\gamma$ $\to$ $\pi^{0}\eta$, $\gamma\gamma$ $\to$
$\pi^{0}\eta^{\prime}$, and $\gamma\gamma$ $\to$ $K\bar{K}$.
The first terms on the right-hand sides of Eqs. (36) and (37) represent the
real Born helicity amplitudes, which are the sums of the equal-in-magnitude $\rho$ and $\omega$
exchange contributions and are written in the form AS88 ;
AS92
$\displaystyle\mbox{\qquad\ \ \
}M^{\mbox{\scriptsize{Born}}\,V}_{0}(\gamma\gamma\to\pi^{0}\eta;s,\theta)=$
$\displaystyle=2g_{\omega\pi\gamma}g_{\omega\eta\gamma}\frac{s}{4}\left[\frac{tG_{\omega}(s,t)}{t-m^{2}_{\omega}}+\frac{uG_{\omega}(s,u)}{u-m^{2}_{\omega}}\right],\mbox{\
}$ (38) $\displaystyle\mbox{\qquad\ \ \
}M^{\mbox{\scriptsize{Born}}\,V}_{2}(\gamma\gamma\to\pi^{0}\eta;s,\theta)=$
$\displaystyle=2g_{\omega\pi\gamma}g_{\omega\eta\gamma}\frac{m^{2}_{\pi}m^{2}_{\eta}-tu}{4}\left[\frac{G_{\omega}(s,t)}{t-m^{2}_{\omega}}+\frac{G_{\omega}(s,u)}{u-m^{2}_{\omega}}\right],\mbox{\
\ \ \ }$ (39)
where $g_{\omega\eta\gamma}$ =
$\frac{1}{3}g_{\omega\pi\gamma}\sin(\theta_{i}-\theta_{P})$,
$g^{2}_{\omega\pi\gamma}$ =
$12\pi\Gamma_{\omega\to\pi\gamma}[(m^{2}_{\omega}-m^{2}_{\pi})/(2m_{\omega})]^{-3}\approx
0.519$ GeV$^{-2}$ PDG08 ; PDG10 , the “ideal” mixing angle $\theta_{i}$ = 35.3∘,
$\theta_{P}$ is the mixing angle in the pseudoscalar nonet, which is a free
parameter; $t$ and $u$ are the Mandelstam variables for the reaction
$\gamma\gamma\to\pi^{0}\eta$, and $G_{\omega}(s,t)$ and $G_{\omega}(s,u)$ are the
$t$ and $u$ channel form factors [for the elementary $\rho$ and $\omega$
exchanges $G_{\omega}(s,t)$ = $G_{\omega}(s,u)$ = 1]. In the corresponding
Born amplitudes for $\gamma\gamma\to\pi^{0}\eta^{\prime}$,
$g_{\omega\eta^{\prime}\gamma}$ =
$\frac{1}{3}g_{\omega\pi\gamma}\cos(\theta_{i}-\theta_{P})$, and for
$\gamma\gamma\to K\bar{K}$ with the $K^{*}$ exchange,
$g^{2}_{K^{*+}K^{+}\gamma}\approx 0.064$ GeV$^{-2}$ and
$g^{2}_{K^{*0}K^{0}\gamma}\approx 0.151$ GeV$^{-2}$ PDG08 ; PDG10 .
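As a rough numerical cross-check of the quoted $\omega\pi\gamma$ coupling (an estimate only; it assumes $\Gamma_{\omega\to\pi^{0}\gamma}\approx 0.7$ MeV, $m_{\omega}\approx 0.78$ GeV, and $m_{\pi}\approx 0.135$ GeV): $(m^{2}_{\omega}-m^{2}_{\pi})/(2m_{\omega})\approx 0.38$ GeV, so that $g^{2}_{\omega\pi\gamma}\approx 12\pi\times 7\times 10^{-4}\ \mbox{GeV}/(0.38\ \mbox{GeV})^{3}\approx 0.5$ GeV$^{-2}$, consistent with the value 0.519 GeV$^{-2}$ quoted above.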
Note that information on the bare Born sources of the
$\gamma\gamma\to\pi^{0}\eta$ reaction corresponding to the exchanges with the
quantum numbers of the $\rho$ and $\omega$ mesons (as well as the $b_{1}(1235)$ and
$h_{1}(1170)$ mesons) is very scarce in the nonasymptotic energy range of interest.
It is only known for certain that the elementary $\rho$ and $\omega$ exchanges,
whose contributions to the $\gamma\gamma\to\pi^{0}\eta$ cross section
(primarily to the $S$ wave) increase very rapidly with the energy, are not
observed experimentally (see Fig. 22). [Footnote 33: These contributions are weakly
sensitive to the $\theta_{P}$ values under discussion PDG10 ; the curves in
Fig. 22 correspond to $\theta_{P}$ = $-22^{\circ}$.] This fact was explained in
Ref. AS92 by the Reggeization of the elementary exchanges, which suppresses the
dangerous contributions even in the range of 1–1.5 GeV. For this reason, we
use the Regge-type form factors $G_{\omega}(s,t)$ =
$\exp[(t-m^{2}_{\omega})b_{\omega}(s)]$ and $G_{\omega}(s,u)$ =
$\exp[(u-m^{2}_{\omega})b_{\omega}(s)]$, where $b_{\omega}(s)$ =
$b^{0}_{\omega}+(\alpha^{\prime}_{\omega}/4)\ln[1+(s/s_{0})^{4}]$,
$b^{0}_{\omega}$ = 0, $\alpha^{\prime}_{\omega}$ = 0.8 GeV$^{-2}$ and $s_{0}$ = 1
GeV$^{2}$ (and similarly for the $K^{*}$ exchange).
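The Regge-type damping just described can be illustrated with a few lines of code (a sketch only, using the parameter values quoted above; the momentum transfer in the example is an arbitrary representative value):

```python
# Sketch of the Regge-type form factor G_omega(s,t) = exp[(t - m_omega^2) * b_omega(s)]
# with b_omega(s) = b0 + (alpha'/4) * ln[1 + (s/s0)^4]; parameters as quoted in the text.
import math

m_omega, b0, alpha_p, s0 = 0.7827, 0.0, 0.8, 1.0   # GeV, GeV^-2, GeV^-2, GeV^2

def b_omega(s):
    return b0 + 0.25 * alpha_p * math.log(1.0 + (s / s0)**4)

def G_omega(s, t):
    return math.exp((t - m_omega**2) * b_omega(s))

# For spacelike-ish t the exponent is negative, so the suppression strengthens with energy:
t_example = -0.2   # representative momentum transfer in GeV^2 (illustrative value only)
for e in (0.8, 1.0, 1.2, 1.4):
    s = e * e
    print(f"sqrt(s) = {e} GeV: b = {b_omega(s):.3f} GeV^-2, G_omega = {G_omega(s, t_example):.3f}")
```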
As for the $b_{1}(1235)$ and $h_{1}(1170)$ exchanges, their amplitudes have
a form similar to Eqs. (38) and (39), except for the common sign
in the amplitude with helicity 0. The estimates show that the axial-vector
exchange amplitudes are at least five times smaller than the corresponding
vector exchange amplitudes, and we neglect their contributions. [Footnote 34: The
exchanges with high spins in the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction
constitute a correction against the background of the $K$ exchange contribution.
This correction is required to describe the data for $\gamma\gamma$ $\to$
$\pi^{0}\eta$, as shown below. As for the $\gamma\gamma$ $\to$ $\pi\pi$
reactions, the corrections from the high-spin exchanges prove to be less
significant against the background of the combined contribution of the $\pi$
and $K$ exchanges, and we are unable to catch them at this stage.]
The terms in Eq. (36) proportional to the $S$ wave hadronic amplitudes
$T_{\pi^{0}\eta\to\pi^{0}\eta}(s)$,
$T_{\pi^{0}\eta^{\prime}\to\pi^{0}\eta}(s)$, and
$T_{K^{+}K^{-}\to\pi^{0}\eta}(s)$ are attributed to the rescattering
mechanisms. In these amplitudes we take into account the contributions of
the mixed $a_{0}(980)$ and heavy $a_{0}(Y)$ resonances (below, for brevity,
they are denoted as $a_{0}$ and $a^{\prime}_{0}$, respectively) and the
background contributions:
$\displaystyle
T_{\pi^{0}\eta\to\pi^{0}\eta}(s)=T_{0}^{1}(s)=\frac{\eta^{1}_{0}(s)e^{2i\delta^{1}_{0}(s)}-1}{2i\rho_{\pi\eta}(s)}=\
\ $
$\displaystyle=T_{\pi\eta}^{bg}(s)+e^{2i\delta_{\pi\eta}^{bg}(s)}T^{res}_{\pi^{0}\eta\to\pi^{0}\eta}(s)\,,\
\ \mbox{\qquad}$ (40) $\displaystyle
T_{\pi^{0}\eta^{\prime}\to\pi^{0}\eta}(s)=T^{res}_{\pi^{0}\eta^{\prime}\to\pi^{0}\eta}(s)\,e^{i[\delta_{\pi\eta^{\prime}}^{bg}(s)+\delta_{\pi\eta}^{bg}(s)]},\
\ \ $ (41) $\displaystyle
T_{K^{+}K^{-}\to\pi^{0}\eta}(s)=T^{res}_{K^{+}K^{-}\to\pi^{0}\eta}(s)\,e^{i[\delta_{K\bar{K}}^{bg}(s)+\delta_{\pi\eta}^{bg}(s)]},$
(42)
where
$T_{\pi\eta}^{bg}(s)=(e^{2i\delta_{\pi\eta}^{bg}(s)}-1)/(2i\rho_{\pi\eta}(s))$,
$T^{res}_{\pi^{0}\eta\to\pi^{0}\eta}(s)=(\eta^{1}_{0}(s)e^{2i\delta_{\pi\eta}^{res}(s)}-1)/(2i\rho_{\pi\eta}(s))$,
$\delta^{1}_{0}(s)=\delta_{\pi\eta}^{bg}(s)+\delta_{\pi\eta}^{res}(s)$,
$\rho_{ab}(s)$ = $\sqrt{s-m_{ab}^{(+)\,2}}\sqrt{s-m_{ab}^{(-)\,2}}\Big{/}s$,
$m_{ab}^{(\pm)}$ = $m_{b}\pm m_{a}$, $ab$ = $\pi\eta$, $K^{+}K^{-}$,
$K^{0}\bar{K}^{0}$, $\pi\eta^{\prime}$; $\delta_{\pi\eta}^{bg}(s)$,
$\delta_{\pi\eta^{\prime}}^{bg}(s)$ and $\delta_{K\bar{K}}^{bg}(s)$ are the
phase shifts of the elastic background contributions in the channels
$\pi\eta$, $\pi\eta^{\prime}$, and $K\bar{K}$ with isospin $I=1$, respectively
(see Appendix 8.2).
The amplitudes of the $a_{0}$ – $a^{\prime}_{0}$ resonance complex in Eqs.
(40)–(42) have the form analogous to Eqs. (25), (26) AS10a ; AS10b ; AK06 ;
AS98
$T^{res}_{ab\to\pi^{0}\eta}(s)=\frac{g_{a_{0}ab}\Delta_{a^{\prime}_{0}}(s)+g_{a^{\prime}_{0}ab}\Delta_{a_{0}}(s)}{16\pi[D_{a_{0}}(s)D_{a^{\prime}_{0}}(s)-\Pi^{2}_{a_{0}a^{\prime}_{0}}(s)]}\,,$
(43)
where $\Delta_{a^{\prime}_{0}}(s)$ =
$D_{a^{\prime}_{0}}(s)g_{a_{0}\pi^{0}\eta}+\Pi_{a_{0}a^{\prime}_{0}}(s)g_{a^{\prime}_{0}\pi^{0}\eta}$
and $\Delta_{a_{0}}(s)$ =
$D_{a_{0}}(s)g_{a^{\prime}_{0}\pi^{0}\eta}+\Pi_{a_{0}a^{\prime}_{0}}(s)g_{a_{0}\pi^{0}\eta}$;
$g_{a_{0}ab}$ and $g_{a^{\prime}_{0}ab}$ are the coupling constants; and
$1/D_{a_{0}}(s)=1/(m^{2}_{a_{0}}-s+\sum_{ab}[\mbox{Re}\Pi^{ab}_{a_{0}}(m^{2}_{a_{0}})-\Pi^{ab}_{a_{0}}(s)])$
is the propagator for the $a_{0}$ resonance (and similar for the
$a^{\prime}_{0}$ resonance), where $\mbox{Re}\Pi^{ab}_{a_{0}}(s)$ is
determined by a singly subtracted dispersion integral of
$\mbox{Im}\Pi^{ab}_{a_{0}}(s)=\sqrt{s}\Gamma_{a_{0}\to
ab}(s)=g^{2}_{a_{0}ab}\rho_{ab}(s)/(16\pi)$, $\Pi_{a_{0}a^{\prime}_{0}}(s)$ =
$C_{a_{0}a^{\prime}_{0}}+\sum_{ab}(g_{a^{\prime}_{0}ab}/g_{a_{0}ab})\Pi^{ab}_{a_{0}}(s)$,
and $C_{a_{0}a^{\prime}_{0}}$ is the resonance mixing parameter; the explicit
form of the polarization operators $\Pi^{ab}_{a_{0}}(s)$ ADS80a ; AK03 ;
AK04YF ; AS09 is given in Appendix 8.2. The amplitude
$\widetilde{M}^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)=s\frac{g^{(0)}_{a_{0}\gamma\gamma}\Delta_{a^{\prime}_{0}}(s)+g^{(0)}_{a^{\prime}_{0}\gamma\gamma}\Delta_{a_{0}}(s)}{D_{a_{0}}(s)D_{a^{\prime}_{0}}(s)-\Pi^{2}_{a_{0}a^{\prime}_{0}}(s)}e^{i\delta_{\pi\eta}^{bg}(s)}$
(44)
in Eq. (36) describes the $\gamma\gamma\to\pi^{0}\eta$ transition caused by
the direct coupling constants $g^{(0)}_{a_{0}\gamma\gamma}$ and
$g^{(0)}_{a^{\prime}_{0}\gamma\gamma}$ of the $a_{0}$ and $a^{\prime}_{0}$
resonances with the photons; the factor $s$ appears due to the gauge
invariance.
Equation (36) implies that the amplitudes $T_{ab\to\pi^{0}\eta}(s)$ in the
$\gamma\gamma\to ab\to\pi^{0}\eta$ rescattering loops (see Fig. 23) are on the
mass shell. Here the functions $\widetilde{I}^{V}_{\pi^{0}\eta}(s)$,
$\widetilde{I}^{V}_{\pi^{0}\eta^{\prime}}(s)$,
$\widetilde{I}^{K^{*}}_{K\bar{K}}(s)$, and the above-mentioned function
$\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})$ are the amplitudes of the
triangle loop diagrams describing the transitions $\gamma\gamma$ $\to$ $ab$
$\to$ (scalar state with a mass = $\sqrt{s}$), where the meson pairs
$\pi^{0}\eta$, $\pi^{0}\eta^{\prime}$, and $K\bar{K}$ are created by the
electromagnetic Born sources (see Fig. 24); the corresponding formulae are given in
Appendixes 8.2 and 8.3. The constructed amplitude
$M_{0}(\gamma\gamma\to\pi^{0}\eta;s,\theta)$ satisfies the Watson theorem in
the elastic region.
For the $a_{2}(1320)$ production amplitude in (37), we use a parametrization
similar to (28) and (29):
$\displaystyle M_{\gamma\gamma\to
a_{2}(1320)\to\pi^{0}\eta}(s)=\mbox{\qquad\qquad}$
$\displaystyle=\frac{\sqrt{s\Gamma_{a_{2}\to\gamma\gamma}(s)\Gamma^{\mbox{\scriptsize{tot}}}_{a_{2}}(s)B(a_{2}\to\pi\eta)/\rho_{\pi\eta}(s)}}{m^{2}_{a_{2}}-s-i\sqrt{s}\Gamma^{\mbox{\scriptsize{tot}}}_{a_{2}}(s)}\,,$
(45)
where
$\Gamma^{\mbox{\scriptsize{tot}}}_{a_{2}}(s)=\Gamma^{\mbox{\scriptsize{tot}}}_{a_{2}}\frac{m^{2}_{a_{2}}}{s}\frac{q^{5}_{\pi\eta}(s)}{q^{5}_{\pi\eta}(m^{2}_{a_{2}})}\frac{D_{2}(q_{\pi\eta}(m^{2}_{a_{2}})r_{a_{2}})}{D_{2}(q_{\pi\eta}(s)r_{a_{2}})}\,,$
(46)
$q_{\pi\eta}(s)$ = $\sqrt{s}\rho_{\pi\eta}(s)/2$, $D_{2}(x)$ =
$9+3x^{2}+x^{4}$, $r_{a_{2}}$ is the interaction radius, and
$\Gamma_{a_{2}\to\gamma\gamma}(s)=(\frac{\sqrt{s}}{m_{a_{2}}})^{3}\Gamma_{a_{2}\to\gamma\gamma}$.
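A minimal Python transcription of the parametrization (45)–(46) may help to fix the normalization; the phase-space function $\rho_{\pi\eta}(s)$ is assumed to be provided externally, and the parameter values are meant to be taken from Appendix 8.2.

```python
import numpy as np

def D2(x):
    # barrier factor entering Eq. (46)
    return 9.0 + 3.0 * x**2 + x**4

def gamma_tot_a2(s, m_a2, gamma_tot, r_a2, rho_pieta):
    """Energy-dependent total a2(1320) width, Eq. (46)."""
    q  = np.sqrt(s) * rho_pieta(s) / 2.0        # q_{pi eta}(s)
    q0 = m_a2 * rho_pieta(m_a2**2) / 2.0        # q_{pi eta}(m_a2^2)
    return gamma_tot * (m_a2**2 / s) * (q / q0)**5 * D2(q0 * r_a2) / D2(q * r_a2)

def m_gg_to_a2_to_pi0eta(s, m_a2, gamma_tot, gamma_gg0, br_pieta, r_a2, rho_pieta):
    """Breit-Wigner amplitude of Eq. (45) for gamma gamma -> a2(1320) -> pi0 eta."""
    g_tot = gamma_tot_a2(s, m_a2, gamma_tot, r_a2, rho_pieta)
    g_gg  = (np.sqrt(s) / m_a2)**3 * gamma_gg0  # Gamma_{a2 -> gamma gamma}(s)
    num = np.sqrt(s * g_gg * g_tot * br_pieta / rho_pieta(s))
    return num / (m_a2**2 - s - 1j * np.sqrt(s) * g_tot)
```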
Recall that the $f_{2}(1270)\to\gamma\gamma$ and $a_{2}(1320)\to\gamma\gamma$ decay widths satisfy rather well the relation $\Gamma_{f_{2}\to\gamma\gamma}/\Gamma_{a_{2}\to\gamma\gamma}$ = $25/9$ PDG08 ; PDG10 ; MPW94 , which is valid in the naive $q\bar{q}$ model for the direct transitions $q\bar{q}$ $\to$ $\gamma\gamma$.
Figure 25: The fit to the Belle data on the $\gamma\gamma$ $\to$ $\pi^{0}\eta$
reaction cross section. The resulting solid line corresponds to the solid line
1 in Fig. 26(a) (or in Fig. 26(b)), folded with a Gaussian with $\sigma$ = 10
MeV mass resolution; the shaded band shows the size of the systematic error of
the data.
---
Figure 26: The fit to the Belle data. (a) Solid line 1 is the total
$\gamma\gamma$ $\to$ $\pi^{0}\eta$ cross section, solid line 2 and the dotted
line are the helicity 0 and 2 components of the cross section, solid line 3 is
the contribution from the $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$
$\pi^{0}\eta$ rescattering with the intermediate $K^{+}K^{-}$ pair created due to
the Born $K$ exchange, the dashed line is the contribution from the
$\gamma\gamma$ $\to$ $K\bar{K}$ $\to$ $\pi^{0}\eta$ with the intermediate
$K\bar{K}$ pairs created due to the Born $K$ and $K^{*}$ exchanges, the dash-
dotted line is the contribution from the Born $\rho$ and $\omega$ exchanges
with $\lambda$ =0, and solid line 4 is the joint contribution from these
exchanges and the S wave rescattering $\gamma\gamma$ $\to$
$(\pi^{0}\eta+\pi^{0}\eta^{\prime})$ $\to$ $\pi^{0}\eta$. (b) Solid lines 1
and 2 are the same as in panel (a), the short-dashed line corresponds to the
contribution of the amplitude
$\widetilde{M}^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)$
caused by the direct decays of the $a_{0}$ and $a^{\prime}_{0}$ resonances
into photons, the dotted line is the total contribution from the
$a_{0}-a^{\prime}_{0}$ resonance complex, and the long-dashed line is the
helicity 0 cross section without the contribution of the direct transition
amplitude
$\widetilde{M}^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)$.
The results of our fit to the Belle data on the $\gamma\gamma\to\pi^{0}\eta$
reaction cross section are shown in Figs. 25 and 26. The corresponding values
of the model parameters are quoted in Appendix 8.2. The good agreement with the experimental data (see Fig. 25) allows us to draw definite conclusions about the main dynamical constituents of the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ reaction mechanism, whose contributions are shown in detail in Figs. 26(a) and 26(b).
Let us begin with the contribution from the inelastic rescattering $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $\pi^{0}\eta$, where the intermediate $K^{+}K^{-}$ pair is created via the charged one-kaon exchange (see Fig. 24(b)). This mechanism, as in the case of the $f_{0}(980)$ production in the
$\gamma\gamma$ $\to$ $\pi\pi$ reactions AS08a ; AS08b , specifies the natural
scale for the $a_{0}(980)$ resonance production cross section in
$\gamma\gamma$ $\to$ $\pi^{0}\eta$, and also leads to a narrowing of the $a_{0}(980)$ peak in this channel AS88 ; AS09 . The maximum of the cross
section $\sigma(\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $a_{0}(980)$ $\to$
$\pi^{0}\eta)$ is controlled by the product of the ratio of the squares of the
coupling constants $R_{a_{0}}$ =
$g^{2}_{a_{0}K^{+}K^{-}}/g^{2}_{a_{0}\pi\eta}$ and the value
$|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(4m^{2}_{K^{+}};x_{2})|^{2}$. Its estimate
gives $\sigma(\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $a_{0}(980)$ $\to$
$\pi^{0}\eta;|\cos\theta|\leq 0.8)\approx 0.8\times
1.4\alpha^{2}R_{a_{0}}/m^{2}_{a_{0}}\approx 24$ nb $\times R_{a_{0}}$ (here we
neglect the heavy $a^{\prime}_{0}$ resonance contribution). Below the
$K^{+}K^{-}$ threshold, the function
$|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})|^{2}$ decreases sharply,
resulting in the narrowing of the $a_{0}(980)$ peak in the $\gamma\gamma$
$\to$ $K^{+}K^{-}$ $\to$ $a_{0}(980)$ $\to$ $\pi^{0}\eta$ cross section AS88 ;
AS09 . The $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $\pi^{0}\eta$ rescattering
contribution to the $\gamma\gamma\to\pi^{0}\eta$ cross section is shown by
solid line 3 in Fig. 26(a). The $K^{*}$ exchange also slightly narrows the
$a_{0}(980)$ peak (see the dashed line under solid line 3 in Fig. 26(a)).
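The quoted peak value is easy to verify by elementary arithmetic; the short Python snippet below is only a units check (the standard conversion constant $\hbar^{2}c^{2}\approx 0.3894$ mb GeV$^{2}$ and the nominal $a_{0}(980)$ mass are the only inputs).

```python
alpha = 1.0 / 137.036          # fine-structure constant
m_a0  = 0.985                  # nominal a0(980) mass, GeV
GEV2_TO_NB = 0.3894e6          # 1 GeV^-2 expressed in nb

sigma_per_R = 0.8 * 1.4 * alpha**2 / m_a0**2 * GEV2_TO_NB
print(f"sigma(gamma gamma -> K+K- -> a0 -> pi0 eta; |cos theta|<0.8) "
      f"~ {sigma_per_R:.0f} nb x R_a0")   # ~24 nb x R_a0, as quoted above
```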
The $\gamma\gamma$ $\to$ $K\bar{K}$ $\to$ $\pi^{0}\eta$ rescattering mechanism alone is evidently insufficient to describe the data in the region of the $a_{0}(980)$ resonance. The addition of the Born contribution from the $\rho$
and $\omega$ exchanges, which is modified by the $S$ wave $\gamma\gamma$ $\to$
$(\pi^{0}\eta+\pi^{0}\eta^{\prime})$ $\to$ $\pi^{0}\eta$ rescattering, and the
amplitude
$\widetilde{M}^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)$,
which is due to the direct transitions of the $a_{0}$ and $a^{\prime}_{0}$
resonances into photons, makes it possible to obtain the observed cross
section magnitude. The contributions of these two mechanisms themselves are
small in the region of the $a_{0}(980)$ resonance (see solid line 4 in Fig.
26(a) for the first of them and the short-dashed line in Fig. 26(b) for the
second), but their coherent sum with the contribution from the $\gamma\gamma$
$\to$ $K\bar{K}$ $\to$ $\pi^{0}\eta$ inelastic rescattering (see the diagrams
for the amplitude with $\lambda$ = 0 in Fig. 23) results in the considerable
enhancement of the $a_{0}(980)$ resonance (see solid line 2 in Fig. 26(a)).
Recall that all the $S$ wave contributions to the $\gamma\gamma$ $\to$
$\pi^{0}\eta$ amplitude below the $K^{+}K^{-}$ threshold have the same phase
according to the Watson theorem.
Figure 27: The $S$ wave $\pi^{0}\eta\to\pi^{0}\eta$ amplitude. (a) $|T^{1}_{0}(s)|$
and inelasticity $\eta^{1}_{0}(s)$; (b) phase shifts ($a^{1}_{0}$ = 0.0098).
Note that, as a by-product, we extracted from the fit to the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ data preliminary information on the $S$ wave amplitude of the $\pi^{0}\eta$ $\to$ $\pi^{0}\eta$ reaction, which is important for the low-energy physics of pseudoscalar mesons. The characteristics of the $S$ wave $\pi^{0}\eta$ $\to$ $\pi^{0}\eta$ amplitude are presented in Fig. 27. Worth noting here is the important role of the
background $\pi^{0}\eta$ elastic amplitude $T^{bg}_{\pi\eta}(s)$, see Eq.
(40). First, the choice of the negative background phase
$\delta^{bg}_{\pi\eta}(s)$ (see Fig. 27(b)) in $T^{bg}_{\pi\eta}(s)$ makes it
possible to fit the $\pi\eta$ scattering length in the model under
consideration to the estimates based on the current algebra Os70 ; Pe71 and
chiral perturbation theory BKM91 ; BFS00 , according to which $a^{1}_{0}$ (in
units of $m^{-1}_{\pi}$) $\approx$ $0.005-0.01$. The resonance contribution
($\approx$ 0.3) to $a^{1}_{0}$ is compensated by the background contribution.
Second, the significant negative value of $\delta^{bg}_{\pi\eta}(s)$ near 1
GeV ensures the resonance-like behavior of the cross section shown by solid
line 4 in Fig. 26(a).
We now turn to Fig. 26(b) and discuss the contribution from the possibly existing heavy $a^{\prime}_{0}$ resonance PDG10 with a mass $m_{a^{\prime}_{0}}\approx$ 1.4 GeV. The cross section corresponding to the
amplitude
$\widetilde{M}^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}}}(s)$ (see
the short-dashed line) exhibits a pronounced enhancement near 1.4 GeV. In the
cross section corresponding to the total contribution from the resonances (see
the dotted line), i.e., from the amplitude
$\widetilde{M}^{\mbox{\scriptsize{direct }}}_{\mbox{\scriptsize{res}}}(s)$ and
rescattering amplitudes proportional to the amplitudes of the resonance
transitions $ab\to\pi^{0}\eta$ ($ab$ = $\pi\eta$, $K^{+}K^{-}$,
$K^{0}\bar{K}^{0}$, $\pi\eta^{\prime}$), this enhancement is reduced to a shoulder. Finally, in the total cross section $\sigma_{0}$ (see solid line 2), which additionally includes the Born $\gamma\gamma$ $\to$ $\pi^{0}\eta$ contribution and the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ $\to$ $\pi^{0}\eta$ rescattering caused by the background $\pi^{0}\eta$ $\to$ $\pi^{0}\eta$ elastic amplitude, no resonance features remain near 1.4 GeV. Thus, a
strong destructive interference exists between different contributions and
masks the $a^{\prime}_{0}$ resonance in the $\gamma\gamma$ $\to$ $\pi^{0}\eta$
cross section. Nevertheless, largely owing to the $a^{\prime}_{0}$, we succeed in modeling the significant smooth background under the $a_{2}(1320)$ and between the $a_{0}(980)$ and $a_{2}(1320)$ resonances that is required by the Belle data Ue09 . Note that, owing to the resulting compensations, the wide
interval of (1.28–1.42) GeV is allowed for the mass of the $a^{\prime}_{0}$
resonance (see Ref. AS10a for details). 353535Recall that in the previous section it was not necessary to introduce any heavy scalar isoscalar resonance for the theoretical description of the $\gamma\gamma\to\pi^{+}\pi^{-}$ and $\gamma\gamma\to\pi^{0}\pi^{0}$ processes, nor in Refs. Mo07b ; Ue08 for the phenomenological treatment of the experimental data. In principle, it could be the $f_{0}(1370)$ resonance PDG10 . As a matter of fact, the situation with the heavy scalar resonances with masses ${}^{>}_{\sim}$ 1.3 GeV has been badly tangled for a long time. For example, the authors of the review KZ07 seriously doubt the existence of such a state as the $f_{0}(1370)$ (in this connection see also Refs. AS96 ; Oc10 ). It is possible that the wish to see scalar resonances with masses of (1.3–1.4) GeV as the partners of the well-established $b_{1}(1235)$, $h_{1}(1170)$, $a_{1}(1260)$, $f_{1}(1285)$, $a_{2}(1320)$, and $f_{2}(1270)$ states, belonging to the lowest $P$ wave $q\bar{q}$ multiplet, is not realized in a naive way. In any case,
this question remains open and requires further experimental and theoretical
investigations.
Let us consider now the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ cross section due
only to the resonance contributions and, by analogy with Eq. (35), determine
the width of the $a_{0}(980)$ $\to$ $\gamma\gamma$ decay averaged over the
resonance mass distribution in the $\pi\eta$ channel AS88 ; AS09 :
$\langle\Gamma_{a_{0}\to\gamma\gamma}\rangle_{\pi\eta}=\int\limits_{0.9\mbox{\,\scriptsize{GeV}}}^{1.1\mbox{\,\scriptsize{GeV}}}\frac{s}{4\pi^{2}}\sigma_{\mbox{\scriptsize{res}}}(\gamma\gamma\to\pi^{0}\eta;s)d\sqrt{s}$
(47)
(the integral is calculated over the region of the $a_{0}(980)$ resonance).
Taking into account the contributions to the $\sigma_{\mbox{\scriptsize{res}}}$ cross section from all the rescattering processes and the direct decays into $\gamma\gamma$, we obtain
$\langle\Gamma_{a_{0}\to(K\bar{K}+\pi\eta+\pi\eta^{\prime}+\mbox{\scriptsize{direct}})\to\gamma\gamma}\rangle_{\pi\eta}$
$\approx$ 0.4 keV. Taking into account the contributions from only the
rescattering processes,
$\langle\Gamma_{a_{0}\to(K\bar{K}+\pi\eta+\pi\eta^{\prime})\to\gamma\gamma}\rangle_{\pi\eta}$
$\approx$ 0.23 keV, and, taking into account the contributions from only the
direct decays,
$\langle\Gamma^{\mbox{\scriptsize{direct}}}_{a_{0}\to\gamma\gamma}\rangle_{\pi\eta}$
$\approx$ 0.028 keV.
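The averaging in Eq. (47) is a one-dimensional integral over $\sqrt{s}$; the hedged Python sketch below shows only the bookkeeping (units in particular), with a purely illustrative Gaussian stand-in for the resonance part of the cross section rather than the full model.

```python
import numpy as np
from scipy.integrate import quad

GEV2_TO_NB = 0.3894e6   # 1 GeV^-2 in nb

def averaged_gg_width(sigma_res_nb, lo=0.9, hi=1.1):
    """<Gamma_{a0 -> gamma gamma}>_{pi eta} of Eq. (47), in GeV.

    sigma_res_nb : callable giving the resonance part of
                   sigma(gamma gamma -> pi0 eta) in nb versus sqrt(s) in GeV.
    """
    def integrand(w):                              # w = sqrt(s)
        s = w * w
        return s / (4.0 * np.pi**2) * sigma_res_nb(w) / GEV2_TO_NB
    value, _ = quad(integrand, lo, hi)
    return value

# toy stand-in: a 60 nb peak with 40 MeV Gaussian width at 0.99 GeV
toy = lambda w: 60.0 * np.exp(-0.5 * ((w - 0.99) / 0.04) ** 2)
print(f"<Gamma_gg> ~ {averaged_gg_width(toy) * 1e6:.2f} keV")
```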
The analysis performed above indicates that the $a_{0}(980)$ $\to$
$(K\bar{K}+\pi^{0}\eta+\pi^{0}\eta^{\prime})$ $\to$ $\gamma\gamma$
rescattering mechanisms, i.e., the four-quark transitions, dominate in the
$a_{0}(980)$ $\to$ $\gamma\gamma$ decay. This picture is evidence of the
$q^{2}\bar{q}^{2}$ nature of the $a_{0}(980)$ resonance and is in agreement
with the properties of the $\sigma_{0}(600)$ and $f_{0}(980)$ resonances,
which are its partners. As to the ideal $q\bar{q}$ model prediction for the
two-photon decay widths of the $f_{0}(980)$ and $a_{0}(980)$ mesons,
$\Gamma_{f_{0}\to\gamma\gamma}/\Gamma_{a_{0}\to\gamma\gamma}=25/9$, it is
excluded by experiment. 363636As already mentioned in Ref. AS09 , the model of
nonrelativistic $K\bar{K}$ molecules is unjustified, because the momenta in
the kaon loops describing the $\phi\to K^{+}K^{-}\to\gamma(f_{0}/a_{0})$ and
$f_{0}/a_{0}\to K^{+}K^{-}\to\gamma\gamma$ decays are high Ac08a ; AK07b ;
AK08 . Our analysis gives an additional reason against the molecular model.
The point is that the $a_{0}(980)$ resonance is strongly coupled with the
$K\bar{K}$ and $\pi\eta$ channels, which are equivalent in the
$q^{2}\bar{q}^{2}$ model. A weakly bound $K\bar{K}+\pi\eta$ molecule seems to
be impossible. Moreover, the widths of the two-photon decays of the scalar
resonances in the molecular model are calculated at the resonance point Ka06 ;
Ha07 , but this is insufficient for describing the
$\gamma\gamma\to\pi^{+}\pi^{-}$, $\gamma\gamma\to\pi^{0}\pi^{0}$, and
$\gamma\gamma\to\pi^{0}\eta$ reactions. Attempts to describe the data on these processes in the framework of the molecular model are absent; therefore, the results obtained in this model are of purely academic character.
6\. Preliminary summary
The results of the theoretical analysis of the experimental achievements in the low-energy region, up to 1 GeV, can be summarized as follows.
1.
Naive consideration of the mass spectrum of the light scalar mesons,
$\sigma(600)$, $\kappa(800)$, $f_{0}(980)$, and $a_{0}(980)$, gives an idea of
their $q^{2}\bar{q}^{2}$ structure.
2.
Both the intensity and the mechanism of the $a_{0}(980)/f_{0}(980)$ production in the
$\phi(1020)$ meson radiative decays, the four-quark transitions $\phi(1020)\to
K^{+}K^{-}\to\gamma[a_{0}(980)/f_{0}(980)]$, indicate the $q^{2}\bar{q}^{2}$
nature of the $a_{0}(980)$ and $f_{0}(980)$ states.
3.
The intensities and mechanisms of the two-photon production of the light scalars,
the four-quark transitions $\gamma\gamma\to\pi^{+}\pi^{-}\to\sigma(600)$,
$\gamma\gamma\to\pi^{0}\eta\to a_{0}(980)$, and $\gamma\gamma\to K^{+}K^{-}\to
f_{0}(980)/a_{0}(980)$, also indicate their $q^{2}\bar{q}^{2}$ nature.
4.
In addition, the absence of the $J/\psi$ $\to$ $\gamma f_{0}(980)$, $\rho
a_{0}(980)$, $\omega f_{0}(980)$ decays, in contrast to the intense $J/\psi$ $\to$ $\gamma f_{2}(1270)$, $\,\gamma f^{\prime}_{2}(1525)$, $\rho a_{2}(1320)$, $\omega f_{2}(1270)$ decays, argues against the $P$ wave $q\bar{q}$ structure of the $a_{0}(980)$ and $f_{0}(980)$ resonances.
5.
It also seems indisputable that in all respects the $a_{0}(980)$ and
$f_{0}(980)$ mesons are strangers in the company of the well established
$b_{1}(1235)$, $h_{1}(1170)$, $a_{1}(1260)$, $f_{1}(1285)$, $a_{2}(1320)$, and
$f_{2}(1270)$ mesons, which are the members of the lower $P$ wave $q\bar{q}$
multiplet.
7\. Future Trends
7.1. $f_{0}(980)$ and $a_{0}(980)$ resonances near $\gamma\gamma\to
K^{+}K^{-}$ and $\gamma\gamma\to K^{0}\bar{K}^{0}$ reaction thresholds
The Belle Collaboration investigated the $\gamma\gamma$ $\to$
$\pi^{+}\pi^{-}$, $\gamma\gamma$ $\to$ $\pi^{0}\pi^{0}$, and $\gamma\gamma$
$\to$ $\pi^{0}\eta$ reactions with the highest statistics. 373737Note that
high precision measurements of the $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ cross
section for 0.28 GeV $<$ $\sqrt{s}$ $<$ 0.45 GeV are planned for the KLOE-2
detector at the upgraded DA$\Phi$NE $\phi$ factory in Frascati A-C10 ; Cz10 ; the existing MARK II data Bo90 have very large error bars in this region, see Fig. 6(a). Measurements of the integral and differential cross sections for
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ and $\gamma\gamma$ $\to$
$\pi^{0}\pi^{0}$ in the $\sqrt{s}$ region from 0.45 GeV to 1.1 GeV A-C10 ;
Cz10 , which will complete the information from previous experiments on the
$\sigma(600)$ and $f_{0}(980)$ resonance production, are also planned. In
particular, the statistical uncertainty in the $\gamma\gamma$ $\to$
$\pi^{0}\pi^{0}$ cross section in the $\sigma(600)$ meson region (see Fig.
6(b)) will be reduced to 2%. In July 2010, the Belle Collaboration also reported the first data on the $\gamma\gamma$ $\to$ $\eta\eta$ reaction Ueh10 .
The $\gamma\gamma$ $\to$ $\eta\eta$ cross section for $\sqrt{s}$ $>$ 1.2 GeV
is dominated by the contributions from the tensor resonances $f_{2}(1270)$,
$a_{2}(1320)$, and $f^{\prime}_{2}(1525)$. But near the threshold, $2m_{\eta}$
= 1.0957 GeV $<$ $\sqrt{s}$ $<$ 1.2 GeV, there is a noticeable $S$ wave
contribution, $\approx(1.5\pm 0.15\pm 0.7)$ nb, which indicates the presence
of some subthreshold resonance strongly coupled with the $\eta\eta$ channel.
Such a resonance in the $q^{2}\bar{q}^{2}$ model is the $f_{0}(980)$.
Unfortunately, the $\gamma\gamma$ $\to$ $\eta\eta$ reaction is not well suited for its investigation, because only the very end of the tail of this resonance can be seen here.
High statistics information is still lacking for the reactions $\gamma\gamma$
$\to$ $K^{+}K^{-}$ and $\gamma\gamma$ $\to$ $K^{0}\bar{K}^{0}$ in the 1 GeV
region. It is expected that the four-quark nature of the $a_{0}(980)$ and $f_{0}(980)$ resonances manifests itself in these channels in a quite distinctive way AS92 ; AS94b .
As the experiments show Al83 ; Alt85 ; Jo86 ; Ber88 ; Beh89 ; FH91 ; Alb90 ;
Bra00 ; Acc01 ; Ab04 , the $\gamma\gamma$ $\to$ $K^{+}K^{-}$ and
$\gamma\gamma$ $\to$ $K^{0}_{S}K^{0}_{S}$ cross sections in the region 1 $<$ $\sqrt{s}$ $<$ 1.7 GeV are in fact saturated by the contributions of the classical tensor $f_{2}(1270)$, $a_{2}(1320)$, and $f^{\prime}_{2}(1525)$ resonances, created in the helicity 2 states, see Fig. 28. The constructive
and destructive interference between the $f_{2}(1270)$ and $a_{2}(1320)$
resonance contributions is observed in $\gamma\gamma$ $\to$ $K^{+}K^{-}$ and
$\gamma\gamma$ $\to$ $K^{0}\bar{K}^{0}$, respectively, in agreement with the
$q\bar{q}$ model FLR75 . Notice that the region of the $K\bar{K}$ thresholds, $2m_{K}<\sqrt{s}<1.1$ GeV, which is sensitive to the $S$ wave contributions, has in fact hardly been investigated. The sensitivity of the ARGUS experiment to the $K^{+}K^{-}$ events for $2m_{K^{+}}<\sqrt{s}<1.1$ GeV was negligible Alb90 , see Fig. 28(a), and the total statistics of the L3 Acc01 , see Fig. 28(b), and CLEO Bra00 experiments on $\gamma\gamma$ $\to$ $K^{0}_{S}K^{0}_{S}$ for $2m_{K^{0}}<\sqrt{s}<1.1$ GeV do not exceed 60 events.
Figure 28: The $K^{+}K^{-}$ (a) and $K^{0}_{S}K^{0}_{S}$ (b) mass spectra
measured by ARGUS Alb90 and L3 Acc01 , respectively. (c) This plot
illustrates the scale of the $K\bar{K}$ production cross section in
$\gamma\gamma$ collisions. The experimental points show the cross section for
$\gamma\gamma\to K^{+}K^{-}$ with allowed contributions from $\lambda J$ =
[22, 02, 00] Alb90 . The upper dashed, dot-dashed, and solid curves correspond
to the $\gamma\gamma\to K^{+}K^{-}$ Born cross section with $\lambda J$ =00,
$\lambda J$ = [00, 22], and the total one, respectively (the $\lambda J$ =02
contribution is negligible). The lower dashed, dot-dashed, and solid curves
correspond to the same cross sections but modified by the form factor (see
Sec. 4 and Appendix 8.3). The dotted curve shows the estimate of the $S$
wave $\gamma\gamma\to K^{+}K^{-}$ cross section in our model.
The absence of a considerable non-resonance background in the $\gamma\gamma$ $\to$ $K^{+}K^{-}$ cross section seems at first sight rather surprising, since the one-kaon-exchange Born contribution is comparable with the tensor resonance contributions, see Fig. 28(c). As seen from this figure, the $S$ wave
contribution dominates in the Born cross section at $\sqrt{s}<1.5$ GeV. One
would think that a large incoherent background should be present under the tensor meson peaks in the $K^{+}K^{-}$ channel. But taking into account the resonance interaction between the $K^{+}$ and $K^{-}$ mesons in the final state results in the cancellation of a considerable part of this background AS92 ; AS94b . The principal point is that the $S$ wave Born $\gamma\gamma$ $\to$ $K^{+}K^{-}$ amplitude acquires the factor $\xi(s)$ = $[1+i\rho_{K^{+}}(s)T_{K^{+}K^{-}\to K^{+}K^{-}}(s)]$ due to the $\gamma\gamma$ $\to$ $K^{+}K^{-}$ $\to$ $K^{+}K^{-}$ rescattering amplitude
with the real kaons in the intermediate state. The $a_{0}(980)$ and
$f_{0}(980)$ resonance contributions dominate in the $T_{K^{+}K^{-}\to
K^{+}K^{-}}(s)$ amplitude near the $K^{+}K^{-}$ threshold and provide it with a considerable imaginary part owing to the strong coupling to the $K\bar{K}$ channels in the four-quark scheme. As a result, the $|\xi(s)|^{2}$ factor is considerably less than 1 just above the $K^{+}K^{-}$ threshold, and the seed $S$ wave Born contribution is considerably reduced in a wide region of $\sqrt{s}$. The dotted curve in Fig. 28(c) represents the estimate of the
$S$ wave $\gamma\gamma\to K^{+}K^{-}$ cross section obtained in the model
under consideration (see details in Appendix 8.3). This estimate agrees with those obtained earlier AS92 ; AS94b .
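The suppression mechanism described above is easy to quantify once the $S$ wave $K^{+}K^{-}\to K^{+}K^{-}$ amplitude is known; the following Python fragment (a sketch, with the amplitude assumed to be supplied by Eqs. (97)–(101) of Appendix 8.3) simply evaluates $|\xi(s)|^{2}$.

```python
import numpy as np

M_K = 0.4937   # charged kaon mass, GeV

def rho_kp(s):
    return np.sqrt(max(1.0 - 4.0 * M_K**2 / s, 0.0))

def xi_squared(s, t_kpkm):
    """|xi(s)|^2 = |1 + i rho_{K+}(s) T_{K+K- -> K+K-}(s)|^2.

    t_kpkm : callable returning the S-wave K+K- -> K+K- amplitude
             (dominated by the a0(980)/f0(980) resonances near threshold).
    """
    return abs(1.0 + 1j * rho_kp(s) * t_kpkm(s)) ** 2
```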
So one can hope to detect, in a partial wave analysis of the $\gamma\gamma$ $\to$ $K^{+}K^{-}$ reaction at $2m_{K^{+}}<\sqrt{s}<1.1$ GeV, scalar contributions at the level of about 5–10 nb. As for the $\gamma\gamma$ $\to$ $K^{0}\bar{K}^{0}$ reaction, its amplitude has no Born contribution, and the $a_{0}(980)$ resonance contribution has the opposite sign in comparison with the $\gamma\gamma$ $\to$ $K^{+}K^{-}$ channel. As a result, the contributions of the $S$ wave $\gamma\gamma\to K^{+}K^{-}\to K^{0}\bar{K}^{0}$ rescattering amplitudes with isotopic spin $I$ = 0 and 1 practically cancel each other, and the corresponding cross section should be at the level of $\lesssim$ 1 nb.
7.2. $\sigma(600)$, $f_{0}(980)$, and $a_{0}(980)$ resonances in
$\gamma\gamma^{*}$ collisions
Investigations of the light scalar mesons in $\gamma\gamma^{*}(Q^{2})$ collisions (where $\gamma^{*}(Q^{2})$ is a photon with virtuality $Q^{2}$) are promising. If the $\sigma(600)$, $f_{0}(980)$, and $a_{0}(980)$ resonances are
four-quark states, their contributions to the $\gamma\gamma^{*}$ $\to$
$\pi^{0}\pi^{0}$ and $\gamma\gamma^{*}$ $\to$ $\pi^{0}\eta$ cross sections
should decrease with increasing $Q^{2}$ more rapidly than the contributions
from the classical tensor mesons $f_{2}(1270)$ and $a_{2}(1320)$. A similar
behavior of the contribution from the exotic $q^{2}\bar{q}^{2}$ resonance
state with $I^{G}(J^{PC})$ = $2^{+}(2^{++})$ ADS82a ; ADS82b ; AS91 to the
$\gamma\gamma^{*}$ $\to$ $\rho^{0}\rho^{0}$ and $\gamma\gamma^{*}$ $\to$
$\rho^{+}\rho^{-}$ cross sections was recently observed by the L3
Collaboration L31 ; L32 ; L33 ; L34 .
7.3. Inelasticity of $\pi\pi$ scattering and $f_{0}(980)-a_{0}(980)$ mixing
By now considerable progress has been made in the experimental investigations
of the $f_{0}(980)$ and $a_{0}(980)$ mesons in various reactions.
Nevertheless, it turns out that equally good descriptions of the available
data can be obtained for appreciably different sets of the coupling constants
$g_{f_{0}K^{+}K^{-}}$, $g_{f_{0}\pi^{+}\pi^{-}}$, etc. (see, for example,
Refs. ADS84a ; ADS80a ; ADS84b ; AK06 ; AK07a ). Certainly, it would be highly
desirable to fix their values. In respect of the coupling constants
$g_{f_{0}K^{+}K^{-}}$ and $g_{f_{0}\pi^{+}\pi^{-}}$, this question could be
elucidated by precise data on the inelasticity of $\pi\pi$ scattering near the
$K\bar{K}$ threshold, that have not been updated since 1975 Pr73 ; Hy73 ; Gr74
; Hy75 . It is very likely that such data in the raw form are in hand of the
VES Collaboration, which was performing measurements of the
$\pi^{-}p\to\pi^{+}\pi^{-}n$ reaction at IHEP (Protvino). Moreover, the
product of the coupling constants $g_{a_{0}K^{+}K^{-}}g_{f_{0}K^{+}K^{-}}$ may
be fixed from data on the $f_{0}(980)-a_{0}(980)$ mixing, that are expected
from the BESIII detector Har10 .
Exclusive information on $g_{a_{0}K^{+}K^{-}}g_{f_{0}K^{+}K^{-}}$ can result
from investigations of the spin asymmetry jump, due to the
$f_{0}(980)-a_{0}(980)$ mixing, in the $\pi^{-}p\to f_{0}(980)n\to
a_{0}(980)n\to\pi^{0}\eta n$ reaction AS04a .
This work was supported in part by the RFFI Grant No. 10-02-00016 from the
Russian Foundation for Basic Research.
8\. Appendix
8.1. $\gamma\gamma\to\pi\pi$
Below we list the expressions for the Born helicity amplitudes corresponding to the charged one pion exchange mechanism and for the triangle loop integrals $\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)$ and $\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s;x_{1})$ used in Section 4. In addition, a few useful auxiliary formulae for a solitary scalar resonance are given.
The Born helicity amplitudes for the elementary one pion exchange in the
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ reaction have the form
$M_{0}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s,\theta)=\frac{4m^{2}_{\pi^{+}}}{s}\frac{8\pi\alpha}{1-\rho^{2}_{\pi^{+}}(s)\cos^{2}\theta}\,,$
(48)
$M_{2}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s,\theta)=\frac{8\pi\alpha\rho^{2}_{\pi^{+}}(s)\sin^{2}\theta}{1-\rho^{2}_{\pi^{+}}(s)\cos^{2}\theta}\,,$
(49)
($\rho_{\pi^{+}}(s)=\sqrt{1-4m^{2}_{\pi^{+}}/s}$). Their partial wave
expansions are
$M_{\lambda}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s,\theta)=\sum_{J\geq\lambda}(2J+1)M_{\lambda
J}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s)d^{J}_{\lambda 0}(\theta)\,,$ (50)
where $d^{J}_{\lambda 0}(\theta)$ are the usual $d$-functions (see, for example, PDG08 ; PDG10 ). The three lowest partial waves have the form
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{00}(s)=4\pi\alpha\frac{1-\rho^{2}_{\pi^{+}}(s)}{\rho_{\pi^{+}}(s)}\,\ln\frac{1+\rho_{\pi^{+}}(s)}{1-\rho_{\pi^{+}}(s)}\,,$
(51)
$M_{02}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s)=4\pi\alpha\frac{1-\rho^{2}_{\pi^{+}}(s)}{\rho^{2}_{\pi^{+}}(s)}\left[\frac{3-\rho^{2}_{\pi^{+}}(s)}{2\rho_{\pi^{+}}(s)}\ln\frac{1+\rho_{\pi^{+}}(s)}{1-\rho_{\pi^{+}}(s)}-3\right]\,,$ (52)
$M_{22}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s)=4\pi\alpha\sqrt{\frac{3}{2}}\left[\frac{(1-\rho^{2}_{\pi^{+}}(s))^{2}}{2\rho^{3}_{\pi^{+}}(s)}\ln\frac{1+\rho_{\pi^{+}}(s)}{1-\rho_{\pi^{+}}(s)}-\frac{1}{\rho^{2}_{\pi^{+}}(s)}+\frac{5}{3}\right]\,.$ (53)
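For convenience, the Born helicity amplitudes (48)–(49) and the partial waves (51)–(53) are collected in the following small Python sketch (only the charged pion mass and the fine-structure constant enter).

```python
import numpy as np

ALPHA = 1.0 / 137.036
M_PI  = 0.13957      # charged pion mass, GeV

def rho_pip(s):
    return np.sqrt(1.0 - 4.0 * M_PI**2 / s)

def born_helicity(s, theta):
    """Eqs. (48)-(49): Born helicity amplitudes for one-pion exchange."""
    r = rho_pip(s)
    den = 1.0 - (r * np.cos(theta)) ** 2
    m0 = (4.0 * M_PI**2 / s) * 8.0 * np.pi * ALPHA / den
    m2 = 8.0 * np.pi * ALPHA * (r * np.sin(theta)) ** 2 / den
    return m0, m2

def born_partial_waves(s):
    """Eqs. (51)-(53): the lowest partial waves M_00, M_02, M_22."""
    r = rho_pip(s)
    L = np.log((1.0 + r) / (1.0 - r))
    m00 = 4 * np.pi * ALPHA * (1 - r**2) / r * L
    m02 = 4 * np.pi * ALPHA * (1 - r**2) / r**2 * ((3 - r**2) / (2 * r) * L - 3.0)
    m22 = 4 * np.pi * ALPHA * np.sqrt(1.5) * ((1 - r**2)**2 / (2 * r**3) * L
                                              - 1.0 / r**2 + 5.0 / 3.0)
    return m00, m02, m22
```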
The amplitude of the triangle loop diagram, describing the transition
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ $\to$ (scalar state with a mass =
$\sqrt{s}$), is defined by
$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)=\frac{s}{\pi}\int\limits^{\infty}_{4m^{2}_{\pi^{+}}}\frac{\rho_{\pi^{+}}(s^{\prime})M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{00}(s^{\prime})}{s^{\prime}(s^{\prime}-s-i\varepsilon)}ds^{\prime}\,.$
(54)
The behavior $\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)$ $\propto$ $s$ as $s$ $\to$ 0 is a consequence of gauge invariance. For $0<s<4m^{2}_{\pi^{+}}$
$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)=8\alpha(\frac{m^{2}_{\pi^{+}}}{s}[\pi-2\arctan|\rho_{\pi^{+}}(s)|]^{2}-1)\,,$
(55)
and for $s$ $\geq$ $4m^{2}_{\pi^{+}}$
$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)=8\alpha\left\\{\frac{m^{2}_{\pi^{+}}}{s}\left[\pi+i\ln\frac{1+\rho_{\pi^{+}}(s)}{1-\rho_{\pi^{+}}(s)}\right]^{2}-1\right\\}\,.$
(56)
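The closed forms (55) and (56) are straightforward to evaluate; a minimal Python transcription is given below.

```python
import numpy as np

ALPHA = 1.0 / 137.036
M_PI  = 0.13957      # charged pion mass, GeV

def I_tilde_pipi(s):
    """Triangle amplitude gg -> pi+ pi- -> (scalar), Eqs. (55)-(56)."""
    if s >= 4.0 * M_PI**2:                       # above threshold, Eq. (56)
        r = np.sqrt(1.0 - 4.0 * M_PI**2 / s)
        bracket = np.pi + 1j * np.log((1.0 + r) / (1.0 - r))
    else:                                        # 0 < s < 4 m_pi^2, Eq. (55)
        r = np.sqrt(4.0 * M_PI**2 / s - 1.0)     # |rho_{pi+}(s)|
        bracket = np.pi - 2.0 * np.arctan(r)
    return 8.0 * ALPHA * (M_PI**2 / s * bracket**2 - 1.0)
```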
The form factor, see Eq. (33),
$G_{\pi^{+}}(t,u)=\frac{1}{s}\left[\frac{m^{2}_{\pi^{+}}-t}{1-(u-m^{2}_{\pi^{+}})/x^{2}_{1}}+\frac{m^{2}_{\pi^{+}}-u}{1-(t-m^{2}_{\pi^{+}})/x^{2}_{1}}\right]$
(here $t=m^{2}_{\pi^{+}}-s[1-\rho_{\pi^{+}}(s)\cos\theta]/2$ and $u=m^{2}_{\pi^{+}}-s[1+\rho_{\pi^{+}}(s)\cos\theta]/2$) modifies the Born partial wave amplitudes. Let us introduce the notations:
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{0J}(s)=\frac{1-\rho^{2}_{\pi^{+}}(s)}{\rho_{\pi^{+}}(s)}F^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{0J}(\rho_{\pi^{+}}(s))\,,$
(57)
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{2J}(s)=\rho_{\pi^{+}}(s)F^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{2J}(\rho_{\pi^{+}}(s))\,.$
(58)
Then the modified amplitudes can be represented in the form
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{0J}(s;x_{1})=\frac{1-\rho^{2}_{\pi^{+}}(s)}{\rho_{\pi^{+}}(s)}\left[F^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{0J}(\rho_{\pi^{+}}(s))-F^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{0J}(\rho_{\pi^{+}}(s;x_{1}))\right],$ (59)
$M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{2J}(s;x_{1})=\rho_{\pi^{+}}(s)\left[F^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{2J}(\rho_{\pi^{+}}(s))-F^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{2J}(\rho_{\pi^{+}}(s;x_{1}))\right],$ (60)
where
$\rho_{\pi^{+}}(s;x_{1})=\rho_{\pi^{+}}(s)/(1+2x^{2}_{1}/s)\,.$ (61)
When the form factor is taken into account, the function $\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)$, see Eqs. (54)–(56), is replaced by
$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s;x_{1})=\frac{s}{\pi}\int\limits^{\infty}_{4m^{2}_{\pi^{+}}}\frac{\rho_{\pi^{+}}(s^{\prime})M^{\mbox{\scriptsize{Born}}\,\pi^{+}}_{00}(s^{\prime};x_{1})}{s^{\prime}(s^{\prime}-s-i\varepsilon)}ds^{\prime}\,.$
(62)
In this case, numerical integration is certainly required.
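One possible way to organize that numerical integration is sketched below in Python; the subtraction of the two $F$ functions implements Eqs. (57), (59), and (61), the principal-value piece is handled with a Cauchy-weight quadrature, and the cut-off $s_{\mbox{\scriptsize max}}$ and the value $x_{1}=0.9$ GeV used in the example call are illustrative assumptions, not fitted values.

```python
import numpy as np
from scipy.integrate import quad

ALPHA, M_PI = 1.0 / 137.036, 0.13957
S_TH = 4.0 * M_PI**2

def rho(s):
    return np.sqrt(1.0 - S_TH / s)

def m00_born_ff(s, x1):
    """Form-factor-modified S-wave Born amplitude, Eqs. (57), (59), (61)."""
    r, r1 = rho(s), rho(s) / (1.0 + 2.0 * x1**2 / s)
    F = lambda q: 4.0 * np.pi * ALPHA * np.log((1.0 + q) / (1.0 - q))
    return (1.0 - r**2) / r * (F(r) - F(r1))

def I_tilde_ff(s, x1, s_max=400.0):
    """Dispersion integral of Eq. (62), cut off at s_max (GeV^2)."""
    f = lambda sp: rho(sp) * m00_born_ff(sp, x1) / sp
    if s < S_TH:                                   # no pole on the contour
        val, _ = quad(lambda sp: f(sp) / (sp - s), S_TH, s_max)
        return s / np.pi * val
    pv, _ = quad(f, S_TH, s_max, weight='cauchy', wvar=s)   # principal value
    return s / np.pi * pv + 1j * s * f(s)          # plus the i*pi delta-function term

print(I_tilde_ff(0.96**2, x1=0.9))   # illustrative value at sqrt(s) = 0.96 GeV
```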
To make it easier to understand the structure and normalization of the rather complicated expressions used in fitting the data, we give here the formulae for the production cross section of the $\sigma$ resonance and for its two-photon decay width due to the rescattering mechanism $\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ $\to$ $\sigma$ $\to$ $\pi^{+}\pi^{-}$, in the idealized case of a solitary scalar $\sigma$ resonance coupled only to the $\pi\pi$ channel.
The corresponding resonance cross section has the familiar form
$\sigma_{\mbox{\scriptsize{res}}}(\gamma\gamma\to\pi^{+}\pi^{-};s)=\frac{8\pi}{s}\,\frac{\sqrt{s}\Gamma_{\sigma\to\pi^{+}\pi^{-}\to\gamma\gamma}(s)\,\sqrt{s}\Gamma_{\sigma\to\pi^{+}\pi^{-}}(s)}{|D_{\sigma}(s)|^{2}}\,,$ (63)
where
$\Gamma_{\sigma\to\pi^{+}\pi^{-}\to\gamma\gamma}(s)=\frac{1}{16\pi\sqrt{s}}|M_{\sigma\to\pi^{+}\pi^{-}\to\gamma\gamma}(s)|^{2}=\left|\frac{1}{16\pi}\,\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)\right|^{2}\,\frac{g^{2}_{\sigma\pi^{+}\pi^{-}}}{16\pi\sqrt{s}}\,.$ (64)
If, in addition, the $\sigma$ can undergo a direct transition into $\gamma\gamma$ with the amplitude $sg^{(0)}_{\sigma\gamma\gamma}$, then the width $\Gamma_{\sigma\to\pi^{+}\pi^{-}\to\gamma\gamma}(s)$ in Eq. (63) should be replaced by
$\Gamma_{\sigma\to\gamma\gamma}(s)=\frac{1}{16\pi\sqrt{s}}|M_{\sigma\to\gamma\gamma}(s)|^{2}\,,$
(65)
where
$M_{\sigma\to\gamma\gamma}(s)=M_{\sigma\to\pi^{+}\pi^{-}\to\gamma\gamma}(s)+sg^{(0)}_{\sigma\gamma\gamma}\,.$
(66)
The propagator of the $\sigma$ resonance with the Breit-Wigner mass $m_{\sigma}$ in Eq. (63) has the form
$\frac{1}{D_{\sigma}(s)}=\frac{1}{m^{2}_{\sigma}-s+\mbox{Re}\Pi^{\pi\pi}_{\sigma}(m^{2}_{\sigma})-\Pi^{\pi\pi}_{\sigma}(s)}\,,$
(67)
where $\Pi^{\pi\pi}_{\sigma}(s)$ is the polarization operator of the $\sigma$
resonance for the contribution of the $\pi^{+}\pi^{-}$ and $\pi^{0}\pi^{0}$
intermediate states. For $s\geq 4m^{2}_{\pi^{+}}$ (= $4m^{2}_{\pi^{0}}$)
$\Pi^{\pi\pi}_{\sigma}(s)=\frac{3}{2}\frac{g^{2}_{\sigma\pi^{+}\pi^{-}}}{16\pi}\rho_{\pi^{+}}(s)\left[i-\frac{1}{\pi}\ln\frac{1+\rho_{\pi^{+}}(s)}{1-\rho_{\pi^{+}}(s)}\right]\,.$
(68)
If 0 $<$ $s$ $<$ $4m^{2}_{\pi^{+}}$ then $\rho_{\pi^{+}}(s)$ $\to$
$i|\rho_{\pi^{+}}(s)|$ and
$\Pi^{\pi\pi}_{\sigma}(s)=-\frac{3}{2}\frac{g^{2}_{\sigma\pi^{+}\pi^{-}}}{16\pi}|\rho_{\pi^{+}}(s)|\left[1-\frac{2}{\pi}\arctan|\rho_{\pi^{+}}(s)|\right].$
(69)
The $\sigma\to\pi\pi$ decay width is
$\Gamma_{\sigma\to\pi\pi}(s)=\frac{1}{\sqrt{s}}\mbox{Im}\Pi^{\pi\pi}_{\sigma}(s)=\frac{3}{2}\frac{g^{2}_{\sigma\pi^{+}\pi^{-}}}{16\pi}\frac{\rho_{\pi^{+}}(s)}{\sqrt{s}}\,.$
(70)
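The finite-width machinery of Eqs. (67)–(70) is compact enough to be written out directly; the Python sketch below uses the toy values $m_{\sigma}=0.6$ GeV and $\Gamma_{\sigma}=0.45$ GeV quoted below for Fig. 29 and inverts Eq. (70) at $s=m^{2}_{\sigma}$ to fix $g^{2}_{\sigma\pi^{+}\pi^{-}}$.

```python
import numpy as np

M_PI = 0.13957   # charged pion mass, GeV (m_pi0 is set equal to it, as in the text)

def rho_pi(s):
    return np.sqrt(abs(1.0 - 4.0 * M_PI**2 / s))

def pi_sigma(s, g2):
    """Polarization operator Pi^{pipi}_sigma(s), Eqs. (68)-(69); g2 = g^2_{sigma pi+pi-}."""
    c = 1.5 * g2 / (16.0 * np.pi)
    r = rho_pi(s)
    if s >= 4.0 * M_PI**2:
        return c * r * (1j - np.log((1.0 + r) / (1.0 - r)) / np.pi)
    return -c * r * (1.0 - 2.0 / np.pi * np.arctan(r))

def d_sigma(s, m_sigma, g2):
    """Inverse propagator D_sigma(s), Eq. (67), with the finite-width correction."""
    return m_sigma**2 - s + pi_sigma(m_sigma**2, g2).real - pi_sigma(s, g2)

def gamma_sigma(s, g2):
    """sigma -> pi pi width, Eq. (70)."""
    return 1.5 * g2 / (16.0 * np.pi) * rho_pi(s) / np.sqrt(s)

m_s = 0.6
g2  = 0.45 * m_s * 16.0 * np.pi / (1.5 * rho_pi(m_s**2))   # from Eq. (70) at s = m_sigma^2
print(f"Re D_sigma at the pi pi threshold: "
      f"{d_sigma(4.0 * M_PI**2 + 1e-9, m_s, g2).real:.3f} GeV^2")
```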
The function $\mbox{Re}[\Pi^{\pi\pi}_{\sigma}(m^{2}_{\sigma})$ $-$
$\Pi^{\pi\pi}_{\sigma}(s)]$ in the denominator of Eq. (67) is the correction
for the finite width of the resonance. In Fig. 29 the real and imaginary parts
of the inverse propagator $D_{\sigma}(s)$ (taken with a minus sign) are shown by the solid and dashed curves for a resonance with the
mass $m_{\sigma}$ = 0.6 GeV and the width $\Gamma_{\sigma}$ =
$\Gamma_{\sigma\to\pi\pi}(m^{2}_{\sigma})$ = 0.45 GeV. As may be inferred from
this figure, $\mbox{Re}[D_{\sigma}(s)]$ can be close to 0 at $s$ = $4m^{2}_{\pi^{+}}$ due to the finite-width correction when the width is large. This results in a threshold cusp in the amplitudes proportional to $|1/D_{\sigma}(s)|$. 383838References to papers in which the finite-width corrections and the analytic properties of the propagators of the realistic $f_{0}(980)$, $a_{0}(980)$, and $\sigma(600)$ resonances have been investigated are given in Section 2. In connection
with the $\gamma\gamma$ $\to$ $\pi^{0}\eta$ and $\gamma\gamma$ $\to$ $\pi\pi$
reactions, these corrections are also discussed in the papers AS88 ; AS05 . For reference, the real and imaginary parts of the inverse propagator $D_{\sigma}(s)$ = $m^{2}_{\sigma}$ $-$ $s$ $-$ $im_{\sigma}\Gamma_{\sigma}\sqrt{(s-4m^{2}_{\pi^{+}})/(m^{2}_{\sigma}-4m^{2}_{\pi^{+}})}$ without the finite-width correction Fl76 (also taken with a minus sign) are shown in Fig. 29 by the dotted and dot-dashed curves, respectively, at the
same values of $m_{\sigma}$ and $\Gamma_{\sigma}$.
Figure 29: Demonstration of the finite-width correction using the example of a single $\sigma$ resonance. The curves are described in the text.
8.2. $\gamma\gamma\to\pi^{0}\eta$
The polarization operators of the $a_{0}$ resonance $\Pi^{ab}_{a_{0}}(s)$
($ab$ = $\pi\eta$, $K^{+}K^{-}$, $K^{0}\bar{K}^{0}$, $\pi\eta^{\prime}$),
introduced in Section 4 (see the paragraph with Eqs. (43) and (44)), have the
following form: for $s$ $\geq$ $m_{ab}^{(+)\,2}$ ($m_{ab}^{(\pm)}$ = $m_{b}\pm
m_{a}$, $m_{b}\geq m_{a}$)
$\Pi^{ab}_{a_{0}}(s)=\frac{g^{2}_{a_{0}\to ab}}{16\pi}\left[\frac{m_{ab}^{(+)}m_{ab}^{(-)}}{\pi s}\ln\frac{m_{a}}{m_{b}}+\rho_{ab}(s)\left(i-\frac{1}{\pi}\,\ln\frac{\sqrt{s-m_{ab}^{(-)\,2}}+\sqrt{s-m_{ab}^{(+)\,2}}}{\sqrt{s-m_{ab}^{(-)\,2}}-\sqrt{s-m_{ab}^{(+)\,2}}}\right)\right],$ (71)
where $\rho_{ab}(s)$ =
$\sqrt{s-m_{ab}^{(+)\,2}}\sqrt{s-m_{ab}^{(-)\,2}}\Big{/}s$, for
$m_{ab}^{(-)\,2}$ $<$ $s$ $<$ $m_{ab}^{(+)\,2}$
$\Pi^{ab}_{a_{0}}(s)=\frac{g^{2}_{a_{0}\to ab}}{16\pi}\left[\frac{m_{ab}^{(+)}m_{ab}^{(-)}}{\pi s}\ln\frac{m_{a}}{m_{b}}-\rho_{ab}(s)\left(1-\frac{2}{\pi}\arctan\frac{\sqrt{m_{ab}^{(+)\,2}-s}}{\sqrt{s-m_{ab}^{(-)\,2}}}\right)\right],$ (72)
where $\rho_{ab}(s)$ =
$\sqrt{m_{ab}^{(+)\,2}-s}\sqrt{s-m_{ab}^{(-)\,2}}\Big{/}s$, and for $s\leq
m_{ab}^{(-)\,2}$
$\Pi^{ab}_{a_{0}}(s)=\frac{g^{2}_{a_{0}\to ab}}{16\pi}\left[\frac{m_{ab}^{(+)}m_{ab}^{(-)}}{\pi s}\ln\frac{m_{a}}{m_{b}}+\rho_{ab}(s)\,\frac{1}{\pi}\,\ln\frac{\sqrt{m_{ab}^{(+)\,2}-s}+\sqrt{m_{ab}^{(-)\,2}-s}}{\sqrt{m_{ab}^{(+)\,2}-s}-\sqrt{m_{ab}^{(-)\,2}-s}}\right],$ (73)
where $\rho_{ab}(s)$ =
$\sqrt{m_{ab}^{(+)\,2}-s}\sqrt{m_{ab}^{(-)\,2}-s}\Big{/}s$.
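The three kinematic regions of Eqs. (71)–(73) can be coded as a single function; the following Python sketch is a direct transcription (with the sign convention of Eq. (73) as written above, which keeps $\Pi^{ab}_{a_{0}}(s)$ regular at $s=0$).

```python
import numpy as np

def pi_ab_a0(s, ma, mb, g2):
    """Polarization operator Pi^{ab}_{a0}(s) of Eqs. (71)-(73); g2 = g^2_{a0 -> ab},
    with ma <= mb the masses of the ab pair in GeV."""
    mp, mm = mb + ma, mb - ma                    # m_ab^{(+)}, m_ab^{(-)}
    pref = g2 / (16.0 * np.pi)
    logterm = mp * mm / (np.pi * s) * np.log(ma / mb)
    if s >= mp**2:                               # Eq. (71): above threshold
        rho = np.sqrt(s - mp**2) * np.sqrt(s - mm**2) / s
        L = np.log((np.sqrt(s - mm**2) + np.sqrt(s - mp**2)) /
                   (np.sqrt(s - mm**2) - np.sqrt(s - mp**2)))
        return pref * (logterm + rho * (1j - L / np.pi))
    if s > mm**2:                                # Eq. (72): between pseudothreshold and threshold
        rho = np.sqrt(mp**2 - s) * np.sqrt(s - mm**2) / s
        a = np.arctan(np.sqrt(mp**2 - s) / np.sqrt(s - mm**2))
        return pref * (logterm - rho * (1.0 - 2.0 / np.pi * a))
    rho = np.sqrt(mp**2 - s) * np.sqrt(mm**2 - s) / s      # Eq. (73): below m_ab^{(-)2}
    L = np.log((np.sqrt(mp**2 - s) + np.sqrt(mm**2 - s)) /
               (np.sqrt(mp**2 - s) - np.sqrt(mm**2 - s)))
    return pref * (logterm + rho * L / np.pi)
```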
The triangle loop integral in Eq. (36) is
$\widetilde{I}^{V}_{\pi\eta}(s)=\frac{s}{\pi}\int\limits^{\infty}_{(m_{\pi}+m_{\eta})^{2}}\frac{\rho_{\pi\eta}(s^{\prime})M^{\mbox{\scriptsize{Born}}\,V}_{00}(\gamma\gamma\to\pi^{0}\eta;s^{\prime})}{s^{\prime}(s^{\prime}-s-i\varepsilon)}ds^{\prime}\,,$
(74)
where
$M^{\mbox{\scriptsize{Born}}\,V}_{00}(\gamma\gamma\to\pi^{0}\eta;s)=\frac{1}{2}\int\limits^{1}_{-1}M^{\mbox{\scriptsize{Born}}\,V}_{0}(\gamma\gamma\to\pi^{0}\eta;s,\theta)\,d\cos\theta\,,$ (75)
is the $S$ wave Born amplitude, and the amplitude $M^{\mbox{\scriptsize{Born}}\,V}_{0}(\gamma\gamma\to\pi^{0}\eta;s,\theta)$ is defined above. The
functions $\widetilde{I}^{V}_{\pi^{0}\eta^{\prime}}(s)$ and
$\widetilde{I}^{K^{*}}_{K\bar{K}}(s)$ in Eq. (36) are calculated similarly and
the function $\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})$ is calculated with
Eq. (92) in Appendix 8.3.
For the background phase shifts we use the simplest parametrizations, which
are suitable in the physical region of the $\gamma\gamma$ $\to$ $\pi^{0}\eta$
reaction:
$\displaystyle
e^{i\delta^{bg}_{ab}(s)}=[(1+iF_{ab}(s))/(1-iF_{ab}(s))]^{1/2},$ (76)
$\displaystyle
F_{\pi\eta}(s)=\frac{\sqrt{1-m^{(+)\,2}_{\pi\eta}/s}\left(c_{0}+c_{1}\left(s-m^{(+)\,2}_{\pi\eta}\right)\right)}{1+c_{2}\left(s-m^{(+)\,2}_{\pi\eta}\right)^{2}},$
(77) $\displaystyle
F_{K\bar{K}}(s)=f_{K\bar{K}}\sqrt{s}\left(\rho_{K^{+}K^{-}}(s)+\rho_{K^{0}\bar{K}^{0}}(s)\right)/2,$
(78) $\displaystyle
F_{\pi\eta^{\prime}}(s)=f_{\pi\eta^{\prime}}\sqrt{s-m^{(+)\,2}_{\pi\eta^{\prime}}}.\qquad\qquad$
(79)
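These parametrizations are simple algebraic functions of $s$; a hedged Python transcription is given below, using approximate meson masses and, in the example call, the background parameters quoted in the next paragraph (with $c_{0}$ read as $-0.603$).

```python
import numpy as np

M_PI0, M_ETA, M_ETAP, M_KP, M_K0 = 0.1350, 0.5479, 0.9578, 0.4937, 0.4976  # GeV

def rho_ab(s, m1, m2):
    return np.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / s

def F_pieta(s, c0, c1, c2):
    """Eq. (77)."""
    x = s - (M_PI0 + M_ETA)**2
    return np.sqrt(x / s) * (c0 + c1 * x) / (1.0 + c2 * x**2)

def F_kk(s, f_kk):
    """Eq. (78)."""
    return f_kk * np.sqrt(s) * (rho_ab(s, M_KP, M_KP) + rho_ab(s, M_K0, M_K0)) / 2.0

def F_pietap(s, f_pietap):
    """Eq. (79)."""
    return f_pietap * np.sqrt(s - (M_PI0 + M_ETAP)**2)

def exp_i_delta_bg(F):
    """Eq. (76): exp[i delta^{bg}_{ab}(s)] = [(1 + iF)/(1 - iF)]^{1/2}."""
    return np.sqrt((1.0 + 1j * F) / (1.0 - 1j * F))

# background pi eta phase at sqrt(s) = 1 GeV with the quoted parameters
delta = np.angle(exp_i_delta_bg(F_pieta(1.0, -0.603, -6.48, 0.121)))
print(f"delta^bg_pieta(1 GeV) ~ {np.degrees(delta):.0f} deg")   # significantly negative
```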
The curves in Figs. 25, 26, and 27 correspond to the following model
parameters: ($m_{a_{0}},\ g_{a_{0}\pi\eta},\ g_{a_{0}K^{+}K^{-}},\
g_{a_{0}\pi\eta^{\prime}}$) = (0.9845, 4.23, 3.79, $-2.13$) GeV;
($m_{a^{\prime}_{0}},\ g_{a^{\prime}_{0}\pi\eta},\
g_{a^{\prime}_{0}K^{+}K^{-}},\ g_{a^{\prime}_{0}\pi\eta^{\prime}}$) = (1.4,
3.3, 0.28, 2.91) GeV; ($g^{(0)}_{a_{0}\gamma\gamma},\,g^{(0)}_{a^{\prime}_{0}\gamma\gamma}$) = (1.77, $-11.5$)$\times 10^{-3}$ GeV$^{-1}$; $C_{a_{0}a^{\prime}_{0}}$ = 0.06 GeV$^{2}$, $c_{0}$ = $-0.603$, $c_{1}$ = $-6.48$ GeV$^{-2}$, $c_{2}$ = 0.121 GeV$^{-4}$; ($f_{K\bar{K}},\ f_{\pi\eta^{\prime}}$) = ($-0.37$, 0.28) GeV$^{-1}$; ($m_{a_{2}},\ \Gamma^{\mbox{\scriptsize{tot}}}_{a_{2}}$) = (1.322, 0.116) GeV; $\Gamma^{(0)}_{a_{2}\to\gamma\gamma}$ = 1.053 keV, $r_{a_{2}}$ = 1.9 GeV$^{-1}$,
$\theta_{P}$ = $-24^{\circ}$ (see Ref. AS10a for details).
8.3. $\gamma\gamma\to K\bar{K}$
The Born amplitudes of the reaction $\gamma\gamma$ $\to$ $K^{+}K^{-}$ caused by the elementary one kaon exchange, $M_{\lambda}^{\mbox{\scriptsize{Born}}\,K^{+}}(s,\theta)$ and $M_{\lambda J}^{\mbox{\scriptsize{Born}}\,K^{+}}(s)$, result from the corresponding
$\gamma\gamma$ $\to$ $\pi^{+}\pi^{-}$ Born amplitudes
$M_{\lambda}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s,\theta)$ and $M_{\lambda
J}^{\mbox{\scriptsize{Born}}\,\pi^{+}}(s)$ by the substitution of $m_{K^{+}}$
for $m_{\pi^{+}}$ and of $\rho_{K^{+}}(s)=\sqrt{1-4m^{2}_{K^{+}}/s}$ for
$\rho_{\pi^{+}}(s)$ in Eqs. (48), (49), and (51)–(53):
$M_{0}^{\mbox{\scriptsize{Born}}\,K^{+}}(s,\theta)=\frac{4m^{2}_{K^{+}}}{s}\frac{8\pi\alpha}{1-\rho^{2}_{K^{+}}(s)\cos^{2}\theta}\,,$
(80)
$M_{2}^{\mbox{\scriptsize{Born}}\,K^{+}}(s,\theta)=\frac{8\pi\alpha\rho^{2}_{K^{+}}(s)\sin^{2}\theta}{1-\rho^{2}_{K^{+}}(s)\cos^{2}\theta}\,,$
(81)
$M^{\mbox{\scriptsize{Born}}\,K^{+}}_{00}(s)=4\pi\alpha\frac{1-\rho^{2}_{K^{+}}(s)}{\rho_{K^{+}}(s)}\,\ln\frac{1+\rho_{K^{+}}(s)}{1-\rho_{K^{+}}(s)}\,,$
(82)
$M_{02}^{\mbox{\scriptsize{Born}}\,K^{+}}(s)=4\pi\alpha\frac{1-\rho^{2}_{K^{+}}(s)}{\rho^{2}_{K^{+}}(s)}\left[\frac{3-\rho^{2}_{K^{+}}(s)}{2\rho_{K^{+}}(s)}\ln\frac{1+\rho_{K^{+}}(s)}{1-\rho_{K^{+}}(s)}-3\right]\,,$ (83)
$M_{22}^{\mbox{\scriptsize{Born}}\,K^{+}}(s)=4\pi\alpha\sqrt{\frac{3}{2}}\left[\frac{(1-\rho^{2}_{K^{+}}(s))^{2}}{2\rho^{3}_{K^{+}}(s)}\ln\frac{1+\rho_{K^{+}}(s)}{1-\rho_{K^{+}}(s)}-\frac{1}{\rho^{2}_{K^{+}}(s)}+\frac{5}{3}\right]\,.$ (84)
The function $\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)$ results from
$\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s)$ by the substitution in Eqs. (55)
and (56) of $m_{K^{+}}$ for $m_{\pi^{+}}$ and of $\rho_{K^{+}}(s)$ for
$\rho_{\pi^{+}}(s)$, and thus for 0 $<$ $s$ $<$ $4m^{2}_{K^{+}}$
$\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)=8\alpha\left\\{\frac{m^{2}_{K^{+}}}{s}\left[\pi-2\arctan|\rho_{K^{+}}(s)|\right]^{2}-1\right\\}$
(85)
and for $s\geq 4m^{2}_{K^{+}}$
$\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)=8\alpha\left\\{\frac{m^{2}_{K^{+}}}{s}\left[\pi+i\ln\frac{1+\rho_{K^{+}}(s)}{1-\rho_{K^{+}}(s)}\right]^{2}-1\right\\}.$
(86)
Taking account of the form factor
$G_{K^{+}}(t,u)=\frac{1}{s}\left[\frac{m^{2}_{K^{+}}-t}{1-(u-m^{2}_{K^{+}})/x^{2}_{2}}+\frac{m^{2}_{K^{+}}-u}{1-(t-m^{2}_{K^{+}})/x^{2}_{2}}\right]$
(87)
(here $t$ = $m^{2}_{K^{+}}$ $-$ $s[1$ $-$ $\rho_{K^{+}}(s)\cos\theta]/2$ and
$u$ = $m^{2}_{K^{+}}$ $-$ $s[1$ $+$ $\rho_{K^{+}}(s)\cos\theta]/2$), the
partial amplitudes $M_{\lambda J}^{\mbox{\scriptsize{Born}}\,K^{+}}(s)$ are
replaced by $M_{\lambda J}^{\mbox{\scriptsize{Born}}\,K^{+}}(s;x_{2})$.
Substituting $\rho_{K^{+}}(s)$ for $\rho_{\pi^{+}}(s)$ and $\rho_{K^{+}}(s;x_{2})=\rho_{K^{+}}(s)/(1+2x^{2}_{2}/s)$ for $\rho_{\pi^{+}}(s;x_{1})$ in Eqs. (57)–(60), one gets
$M^{\mbox{\scriptsize{Born}}\,K^{+}}_{0J}(s)=\frac{1-\rho^{2}_{K^{+}}(s)}{\rho_{K^{+}}(s)}F^{\mbox{\scriptsize{Born}}\,K^{+}}_{0J}(\rho_{K^{+}}(s)),$
(88)
$M^{\mbox{\scriptsize{Born}}\,K^{+}}_{2J}(s)=\rho_{K^{+}}(s)F^{\mbox{\scriptsize{Born}}\,K^{+}}_{2J}(\rho_{K^{+}}(s)),$
(89)
$M^{\mbox{\scriptsize{Born}}\,K^{+}}_{0J}(s;x_{2})=\frac{1-\rho^{2}_{K^{+}}(s)}{\rho_{K^{+}}(s)}\left[F^{\mbox{\scriptsize{Born}}\,K^{+}}_{0J}(\rho_{K^{+}}(s))-F^{\mbox{\scriptsize{Born}}\,K^{+}}_{0J}(\rho_{K^{+}}(s;x_{2}))\right],$ (90)
$M^{\mbox{\scriptsize{Born}}\,K^{+}}_{2J}(s;x_{2})=\rho_{K^{+}}(s)\left[F^{\mbox{\scriptsize{Born}}\,K^{+}}_{2J}(\rho_{K^{+}}(s))-F^{\mbox{\scriptsize{Born}}\,K^{+}}_{2J}(\rho_{K^{+}}(s;x_{2}))\right].$ (91)
Correspondingly, when the form factor is taken into account, the function $\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)$ is replaced by
$\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})=\frac{s}{\pi}\int\limits^{\infty}_{4m^{2}_{K^{+}}}\frac{\rho_{K^{+}}(s^{\prime})M^{\mbox{\scriptsize{Born}}\,K^{+}}_{00}(s^{\prime};x_{2})}{s^{\prime}(s^{\prime}-s-i\varepsilon)}ds^{\prime}.$
(92)
Note that $0.68\times|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)|^{2}$ coincides with $|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})|^{2}$ to better than 3% accuracy in the range 0.8 GeV $<$ $\sqrt{s}$ $<$ 1.2 GeV for $x_{2}$ = 1.75 GeV.
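The note above suggests a cheap shortcut: in the 0.8–1.2 GeV window one may evaluate the closed forms (85)–(86) and rescale $|\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s)|^{2}$ by 0.68 instead of redoing the dispersion integral (92). A small Python sketch of this shortcut:

```python
import numpy as np

ALPHA, M_K = 1.0 / 137.036, 0.4937   # fine-structure constant, K+ mass in GeV

def I_tilde_kk(s):
    """Triangle amplitude gg -> K+K- -> (scalar), Eqs. (85)-(86)."""
    if s >= 4.0 * M_K**2:
        r = np.sqrt(1.0 - 4.0 * M_K**2 / s)
        bracket = np.pi + 1j * np.log((1.0 + r) / (1.0 - r))
    else:
        r = np.sqrt(4.0 * M_K**2 / s - 1.0)
        bracket = np.pi - 2.0 * np.arctan(r)
    return 8.0 * ALPHA * (M_K**2 / s * bracket**2 - 1.0)

def abs2_I_tilde_kk_ff(s):
    """|I~^{K+}_{K+K-}(s; x2)|^2 via the 0.68 rescaling quoted above
    (good to ~3% for 0.8 GeV < sqrt(s) < 1.2 GeV at x2 = 1.75 GeV)."""
    return 0.68 * abs(I_tilde_kk(s)) ** 2

print(abs2_I_tilde_kk_ff(4.0 * M_K**2))   # value at the K+K- threshold
```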
The $S$ wave amplitudes of the reactions $\gamma\gamma$ $\to$ $K^{+}K^{-}$ and
$\gamma\gamma$ $\to$ $K^{0}\bar{K}^{0}$, which we used for estimates in the
region of the $K\bar{K}$ thresholds, have the form
$M_{00}(\gamma\gamma\to K^{+}K^{-};s)=M^{\mbox{\scriptsize{Born}}\,K^{+}}_{00}(s;x_{2})+\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s;x_{1})\,T_{\pi^{+}\pi^{-}\to K^{+}K^{-}}(s)+\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})\,T_{K^{+}K^{-}\to K^{+}K^{-}}(s)+M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}};+}(s),$ (93)
$M_{00}(\gamma\gamma\to K^{0}\bar{K}^{0};s)=\widetilde{I}^{\pi^{+}}_{\pi^{+}\pi^{-}}(s;x_{1})\,T_{\pi^{+}\pi^{-}\to K^{0}\bar{K}^{0}}(s)+\widetilde{I}^{K^{+}}_{K^{+}K^{-}}(s;x_{2})\,T_{K^{+}K^{-}\to K^{0}\bar{K}^{0}}(s)+M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}};-}(s).$ (94)
The corresponding cross sections are
$\sigma_{00}(\gamma\gamma\to K^{+}K^{-})=\frac{\rho_{K^{+}}(s)}{32\pi
s}|M_{00}(\gamma\gamma\to K^{+}K^{-};s)|^{2},$ (95)
$\sigma_{00}(\gamma\gamma\to K^{0}_{S}K^{0}_{S})=\frac{\rho_{K^{0}}(s)}{64\pi
s}|M_{00}(\gamma\gamma\to K^{0}\bar{K}^{0};s)|^{2}.$ (96)
The amplitudes of the $\pi\pi$ $\to$ $K\bar{K}$ reactions,
$T_{\pi^{+}\pi^{-}\to K^{+}K^{-}}(s)$ = $T_{\pi^{+}\pi^{-}\to
K^{0}\bar{K}^{0}}(s)$ = $T_{K^{+}K^{-}\to\pi^{+}\pi^{-}}(s)$, are defined by
Eqs. (23) and (26). The $K^{+}K^{-}$ $\to$ $K^{+}K^{-}$ and $K^{+}K^{-}$ $\to$
$K^{0}\bar{K}^{0}$ reaction amplitudes are given by
$T_{K^{+}K^{-}\to K^{+}K^{-}}(s)=[t^{0}_{0}(s)+t^{1}_{0}(s)]/2,$ (97)
$T_{K^{+}K^{-}\to K^{0}\bar{K}^{0}}(s)=[t^{0}_{0}(s)-t^{1}_{0}(s)]/2,$ (98)
where $t^{I}_{0}(s)$ are the $S$ wave $K\bar{K}$ $\to$ $K\bar{K}$ reaction
amplitudes with isospin $I$ = 0 and 1;
$t^{0}_{0}(s)=\frac{\exp[2i\delta^{K\bar{K}}_{B}(s)]-1}{2i\rho_{K^{+}}(s)}+e^{2i\delta^{K\bar{K}}_{B}(s)}T^{K\bar{K}}_{\mbox{\scriptsize{res}};0}(s)\,,$
(99) $t^{1}_{0}(s)=g^{2}_{a_{0}K^{+}K^{-}}/[8\pi D_{a_{0}}(s)]\,,$ (100)
$T^{K\bar{K}}_{\mbox{\scriptsize{res}};0}(s)=\frac{g_{\sigma
K^{+}K^{-}}\overline{\Delta}_{f_{0}}(s)+g_{f_{0}K^{+}K^{-}}\overline{\Delta}_{\sigma}(s)}{8\pi[D_{\sigma}(s)D_{f_{0}}(s)-\Pi^{2}_{f_{0}\sigma}(s)]}\,,$
(101)
where $\overline{\Delta}_{f_{0}}(s)$ = $D_{f_{0}}(s)g_{\sigma K^{+}K^{-}}+\Pi_{f_{0}\sigma}(s)g_{f_{0}K^{+}K^{-}}$ and $\overline{\Delta}_{\sigma}(s)$ = $D_{\sigma}(s)g_{f_{0}K^{+}K^{-}}+\Pi_{f_{0}\sigma}(s)g_{\sigma K^{+}K^{-}}$. The amplitudes of the direct resonance transitions into photons
are given by
$M^{\mbox{\scriptsize{direct}}}_{\mbox{\scriptsize{res}};\pm}(s)=s\,e^{i\delta^{K\bar{K}}_{B}(s)}\,\frac{g^{(0)}_{\sigma\gamma\gamma}\overline{\Delta}_{f_{0}}(s)+g^{(0)}_{f_{0}\gamma\gamma}\overline{\Delta}_{\sigma}(s)}{D_{\sigma}(s)D_{f_{0}}(s)-\Pi^{2}_{f_{0}\sigma}(s)}\,\pm\,s\,\frac{g^{(0)}_{a_{0}\gamma\gamma}g_{a_{0}K^{+}K^{-}}}{D_{a_{0}}(s)}\,.$ (102)
When estimating the $S$ wave $\gamma\gamma$ $\to$ $K\bar{K}$ cross sections
(see Fig. 28(c)), the amplitudes of the direct resonance transitions into
photons were disregarded.
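To close the appendix, the bookkeeping of Eqs. (95)–(98) — the $S$ wave cross sections and the isospin decomposition responsible for the near-cancellation in the $K^{0}\bar{K}^{0}$ channel — is summarized in a short Python sketch; the amplitudes themselves are assumed to be supplied by Eqs. (93), (94) and (99)–(101).

```python
import numpy as np

M_KP, M_K0 = 0.4937, 0.4976   # charged and neutral kaon masses, GeV

def rho_k(s, m):
    return np.sqrt(max(1.0 - 4.0 * m**2 / s, 0.0))

def s_wave_cross_sections(s, m00_charged, m00_neutral):
    """Eqs. (95)-(96): S-wave cross sections in GeV^-2 from the amplitudes
    M_00(gg -> K+K-; s) and M_00(gg -> K0 K0bar; s) of Eqs. (93)-(94)."""
    sig_kpkm = rho_k(s, M_KP) / (32.0 * np.pi * s) * abs(m00_charged) ** 2
    sig_ksks = rho_k(s, M_K0) / (64.0 * np.pi * s) * abs(m00_neutral) ** 2
    return sig_kpkm, sig_ksks

def t_kk_final_state(t00, t10):
    """Eqs. (97)-(98): isospin combinations of the S-wave K Kbar amplitudes;
    the difference enters the K+K- -> K0 K0bar channel, where the I = 0 and
    I = 1 contributions largely compensate each other (see Section 7.1)."""
    return 0.5 * (t00 + t10), 0.5 * (t00 - t10)
```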
## References
* (1) Achasov N N, Ivanchenko V N Nucl. Phys. B 315 465 (1989)
* (2) Achasov N N, Gubin V V Phys. Rev. D 63 094007 (2001)
* (3) Achasov N N, Gubin V V Yad. Fiz. 65 1566 (2002) [Achasov N N, Gubin V V Phys. Atom. Nucl. 65 1528 (2002)]
* (4) Achasov N N Nucl. Phys. A 728 425 (2003)
* (5) Achasov N N Yad. Fiz. 67 1552 (2004) [Achasov N N Phys. Atom. Nucl. 67 1529 (2004)]
* (6) Achasov N N, Shestakov G N Phys. Rev. D 49 5779 (1994)
* (7) Achasov N N, Shestakov G N Yad. Fiz. 56, No. 9, 206 (1993) [Achasov N N, Shestakov G N Phys. Atom. Nucl 56 1270 (1993)]
* (8) Achasov N N, Shestakov G N Int. J. Mod. Phys. A 9 3669 (1994)
* (9) Achasov N N, Shestakov G N Phys. Rev. Lett. 99 072001 (2007)
* (10) Amsler C et al. (Particle Data Group) Phys. Lett. B 667 1 (2008)
* (11) Nakamura K et al. (Particle Data Group) J. Phys. G 37 075021 (2010)
* (12) Rosenfeld A H et al. (Particle Data Group) Rev. Mod. Phys. 37 633 (1965)
* (13) Rosenfeld A H et al. (Particle Data Group) Rev. Mod. Phys. 39 1 (1967)
* (14) Barash-Schmidt N et al. (Particle Data Group) Rev. Mod. Phys. 41 109 (1969)
* (15) Gell-Mann M, Levy M Nuovo Cim. 16 705 (1960)
* (16) Gell-Mann M Physics 1, 63 (1964)
* (17) Levy M Nuovo Cim. A 52 23 (1967)
* (18) Rittenberg A et al. (Particle Data Group) Rev. Mod. Phys. 43 S1 (1971)
* (19) Lasinski T A et al. (Particle Data Group) Rev. Mod. Phys. 45 S1 (1973)
* (20) Jaffe R L Phys. Rev. D 15 267, 281 (1977)
* (21) Sannino F, Schechter J Phys. Rev. D 52 96 (1995)
* (22) Törnqvist N A Z. Phys. C 68 647 (1995)
* (23) Ishida S et al. Prog. Theor. Phys. 95 745 (1996)
* (24) Harada M, Sannino F, Schechter J Phys. Rev. D 54 54 (1996)
* (25) Ishida S, in Proceedings of the 7th International Conference on Hadron Spectroscopy (Eds S-U Chung, H Willutzki) AIP Conf. Proc. Vol. 432, p. 705 (1998)
* (26) Black D et al. Phys. Rev. D 58 054012 (1998)
* (27) Black D et al. Phys. Rev. D 59 074026 (1999)
* (28) Ishida M, in Proceedings of the Possible Existence of sigma-Meson and Its Implication to Hadron Physics (Eds S Ishida et al.) (KEK Proceedings 2000-4); hep-ph/0012325
* (29) Barnett R M et al. (Particle Data Group) Phys. Rev. D 54 1 (1996)
* (30) Eidelman S et al. (Particle Data Group) Phys. Lett. B 592 1 (2004)
* (31) Spanier S, Törnqvist N A, Amsler C Phys. Lett. B 667 594 (2008)
* (32) Amsler C et al. (Note on scalar mesons) J. Phys. G 37 075021 (2010)
* (33) Montanet L Rept. Prog. Phys. 46 337 (1983)
* (34) Achasov N N, Devyanin S A, Shestakov G N Usp. Fiz. Nauk 142 361 (1984) [Achasov N N, Devyanin S A, Shestakov G N Sov. Phys. Usp. 27 161 (1984)]
* (35) Achasov N N, in Proceedings of the Rheinfels Workshop 1990 on Hadron Mass Spectrum (Eds E Klempt, K Peters) Nucl. Phys. B Proc. Suppl. 21 189 (1991)
* (36) Achasov N N Usp. Fiz. Nauk 168 1257 (1998) [Achasov N N Physics-Uspekhi 41 1149 (1998)]
* (37) Achasov N N, in Proceedings of the 8th International Conference On Hadron Spectroscopy (Eds W G Li, Y Z Huang, B S Zou) Nucl. Phys. A 675 279c (2000)
* (38) Achasov N N Yad. Fiz. 65 573 (2002) [Achasov N N Phys. Atom. Nucl. 65 546 (2002)]
* (39) Achasov N N, in Proceedings of the Ninth International Conference on Hadron Spectroscopy (Eds D Amelin, A M Zaitsev) AIP Conf. Proc. Vol. 619, p. 112 (2002)
* (40) Achasov N N, in Proceedings of the International Symposium on Hadron Spectroscopy, Chiral Symmetry and Relativistic Description of Bound Systems (Eds S Ishida et al.) (KEK Proceedings 2003-7) p. 151
* (41) Achasov N N, in Proceedings of the KEK Workshop on Hadron Spectroscopy and Chiral Particle Search in J/Psi Decay Data at BES (Eds K Takamatsu et al.) (KEK Proceedings 2003-10) p. 66
* (42) Achasov N N, in Proceedings of the 13th International Seminar QUARKS’2004 (Eds D G Levkov, V A Matveev, V A Rubakov) (INR RAS, Moscow, 2004) p. 110
* (43) Achasov N N, in Proceedings of the International Bogolyubov Conference “Problems of Theoretical and Mathematical Physics” (Eds V G Kadyshevsky, A N Sissakian) Phys. Part. Nucl. 36, Suppl. 2, 146 (2005)
* (44) Achasov N N, in Proceedings of the 14th International Seminar QUARKS’2006 (Eds S V Demidov et al.) (INR RAS, Moscow, 2007) Vol. 1, p. 29
* (45) Achasov N N, in Proceedings of the 15th International Seminar QUARKS’2008 (Eds V A Duk, V A Matveev, V A Rubakov) (INR RAS, Moscow, 2010) Vol. 1, p. 3; arXiv:0810.2601
* (46) Achasov N N, in Proceedings of the 14th High-Energy Physics International Conference on Quantum ChromoDynamics (Ed. S Narison) Nucl. Phys. B Proc. Suppl. 186 283 (2009)
* (47) Achasov N N, in Proceedings of the International Bogolyubov Conference “Problems of Theoretical and Mathematical Physics” devoted to the 100th anniversary of N.N.Bogolyubov’s birth (RAS, Moscow, JINR, Dubna, August 2009) Phys. Part. Nucl. 41 891 (2010); arXiv:1001.3468
* (48) Achasov N N, Shestakov G N Usp. Fiz. Nauk 161 53 (1991) [Achasov N N, Shestakov G N Sov. Phys. Usp. 34 471 (1991)]
* (49) Achasov N N, Shestakov G N, in Proceedings of the International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Eds G V Fedotovich, S I Redin) (BINP, Novosibirsk, 2000) p. 294
* (50) Delbourgo R, Scadron M D Int. J. Mod. Phys. A 13 657 (1998)
* (51) Godfrey S, Napolitano J Rev. Mod. Phys. 71 1411 (1999)
* (52) Tuan S F, in Proceedings of the Ninth International Conference on Hadron Spectroscopy (Eds D Amelin, A M Zaitsev) AIP Conf. Proc. Vol. 619, p. 495 (2002)
* (53) Tuan S F, in Proceedings of the International Symposium on Hadron Spectroscopy, Chiral Symmetry and Relativistic Description of Bound Systems (Eds S. Ishida et al.) (KEK Proceedings 2003-7) p. 319
* (54) Close F E, Törnqvist N A J. Phys. G 28 R249 (2002)
* (55) Alford M, Jaffe R L, in Proceedings of the High-Energy Physics Workshop on Scalar Mesons: An Interesting Puzzle for QCD (Ed. A H Fariborz) AIP Conf. Proc. Vol. 688, p. 208 (2003)
* (56) Jaffe R L, Wilczek F Phys. Rev. Lett. 91 232003 (2003)
* (57) Amsler C, Törnqvist N A Phys. Rep. 389 61 (2004)
* (58) Maiani L et al. Phys. Rev. Lett. 93 212002 (2004)
* (59) Jaffe R L Phys. Rep. 409 1 (2005)
* (60) Jaffe R L, in Proceedings of the YKIS Seminar on New Frontiers in QCD: Exotic Hadrons and Hadronic Matter (Tokyo, Japan) Progr. Theor. Phys. Suppl. 168 127 (2007)
* (61) Kalashnikova Yu S et al. Eur. Phys. J. A 24 437 (2005)
* (62) Caprini I, Colangelo G, Leutwyler H Phys. Rev. Lett. 96 132001 (2006)
* (63) Bugg D V Eur. Phys. J. C 47 57 (2006)
* (64) Achasov N N, Kiselev A V, Shestakov G N, in Proceedings of the International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Eds A Bondar, S Eidelman) Nucl. Phys. B Proc. Suppl. 162, 127 (2006)
* (65) Achasov N N, Kiselev A V, Shestakov G N, in Proceedings of the International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Eds C Bibi, G Venanzoni) Nucl. Phys. B Proc. Suppl. 181-182, 169 (2008)
* (66) Achasov N N, Shestakov G N, in Proceedings of the 6th International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Beijing, China, 2009) Chinese Physics C 34 807 (2010)
* (67) Achasov N N, Shestakov G N, Invited talk at the 16th International Seminar on High Energy Physics QUARKS-2010 (Kolomna, Russia, June 2010)
* (68) Fariborz A H, Jora R, Schechter J Phys. Rev. D 76 014011 (2006)
* (69) Fariborz A H, Jora R, Schechter J Phys. Rev. D 77 094004 (2008)
* (70) Fariborz A H, Jora R, Schechter J Nucl. Phys. B Proc. Suppl. 186 298 (2008)
* (71) Fariborz A H, Jora R, Schechter J Phys. Rev. D 79 074014 (2009)
* (72) Narison S Phys. Rev. D 73 114024 (2006)
* (73) Narison S, Talk given at the 14th High-Energy Physics International Conference in Quantum Chromodynamics (Montpellier, France, 2008); arXiv:0811.0563
* (74) Törnqvist N A Acta. Phys. Polon. B 38 2831 (2007)
* (75) Klempt E, Zaitsev A Phys. Rep. 454 1 (2007)
* (76) Maiani L, Polosa A D, Riquer V Phys. Lett. B 651 129 (2007)
* (77) Pennington M R, in Proceedings of the YKIS Seminar on New Frontiers in QCD: Exotic Hadrons and Hadronic Matter (Tokyo, Japan) Progr. Theor. Phys. Suppl. 168 143 (2007)
* (78) Pennington M R, in Proceedings of the 11th International Conference on Meson-Nucleon Physics and the Structure of the Nucleon (Julich, Germany, 2007) p. 106
* (79) van Beveren E, Rupp G, in Proceedings of 11th International Conference on Meson-Nucleon Physics and the Structure of the Nucleon (Eds H Machner, S Krewald)(2007) p. 130
* (80) Bystritsky Yu M et al. Phys. Rev. D 77 054008 (2007)
* (81) Leutwyler H, in Proceedings of the Workshop on Scalar Mesons and Related Topics Honoring 70th Birthday of Michael Scadron AIP Conf. Proc. Vol. 1030, p. 46 (2008)
* (82) ’t Hooft G et al. Phys. Lett. B 662 424 (2008)
* (83) Ivashin S, Korchin A Eur. Phys. J. C 54 89 (2008)
* (84) Ebert D, Faustov R N, Galkin V O Eur. Phys. J. C 60 273 (2009)
* (85) Achasov N N, Devyanin S A, Shestakov G N Phys. Lett. 108B 134 (1982)
* (86) Achasov N N, Devyanin S A, Shestakov G N Z. Phys. C 16 55 (1982)
* (87) Budnev V M et al. Phys. Rep. 15 181 (1975)
* (88) Kolanoski H “Two-photon physics at $e^{+}e^{-}$ storage rings”, Springer Tracts in Modern Physics, Vol. 105 (1984)
* (89) Mori T et al. (Belle) in Proceedings of the International Symposium on Hadron Spectroscopy, Chiral Symmetry and Relativistic Description of Bound Systems (Eds S Ishida et al.) (KEK Proceedings 2003-7) p. 159
* (90) Mori T et al. (Belle) Phys. Rev. D 75 051101(R) (2007)
* (91) Mori T et al. (Belle) J. Phys. Soc. Jpn. 76 074102 (2007)
* (92) Uehara S et al. (Belle) Phys. Rev. D 78 052004 (2008)
* (93) Uehara S et al. (Belle) Phys. Rev. D 80 032001 (2009)
* (94) Marsiske H et al. (Crystal Ball) Phys. Rev. D 41 3324 (1990)
* (95) Boyer J et al. (MARK II) Phys. Rev. D 42 1350 (1990)
* (96) Oest T et al. (JADE) Z. Phys. C 47 343 (1990)
* (97) Behrend H J et al. (CELLO) Z. Phys. C 56 381 (1992)
* (98) Bienlein J K, in Proceedings of the 9th International Workshop on Photon-Photon Collisions (Eds D Caldwell, H P Paar) (Singapore: World Scientific, 1992) p. 241
* (99) Barate R et al. (ALEPH) Phys. Lett B 472 189 (2000)
* (100) Braccini S, in Proceedings of the Meson 2000 Workshop (Eds L Jarczyk et al.) Acta Phys. Polon. B 31 2143 (2000)
* (101) Achasov N N, Shestakov G N Phys. Rev. D 72 013006 (2005)
* (102) Achasov N N, Shestakov G N Phys. Rev. D 77 074020 (2008)
* (103) Achasov N N, Shestakov G N Pis’ma Zh. Eksp. Teor. Fiz. 88 345 (2008) [Achasov N N, Shestakov G N JETP Lett. 88 295 (2008)]
* (104) Achasov N N, Shestakov G N Pis’ma Zh. Eksp. Teor. Fiz. 90 355 (2009) [Achasov N N, Shestakov G N JETP Lett. 90 313 (2009)]
* (105) Achasov N N, Shestakov G N Phys. Rev. D 81 094029 (2010)
* (106) Achasov N N, Shestakov G N Pis’ma Zh. Eksp. Teor. Fiz. 92 3 (2010) [Achasov N N, Shestakov G N JETP Lett. 92 1 (2010)]
* (107) Achasov N N, Devyanin S A, Shestakov G N Z. Phys. C 27, 99 (1985)
* (108) Achasov N N, Shestakov G N Z. Phys. C 41 309 (1988)
* (109) Achasov N N, Shestakov G N Yad. Fiz. 55 2999 (1992) [Achasov N N, Shestakov G N Sov. J. Nucl. Phys. 55, 1677 (1992)]
* (110) Achasov N N, Shestakov G N Mod. Phys. Lett. A 9 1351 (1994)
* (111) Flatte S M et al. Phys. Lett. B 38 232 (1972)
* (112) Protopopescu S D et al. Phys. Rev. D 7 1279 (1973)
* (113) Hyams B et al. Nucl. Phys. B 64 134 (1973)
* (114) Grayer G et al. Nucl. Phys. B 75 189 (1974)
* (115) Hyams B et al. Nucl. Phys. B 100 205 (1975)
* (116) Gay J et al. Phys. Lett. B 63 220 (1976)
* (117) Morgan D Phys. Lett. B 51 71 (1974)
* (118) Flatte S M Phys. Lett. B 63 224, 228 (1976)
* (119) Martin A D, Ozmutlu E N, Squires E J Nucl. Phys. B 121 514 (1977)
* (120) Petersen J L “The $\pi\pi$ Interaction”, Yellow CERN Preprint 77-04 (Geneva: CERN, 1977)
* (121) Estabrooks P Phys. Rev. D 19 2678 (1979)
* (122) Achasov N N, Devyanin S A, Shestakov G N Phys. Lett. B 88 367 (1979)
* (123) Achasov N N, Devyanin S A, Shestakov G N Yad. Fiz. 33 1337 (1981) [Achasov N N, Devyanin S A, Shestakov G N Sov. J. Nucl. Phys. 33 715 1981]
* (124) Achasov N N, Shestakov G N Phys. Rev. Lett. 92 182001 (2004)
* (125) Achasov N N, Shestakov G N Phys. Rev. D 70 074015 (2004)
* (126) Wu J-J, Zhao Q, Zou B S Phys. Rev. D 75 114012 (2007)
* (127) Wu J-J, Zou B S Phys. Rev. D 78 074017 (2008)
* (128) Dorofeev V et al., in Proceedings of the 12th International Conference on Hadron Spectroscopy (Frascati Physics Series) Vol. XLVI (2007); arXiv:0712.2512
* (129) Dorofeev V et al. Eur. Phys. J. A 38 149 (2008)
* (130) Nikolaenko V et al. Int. J. Mod. Phys. A 24 295 (2009)
* (131) Harris F, arXiv:1008.3569
* (132) Achasov N N, Devyanin S A, Shestakov G N Yad. Fiz. 32 1098 (1980) [Achasov N N, Devyanin S A, Shestakov G N Sov. J. Nucl. Phys. 32 566 (1980)]
* (133) Achasov N N, Devyanin S A, Shestakov G N Phys. Lett. B 96 168 (1980)
* (134) Achasov N N, Devyanin S A, Shestakov G N Phys. Lett. B 102 196 (1981)
* (135) Achasov N N, Devyanin S A, Shestakov G N “On four-quark nature of scalar $S^{*}(980)$ and $\delta(980)$ resonances” Preprint TP-121 (Novosibirsk: Institute for Mathematics, 1981)
* (136) Achasov N N, Devyanin S A, Shestakov G N Z. Phys. C 22 53 (1984)
* (137) Achasov M N et al. (SND) Phys. Lett. B 438 441 (1998)
* (138) Achasov M N et al. (SND) Phys. Lett. B 440 442 (1998)
* (139) Achasov M N et al. (SND) Phys. Lett. B 479 53 (2000)
* (140) Achasov M N et al. (SND) Phys. Lett. B 485 349 (2000)
* (141) Akhmetshin R R et al. (CMD-2) Phys. Lett. B 462 371 (1999)
* (142) Akhmetshin R R et al. (CMD-2) Phys. Lett. B 462 380 (1999)
* (143) Aloisio A et al. (KLOE) Phys. Lett. B 536 209 (2002)
* (144) Aloisio A et al. (KLOE) Phys. Lett. B 537 21 (2002)
* (145) Dubrovin M, in High-Energy Physics Workshop on Scalar Mesons: An Interesting Puzzle for QCD (Ed A H Fariborz) AIP Conf. Proc. Vol. 688, p. 231 (2004)
* (146) Ambrosino F et al. (KLOE) Phys. Lett. B 634 148 (2006)
* (147) Ambrosino F et al. (KLOE) Eur. Phys. J. C 49 473 (2007)
* (148) Ambrosino F et al. (KLOE), contributed to 23rd International Symposium on Lepton-Photon Interactions at High Energy (Daegu, Korea, 2007); arXiv: 0707.4609
* (149) Bonvicini G et al. (CLEO) Phys. Rev. D 76 012001 (2007)
* (150) Cavoto G, in Proceedings of the 5th Flavor Physics and CP Violation Conference (Bled, Slovenia, 2007) p. 22; arXiv:0707.1242
* (151) Bossi F et al. Riv. Nuovo Cim. 031 531 (2008); arXiv:0811.1929
* (152) Achasov N N, Kiselev A V Phys. Rev. D 70 111901(R) (2004)
* (153) Bramon A, Grau A, Pancheri G Phys. Lett. B 289 97 (1992)
* (154) Close F E, Isgur N, Kumano S Nucl. Phys. B 389 513 (1993)
* (155) Lucio J L, Napsuciale M Phys. Lett. B 331 418 (1994)
* (156) Achasov N N, in The Second DA$\Phi$NE Physics Handbook(Eds L Maiani, G Pancheri, N Paver) Vol. II, p. 671 (1995)
* (157) Achasov N N, Gubin V V, Solodov E P Phys. Rev. D 55 2672 (1997)
* (158) Achasov N N, Gubin V V, Solodov E P Yad. Fiz. 60 1279 (1997) [Achasov N N, Gubin V V, Solodov E P Phys. Atom. Nucl. 60 1152 (1997)]
* (159) Achasov N N, Gubin V V, Shevchenko V I Phys. Rev. D 56 203 (1997)
* (160) Achasov N N, Gubin V V, Shevchenko V I Int. J. Mod. Phys. A 12 5019 (1997)
* (161) Achasov N N, Gubin V V, Shevchenko V I Yad. Fiz. 60 89 (1997) [Achasov N N, Gubin V V, Shevchenko V I Phys. Atom. Nucl. 60 89 (1997)]
* (162) Achasov N N, Gubin V V Phys. Rev. D 56 4084 (1997)
* (163) Achasov N N, Gubin V V Yad. Fiz. 61 274 (1998) [Achasov N N, Gubin V V Phys. Atom. Nucl. 61 224 (1998)]
* (164) Achasov N N, Gubin V V Phys. Rev. D 57 1987 (1998)
* (165) Achasov N N, Gubin V V Yad. Fiz. 61 1473 (1998) [Achasov N N, Gubin V V Phys. Atom. Nucl. 61 1367 (1998)]
* (166) Ambrosino F et al. (KLOE) Phys. Lett. B 679 10 (2009)
* (167) Ambrosino F et al. (KLOE) Phys. Lett. B 681 5 (2009)
* (168) Bini, C. (KLOE), 2008, Talk given at 34th International Conference on High Energy Physics (Philadelphia, Pennsylvania); arXiv:0809.5004
* (169) Achasov N N, Kiselev A V Phys. Rev. D 68 014006 (2003)
* (170) Achasov N N, Kiselev A V Yad. Fiz. 67 653 (2004) [Achasov N N, Kiselev A V Phys. Atom. Nucl. 67 633 (2004)]
* (171) Achasov N N, Kiselev A V Phys. Rev. D 73 054029 (2006)
* (172) Achasov N N, Kiselev A V Yad. Fiz. 70 2005 (2007) [Achasov N N, Kiselev A V Phys. At. Nucl. 70 1956 (2007)]
* (173) Di Micco B, in Proceedings of the International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Eds C Bibi, G Venanzoni) Nucl. Phys. B Proc. Suppl. 181-182 215 (2008)
* (174) Shekhovtsova O, Venanzoni G, Pancheri G Comput. Phys. Commun. 180 1206 (2009).
* (175) Amelino-Camelia et al. Eur. Phys. J. C 68 619 (2010); arXiv:1003.3868.
* (176) Field J H, in Proceedings of the 4th International Colloquium on Photon-Photon Interactions (Singapore: World Scientific, 1981) p. 447
* (177) Hilger E, in Proceedings of the 4th International Colloquium on Photon-Photon Interactions (Singapore: World Scientific, 1981) p. 149
* (178) Wedemeyer R J, in Proceedings of the 10th International Symposium on Lepton and Photon Interactions at High Energies (Ed W Pfeil) (Bonn Univ.: Phys. Inst., 1981) p. 410
* (179) Edwards C et al. (Crystal Ball) Phys. Lett. B 110 82 (1982)
* (180) Olsson J E, in Proceedings of the 5th International Workshop on Photon-Photon Interactions (Ed C Berger) (Berlin: Springer-Verlag, 1983) Lecture Notes in Physics, Vol. 191, p. 45
* (181) Mennessier G Z. Phys. C 16 241 (1983)
* (182) Kolanoski H, 1985, in Proceedings of the 12th International Symposium on Lepton and Photon Interactions at High Energies (Eds M Konuma, K Takahashi) (Kyoto Univ.: Research Inst. Fund. Phys., 1986) p. 90
* (183) Kolanoski H, Zerwas P, in High Energy Electron-Positron Physics (Eds A Ali, P Söding) (Singapore: World Scientific, 1988) p. 695
* (184) Kolanoski H, in Proceedings of the 9th European Symposium on Antiproton-Proton Interactions and Fundamental Symmetries (Eds K Kleinknecht, E Klempt) Nucl. Phys. B Proc. Suppl. 8 41 (1989)
* (185) Cordier A, Proceedings of the 6th International Workshop on Photon-Photon Collisions (Ed R L Lander) (Singapore: World Scientific, 1985) p. 122
* (186) Erne F C, Proceedings of the 6th International Workshop on Photon-Photon Collisions (Ed R L Lander) (Singapore: World Scientific, 1985) p. 151
* (187) Barnes T Phys. Lett. B 165 434 (1985)
* (188) Kaloshin A E, Serebryakov V V Z. Phys. C 32 279 (1986)
* (189) Antreasyan D et al. Phys. Rev. D 33 1847 (1986)
* (190) Johnson R P “Measurements of charged two-particle exclusive states in photon-photon interactions”, Ph.D. thesis, (Stanford University, SLAC-Report-294, 1986)
* (191) Poppe M Intern. J. Mod. Phys. A 1 545 (1986)
* (192) Berger Ch, Wagner W Phys. Rep. C 146 1 (1987)
* (193) Morgan D, Pennington M R Phys. Lett. B 192 207 (1987)
* (194) Morgan D, Pennington M R Z. Phys. C 37 431 (1988)
* (195) Chanowitz M S, in Proceedings of the 8th International Workshop on Photon-Photon Collisions (Ed U Karshon) (Singapore: World Scientific, 1988) p. 205
* (196) Hikasa K et al. (Particle Data Group) Phys. Rev. D 45 S1 (1992)
* (197) Morgan D, Pennington M R, Whalley M R “A compilation of the data on two-photon reactions leading to hadron final states”, J. Phys. G 20, Suppl. 8A, A1 (1994)
* (198) Barnes T, in Proceedings of the 9th International Workshop on Photon-Photon Collisions (Eds D O Caldwell, H P Paar) (Singapore: World Scientific, 1992) p. 263, p. 275
* (199) Kolanoski H, in Proceedings of the 4th International Conference on Hadron Spectroscopy (Eds S Oneda, D C Peaslee) (Singapore: World Scientific, 1992) p. 377
* (200) Karch K-H, in Proceedings of the 26th Rencontre de Moriond: High Energy Hadronic Interactions (Ed J Tran Thanh Van) (France: Gif-sur-Yvette, Editions Frontieres, 1991), p. 423
* (201) Cahn R N, in Proceedings of the 14th International Symposium on Lepton and Photon at High Energies (Ed M Riordan) (Singapore: World Scientific, 1990) p. 60
* (202) Feindt M, Harjes J, in Proceedings of the Rheinfeld 1990 Workshop on the Hadron Mass Specrum (Eds E Klempt, K Peters) Nucl. Phys. B Proc. Suppl. 21 61 (1991)
* (203) Berger S B, Feld B T Phys. Rev. D 8 3875 (1973)
* (204) Barbieri R, Gatto R, Kögerler R Phys. Lett. 60B 183 (1976)
* (205) Jackson J D, in Proceedings of the SLAC Summer Institute on Particle Physics: Weak Interactions at High Energy and the Production of New Particles (Ed M C Zipf) SLAC Rep. No. 198, p. 147 (1976)
* (206) Budnev V M, Kaloshin A E Phys. Lett. B 86 351 (1979)
* (207) Bergström L, Hulth G, Snellman H Z. Phys. C 16 263 (1983)
* (208) Morgan D, Pennington M R Z. Phys. C 48 623 (1990)
* (209) Li Z P, Close F E, Barnes T Phys. Rev. D 43 2161 (1991)
* (210) Münz C R Nucl. Phys. A 609 364 (1996)
* (211) Pennington M R Mod. Phys. Lett. A 22 1439 (2007)
* (212) Weinstein J, Isgur N Phys. Rev. Lett. 48 652 (1982)
* (213) Weinstein J, Isgur N Phys. Rev. D 41 2236 (1990)
* (214) Achasov N N, Kiselev A V Phys. Rev. D 76 077501 (2007)
* (215) Achasov N N, Kiselev A V Phys. Rev. D 78 058502 (2008)
* (216) Dzierba A R, in Proceedings of the Second Workshop on Physics and Detectors for DA$\Phi$NE’95 (Eds R Baldini et al.) (Frascati Physics Series, 1996) Vol. 4, p. 99
* (217) Alde D et al. Yad. Fiz. 62 462 (1999) [Alde D et al. Phys. At. Nucl. 62 421 (1999)]
* (218) Alde D et al. Z. Phys. C 66 375 (1995)
* (219) Alde D et al. Eur. Phys. J. A 3 361 (1998)
* (220) Gunter J et al. Phys. Rev. D 64 072003 (2001)
* (221) Branz T, Gutsche T, Lyubovitskij V E Eur. Phys. J. A 37 303 (2007)
* (222) Boglione M, Pennington M R Eur. Phys. J. C 9 11 (1999)
* (223) Pennington M R, in Proceedings of the International Conference on the Structure and Interactions of the Photon (Ed S Soldner-Rembold) Nucl. Phys. B Proc. Suppl. 82 291 (2000)
* (224) Groom D E et al. (Particle Data Group) Eur. Phys. J. C 15 1 (2000)
* (225) Yao W M et al. (Particle Data Group) J. Phys. G 33 1 (2006)
* (226) Abe K et al. (Belle) arXiv:0711.1926
* (227) Nakazawa N, in Proceedings of the International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Ed C Bibi, G. Venanzoni) Nucl. Phys. B Proc. Suppl. 181-182, 233 (2008).
* (228) Adachi I et al. (Belle) arXiv:0810.0334
* (229) Low F E Phys. Rev. 96 1428 (1954)
* (230) Gell-Mann M, Goldberger M L Phys. Rev. 96 1433 (1954)
* (231) Abarbanel H D I, Goldberger M L Phys. Rev. 165 1594 (1968)
* (232) Lyth D H J. Phys. G 10 39 (1984)
* (233) Lyth D H J. Phys. G 11 459 (1985)
* (234) Bijnens J, Cornet F Nucl. Phys. B 296 557 (1988)
* (235) Donoghue J F, Holstein B R, Lin Y C Phys. Rev. D 37 2423 (1988)
* (236) Donoghue J F, Holstein B R Phys. Rev. D 48 137 (1993)
* (237) Morgan D, Pennington M R Phys. Lett. B 272 134 (1991)
* (238) Oller J A, Oset E., in Proceedings of the 7th International Conference on Hadron Spectroscopy (Eds S-U Chung, H Willutzki) AIP Conf. Proc. Vol. 432, p. 413 (1998)
* (239) Pennington M R Phys. Rev. Lett. 97 011601 (2006)
* (240) Krammer M, Krasemann H Phys. Lett. B 73 58 (1978)
* (241) Krammer M Phys. Lett. B 74 361 (1978)
* (242) Krasemann H, Vermaseren J A M Nucl Phys. B 184 269 (1981)
* (243) Gersten A Nucl. Phys. B 12 537 (1969)
* (244) Barrelet E Nuovo Cim. A 8 331 (1972)
* (245) Sadovsky S A Yad. Fiz. 62 562 (1999) [Sadovsky S A Phys. Atom. Nucl. 62 519 (1999)]
* (246) Pennington M R, in Proceedings of the International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Eds C Bibi G Venanzoni) Nucl. Phys. B Proc. Suppl. 181-182 251 (2008)
* (247) Pennington M R et al. Eur. Phys. J. C 56 1 (2008)
* (248) Mennessier G et al. arXiv:0707.4511
* (249) Mennessier G, Narison S, Ochs W Phys. Lett. B 665 205 (2008)
* (250) Mennessier G, Narison S, Ochs W, in Proceedings of the International Workshop on $e^{+}e^{-}$ Collisions from $\phi$ to $\psi$ (Eds C Bibi, Venanzoni G) Nucl. Phys. B Proc. Suppl. 181-182 238 (2008)
* (251) Mennessier G, Talk given at the 14th High-Energy Physics International Conference in Quantum Chromodynamics (Montpellier, France, 2008), arXiv: 0811.1589
* (252) Oller J A, Roca L, Schat C Phys. Lett. B 659 201 (2008)
* (253) Oller J A, Roca L Eur. Phys. J. A 37 15 (2008)
* (254) van Beveren E et al. Phys. Rev. D 79 098501 (2009)
* (255) Kalinovsky Yu L, Volkov M K, arXiv:0809.1795
* (256) Mao Y at al. Phys. Rev. D 79 116008 (2009)
* (257) Mennessier G, Narison S, Wang X-G, arXiv:1009.2773
* (258) Garsia-Martin R, Moussallam B Eur. Phys. J 70 155 (2010)
* (259) Adler S L Phys. Rev. 177 2426 (1969)
* (260) Bell J S, Jackiw L Nuovo Cim. A 60 47 (1969)
* (261) Bardeen W A, Fritzsch H, Gell-Mann M, in Proceedings of the Meeting on Scale and Conformal Symmetry in Hadron Physics (Ed R Gatto) (Wiley, 1973); hep-ph/0211388
* (262) Leutwyler H Nucl. Phys. B Proc. Suppl. 64 223 (1998)
* (263) Ioffe B L, Oganesian A G Phys. Lett. B 647 389 (2007)
* (264) Bernstein A M, arXiv:0707.4250
* (265) Feldmann T Int. J. Mod. Phys. A 15 159 (2000)
* (266) Babcock J, Rosner J L Phys. Rev. D 14 1286 (1976)
* (267) Rosner J L Phys. Rev. D 23 1127 (1981)
* (268) Berger Ch, in Proceedings of the International Workshop on gamma gamma Collisions (Eds G Cochard, P Kessler) (Berlin: Springer Verlag, 1980) Lecture Notes in Physics, Vol. 134, p. 82
* (269) Albrecht H et al. (ARGUS) Z. Phys. C 48 183 (1990)
* (270) Li D-M, Yu H, Shen Q-X J. Phys. G 27 807 (2001)
* (271) Durusoy N B et al. Phys. Lett. B 45 517 (1973)
* (272) Hoogland W et al. Nucl. Phys. B 126 109 (1977)
* (273) Watson K M Phys. Rev. 88 1163 (1952)
* (274) Achasov N N, Shestakov G N Phys. Rev. D 67 114018 (2003)
* (275) Achasov N N, Shestakov G N Phys. Rev. D 58 054011 (1998)
* (276) Osborn H Nucl. Phys. B 15 501 (1970)
* (277) Petersen J L Phys. Rep. 2 155 (1971)
* (278) Bernard V, Kaiser N, Meissner U-G Phys. Rev. D 44 3698 (1991)
* (279) Black D, Fariborz A H, Schechter J Phys.Rev. D 61 074030 (2000)
* (280) Achasov N N, Shestakov G N Phys. Rev. D 53 3559 (1996)
* (281) Ochs W, in Proceedings of the XIII International Conference on Hadron Spectroscopy (Tallahassee, Florida, 2009) AIP Conf. Proc. Vol. 1257, p. 252 (2010)
* (282) Kalashnikova Yu et al. Phys. Rev. C 73 45203 (2006)
* (283) Hanhart C et al. Phys. Rev. D 75 074015 (2007)
* (284) Czerwinski E, arXiv:1009.0113
* (285) Uehara S et al., arXiv:1007.3779
* (286) Althoff M et al. (TASSO) Z. Phys. C 29 189 (1985)
* (287) Althoff M et al. (TASSO) Phys. Lett. B 121 216 (1983)
* (288) Berger Ch et al. (PLUTO) Z. Phys. C 37 329 (1988)
* (289) Behrend H J et al. (CELLO) Z. Phys. C 43 91 (1989)
* (290) Feindt M, Harjes J, in Proceedings of the Rheinfeld 1990 Workshop on the Hadron Mass Specrum (Eds E Klempt, K Peters) Nucl. Phys. B Proc. Suppl. 21 61 (1991)
* (291) Acciarri M et al. (L3) Phys. Lett. B 501 173 (2001)
* (292) Abe K et al. (Belle) Eur. Phys. J. C 32 323 (2004)
* (293) Faiman D, Lipkin H J, Rubinstein H R Phys. Lett. B 59 269 (1975)
* (294) Achard P et al. Phys. Lett. B 568 11 (2003)
* (295) Achard P et al. Phys. Lett. B 597 26 (2004)
* (296) Achard P et al. Phys. Lett. B 604 48 (2004)
* (297) Achard P et al. Phys. Lett. B 615 19 (2005)
# Determination of the ${}^{3}{\rm{He}}+\alpha\to\rm{{}^{7}Be}$ asymptotic
normalization coefficients (nuclear vertex constants) and their application
for extrapolation of the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$
astrophysical $S$ factors to the solar energy region
S.B. Igamov, Q.I. Tursunmahatov and R. Yarmukhamedov (corresponding author, E-mail: rakhim@inp.uz)
###### Abstract
A new analysis of the modern precisely measured astrophysical $S$ factors for
the direct capture ${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction [B.S. Nara
Singh et al., Phys. Rev. Lett. 93, 262503 (2004); D. Bemmerer et al.,
Phys. Rev. Lett. 97, 122502 (2006); F. Confortola et al., Phys. Rev. C 75, 065803
(2007); T.A.D. Brown et al., Phys. Rev. C 76, 055801 (2007); and A. Di Leva et
al., Phys. Rev. Lett. 102, 232502 (2009)] populating the ground and first
excited states of ${}^{7}{\rm Be}$ is carried out within the modified two-body
potential approach. New estimates are obtained for the "indirectly
determined" values of the asymptotic normalization constants (nuclear
vertex constants) for ${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$(g.s.) and
${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$(0.429 MeV), as well as for the
astrophysical $S$ factors $S_{34}(E)$ at $E\leq$ 90 keV, including $E$=0. The
values of the asymptotic normalization constants are then used to obtain
information about the $\alpha$-particle spectroscopic factors for the mirror
(${\rm{{}^{7}Li}}\,{\rm{{}^{7}Be}}$)-pair.
Institute of Nuclear Physics, Uzbekistan Academy of Sciences, 100214 Tashkent,
Uzbekistan
PACS: 25.55.-e;26.35.+c;26.65.+t
## 1 Introduction
The ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction is one of the
critical links in the ${}^{7}{\rm{Be}}$ and ${}^{8}{\rm B}$ branches of the
$pp$-chain of solar hydrogen burning [1–3]. The total capture rate determined
by the processes of this chain is sensitive to the cross section $\sigma_{34}(E)$
(or the astrophysical $S$ factor $S_{34}(E)$) for the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction, and the predicted neutrino
rate varies as $[S_{34}(0)]^{0.8}$ [2, 3].
Despite the impressive improvements in our understanding of the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction made over the past
decades (see Refs. [4–11] and references therein), some ambiguities still
exist, connected both with the extrapolation of the measured cross sections to
the solar energy region and with the theoretical predictions for
$\sigma_{34}(E)$ (or $S_{34}(E)$); these ambiguities may influence the
predictions of the standard solar model [2, 3].
Experimentally, there are two types of data for the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction at extremely low
energies: i) six measurements based on detection of the capture $\gamma$-rays
(see [4] and references therein), from which the astrophysical $S$ factor
$S_{34}(0)$ extracted by the authors of those works lies within the range
0.47$\leq S_{34}(0)\leq$0.58 ${\rm{keV\,b}}$, and ii) six measurements based on
detection of the ${}^{7}{\rm{Be}}$ activity (see [4] and references therein as
well as [6–11]), from which the extracted $S_{34}(0)$ lies within the range
0.53$\leq S_{34}(0)\leq$0.63 ${\rm{keV\,b}}$. All of these measured data have a
similar energy dependence of the astrophysical $S$ factor $S_{34}(E)$.
Nevertheless, when the energy dependencies predicted in [12, 13] are used to
extrapolate each of the measured data sets to the experimentally inaccessible
low-energy region, including $E$=0, the resulting values of $S_{34}(0)$ differ
from one another by more than the experimental uncertainty.
The theoretical calculations of $S_{34}(0)$ performed with different methods
also show considerable spread [12, 14–19], and the result depends on the
specific model used. For example, the resonating-group calculations of
$S_{34}(0)$ performed in Ref. [12] show considerable sensitivity to the form of
the effective $NN$ interaction, with estimates ranging over
0.312$\leq S_{34}(0)\leq$0.841 ${\rm{keV\,b}}$.
The estimate $S_{34}(0)$=0.52$\pm$0.03 ${\rm{keV\,b}}$ [20] should also be
noted. It was obtained from an analysis of the experimental astrophysical $S$
factors [21] performed within the framework of the standard two-body potential
model under the assumption that the dominant contribution to this peripheral
reaction comes from the surface and external regions of the nucleus
${}^{7}{\rm{Be}}$ [22]. In that analysis [20], the contribution from the
nuclear interior ($r<r_{cut}$, $r_{cut}$=4 fm) to the amplitude is ignored. In
this case, the astrophysical $S$ factor is directly expressed in terms of the
nuclear vertex constants (NVC) for the virtual decays
${\rm{{}^{7}Be}}\to\alpha+^{3}{\rm{He}}$ (or, equivalently, the asymptotic
normalization coefficients (ANC) for
${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$) [24, 25]. As a result, in Ref.
[20] the NVC values for the virtual decays
${\rm{{}^{7}Be}}{\rm{(g.s.)}}\to\alpha+{\rm{{}^{3}He}}$ and
${\rm{{}^{7}Be}}{\rm{(0.429\,\,MeV)}}\to\alpha+{\rm{{}^{3}He}}$ were obtained
and then used to calculate the astrophysical $S$ factors at $E<$180 keV,
including $E$=0. However, the values of the ANCs (or NVCs) for
${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$ and of $S_{34}(0)$ obtained in [20]
may not be sufficiently accurate, both because of the aforesaid assumption
about the contribution from the nuclear interior ($r<r_{cut}$) and because of
the spread in the experimental data [21] used for the analysis. As for the
values of these ANCs obtained in [13, 16], they depend noticeably on the
specific model used. Therefore, a determination of precise experimental values
of the ANCs for ${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$(g.s.) and
${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$(0.429 MeV) is highly desirable,
since it directly affects the correct extrapolation of the
${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ astrophysical $S$ factor to
solar energies.
Recently, a modified two-body potential approach (MTBPA) was proposed in [23]
for the peripheral direct capture ${\rm A}(a,\gamma){\rm B}$ reaction. It is
based on the idea proposed in [22] that low-energy direct radiative capture of
a particle $a$ by a light nucleus ${\rm A}$ proceeds mainly in regions well
outside the range of the internuclear $a{\rm A}$ interaction. One notes that
in the MTBPA the direct astrophysical $S$ factor is expressed in terms of the
ANC for ${\rm A}+a\to{\rm B}$ rather than through the spectroscopic factor for
the nucleus ${\rm B}$ in the (${\rm A}+a$) configuration, as is done within the
standard two-body potential method [26, 27]. In Refs. [23, 28, 29], the MTBPA
has been successfully applied to radiative proton and $\alpha$-particle capture
by several light nuclei. It is therefore of great interest to apply the MTBPA
to the analysis of the ${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ reaction.
In this work a new analysis of the modern precise experimental astrophysical $S$
factors for the direct capture ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$
reaction at extremely low energies ($\gtrsim$ 90 keV) [6–11] is performed
within the MTBPA [23] to obtain "indirectly determined" values both of the
ANCs (NVCs) for ${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$(g.s.) and
${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$(0.429 MeV), and of $S_{34}(E)$ at
$E\leq$ 90 keV, including $E$=0. We show quantitatively that the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction in this energy region is
mainly peripheral, so that the ANCs for
${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$ can be extracted directly from the
${}^{3}{\rm{He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ reaction. In this way the
ambiguities inherent in the standard two-body potential model calculation of
this reaction, which are connected with the choice of the geometric parameters
(the radius $R$ and the diffuseness $a$) of the Woods–Saxon potential and of
the spectroscopic factors [17, 18], are reduced to a physically acceptable
level, lying within the experimental errors of $S_{34}(E)$.
The contents of this paper are as follows. In Section 2 the results of the
analysis of the precisely measured astrophysical $S$ factors for the direct
radiative capture ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction are
presented (Subsections 2.1–2.3). The conclusion is given in Section 3. In the
Appendix the basic formulae of the modified two-body potential approach to the
direct radiative capture ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction
are given.
## 2 Analysis of the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction
Let us write $l_{f}$ ($j_{f}$) for the relative orbital (total) angular momentum
of ${}^{3}{\rm{He}}$ and the $\alpha$-particle in the nucleus
${}^{7}{\rm{Be}}\,\,(\alpha+{\rm{{}^{3}He}})$, and $l_{i}$ ($j_{i}$) for the
orbital (total) angular momentum of the relative motion of the colliding
particles in the initial state. For the
${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ reaction populating the ground
and first excited ($E^{*}$=0.429 MeV; $J^{\pi}$=1/2${}^{-}$) states of
${\rm{{}^{7}Be}}$, the values of $j_{f}$ are taken equal to 3/2 and 1/2,
respectively, and $l_{f}$=1, while $l_{i}$=0, 2 for the $E1$ transition and
$l_{i}$=1 for the $E2$ transition. The basic formulae used for the analysis are
presented in the Appendix.
### 2.1 The asymptotic normalization coefficients for
${}^{3}{\rm{He}}+\alpha\to^{7}{\rm{Be}}$
To determine the ANC values for
${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$(g.s.) and
${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$(0.429 MeV), the recent experimental
astrophysical $S$ factors, $S^{exp}_{l_{f}j_{f}}(E)$, for the
${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ reaction populating the ground
($l_{f}$=1 and $j_{f}$=3/2) and first excited ($E^{*}$=0.429 MeV;
$J^{\pi}=1/2^{-}$, $l_{f}$=1 and $j_{f}$=1/2) states [6–11] are reanalyzed on
the basis of relations (A1)–(A7). The experimental data analyzed here cover the
energy ranges $E$=92.9–168.9 keV [7–9], 420–951 keV [6], 327–1235 keV [10] and
701–1203 keV [11], for which the external capture is strongly dominant [19,
22]. In [9] the experimental astrophysical $S$ factors for the
${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ reaction populating the ground
and first excited states of the ${\rm{{}^{7}Be}}$ nucleus were separated only
for the energies $E$=92.9, 105.6 and 147.7 keV, whereas in [10] they were
separated for all experimental points of $E$ in the aforesaid energy range,
both by detecting the prompt $\gamma$ ray (the prompt method) and by counting
the ${\rm{{}^{7}Be}}$ activity (the activation method).
The Woods–Saxon potential with a parity-dependent ($l$-dependent) splitting and
a spin–orbit term, proposed by the authors of Refs. [30–32], is used here for
the calculations of both the bound-state radial wave function
$\varphi_{l_{f}j_{f}}(r)$ and the scattering wave function
$\psi_{l_{i}j_{i}}(r)$. This choice is based on the following considerations.
Firstly, this potential form is justified from the microscopic point of view
because it makes it possible to take into account the Pauli principle between
nucleons in the ${}^{3}{\rm{He}}$ and $\alpha$ clusters in the
($\alpha+{\rm{{}^{3}He}}$) bound state through the inclusion of deeply bound
states forbidden by the Pauli exclusion principle. The latter imitates the
additional node ($n$) arising in the wave function of the
$\alpha$–${\rm{{}^{3}He}}$ relative motion in ${}^{7}{\rm{Be}}$, similarly to
the result of the microscopic resonating-group method [12]. Secondly, this
potential describes well the phase shifts for $\alpha{\rm{{}^{3}He}}$
scattering over a wide energy range [31, 32] and reproduces the energies of the
low-lying states of the ${}^{7}{\rm{Be}}$ nucleus [33].
We vary the geometric parameters (radius $R$ and diffuseness $a$) of the
adopted Woods–Saxon potential over physically acceptable ranges ($R$ from 1.62
to 1.98 fm and $a$ from 0.63 to 0.77 fm [23]) around the standard values
($R$=1.80 fm and $a$=0.70 fm [31, 32]), adjusting the potential depth to fit
the binding energy in each case. As an illustration, Fig. 1 shows the
dependence of ${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ on the
single-particle ANC $C^{(sp)}_{l_{f}j_{f}}$ for $l_{f}$=1 and $j_{f}$=3/2 and
1/2 at two values of the energy $E$. The width of the band formed by these
curves results from the weak "residual" $(R,a)$-dependence of
${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ on the parameters $R$ and $a$
(up to $\pm 2\%$) at fixed
$C^{(sp)}_{l_{f}j_{f}}=C^{(sp)}_{l_{f}j_{f}}(R,a)=const$ [23, 44]. The same
behaviour is also observed at the other energies. For example, for Fig. 1,
plotted for $E$=0.1056 (0.1477) MeV, the overall uncertainty
$\Delta_{{\cal{R}}}$ of the function
${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ with respect to its central
value, corresponding to the central value $C^{(sp)}_{l_{f}j_{f}}(1.80,0.70)$,
amounts to $\Delta_{{\cal{R}}}$=$\pm$4.5 ($\pm$4.5)% for the ground state of
${}^{7}{\rm{Be}}$ and $\Delta_{{\cal{R}}}$=$\pm$3.4 ($\pm$2.9)% for the excited
state of ${}^{7}{\rm{Be}}$. It follows that the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(0.429 MeV) reaction is slightly
more peripheral than the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.)
reaction, since the binding energy of ${}^{7}{\rm{Be}}$(0.429 MeV) is smaller
than that of ${}^{7}{\rm{Be}}$(g.s.). A similar dependence of
${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ on $C^{(sp)}_{l_{f}j_{f}}$ is
observed at the other aforesaid energies $E$, with $\Delta_{{\cal{R}}}$ not
exceeding $\sim\pm$5.0%. It follows that condition (A2) is satisfied for the
considered reaction at the energies mentioned above, within uncertainties that
do not exceed the experimental errors of $S^{exp}_{l_{f}j_{f}}(E)$. It should
be noted that $\Delta_{{\cal{R}}}$ becomes larger as the energy $E$ increases
beyond about 1.3 MeV.
Thus, over the energy region 92.9 keV $\leq E\leq$ 1200 keV, the dependence of
${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$ on $C_{l_{f}j_{f}}^{(sp)}$ is
sufficiently weak, lying within the experimental uncertainties of
$S^{exp}_{l_{f}j_{f}}(E)$. This weak dependence is apparently also associated
with the indirect account of the Pauli principle, mentioned above, within the
nuclear interior in the adopted nuclear $\alpha{\rm{{}^{3}He}}$ potential,
which as a whole reduces the contribution of the interior part of the radial
matrix element to the ${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$
function, as is typical for peripheral reactions.
We also calculated the $\alpha^{3}{\rm{He}}$ elastic-scattering phase shifts,
varying the parameters $R$ and $a$ of the adopted Woods–Saxon potential over
the same ranges. As an illustration, only the results for the $s_{1/2}$ and
$p_{3/2}$ waves are presented in Fig. 2, in which the width of the bands
corresponds to the change of the phase shifts under the variation of the $R$
and $a$ parameters. As seen from Fig. 2, the experimental phase shifts [34, 35]
are well reproduced, within an uncertainty of about $\pm$5%. The same holds for
the $p_{1/2}$ and $d_{5/2}$ waves.
This circumstance allows us to test condition (A3) at the energies $E$=92.9,
105.6 and 147.7 keV, for which the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.) and
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(0.429 MeV) astrophysical $S$
factors were measured separately in [9]. As an illustration, for the same
energies $E$ as in Fig. 1 we present in Fig. 3 (upper panels) the values of
$C_{l_{f}j_{f}}^{2}$ calculated from Eq. (A3) ($(l_{f}\,\,j_{f})$=(1 3/2) and
(1 1/2)), in which the experimental $S$ factors for the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction populating the ground
and first excited states of ${}^{7}{\rm{Be}}$ were used in place of
$S_{l_{f}j_{f}}(E)$. The same dependence occurs for the other energies
considered. The calculation shows that the resulting $C_{l_{f}j_{f}}^{2}$
values also depend only weakly (up to 5.0%) on the $C^{(sp)}_{l_{f}j_{f}}$
value. Consequently, the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction
within the considered energy ranges is peripheral, and the parametrization in
terms of the ANCs given by Eq. (A1) is adequate to the physics of the reaction
under consideration. In contrast, the values of the spectroscopic factors
$Z_{1\,\,3/2}$ and $Z_{1\,\,1/2}$ corresponding to the
$(\alpha+^{3}{\rm{He}})$ configuration for ${}^{7}{\rm{Be}}$(g.s.) and
${}^{7}{\rm{Be}}$(0.429 MeV), respectively, change strongly, by about a factor
of 1.7, since the calculated $\tilde{S}_{l_{f}j_{f}}(E)$ vary by a factor of
1.75 (see the lower panels of Fig. 3).
For each experimental energy point ($E$=92.9, 105.6 and 147.7 keV), the values
of the ANCs for $\alpha+{\rm{{}^{3}He}}\rightarrow{\rm{{}^{7}Be}}$(g.s.) and
$\alpha+{\rm{{}^{3}He}}\rightarrow{\rm{{}^{7}Be}}$(0.429 MeV) are obtained by
inserting the corresponding experimental astrophysical $S$ factors
($S_{1\,\,3/2}^{exp}(E)$ and $S_{1\,\,1/2}^{exp}(E)$, the activation data) [7,
9] in place of $S_{l_{f}j_{f}}(E)$ in the ratio on the r.h.s. of relation
(A1), together with the central values of
${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ corresponding to the adopted
values of the parameters $R$ and $a$. The resulting ANCs,
$(C_{1\,\,3/2}^{exp})^{2}$ and $(C_{1\,\,1/2}^{exp})^{2}$, for these three
energy points are displayed in Figs. 4$a$ and $c$ (filled circles) and in the
second and third columns of Table 1. The uncertainties quoted in the figure are
those following from (A1) (averaged square errors, a.s.e.), which include the
total experimental errors (a.s.e. from the statistical and systematic
uncertainties) of the corresponding experimental astrophysical $S$ factor and
the aforesaid uncertainty of ${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$.
One should note that the same results for the ANCs are obtained when
$S^{exp}_{34}(E)$ ($S^{exp}_{1\,3/2}(E)$ and $R^{exp}(E)$) [7, 9] are used in
Eq. (A5) (in Eqs. (A6) and (A7)) instead of $S_{34}(E)$ ($S_{1\,3/2}(E)$ and
$R(E)$). Then, inserting into Eq. (A6) the average value of $\lambda_{C}$
($\lambda_{C}$=0.666) obtained from these three data points, and replacing
$S_{34}(E)$ on the l.h.s. of Eq. (A5) with $S^{exp}_{34}(E)$, the ANC values
$C_{1\,\,3/2}^{2}$ and $C_{1\,\,1/2}^{2}$ can also be determined for the other
experimental points: two energies ($E$=126.5 and 168.9 keV) from [7, 8], four
($E$=420.0, 506.0, 615.0 and 951.0 keV) from [6], three ($E$=93.3, 106.1 and
170.1 keV) from [9] and ten ($E$=701–1203 keV) from [11] (an illustrative
sketch of this procedure is given below). The resulting ANCs
$(C_{1\,j_{f}}^{exp})^{2}$ for
$\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(g.s.) and
$\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(0.429 MeV) are displayed in
Fig. 4, where the open circle and triangle symbols are obtained from the
analysis of the data of [6–9] and the filled triangles from the analysis of the
data of [11] (the ${\rm{{}^{7}Be}}$ recoils). The results obtained from the
data of [7–9] are also presented in the second and third columns of Table 1. In
the same way, the values of the ANCs are obtained by using the separated
experimental astrophysical $S$ factors ($S_{1\,\,3/2}^{exp}$ and
$S_{1\,\,1/2}^{exp}$) of [10]. These ANCs are presented in Figs. 4$b$ and $d$,
both for the activation (filled stars) and for the prompt method (filled
squares).
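The decomposition just described can be illustrated with a short numerical sketch. Assuming relation (A5) has the form $S_{34}(E)=C_{1\,3/2}^{2}\,[{\cal R}_{1\,3/2}(E)+\lambda_{C}\,{\cal R}_{1\,1/2}(E)]$ with $\lambda_{C}=C_{1\,1/2}^{2}/C_{1\,3/2}^{2}$, as suggested by Eq. (A1), the snippet below extracts $C_{1\,3/2}^{2}$ and $C_{1\,1/2}^{2}$ from a total measured $S$ factor; the numerical values of ${\cal R}$ and $S^{exp}_{34}$ used here are placeholders, not the paper's tabulated ones.

```python
# Sketch of the lambda_C-based extraction described above, assuming
#   S_34(E) = C_{1 3/2}^2 [ R_{1 3/2}(E) + lambda_C * R_{1 1/2}(E) ],
# with lambda_C = C_{1 1/2}^2 / C_{1 3/2}^2.  All numbers are placeholders
# used only to exercise the algebra.
lambda_C = 0.666                    # average ratio quoted in the text

def ancs_from_total(S34_exp, R_32, R_12, lam=lambda_C):
    """Return (C_{1 3/2}^2, C_{1 1/2}^2) given a total S factor and the
    calculated functions R_{l_f j_f}(E, C^(sp)) in matching units."""
    C2_32 = S34_exp / (R_32 + lam * R_12)
    return C2_32, lam * C2_32

# hypothetical input for a single energy point (placeholders)
S34_exp = 0.50                      # keV b
R_32, R_12 = 0.016, 0.011           # keV b per unit ANC squared
print(ancs_from_total(S34_exp, R_32, R_12))
```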
As seen from Fig. 4, for each of the independently measured experimental
astrophysical $S$ factors the ratio on the r.h.s. of relation (A4) is
practically independent of the energy $E$ within the experimental
uncertainties, although the absolute values of the corresponding experimental
astrophysical $S$ factors depend noticeably on the energy, changing by up to a
factor of about 1.7 as $E$ varies from 92.6 keV to 1200 keV. This fact allows
us to conclude that the energy dependence of the experimental astrophysical $S$
factors [6–11] is well reproduced by the calculated functions
${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ and ${{\cal
R}}_{13/2}(E,C_{13/2}^{(sp)})+\lambda_{C}{{\cal
R}}_{11/2}(E,C_{11/2}^{(sp)})$. Hence, the experimental astrophysical $S$
factors of [6–11] can be used as an independent source of reliable information
about the ANCs for $\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(g.s.) and
$\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(0.429 MeV). In Fig. 4 and
Table 2 the weighted means of the ANC values and their uncertainties (the solid
line and the band width, respectively), derived both separately from each
experimental data set and from all of the experimental points, are also
presented. As seen from the first and second (fifth and sixth) lines of Table
2, the weighted means of the ANC values for
$\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(g.s.) and
$\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(0.429 MeV), obtained
separately from the activation, prompt-method and ${\rm{{}^{7}Be}}$-recoil data
of the works [6–9] ([10, 11]), are in good agreement with one another. These
results constitute the first main result of the present work. Nevertheless, the
weighted means [5] of the ANC values obtained separately from the experimental
data of Refs. [6–9] and of Refs. [10, 11] differ noticeably from one another
(by up to a factor of 1.13 for
$\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(g.s.) and up to 1.12 for
$\alpha+^{3}{\rm{He}}\rightarrow{\rm{{}^{7}Be}}$(0.429 MeV); see the
parenthetical figures in Table 2). The main reason for this difference is the
systematic discrepancy observed between the absolute values of the experimental
astrophysical $S$ factors measured by the authors of Refs. [6–9] (set I) and of
Refs. [10, 11] (set II; see Fig. 5). Also, the central values of the weighted
means of the ANCs for ${}^{3}{\rm{He}}+\alpha\rightarrow{\rm{{}^{7}Be}}$(g.s.)
and ${}^{3}{\rm{He}}+\alpha\rightarrow{\rm{{}^{7}Be}}$(0.429 MeV) obtained from
all of the experimental data [6–11], presented in the last line of Table 2, are
up to 10% larger (3% smaller) than those deduced from the data of [6–9]
([10, 11]). Since at present there is no compelling argument for preferring
either of the data sets measured by the two groups ([6–9], set I, and [10, 11],
set II), it seems reasonable to take the weighted means of the ANCs derived
from all of these experimental ANCs, with upper and lower limits corresponding
to the experimental data of set II and set I, respectively. This leads to an
asymmetric uncertainty for the weighted means of the ANCs, caused by the
aforesaid systematic discrepancy between the absolute values of the
experimental data of sets I and II (see the last line of Table 2). In this
connection, a new precise measurement of $S_{34}^{exp}(E)$ is, in our view,
highly desirable, since it would provide additional information about the
ANCs. Nevertheless, below we use these ANCs for the extrapolation of the
astrophysical $S$ factors down to lower energies, including $E$=0. The
corresponding values of the NVCs, obtained by using Eq. (A8), are given in
Table 2.
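For reference, the weighted means quoted above can be reproduced with the standard inverse-variance formula; a minimal sketch follows. The input arrays are placeholders standing in for the per-point ANC values and uncertainties of Table 1, not the actual tabulated numbers.

```python
# Inverse-variance weighted mean of individual ANC determinations
# (a minimal sketch; the inputs are placeholders, not the Table 1 values).
import numpy as np

def weighted_mean(values, errors):
    """Return the weighted mean and its statistical uncertainty."""
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# hypothetical C^2_{1 3/2} determinations (fm^-1) with their a.s.e. errors
C2  = [20.1, 21.0, 20.5, 22.3, 21.8]
dC2 = [1.2, 1.0, 1.1, 0.9, 1.3]
print(weighted_mean(C2, dC2))
```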
A comparison of the present results with those obtained in paper [20] shows
that an underestimation of the contributions of both the nuclear interior and
the nuclear exterior indeed occurs in [20]: the neglected contribution of the
nuclear interior ($r<$ 4.0 fm) to the calculated astrophysical $S$ factors, as
well as the use here of the experimental data [6–11], which are more accurate
than those analyzed in Ref. [20], can both influence the extracted values of
the ANCs. Besides, one should note that in reality the values of the ANCs
$C_{1\,\,3/2}$ and $C_{1\,\,1/2}$ should not be equal, as was assumed in [13],
where the values $C_{1\,\,3/2}^{2}=C_{1\,\,1/2}^{2}$=14.4 fm$^{-1}$ were
obtained from an analysis of $S_{34}^{exp}(E)$ performed within the $R$-matrix
method.
The resulting ANC (NVC) value for
$\alpha+{\rm{{}^{3}He}}\to{\rm{{}^{7}Be}}$(g.s.) obtained by us is in good
agreement with the value $C_{1\,\,3/2}^{2}$=25.3 fm$^{-1}$
($|G_{1\,\,3/2}|^{2}$=1.20 fm), whereas that for
$\alpha+{\rm{{}^{3}He}}\to{\rm{{}^{7}Be}}$(0.429 MeV) differs noticeably from
the value $C_{1\,\,1/2}^{2}$=22.0 fm$^{-1}$ ($|G_{1\,\,1/2}|^{2}$=1.04 fm);
both values were obtained in [36] within the ($\alpha+^{3}{\rm{He}}$)-channel
resonating-group method. The results of the present work also differ noticeably
from the values $C_{1\,\,3/2}^{2}$=12.6$\pm$1.1 fm$^{-1}$ and
$C_{1\,\,1/2}^{2}=8.41\pm 0.58$ fm$^{-1}$ ($C_{1\,\,3/2}$=3.55$\pm$0.15
fm$^{-1/2}$, $C_{1\,\,1/2}$=2.90$\pm$0.10 fm$^{-1/2}$,
$|G_{1\,\,3/2}|^{2}$=0.596$\pm$0.052 fm and
$|G_{1\,\,1/2}|^{2}$=0.397$\pm$0.030 fm) of [16], as well as from
$C_{1\,\,3/2}^{2}=C_{1\,\,1/2}^{2}$=14.4 fm$^{-1}$
($C_{1\,\,3/2}=C_{1\,\,1/2}$=3.79 fm$^{-1/2}$ and
$|G_{1\,\,3/2}|^{2}=|G_{1\,\,1/2}|^{2}$=0.680 fm) of [13]. In this connection
one should note that in [16] the bound-state wave functions, which correspond
to binding energies of ${\rm{{}^{7}Be}}$(g.s.) in the
($\alpha+{\rm{{}^{3}He}}$) channel that differ noticeably from the experimental
ones (see Table I of Ref. [16]), and the initial-state wave functions were
computed with different potentials, so that these calculations were not
self-consistent. Since the ANCs for
${\rm{{}^{3}He}}+\alpha\rightarrow{\rm{{}^{7}Be}}$ are sensitive to the form of
the $NN$ potential, it is desirable, firstly, to calculate the bound-state wave
functions using other forms of the $NN$ potential and, secondly, in order to
guarantee self-consistency, to use the same form of the $NN$ potential for the
calculation of the initial-state wave functions.
### 2.2 $\alpha$-particle spectroscopic factors for the mirror
(${}^{7}{\rm{Li}}^{7}{\rm{Be}}$)–pair
The "indirectly determined" values of the ANCs for
${}^{3}{\rm{He}}+\alpha\rightarrow{\rm{{}^{7}Be}}$ presented in the last line
of Table 2, together with those for $\alpha+t\rightarrow{\rm{{}^{7}Li}}$
deduced in Ref. [23], can be used to obtain information on the ratio
$R_{Z;j_{f}}=Z_{1j_{f}}(^{7}{\rm{Be}})/Z_{1j_{f}}({\rm{{}^{7}Li}})$ for the
virtual $\alpha$ decays of the bound mirror
(${\rm{{}^{7}Li}}\,^{7}{\rm{Be}}$)-pair, where
$Z_{1j_{f}}({\rm{{}^{7}Be}})$ ($Z_{1j_{f}}({\rm{{}^{7}Li}})$) is the
spectroscopic factor for ${\rm{{}^{7}Be}}$ (${\rm{{}^{7}Li}}$) in the
($\alpha+{\rm{{}^{3}He}}$) (($\alpha+t$)) configuration. To this end, from
$C_{1\,j_{f}}({\rm{B}})=Z_{1\,j_{f}}^{1/2}({\rm{B}})C^{(sp)}_{1\,j_{f}}({\rm{B}})$
(${\rm{B}}={\rm{{}^{7}Li}}$ and ${\rm{{}^{7}Be}}$) we form the relation
$R_{Z;\,j_{f}}=\frac{R_{C;\,j_{f}}}{R_{C^{(sp)};\,j_{f}}},$ (1)
where
$R_{C;\,j_{f}}=\Big{(}C_{1\,j_{f}}(^{7}{\rm{Be}})/C_{1\,j_{f}}({\rm{{}^{7}Li}})\Big{)}^{2}$
($R_{C^{(sp)};\,j_{f}}=\Big{(}C_{1\,j_{f}}^{(sp)}(^{7}{\rm{Be}})/C_{1\,j_{f}}^{(sp)}({\rm{{}^{7}Li}})\Big{)}^{2}$)
is the ratio of the squares of the ANCs (single-particle ANCs) for the bound
mirror (${\rm{{}^{7}Li}}\,^{7}{\rm{Be}}$)-pair and $j_{f}$=3/2 (1/2) for the
ground (first excited) state of the mirror nuclei. It should also be noted that
relation (1) allows one to verify the validity of the approximation
($R_{C;j_{f}}\approx R_{C^{(sp)};\,j_{f}}$, i.e. $R_{Z;\,j_{f}}\approx$ 1) used
in Ref. [37] for the mirror-conjugated (${\rm{{}^{7}Li}}\,^{7}{\rm{Be}}$)
$\alpha$ decays.
For the ground and first excited states of the mirror
(${\rm{{}^{7}Li}}\,^{7}{\rm{Be}}$)-pair, the values of
$C_{1\,j_{f}}^{(sp)}(^{7}{\rm{Be}})$ and $C_{1\,j_{f}}^{(sp)}({\rm{{}^{7}Li}})$
change by a factor of 1.3 under variation of the geometric parameters ($R$ and
$a$) of the adopted Woods–Saxon potential [31, 32] within the aforesaid ranges,
while the ratios $R_{C^{(sp)};\,3/2}$ and $R_{C^{(sp)};\,1/2}$ for the ground
and first excited states change only by about 1.5% and 6%, respectively. Thus
these ratios are practically independent of the variation of the free
parameters $R$ and $a$; they are equal to $R_{C^{(sp)};\,3/2}$=1.37$\pm$0.02
and $R_{C^{(sp)};\,1/2}$=1.40$\pm$0.09, in good agreement with those calculated
in [37] within the microscopic cluster and two-body potential models (see Table
I there). The ratios of the ANCs are
$R_{C;\,3/2}$=1.83${}^{{\rm{+0.18}}}_{{\rm{-0.25}}}$ and
$R_{C;\,1/2}$=1.77${}^{{\rm{+0.19}}}_{{\rm{-0.24}}}$. From (1), the values of
the ratio $R_{Z;\,j_{f}}$ are
$R_{Z;\,3/2}$=1.34${}^{{\rm{+0.13}}}_{{\rm{-0.18}}}$ and
$R_{Z;\,1/2}$=1.26${}^{{\rm{+0.16}}}_{{\rm{-0.19}}}$ for the ground and first
excited states, respectively (a numerical sketch of this evaluation follows
below). Within their uncertainties, these values differ from
$R_{Z;\,3/2}$=0.995$\pm$0.005 and $R_{Z;\,1/2}$=0.990 calculated in Ref. [37]
within the microscopic cluster model. One notes that the values of
$R_{Z;\,j_{f}}$ calculated in [37] are sensitive to the model assumptions (the
choice of the oscillator radius $b$ and the form of the effective $NN$
potential), and such model dependence may actually affect the mirror symmetry
of the $\alpha$-particle spectroscopic factors. A breaking of the mirror
symmetry of the $\alpha$-particle spectroscopic factors is also signalled by
the results for the ratio $S_{34}({\rm{{}^{7}Be}})/S_{34}({\rm{{}^{7}Li}})$ at
zero energy for the mirror (${\rm{{}^{7}Li}}\,^{7}{\rm{Be}}$)-pair obtained in
[12] within the resonating-group method using seven forms of the effective
$NN$ potential. As shown in [12], this ratio is sensitive to the form of the
effective $NN$ potential and changes from 1.0 to 1.18 depending on that form.
One possible reason for the sensitivity observed in [12] may be a corresponding
sensitivity of the ratio $R_{Z;\,j_{f}}$ to the form of the effective $NN$
potential. In contrast to the model dependence observed in [12, 37], the
ambiguity connected with the model $(R,a)$-dependence of the ratios
$R_{Z;\,j_{f}}$ found here from Eq. (1) is reduced to a minimum within the
experimental uncertainty.
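As a cross-check of the arithmetic in Eq. (1), the short sketch below propagates the asymmetric uncertainties of the quoted ratios $R_{C;\,j_{f}}$ and the symmetric uncertainties of $R_{C^{(sp)};\,j_{f}}$ into $R_{Z;\,j_{f}}$ by simple quadrature; this naive propagation is an assumption made for illustration and is not necessarily the error treatment used by the authors.

```python
# Evaluate R_Z = R_C / R_C^(sp) (Eq. 1) with a naive quadrature propagation
# of the quoted uncertainties; illustrative only.
def ratio_with_errors(rc, rc_err_up, rc_err_dn, rsp, rsp_err):
    rz = rc / rsp
    up = rz * ((rc_err_up / rc) ** 2 + (rsp_err / rsp) ** 2) ** 0.5
    dn = rz * ((rc_err_dn / rc) ** 2 + (rsp_err / rsp) ** 2) ** 0.5
    return rz, up, dn

# ground state: R_C = 1.83 (+0.18/-0.25), R_C^(sp) = 1.37 +/- 0.02
print("R_Z(3/2) = %.2f +%.2f -%.2f" % ratio_with_errors(1.83, 0.18, 0.25, 1.37, 0.02))
# first excited state: R_C = 1.77 (+0.19/-0.24), R_C^(sp) = 1.40 +/- 0.09
print("R_Z(1/2) = %.2f +%.2f -%.2f" % ratio_with_errors(1.77, 0.19, 0.24, 1.40, 0.09))
```

With these inputs the sketch reproduces the central values and uncertainties of $R_{Z;\,j_{f}}$ quoted above.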
The empirical values of $R_{Z;\,j_{f}}$ thus exceed unity both for the ground
state and for the first excited state of the mirror
(${\rm{{}^{7}Li}}\,^{7}{\rm{Be}}$)-pair. This result for $R_{Z;\,j_{f}}$ is not
accidental and can be explained qualitatively by the following consideration.
The spectroscopic factor $Z_{1\,j_{f}}({\rm{{}^{7}Li}})$ (or
$Z_{1\,j_{f}}(^{7}{\rm{Be}})$) is defined as the norm of the radial overlap
function of the bound-state wave functions of the $t$, $\alpha$ and
${\rm{{}^{7}Li}}$ (or ${\rm{{}^{3}He}}$, $\alpha$ and ${\rm{{}^{7}Be}}$) nuclei
and is given by Eqs. (100) and (101) of Ref. [24]. The interval of integration
($0\leq r<\infty$) in Eq. (101) can be divided into two parts. In the first
integral, denoted by $Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Li}})$ for
${\rm{{}^{7}Li}}$ and $Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Be}})$ for
${\rm{{}^{7}Be}}$, the integration over $r$ covers the region 0$\leq r\leq
r_{c}$ (the internal region), where the nuclear ($\alpha t$ or
$\alpha^{3}{\rm{He}}$) interaction dominates over the Coulomb interaction. The
second integral is
$Z_{1\,j_{f}}^{(2)}({\rm{B}})=C_{1\,j_{f}}^{2}({\rm{B}})\int_{r_{c}}^{\infty}drW_{\eta_{{\rm
B}};\,3/2}^{2}(2\kappa_{\alpha a}r),$ (2)
where the radial overlap function in the integrand has been replaced by its
asymptotic form in terms of the appropriate Whittaker function (see, for
example, Eq. (108) of Ref. [24]). This integral covers the external region, in
which the interaction between $a$ and the $\alpha$-particle ($a=t$ for ${\rm
B}={\rm{{}^{7}Li}}$ or $a={\rm{{}^{3}He}}$ for ${\rm B}={\rm{{}^{7}Be}}$) is
governed by the Coulomb forces only. In (2), $\kappa_{\alpha
a}=\sqrt{2\mu_{\alpha a}\varepsilon_{\alpha a}}$ and
$W_{\eta_{{\rm{B}}};\,3/2}(x)$ is the Whittaker function. One notes that the
quantities $Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Li}})$
($Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Be}})$) and
$Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Li}})$ ($Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Be}})$)
give the probability of finding $t$ (or ${\rm{{}^{3}He}}$) in the ($\alpha+t$)
(or ($\alpha+{\rm{{}^{3}He}}$)) configuration at distances $r\leq r_{c}$ and
$r>r_{c}$, respectively. Obviously,
$Z_{1\,j_{f}}({\rm{{}^{7}Li}})=Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Li}})+Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Li}})$
and
$Z_{1\,j_{f}}({\rm{{}^{7}Be}})=Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Be}})+Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Be}})$.
Information about the values of $Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Li}})$ and
$Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Be}})$ can be obtained from (2) by using the
values of the ANCs for $\alpha+t\to{\rm{{}^{7}Li}}$ and
$\alpha+{\rm{{}^{3}He}}\to{\rm{{}^{7}Be}}$ recommended in [23] and in the
present work, respectively. For example, for $r_{c}\approx$4.0 fm (the surface
region for the mirror (${}^{7}{\rm{Li}}\,^{7}{\rm{Be}}$)-pair) the calculation
shows that the ratio
$R_{Z;\,j_{f}}^{(2)}=Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Be}})/Z_{1\,j_{f}}^{(2)}({\rm{{}^{7}Li}})$
is equal to 1.43${}^{{\rm{+0.13}}}_{{\rm{-0.18}}}$
(1.31${}^{{\rm{+0.14}}}_{{\rm{-0.18}}}$) for the ground (excited) states of the
${\rm{{}^{7}Li}}$ and ${\rm{{}^{7}Be}}$ nuclei, i.e. $R_{Z;\,j_{f}}^{(2)}>1$.
Owing to the equivalence of the nuclear interactions between the nucleons of
the ($\alpha t$)-pair in the nucleus ${\rm{{}^{7}Li}}$ and of the
($\alpha^{3}{\rm{He}}$)-pair in the nucleus ${\rm{{}^{7}Be}}$ [37], the values
of $Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Li}})$ and
$Z_{1\,j_{f}}^{(1)}({\rm{{}^{7}Be}})$ should not differ appreciably. If one
assumes that $R_{Z;\,j_{f}}^{(1)}\approx$1, then the ratio $R_{Z;\,j_{f}}>$1.
### 2.3 The ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ astrophysical $S$
factor at solar energies
Equation (A1) and the weighted means of the ANCs obtained for
${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$(g.s.) and
${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$(0.429 MeV) can be used to
extrapolate the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ astrophysical $S$
factor for capture to the ground and first excited states, as well as the total
astrophysical $S$ factor, to solar energies ($E\leq 25$ keV). We tested again
the fulfilment of condition (A2), in the same way as done above for $E\geq$ 90
keV, and results similar to those plotted in Fig. 1 are also obtained at
energies $E<90$ keV.
The experimental and calculated astrophysical $S$ factors for the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.),
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(0.429 MeV) and
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.+0.429 MeV) reactions are
presented in Table 1 and displayed in Figs. 5 _a_ , _b_ and _c_ , respectively.
In Figs. 5 _a_ and _b_ , the open diamond and triangle symbols (the filled
triangle symbols) show our results for the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.) and
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(0.429 MeV) reactions (see also
Table 1), obtained from the analysis of the total experimental astrophysical
$S$ factors of [7–9] and [6] ([11]), respectively, using the corresponding ANC
values for each experimental energy point presented in Figs. 4 _a_ and _b_ (see
also Table 1). The open circle symbols lying along the smooth solid lines are
the results of our extrapolation, for which each quoted uncertainty is that
associated with the adopted ANCs. These results constitute the second main
result of the present work. In Figs. 5 _a_ and _b_ the experimental data
plotted with filled circles (filled stars and squares) are taken from [9] (from
[10]). The solid lines show our calculations performed with the standard values
of the geometric parameters, $R$=1.80 fm and $a$=0.70 fm. In Fig. 5 _c_ , the
symbols are the data of all the experiments [6–11] and the solid line shows our
calculation performed with the standard values of the geometric parameters
$R$=1.80 fm and $a$=0.70 fm, using the weighted means of the ANCs
($C_{1\,\,3/2}^{2}$ and $C_{1\,\,1/2}^{2}$) presented in the last line of Table
2. The dashed (dot-dashed) lines are the results of calculations using the
aforesaid lower (upper) limiting values of the ANCs given in the last line of
Table 2 together with the standard values of the geometric parameters ($R$ and
$a$), and the dotted line is the result of Refs. [18, 19]. As seen from these
figures, equations (A1), (A4) and (A5) allow one to perform a reliable
extrapolation of the corresponding astrophysical $S$ factors to solar energies.
However, a noticeable systematic underestimation of the experimental data by
the calculations of Refs. [18, 19] is observed.
The weighted means of the total astrophysical $S$ factor $S_{34}(E)$ at solar
energies ($E$=0 and 23 keV) obtained by us are presented in the last line of
Table 2. As seen from Table 2, the weighted means of $S_{34}(0)$ deduced
separately from the activation and prompt-method experimental data of works
[6–9] and [10, 11] (the first and second lines as well as the fifth and sixth
lines) agree well within their uncertainties with each other and with the
values recommended in [9–11]. However, the weighted means of $S_{34}(0)$
obtained from the independent analyses of the different data sets (set I and
set II) also differ noticeably from one another (by about 11%), and this
distinction is mainly associated with the aforesaid difference in the
magnitudes of the corresponding ANCs presented in the third and seventh lines
of Table 2. Nevertheless, the weighted mean
$S_{34}$(0)=0.613${}^{{\rm{+0.026}}}_{{\rm{-0.063}}}$ ${\rm{keV\,b}}$, obtained
by using the weighted means of the ANCs presented in the last line of Table 2,
agrees within its asymmetric uncertainty, which is caused by the asymmetric
uncertainty of those ANCs, with the values recommended in [9–11, 38]. It is
interesting to note, however, that its central value is closer to that given in
the third line of Table 2 than to the central value of the weighted mean given
in the seventh line. Also, the astrophysical $S$ factors calculated using the
ANC values obtained separately from set I, from set II and from both sets
together (see Table 2) were fitted independently with a second-order polynomial
within three energy intervals (0$\leq E\leq$500 keV, 0$\leq E\leq$1000 keV and
0$\leq E\leq$1200 keV); a sketch of such a fit is given below. The resulting
slopes $S_{34}^{\prime}$(0)/$S_{34}$(0) are $-0.711$ MeV$^{-1}$, $-0.734$
MeV$^{-1}$ and $-0.726$ MeV$^{-1}$ for the three intervals, respectively, and
they do not depend on the ANC values used. They are also in agreement with
$-0.73$ MeV$^{-1}$ [19] and $-0.92\pm$0.18 MeV$^{-1}$ [38]. It follows that the
$S_{34}(E)$ calculated by us (the solid lines in Figs. 5 _a_ , _b_ and _c_) and
those obtained in [19, 38] have practically the same energy dependence within
the aforesaid energy interval, differing mainly by an overall normalization.
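The slope extraction described above can be mimicked with a simple quadratic fit; the snippet below shows the idea on a synthetic $S(E)$ curve. The synthetic data, generated from an arbitrary quadratic, are placeholders for the calculated $S_{34}(E)$ values, so only the procedure, not the numbers, reflects the paper.

```python
# Sketch: fit S(E) with a second-order polynomial on [0, E_max] and report
# the logarithmic slope S'(0)/S(0).  Synthetic placeholder data are used.
import numpy as np

def slope_at_zero(E, S, deg=2):
    """Return S'(0)/S(0) from a polynomial fit of degree `deg` (E in MeV)."""
    p = np.poly1d(np.polyfit(E, S, deg))
    return p.deriv()(0.0) / p(0.0)

E = np.linspace(0.0, 1.2, 25)                       # MeV
S_true = 0.60 * (1.0 - 0.72 * E + 0.20 * E**2)      # arbitrary quadratic, keV b
rng = np.random.default_rng(0)
S_obs = S_true + rng.normal(0.0, 0.005, E.size)     # add a little scatter

for emax in (0.5, 1.0, 1.2):
    m = E <= emax
    print(f"E <= {emax:.1f} MeV:  S'(0)/S(0) = {slope_at_zero(E[m], S_obs[m]):.3f} MeV^-1")
```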
Our result for $S_{34}(0)$ differs noticeably from those recommended in Refs.
[16], [20] and [13] ($\approx$ 40 keV b, 0.52$\pm$0.03 keV b and 0.51$\pm$0.04
keV b, respectively). This circumstance is apparently connected with the
underestimation of the contribution from the external part of the amplitude
admitted in these works. Besides, the result of the present work is noticeably
larger than the values $S_{34}(0)$=0.516 (0.53) keV b of [18] ([19]), obtained
within the standard two-body ($\alpha+{\rm{{}^{3}He}}$) potential model using
an $\alpha{\rm{{}^{3}He}}$ potential deduced by a double-folding procedure. One
possible reason for this discrepancy may be the assumption made in [18, 19]
that the ratio $R_{Z;\,j_{f}}$ for the bound mirror
(${\rm{{}^{7}Li}}\,{\rm{{}^{7}Be}}$)-pair is equal to unity
($Z_{1\,3/2}({\rm{{}^{7}Be}})$ = $Z_{1\,1/2}({\rm{{}^{7}Be}})$=1.17 [39]),
which in turn results in the observed systematic underestimation of the
calculated $S_{34}(E)$ with respect to the experimental $S$ factors (see the
dashed line in Fig. 5 _c_). However, as shown in Subsection 2.2, the values of
$R_{Z;\,j_{f}}$ for the ground and first excited states of the mirror
(${\rm{{}^{7}Li}}\,{\rm{{}^{7}Be}}$)-pair are larger than unity. Therefore, the
underestimated values of $Z_{1\,3/2}({\rm{{}^{7}Be}})$ and
$Z_{1\,1/2}({\rm{{}^{7}Be}})$ used in [18] also result in an underestimated
value of $S_{34}(0)$ for the direct capture
${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ reaction. Perhaps the
assumption of equal spectroscopic factors ($Z_{1\,3/2}({\rm{{}^{7}Li}})$ =
$Z_{1\,1/2}({\rm{{}^{7}Li}})$=1.17 [18, 39]) is correct only for the
spectroscopic factors $Z_{1\,j}({\rm{{}^{7}Li}})$, since the values of
$S_{34}(0)$ obtained in [23] and [18] for the direct capture
$t(\alpha,\gamma){\rm{{}^{7}Li}}$ reaction agree excellently with each other.
One notes that in Ref. [23] the analysis of the $t(\alpha,\gamma){\rm{{}^{7}Li}}$
experimental astrophysical $S$ factors [40] was also performed within the
MTBPA, and the ANC for $\alpha+t\to{\rm{{}^{7}Li}}$(g.s.) deduced there from
the results of Ref. [18] is also in good agreement with that recommended by the
authors of Ref. [23].
Nevertheless, we note that the value $S_{34}(0)$=0.56 ${\rm{keV\,\,b}}$ [14] obtained within the microscopic ($\alpha+{\rm{{}^{3}He}}$)-cluster approach agrees with our result within the uncertainty. Besides, our result is also in excellent agreement with $S_{34}(0)$=0.609 ${\rm{keV\,\,b}}$ [12] and $S_{34}(0)$=0.621 ${\rm{keV\,\,b}}$ [36], obtained within the ($\alpha+{\rm{{}^{3}He}}$)-channel version of the resonating-group method using the modified Wildermuth-Tang (MWT) and the near-Serber exchange mixture forms of the effective NN potential, respectively. The mutual agreement between the results of the present work and those of [12, 14, 36], which rest on the common assumption of a cluster $(\alpha+{\rm{{}^{3}He}})$ structure of ${\rm{{}^{7}Be}}$, allows one to conclude that the $(\alpha+{\rm{{}^{3}He}})$ clusterization gives the dominant contribution to the low-energy ${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ cross section, both in the absolute normalization and in the energy dependence [6–11]. Therefore, the single-channel $(\alpha+{\rm{{}^{3}He}})$ approximation for ${\rm{{}^{7}Be}}$ [12, 14] is quite appropriate for this reaction in the considered energy range.
Also, it is interesting to note that the ratios of the "indirectly determined" astrophysical $S$ factors, $S_{1\,\,3/2}$(0) and $S_{1\,\,1/2}(0)$, for the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction populating the ground and first excited states, obtained in the present work, to those for the mirror $t(\alpha,\gamma)^{7}{\rm{Li}}$ reaction populating the ground and first excited states, deduced in Ref.[23], are $R_{S}^{(g.s.)}$=6.87${}^{+0.70}_{-0.87}$ and $R_{S}^{(exc)}$=6.11${}^{+0.67}_{-0.86}$, respectively. These values are in good agreement with $R_{S}^{(g.s.)}$=6.6 and $R_{S}^{(exc)}$=5.9 deduced in Ref.[37] within the microscopic cluster model. This result also directly confirms our estimate of the ratio $R_{C;\,j_{f}}$ obtained above, since the ANCs for $t+\alpha\to{\rm{{}^{7}Li}}$(g.s.) and $t+\alpha\to{\rm{{}^{7}Li}}$(0.478 MeV), as well as the ANCs for ${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$(g.s.) and ${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$(0.429 MeV), determine the astrophysical $S$ factors for the $t(\alpha,\gamma)^{7}{\rm{Li}}$ and ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reactions at zero energy and, consequently, the ratios $R_{S}^{(g.s.)}$ and $R_{S}^{(exc)}$ are proportional to $R_{C;\,3/2}$ and $R_{C;\,1/2}$, respectively.
Fig. 5 _d_ shows a comparison between the branching ratio $R^{exp}(E)$ obtained in the present work (the open triangle and square symbols) and that recommended in Ref.[41] (the filled square symbols), in [7, 9] (the filled circle symbols) and in [10] (the filled triangle symbols). The weighted mean $\bar{R}^{exp}$ of the $R^{exp}(E)$ recommended by us is $\bar{R}^{exp}$=0.43$\pm$0.01. As seen from Fig.5 _d_, the branching ratio obtained in the present work and in [7, 9, 10] is in good agreement with that recommended in Ref.[41], even though the $S_{34}^{exp}(E)$ obtained in Ref.[41] are underestimated. Such good agreement between the two sets of experimental data for $R^{exp}(E)$ can apparently be explained by the fact that the reduction factor in [41] is common to the ${}^{3}{\rm{He}}(\alpha,\gamma){\rm{{}^{7}Be}}$(g.s.) and ${}^{3}{\rm{He}}(\alpha,\gamma){\rm{{}^{7}Be}}$(0.429 MeV) astrophysical $S$ factors. The present result for $\bar{R}^{exp}$ is in excellent agreement with 0.43$\pm$0.02 [41] and 0.43 [18, 42] but is noticeably larger than 0.37 [16] and 0.32$\pm$0.01 [43].
## 3 Conclusion
The analysis of the modern experimental astrophysical $S$ factors, $S^{exp}_{34}(E)$, for the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction, precisely measured at energies $E$=92.9–1235 keV [6–11], has been performed within the modified two-body potential approach [23]. This detailed analysis shows quantitatively that, within the considered energy range, the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction is mainly peripheral and that the parametrization of the direct astrophysical $S$ factors in terms of the ANCs for ${}^{3}{\rm{He}}+\alpha\rightarrow{\rm{{}^{7}Be}}$ adequately reflects the physics of the peripheral reaction under consideration.
It is shown that the experimental astrophysical $S$ factors of the reaction under consideration [6–11] can be used as an independent source of information about the ANCs (or NVCs) for ${}^{3}{\rm{He}}+\alpha\rightarrow\rm{{}^{7}Be}$ (or for the virtual decay $\rm{{}^{7}Be}\rightarrow\alpha+^{3}{\rm{He}}$), and that the ANCs found in this way reproduce the separate experimental astrophysical $S$ factors for the ${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$(g.s.) and ${\rm{{}^{3}He}}(\alpha,\gamma){\rm{{}^{7}Be}}$(0.429 MeV) reactions, derived from the total experimental astrophysical $S$ factors [6–9,11], in the experimentally accessible energy region (126.5$\leq E\leq$ 1203 keV). New estimates of the weighted means of the ANCs for ${}^{3}{\rm{He}}+\alpha\rightarrow{\rm{{}^{7}Be}}$ and of the NVCs for the virtual decay $\rm{{}^{7}Be}\rightarrow\alpha+^{3}{\rm{He}}$ are obtained. The ANC values were also used to obtain information about the $\alpha$-particle spectroscopic factors for the mirror (${}^{7}{\rm{Li}}\,{\rm{{}^{7}Be}}$)-pair.
The obtained values of the ANCs were also used to extrapolate the astrophysical $S$ factors to energies below 90 keV, including $E$=0. In particular, the weighted mean of the branching ratio ($\bar{R}^{exp}$=0.43$\pm$0.01) and the total astrophysical $S$ factor ($S_{34}$(0)=0.613${}^{{\rm{+0.026}}}_{{\rm{-0.063}}}$ ${\rm{keV\,\,b}}$) obtained here agree with those deduced in [7–11] from the analysis of the same experimental astrophysical $S$ factors. Besides, our result for $S_{34}(0)$ is in agreement with $S_{34}(0)$=0.56 keV b [14] obtained within the microscopic single-channel ($\alpha+{\rm{{}^{3}He}}$) cluster model and with $S_{34}(0)$=0.609 ${\rm{keV\,\,b}}$ [12] and $S_{34}(0)$=0.621 ${\rm{keV\,\,b}}$ [36] obtained within the ($\alpha+{\rm{{}^{3}He}}$)-channel version of the resonating-group method, but it is noticeably larger than the result $S_{34}(0)$=0.516 (0.53) keV b of [18] ([19]) obtained within the standard two-body ($\alpha+{\rm{{}^{3}He}}$) potential model using an $\alpha{\rm{{}^{3}He}}$ potential deduced by a double-folding procedure.
Acknowledgments
The authors are deeply grateful to S. V. Artemov, L. D. Blokhintsev and A. M. Mukhamedzhanov for discussions and general encouragement. The authors also thank D. Bemmerer for providing the experimental results of the updated data analysis. This work has been supported by the Academy of Sciences of the Republic of Uzbekistan (Grant No. FA-F2-F077).
Appendix: Basic formulae
Here we recall only the idea and the essential formulae of the MTBPA [23], specialized for the ${}^{3}{\rm{He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ astrophysical $S$ factor, that are important for the analysis presented above. According to [23], for fixed $l_{f}$ and $j_{f}$ we can write the astrophysical $S$ factor, $S_{l_{f}j_{f}}(E)$, for the peripheral direct capture ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ reaction in the following form
$S_{l_{f}j_{f}}(E)=C^{2}_{l_{f}j_{f}}{\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)}).$ (A1)
Here, $C_{l_{f}j_{f}}$ is the ANC for
${\rm{{}^{3}He}}+\alpha\to{\rm{{}^{7}Be}}$, which determines the amplitude of
the tail of the ${\rm{{}^{7}Be}}$ nucleus bound state wave function in the
($\alpha+{\rm{{}^{3}He}}$)-channel and is related to the spectroscopic factor
$Z_{l_{f}j_{f}}$ for the ($\alpha+{\rm{{}^{3}He}}$)-configuration with the
quantum numbers $l_{f}$ and $j_{f}$ in the ${\rm{{}^{7}Be}}$ nucleus by the
equation $C_{l_{f}j_{f}}=Z_{l_{f}j_{f}}^{1/2}C^{(sp)}_{l_{f}j_{f}}$ [24], and
${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})=S_{l_{f}j_{f}}^{(sp)}(E)/(C^{(sp)}_{l_{f}j_{f}})^{2}$,
where $S_{l_{f}j_{f}}^{(sp)}(E)$ is the single-particle astrophysical $S$
factor [5] and $C^{(sp)}_{l_{f}j_{f}}$ ($\equiv C^{(sp)}_{l_{f}j_{f}}(R,a)$
[44]) is the single-particle ANC, which determines the amplitude of the tail
of the single-particle wave function of the bound ($\alpha+^{3}{\rm{He}}$)
state
$\varphi_{l_{f}j_{f}}(r)$($\equiv\varphi_{l_{f}j_{f}}(r;C^{(sp)}_{l_{f}j_{f}}$)
[44]) and is in turn a function of the geometric parameters (radius $R$ and diffuseness $a$) of the Woods-Saxon potential, i.e. $C^{(sp)}_{l_{f}j_{f}}\equiv C^{(sp)}_{l_{f}j_{f}}(R,a)$ [44].
In order to make the dependence of the ${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$ function on $C_{l_{f}j_{f}}^{(sp)}$ more explicit, we split the radial matrix element [23, 26] entering the ${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$ function into two parts separated by the channel radius $r_{c}$: the interior part ($0\leq r\leq r_{c}$), where nuclear forces between the $\alpha{\rm{{}^{3}He}}$-pair are important, and the exterior part ($r_{c}\leq r<\infty$), where the interaction between the $\alpha$-particle and ${\rm{{}^{3}He}}$ is governed by the Coulomb force only. The contribution of the exterior part of the radial matrix element to the ${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$ function does not depend on $C_{l_{f}j_{f}}^{(sp)}$, since for $r>r_{c}$ the wave function $\varphi_{l_{f}j_{f}}(r;C_{l_{f}j_{f}}^{(sp)})$ can be approximated by its asymptotic behavior [24]. Consequently, the parametrization of the astrophysical $S$ factor in the form (A1) makes it possible to fix the contribution from the exterior region ($r_{c}\leq r<\infty$), which is dominant for the peripheral reaction, in a model-independent way if the ANCs $C_{l_{f}j_{f}}^{2}$ are known. It follows that the contribution of the interior part of the radial matrix element to the ${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$ function, which depends on $C_{l_{f}j_{f}}^{(sp)}$ through the ratio $\varphi_{l_{f}j_{f}}(r;C_{l_{f}j_{f}}^{(sp)})/C_{l_{f}j_{f}}^{(sp)}$ [44, 45], determines entirely the dependence of the ${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$ function on $C_{l_{f}j_{f}}^{(sp)}$.
In Eq. (A1) the ANCs $C_{l_{f}j_{f}}^{2}$ and the free parameter $C_{l_{f}j_{f}}^{(sp)}$ are unknown. However, for the peripheral ${}^{3}{\rm{He}}(\alpha,\gamma){\rm{{}^{7}Be}}$ reaction equation (A1) can be used to determine the ANCs. To this end, the following additional requirements [23]
${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})=f(E)$ (A2)
and
$C_{l_{f}j_{f}}^{2}=\frac{S_{l_{f}j_{f}}(E)}{{\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})}={\rm const}$ (A3)
must be fulfilled as functions of the free parameter $C_{l_{f}j_{f}}^{(sp)}$ for each experimental energy point $E$ in the range $E_{min}\leq E\leq E_{max}$, with the values of ${\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})$ taken from (A2).
The fulfillment of relations (A2) and (A3), or their violation, within the experimental uncertainty of $S_{l_{f}j_{f}}^{exp}(E)$ enables one, first, to determine the energy interval where extra-nuclear capture dominates and, second, to obtain the value $(C_{l_{f}j_{f}}^{exp})^{2}$ for ${}^{3}{\rm{He}}+\alpha\to{\rm{{}^{7}Be}}$ by using the experimental astrophysical $S$ factors $S^{exp}_{l_{f}j_{f}}(E)$, precisely measured by the authors of Refs. [6–11], instead of $S_{l_{f}j_{f}}(E)$, i.e.
$(C_{{l_{f}j_{f}}}^{exp})^{2}=\frac{S^{exp}_{l_{f}j_{f}}(E)}{{\cal{R}}_{l_{f}j_{f}}(E,C_{l_{f}j_{f}}^{(sp)})}.$ (A4)
The value $(C_{l_{f}j_{f}}^{exp})^{2}$ obtained in this way can then be used in (A1) to extrapolate the astrophysical $S$ factor $S_{l_{f}j_{f}}(E)$ to the region of experimentally inaccessible energies $0\leq E<E_{min}$.
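A minimal numerical sketch of this procedure is given below. It is illustrative only: the arrays of $S^{exp}_{l_{f}j_{f}}(E)$ and ${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ values are placeholders, not the data of Refs. [6–11] or the calculated ${\cal{R}}$ function of this work. Each experimental point yields a value of $(C^{exp}_{l_{f}j_{f}})^{2}$ through (A4), the weighted mean is formed, and (A1) is then used to extrapolate to low energies.

```python
import numpy as np

def anc_squared(S_exp, S_err, R_vals):
    """Eq. (A4): C^2 = S_exp(E)/R(E) for every experimental point, with errors."""
    return S_exp / R_vals, S_err / R_vals

def weighted_mean(values, errors):
    w = 1.0 / errors**2
    return np.sum(w * values) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Placeholder inputs (S in keV b, C^2 in fm^-1); not the published data sets.
E      = np.array([93.0, 106.0, 147.0, 170.0, 420.0])    # keV
S_exp  = np.array([0.387, 0.365, 0.352, 0.362, 0.297])   # keV b
S_err  = np.array([0.031, 0.030, 0.017, 0.020, 0.020])
R_vals = S_exp / 21.0                                     # fake R(E, C_sp), giving C^2 ~ 21 fm^-1

C2, C2_err = anc_squared(S_exp, S_err, R_vals)
C2_mean, C2_mean_err = weighted_mean(C2, C2_err)
print("per-point C^2:", np.round(C2, 2), " weighted mean:", round(C2_mean, 2))

# Eq. (A1): extrapolate S(E) below E_min with the extracted ANC.
R_at_zero = 0.0267                                        # placeholder value of R(0, C_sp), keV b fm
print("S(0) ~", round(C2_mean * R_at_zero, 3), "keV b")
```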
The total astrophysical $S$ factor for the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.+0.429 MeV) reaction is
given by
$S_{34}(E)=\sum_{l_{f}=1;\,j_{f}=1/2,\,3/2}S_{l_{f}j_{f}}(E)$ (A5)
$=C_{1\,\,3/2}^{2}[{{\cal R}}_{1\,\,3/2}(E,C_{1\,\,3/2}^{(sp)})+\lambda_{C}{{\cal R}}_{1\,\,1/2}(E,C_{1\,\,1/2}^{(sp)})]$ (A6)
$=C_{1\,\,3/2}^{2}{{\cal R}}_{1\,\,3/2}(E,C_{1\,\,3/2}^{(sp)})[1+R(E)],$ (A7)
in which $\lambda_{C}=(C_{1\,\,1/2}/C_{1\,\,3/2})^{2}$ and $R(E)$ is the branching ratio.
One notes that in the two-body potential model the ANC $C_{l_{f}j_{f}}$ is
related to the NVC $G_{l_{f}j_{f}}$ for the virtual decay
${\rm{{}^{7}Be}}\to\alpha+{\rm{{}^{3}He}}$ by the equation [24]
$G_{l_{f}j_{f}}=-i^{l_{f}+\eta_{{\rm{\,{}^{7}Be}}}}\frac{\sqrt{\pi}}{\mu}C_{l_{f}j_{f}},$ (A8)
where $\eta_{{\rm{\,{}^{7}Be}}}$ is the Coulomb parameter for the ${\rm{{}^{7}Be}}\,(\alpha+{\rm{{}^{3}He}})$ bound state. In (A8) the combinatorial factor taking into account the identity of the nucleons is absorbed in $C_{l_{f}j_{f}}$, and its numerical value depends on the specific model used to describe the wave functions of the ${\rm{{}^{3}He}}$, $\alpha$ and ${\rm{{}^{7}Be}}$ nuclei [25]. Hence, the proportionality factor in (A8), which relates the NVCs and the ANCs, depends on the choice of nuclear model [25]. However, as noted in [25], the NVC $G_{l_{f}j_{f}}$ is a more fundamental quantity than the ANC $C_{l_{f}j_{f}}$, since the NVC is determined in a model-independent way as the residue of the partial $S$-matrix of elastic $\alpha{\rm{{}^{3}He}}$-scattering at the pole $E=-\varepsilon_{\alpha{\rm{{}^{3}He}}}$ ($\varepsilon_{\alpha{\rm{{}^{3}He}}}$ being the binding energy of the bound ($\alpha+{\rm{{}^{3}He}}$) state of ${\rm{{}^{7}Be}}$) [24, 25]. Therefore, it is also of interest to obtain information about the values of the NVCs from Eqs. (A4) and (A8).
## References
* [1] J. N. Bahcall, A. M. Serenelli, and S. Basu, Astrophys. J. 621, L85 (2005).
* [2] J. N. Bahcall, W.F. Huebner, S. H. Lubow, P. D. Parker, and R. K. Ulrich, Rev.Mod.Phys. 54, 767 (1982).
* [3] J. N. Bahcall, and M. H. Pinsonneault, Rev.Mod.Phys. 64, 781 (1992).
* [4] E.G. Adelberger, S.M.Austin, J.N. Bahcall, A.B.Balantekin, G.Bogaert, L.S.Brown, L.Buchmann, F.E.Cecil, A.E.Champagne, L. de Braeckeleer, Ch.A. Duba, S.R. Elliott, S.J. Freedman, M. Gai, G.Goldring, Ch. R. Gould, A. Gruzinov, W.C. Haxton, K.M. Heeger, and E.Henley, Rev. Mod. Phys. 70, 1265 (1998).
* [5] C.Angulo, M.Arnould, M.Rayet, P.Descouvemont, D.Baye, C.Leclercq-Willain, A.Coc, S.Barhoumi, P.Aguer, C.Rolfs, R.Kunz, J.W. Hammer, A.Mayer, T. Paradellis, S.Kossionides, C.Chronidou, K.Spyrou, S.Degl’Innocenti, G. Fiorentini, B.Ricci, S.Zavatarelli, C.Providencia, H.Wolters, J.Soares, C.Grama, J.Rahighi, A.Shotter, M.L.Rachti, Nucl. Phys. A 656, 3 (1999).
* [6] B. S. Nara Singh, M. Hass, Y. Nir-El, and G. Haquin, Phys.Rev.Lett. 93, 262503 (2004).
* [7] D. Bemmerer, F. Confortola, H. Costantini, A. Formicola, Gy. Gyürky, R. Bonetti, C. Broggini, P. Corvisiero, Z. Elekes, Zs. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Imbriani, M. Junker, M. Laubenstein, A. Lemut, B. Limata, V.Lozza, M. Marta, R. Menegazzo, P. Prati, V. Roca, C. Rolfs, C. Rossi Alvarez, E. Somorjai, O. Straniero, F. Strieder, F. Terrasi, and H.P. Trautvetter (The LUNA Collaboration), Phys.Rev.Lett. 97, 122502 (2006); private communication.
* [8] Gy. Gyürky, F. Confortola, H. Costantini, A. Formicola, D. Bemmerer, R. Bonetti, C. Broggini, P. Corvisiero, Z. Elekes, Zs. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Imbriani, M. Junker, M. Laubenstein, A. Lemut, B. Limata, V.Lozza, M. Marta, R. Menegazzo, P. Prati, V. Roca, C. Rolfs, C. Rossi Alvarez, E. Somorjai, O. Straniero, F. Strieder, F. Terrasi, and H.P. Trautvetter (The LUNA Collaboration), Phys.Rev. C 75, 035805 (2007) .
* [9] F. Confortola, D. Bemmerer, H. Costantini, A. Formicola, Gy. Gyürky, P. Bezzon, R. Bonetti, C. Broggini, P. Corvisiero, Z. Elekes, Zs. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Imbriani, M. Junker, M. Laubenstein, A. Lemut, B. Limata, V.Lozza, M. Marta, R. Menegazzo, P. Prati, V. Roca, C. Rolfs, C. Rossi Alvarez, E. Somorjai, O. Straniero, F. Strieder, F. Terrasi, and H.P. Trautvetter (The LUNA Collaboration), Phys.Rev. C 75, 065803 (2007).
* [10] T.A.D. Brown, C. Bordeanu, R.F. Snover, D.W. Storm, D. Melconian, A.L. Sallaska, S.K.L. Sjue, S. Triambak, Phys.Rev. C 76, 055801 (2007).
* [11] A Di Leva, L. Gialanella, R. Kunz, D. Rogalla, D. Schürmann, F. Strieder, M. De Cesare, N. De Cesare, A. D’Onofrio, Z. Fülöp, G. Gyürky, G. Imbariani, G. Mangano, A. Ordine, V. Roca, C. Rolfs, M. Romano, E. Somorjai, and F. Terrasi, Phys.Rev.Lett. 102, 232502 (2009).
* [12] T. Kajino, Nucl.Phys. A 460, 559 (1986).
* [13] P. Descouvemont, A. Adahchour, C. Angulo, A. Coc, E. Vangioni-Flam, At. Data Nucl. Data Tables 88, 203 (2004).
* [14] K. Langanke, Nucl. Phys. A 457, 351 (1986).
* [15] A. Csótó and K. Langanke, Few-body Systems. 29, 121 (2000).
* [16] K.M. Nollett, Phys. Rev. C 63, 054002 (2001).
* [17] S.B. Dubovitchenko, A.V. Dzhazairov-Kakhramanov, Yad. Fiz. 58, 635 (1995) [Phys. At. Nucl. 58, 579 (1995)].
* [18] P. Mohr, H. Abele, R. Zwiebel, G. Staudt, H. Krauss, H. Oberhummer, A. Denker, J.W. Hammer and F. Wolf, Phys. Rev. C 48, 1420 (1993).
* [19] P. Mohr, Phys. Rev. C 79, 065804 (2009).
* [20] S.B. Igamov, T.M. Tursunmuratov, and R. Yarmukhamedov,Yad.Fiz. 60, 1252 (1997) [Phys. At. Nucl. 60, 1126 (1997)].
* [21] J.L. Osborne, C. A. Barnes, R.W. Ravanagh, R.M. Kremer, G.J. Mathews, J.L. Zyskind, P.D. Parker, and A.J. Howard, Nucl.Phys.A 419, 115(1984).
* [22] R.F. Christy and I. Duck, Nucl.Phys. 24, 89 (1961).
* [23] S.B. Igamov, and R. Yarmukhamedov, Nucl.Phys.A 781, 247 (2007);Nucl.Phys.A 832, 346 (2010).
* [24] L.D. Blokhintsev, I.Borbely, E.I. Dolinskii, Fiz.Elem.Chastits At. Yadra. 8, 1189 (1977)[Sov. J. Part. Nucl. 8, 485 (1977)].
* [25] L.D. Blokhintsev and V.O. Yeromenko, Yad.Fiz. 71, 1219 (2008) [Phys. At. Nucl. 71, 1126 (2008)].
* [26] R.G.H. Robertson, P. Dyer, R.A. Warner, R.C. Melin, T.J. Bowles, A.B. McDonald, G.C. Ball, W.G. Davies, E.D. Earle, Phys.Rev.Lett. 47, 1867 (1981).
* [27] K.H. Kim, M.H. Park, B.T. Kim, Phys.Rev. C 35 363 (1987).
* [28] S.B. Igamov and R. Yarmukhamedov, Phys. At. Nucl. 71, 1740 (2008).
* [29] S.V. Artemov, S.B. Igamov, K.I. Tursunmakhatov, and R. Yarmukhamedov, Izv. RAN: Seriya Fizicheskaya, 73, 176 (2009)[Bull.RAS:Physics,73, 165 (2009)].
* [30] V.G. Neudatchin, V.I. Kukulin, A.N. Boyarkina and V.D. Korennoy, Lett.Nuovo Cim.5, 834 (1972).
* [31] V.I. Kukulin, V.G. Neudatchin, and Yu. F. Smirnov, Nucl.Phys. A 245, 429 (1975).
* [32] V.I. Kukulin, V.G. Neudatchin, I.T. Obukhovsky and Yu.F. Smirnov. "Clusters as Subsystems in Light Nuclei". In Clustering Phenomena in nuclei, Vol.3, eds. K. Wildermuth and P. Kramer ( Vieweg , Braunschweig, 1983)p.1.
* [33] S.B. Dubovichenko and M.A. Zhusupov, Ivz. Akad. Nauk Kaz.SSR, Ser. Fiz.-Mat. 4, 44 (1983); Yad. Fiz. 39, 1378 (1984) [Sov. J. Nucl. Phys.39, 870 (1984).]
* [34] A.L.C. Barnard, C. M. Jones, and G.C. Phillips, Nucl.Phys. 50, 629 (1964).
* [35] R. J. Spiger and T. A. Tombrello, Phys.Rev. 163, 964 (1967).
* [36] H. Walliser, H. Kanada, and Y.C. Tang, Nucl.Phys. A 419, 133 (1984).
* [37] N.K. Timofeyuk, P. Descouvemont, R.C. Johnson, Phys.Rev. C 75, 034302 (2007).
* [38] R. H. Cyburt and B. Davids, Phys.Rev.C 78, 064614(2008).
* [39] D. Kurath and D. J. Millener, Nucl.Phys.A 238, 269 (1975).
* [40] C.R. Brune, W.H. Geist, R.W. Kavanagh, K.D. Veal, Phys.Rev.Lett. 83, 4025 (1999).
* [41] H.Kräwinkel, H.W. Becker, L. Buchmann, J. Görres, K.U. Kettner, W.E. Kieser, R. Santo, P. Schmalbrock, H.P. Trautvetter, A. Vlieks, C. Rolfs, J.W. Hammer, R.E. Azuma, W.S. Rodney, Z.Phys. A 304, 307 (1982).
* [42] T. Altmeyer, E. Kolbe, T. Warmann, K. Langanke, H.J. Assenbaum, Z.Phys. A 330, 277 (1988).
* [43] U. Schröder, A. Redder, C. Rolfs, R.E. Azuma, L. Buchmann, C. Campbell, J.D. King, T.R. Donoghue, Phys.Lett. B 192, 55 (1987).
* [44] S. A. Goncharov, J. Dobesh, E. I. Dolinskii, A. M. Mukhamedzhanov and J. Cejpek, Yad. Fiz.35, 662 (1982)[Sov. J. Nucl. Phys. 35, 383 (1982)].
* [45] A. M. Mukhamedzanov, F. M. Nunes, Phys.Rev. C 72, 017602 (2005).
Figure 1: The dependence of ${\cal{R}}_{l_{f}j_{f}}(E,C^{(sp)}_{l_{f}j_{f}})$ on the single-particle ANC, $C^{(sp)}_{l_{f}j_{f}}$, for the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.) ($(l_{f},j_{f})$=(1,3/2)) and ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(0.429 MeV) ($(l_{f},j_{f})$=(1,1/2)) reactions at different energies $E$.
Figure 2: The energy dependence of the $\alpha^{3}{\rm{He}}$-elastic
scattering phase shifts for the $s_{1/2}$ and $p_{3/2}$ partial waves. The
experimental data are from [34, 35]. The bands are our calculated data. The
width of the bands for fixed energies corresponds to the variation of the
parameters $R$ and $a$ of the adopted Woods–Saxon potential within the
intervals of $R$=1.62 to 1.98 fm and $a$=0.63 to 0.77 fm.
Figure 3: The dependence of the ANCs $C_{l_{f}j_{f}}$ (upper band) and the
spectroscopic factors $Z_{l_{f}j_{f}}$ (lower band) on the single-particle ANC
$C^{(sp)}_{l_{f}j_{f}}$ for the
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.) (the left column,
$(l_{f},j_{f})$=(1,3/2)) and
${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(0.429 MeV) (the right column,
$(l_{f},j_{f})$=(1,1/2)) reactions at different energies $E$.
Figure 4: The values of the ANCs, $C^{2}_{1\,\,3/2}$ and $C^{2}_{1\,\,1/2}$, for $\alpha+{\rm{{}^{3}He}}\to{\rm{{}^{7}Be}}$(g.s.) ((_a_) and (_b_)) and $\alpha+^{3}{\rm{He}}\to^{7}{\rm{Be}}$(0.429 MeV) ((_c_) and (_d_)) for each experimental energy point $E$. The open triangle and open circle symbols and the filled triangles (the filled star (activation) and filled square (prompt method) symbols) are data obtained using the total (separated) experimental astrophysical $S$ factors from [6] (activation), [7, 9] (activation and prompt method) and [11] (${\rm{{}^{7}Be}}$ recoils) (from [10]), respectively, while the filled circle symbols are data obtained from the separated experimental astrophysical $S$ factors of Refs.[7–9]. The solid lines show our results for the weighted means. In all cases the width of each band corresponds to the weighted uncertainty.
Figure 5: The astrophysical $S$ factors for the ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$(g.s.) (_a_), ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ (0.429 MeV) (_b_) and ${}^{3}{\rm{He}}(\alpha,\gamma)^{7}{\rm{Be}}$ (0.429 MeV) (_c_) reactions, as well as the branching ratio (_d_). In (_a_) and (_b_): the open diamond and triangle symbols (the filled triangle symbols) are our results separated from the total experimental astrophysical $S$ factors of Refs.[7–9] and [6], respectively ([11]); the filled circle symbols (filled star and square symbols) are the experimental data of Ref.[9] (Ref.[10], the activation and the prompt method, respectively); the open circle symbols are our extrapolation results. In (_c_), the symbols are the data of all experiments [6]–[10] and of the present work (the open circle symbols). The solid lines show our calculations performed with the standard values of the geometric parameters $R$=1.80 fm and $a$=0.70 fm. The dashed line is the result of [18, 19]. In (_d_): the filled circle, triangle and square symbols are experimental data taken from Refs.[7, 9], [10] and [41], respectively, and the open triangle and square symbols are our results. The straight line (the width of the band) shows our result for the weighted mean (its uncertainty).
Table 1: The "indirectly determined" values of the asymptotic normalization constants ($(C_{13/2}^{exp})^{2}$ and $(C_{11/2}^{exp})^{2}$) for ${}^{3}{\rm{He}}+\alpha\to^{7}{\rm{Be}}$, the experimental astrophysical $S$ factors ($S^{exp}_{1j_{f}}$ and $S^{exp}_{34}(E)$) and the branching ratio ($R^{exp}(E)$) at different energies $E$.
$E$ (keV) | $(C_{1\,3/2}^{exp})^{2}$ (fm$^{-1}$) | $(C_{1\,1/2}^{exp})^{2}$ (fm$^{-1}$) | $S^{exp}_{1\,3/2}$ (${\rm{keV\,b}}$) | $S^{exp}_{1\,1/2}$ (${\rm{keV\,b}}$) | $S^{exp}_{34}(E)$ (${\rm{keV\,b}}$) | $R^{exp}(E)$
---|---|---|---|---|---|---
92.9∗) | 22.0$\pm$1.8 | 14.0$\pm$1.2 | 0.387$\pm$0.031[7, 9] | 0.147$\pm$0.012[7, 9] | 0.534$\pm$0.023[7, 9] | 0.380$\pm$0.030[7, 9]
93.3∗∗) | 21.4$\pm$1.4 | 14.7$\pm$0.9 | 0.374$\pm$0.020 | 0.153$\pm$0.009 | 0.527$\pm$0.027[9] | 0.409$\pm$0.03
105.6∗) | 21.0$\pm$1.8 | 14.6$\pm$1.2 | 0.365$\pm$0.030[7, 8] | 0.151$\pm$0.012[7, 8] | 0.516$\pm$0.031[7, 8] | 0.415$\pm$0.029[7, 9]
106.1∗∗) | 21.2$\pm$1.3 | 14.6$\pm$0.9 | 0.368$\pm$0.020 | 0.150$\pm$0.009 | 0.518$\pm$0.024[9] | 0.408$\pm$0.020
126.5∗) | 21.2$\pm$0.9 | 14.6$\pm$0.6 | 0.366$\pm$0.023 | 0.148$\pm$0.009 | 0.514$\pm$0.019[7, 8] | 0.404$\pm$0.020
147.7∗) | 20.8$\pm$1.1 | 14.6$\pm$0.7 | 0.352$\pm$0.017[7] | 0.147$\pm$0.007[7] | 0.499$\pm$0.017[7, 8] | 0.417$\pm$0.020[7]
168.9∗) | 20.6$\pm$0.8 | 14.1$\pm$0.6 | 0.343$\pm$0.010 | 0.139$\pm$0.006 | 0.482$\pm$0.017[7, 8] | 0.405$\pm$0.020
170.1∗∗) | 21.9$\pm$1.2 | 15.0$\pm$0.8 | 0.362$\pm$0.020 | 0.148$\pm$0.008 | 0.510$\pm$0.021[9] | 0.409$\pm$0.030
420.0∗) | 21.4$\pm$1.7 | 14.7$\pm$1.1 | 0.297$\pm$0.020 | 0.123$\pm$0.009 | 0.420$\pm$0.030[6] | 0.414$\pm$0.050
506.0∗) | 20.9$\pm$1.9 | 14.3$\pm$1.3 | 0.266$\pm$0.020 | 0.113$\pm$0.010 | 0.379$\pm$0.030[6] | 0.424$\pm$0.050
615.0∗) | 21.5$\pm$1.4 | 14.7$\pm$0.9 | 0.254$\pm$0.020 | 0.108$\pm$0.006 | 0.362$\pm$0.020[6] | 0.425$\pm$0.040
951.0∗) | 22.7$\pm$1.2 | 15.6$\pm$0.8 | 0.220$\pm$0.010 | 0.096$\pm$0.005 | 0.316$\pm$0.010[6] | 0.436$\pm$0.030
∗) the activation
∗∗) the prompt method
Table 2: The weighted means of the ANC values $(C^{{\rm exp}})^{2}$ for ${}^{3}{\rm He}+\alpha\rightarrow^{7}{\rm Be}$, the NVCs $\mid G\mid^{2}_{{\rm exp}}$ and the calculated values of $S_{3\,4}(E)$ at energies $E$=0 and 23 keV. The second and third lines (the fifth, sixth and ninth lines) correspond to the results obtained from the analysis of the data of the works indicated in the first column, and the penultimate line corresponds to the result obtained using the data of all experiments [6–11]. The figures in parentheses are the weighted means obtained from the corresponding values given for the activation and the prompt method.
Experimental data | $(C^{{\rm exp}}_{1\,3/2})^{2}$, | $\mid G_{1\,3/2}\mid^{2}_{{\rm exp}}$, | $(C^{{\rm exp}}_{1\,1/2})^{2}$, | $\mid G_{1\,1/2}\mid^{2}_{{\rm exp}}$, | $S_{3\,4}$(0), | $S_{3\,4}$(23 keV),
---|---|---|---|---|---|---
| fm-1 | fm | fm-1 | fm | keV b | keV b
[6–9](the activation) | 21.2$\pm$0.4 | 1.00$\pm$0.02 | 14.5$\pm$0.3 | 0.688$\pm$0.013 | 0.560$\pm$0.003 | 0.551$\pm$0.003
[9](the prompt | 21.5$\pm$0.7 | 1.02$\pm$0.04 | 14.8$\pm$0.5 | 0.697$\pm$0.024 | 0.568$\pm$0.002 | 0.558$\pm$0.002
method), the set I | (21.3$\pm$0.4) | (1.01$\pm$0.02) | (14.6$\pm$0.2) | (0.690$\pm$0.011) | (0.566$\pm$ 0.004) | (0.556$\pm$0.003)
| | | | | 0.560$\pm$0.017 [9] |
[10](the activation) | 24.0$\pm$0.4 | 1.13$\pm$0.02 | 16.2$\pm$0.2 | 0.768$\pm$0.011 | 0.630$\pm$0.008 | 0.619$\pm$0.008
[10](the prompt me- | 24.1$\pm$0.3 | 1.14$\pm$0.02 | 16.4$\pm$0.2 | 0.773$\pm$0.011 | 0.624$\pm$0.010 | $0.612\pm$0.010
thod) and [11] (the | (24.1$\pm$0.2) | (1.14$\pm$0.01) | (16.3$\pm$0.2) | (0.771$\pm$0.008) | (0.628$\pm$ 0.006) | (0.616$\pm$0.006)
${\rm{{}^{7}Be}}$ recoils),the set II | | | | | 0.596$\pm$ 0.021 [10] |
| | | | | 0.57$\pm$ 0.04 [11] |
${\rm{[6-10]}}$ | | | | | 0.580$\pm$ 0.043 [38] |
[6–11] | 23.3${}^{{\rm{+1.0}}}_{{\rm{-2.4}}}$ | 1.10${}^{{\rm{+0.05}}}_{{\rm{-0.11}}}$ | 15.9${}^{{\rm{+0.6}}}_{{\rm{-1.5}}}$ | 0.751${}^{{\rm{+0.028}}}_{{\rm{-0.072}}}$ | 0.613${}^{{\rm{+0.026}}}_{{\rm{-0.063}}}$ | 0.601${}^{{\rm{+0.030}}}_{{\rm{-0.072}}}$
|
arxiv-papers
| 2009-05-13T07:13:48 |
2024-09-04T02:49:02.587515
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "S.B. Igamov, Q.I. Tursunmahatov and R. Yarmukhamedov (INP, Tashkent,\n Uzbekistan)",
"submitter": "Sayrambay Igamov Dr.",
"url": "https://arxiv.org/abs/0905.2026"
}
|
0905.2125
|
Experience-driven formation of parts-based representations in a model of
layered visual memory
Jenia Jitsev1,2,∗ and Christoph von der Malsburg1
1Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
2Johann Wolfgang Goethe University, Frankfurt am Main, Germany
Running Title:
Formation of layered visual memory.
Correspondence:
Jenia Jitsev
Frankfurt Institute for Advanced Studies (FIAS),
Ruth-Moufang-Str.1, 60438 Frankfurt am Main, Germany
jitsev@fias.uni-frankfurt.de
## Abstract
Growing neuropsychological and neurophysiological evidence suggests that the
visual cortex uses parts-based representations to encode, store and retrieve
relevant objects. In such a scheme, objects are represented as a set of
spatially distributed local features, or parts, arranged in stereotypical
fashion. To encode the local appearance and to represent the relations between
the constituent parts, there has to be an appropriate memory structure formed
by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running on top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay down
the basis for cooperation and competition between the distributed units and
their synaptic connections. Choosing human face recognition as a test task, we
show that, under the condition of open-ended, unsupervised incremental
learning, the system is able to form memory traces for individual faces in a
parts-based fashion. On a lower memory layer the synaptic structure is
developed to represent local facial features and their interrelations, while
the identities of different persons are captured explicitly on a higher layer.
An additional property of the resulting representations is the sparseness of
both the activity during the recall and the synaptic patterns comprising the
memory traces.
## Keywords
Visual memory, self-organization, unsupervised learning, competitive learning,
bidirectional plasticity, activity homeostasis, parts-based representation,
cortical column
## 1\. Introduction
A working hypothesis of cognitive neuroscience states that the higher
functions of the brain require coordinated interplay of multiple cortical
areas distributed over the brain-wide network. For instance, the mechanisms of
memory are thought to be subserved by various cortical and subcortical
regions, including the medial temporal lobe (MTL), inferior temporal (IT) and
prefrontal (PFC) cortex areas (Fuster, 1997; Miyashita, 2004), to name only a few of those prominent in visual memory function. Studies of information
processing going on in the course of encoding, consolidation and retrieval of
visual representations reveal a hierarchical organization, sparse distributed
activity and massive recurrent communication within the memory structure (Tsao
et al., 2006; Konen and Kastner, 2008; Osada et al., 2008). Here we focus our
attention on developmental issues and discuss the process of self-organization
that may lead to the formation of the core structure responsible for flexible,
rapid and efficient memory function, with organizational properties as
inferred from the experimental works.
It is widely held that processes responsible for memory formation rely on
activity-dependent modification of the synaptic transmission and on regulation
of the intrinsic properties of single neurons (Miyashita, 1988; Bear, 1996;
Zhang and Linden, 2003). However, it is far from clear how these local
processes could be orchestrated for memorizing complex visual objects composed
of many spatially distributed subparts arranged in stereotypic relations. In
mature cortex, there is strong evidence for a basic vocabulary of shape
primitives and elementary object parts in the TEO and TE areas of posterior
and anterior IT (Fujita et al., 1992; Tanaka, 2003) as well as for identity
and category specific neurons in anterior IT, PFC and hippocampus (Freedman et
al., 2003; Quiroga et al., 2005). Further findings indicate that the encoding
of visual objects involves the formation of sparse clusters of distributed
activity across the processing hierarchy within inferior temporal cortex
(Tsunoda et al., 2001; Reddy and Kanwisher, 2006). This seems to be a neuronal
basis for the parts-based representation that the visual system employs to
construct objects from their constituent part elements (Ullman et al., 2002;
Hayworth and Biederman, 2006).
In the light of these findings, we may ask ourselves whether the observed memory organization is the outcome of a self-organization process that would have to solve a number of developmental tasks. To
provide a neural substrate for the parts-based representation, memory traces
have to be formed and maintained in an unsupervised fashion to span the basic
vocabulary for the visual elements and to define associative links between
them. Subsets of associatively linked complex features can then be interpreted
as coherent objects composed of the respective parts. As there is a virtually
unlimited number of visual objects in the environment, the limited resources
spent on formation of these memory traces have to be carefully allocated to
avoid unfavorable interference effects and information loss caused by
potential memory content overlap. Thus, the system is permanently confronted with the problem of selecting, out of all the available and potentially conflicting synaptic resources, the right small population that has to be modified for the acquisition and consolidation of a novel stimulus. Moreover, if objects stored in memory are supposed to share common parts, a regulation mechanism is required to balance the usage load of part-specific units and minimize interference, ensuring their optimal participation in memory content formation and encoding. Another issue is the timing of the
modifications, which have to be coordinated properly if the correct relational
structure of distributed parts constituting the object’s identity is to be
stored in the memory.
The same selection problem arises on the fast time scale, during memory recall
or for encoding of a novel object. Currently, there is a broad agreement on
the sparseness of the activity patterns evoked by the presentation of a
complex visual object, where only a small fraction of the available neurons in
the higher visual cortex participate in the stimulus-related response (Rolls
and Tovee, 1995; Olshausen and Field, 2004; Quiroga et al., 2008). In the
context of the parts-based representation scheme, one possible interpretation
of sparse activation would be the selection of few parts from a large
overcomplete vocabulary for the composition of the global visual object.
Considering the speed of object recognition measured in psychophysical
experiments on humans and primates (Thorpe and Fabre-Thorpe, 2001), there have
to be neural mechanisms allowing this selection procedure to happen within the
very short time of a few hundred milliseconds. Moreover, if relations are to
be represented by dynamic assemblies of co-activated part-specific neurons,
such a combinatorial selection would require clear unambiguous temporal
correlations between the constituent neurons to identify them and only them as
being part of the same assembly encoding the object (von der Malsburg, 1999;
Singer, 1999).
Hypothesizing that the process of neural resource selection and its
coordination across distributed units is a crucial ingredient for successful
structure formation and learning, we address in this study the neural
mechanisms behind the selection process by incorporating them in a model of a
layered visual memory. Here we take the competition and cooperation between
the neuronal units as the functional basis for the structure formation (von
der Malsburg and Singer, 1988; Edelman, 1993) and provide modification
mechanisms based on activity-dependent bidirectional plasticity (Bienenstock
et al., 1982; Artola and Singer, 1993) and homeostatic activity regulation
(Desai et al., 1999). We confront the system with a task of unsupervised
learning and human face recognition using a database of natural face images.
Our aim is then to demonstrate the formation of synaptic memory structure
comprising bottom-up, lateral and top-down connectivity.
Starting from an initial undifferentiated connectivity state, the system is
able to form a representational basis for the storage of individual faces in a
parts-based fashion by developing memory traces for each individual person
over repeated presentations of the face images. The memory traces reside in the scaffold of lateral and top-down connectivity making up the
content of the associative memory that holds the associatively linked local
features on the lower and the configurational global identity on the higher
memory layer. The recognition of face identity can then be explicitly signaled
by the units on the higher memory layer (Fig. 1). By performing this self-
organization, the system solves a highly non-trivial and important problem of
capturing simultaneously local and global signal structure in an unsupervised,
open-ended fashion, learning not only the appearance of local parts, but also
memorizing their combinations to represent the global stimulus identity
explicitly in lateral and top-down connectivity. None of the previous works on unsupervised learning of natural object representations was able to solve this problem in such an explicit form (Waydo and Koch, 2008; Wallis et al., 2008).
As a consequence of this explicit representation, the local facial features
are interpreted in the global context of the identity of a person, making use
of the structure formed in the course of previous experience. This contextual
structure can also be utilized in generative fashion to replay the memory
content in the absence of external stimuli, also supporting the mechanism of
selective object-based attention. The binding of the local features and their
identity label into a coherent assembly is done in the course of a decision
cycle spanned by a common oscillatory rhythm. The rhythm modulates the
competition strength and builds up a frame for repetitive local winner-take-
all computation. As the agreement between incoming bottom-up, lateral and top-
down signals continuously improves during competitive learning, the bound assemblies reflect the face identities stored in memory more and more consistently, so that the recognition error progressively decreases. Moreover, the use of contextual connectivity speeds up the learning progress and leads to a greater capability to generalize to novel data not shown before. The view advanced here of structure formation as an optimization process driven by evolutionary mechanisms of selection and amplification may also serve as a conceptual basis for studying the self-organization of generic subsystem coordination, independent of the nature of the cognitive task.
## 2\. Materials and Methods
### 2.1. Visual memory network organization
Our model is based on two consecutive interconnected layers (Fig. 1), which we
tend to identify with the hierarchically organized regions of IT and PFC,
containing a number of segregated cortical modules that will be termed columns
(Fujita et al., 1992; Mountcastle, 1997; Tanaka, 2003). The columns situated
on the lower layer will be termed here bunch columns, as each of them is supposed to hold a set of local facial features acquired in the course of
learning. The column on the higher memory layer will be called identity column
as its task will be to learn the global face identity for each individual
person composed out of distributed local features on the lower memory layer.
Being a local processing module, each column further contains a number of
subunits we call core units (or simply units), which receive common excitatory
afferents and are bound by common lateral inhibition. Acting as elementary
processing units of the network, the core units represent an analogy to a
tightly coupled population of excitatory pyramidal neurons (“pyramidal core”)
as documented in cortical layers II/III and V (Peters et al., 1997; Rockland
and Ichinohe, 2004; Yoshimura et al., 2005). These populations are thought to
be capable of sustaining their own activity even if afferent drive is removed.
On the lower level of processing, each bunch column is attached to a dedicated
landmark on the face to process the sensory signal represented by a Gabor
filter bank extracted locally from the image (Daugman, 1985; Wiskott et al.,
1997). The connections bunch units receive from the image constitute their
bottom-up receptive fields (here, referring to a receptive field we always
mean the pattern of synaptic connections converging on a unit). Furthermore,
there are excitatory lateral connections between the bunch columns on the
lower layer binding the core units across the modules. The bunch units also
send bottom-up efferents to and get top-down afferent projections from the
identity units situated on the higher level of processing. All the types of
intercolumnar synapses are excitatory and plastic, the connectivity structure
being all-to-all homogeneous in the initial state.
Figure 1: Layered visual memory model. (A) Two consecutive interconnected
layers for hierarchical processing. On the lower bunch layer (IT, each column
contains $n=20$ units), a storehouse of local parts linked associatively via
lateral connections is formed by unsupervised learning. On the higher identity
layer (PFC, column contains $m=40$ units), symbols for person identities
emerge, being semantically rooted in parts-based representations of the lower
layer. The identity units provide further contextual support for the lower
layer by establishing top-down projections to the corresponding part-specific
units. (B) Different face views used as input to the memory (one of the $40$ persons used for learning is shown). Top left is the original view with
neutral expression used for learning. Other views were used for testing the
generalization performance (bottom row shows the duplicate views taken two
weeks after the original series.). (C) Facial landmarks used for the sensory
input to the memory, provided by Gabor filter banks extracted at each landmark
point.
### 2.2. Dynamics of a core unit
A cortical column module containing a set of $n$ core units is modeled by a
set of $n$ differential equations each describing the dynamic behavior of the
unit’s activity variable $p$. The basic form of the equation, ignoring the
afferent inputs for the time being, is motivated by a previous computational
study on a cortical column (Lücke, 2005):
$\tau\dfrac{dp}{dt}=\alpha p^{2}(1-p)-\beta
p^{3}-\lambda\nu(\operatorname*{max}(\vec{\mathbf{p}}_{t})-p)p,$ (1)
where $\tau$ is the time constant, $\alpha$ the strength of the self-
excitability, $\beta$ the strength of self-inhibitory effects, $\lambda$ the
strength of the lateral inhibition between the units, $\nu$ the inhibitory
oscillation signal and $\operatorname*{max}(\vec{\mathbf{p}}_{t})$ the
activity of the strongest unit in the column module. In this study we set for
all units $\tau=0.02\,ms$, $\alpha=\beta=1$, $\lambda=2$. As $p$ reflects the
activity of a whole neuronal population receiving common afferents, we may
assume a small time constant value, referring to an almost instantaneous
response behavior of a sufficiently large ($n=100$ or more) population of
neurons (Gerstner, 2000).
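As an illustration of the winner-take-all behavior produced by Eq. (1), the following sketch (our own simplified re-implementation, not the authors' code) integrates the column dynamics with a forward Euler scheme at a fixed inhibition level $\nu$.

```python
import numpy as np

def column_step(p, nu, dt=0.001, tau=0.02, alpha=1.0, beta=1.0, lam=2.0):
    """One forward-Euler step of Eq. (1) for all n units of one column (dt, tau in ms)."""
    p_max = p.max()
    dp = alpha * p**2 * (1.0 - p) - beta * p**3 - lam * nu * (p_max - p) * p
    return np.clip(p + dt / tau * dp, 0.0, 1.0)

rng = np.random.default_rng(0)
p = 0.1 + 0.05 * rng.random(20)          # n = 20 units with slightly different starting values
for _ in range(5000):                    # 5 ms of simulated time with dt = 1 microsecond
    p = column_step(p, nu=0.8)           # nu held above nu_c = alpha/lam = 0.5
print("three largest activities:", np.round(np.sort(p)[-3:], 3))
# For nu > nu_c only a single winner is expected to remain, near alpha/(alpha+beta) = 0.5.
```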
Figure 2: Excitatory ($\omega$) and inhibitory ($\nu$) oscillation rhythms
defining a decision cycle in the gamma range.
A crucial property of the column dynamics is the ability to change the
structure of the stable activity states by variation of the parameter $\nu$.
We take the oscillatory inhibition activity $\nu$ (Fig. 2) to be of a form
$\nu(t)=\nu_{min}+\frac{1}{k\cdot
e^{-g\,(mod(t,T)\,-\,0.5\,(T+T_{init}))}\,+\,(\nu_{max}-\nu_{min})^{-1}}$ (2)
with its period $T=25\,ms$ being in the gamma range. $\nu_{min}$ and
$\nu_{max}$ are the lower and upper bounds for oscillation amplitude,
$T_{init}$, $k$, $g$ parameterize the form of the sigmoid activity curve. Here
the values are set to $\nu_{min}=0.005$, $\nu_{max}=1.0$, $T_{init}=5ms$,
$g=0.5$, $k=2$. With the rising strength of the oscillatory inhibition, the
parameter $\nu$ crosses a critical bifurcation point of structural instability
which is given by:
$\nu_{c}=\frac{\alpha}{\lambda}$ (3)
so that by inserting the given values of $\alpha$ and $\lambda$ we obtain
$\nu_{c}=0.5$. For the range $\nu<\nu_{c}$ any units subset can remain active
(with the stationary activity level $p=\tfrac{\alpha}{\alpha+\beta}$), as
these states are stable given the low strength of lateral inhibition. After
crossing the critical value $\nu_{c}$, all those states having more than one
unit active loose the stability, so that only a single winner unit can remain
active on the level $\tfrac{\alpha}{\alpha+\beta}$. The bifurcation property
realizes winner-take-all behavior of the column acting as a competitive
decision unit (Lücke, 2005) to select the best response alternative on the
basis of the incoming input.
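The inhibitory rhythm of Eq. (2) can be transcribed directly, using the parameter values quoted above; the short sketch below (illustrative only) also locates the times at which $\nu(t)$ crosses the critical value $\nu_{c}$ of Eq. (3).

```python
import numpy as np

def nu(t, T=25.0, T_init=5.0, nu_min=0.005, nu_max=1.0, g=0.5, k=2.0):
    """Oscillatory inhibition of Eq. (2); t and T in ms."""
    phase = np.mod(t, T)
    return nu_min + 1.0 / (k * np.exp(-g * (phase - 0.5 * (T + T_init)))
                           + 1.0 / (nu_max - nu_min))

t = np.linspace(0.0, 50.0, 501)                   # two gamma cycles of T = 25 ms
nu_c = 1.0 / 2.0                                  # critical value alpha/lambda from Eq. (3)
above = (nu(t) > nu_c).astype(int)
crossings = t[1:][np.diff(above) != 0]            # times where nu(t) crosses nu_c
print("nu range: %.3f .. %.3f" % (nu(t).min(), nu(t).max()))
print("crossings of nu_c near t =", np.round(crossings, 1), "ms")
```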
The qualitative dynamical behavior stays the same in the extended formulation
of the activity equation, which is:
$\displaystyle\tau\dfrac{dp}{dt}=$
$\displaystyle\alpha\omega(1+\kappa^{LAT}I^{LAT}+\kappa^{TD}I^{TD})p^{2}(1-p)-\beta
p^{3}$ (4)
$\displaystyle-\lambda\omega\nu(\operatorname*{max}(\vec{\mathbf{p}}_{t})-p)p+\kappa^{BU}I^{BU}p^{2}+\theta
p$ $\displaystyle+\omega\epsilon+\sigma\eta_{t}p,$
where $I^{BU}$, $I^{LAT}$, $I^{TD}$ are the afferent inputs of respective
bottom-up, lateral and top-down origin, $\kappa^{BU}=\kappa^{LAT}$ =
$\kappa^{TD}=1$ are their coupling coefficients, $\omega$ is an excitatory
oscillatory signal, $\theta$ an excitability threshold of the unit,
$\sigma=0.001$ is parameterizing the multiplicative gaussian white noise
$\eta_{t}$ and $\epsilon$ is an unspecific excitatory drive. $\theta$ is a
dynamic threshold variable used for homeostatic activity regulation of the
unit and will be described in detail later; $\epsilon$ depends on the total
number of core units $n$, $\epsilon=\tfrac{1}{5n}$.
An important modeling assumption is the separation of the synapses of
different origin as implemented in Eq.4. This separation causes different
synaptic inputs to have different impact on the activity of the unit. The
functional difference can be made explicit by inspecting the
stable state of the winner unit (assuming for clarity
$\sigma=\epsilon=\theta=0$), which takes the value
$p_{stable}=\dfrac{\alpha\omega(1+\kappa^{LAT}I^{LAT}+\kappa^{TD}I^{TD})+\kappa^{BU}I^{BU}}{\alpha\omega(1+\kappa^{LAT}I^{LAT}+\kappa^{TD}I^{TD})+\beta},$
(5)
where bottom-up input $I^{BU}$ contributes to the activity level in a linear
fashion, while the contribution of lateral and top-down inputs $I^{LAT}$ and
$I^{TD}$ is non-linear, resembling the pure driving and hybrid driving-
modulating roles of afferents from different origin commonly assumed for
cortical processing (Sherman and Guillery, 1998; Friston, 2005). The course of
the activity is also influenced by the excitatory oscillatory activity
$\omega$ (Fig. 2), which is given by:
$\omega(t)=\omega_{min}+\frac{mod(t,T)}{T}\,(\omega_{max}-\omega_{min}),$ (6)
where $\omega_{min}=0.25$ and $\omega_{max}=0.75$ are the lower and upper
bounds for the oscillation amplitude. The excitatory oscillation does not affect the critical bifurcation point $\nu_{c}$, as it modulates the self-
excitability strength $\alpha$ and the lateral inhibition strength $\lambda$
to the same extent (Eq. 4). Instead, it elevates the activity level of the
units as long as they manage to resist the rising inhibition and remain in the
active state. In the state where lateral inhibition gets strong enough to shut
down all but the strongest core unit, only this winner unit is affected by the
elevating impact of the excitatory oscillation, being able to further amplify
its activity at the cost of suppressing the others. Both inhibitory and
excitatory oscillations presumably have different sources, the former
being generated by the interneuron network of fast-spiking (FS) inhibitory
cells (Whittington et al., 1995) and the latter having its origin in
activities of fast rhythmic bursting (FRB), or chattering, excitatory neurons
(Gray and McCormick, 1996).
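For completeness, the excitatory ramp of Eq. (6) and the stationary winner activity of Eq. (5) can be written down directly. This is again only a sketch with the parameter values given in the text; the input values passed in the example are arbitrary.

```python
def omega(t, T=25.0, w_min=0.25, w_max=0.75):
    """Excitatory ramp of Eq. (6): linear rise from w_min to w_max within each cycle (t, T in ms)."""
    return w_min + (t % T) / T * (w_max - w_min)

def p_stable(I_bu, I_lat, I_td, w, alpha=1.0, beta=1.0,
             k_bu=1.0, k_lat=1.0, k_td=1.0):
    """Stationary winner activity of Eq. (5), assuming sigma = epsilon = theta = 0."""
    drive = alpha * w * (1.0 + k_lat * I_lat + k_td * I_td)
    return (drive + k_bu * I_bu) / (drive + beta)

# Arbitrary example inputs: bottom-up acts additively, lateral/top-down act multiplicatively.
print(p_stable(I_bu=0.3, I_lat=0.0, I_td=0.0, w=omega(20.0)))
print(p_stable(I_bu=0.3, I_lat=0.4, I_td=0.2, w=omega(20.0)))
```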
In addition to the local competitive mechanism supported by the lateral
inhibition within a column, we use a simple form of forward inhibition
(Douglas and Martin, 1991) acting on the incoming afferents. To model this,
the incoming presynaptic activities are transformed as follows before they
make up the afferent input via the respective receptive field of a unit:
$\displaystyle\hat{p}^{pre}_{i}=p^{pre}_{i}-{\displaystyle\dfrac{1}{K}\sum_{j}^{K}p^{pre}_{j}},$
$\displaystyle\qquad pre\in\\{BU,LAT,TD\\}$ (7) $\displaystyle
I^{Source}={\displaystyle\sum_{i}^{K}w^{Source}_{i}\hat{p}^{pre}_{i}},$
$\displaystyle\qquad Source\in\\{BU,LAT,TD\\},$
where $p^{pre}$ stands for raw presynaptic activity, $\hat{p}^{pre}$ is the
presynaptic activity transformed by forward inhibition, $K$ is the total
number of incoming synapses of a certain origin, the weights $w^{Source}_{i}$
constitute the receptive field and $I^{Source}$ designates the final computed
value of the afferent input from the respective origin. Although all plastic
synaptic connections in the network are taken to be of excitatory nature, the
forward inhibition allows units to exert inhibitory action across the columns.
An important effect of this processing is the selection and amplification of
strong incoming activities at the cost of weaker ones, which can be
interpreted as presynaptic competition among the afferent signals (Douglas and
Martin, 1991; Swadlow, 2003).
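A sketch of this transformation (our own re-implementation of Eq. (7) for a single group of afferents converging on one unit):

```python
import numpy as np

def afferent_input(p_pre, w):
    """Eq. (7): mean-subtract the presynaptic activities (forward inhibition),
    then project them onto the receptive field w of one unit."""
    p_hat = p_pre - p_pre.mean()          # presynaptic competition among the afferent signals
    return float(np.dot(w, p_hat))

p_pre = np.array([0.9, 0.2, 0.1, 0.1])    # one strong and three weak presynaptic units
w     = np.array([0.8, 0.1, 0.05, 0.05])  # receptive field tuned to the strong input
print(afferent_input(p_pre, w))           # positive: the matching strong input is amplified
```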
An additional property of the dynamics is the natural restriction of the
population activity values $p$ to the interval between $0$ and $1$ (Eq. 5),
given that the afferent input also stays in the same range. This allows the variable to be interpreted either as the population rate or as the probability that an arbitrary neuron from the population generates a spike.
### 2.3. Homeostatic activity regulation
The activity dynamics equation (Eq. 4) contains the variable threshold
$\theta$, which regulates the excitability of the unit. Here, higher values of
$\theta$ stand for higher unit excitability, implying a greater potential to
become active given a certain amount of input. The threshold is updated
according to the following rule:
$\dfrac{d\theta}{dt}=\tau_{\theta}(p_{aim}-<p>),$ (8)
where $<p>=\tfrac{1}{T}{\displaystyle\int_{t}^{t+T}p(t)dt}$ is the average
activity of the unit measured over the period $T$ of a decision cycle,
$p_{aim}$ specifies the target activity level and $\tau$ is the inverse time
constant ($\tau_{\theta}=10^{-4}ms^{-1}$). The target activity level $p_{aim}$
depends on the number of units $n$ in a column, $p_{aim}=\tfrac{1}{n}$. The
initial value of the excitability threshold is zero, $\theta(0)=0$.
The motivation behind this homeostatic regulation of unit’s activity (Desai et
al., 1999; Zhang and Linden, 2003) is to encourage a uniform usage load across
units in the network, so that their participation in the formation of the
memory traces is balanced. Bearing in mind the strongly competitive character
of the columnar dynamics, the regulation of the excitability threshold changes
the a-priori probability of a unit to be winner of a decision cycle. So, if a
certain unit happens to take part too frequently in encoding of the memory
content, violating the requirement of the uniform win probability across the
units, its excitability will be downregulated so that the core unit becomes
more difficult to activate, giving an opportunity for other units to
participate in the representation. Conversely, a unit that has been silent for too long
is upregulated, so it can get excited more easily and contribute to memory
formation.
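The threshold update of Eq. (8) amounts to a slow integral controller on the cycle-averaged activity. The sketch below is our paraphrase with a discrete update per decision cycle (an assumption; the model integrates continuously), using the parameter values quoted above.

```python
import numpy as np

def update_threshold(theta, p_trace, dt=0.001, tau_theta=1e-4, p_aim=1.0 / 20):
    """Eq. (8), integrated over one decision cycle of length T = len(p_trace) * dt.

    p_trace : activities of one unit sampled over the cycle (dt in ms);
    tau_theta in 1/ms; p_aim = 1/n for a column of n = 20 units.
    """
    p_mean = np.mean(p_trace)                        # <p> over the cycle
    T = len(p_trace) * dt                            # cycle length in ms
    return theta + tau_theta * (p_aim - p_mean) * T

theta = 0.0
overactive_trace = np.full(25000, 0.4)               # a unit that has won far too often
for _ in range(100):                                 # 100 decision cycles
    theta = update_threshold(theta, overactive_trace)
print("theta after 100 cycles:", round(theta, 4))    # negative: the unit's excitability is reduced
```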
### 2.4. Activity-dependent bidirectional plasticity
We choose a bidirectional modification rule to specify how a synapse
connecting one core unit to another may undergo a change in its strength $w$:
$\dfrac{dw}{dt}=\varepsilon
p^{pre}p^{post}\mathcal{H}(\chi-A(t))\mathcal{H}(p^{post}-\theta_{0}^{-})\mathcal{H}_{-}^{+}(p^{post}-\theta_{-}^{+})$
(9)
with the sign switch functions $\mathcal{H}(x)$ and $\mathcal{H}_{-}^{+}(x)$
given as follows
$\displaystyle\mathcal{H}(x)=\begin{cases}1,\quad x\geq 0\\\ 0,\quad
x<0\end{cases},$
$\displaystyle\qquad\mathcal{H}_{-}^{+}(x)=\begin{cases}1,\quad x\geq 0\\\
-1,\quad x<0\end{cases}$ (10)
providing the bidirectional form of the synaptic modification. The amplitude
of the change is determined by the correlation between the presynaptic
activity $p^{pre}$ and the postsynaptic activity $p^{post}$, both variables
being non-negative due to the properties of the unit activity dynamics. The
learning rate $\varepsilon=5\cdot 10^{-4}\,ms^{-1}$ specifies the speed of
modification and is an inverse time constant. Other variables determine the
sign of the modification. The threshold
$\theta_{-}^{+}=\operatorname*{max}(\vec{\mathbf{p}}_{t}^{post})$ is used to
compare the postsynaptic activity against current maximum activity in the
column. $A(t)$ is the total activity level in the postsynaptic column at
time point $t$, $A(t)=\sum_{i=1}^{n}p_{i}(t)$, where $n$ is the number of
units in the column and $p_{i}(t)$ their activities at time point $t$. $A(t)$
is compared to a variable gating threshold $\chi$, which tracks the average
total activity level $<A(t)>$ computed over the period $T$ of a decision
cycle:
$\displaystyle\dfrac{d\chi}{dt}=\tau_{\chi}(<A(t)>-\chi),$
$\displaystyle\quad<A(t)>=\frac{1}{T}{\displaystyle\int_{t}^{t+T}A(t)dt}$ (11)
with $\tau_{\chi}=10^{-3}\,ms^{-1}$ as inverse time constant, the threshold
initial value set to $\chi(0)=0.5$. Furthermore, the postsynaptic activity
$p^{post}$ is compared to the sliding threshold $\theta_{0}^{-}$ that follows
the average postsynaptic activity $<p^{post}(t)>$ computed over the period $T$
of a decision cycle:
$\displaystyle\frac{d\theta_{0}^{-}}{dt}=\tau_{\theta_{0}^{-}}(<p^{post}(t)>-\theta_{0}^{-}),$
$\displaystyle\quad<p^{post}(t)>=\frac{1}{T}{\displaystyle\int_{t}^{t+T}p^{post}(t)dt}$
(12)
with the inverse time constant $\tau_{\theta_{0}^{-}}=2\cdot
10^{-3}\,ms^{-1}$, the initial value of the threshold
$\theta_{0}^{-}(0)=p_{aim}$ being equal to the target postsynaptic activity
level (see Eq. 8).
Figure 3: Bidirectional plasticity. (A) Experimentally grounded modification
rule (ABS, Artola and Singer, 1993) (B) A simplified sign switch rule used in
the model.
The rule employed here is a simplified version of a bidirectional modification
assuming the existence of two sliding thresholds $\theta_{0}^{-}$ and
$\theta_{-}^{+}$ (Fig. 3), which subdivide the range of postsynaptic activity
into zones where no modification, depression or potentiation may occur,
resembling BCM and ABS learning rules rooted in neurophysiological findings
(Bienenstock et al., 1982; Artola and Singer, 1993; Bear, 1996; Cho et al.,
2001). If the postsynaptic activity level is too low
($p^{post}<\theta_{0}^{-}$), no modification can be triggered. A mediocre
level of activation ($\theta_{0}^{-}<p^{post}<\theta_{-}^{+}$) promotes long-
term depression (LTD, negative sign), and a high level of activity
($p^{post}>\theta_{-}^{+}$) makes long-term potentiation (LTP, positive sign)
possible. Combined with the winner-take-all-like behavior of the core units,
the intended effect of the rule is to introduce competition in synaptic
formation across the receptive fields of the units, enabling them to separate
patterns even if they are highly similar and overlap strongly. If multiple
core units are frequently co-activated by a stimulus, the winner unit gets an
advantage in potentiating its stimulated synapses, while the stimulated
synapses of the units with lower activity either do not change or are affected
by depression. If this situation recurs, the receptive fields of previously
co-activated units are expected to drift apart toward a structure in which
strong synapses no longer conflict with each other, emphasizing the
discriminative features of the patterns preferred by the units.
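To make the rule concrete, the following minimal Python sketch shows one Euler step of the sign-switch update for a single synapse, assuming (as the fragment of Eq. 9 and the stated learning rate suggest) that the weight change is $\varepsilon\,p^{pre}p^{post}$ multiplied by the three gating terms; all variable names are illustrative and the surrounding activity dynamics are assumed to be computed elsewhere.

```python
def sign_switch_update(w, p_pre, p_post, A, chi, theta_lo, theta_hi,
                       eps=5e-4, dt=0.02):
    """One Euler step of the bidirectional sign-switch rule (Eqs. 9-10).

    p_pre, p_post : pre- and postsynaptic activities (non-negative)
    A             : total activity A(t) in the postsynaptic column
    chi           : gating threshold tracking <A(t)> (Eq. 11)
    theta_lo      : sliding threshold theta_0^- (Eq. 12)
    theta_hi      : threshold theta_-^+ = max activity in the column
    eps, dt       : learning rate (1/ms) and time step (ms)
    """
    gate = 1.0 if (chi - A) >= 0 else 0.0                 # H(chi - A(t))
    nonzero = 1.0 if (p_post - theta_lo) >= 0 else 0.0    # H(p_post - theta_0^-)
    sign = 1.0 if (p_post - theta_hi) >= 0 else -1.0      # H_-^+(p_post - theta_-^+)
    dw_dt = eps * p_pre * p_post * gate * nonzero * sign
    return w + dt * dw_dt
```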
In addition, we here use multiplicative synaptic scaling applied to synapses
grouped according to their origin (bottom-up, lateral and top-down). We model
this simply by $L^{2}$-normalization of the receptive field vector,
$\tilde{w}^{Source}_{i}={w}^{Source}_{i}\big{/}\lVert\mathbf{w}^{Source}\rVert_{2}$,
with ${w}^{Source}_{i}$ a weight of the receptive field comprising the
synapses of the respective origin $Source\in\\{BU,LAT,TD\\}$, and
$\tilde{w}^{Source}_{i}$ its normalized version. The normalization is applied
after a fixed number of decision cycles; here we choose $10$ cycles. The
scaling mechanism promotes competition between synapses within the receptive
field, as the growth of one synapse happens at the cost of weakening the
others (Miller and MacKay, 1994).
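As a hedged illustration of this scaling step, the sketch below normalizes each unit's receptive field one source at a time; the dictionary layout and names are assumptions, not the authors' implementation.

```python
import numpy as np

def scale_receptive_fields(weights_by_source, cycle, every=10):
    """Multiplicative synaptic scaling via per-source L2-normalization.

    weights_by_source : dict mapping 'BU', 'LAT', 'TD' to float arrays of
                        shape (n_units, n_synapses); each row is one unit's
                        receptive field for that source.
    cycle             : index of the current decision cycle
    every             : apply the normalization only every `every` cycles
    """
    if cycle % every != 0:
        return weights_by_source
    for source, W in weights_by_source.items():
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W /= np.maximum(norms, 1e-12)   # guard against all-zero fields
    return weights_by_source
```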
### 2.5. Open-ended unsupervised learning and performance evaluation
Data format. To provide the system with natural image input, we choose the AR
database containing grayscale human face photographs of $126$ persons in total
(Martinez and Benavente, 1998). For each person, there is a number of views
taken under different conditions (Fig. 1 B). The original view with neutral
facial expression is accompanied by a duplicate view depicting the same person
at a later time point (two weeks after the original shot). Furthermore, there
are variations in emotional expression such as smiling or sad for both
original and duplicate views. The images were automatically prelabeled with a
graph structure placed over the face, positioning nodes on consistent landmarks
across different individuals using software (EAGLE) based on the algorithm
described in (Wiskott et al., 1997). A subset of $L=6$ facial landmarks was
selected around the eyes, nose and mouth regions (Fig. 1 C), each landmark
being subserved by a single bunch column. Being attached to a dedicated facial
landmark, each bunch column is provided with a sensory image signal
represented by a Gabor filter bank extracted locally. The Gabor wavelet family
used for the filter operation is parameterized by the frequency $k$ and
orientation $\varphi$ of the sinusoidal wave and the width of the Gaussian
envelope $\sigma$ (Daugman, 1985). We use $s=5$ different frequencies and
$r=8$ different orientations sampled uniformly to construct the full filter
bank (for more details refer to (Wiskott et al., 1997)). The local filtering
of the image produces a complex vector of responses, containing both amplitude
and phase information. We use only the amplitude part consisting of $s\cdot
r=40$ real coefficients to model the responses of complex cells. This
amplitude vector is further normalized to unit $L^{2}$ norm to serve as the
bottom-up input for the respective landmark bunch column of the lower memory layer.
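As an illustration of how such a local filter-bank input might be computed, the sketch below builds a Gabor jet of $s\cdot r=40$ amplitude coefficients at one landmark. The kernel parameterization (wavenumbers, $\sigma=2\pi$, window size) follows the common convention of Wiskott et al. (1997), but the concrete values and names are assumptions, not the exact preprocessing used here.

```python
import numpy as np

def gabor_kernel(k, phi, sigma=2 * np.pi, size=33):
    """Complex, DC-corrected Gabor wavelet with wavenumber k and orientation phi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    envelope = (k ** 2 / sigma ** 2) * np.exp(-k ** 2 * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2.0)
    return envelope * carrier

def gabor_jet(image, cx, cy, s=5, r=8, size=33):
    """40-dimensional amplitude vector ('jet') extracted at landmark (cx, cy).

    Assumes the landmark lies far enough from the image border for the
    window to fit; the jet is L2-normalized to serve as bottom-up input.
    """
    half = size // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    amplitudes = []
    for nu in range(s):                          # s = 5 frequencies
        k = (np.pi / 2.0) * 2.0 ** (-nu / 2.0)   # illustrative frequency ladder
        for mu in range(r):                      # r = 8 orientations
            phi = mu * np.pi / r
            response = np.sum(patch * np.conj(gabor_kernel(k, phi, size=size)))
            amplitudes.append(np.abs(response))  # keep amplitude, drop phase
    jet = np.asarray(amplitudes)
    return jet / (np.linalg.norm(jet) + 1e-12)
```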
Network configurations. Randomly selecting $P=40$ persons from the database, we
allocate $n=20$ core units for each bunch column to ensure that multiple
persons have to share some common parts. The identity column then contains
$m=40$ units, corresponding to the number of persons we want to be able to recall
explicitly. Two different configurations of the memory system are employed to
test our hypothesis about the functional advantage of a fully recurrent
structure over the purely feed-forward one. Each configuration is supposed to
form the memory structure in the course of the learning phase. While the fully
recurrent configuration learns bottom-up, lateral and top-down connectivity,
the purely feed-forward configuration is a stripped-down version using only the
bottom-up pathways. Observing these different configurations during the
learning phase and testing them on novel face views subsequently, we are able
to compare both in terms of learning progress and performance on the
recognition task to find out potential functional differences between them.
Simulation. In order to run the memory network, the solutions for the
differential equations governing the behavior of dynamical variables have to
be computed numerically in an iterative fashion. We use a simple Euler method
with a fixed time step $\Delta t=0.02\,ms$ to do this. To save computational
time, the slow threshold variables are updated only once per decision cycle,
with the time step corrected accordingly.
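As a hedged sketch of this numerical scheme, the following fragment advances the slow thresholds of Eqs. 11 and 12 once per decision cycle, using the cycle averages in place of $\langle A(t)\rangle$ and $\langle p^{post}(t)\rangle$; the names and data layout are illustrative.

```python
import numpy as np

DT = 0.02          # ms, Euler time step
TAU_CHI = 1e-3     # 1/ms, rate of the gating threshold chi (Eq. 11)
TAU_THETA = 2e-3   # 1/ms, rate of the sliding threshold theta_0^- (Eq. 12)

def update_slow_thresholds(chi, theta_lo, A_samples, p_post_samples):
    """Advance chi and theta_0^- once per decision cycle.

    A_samples, p_post_samples : per-step samples of the column activity A(t)
    and of the postsynaptic activity collected over the last cycle, so their
    means approximate <A(t)> and <p_post(t)>.  The update uses an effective
    step of one full cycle (len(A_samples) * DT), i.e. the 'update once per
    cycle with corrected time step' shortcut described above.
    """
    cycle_dt = len(A_samples) * DT
    chi += cycle_dt * TAU_CHI * (np.mean(A_samples) - chi)
    theta_lo += cycle_dt * TAU_THETA * (np.mean(p_post_samples) - theta_lo)
    return chi, theta_lo
```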
Open-ended unsupervised learning. The system starts with homogeneously
initialized structure parameters, all threshold values and all synaptic
weights being undifferentiated, so that intercolumnar all-to-all connectivity
is the initial structure of the memory network. During the iterative learning
procedure, for each decision cycle a face image is selected from a database
randomly and presented to the system, evoking a pattern of activity on both
memory layers and triggering synaptic and threshold modification mechanisms.
The learning procedure is open-ended, as there is neither a stop condition nor
an explicitly defined time-dependent learning rate that would decrease over
time and freeze modifications at some point. The
learning progress can be assessed directly by evaluating the recognition error
on the basis of the previous network responses. Further, the inspection of the
structure of the receptive fields delivers hints about their maturation
progress. Investigating the rate of ongoing modifications of the synaptic
weights and dynamic thresholds can indicate whether changes in the
network structure are still occurring to a significant extent, providing
a basis for a stop condition if necessary. In the later learning phase the
general stability of the established structure can be also verified by simple
visual inspection.
Performance evaluation. To assess the recognition performance of the system,
we make a distinction between the learning and the generalization error. The
learning error is defined as the rate of wrong person-identity responses on
the training data set containing the original face views with neutral
expression. The statistics of response behavior to each particular person are
gathered for each identity core unit over the history of the network
stimulation. The learning error rate can then be computed for each small
interval during the learning phase by using the preferences the identity units
have developed for the individual persons during the preceding stimulation.
Opposed to this, the generalization error is computed on the set of novel
views not presented before. During the test for generalization error, all the
synaptic weights are frozen, which is done to exclude the possibility that
recognition rate improves during the testing phase due to potential benefit of
synaptic modifications. The generalization error is assessed for each view
type separately to see potential performance differences between different
views (the duplicate view and the views with two different emotional
expressions, smiling and sad). The history of network behavior during the
learning phase is used again in the same way for the computation of the error
rate, as done for the learning error evaluation.
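As a minimal sketch of such an evaluation (the variable names and data layout are assumptions, not the authors' bookkeeping), the learning error can be estimated from the response history roughly as follows:

```python
from collections import Counter, defaultdict

def learning_error(history):
    """Estimate the learning error from the stimulation history.

    history : list of (person_id, winner_identity_unit) pairs collected
    during recent decision cycles on original training views.
    Each identity unit's preferred person is the one it responded to most
    often; a response counts as wrong if the winner's preferred person
    differs from the person actually shown.
    """
    votes = defaultdict(Counter)
    for person, winner in history:
        votes[winner][person] += 1
    preference = {unit: counts.most_common(1)[0][0]
                  for unit, counts in votes.items()}
    errors = sum(1 for person, winner in history
                 if preference[winner] != person)
    return errors / max(len(history), 1)
```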
### 2.6. Assessing the network’s organization
To analyze the progress of structure formation, we use measures describing
different properties of the receptive fields. The distance measure calculates
the distance between two synaptic weight vectors $\mathbf{w}_{i}$ and
$\mathbf{w}_{j}$:
$\displaystyle d(\mathbf{w}_{i},\mathbf{w}_{j})=\frac{1}{4}\Bigl(\frac{\mathbf{w}_{i}}{\lVert\mathbf{w}_{i}\rVert_{2}}-\frac{\mathbf{w}_{j}}{\lVert\mathbf{w}_{j}\rVert_{2}}\Bigr)^{2}=\frac{1}{2}(1-\cos\phi),$ (13)
where $\phi$ denotes the angle between the two synaptic weight vectors, each
comprising a receptive field. The value lies in the interval between zero and
one. If the weight vectors are the same, the distance value is zero; if their
dissimilarity is maximal ($\phi=\pi$), the value is one. Utilizing this
basic distance measure, we further construct a differentiation measure, which
is supposed to reflect the grade of differentiation between the receptive
fields of the same type across the whole network. The differentiation grade
$\mathcal{D}^{Source}_{k}$ is computed for each column for the receptive fields of a
given type $Source\in\\{BU,LAT,TD\\}$, and an average differentiation
value $\mathcal{D}^{Source}$ is then computed over all $K$ columns:
$\displaystyle\mathcal{D}^{Source}_{k}=\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j\neq i}^{n}d(\mathbf{w}^{Source}_{i},\mathbf{w}^{Source}_{j}),\qquad\mathcal{D}^{Source}=\frac{1}{K}\sum_{k=1}^{K}\mathcal{D}^{Source}_{k},$ (14)
where $n$ is the number of units in the column. The differentiation grade
measure is evaluated separately for bunch columns on the lower memory layer
and for the identity column on the higher memory layer.
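A compact sketch of how Eqs. 13 and 14 might be evaluated is given below; the array layout is illustrative.

```python
import numpy as np

def pairwise_distance(wi, wj):
    """Normalized distance between two receptive fields (Eq. 13)."""
    ui = wi / np.linalg.norm(wi)
    uj = wj / np.linalg.norm(wj)
    return 0.25 * np.sum((ui - uj) ** 2)   # equals (1 - cos(phi)) / 2

def differentiation_grade(columns):
    """Average differentiation grade over columns (Eq. 14).

    columns : list of arrays, each of shape (n_units, n_synapses), holding
    the receptive fields of one source type for one column.
    """
    grades = []
    for W in columns:
        n = W.shape[0]
        total = sum(pairwise_distance(W[i], W[j])
                    for i in range(n) for j in range(n) if j != i)
        grades.append(total / (n * (n - 1)))
    return float(np.mean(grades))
```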
Further, we employ a measure reflecting how sparse the inner structure of a
receptive field is, that is, whether the field is composed of few strong and
many weak synapses. If the inner receptive field structure is poorly
differentiated, the sparseness value will be low; if differentiation within
the receptive field is strong, the value will be high. To assess the same
property not only within but also across receptive fields, the overlap measure
is defined. If the receptive fields of the same type have many strong
overlapping synapses in common, the value will be high; if there are only few
such overlapping synapses, the value will be low. The
overlap measure is thus closely related to the differentiation grade between
the receptive fields as assessed using the distance measure. Sparseness,
denoted $\zeta$, and overlap, denoted $\xi$, follow the same computational
scheme, the only difference being that the former is computed within and the
latter across the receptive field vectors, using a common selectivity
measure $\mathcal{A}^{Source}(s)$ as defined in (Rolls and Tovee, 1995).
Again, the computation is done for each column on receptive fields of the same
type $Source\in\\{BU,LAT,TD\\}$, building then type-specific average values
$\mathcal{C}^{Source}$ and $\mathcal{E}^{Source}$ over all $K$ columns:
$\displaystyle\mathcal{A}^{Source}(s)=\Bigl(\tfrac{1}{s}\sum_{i=1}^{s}w^{Source}_{i}\Bigr)^{2}\bigg/\Bigl(\tfrac{1}{s}\sum_{i=1}^{s}(w^{Source}_{i})^{2}\Bigr)$ (15)
$\displaystyle\zeta^{Source}_{k}=\frac{1}{n}\sum_{i=1}^{n}\mathcal{A}^{Source}_{i}(r),\qquad\xi^{Source}_{k}=\frac{1}{r}\sum_{i=1}^{r}\bigl(1-\mathcal{A}^{Source}_{i}(n)\bigr)$
$\displaystyle\mathcal{C}^{Source}=\frac{1}{K}\sum_{k=1}^{K}\zeta^{Source}_{k},\qquad\mathcal{E}^{Source}=\frac{1}{K}\sum_{k=1}^{K}\xi^{Source}_{k},$
where $r$ is the number of synapses comprising a receptive field of type
$Source\in\\{BU,LAT,TD\\}$, $n$ is the number of units in a column, and $K$ is
the total number of assessed columns. The evaluation is done separately for
the bunch columns and the identity column.
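Following Eq. 15 as written, a minimal sketch of the column-level sparseness and overlap could read as follows; the array layout and names are illustrative.

```python
import numpy as np

def selectivity(v):
    """Selectivity measure A(s) of Eq. 15 for a weight vector v."""
    return np.mean(v) ** 2 / (np.mean(v ** 2) + 1e-12)

def sparseness_and_overlap(W):
    """Column-level sparseness (zeta) and overlap (xi) per Eq. 15.

    W : array of shape (n_units, r_synapses) holding the receptive fields of
    one source type for one column.  The selectivity measure is applied
    within each receptive field (rows) for zeta and across receptive fields
    (columns), with one minus the value, for xi.
    """
    n, r = W.shape
    zeta = np.mean([selectivity(W[i, :]) for i in range(n)])
    xi = np.mean([1.0 - selectivity(W[:, j]) for j in range(r)])
    return zeta, xi
```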
## 3. Results
### 3.1. Structure formation
Facing a task of unsupervised learning, the system develops a structural basis
for storing the faces of individual persons shown during the learning phase.
The vocabularies for the distributed local features are created on the lower
memory layer to represent facial parts. These vocabularies are formed by the
bottom-up synaptic connections of the bunch columns attached to their facial
landmarks. Each core unit of the bunch columns thus becomes sensitive to a
particular local facial appearance due to the established structure of its
bottom-up receptive field. At the same time, the lateral connectivity between
the bunch columns gets shaped capturing the associative relations between the
distributed features. These relations are represented by associative links
between those core units that are regularly used in the composition of a
particular individual face. The same configurational information enters into
the structure of bottom-up connectivity converging on the identity column
units, being also represented in the top-down connections projecting from the
identity column back on the lower layer.
Figure 4: Time snapshots of structure formation. From left to right, snapshots
from early, middle and late formation phase of (A) lower layer bottom-up
connectivity containing local facial parts, (B) lower layer associative
lateral connectivity, (C) top-down compositional connectivity projecting from
the higher back on the lower layer, which is roughly the transposed version of
the higher layer bottom-up connectivity visualized in (D), holding global
identities.
Each person repeatedly presented to the system during the learning phase
leaves a memory trace comprising the parts-based representation of its face on
the lower layer and the explicit configurational identity on the higher layer
of the memory (Fig. 4). The course of gradual differentiation of bottom-up,
lateral and top-down connectivity reveals the ongoing process of memory
consolidation, where memory traces induced by the face images become more
stable and get the opportunity to amplify their structure. A common developmental
pattern seems to underlie the time courses of structure organization (Sec.
2.6). There is an initial resting phase, where no structural changes appear,
followed by a maturation phase, where massive reorganization occurs and the
change rate reaches its maximum (Fig. 5, 6). Finally, a saturation phase is
reached, where the structure stabilizes at a certain level of organization and
the change rate goes down close to zero.
Figure 5: Differentiation time course over $5\cdot 10^{5}$ decision cycles for
different connectivity types; on the left the grade of differentiation, on the
right its rate. The general tendency toward greater connectivity
differentiation with learning progress is clearly visible, as is the temporal
sequence of connectivity maturation (see the text). BU, LAT, hBU, TD denote respectively
lower layer bottom-up, lateral, higher layer bottom-up and top-down
connectivity types.
Different connectivity types get organized preferentially within a specific
time window (Fig. 5, 6). There is a clear temporal sequence of connectivity
development, starting with maturation of lower layer bottom-up connections,
followed by maturation of lateral connections between the bunch columns and by
the maturation of bottom-up connectivity of the identity column, ending with
the formation of top-down connectivity. Because the development of different
connectivity types is highly interdependent, their developmental phases are
not disjoint in time, but overlap substantially. In parallel, there is a
gradual increase in sparseness within the receptive fields and a progressive
reduction of the overlap between them (Fig. 6). The remaining overlap in
associative lateral and configurational bottom-up connectivity reflects the
extent to which the parts are shared among different stored face
representations.
In the late learning phase, the state of the synaptic structure stabilizes
until no substantial changes in the established memory structure can be
observed (Fig. 5, 6). Remarkably, the bottom-up connectivity of the bunch
columns stays well behind other connectivity types in terms of differentiation
grade, sparseness within the receptive fields and their overlap reduction
achieved in the final stable state (Fig. 5, 6). While being the latest to
initiate its maturation, the top-down connectivity reaches the highest grades
of differentiation and sparseness, also being most successful in reducing the
overlap. The lateral connectivity between the bunch columns and bottom-up
connectivity of the identity column also show comparably high level of
organization. These relationships reflect the distinct functional roles the
different connectivity types play in their contribution to the memory traces -
capturing strongly similar local feature appearance in case of lower layer
bottom-up connectivity on the one hand and on the other hand storing weakly
overlapping associative and configurational information for different faces in
case of lateral and top-down connectivity.
Figure 6: Overlap (A) and sparseness (B) time course over $5\cdot 10^{5}$
decision cycles for different connectivity types. As the learning progresses,
the overlap between the receptive fields is continuously reduced, the
connectivity sparseness increases. Again, the temporal sequence of
connectivity development is clearly visible (see the text). BU, LAT, hBU, TD
denote respectively lower layer bottom-up, lateral, higher layer bottom-up and
top-down connectivity types.
The changes in the synaptic structure are accompanied by the use-dependent
regulation of the excitability thresholds of the core units across the
network. Three developmental phases can be distinguished in the time course of
excitability modifications (Fig. 7). The first phase is characterized by
strong and rapid excitability downregulation in the network. This
downregulation settles down the core units toward the range of the targeted
average activity level $p_{aim}$ (Eq. 8). In this phase, almost no
differences between the individual thresholds are present (Fig. 8). After
downregulation crosses its peak, a common upregulation sets in and the
differences between the excitability thresholds become much more prominent.
The upregulation phase leads to a slight increase of the average excitability
and is followed by a saturation phase where the average threshold value
stabilizes around a certain level.
Figure 7: Time course of excitability regulation. Top: the lower memory layer;
bottom: the higher memory layer. The differences in excitability between the
units are much more pronounced on the lower layer.
Excitability regulation runs differently on different memory layers. On the
lower layer the down- and upregulation phases are shorter and occur earlier
than the corresponding phases on the higher layer. Moreover, the differences
in excitability between the units on the lower layer are much more
pronounced compared to the rather equalized excitability levels of the higher
layer units (Fig. 7 and 8).
Figure 8: (A) Time course of average excitability regulation. Top: the whole
course; bottom: a zoom into the down- and upregulation phases. Left: the bunch
units; right: the identity units. The black solid curve is the average value,
gray curves mark the standard deviation range. The same nomenclature applies
to the time course of the average unit activity visualized in (B). The much
more pronounced differences in excitability between the units on the lower
layer are reflected in the greater dispersion of their activities around the
average activity level on that layer.
These differences reflect the distinct functional roles the lower and higher
layer play in the memory organization. The lower layer serves as a storehouse
for associatively linked distributed facial parts that can be shared by
multiple face representations, while the identity units are conjunction-
sensitive units representing the configurational identity of the face. Because
each memorized person is equally likely to appear on the input, the long-term
usage load of the identity units is essentially the same, so no need for a
systematic differentiation of excitability thresholds arises there. Part
sharing on the other hand imposes different usage frequency on different core
units sensitive to different parts, leading to pronounced use-dependent
differences in excitability between the bunch column core units.
### 3.2. Activity formation and coordination
The established synaptic structure supports the parts-based representation
scheme by encoding the relations between the parts in two alternative ways.
First, the relations can be explicitly signaled by the responses of
conjunction, or configuration, specific identity core units on the higher
layer, each responsible for one of the face identities stored in the memory.
Second, the relations can be represented by dynamic assemblies of co-activated
part-specific bunch core units, which can be constructed on demand to encode a
novel face or to recall an already stored one as a composition of its
constituent parts. The selection and binding of the parts-specific and
identity-specific units into a coherent assembly coding for an individual face
is done in the course of a decision cycle defined by common unspecific
excitatory and inhibitory signals oscillating in the gamma range (Singer,
1999; Fries et al., 2007).
There, a global decision process, which may be called binding by competition,
is responsible for assembly formation, providing clear and unambiguous
temporal correlations between the selected units and setting them apart
from the rest by amplification of their response strength (Fig. 9). The
initial phase of the decision cycle, where the oscillatory inhibition and
excitation are low, is characterized by low undifferentiated activities of the
network units. With both inhibition and excitation rising, only some of the
units are able to resist the inhibition pressure and continue increasing their
activity being selected as candidates for assembly formation in the selection
phase. Ultimately, the growing competition leads to a series of local winner-
take-all decisions across the columns sparsening the activity in the network
by strong amplification of a small unit subset at the cost of suppression of
the others. In the late phase of a decision cycle, this amplified subset of
winner units can then be clearly interpreted as an individual face composed of
the local features from the respective landmarks and labeled with the person’s
identity, solving the assembly binding problem (von der Malsburg, 1999;
Singer, 1999).
Figure 9: Activity formation during the decision cycle. (A) A sequence of six
successive cycles, each representing a successful recall of a stored
individual face. On the top, the activity course is shown, arrows pointing to
constituent parts shared by two different face identities. The second and fourth
cycles show recall of the same face identity. Below is the mean activity
course for each column and the oscillation rhythms defining the decision
cycle. (B) A zoom into a single decision cycle (on the top) to visualize the
activity formation phases. Below is the mean activity course for each column
and distribution of average unit activities over the decision cycle showing
the highly competitive nature of activity formation, where winner units get
amplified at the cost of suppressing the others.
A combined view on the mean activity within the columns reveals once more the
competitive nature of activity formation in the network (Fig. 9). While the
winner unit subset concentrates increasingly high activity, the mean network
activation gets progressively reduced at the end of the decision cycle after
crossing its peak in the selection phase, indicating that winner subset
amplification occurs at the cost of suppressing the rest. Generally, during
the whole decision cycle the mean network activity stays at a low level
($p=0.08-0.09$), far below the activity level reached by the winner units
subset at the end of the cycle ($p=0.4-0.6$).
One may ask to what extent the competitive activity formation becomes more
organized or coherent in terms of representing the memory content as the
learning progresses. In other words, we are interested in the level of
coherence, or agreement, between the local competitive decisions made in the
distributed columns and how it may change with the learning time. One
indicator of such coherent behavior is the agreement achieved at the end of
the decision cycle between the afferent signals that arrive at network units
from different sources such as bottom-up, lateral or top-down. By computing
the standard correlation coefficient $\rho$ (DeGroot and Schervish, 2001), we
obtain for each afferent signal pair of different sources a course showing the
development of the coordination between the signals over the learning time.
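As an illustrative sketch, the coordination between the afferent signal pairs can be quantified with the standard correlation coefficient as follows; the signal vectors and their names are assumptions about the bookkeeping, not the authors' code.

```python
import numpy as np

def signal_coordination(bu, lat, td):
    """Pairwise Pearson correlation between afferent signal vectors.

    bu, lat, td : arrays holding, for every unit, the bottom-up, lateral and
    top-down input accumulated at the end of a decision cycle.
    Returns the three correlation coefficients (BU-LAT, LAT-TD, BU-TD).
    """
    def rho(x, y):
        return np.corrcoef(x, y)[0, 1]
    return rho(bu, lat), rho(lat, td), rho(bu, td)
```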
Figure 10: Improvement of signal coordination in the course of learning.
Standard correlation coefficients $\rho$ were computed for each signal pair.
BU, LAT, TD denote respectively bottom-up, lateral and top-down signals.
The coordination level between the bottom-up, lateral and top-down signals
increases gradually from the initially very low value close to zero toward
higher and higher grade (Fig. 10). The low coherence value in the early
learning phase reveals the inability of the signals converging on the network
units to be in consensus with each other about the local decision outcome,
deranging the global decision making. As learning progresses, the signal
pathway structure is gradually improved for the storage and representation of
the content, leading to stronger and stronger consistency in local signaling.
The bottom-up and lateral signals are the first to develop a significant grade
of coherence. Slightly later the lateral and top-down signals reach a
substantial coherence level and the latest to establish a coordinated cross-
talk are the signals from bottom-up and top-down sources. Furthermore, the
lateral and top-down signals establish the strongest final grade of coherence
that is significantly higher than the coherence between bottom-up and lateral
as well as bottom-up and top-down signals. Their coherence still reaches
substantial values though, the former being slightly above the latter.
During the course of a single decision cycle, a co-activation measure can be
used to check whether the incoming signals are coordinated properly to make up
the decisions. The relationship between the afferent signal coordination and
the function of the memory is particularly clear if the coordination level in
a successful recall is compared to the coordination shown during a failed
recall, where the identity of the person is misclassified (Fig. 11). In a
successful recall, where the facial representation and person’s identity are
correctly retrieved from the memory, a well-established coordination can be
observed between the co-active afferent signals converging on the winner
units. In a failed recall, the identity column making a wrong decision sends
top-down signals that are not in agreement with the bottom-up and lateral
signals conveyed by the bunch columns. As a consequence, the signal coordination
breaks down, serving as a reliable indicator of a recall failure (Fig. 11
(D)).
A further indicator that can help in differentiating a successful from a
partially or completely failed recall is the activity level of the winner
units at the end of the decision cycle. A successful recall is accompanied by
a high degree of cooperation between the participating winner units, so that
the level of their final activation is high. At the same time, the competitive
action of the winner unit subset strongly suppresses the activity of the rest, so
that the overall network activity is substantially diminished. In contrast, a
failed recall involves disagreement between some local
decisions, resulting in decreased afferent signal coherence, which in turn
leads to a much lower level of final activity in the winner units. Their
competitive influence is also weakened, leading to a higher overall network
activity (Fig. 11 (F)). Thus, a simple comparison of the winner activities to
their average level can already provide enough information to judge
the quality of recall. The recall quality can be assessed on the global level
of identity as well as on the component level, where either identity
recognition failure or part assignment failure might be stated.
Figure 11: Coordination and activity formation in successful and failed
recall. Two decision cycles showing failed and successful recall. (A) Network
activity course. (B) Bottom-up afferent signals course. (C) Lateral and top-
down afferent signals course. (D) Signal coordination course assessed by
measuring the co-activation of bottom-up, lateral and top-down signals
converging on the network units. In the failed recall, there is a clear break-
down of signal coordination in afferents converging on the winner units. (E)
Course of mean activity in the columns. In the failed recall, a substantially
increased overall activation is clearly seen as well as the shift of its
broader peak to a later time point. (F) Winner unit activities at the end of
the decision cycle on the left and mean unit activities (excluding the
winners) over whole cycle on the right for each column. In the failed recall,
winner activities are consistently lower, while the mean rest unit activities
are consistently higher than in the successful recall.
### 3.3. Recognition performance
To assess the recognition capability of the memory, we evaluate the learning
and generalization error of two different system configurations. These
different configurations, the fully recurrent and purely feed-forward one, are
set up to substantiate the hypothesis stating the functional advantage of the
recurrent memory structure over the structure with purely feed-forward
connectivity. Both configurations were trained under equal conditions and then
tested to compare their performance against each other (refer to Sec. 2.5).
Both the purely feed-forward and fully recurrent configurations are able to
successfully store the facial identities of the persons ($40$ in total) in the
memory structure. Strong decay of the learning error over time is clearly
evident for both network configurations. The learning error rate falls rapidly
in the early learning phase (first $5\cdot 10^{4}$ decision cycles) until it
saturates at values slightly below $5\%$ in the later phase beyond
$10^{5}$ cycles (Fig. 12). Although there is no significant difference in the
learning error rate between the two configurations after the saturation level
is reached, the time needed to reach the saturation level is substantially
shorter for the fully recurrent configuration (saturates around $10^{5}$
cycles) than for the purely feed-forward one (saturates around $1.5\cdot
10^{5}$ cycles). Thus, the learning progresses about $33\%$ faster for the
fully recurrent system than for the purely feed-forward one. The fully
recurrent configuration seems to speed up the learning progress in the
critical early learning phase, probably benefiting from the additional
assistance provided by lateral and top-down connectivity for the organization,
amplification and stabilization of the memory traces.
Figure 12: Learning error rate of feed-forward and fully recurrent memory
configuration.
At first glance, analysis of the learning error time course suggests that the
only functional advantage of the fully recurrent configuration is the learning
speed-up observed in the early phase. However, another important functional
advantage is revealed if the generalization error rates are compared. The
generalization error is measured on the alternative face views not shown
during the learning phase (see Tab. 1). A striking result is the significant
discrepancy in performance between the two configurations manifested on the
duplicate views containing emotional expressions (smiling and sad). There, the
error rate difference is about $5\%$ in favor of the fully recurrent memory
configuration. The generalization error of purely feed-forward configuration
is $38.46\%$ larger on the duplicate smiling view and $62.5\%$ larger on the
duplicate sad view than the generalization error of the fully recurrent
configuration. On the other views, no significant difference in error rate can
be detected between both configurations.
Configuration | Original | Smiling | Sad
---|---|---|---
fully recurrent | $0.1\%\pm 0.07\%$ | $6.06\%\pm 0.58\%$ | $4.02\%\pm 0.42\%$
purely feed-forward | $0.067\%\pm 0.0528\%$ | $5.72\%\pm 0.92\%$ | $3.75\%\pm 0.38\%$
Configuration | Duplicate | Duplicate, smiling | Duplicate, sad
---|---|---|---
fully recurrent | $1.64\%\pm 0.16\%$ | $\mathbf{13.41\%}\pm 0.94\%$ | $\mathbf{8.74\%}\pm 0.38\%$
purely feed-forward | $1.75\%\pm 0.13\%$ | $\mathbf{18.42\%}\pm 0.93\%$ | $\mathbf{13.68\%}\pm 0.64\%$
Table 1: Comparison of generalization error between the purely feed-forward
and fully recurrent memory configuration. The configurations were tested after
a learning time of $5\cdot 10^{5}$ cycles. The fully recurrent configuration shows
significantly better performance on the duplicate views with emotional
expressions, while comparable performance is shown on the other views.
These results highlight an interesting property of the functional advantage as
it has been assessed for the fully recurrent memory configuration. The purely
feed-forward configuration falls significantly behind the fully recurrent one
only on certain views, performing comparably well on the others. Apparently,
the stronger the deviation of the alternative view from the original view
shown during learning, the more evident is the enhancement in
generalization capability. Even if given only a short time of a single
decision cycle, the recurrent connectivity seems to gain benefit particularly
in novel situations, where purely feed-forward processing alone has more
difficulties in achieving correct interpretation of the less familiar face
view.
## 4. Discussion
To identify potential neural mechanisms that are responsible for the formation
of parts-based representations in visual memory, we examined the process of
experience-driven structure self-organization in a model of layered memory. We
chose the task of unsupervised open-ended learning and recognition applied to
human faces from a database of natural face images. The final goal was to
build up a hierarchically organized associative memory structure storing faces
of individual persons in a parts-based fashion. Employing slow activity-
dependent bidirectional plasticity (Bienenstock et al., 1982; Artola and
Singer, 1993; Cho et al., 2001) together with homeostatic activity regulation
(Desai et al., 1999; Zhang and Linden, 2003) and a fast neuronal population
dynamics with a strongly competitive nature, the proposed system performed
impressively well on the posed task. It demonstrated the ability to
simultaneously develop local feature vocabularies and put them in a global
context by establishing associative links between the distributed features on
the lower memory layer. On the higher layer, the system was able to use the
configurational information about relatedness of the sparse distributed
features to memorize the face identity explicitly in the bottom-up
connectivity of identity units. The captured feature constellations were also
projected back to the lower layer via top-down connectivity providing
additional contextual support for learning and recognition. The identity
recognition performance of the system on the original and alternative face
views confirmed the functionality of the established memory structure.
Generic memory architecture. When thinking about the processes underlying the
memory formation and function, it is remarkable that the structure and
activity formation in the model network can be governed by a set of local
mechanisms which are the same for all neuronal units and all synapses
comprising the network. Saying that they are the same here means that for
instance the bidirectional plasticity rule for any synapse has not only the
same functional description, but also shares a common set of parameter values
such as time constant, etc. This supports the view that the synapses arriving
from different origins and contacting their target neuron at different sites
of the dendritic tree and soma act as a kind of universal learning machine,
which may well differ in their impact on the firing behavior of the neuron
(Sherman and Guillery, 1998; Larkum et al., 2004; Friston, 2005), while
obeying the same generic modification rules. Whether this is indeed the case
is currently a subject of intense debate (Sjöström et al., 2008; Spruston,
2008). Overall, the organization of the system supports the idea of universal
cortical operations involving strong competitive and cooperative effects (von
der Malsburg and Singer, 1988), which build on essentially the same
local circuitry and the same plasticity mechanisms utilized in different
cortical areas (Mountcastle, 1997; Phillips and Singer, 1997; Douglas and
Martin, 2004).
Competition and cooperation in activity and structure formation. In our study,
it becomes clear that learning itself has to rely on certain important
properties of the processing on the fast as well as on the slow time scale. To
capture statistical regularities hidden in the local sensory inputs and their
global compositions, there have to be mechanisms for selecting and amplifying
only a small fraction of available neuronal resources, which then become
dedicated to a particular object, specializing more and more for the
processing of its local features and their relations. Without proper
selection, no learning will succeed. However, without proper learning, no
reasonable selection can be expected either. Here, we break this circularity
by proposing strong competitive interaction between the units on the fast
activity time scale. Given a small amount of neural threshold noise, this
interaction is able to break the symmetry of the initial condition due to the
bifurcation property of the activity dynamics (Lücke, 2005), enforcing the
unit selection and amplification in the initial learning phase even in the
absence of differentiated structure. The response patterns enforced by
competition offer a sufficient playground for learning to ignite and move on
to organize and amplify some synaptic structure that is suitable for laying
down specific memory content via ongoing slow bidirectional Hebbian
plasticity. In combination with competitive activity dynamics, the
bidirectional nature of synaptic modification further assists the competition
between memory traces as it attempts to reduce the overlap between the
patterns the network units preferentially respond to, segregating memory
traces in the network structure whenever possible. The state of
undifferentiated structure is however the worst-case scenario and not
necessarily the initial condition for learning, as there may be basis
structures prepared for the representation of many behaviorally relevant
patterns, like for instance faces (Johnson et al., 1991). Interestingly, the
progress from an undifferentiated to a highly organized state via selection
and amplification of a small subset of totally available resources is a
general feature in evolutionary and ontogenetic development of biological
organisms. The notion that the very same principles may guide the activity and
structure formation in the brain supports the view of learning as an
optimization procedure adapting the nervous structure to the demands put on it
by the environment (von der Malsburg and Singer, 1988; Edelman, 1993).
Notably, there is a very important difference in how the unit
selection, or decision making, is implemented by competition in the early,
immature versus the late, mature state of the connectivity structure. In the immature
state, where the contextual connectivity is not yet established, the local
decisions in the lower layer bunch columns are made completely independently
of each other. On the contrary, decision making in the mature state involves
interactions between the local decisions via already established lateral and
top-down connections. These associative connections enable cooperation within
and competition between unit assemblies, promoting a coordinated global
decision. The separation of synaptic inputs enables decision making to use
information from different origins according to its functional significance -
carrying either sensory bottom-up evidence for a local appearance or providing
clues for relational binding of distributed parts into a global configuration
(Phillips and Singer, 1997). The agreement between sensory and contextual
signaling about the outcome of local decisions improves continuously as
learning progresses. The initially independent local decision making becomes
thus orchestrated by contextual support formed in the course of previous
experience with visual stimuli.
Signal and plasticity coordination. The coherency of cooperative and
competitive activity formation cannot be guaranteed by the contextual support
alone, as the time coordination of decision making across distributed units
also matters. The decision cycle, which defines a common reference time window
for decision making, orchestrates not only the activities but also the
bidirectional synaptic modifications across the units. This ensures that
structure modification amplifies the connections within the right subset of
simultaneously highly active units encoding a particular face. Cortical
processing exhibits oscillatory rhythms in the gamma range, which are
used here to model the decision cycle. Particularly, there is evidence that
oscillatory activity may serve as reference signal coordinating plasticity
mechanisms in cortical neurons (Huerta and Lisman, 1995; Wespatat et al.,
2004). There is also support for a phase reset mechanism locking the
oscillatory activity on the currently presented stimulus (Makeig et al., 2002;
Axmacher et al., 2006). Taken together, current evidence suggests the possible
interpretation of the gamma cycle as a rapidly repeating winner-take-all
algorithm as it is modeled in this work (Fries et al., 2007). The winner-take-
all competition can be carried out rapidly due to low latencies of fast
inhibition and its result can be read out fast (on the scale of few
milliseconds) due to the response characteristics of the population rate code
(Gerstner, 2000).
Hierarchical parts-based representation. An essential property of our memory
system is parts sharing, as it allows the same basic set of elementary
parts to be used for the combinatorial composition of familiar and novel
objects without the need to add new physical units into the system. Endowed
with this ability, the memory network can also be interpreted as a layered
neuronal bunch graph (Wiskott et al., 1997), without taking into account the
topological information. Here, the graph nodes are columns, each holding a set
of features with similar physical (visual appearance) or semantic (category
or identity) properties (Tanaka, 2003). In such a graph, new object
representations can be instantiated in a combinatorial fashion by selecting
candidate features from each node. The candidate selection here depends
critically on the homeostatic regulation of activity, which ensures that
each unit is able to participate in memory formation to an equal extent. By
introducing the hierarchy in the graph structure, higher order symbols, like
identity of a person, can be explicitly represented by assigning the chosen
set of candidate features from the lower memory layer to an identity unit on
the higher layer. These higher symbols may be used for a compact
representation of exceptionally important persons (VIPs), without discarding
the information about their composition which is kept in the top-down
connections projecting back to the lower layer. Potentially, it would also be
possible to select multiple candidates from a single node, or column, to
represent an individual face. Here we use very strong competition leading to a
form of activity sparseness termed hard sparseness (Rehn and Sommer, 2007),
limiting the number of active units to one per column. While this kind of
sparse coding is advantageous for learning individual faces, it may be
generally too sparse for representing coarser categories (like male or
female). However, the competition strength can in principle be adjusted
arbitrarily in a task-dependent manner, either by tuning the core unit gain or
by balancing the self-excitation and lateral inhibition. The latter can be
easily implemented by altering the amplitude of inhibitory or excitatory
oscillations. The alteration could be initiated by some kind of internal
cortical signal or state, indicating the task-dependent need for the
competition strength. The tuning of the competition strength would allow the
formation of less sparse activity distributions, representing the stimulus on
a coarser categorical level (Kim et al., 2008).
Attentional and generative mechanisms in the memory. Interestingly, contextual
lateral and top-down connectivity endows the system with further general
capabilities. For instance, selective object-based attention is naturally
given in our model, because the priming of the identity units on the higher
memory layer by preceding sensory or direct external stimulation would also
prime and facilitate the part-specific units on the lower layer via top-down
connections, providing them with a clear advantage in the competition against
other candidates. This priming can mediate covert attention directed to a
specific object, promoting the pop-out of its stored parts-based
representation while suppressing the rest of the memory content. Generally,
the selection and amplification by competition can be interpreted as an
attentional mechanism, which focuses the neural resources on processing one
object or category at the cost of suppressing the rest (Lee et al., 1999;
Reynolds et al., 1999). Although not exploited in this study, the network
model is also able to self-generate activity patterns that correspond to the
object representations stored in the memory in the absence of any external
input. This ability relies heavily on the lateral and top-down connectivity
established by previous experience with visual stimuli, placing the model in
remarkable relation to generative approaches explaining construction of data
representations in machine learning (Ulusoy and Bishop, 2005). From this
perspective, each face identity can be interpreted as a global cause producing
the specific activity patterns in the network. The identities are in turn
composed of many local causes, i.e. their constituent parts. The memory
structure captures all the relations between local and global causes, being
able to reproduce data explicitly in an autonomous mode.
Performance advantage over the purely feed-forward structure. Finally, we
presented sound evidence for the functional advantage of lateral and top-down
connectivity over the purely feed-forward structure in the memory formation
and recall. First, the recurrent context-based connectivity seems to speed up
the learning progress. Second, and at least as essential, the recurrent
configuration significantly outperforms the purely feed-forward configuration
on the test views which deviate strongly from the original views shown during
learning. This suggests that contextual processing is able to generalize over
new data better than the purely feed-forward solution, which performs
comparably on original or only slightly deviating views. This outcome
indicates that different processing strategies may prove more useful in
different situations. While the recurrent connectivity is mostly beneficial in
novel situations, which require additional effort for the interpretation and
learning of less familiar stimulus configurations, the feed-forward processing
already suffices to do a good and quick job when facing well-known,
overlearned situations, where effortful disambiguation is not required due to
the strong familiarity of the sensory input. There, the feed-forward
processing could benefit from the bottom-up pathway structures formed by
previous experience and evoke clear, unambiguous, easily interpretable
activity patterns along the processing hierarchy without requiring additional
contextual support from lateral and top-down connectivity. There are two
predictions arising from this outcome, which can be tested in a behavioral
experiment involving subordinate level recognition tasks. First, deactivation
of lateral and top-down connectivity in the IT would not change performance
for overlearned content, but would impair recognition for less familiar
instances of the same stimuli viewed under different conditions, the
impairment being the more visible the more strongly the viewing condition deviates
from the overlearned one. Second, the same deactivation should lead to a
measurable decrease in the learning speed, increasing the time needed to reach
a certain low level of recognition error.
Model predictions. There are some more predictions that can be derived from
the system’s behavior. One general prediction is that failed memory recall
should be accompanied by higher overall activation along the IT processing
hierarchy within the gamma or theta cycle, with the activity of the strongest
units at the cycle’s peak being, on the contrary, diminished. Conversely, a successful
recall should be characterized by decreased overall activity in the IT and by
increased activity in the winner units cluster. This is also interpretable in
terms of signaling the degree of decision certainty, the successful recall
being accompanied by greater certainty about the recognition result. Further,
a failed recall should involve much more depression (LTD) than potentiation
(LTP), a successful recall much more LTP than LTD on the active synapses. In
addition, if required to memorize and distinguish very similar stimuli, the
recall of such an item should lead to a higher overall activity in the IT
network than for items with less similar appearance. The winner units, on the
contrary, should exhibit a reduced activation due to the inhibition
originating from the competing similar content. Again, a certainty
interpretation of the activity level is possible here: the more similar the
stimuli to be discriminated, the lower is the winner activation signaling the
decision made, indicating lower certainty about the recognition result. An
interesting prediction concerning the bidirectional plasticity mechanism is
the erasure of a memory trace after repetitive stimulus-induced recall if the
LTD/LTP transition threshold is shifted to higher values, for example due
to an artificial manipulation, as performed in experiments of selective memory
erasure in mice (Cao et al., 2008).
So far, we provided a demonstration of experience-driven structure formation
and its functional benefits in a basic core of what we think can be further
developed into a full-featured, hierarchically organized visual memory domain
for all kinds of natural objects. As usual, several open questions remain, such
as invariant or transformation-tolerant processing, development of a full
hierarchy from elementary visual features to object categories and identities,
establishing the interface for behaviorally relevant context as proposed in
the framework of reinforcement learning, incorporating the mechanisms of
active vision and so on. Nevertheless, with this work we hope to have succeeded not
only in highlighting the crucial importance of a coherent interplay between the
bottom-up and top-down influences in the process of memory formation and
recognition, but also in gaining more insight into the basic principles behind
the self-organization (von der Malsburg, 2003) of a successful subsystem
coordination across different time scales. Aiming for real world applications,
we believe that the incremental, unsupervised open-ended learning design
instantiated in this work provides an inspiring and guiding paradigm for
developing systems capable of discovering and storing complex structural
regularities from natural sensory streams across multiple levels of description.
## Disclosure/Conflict-of-Interest Statement
The authors declare that the research was conducted in the absence of any
commercial or financial relationships that could be construed as a potential
conflict of interest.
## Acknowledgments
We would like to thank Cristina Savin, Cornelius Weber and Urs Bergmann for
the helpful corrections on this manuscript. This work was supported by the EU
project DAISY, FP6-2005-015803.
## References
* Artola and Singer (1993) Artola, A., Singer, W., Nov 1993. Long-term depression of excitatory synaptic transmission and its relationship to long-term potentiation. Trends Neurosci. 16 (11), 480–487.
* Axmacher et al. (2006) Axmacher, N., Mormann, F., Fernández, G., Elger, C. E., Fell, J., Aug 2006. Memory formation by neuronal synchronization. Brain Res. Rev. 52 (1), 170–182.
URL http://dx.doi.org/10.1016/j.brainresrev.2006.01.007
* Bear (1996) Bear, M. F., Nov 1996. A synaptic basis for memory storage in the cerebral cortex. Proc. Natl. Acad. Sci. U. S. A. 93 (24), 13453–13459.
* Bienenstock et al. (1982) Bienenstock, E. L., Cooper, L. N., Munro, P. W., Jan 1982. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2 (1), 32–48.
* Cao et al. (2008) Cao, X., Wang, H., Mei, B., An, S., Yin, L., Wang, L. P., Tsien, J. Z., Oct 2008. Inducible and selective erasure of memories in the mouse brain via chemical-genetic manipulation. Neuron 60 (2), 353–366.
URL http://dx.doi.org/10.1016/j.neuron.2008.08.027
* Cho et al. (2001) Cho, K., Aggleton, J. P., Brown, M. W., Bashir, Z. I., Apr 2001. An experimental test of the role of postsynaptic calcium levels in determining synaptic strength using perirhinal cortex of rat. J. Physiol. (Lond.) 532 (Pt 2), 459–466.
* Daugman (1985) Daugman, J. G., Jul 1985. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J. Opt. Soc. Am. A 2 (7), 1160–1169.
* DeGroot and Schervish (2001) DeGroot, M. H., Schervish, M. J., 2001. Probability and Statistics, 3rd Edition. Addison Wesley.
* Desai et al. (1999) Desai, N. S., Rutherford, L. C., Turrigiano, G. G., Jun 1999. Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nat. Neurosci. 2 (6), 515–520.
URL http://dx.doi.org/10.1038/9165
* Douglas and Martin (1991) Douglas, R. J., Martin, K. A., 1991. A functional microcircuit for cat visual cortex. J. Physiol. (Lond.) 440, 735–769.
* Douglas and Martin (2004) Douglas, R. J., Martin, K. A. C., 2004. Neuronal circuits of the neocortex. Annu. Rev. Neurosci. 27, 419–451.
URL http://dx.doi.org/10.1146/annurev.neuro.27.070203.144152
* Edelman (1993) Edelman, G. M., Feb 1993. Neural darwinism: selection and reentrant signaling in higher brain function. Neuron 10 (2), 115–125.
* Freedman et al. (2003) Freedman, D. J., Riesenhuber, M., Poggio, T., Miller, E. K., Jun 2003. A comparison of primate prefrontal and inferior temporal cortices during visual categorization. J. Neurosci. 23 (12), 5235–5246.
* Fries et al. (2007) Fries, P., Nikolić, D., Singer, W., Jul 2007. The gamma cycle. Trends Neurosci. 30 (7), 309–316.
URL http://dx.doi.org/10.1016/j.tins.2007.05.005
* Friston (2005) Friston, K., Apr 2005. A theory of cortical responses. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 360 (1456), 815–836.
URL http://dx.doi.org/10.1098/rstb.2005.1622
* Fujita et al. (1992) Fujita, I., Tanaka, K., Ito, M., Cheng, K., Nov 1992. Columns for visual features of objects in monkey inferotemporal cortex. Nature 360 (6402), 343–346.
URL http://dx.doi.org/10.1038/360343a0
* Fuster (1997) Fuster, J. M., Oct 1997. Network memory. Trends Neurosci. 20 (10), 451–459.
* Gerstner (2000) Gerstner, W., 2000. Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Comput. 12 (1), 43–89.
* Gray and McCormick (1996) Gray, C. M., McCormick, D. A., Oct 1996. Chattering cells: superficial pyramidal neurons contributing to the generation of synchronous oscillations in the visual cortex. Science 274 (5284), 109–113.
* Hayworth and Biederman (2006) Hayworth, K. J., Biederman, I., Nov 2006. Neural evidence for intermediate representations in object recognition. Vision Res. 46 (23), 4024–4031.
URL http://dx.doi.org/10.1016/j.visres.2006.07.015
* Huerta and Lisman (1995) Huerta, P. T., Lisman, J. E., Nov 1995. Bidirectional synaptic plasticity induced by a single burst during cholinergic theta oscillation in CA1 in vitro. Neuron 15 (5), 1053–1063.
* Johnson et al. (1991) Johnson, M. H., Dziurawiec, S., Ellis, H., Morton, J., Aug 1991. Newborns’ preferential tracking of face-like stimuli and its subsequent decline. Cognition 40 (1-2), 1–19.
* Kim et al. (2008) Kim, Y., Vladimirskiy, B. B., Senn, W., 2008. Modulating the granularity of category formation by global cortical states. Front. Comput. Neurosci. 2, 1.
URL http://dx.doi.org/10.3389/neuro.10.001.2008
* Konen and Kastner (2008) Konen, C. S., Kastner, S., Feb 2008. Two hierarchically organized neural systems for object information in human visual cortex. Nat. Neurosci. 11 (2), 224–231.
URL http://dx.doi.org/10.1038/nn2036
* Larkum et al. (2004) Larkum, M. E., Senn, W., Lüscher, H.-R., Oct 2004. Top-down dendritic input increases the gain of layer 5 pyramidal neurons. Cereb. Cortex 14 (10), 1059–1070.
URL http://dx.doi.org/10.1093/cercor/bhh065
* Lee et al. (1999) Lee, D. K., Itti, L., Koch, C., Braun, J., Apr 1999. Attention activates winner-take-all competition among visual filters. Nat. Neurosci. 2 (4), 375–381.
URL http://dx.doi.org/10.1038/7286
* Lücke (2005) Lücke, J., 2005. Dynamics of cortical columns – sensitive decision making. In: Proc. ICANN. LNCS 3696. Springer, pp. 25 – 30.
* Makeig et al. (2002) Makeig, S., Westerfield, M., Jung, T. P., Enghoff, S., Townsend, J., Courchesne, E., Sejnowski, T. J., Jan 2002. Dynamic brain sources of visual evoked responses. Science 295 (5555), 690–694.
URL http://dx.doi.org/10.1126/science.1066168
* Martinez and Benavente (1998) Martinez, A., Benavente, R., June 1998. The AR face database. Tech. Rep. 24, CVC Technical Report 24.
* Miller and MacKay (1994) Miller, K. D., MacKay, D. J. C., 1994. The role of constraints in hebbian learning. Neural Comput. 6 (1), 100–126.
* Miyashita (1988) Miyashita, Y., Oct 1988. Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature 335 (6193), 817–820.
URL http://dx.doi.org/10.1038/335817a0
* Miyashita (2004) Miyashita, Y., Oct 2004. Cognitive memory: cellular and network machineries and their top-down control. Science 306 (5695), 435–440.
URL http://dx.doi.org/10.1126/science.1101864
* Mountcastle (1997) Mountcastle, V. B., Apr 1997. The columnar organization of the neocortex. Brain 120 ( Pt 4), 701–722.
* Olshausen and Field (2004) Olshausen, B. A., Field, D. J., Aug 2004. Sparse coding of sensory inputs. Curr. Opin. Neurobiol. 14 (4), 481–487.
URL http://dx.doi.org/10.1016/j.conb.2004.07.007
* Osada et al. (2008) Osada, T., Adachi, Y., Kimura, H. M., Miyashita, Y., Jun 2008. Towards understanding of the cortical network underlying associative memory. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 363 (1500), 2187–2199.
URL http://dx.doi.org/10.1098/rstb.2008.2271
* Peters et al. (1997) Peters, A., Cifuentes, J. M., Sethares, C., 1997. The organization of pyramidal cells in area 18 of the rhesus monkey. Cereb. Cortex 7, 405 – 421.
* Phillips and Singer (1997) Phillips, W. A., Singer, W., Dec 1997. In search of common foundations for cortical computation. Behav. Brain Sci. 20 (4), 657–83; discussion 683–722.
* Quiroga et al. (2008) Quiroga, R. Q., Kreiman, G., Koch, C., Fried, I., Mar 2008. Sparse but not ’grandmother-cell’ coding in the medial temporal lobe. Trends Cogn. Sci. 12 (3), 87–91.
URL http://dx.doi.org/10.1016/j.tics.2007.12.003
* Quiroga et al. (2005) Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C., Fried, I., Jun 2005. Invariant visual representation by single neurons in the human brain. Nature 435 (7045), 1102–1107.
URL http://dx.doi.org/10.1038/nature03687
* Reddy and Kanwisher (2006) Reddy, L., Kanwisher, N., Aug 2006. Coding of visual objects in the ventral stream. Curr. Opin. Neurobiol. 16 (4), 408–414.
URL http://dx.doi.org/10.1016/j.conb.2006.06.004
* Rehn and Sommer (2007) Rehn, M., Sommer, F. T., Apr 2007. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. J. Comput. Neurosci. 22 (2), 135–146.
URL http://dx.doi.org/10.1007/s10827-006-0003-9
* Reynolds et al. (1999) Reynolds, J. H., Chelazzi, L., Desimone, R., Mar 1999. Competitive mechanisms subserve attention in macaque areas v2 and v4. J. Neurosci. 19 (5), 1736–1753.
* Rockland and Ichinohe (2004) Rockland, K. S., Ichinohe, N., Oct 2004. Some thoughts on cortical minicolumns. Exp. Brain Res. 158 (3), 265–277.
URL http://dx.doi.org/10.1007/s00221-004-2024-9
* Rolls and Tovee (1995) Rolls, E. T., Tovee, M. J., Feb 1995. Sparseness of the neuronal representation of stimuli in the primate temporal visual cortex. J. Neurophysiol. 73 (2), 713–726.
* Sherman and Guillery (1998) Sherman, S. M., Guillery, R. W., Jun 1998. On the actions that one nerve cell can have on another: distinguishing "drivers" from "modulators". Proc. Natl. Acad. Sci. U. S. A. 95 (12), 7121–7126.
* Singer (1999) Singer, W., Sep 1999. Neuronal synchrony: a versatile code for the definition of relations? Neuron 24 (1), 49–65, 111–25.
* Sjöström et al. (2008) Sjöström, P. J., Rancz, E. A., Roth, A., Häusser, M., Apr 2008. Dendritic excitability and synaptic plasticity. Physiol. Rev. 88 (2), 769–840.
URL http://dx.doi.org/10.1152/physrev.00016.2007
* Spruston (2008) Spruston, N., Mar 2008. Pyramidal neurons: dendritic structure and synaptic integration. Nat. Rev. Neurosci. 9 (3), 206–221.
URL http://dx.doi.org/10.1038/nrn2286
* Swadlow (2003) Swadlow, H. A., Jan 2003. Fast-spike interneurons and feedforward inhibition in awake sensory neocortex. Cereb. Cortex 13 (1), 25–32.
* Tanaka (2003) Tanaka, K., 2003. Columns for complex visual object features in the inferotemporal cortex: Clustering of cells with similar but slightly different stimulus selectivities. Cereb. Cortex 13 (1), 90–99.
* Thorpe and Fabre-Thorpe (2001) Thorpe, S. J., Fabre-Thorpe, M., Jan 2001. Neuroscience. seeking categories in the brain. Science 291 (5502), 260–263.
* Tsao et al. (2006) Tsao, D. Y., Freiwald, W. A., Tootell, R. B. H., Livingstone, M. S., Feb 2006. A cortical region consisting entirely of face-selective cells. Science 311 (5761), 670–674.
URL http://dx.doi.org/10.1126/science.1119983
* Tsunoda et al. (2001) Tsunoda, K., Yamane, Y., Nishizaki, M., Tanifuji, M., Aug 2001. Complex objects are represented in macaque inferotemporal cortex by the combination of feature columns. Nat. Neurosci. 4 (8), 832–838.
URL http://dx.doi.org/10.1038/90547
* Ullman et al. (2002) Ullman, S., Vidal-Naquet, M., Sali, E., Jul 2002. Visual features of intermediate complexity and their use in classification. Nat. Neurosci. 5 (7), 682–687.
URL http://dx.doi.org/10.1038/nn870
* Ulusoy and Bishop (2005) Ulusoy, I., Bishop, C. M., 2005. Generative versus discriminative methods for object recognition. In: CVPR ’05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05) - Volume 2. IEEE Computer Society, Washington, DC, USA, pp. 258–265.
* von der Malsburg (1999) von der Malsburg, C., 1999. The what and why of binding: The modeler´s perspective. Neuron 24 (1), 95–104.
* von der Malsburg (2003) von der Malsburg, C., 2003. Self-organization and the brain. In: Arbib, M. (Ed.), The handbook of brain theory and neural networks. MIT Press.
* von der Malsburg and Singer (1988) von der Malsburg, C., Singer, W., 1988. Principles of cortical network organization. In: Rakic, P., Singer, W. (Eds.), Neurobiology of Neocortex. Wiley, New York, pp. 69–99.
* Wallis et al. (2008) Wallis, G., Siebeck, U. E., Swann, K., Blanz, V., Bülthoff, H. H., 2008. The prototype effect revisited: Evidence for an abstract feature model of face recognition. J. Vis. 8 (3), 20.1–2015.
URL http://dx.doi.org/10.1167/8.3.20
* Waydo and Koch (2008) Waydo, S., Koch, C., 2008. Unsupervised learning of individuals and categories from images. Neural Comput. 20 (5), 1165–1178.
* Wespatat et al. (2004) Wespatat, V., Tennigkeit, F., Singer, W., Oct 2004. Phase sensitivity of synaptic modifications in oscillating cells of rat visual cortex. J. Neurosci. 24 (41), 9067–9075.
URL http://dx.doi.org/10.1523/JNEUROSCI.2221-04.2004
* Whittington et al. (1995) Whittington, M. A., Traub, R. D., Jefferys, J. G., Feb 1995. Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature 373 (6515), 612–615.
URL http://dx.doi.org/10.1038/373612a0
* Wiskott et al. (1997) Wiskott, L., Fellous, J.-M., Krüger, N., von der Malsburg, C., 1997. Face recognition by elastic bunch graph matching. IEEE T. Pattern. Anal. 19 (7), 775–779.
* Yoshimura et al. (2005) Yoshimura, Y., Dantzker, J. L. M., Callaway, E. M., Feb 2005. Excitatory cortical neurons form fine-scale functional networks. Nature 433 (7028), 868–873.
URL http://dx.doi.org/10.1038/nature03252
* Zhang and Linden (2003) Zhang, W., Linden, D. J., Nov 2003. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nat. Rev. Neurosci. 4 (11), 885–900.
URL http://dx.doi.org/10.1038/nrn1248
|
arxiv-papers
| 2009-05-13T14:23:36 |
2024-09-04T02:49:02.595338
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jenia Jitsev, Christoph von der Malsburg",
"submitter": "Jenia Jitsev",
"url": "https://arxiv.org/abs/0905.2125"
}
|
0905.2181
|
Non-Bayesian particle filters
Alexandre J. Chorin and Xuemin Tu
Department of Mathematics, University of California at Berkeley
and
Lawrence Berkeley National Laboratory
Berkeley, CA, 94720
###### Abstract
Particle filters for data assimilation in nonlinear problems use “particles”
(replicas of the underlying system) to generate a sequence of probability
density functions (pdfs) through a Bayesian process. This can be expensive
because a significant number of particles has to be used to maintain accuracy.
We offer here an alternative, in which the relevant pdfs are sampled directly
by an iteration. An example is discussed in detail.
Keywords particle filter, chainless sampling, normalization factor, iteration,
non-Bayesian
## 1 Introduction.
There are many problems in science in which the state of a system must be
identified from an uncertain equation supplemented by a stream of noisy data
(see e.g. [1]). A natural model of this situation consists of a stochastic
differential equation (SDE):
$d{\bf x}={\bf f}({\bf x},t)\,dt+g({\bf x},t)\,d{\bf w},$ (1)
where ${\bf x}=(x_{1},x_{2},\dots,x_{m})$ is an $m$-dimensional vector, $d{\bf
w}$ is $m$-dimensional Brownian motion, ${\bf f}$ is an $m$-dimensional vector
function, and $g$ is a scalar (i.e., an $m$ by $m$ diagonal matrix of the form
$gI$, where $g$ is a scalar and $I$ is the identity matrix). The Brownian
motion encapsulates all the uncertainty in this equation. The initial state
${\bf x}(0)$ is assumed given and may be random as well.
As the experiment unfolds, it is observed, and the values ${\bf b}^{n}$ of a
measurement process are recorded at times $t^{n}$; for simplicity assume
$t^{n}=n\delta$, where $\delta$ is a fixed time interval and $n$ is an
integer. The measurements are related to the evolving state ${\bf x}(t)$ by
${\bf b}^{n}={\bf h}({\bf x}^{n})+G{\bf W}^{n},$ (2)
where ${\bf h}$ is a $k$-dimensional, generally nonlinear, vector function
with $k\leq m$, $G$ is a diagonal matrix, ${\bf x}^{n}={\bf x}(n\delta)$, and
${\bf W}^{n}$ is a vector whose components are independent Gaussian variables
of mean 0 and variance 1, independent also of the Brownian motion in equation
(1). The task is to estimate ${\bf x}$ on the basis of equation (1) and the
observations (2).
If the system (1) is linear and the data are Gaussian, the solution can be
found via the Kalman-Bucy filter. In the general case, it is natural to try to
estimate ${\bf x}$ as the mean of its evolving probability density. The
initial state ${\bf x}$ is known and so is its probability density; all one
has to do is evaluate sequentially the density $P_{n+1}$ of ${\bf x}^{n+1}$
given the probability density $P_{n}$ of ${\bf x}^{n}$ and the data ${\bf
b}^{n+1}$. This can be done by following “particles” (replicas of the system)
whose empirical distribution approximates $P_{n}$. In a Bayesian filter (see
e.g. [2, 3, 4, 5, 6, 7, 8, 9]), one uses the pdf $P_{n}$ and equation (1) to
generate a prior density, and then one uses the new data ${\bf b}^{n+1}$ to
generate a posterior density $P_{n+1}$. In addition, one may have to sample
backward to take into account the information each measurement provides about
the past and avoid having too many identical particles. Evolving particles is
typically expensive, and the backward sampling, usually done by Markov chain
Monte Carlo (MCMC), can be expensive as well, because the number of particles
needed can grow catastrophically (see e.g. [10]).
In this paper we offer an alternative to the standard approach, in which
$P_{n+1}$ is sampled directly without recourse to Bayes’ theorem and backward
sampling, if needed, is done by chainless Monte Carlo [11]. Our direct
sampling is based on a representation of a variable with density $P_{n+1}$ by
a collection of functions of Gaussian variables parametrized by the support of
$P_{n}$, with parameters found by iteration. The construction is related to
chainless sampling as described in [11]. The idea in chainless sampling is to
produce a sample of a large set of variables by sequentially sampling a
growing sequence of nested conditionally independent subsets. As observed in
[12, 13], chainless sampling for a SDE reduces to interpolatory sampling, as
explained below. Our construction will be explained in the following sections
through an example where the position of a ship is deduced from the
measurements of an azimuth, already used as a test bed in [6, 14, 15].
## 2 Sampling by interpolation and iteration.
First we explain how to sample via interpolation and iteration in a simple
example, related to the example and the construction in [12]. Consider the
scalar SDE
$dx=f(x,t)dt+\sqrt{\sigma}\,dw;$ (3)
we want to find sample paths $x=x(t),0\leq t\leq 1$, subject to the conditions
$x(0)=0,x(1)=X$.
Let $N(a,v)$ denote a Gaussian variable with mean $a$ and variance $v$. We
first discretize equation (3) on a regular mesh $t^{0},t^{1},\dots,t^{N}$,
where $t^{n}=n\delta$, $\delta=1/N$, $0\leq n\leq N$, with $x^{n}=x(t^{n})$,
and, following [12], use a balanced implicit discretization [16, 17]:
$x^{n+1}=x^{n}+f(x^{n},t^{n})\delta+(x^{n+1}-x^{n})f^{\prime}(x^{n},t^{n})\delta+W^{n+1},$
where $f^{\prime}(x^{n},t^{n})=\frac{\partial f}{\partial x}(x^{n},t^{n})$
and $W^{n+1}$ is $N(0,\sigma/N)$. The joint probability density of the
variables $x^{1},\dots,x^{N-1}$ is $Z^{-1}\exp(-\sum_{n=0}^{N-1}V_{n})$, where $Z$
is the normalization constant and
$V_{n}=\frac{\left((1-\delta f^{\prime})(x^{n+1}-x^{n})-\delta f\right)^{2}}{2\sigma\delta}=\frac{\left(x^{n+1}-x^{n}-\delta f/(1-\delta f^{\prime})\right)^{2}}{2\sigma_{n}},$
where $f,f^{\prime}$ are functions of the $x^{j}$, and
$\sigma_{n}=\sigma\delta/(1-\delta f^{\prime})^{2}$ (see [18]). One can obtain
sample solutions by sampling this density, e.g. by MCMC, or one can obtain
them by interpolation (chainless sampling), as follows.
Consider first the special case $f(x,t)=f(t)$, so that in particular
$f^{\prime}=0$. Each increment $x^{n+1}-x^{n}$ is now a $N(a_{n},\sigma/N)$
variable, with the $a_{n}=f(t^{n})\delta$ known explicitly. Let $N$ be a power
of $2$. Consider the variable $x^{N/2}$. On one hand,
$x^{N/2}=\sum_{1}^{N/2}(x^{n}-x^{n-1})=N(A_{1},V_{1}),$
where $A_{1}=\sum_{1}^{N/2}a_{n},V_{1}=\sigma/2$. On the other hand,
$X=x^{N/2}+\sum_{N/2+1}^{N}(x^{n}-x^{n-1}),$
so that
$x^{N/2}=N(A_{2},V_{2}),$
with
$A_{2}=X-\sum_{N/2+1}^{N-1}a_{n},\quad V_{2}=V_{1}.$
The pdf of $x^{N/2}$ is the product of the two pdfs; one can check that
$\exp\left(-\frac{(x-A_{1})^{2}}{2V_{1}}\right)\exp\left(-\frac{(x-A_{2})^{2}}{2V_{2}}\right)=\exp\left(-\frac{(x-\bar{a})^{2}}{2\bar{v}}\right)\exp(-\phi),$
where $\bar{v}=\frac{V_{1}V_{2}}{V_{1}+V_{2}}$,
$\bar{a}=\frac{V_{2}A_{1}+V_{1}A_{2}}{V_{1}+V_{2}}$ (here $V_{1}=V_{2}$, so $\bar{a}=(A_{1}+A_{2})/2$), and
$\phi=\frac{(A_{2}-A_{1})^{2}}{2(V_{1}+V_{2})}$; $e^{-\phi}$ is the
probability of getting from the origin to $X$, up to a normalization constant.
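One can check the identity by completing the square (the same algebra recurs in
sections 4 and 5 whenever two Gaussian densities of the same quantity are multiplied):
$\frac{(x-A_{1})^{2}}{2V_{1}}+\frac{(x-A_{2})^{2}}{2V_{2}}=\frac{(V_{1}+V_{2})x^{2}-2(V_{2}A_{1}+V_{1}A_{2})x+V_{2}A_{1}^{2}+V_{1}A_{2}^{2}}{2V_{1}V_{2}}=\frac{(x-\bar{a})^{2}}{2\bar{v}}+\frac{(A_{1}-A_{2})^{2}}{2(V_{1}+V_{2})}.$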
Pick a sample $\xi_{1}$ from the $N(0,1)$ density; one obtains a sample of
$x^{N/2}$ by setting $x^{N/2}=\bar{a}+\sqrt{\bar{v}}\xi_{1}$. Given a sample
of $x^{N/2}$ one can similarly sample $x^{N/4},x^{3N/4}$, then $x^{N/8}$,
$x^{3N/8}$, etc., until all the $x^{j}$ have been sampled. If we define
${\bf\xi}=(\xi_{1},\xi_{2},\dots,\xi_{N-1})$, then for each choice of
${\bf\xi}$ we find a sample $(x^{1},\dots,x^{N-1})$ such that
$\exp\left(-\frac{\xi_{1}^{2}+\cdots+\xi_{N-1}^{2}}{2}\right)\exp\left(-\frac{(X-\sum_{n}a_{n})^{2}}{2\sigma}\right)=\exp\left(-\frac{(x^{1}-x^{0}-a_{0})^{2}}{2\sigma/N}-\frac{(x^{2}-x^{1}-a_{1})^{2}}{2\sigma/N}-\dots-\frac{(x^{N}-x^{N-1}-a_{N-1})^{2}}{2\sigma/N}\right),$ (4)
where the factor $\exp\left(-\frac{(X-\sum_{n}a_{n})^{2}}{2\sigma}\right)$ on
the left is the probability of the fixed end value $X$ up to a normalization
constant. In this linear problem, this factor is the same for all the samples
and therefore harmless. One can repeat this sampling process for multiple
choices of the variables $\xi_{j}$; each sample of the corresponding set of
$x^{n}$ is independent of any previous samples of this set.
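As an illustration, here is a minimal Python sketch of this interpolatory
(bisection) sampling for the linear case $f(x,t)=f(t)$; the function name and
the choices of $f$, $N$, $\sigma$, and $X$ are ours, and the $N(0,1)$ reference
variables are drawn on the fly rather than fixed in advance, which is
equivalent in the linear case.

```python
import numpy as np

def sample_pinned_path(a, var_step, X, rng):
    """Interpolatory (chainless) sampling of x^1,...,x^{N-1} in the linear case
    f(x,t)=f(t): a[n] is the known mean of the increment x^{n+1}-x^n, var_step
    its variance (sigma/N in the text); the path is pinned at x^0=0 and x^N=X."""
    N = len(a)                                      # a power of 2
    x = np.empty(N + 1)
    x[0], x[N] = 0.0, X
    csum = np.concatenate(([0.0], np.cumsum(a)))    # csum[k] = a[0]+...+a[k-1]

    def bisect(i, j):
        if j - i < 2:
            return
        m = (i + j) // 2
        A1, V1 = x[i] + (csum[m] - csum[i]), (m - i) * var_step   # from the left
        A2, V2 = x[j] - (csum[j] - csum[m]), (j - m) * var_step   # from the right
        vbar = V1 * V2 / (V1 + V2)                  # product of the two Gaussians
        abar = (V2 * A1 + V1 * A2) / (V1 + V2)
        x[m] = abar + np.sqrt(vbar) * rng.standard_normal()       # one reference variable
        bisect(i, m)
        bisect(m, j)

    bisect(0, N)
    return x

rng = np.random.default_rng(0)
N, sigma = 64, 0.5                                  # illustrative values only
a = np.cos(np.linspace(0.0, 1.0, N, endpoint=False)) / N         # a_n = f(t^n)*delta
path = sample_pinned_path(a, sigma / N, X=1.0, rng=rng)
print(path[0], path[N // 2], path[-1])
```

Each call with fresh reference variables returns a path that is independent of
all previous ones, which is the point of the construction.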
Now return to the general case. The functions $f$, $f^{\prime}$ are now
functions of the $x^{j}$. We obtain a sample of the probability density we
want by iteration. First pick $\Xi=(\xi_{1},\xi_{2},\dots,\xi_{N-1})$, where
each $\xi_{j}$ is drawn independently from the $N(0,1)$ density (this vector
remains fixed during the iteration). Make a first guess ${\bf
x}^{0}=(x^{1}_{0},x^{2}_{0},\dots,x^{N-1}_{0})$ (for example, if $X\neq 0$,
pick ${\bf x}^{0}=0$). Evaluate the functions $f,f^{\prime}$ at the current iterate ${\bf x}^{j}$
(note that now $f^{\prime}\neq 0$, and therefore the variances of the various
increments are no longer constants). We are back in previous case, and can
find values of the increments $x^{n+1}_{j+1}-x^{n}_{j+1}$ corresponding to the
values of $f,f^{\prime}$ we have. Repeat the process starting with the new
iterate. If the vectors ${\bf x}^{j}$ converge to a vector ${\bf
x}=(x^{1},\dots,x^{N-1})$, we obtain, in the limit, equation (4), where now on
the right side $\sigma$ depends on $n$ so that $\sigma=\sigma_{n}$, and both
$a_{n},\sigma_{n}$ are functions of the final ${\bf x}$. The left hand side of
(4) becomes:
$\exp\left(-\frac{\xi_{1}^{2}+\cdots+\xi_{N-1}^{2}}{2}\right)\exp\left(-\frac{(X-\sum_{n}a_{n})^{2}}{2\sum_{n}\sigma_{n}}\right).$
Note that now the factor
$\exp\left(-\frac{(X-\sum_{n}a_{n})^{2}}{2\sum_{n}\sigma_{n}}\right)$ is
different from sample to sample, and changes the relative weights of the
different samples. In averaging, one should take this factor as weight, or
resample as described at the end of the following section. In order to obtain
more uniform weights, one also can use the strategies in [11, 12].
One can readily see that the iteration converges if $KTM<1$, where $K$ is the
Lipschitz constant of $f$, $T$ is the length of the interval on which one works
(here $T=1$), and $M$ is the maximum norm of the vectors ${\bf x}^{j+1}-{\bf
x}^{j}$. If this inequality is not satisfied for the iteration above, it can
be re-established by a suitable underrelaxation. One should of course choose $N$
large enough so that the results are converged in $N$. We do not provide more
details here because they are extraneous to our purpose, which is to explain
chainless/interpolatory sampling and the use of reference variables in a
simple context.
## 3 The ship azimuth problem.
The problem we focus on is discussed in [6, 14, 15], where it is used to
demonstrate the capabilities of particular Bayesian filters. A ship sets out
from a point $(x_{0},y_{0})$ in the plane and undergoes a random walk,
$x^{n+1}=x^{n}+dx^{n+1},\qquad y^{n+1}=y^{n}+dy^{n+1},$ (5)
for $n\geq 0$, with $x^{0},y^{0}$ given, and $dx^{n+1}=N(dx^{n},\sigma)$,
$dy^{n+1}=N(dy^{n},\sigma)$, i.e., each displacement is a sample of a Gaussian
random variable whose variance $\sigma$ does not change from step to step and
whose mean is the value of the previous displacement. An observer makes noisy
measurements of the azimuth $\arctan(y^{n}/x^{n})$, recording
$b^{n}=\arctan\frac{y^{n}}{x^{n}}+N(0,s),$ (6)
where the variance $s$ is also fixed; here the observed quantity $b$ is scalar
and is not denoted by a boldface letter. The problem is to reconstruct the
positions ${\bf x}^{n}=(x^{n},y^{n})$ from equations (5,6). We take the same
parameters as [6]: $x_{0}=0.01,y_{0}=20$, $dx^{1}=0.002$, $dy^{1}=-0.06$,
$\sigma=1\cdot 10^{-6},s=25\cdot 10^{-6}$. We follow numerically $M$
particles, all starting from $X_{i}^{0}=x_{0},Y_{i}^{0}=y_{0}$, as described
in the following sections, and we estimate the ship’s position at time
$n\delta$ as the mean of the locations ${\bf
X}^{n}_{i}=(X^{n}_{i},Y^{n}_{i}),i=1,\dots,M$ of the particles at that time.
The authors of [6] also show numerical results for runs with varying data and
constants; we discuss those refinements in section 6 below.
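For concreteness, the following Python sketch generates a synthetic boat path
and its azimuth observations according to equations (5) and (6) with these
parameters; the function name is ours, and we read $dx^{1},dy^{1}$ as the given
first displacements, which is one possible interpretation.

```python
import numpy as np

def simulate_ship(n_steps=160, x0=0.01, y0=20.0, dx1=0.002, dy1=-0.06,
                  sigma=1e-6, s=25e-6, rng=None):
    """Random-walk boat path, eq. (5), and noisy azimuth observations, eq. (6)."""
    rng = rng or np.random.default_rng()
    x, y, dx, dy = x0, y0, dx1, dy1
    path, obs = [(x, y)], []
    for n in range(n_steps):
        if n > 0:   # the first displacement is the given one; later ones are
                    # Gaussian with the previous displacement as their mean
            dx = dx + np.sqrt(sigma) * rng.standard_normal()
            dy = dy + np.sqrt(sigma) * rng.standard_normal()
        x, y = x + dx, y + dy
        path.append((x, y))
        obs.append(np.arctan(y / x) + np.sqrt(s) * rng.standard_normal())
    return np.array(path), np.array(obs)

path, obs = simulate_ship(rng=np.random.default_rng(1))
print(path[-1], obs[:3])
```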
## 4 Forward step.
Assume we have a collection of $M$ particles ${\bf X}^{n}$ at time
$t^{n}=n\delta$ whose empirical density approximates $P_{n}$; now we find
increments $d{\bf X}^{n+1}$ such that the empirical density of ${\bf
X}^{n+1}={\bf X}^{n}+d{\bf X}^{n+1}$ approximates $P_{n+1}$. $P_{n+1}$ is
known implicitly: it is the product of the density that can be deduced from
the SDE and the one that comes from the observations, with the appropriate
normalization. If the increments were known, their probability $p$ (the
density $P_{n+1}$ evaluated at the resulting positions ${\bf X}^{n+1}$) would
be known, so $p$ is a function of $d{\bf X}^{n+1}$, $p=p(d{\bf X}^{n+1})$. For
each particle $i$, we are going to sample a Gaussian reference density, obtain
a sample of probability $\rho$, then solve (by iteration) the equation
$\rho=p(d{\bf X}^{n+1}_{i})$ (7)
to obtain $d{\bf X}_{i}^{n+1}$.
Define $f(x,y)=\arctan(y/x)$ and $f^{n}=f(X^{n},Y^{n})$. We are working on one
particle at a time, so the index $i$ can be temporarily suppressed. Pick two
independent samples $\xi_{x}$, $\xi_{y}$ from a $N(0,1)$ density (the
reference density in the present calculation), and set
$\rho=\frac{1}{2\pi}\exp\left(-\frac{\xi_{x}^{2}}{2}-\frac{\xi_{y}^{2}}{2}\right)$;
the variables $\xi_{x}$, $\xi_{y}$ remain unchanged until the end of the
iteration. We are looking for displacements $dX^{n+1}$, $dY^{n+1}$, and
parameters $a_{x},a_{y},v_{x},v_{y},\phi$, such that:
$2\pi\rho=\exp\left(-\frac{(dX^{n+1}-dX^{n})^{2}}{2\sigma}-\frac{(dY^{n+1}-dY^{n})^{2}}{2\sigma}-\frac{(f^{n+1}-b^{n+1})^{2}}{2s}\right)\exp(\phi)=\exp\left(-\frac{(dX^{n+1}-a_{x})^{2}}{2v_{x}}-\frac{(dY^{n+1}-a_{y})^{2}}{2v_{y}}\right)$ (8)
The first equality states what we wish to accomplish: find increments
$dX^{n+1}$, $dY^{n+1}$, functions respectively of $\xi_{x},\xi_{y}$, whose
probability with respect to $P_{n+1}$ is $\rho$. The factor $e^{\phi}$ is
needed to normalize this term ($\phi$ is called below a “phase”). The second
equality says how the goal is reached: we are looking for parameters
$a_{x},a_{y},v_{x},v_{y},$ (all functions of ${\bf X}^{n}$) such that the
increments are samples of Gaussian variables with these parameters, with the
assumed probability. One should remember that in our example the mean of
$dX^{n+1}$ is $dX^{n}$, and similarly for $dY^{n+1}$. We are not representing
$P_{n+1}$ as a function of a single Gaussian; there is a different Gaussian
for every value of ${\bf X}^{n}$.
To satisfy the second equality we set up an iteration for vectors $d{\bf
X}^{n+1,j}(=d{\bf X}^{j}$ for brevity) that converges to $d{\bf X}^{n+1}$.
Start with $d{\bf X}^{0}=0$. We now explain how to compute $d{\bf X}^{j+1}$
given $d{\bf X}^{j}$.
Approximate the observation equation (6) by
$f({\bf
X}^{j})+f_{x}\cdot(dX^{j+1}-dX^{j})+f_{y}\cdot(dY^{j+1}-dY^{j})=b^{n+1}+N(0,s),$
(9)
where the derivatives $f_{x},f_{y}$ are, like $f$, evaluated at ${\bf
X}^{j}={\bf X}^{n}+d{\bf X}^{j}$, i.e., approximate the observation equation
by its Taylor series expansion around the previous iterate. Define a variable
$\eta^{j+1}=(f_{x}\cdot dX^{j+1}+f_{y}\cdot
dY^{j+1})/\sqrt{f_{x}^{2}+f_{y}^{2}}$. The approximate observation equation
says that $\eta^{j+1}$ is a $N(a_{1},v_{1})$ variable, with
$a_{1}=-\frac{f-f_{x}\cdot dX^{j}-f_{y}\cdot dY^{j}-b^{n+1}}{\sqrt{f_{x}^{2}+f_{y}^{2}}},\qquad v_{1}=\frac{s}{f_{x}^{2}+f_{y}^{2}}.$ (10)
On the other hand, from the equations of motion one finds that $\eta^{j+1}$ is
$N(a_{2},v_{2})$, with $a_{2}=(f_{x}\cdot dX^{n}+f_{y}\cdot
dY^{n})/\sqrt{f_{x}^{2}+f_{y}^{2}}$ and $v_{2}=\sigma$. Hence the pdf of
$\eta^{j+1}$ is, up to normalization factors,
$\exp\left(-\frac{(x-a_{1})^{2}}{2v_{1}}-\frac{(x-a_{2})^{2}}{2v_{2}}\right)=\exp\left(-\frac{(x-\bar{a})^{2}}{2\bar{v}}\right)\exp(-\phi),$
where $\bar{v}=\frac{v_{1}v_{2}}{v_{1}+v_{2}}$,
$\bar{a}=\frac{a_{1}v_{2}+a_{2}v_{1}}{v_{1}+v_{2}}$,
$\phi=\frac{(a_{1}-a_{2})^{2}}{2(v_{1}+v_{2})}=\phi^{j+1}$.
We can also define a variable $\eta_{+}^{j+1}$ that is a linear combination of
$dX^{j+1}$, $dY^{j+1}$ and is uncorrelated with $\eta^{j+1}$:
$\eta_{+}^{j+1}=\frac{-f_{y}\cdot dX^{j+1}+f_{x}\cdot dY^{j+1}}{\sqrt{f_{x}^{2}+f_{y}^{2}}}.$
The observations do not affect $\eta_{+}^{j+1}$, so its mean and variance are
known. Given the means and variances of $\eta^{j+1}$, $\eta^{j+1}_{+}$ one can
easily invert the orthogonal matrix that connects them to $dX^{j+1}$,
$dY^{j+1}$ and find the means and variances $a_{x},v_{x}$ of $dX^{j+1}$ and
$a_{y},v_{y}$ of $dY^{j+1}$ after their modification by the observation (the
subscripts on $a,v$ are labels, not differentiations). Now one can produce
values for $dX^{j+1},dY^{j+1}$:
$dX^{j+1}=a_{x}+\sqrt{v_{x}}\xi_{x},\quad dY^{j+1}=a_{y}+\sqrt{v_{y}}\xi_{y},$
where $\xi_{x}$, $\xi_{y}$ are the samples from $N(0,1)$ chosen at the
beginning of the iteration. This completes the iteration.
This iteration converges to ${\bf X}^{n+1}$ such that $f({\bf
X}^{n+1})=b^{n+1}+N(0,s)$, and the phases $\phi^{j}$ converge to a limit
$\phi=\phi_{i}$, where the particle index $i$ has been restored. The time
interval over which the solution is updated in each step is short, and we do
not expect any problem with convergence, either here or in the next section,
and indeed there is none; in all cases the iteration converges in a small
number of steps. Note that after the iteration the variables
$X^{n+1}_{i},Y^{n+1}_{i}$ are no longer independent; the observation creates a
relation between them.
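The following Python sketch implements this forward step for a single particle
under our reading of the section; function and variable names are ours, and a
small fixed number of iterations stands in for the convergence test.

```python
import numpy as np

def forward_step(Xn, Yn, dXn, dYn, b_next, sigma, s, rng, n_iter=20):
    """One forward step for one particle: (Xn, Yn) is the current position,
    (dXn, dYn) the previous displacements, b_next the new azimuth observation.
    Returns the new displacements and the phase phi (weight = exp(-phi))."""
    xi_x, xi_y = rng.standard_normal(2)          # reference variables, kept fixed
    dX, dY = 0.0, 0.0                            # first guess
    for _ in range(n_iter):
        Xj, Yj = Xn + dX, Yn + dY
        r2 = Xj**2 + Yj**2
        f = np.arctan(Yj / Xj)
        fx, fy = -Yj / r2, Xj / r2               # derivatives of arctan(y/x)
        norm = np.sqrt(fx**2 + fy**2)
        # observation equation linearized about the current iterate, eqs. (9)-(10)
        a1 = -(f - fx * dX - fy * dY - b_next) / norm
        v1 = s / norm**2
        # equations of motion: eta is N(a2, v2)
        a2 = (fx * dXn + fy * dYn) / norm
        v2 = sigma
        # product of the two Gaussian densities for eta, and the phase
        vbar = v1 * v2 / (v1 + v2)
        abar = (a1 * v2 + a2 * v1) / (v1 + v2)
        phi = (a1 - a2)**2 / (2.0 * (v1 + v2))
        # eta_plus is orthogonal to eta and unaffected by the observation
        a_plus = (-fy * dXn + fx * dYn) / norm
        v_plus = sigma
        # rotate back to means and variances of dX, dY, then use the fixed xi's
        ax = (abar * fx - a_plus * fy) / norm
        ay = (abar * fy + a_plus * fx) / norm
        vx = (vbar * fx**2 + v_plus * fy**2) / norm**2
        vy = (vbar * fy**2 + v_plus * fx**2) / norm**2
        dX = ax + np.sqrt(vx) * xi_x
        dY = ay + np.sqrt(vy) * xi_y
    return dX, dY, phi
```

A full filter applies this to every particle and then resamples with the
weights $\exp(-\phi_{i})$, as described below.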
Do this for all the particles. The particles are now samples of $P_{n+1}$, but
they have been obtained by sampling different densities (remember that the
parameters in the Gaussians in equation (8) vary). One can get rid of this
heterogeneity by viewing the factors $\exp(-\phi)$ as weights and resampling,
i.e., for each of $M$ random numbers $\theta_{k},k=1,\dots,M$ drawn from the
uniform distribution on $[0,1]$, choose a new ${\bf\hat{X}}^{n+1}_{k}={\bf
X}^{n+1}_{i}$ such that $Z^{-1}\sum_{j=1}^{i-1}\exp(-\phi_{j})<\theta_{k}\leq
Z^{-1}\sum_{j=1}^{i}\exp(-\phi_{j})$ (where
$Z=\sum_{j=1}^{M}\exp(-\phi_{j})$), and then suppress the hat. We have traded
the resampling of Bayesian filters for a resampling based on the normalizing
factors of the several Gaussian densities; this is a worthwhile trade because
in a Bayesian filter one gets a set of samples many of which may have low
probability with respect to $P_{n+1}$, and here we have a set of samples each
one of which has high probability with respect to a pdf close to $P_{n+1}$.
Note also that the resampling does not have to be done at every step; for
example, one can add up the phases for a given particle and resample only when
the ratio of the largest cumulative weight $\exp(-\sum\phi_{i})$ to the
smallest such weight exceeds some limit $L$ (the summation is over the weights
accrued to a particular particle $i$ since the last resampling). If one is
worried by too many particles being close to each other (“depletion” in the
Bayesian terminology), one can divide the set of particles into subsets of
small size and resample only inside those subsets, creating a greater
diversity. As will be seen in section 6, none of these strategies will be used
here and we will resample fully at every step.
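A minimal sketch of the resampling by normalization factors described above,
assuming NumPy (drawing indices with `rng.choice` reproduces the rule with the
uniform $\theta_{k}$ and cumulative sums; subtracting $\min_{i}\phi_{i}$ is a
numerical-stability step of ours and does not change the normalized weights):

```python
import numpy as np

def resample_by_phases(particles, phases, rng):
    """Resample particle states with weights proportional to exp(-phi_i)."""
    w = np.exp(-(phases - phases.min()))     # stabilized, unnormalized weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx].copy()

rng = np.random.default_rng(2)
particles = rng.normal(size=(100, 2))        # placeholder particle positions
phases = rng.uniform(0.0, 3.0, size=100)     # placeholder accumulated phases
particles = resample_by_phases(particles, phases, rng)
```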
## 5 Backward sampling.
The algorithm of the previous section is sufficient to create a filter, but
accuracy may require an additional refinement. Every observation provides
information not only about the future but also about the past; it may, for
example, tag as improbable earlier states that had seemed probable before the
observation was made; one may have to go back and correct the past after every
observation (this backward sampling is often misleadingly motivated solely by
the need to create greater diversity among the particles in a Bayesian
filter). As will be seen below, this backward sampling does not provide a
significant boost to accuracy in the present problem, but it is described here
for the sake of completeness.
Given a set of particles at time $(n+1)\delta$, after a forward step and maybe
a subsequent resampling, one can figure out where each particle $i$ was in the
previous two steps, and have a partial history for each particle $i$: ${\bf
X}_{i}^{n-1},{\bf X}_{i}^{n},{\bf X}_{i}^{n+1}$ (if resamples had occurred,
some parts of that history may be shared among several current particles).
Knowing the first and the last member of this sequence, one can interpolate
for the middle term as in section 2, thus projecting information backward.
This requires that one recompute $d{\bf X}^{n}$.
Let $d{\bf X}^{\text{tot}}=d{\bf X}^{n}+d{\bf X}^{n+1}$; in the present
section this quantity is assumed known and remains fixed. In the azimuth
problem discussed here, one has to deal with the slight complication due to
the fact that the mean of each increment is the value of the previous one, so
that two successive increments are related in a slightly more complicated way
than usual. The displacement $dX^{n}$ is a $N(dX^{n-1},\sigma)$ variable, and
$dX^{n+1}$ is a $N(dX^{n},\sigma)$ variable, so that one goes from $X^{n-1}$
to $X^{n+1}$ by sampling first a $N(2dX^{n-1},4\sigma)$ variable that takes us
from ${X}^{n-1}$ to an intermediate point $P$, with a correction by the
observation half way up this first leg, and then one samples a
$N(dX^{\text{tot}},\sigma)$ variable to reach $X^{n+1}$, and similarly for
$Y$. Let the variable that connects ${\bf X}^{n-1}$ to $P$ be $d{\bf
X}^{\text{new}}$, so that what replaces $d{\bf X}^{n}$ is $d{\bf
X}^{\text{new}}/2$. Accordingly, we are looking for a new displacement $d{\bf
X}^{\text{new}}=(dX^{\text{new}},dY^{\text{new}})$, and for parameters
$a_{x}^{\text{new}},a_{y}^{\text{new}},v_{x}^{\text{new}},v_{y}^{\text{new}}$
such that
$\exp\left(-\frac{\xi_{x}^{2}+\xi_{y}^{2}}{2}\right)=\exp\left(-\frac{(dX^{\text{new}}-2dX^{n-1})^{2}}{8\sigma}-\frac{(dY^{\text{new}}-2dY^{n-1})^{2}}{8\sigma}\right)\exp\left(-\frac{(f^{n}-b^{n})^{2}}{2s}\right)\exp\left(-\frac{(dX^{\text{new}}-dX^{\text{tot}})^{2}}{2\sigma}-\frac{(dY^{\text{new}}-dY^{\text{tot}})^{2}}{2\sigma}\right)\exp(\phi)=\exp\left(-\frac{(dX^{\text{new}}-\bar{a}_{x})^{2}}{2v^{\text{new}}_{x}}-\frac{(dY^{\text{new}}-\bar{a}_{y})^{2}}{2v^{\text{new}}_{y}}\right),$
where $f^{n}=f(X^{n-1}+dX^{\text{new}}/2,Y^{n-1}+dY^{\text{new}}/2)$ and
$\xi_{x}$, $\xi_{y}$ are independent $N(0,1)$ Gaussian variables. As in
equation (8), the first equality embodies what we wish to accomplish- find
increments, functions of the reference variables, that sample the new pdf at
time $n\delta$ defined by the forward motion, the constraint imposed by the
observation, and by knowledge of the position at time $(n+1)\delta$. The
second equality states that this is done by finding particle-dependent
parameters for a Gaussian density.
We again find these parameters as well as the increments by iteration. Much of
the work is separate for the $X$ and $Y$ components of the equations of
motion, so we write some of the equations for the $X$ component only. Again
set up an iteration for variables $dX^{\text{new},j}=dX^{j}$ which converge to
$dX^{\text{new}}$. Start with $dX^{0}=0$. To find $dX^{j+1}$ given $dX^{j}$,
approximate the observation equation (6), as before, by equation (9); define
again variables $\eta^{j+1},\eta^{j+1}_{+}$, one in the direction of the
approximate constraint and one orthogonal to it; in the direction of the
constraint multiply the pdfs as in the previous section; construct new means
$a^{1}_{x},a^{1}_{y}$ and new variances $v^{1}_{x},v^{1}_{y}$ for $dX,dY$ at
time $n$, taking into account the observation at time $n$, again as before.
This also produces a phase $\phi=\phi_{0}$.
Now take into account that the location of the boat at time $n+1$ is known;
this creates a new mean $\bar{a}_{x}$, a new variance $\bar{v}_{x}$, and a new
phase $\phi_{x}$, by $\bar{v}_{x}=\frac{v_{1}v_{2}}{v_{1}+v_{2}}$,
$\bar{a}_{x}=\frac{a_{1}v_{2}+a_{2}v_{1}}{v_{1}+v_{2}}$,
$\phi_{x}=\frac{(a_{1}-a_{2})^{2}}{2(v_{1}+v_{2})}$, where
$a_{1}=2a^{1}_{x},v_{1}=4v^{1}_{x},a_{2}=dX^{\text{tot}},v_{2}=\sigma$. Finally,
find a new interpolated position
$dX^{j+1}=a_{x}^{\text{new}}/2+\sqrt{v_{x}^{\text{new}}}\xi_{x}$ (the
calculation for $dY^{j+1}$ is similar, with a phase $\phi_{y}$), and we are
done. The total phase in this iteration is
$\phi=\phi_{0}+\phi_{x}+\phi_{y}$. As the iterates $dX^{j}$ converge to
$dX^{\text{new}}$, the phases converge to a limit $\phi=\phi_{i}$. The
probability of a particle arriving at the given position at time $(n+1)\delta$
having been determined in the forward step, there is no need to resample
before comparing samples. Once one has the values of ${\bf X}^{\text{new}}$, a
forward step gives corrected values of ${\bf X}^{n+1}$; one can use this
interpolation process to correct estimates of ${\bf X}^{k}$ by subsequent
observations for $k=n-1,k=n-2,\dots$, as many as are useful.
## 6 Numerical results.
Before presenting examples of numerical results for the azimuth problem, we
discuss the accuracy one can expect. A single set of observations for our
problem relies on 160 samples of a $N(0,\sigma)$ variable. The maximum
likelihood estimate of $\sigma$ given these samples is a random variable with
mean $\sigma$ and standard deviation $.11\sigma$. We estimate the uncertainty
in the position of the boat by picking a set of observations, then making
multiple runs of the boat where the random components of the motion in the
direction of the constraint are frozen while the ones orthogonal to it are
sampled over and over from the suitable Gaussian density, then computing the
distances to the fixed observations, estimating the standard deviation of
these differences, and accepting the trajectory if the estimated standard
deviation is within one standard deviation of the nominal value of $s$. This
process generates a family of boat trajectories compatible with the given
observations. In Table I we display the standard deviations of the differences
between the resulting paths and the original path that produced the
observations after the number of steps indicated there (the means of these
differences are statistically indistinguishable from zero). This Table
provides an estimate of the accuracy we can expect. It is fair to assume that
these standard deviations are underestimates of the uncertainty; a variation
of a single standard deviation in $s$ is a strict constraint, and we allowed
no variation in $\sigma$.
Table I
Intrinsic uncertainty in the azimuth problem
step | $x$ component | $y$ component
---|---|---
40 | .0005 | .21
80 | .004 | .58
120 | .010 | .88
160 | .017 | .95
If one wants reliable information about the performance of the filter, it is
not sufficient to run the boat once, record observations, and then use the
filter to reconstruct the boat’s path, because the difference between the true
path and the reconstruction is a random variable which may be accidentally
atypically small or atypically large. We have therefore run a large number of
such reconstructions and computed the means and standard deviations of the
discrepancies between path and reconstruction as a function of the number of
steps and of other parameters. In Tables IIa and IIb we display the means and
standard deviations of these discrepancies (not of their mean!) in the x
and y components of the paths with 2000 runs, at the steps and numbers of
particles indicated, with no backward sampling. (Ref. [6] used 100 particles).
On the average the error is zero, and the error that can be expected in any
one run is of the order of magnitude of the unavoidable error. The standard
deviation of the discrepancy is not significantly smaller with 100 particles
than with 2; the main source of the discrepancy is the uncertainty in the
data. Most of the time a single particle (no resampling) is enough; however, a
single particle may temporarily stray into low-probability areas and create
large arguments and numerical difficulties in the various functions used in
the program. Two particles with resampling keep each other within bounds,
because if one of them strays it gets replaced by a replica of the other. The
various more sophisticated resampling strategies at the end of section 4 make
no discernible difference here, and backward sampling does not help much
either, because they too are unable to remedy the limitation of the data set.
Table IIa
Mean and standard deviation of the discrepancy between synthetic data and
their reconstruction, 2000 runs, no back step, 100 particles
n. of steps | x mean | x s.d. | y mean | y s.d.
---|---|---|---|---
40 | .0004 | .04 | .0001 | .17
80 | -.001 | .04 | -.01 | .54
120 | -.0008 | .07 | -.03 | 1.02
160 | -.002 | .18 | -.05 | 1.56
Table IIb
Mean and standard deviation of the discrepancy between synthetic data and
their reconstruction, 2000 runs, no back step, 2 particles
n. of steps | x mean | x s.d. | y mean | y s.d.
---|---|---|---|---
40 | .002 | .17 | -.0004 | .20
80 | .01 | .43 | -.0006 | .58
120 | .01 | .57 | .009 | 1.08
160 | .006 | .54 | .01 | 1.67
In Figure 1 we plot a sample boat path, its reconstruction, and the
reconstructions obtained (i) when the initial data for the reconstruction are
strongly perturbed (here, the initial data for $x,y$ were perturbed initially
by, respectively, $.1$ and $.4$), and (ii) when the value of $\sigma$ assumed
in the reconstruction is random: $\sigma=N(\sigma_{0},\epsilon\sigma_{0})$,
where $\sigma_{0}$ is the constant value used until now and $\epsilon=0.4$ but
the calculation is otherwise identical. This produces variations in $\sigma$
of the order of $40\%$; any larger variance in the perturbations produced
negative values of $\sigma$. The differences between the reconstructions and
the true path remain within the acceptable range of errors. These graphs show
that the filter has little sensitivity to perturbations (we did not calculate
statistics here because the insensitivity holds for each individual run).
Figure 1: Some boat trajectories (explained in the text)
We now estimate the parameter $\sigma$ from data. The filter needs an estimate
of $\sigma$ to function, call this estimate $\sigma_{\text{assumed}}$. If
$\sigma_{\text{assumed}}\neq\sigma$, the other assumptions used to produce the
data set (e.g. independence of the displacements and of the observations) are
also false, and all one has to do is detect the fallacy. We do it by picking a
trajectory of a particle and computing the quantity
$D=\frac{(\sum_{2}^{J}(dX^{j+1}-dX^{j}))^{2}+(\sum_{2}^{J}(dY^{j+1}-dY^{j}))^{2}}{\sum_{2}^{J}(dX^{j+1}-dX^{j})^{2}+\sum_{2}^{J}(dY^{j+1}-dY^{j})^{2}}.$
If the increments are independent then on the average $D=1$; we will try to
find the real $\sigma$ by finding a value of $\sigma_{\text{assumed}}$ for
which this happens. We chose $J=40$ (the early part of a trajectory is less
noisy than the later parts).
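A short sketch of this computation (variable names are ours; `dX`, `dY` are
assumed to hold the reconstructed displacements $dX^{1},dX^{2},\dots$ of one
particle):

```python
import numpy as np

def discriminant(dX, dY, J=40):
    """Discriminant D: squared sums over sums of squares of the successive
    displacement differences dX^{j+1}-dX^{j}, j = 2,...,J (0-based storage)."""
    ddx = np.diff(dX)[1:J]
    ddy = np.diff(dY)[1:J]
    return (ddx.sum()**2 + ddy.sum()**2) / ((ddx**2).sum() + (ddy**2).sum())
```

Averaging $D$ over many runs and sweeping $\sigma_{\text{assumed}}$ is what
produces estimates of the kind reported in Table III.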
As we already know, a single run cannot provide an accurate estimate of
$\sigma$, and accuracy in the reconstruction depends on how many runs are
used. In Table III we display some values of $D$ averaged over 200 and over
5000 runs as a function of the ratio of $\sigma_{\text{assumed}}$ to the value
of $\sigma$ used to generate the data. From the longer computation one can
find the correct value of $\sigma$ with an error of about $3\%$, while with
200 runs the uncertainty is about $10\%$.
Table III
The mean of the discriminant D as a function of
$\sigma_{\text{assumed}}/\sigma$, 30 particles
$\sigma_{\text{assumed}}/\sigma$ | 5000 runs | 200 runs
---|---|---
.5 | 1.14 $\pm$ .01 | 1.21 $\pm$ .08
.6 | 1.08 $\pm$ .01 | 1.14 $\pm$ .07
.7 | 1.05 $\pm$ .01 | 1.10 $\pm$ .07
.8 | 1.04 $\pm$ .01 | 1.14 $\pm$ .07
.9 | 1.00 $\pm$ .01 | 1.01 $\pm$ .07
1.0 | 1.00 $\pm$ .01 | .96 $\pm$ .07
1.1 | .97 $\pm$ .01 | 1.01 $\pm$ .07
1.2 | .94 $\pm$ .01 | .99 $\pm$ .07
1.3 | .93 $\pm$ .01 | 1.02 $\pm$ .07
1.4 | .90 $\pm$ .01 | .85 $\pm$ .06
1.5 | .89 $\pm$ .01 | .93 $\pm$ .07
2.0 | .86 $\pm$ .01 | .78 $\pm$ .05
## 7 Conclusions.
We have exhibited a non-Bayesian filtering method, related to recent work on
chainless sampling, designed to focus particle paths more sharply and thus
require fewer of them, at the cost of an added complexity in the evaluation of
each path. The main features of the algorithm are a representation of a new
pdf by means of a set of functions of Gaussian variables and a resampling
based on normalization factors. The construction was demonstrated on a
standard ill-conditioned test problem. Further applications will be published
elsewhere.
## 8 Acknowledgments.
We would like to thank Prof. R. Kupferman, Prof. R. Miller, and Dr. J. Weare
for asking searching questions and providing good advice. This work was
supported in part by the Director, Office of Science, Computational and
Technology Research, U.S. Department of Energy under Contract No. DE-
AC02-05CH11231, and by the National Science Foundation under grant
DMS-0705910.
## References
* [1] Doucet, A., de Freitas, N., and Gordon, N. (eds) (2001), Sequential Monte Carlo Methods in Practice, Springer, New York.
* [2] Maceachern, S., Clyde, M., and Liu, J. (1999), Can. J. Stat. 27:251–267.
* [3] Liu, J. and Sabatti, C. (2000), Biometrika 87:353–369.
* [4] Doucet, A., Godsill, S., and Andrieu, C. (2000), Stat. Comp. 10:197–208.
* [5] Arulampalam, M., Maskell, S., Gordon, N., and Clapp, T. (2002), IEEE Trans. Sig. Proc. 50:174–188.
* [6] Gilks, W. and Berzuini, C. (2001), J. Roy. Statist. Soc. B 63:127–146.
* [7] Chorin, A.J. and Krause, P. (2004), Proc. Nat. Acad. Sci. USA 101:15013–15017.
* [8] Dowd, M. (2006), Environmetrics 17:435–455.
* [9] Doucet, A. and Johansen, A., Particle Filtering and Smoothing: Fifteen Years Later, Handbook of Nonlinear Filtering (eds. D. Crisan and B. Rozovsky), Oxford University Press, to appear.
* [10] Snyder, C., Bengtsson, T., Bickel, P., and Anderson, J. (2008), Mon. Wea. Rev. 136:4629–4640.
* [11] Chorin, A.J. (2008), Comm. Appl. Math. Comp. Sc. 3:77–93.
* [12] Weare, J. (2007), Proc. Nat. Acad. Sc. USA 104:12657–12662.
* [13] Weare, J., Particle filtering with path sampling and an application to a bimodal ocean current model, in press, J. Comput. Phys.
* [14] Gordon, N., Salmond, D., and Smith A. (1993), IEEE Proceedings-F 140: 107–113.
* [15] Carpenter, J., Clifford, P., and Fearnhead, P. (1999) IEEE Proceedings-Radar Sonar and Navigation 146:2–7.
* [16] Milstein, G., Platen, E., and Schurz, H. (1998), SIAM J. Num. Anal. 35:1010–1019.
* [17] Kloeden, P. and Platen, E. (1992), Numerical Solution of Stochastic Differential Equations, Springer, Berlin.
* [18] Stuart, A., Voss, J., and Wilberg, P. (2004), Comm. Math. Sc. 4:685–697.
|
arxiv-papers
| 2009-05-13T23:54:51 |
2024-09-04T02:49:02.604962
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Alexandre J. Chorin and Xuemin Tu",
"submitter": "Xuemin Tu",
"url": "https://arxiv.org/abs/0905.2181"
}
|
0905.2248
|
# Protection against link errors and failures using network coding
Shizheng Li and Aditya Ramamoorthy The authors are with the Department of
Electrical and Computer Engineering, Iowa State University. Email: {szli,
adityar}@iastate.edu. This material appeared in part at the
International Symposium on Information Theory, Seoul, Korea, 2009.
###### Abstract
We propose a network-coding based scheme to protect multiple bidirectional
unicast connections against adversarial errors and failures in a network. The
network consists of a set of bidirectional primary path connections that carry
the uncoded traffic. The end nodes of the bidirectional connections are
connected by a set of shared protection paths that provide the redundancy
required for protection. Such protection strategies are employed in the domain
of optical networks for recovery from failures. In this work we consider the
problem of simultaneous protection against adversarial errors and failures.
Suppose that $n_{e}$ paths are corrupted by the omniscient adversary. Under
our proposed protocol, the errors can be corrected at all the end nodes with
$4n_{e}$ protection paths. More generally, if there are $n_{e}$ adversarial
errors and $n_{f}$ failures, $4n_{e}+2n_{f}$ protection paths are sufficient.
The number of protection paths only depends on the number of errors and
failures being protected against and is independent of the number of unicast
connections.
###### Index Terms:
Network coding, network error correction, adversarial error, network
protection
## I Introduction
Protection of networks against faults and errors is an important problem.
Networks are subject to various fault mechanisms, such as link failures and
adversarial attacks among others, and need to be able to function in a robust
manner even in the presence of these impairments. In order to protect networks
against these issues, additional resources, e.g., spare source-terminal paths
are usually provisioned. A good survey of issues in network protection can be
found in [1]. Recently, the technique of network coding [2] was applied to the
problem of network protection. The protection strategies for link-disjoint
connections in [3, 4, 5] perform network coding over p-Cycles [6], which are
shared by connections to be protected. The work in [7, 8] uses paths instead
of cycles to carry coded data units and proposes a simple protocol that does
not require any synchronization among network nodes, yet protecting multiple
primary path connections with shared protection paths. These schemes deal
exclusively with link failures, e.g., due to fiber cuts in optical networks,
and assume that each node knows the location of the failures at the time of
decoding. In this work we consider the more general problem of protection
against errors. An error in the network, refers to the alteration of the
transmitted data unit in some manner such that the nodes do not know the
location of the errors before decoding. If errors over a link are random,
classical error control codes [19] that protect individual links may be able
to help in recovering data at the terminals. However, such a strategy will in
general not work when we consider adversarial errors in networks. An adversary
may be limited in the number of links she can control. However for those
links, she can basically corrupt the transmission in any arbitrary manner. An
error correction code will be unable to handle a computationally unbounded
adversary who knows the associated generator matrix and the actual codewords under
transmission. This is because she can always replace the actual transmitted
codeword by another valid codeword.
In this paper we investigate the usage of network coding over protection paths
for protection against adversarial errors. Protection against link failures in
network-coded multicast connections was discussed in [9]. The problem of
network error correction in multicast has been studied to some extent. Bounds
such as Hamming bound and Singleton Bound in classical coding theory are
generalized to network multicast in [10, 11]. Several error correction coding
schemes are proposed, e.g., [12, 13, 14, 15]. However, these error correction
schemes work in the context of network-coded multicast connections.
In this work we attempt to simultaneously protect multiple unicast connections
using network coding by transmitting redundant information over protection
paths. Note that even the error-free multiple unicast problem under network
coding is not completely understood given the current state of the art [16].
Therefore we consider the multiple unicast problem under certain restrictions
on the underlying topology. In our work we consider each individual unicast to
be operating over a single primary path. Moreover, we assume that protection
paths passing through the end nodes of each unicast connection have been
provisioned (see Figure 1 for an example). The primary and protection paths
can be provisioned optimally by integer linear programming (ILP). Although the
ILP has high (potentially exponential) computational complexity, it only needs
to run once before the transmission of data and there are powerful ILP
solvers, e.g. CPLEX, to solve ILP problems. Suppose that the adversary
controls only one path. Within the considered model, there are several
possible protection options. At one extreme, each primary path can be
protected by two additional protection paths that are exclusively provisioned
for it. This is a special case of our model. At the other extreme, one can
consider provisioning protection paths that simultaneously protect all the
primary paths. There also exist a host of intermediate strategies that may be
less resource expensive. In this sense, our model captures a wide variety of
protection options. However, the model does not capture scenarios where the
uncoded unicast traffic travels over different primary paths. The model
considers wired networks only and does not capture the characteristics of
wireless networks.
Figure 1: Three primary paths $S_{i}-T_{i},i=1,\ldots,3$ being protected by a
single protection path $\mathbf{P}^{(k)}$. The single lines represent the
primary paths and the double lines represent the protection path. The
clockwise direction of the protection path is $\mathbf{S}^{(k)}$ and the
counter clockwise direction is $\mathbf{T}^{(k)}$. $\sigma(S_{2})=T_{3}$,
$\tau^{-1}(T_{3})=T_{2}$. The encoded data units on $\mathbf{S}^{(k)}$ are
labeled inside the protection path and the encoded data units on
$\mathbf{T}^{(k)}$ are labeled outside the protection path. At $T_{3}$, the
data unit
$P^{(k)}=\alpha_{1}d_{1}+\beta_{1}\hat{u}_{1}+\alpha_{2}d_{2}+\beta_{2}\hat{u}_{2}+\alpha_{1}\hat{d}_{1}+\beta_{1}u_{1}+\alpha_{3}d_{3}+\beta_{3}\hat{u}_{3}+\alpha_{2}\hat{d}_{2}+\beta_{2}u_{2}$,
if there is no error, $P^{(k)}=\alpha_{3}d_{3}+\beta_{3}u_{3}$.
Our work is a significant generalization of [7]. We assume the omniscient
adversary model [13], under which the adversary has full knowledge of all
details of the protocol (encoding/decoding algorithms, coefficients, etc.) and
has no secrets hidden from her. An adversary changes data units on several
paths, which may be primary paths or protection paths. The number of errors
equals the number of paths the adversary attacks. If multiple paths share one
link and the adversary controls that link, it is treated as multiple errors.
Our schemes enable all nodes to recover from $n_{e}$ errors, provided that
$4n_{e}$ protection paths are shared by all the connections. More generally,
if there are $n_{e}$ adversarial errors and $n_{f}$ failures, a total of
$4n_{e}+2n_{f}$ protection paths are sufficient. We emphasize that the number
of protection paths only depends on the number of errors and failures being
protected against and is independent of the number of unicast connections.
Simulation results show that if the number of primary paths is large, the
proposed protection scheme consumes less network resources compared to the 2+1
protection scheme, where 2+1 means that we use two additional paths to protect
each primary connection.
Section II introduces the network model and our encoding protocol, which is a
generalization of [7]. The error model is explained in Section III. In Section
IV, we present the decoding algorithm and conditions when a single error
happens. Generalizations to multiple errors and combinations of errors and
failures are considered in Section V and Section VI. In Section VII, we
briefly show how the optimal primary and protection paths are provisioned by
integer linear programming and the simulation shows that our proposed approach
saves network resources. Section VIII concludes the paper.
## II Network model and encoding protocol
Suppose that $2n$ nodes in the network establish $n$ bidirectional unicast
connections with the same capacity. These nodes are partitioned into two
disjoint sets ${\cal S}$ and ${\cal T}$ such that each node in ${\cal S}$
connects to one node in ${\cal T}$. The $n$ connections are labeled by numbers
$1,\ldots,n$ and the nodes participating in the $i$th connection are given
index $i$, i.e., $S_{i}$ and $T_{i}$. Each connection contains one
bidirectional primary path $S_{i}-T_{i}$. $S_{i}$ and $T_{i}$ send data units
they want to transmit onto the primary path. The data unit sent from $S_{i}$
to $T_{i}$ (from $T_{i}$ to $S_{i}$ respectively) on the primary path is
denoted by $d_{i}$ ($u_{i}$ respectively). The data unit received on the
primary path by $T_{i}$ ($S_{i}$ respectively) is denoted by $\hat{d}_{i}$
($\hat{u}_{i}$ respectively).
A protection path $\mathbf{P}$ is a bidirectional path going through all $2n$
end nodes of the $n$ connections. It has the same capacity as the primary
paths and consists of two unidirectional paths $\mathbf{S}$ and $\mathbf{T}$
in opposite directions. $M$ protection paths are used and we assume that there
are enough resources in the network so that these protection paths can always
be found and provisioned. In this paper we mainly focus on the case where all
protection paths pass through all $2n$ end nodes of the connections, see Fig.
1 for an example, and they are denoted by
$\mathbf{P}^{(1)},\ldots,\mathbf{P}^{(M)}$. The order in which the protection
paths pass through the end nodes does not matter. The more general case where
different primary path connections are protected by different protection paths
will be discussed in Section IV-F. All operations are over the finite field
$GF(q)$, $q=2^{r}$, where $r$ is the length of the data unit in bits.
Frequently used notations in this paper are summarized in Table I.
TABLE I: Frequently used notations in this paper.
Notation | Meaning
---|---
$n$ | The number of primary connections
$M$ | The number of protection paths
$S_{i},T_{i}$ | The end nodes of the $i^{th}$ primary connection
$d_{i},u_{i}$ | The data unit sent by $S_{i}$, $T_{i}$ respectively
$\hat{d}_{i},\hat{u}_{i}$ | The data unit received by $T_{i}$, $S_{i}$ respectively
$\alpha_{i}^{(k)},\beta_{i}^{(k)}$ | The encoding coefficients for the $i^{th}$ primary connection on the $k^{th}$ protection path
$n_{e},n_{f}$ | The number of errors and failures in the network
$n_{c},n_{p}$ | The number of errors on the primary paths and the protection paths respectively
$e_{d_{i}},e_{u_{i}}$ | The error values of $d_{i},u_{i}$ respectively
The system works in rounds. Time is assumed to be slotted. Each data unit is
assigned a round number. In each round a new data unit $d_{i}$ or $u_{i}$ is
transmitted by node $S_{i}$ or $T_{i}$ on its primary path. In addition, it
also transmits an appropriately encoded data unit in each direction on the
protection path. The encoding operation is executed by each node in ${\cal S}$
and ${\cal T}$, where all nodes have sufficiently large buffers. The encoding
and decoding operations only take place between data units of the same round.
When a node is transmitting and receiving data units of certain round on the
primary path, it is receiving data units of earlier rounds from the protection
paths. The nodes use the large, though bounded-size buffer to store the
transmitted and received data units for encoding and decoding. Once the
encoding and decoding for a certain round is done, the data units of that
round can be removed from the buffer. Overall, this ensures that the protocol
works even when there is no explicit time synchronization between the
transmissions.
Each connection $S_{i}-T_{i}$ has $2M$ encoding coefficients:
$\alpha_{i}^{(1)},\ldots,\alpha_{i}^{(M)},\beta_{i}^{(1)},\ldots,\beta_{i}^{(M)}$,
where $\alpha_{i}^{(k)}$ and $\beta_{i}^{(k)}$ are used for encoding on
protection path $\mathbf{P}^{(k)}$. Each protection path uses the same
protocol but different coefficients in general. The coefficients are assumed
to be known by the end nodes before the transmission. We specify the protocol
for protection path $\mathbf{P}^{(k)}$, which consists of two unidirectional
paths $\mathbf{S}^{(k)}$ and $\mathbf{T}^{(k)}$. We first define the following
notations.
* •
$\sigma(S_{i})/\sigma(T_{i})$: the next node downstream from $S_{i}$
(respectively $T_{i}$) on $\mathbf{S}^{(k)}$.
$\sigma^{-1}(S_{i})/\sigma^{-1}(T_{i})$: the next node upstream from $S_{i}$
(respectively $T_{i}$) on $\mathbf{S}^{(k)}$ (see example in Fig. 1).
* •
$\tau(S_{i})/\tau(T_{i})$: the next node downstream from $S_{i}$ (respectively
$T_{i}$) on $\mathbf{T}^{(k)}$. $\tau^{-1}(S_{i})/\tau^{-1}(T_{i})$: the next
node upstream from $S_{i}$ (respectively $T_{i}$) on $\mathbf{T}^{(k)}$ (see
example in Fig. 1).
On each unidirectional protection path, each node transmits to its downstream
node the sum of the data unit received from its upstream node and a linear
combination of the data units it has. Consider the $k^{th}$ protection path
$\mathbf{P}^{(k)}$ and denote the data unit transmitted on link
$e\in\mathbf{S}^{(k)}$ ($e\in\mathbf{T}^{(k)}$) by $\mathbf{S}_{e}$
($\mathbf{T}_{e}$). Node $S_{i}$ knows $d_{i}$, $\hat{u}_{i}$, and $T_{i}$
knows $u_{i}$, $\hat{d}_{i}$. The encoding operations are as follows.
$\mathbf{S}_{S_{i}\rightarrow\sigma(S_{i})}=\mathbf{S}_{\sigma^{-1}(S_{i})\rightarrow S_{i}}+\alpha_{i}^{(k)}d_{i}+\beta_{i}^{(k)}\hat{u}_{i},$
$\mathbf{T}_{S_{i}\rightarrow\tau(S_{i})}=\mathbf{T}_{\tau^{-1}(S_{i})\rightarrow S_{i}}+\alpha_{i}^{(k)}d_{i}+\beta_{i}^{(k)}\hat{u}_{i},$
$\mathbf{S}_{T_{i}\rightarrow\sigma(T_{i})}=\mathbf{S}_{\sigma^{-1}(T_{i})\rightarrow T_{i}}+\alpha_{i}^{(k)}\hat{d}_{i}+\beta_{i}^{(k)}u_{i},$ and
$\mathbf{T}_{T_{i}\rightarrow\tau(T_{i})}=\mathbf{T}_{\tau^{-1}(T_{i})\rightarrow T_{i}}+\alpha_{i}^{(k)}\hat{d}_{i}+\beta_{i}^{(k)}u_{i}.$
We focus our discussion on node $T_{i}$. Once node $T_{i}$ receives data units
over both $\mathbf{S}^{(k)}$ and $\mathbf{T}^{(k)}$ it adds these data units.
Denote the sum by $P^{(k)}$ (the value of $P^{(k)}$ differs from end node to
end node; since we focus on node $T_{i}$, we write $P^{(k)}$ instead of
$P^{(k)}_{T_{i}}$ to keep the notation simple). $T_{i}$ gets the two values
$\mathbf{S}_{\sigma^{-1}(T_{i})\rightarrow T_{i}}$ and
$\mathbf{T}_{\tau^{-1}(T_{i})\rightarrow T_{i}}$ from $\mathbf{P}^{(k)}$, and
$P^{(k)}$ equals
$\mathbf{S}_{\sigma^{-1}(T_{i})\rightarrow T_{i}}+\mathbf{T}_{\tau^{-1}(T_{i})\rightarrow T_{i}}=\sum_{l:S_{l}\in{\cal S}}\alpha_{l}^{(k)}d_{l}+\sum_{l:T_{l}\in{\cal T}\backslash\\{T_{i}\\}}\beta_{l}^{(k)}u_{l}+\sum_{l:S_{l}\in{\cal S}}\beta_{l}^{(k)}\hat{u}_{l}+\sum_{l:T_{l}\in{\cal T}\backslash\\{T_{i}\\}}\alpha_{l}^{(k)}\hat{d}_{l}.$ (1)
In the absence of any errors, $d_{l}=\hat{d}_{l}$ and $u_{l}=\hat{u}_{l}$ for
all $l$, so most terms cancel in pairs because the addition is performed over
an extension field of the binary field, and
$P^{(k)}=\alpha_{i}^{(k)}d_{i}+\beta_{i}^{(k)}\hat{u}_{i}$. Similar
expressions can be derived for the other end nodes. See Fig. 1 for an example
of the encoding protocol.
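To see the cancellation in (1) concretely, the following sketch (ours, not the authors' implementation) computes the sum $P^{(k)}$ seen by $T_{i}$ for $r=8$, i.e., over $GF(2^{8})$; the modulus 0x11B and all variable names are assumptions made for this example.

```python
import random

POLY = 0x11B  # one common GF(2^8) modulus, x^8+x^4+x^3+x+1 (an assumption)

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (POLY if a & 0x80 else 0)
        b >>= 1
    return r

def protection_sum_at_Ti(i, n, alpha, beta, d, u, d_hat, u_hat):
    """Sum of the two units T_i receives on S^(k) and T^(k), cf. Eq. (1).

    Across the two unidirectional paths, every end node except T_i itself
    contributes exactly once, so the total equals the right-hand side of (1)."""
    total = 0
    for l in range(n):
        # contribution of S_l (all sources are included)
        total ^= gf_mul(alpha[l], d[l]) ^ gf_mul(beta[l], u_hat[l])
        # contribution of T_l, except T_i itself
        if l != i:
            total ^= gf_mul(alpha[l], d_hat[l]) ^ gf_mul(beta[l], u[l])
    return total

# Error-free check: received units equal transmitted ones, so only the
# alpha_i*d_i + beta_i*u_i term of connection i survives the cancellation.
random.seed(0)
n, i = 4, 2
alpha = [random.randrange(1, 256) for _ in range(n)]
beta = [random.randrange(1, 256) for _ in range(n)]
d = [random.randrange(256) for _ in range(n)]
u = [random.randrange(256) for _ in range(n)]
P_k = protection_sum_at_Ti(i, n, alpha, beta, d, u, d_hat=d, u_hat=u)
assert P_k == gf_mul(alpha[i], d[i]) ^ gf_mul(beta[i], u[i])
```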
## III Error Model
If the adversary changes data units on one (primary or protection) path, an
error happens. If the adversary controls a link through which multiple paths
pass, or the adversary controls several links, multiple errors occur. We
assume that the adversary knows the communication protocols described above,
including the encoding/decoding function and encoding coefficients. There are
no secrets hidden from her. If a primary or protection path is under the
control of an adversary, she can arbitrarily change the data units in each
direction on that path. If $d_{i}\neq\hat{d}_{i}$ or $u_{i}\neq\hat{u}_{i}$
(or both), we say that there is an error on primary path $S_{i}-T_{i}$ with
error values $e_{d_{i}}=d_{i}+\hat{d}_{i}$ and $e_{u_{i}}=u_{i}+\hat{u}_{i}$.
As for protection path errors, although an error is bidirectional, we shall
see that each node sees only one error due to the nature of the encoding
protocol. In fact, even multiple errors on the same protection path can be
shown to have only the aggregate effect of a single error at each node. This
is because, from one protection path, only the sum ($P^{(k)}$) of the data
units from the two directions is used in decoding at a node. If this sum is
changed by several errors, the change can be modeled as one variable
$e_{p_{k}}$ at the node. However, different nodes will in general see
different values of $e_{p_{k}}$. If there is a primary path failure (as
opposed to an error) on $S_{i}-T_{i}$, we have $\hat{d}_{i}=\hat{u}_{i}=0$;
i.e., failures are not adversarial. If a
protection path fails, it becomes useless and the end nodes ignore the data
units on that path. All nodes know the locations of failures but do not know
the locations of errors.
When there are errors in the network, the error terms will not cancel out in
(1) and $T_{i}$ obtains
$P^{(k)}=\alpha_{i}^{(k)}d_{i}+\beta_{i}^{(k)}(u_{i}+e_{u_{i}})+\sum_{l\in I_{\backslash i}}(\alpha_{l}^{(k)}e_{d_{l}}+\beta_{l}^{(k)}e_{u_{l}})+e_{p_{k}}$
on protection path $\mathbf{P}^{(k)}$, where $I_{\backslash i}=\\{1,\ldots,n\\}\backslash\\{i\\}$
is the index set excluding $i$, and $e_{p_{k}}$ is the error on protection
path $\mathbf{P}^{(k)}$ seen by $T_{i}$. Note that since $T_{i}$ knows
$u_{i}$, we can subtract $\beta_{i}^{(k)}u_{i}$ from this equation. Together
with the data unit $P_{m}$ from the primary path, $T_{i}$ has the following
data units.
$P_{m}=\hat{d}_{i}=d_{i}+e_{d_{i}},$ (2)
$P^{(k)^{\prime}}=P^{(k)}-\beta_{i}^{(k)}u_{i}=\alpha_{i}^{(k)}d_{i}+\beta_{i}^{(k)}e_{u_{i}}+\sum_{l\in I_{\backslash i}}(\alpha_{l}^{(k)}e_{d_{l}}+\beta_{l}^{(k)}e_{u_{l}})+e_{p_{k}},\quad k=1,\ldots,M.$ (3)
We multiply (2) by $\alpha_{i}^{(k)}$ and add to the $k^{th}$ equation in (3)
to obtain
$\sum_{l=1}^{n}(\alpha_{l}^{(k)}e_{d_{l}}+\beta_{l}^{(k)}e_{u_{l}})+e_{p_{k}}=\alpha_{i}^{(k)}P_{m}+P^{(k)^{\prime}},k=1,\ldots,M.$
(4)
This can be represented in matrix form as
$\left[\begin{array}[]{cccccccccccc}\alpha_{1}^{(1)}&\beta_{1}^{(1)}&\cdots&\alpha_{n}^{(1)}&\beta_{n}^{(1)}&1&0&\cdots&0\\\
\alpha_{1}^{(2)}&\beta_{1}^{(2)}&\cdots&\alpha_{n}^{(2)}&\beta_{n}^{(2)}&0&1&\cdots&0\\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\\
\alpha_{1}^{(M)}&\beta_{1}^{(M)}&\cdots&\alpha_{n}^{(M)}&\beta_{n}^{(M)}&0&0&\cdots&1\end{array}\right]E=P_{syn},$
(5)
where the length-$(2n+M)$ vector
$E=[e_{d_{1}},e_{u_{1}},\ldots,e_{d_{n}},e_{u_{n}},e_{p_{1}},\ldots,e_{p_{M}}]^{T}$
and the length-$M$ vector
$P_{syn}=[\alpha_{i}^{(1)}P_{m}+P^{(1)^{\prime}},\alpha_{i}^{(2)}P_{m}+P^{(2)^{\prime}},\ldots,\alpha_{i}^{(M)}P_{m}+P^{(M)^{\prime}}]^{T}$.
Analogous to classical coding theory, we call $P_{syn}$ the syndrome available
at the decoder. Denote the $M\times(2n+M)$ coefficient matrix of (5) as
$H_{ext}$, and denote the first $2n$ columns of $H_{ext}$ as a matrix
$H=[\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{2n}]$, where
$\mathbf{v}_{j}$ is the $j^{th}$ column of $H$. Then
$\mathbf{v}_{2i-1},\mathbf{v}_{2i}$ are the columns consisting of encoding
coefficients $\alpha_{i}$'s and $\beta_{i}$'s for the connection
$S_{i}-T_{i}$. The last $M$ columns of $H_{ext}$ form an identity matrix
$I_{M\times M}$ and can be denoted column by column as
$[\mathbf{v}^{p}_{1},\ldots,\mathbf{v}^{p}_{M}]$. Note that $T_{i}$ knows $H$
and $P_{syn}$ and shall attempt to decode $d_{i}$ even in the presence of the
errors. Node $S_{i}$ gets very similar equations to those at $T_{i}$. Thus we
will focus our discussion on $T_{i}$. Each end node uses the same decoding
algorithm and works individually without cooperation and without
synchronization.
## IV Recovery from single error
In this section, we focus on the case when there is only one error in the
network. We first present the decoding algorithm and then prove its
correctness under appropriate conditions.
### IV-A Decoding algorithm at node $T_{i}$ ($S_{i}$ operates similarly)
1. 1.
Attempt to solve the following system of equations
$[\mathbf{v}_{2i-1}\ \mathbf{v}_{2i}]\left[\begin{array}[]{c}e_{d_{i}}\\\ e_{u_{i}}\end{array}\right]=P_{syn}$ (6)
2. 2.
If (6) has a solution $(e_{d_{i}},e_{u_{i}})$, compute
$d_{i}=P_{m}+e_{d_{i}}$; otherwise, set $d_{i}=P_{m}$.
We show below that this algorithm works when the error happens on a primary
path or on one of the protection paths.
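A hedged sketch (ours) of steps 1)–2) over $GF(2^{8})$: since $[\mathbf{v}_{2i-1},\mathbf{v}_{2i}]$ has rank two under the conditions below, we can solve (6) by Cramer's rule on a pair of rows with a nonzero $2\times 2$ determinant and then check the remaining rows for consistency. The field modulus and the toy values are assumptions.

```python
POLY = 0x11B  # GF(2^8) modulus assumed for this illustration

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (POLY if a & 0x80 else 0)
        b >>= 1
    return r

def gf_inv(a):
    r, e = 1, 254          # a^254 = a^(-1) in GF(2^8)
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def try_solve_2cols(col1, col2, syn):
    """Return (e_d, e_u) if col1*e_d + col2*e_u = syn has a solution, else None."""
    M = len(syn)
    for r1 in range(M):
        for r2 in range(r1 + 1, M):
            det = gf_mul(col1[r1], col2[r2]) ^ gf_mul(col1[r2], col2[r1])
            if det == 0:
                continue
            det_inv = gf_inv(det)
            # Cramer's rule (subtraction equals addition in characteristic 2)
            e_d = gf_mul(det_inv, gf_mul(syn[r1], col2[r2]) ^ gf_mul(syn[r2], col2[r1]))
            e_u = gf_mul(det_inv, gf_mul(col1[r1], syn[r2]) ^ gf_mul(col1[r2], syn[r1]))
            ok = all(gf_mul(col1[r], e_d) ^ gf_mul(col2[r], e_u) == syn[r]
                     for r in range(M))
            return (e_d, e_u) if ok else None
    return None

def decode_d_i(P_m, col1, col2, syn):
    """Steps 1)-2): correct d_i if (6) is solvable, else keep the primary unit."""
    sol = try_solve_2cols(col1, col2, syn)
    return P_m ^ sol[0] if sol else P_m

# Toy check with M = 4, e_d = 5, e_u = 9 and made-up columns.
col1, col2 = [1, 2, 0, 0], [0, 0, 1, 2]
syn = [gf_mul(c1, 5) ^ gf_mul(c2, 9) for c1, c2 in zip(col1, col2)]
assert try_solve_2cols(col1, col2, syn) == (5, 9)
```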
### IV-B Condition for one primary path error correction
In this subsection, we consider primary path error only. Define an error
pattern to be the two columns in $H$ corresponding to the erroneous primary
path. If the error happens on $S_{i}-T_{i}$, the error pattern is
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i}\\}$. An error value vector corresponding
to an error pattern is obtained by setting the error values corresponding to
the other $n-1$ primary paths to zero. The error value vector corresponding to
error pattern $\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i}\\}$ is the length-$2n$
vector $E_{i}=[0,\ldots,e_{d_{i}},e_{u_{i}},\ldots,0]^{T}$. Assume that the
$e_{d_{i}}$'s and $e_{u_{i}}$'s are not all zero; the case when all of them
are zero is trivial because it implies that no error happens.
###### Theorem 1
Suppose there is at most one error on a primary path. The decoding algorithm
outputs the correct data unit at every node if and only if the vectors in the
set
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1},\mathbf{v}_{2j}\\}$
(in fact, this set can be viewed as the error pattern when $S_{i}-T_{i}$ and
$S_{j}-T_{j}$ are both in error) are linearly independent for all
$i,j=1,\ldots,n$, $i\neq j$.
_Proof:_ First assume that the vectors in the sets
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1},\mathbf{v}_{2j}\\}$
are linearly independent. Let $E_{a}$ and $E_{b}$ be error value vectors
corresponding to errors happening on different primary paths $S_{a}-T_{a}$ and
$S_{b}-T_{b}$ respectively. Suppose there exist $E_{a}$ and $E_{b}$ such that
$HE_{a}=HE_{b}$, i.e., $H(E_{a}+E_{b})=0$. Note that the vector
$(E_{a}+E_{b})$ has at most four error values
$[e_{d_{a}},e_{u_{a}},e_{d_{b}},e_{u_{b}}]$ which are not all zero and such
that
$[\begin{array}[]{ccccc}\mathbf{v}_{2a-1},\mathbf{v}_{2a},\mathbf{v}_{2b-1},\mathbf{v}_{2b}\end{array}][e_{d_{a}},e_{u_{a}},e_{d_{b}},e_{u_{b}}]^{T}=\mathbf{0}$.
This implies
$\\{\mathbf{v}_{2a-1},\mathbf{v}_{2a},\mathbf{v}_{2b-1},\mathbf{v}_{2b}\\}$
are linearly dependent, which is a contradiction. Therefore, under our
condition that
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1},\mathbf{v}_{2j}\\}$
for all $i,j=1,\ldots,n,i\neq j$ are linearly independent, there does not
exist $E_{a},E_{b}$ such that $HE_{a}=HE_{b}$. This means that if we try to
solve the system of linear equations according to every possible error value
vector $E_{1},\ldots,E_{n}$, each system either has no solution or its
solution is the actual error in the network. The node $T_{i}$ is only
interested in $d_{i}$; in our decoding algorithm, it tries to solve the
equations (6) according to the error value vector $E_{i}$. If (6) has a
solution, the error happens on $S_{i}-T_{i}$. The matrix
$[\mathbf{v}_{2i-1},\mathbf{v}_{2i}]$ has rank two, so (6) has a unique
solution for $e_{d_{i}}$, and $d_{i}=P_{m}+e_{d_{i}}$ gives the decoded
$d_{i}$. If (6) does not have a solution, the error is not on $S_{i}-T_{i}$,
and $T_{i}$ simply picks up $d_{i}=P_{m}$ from the primary path $S_{i}-T_{i}$.
Conversely, suppose that a vector set
$\\{\mathbf{v}_{2i_{1}-1},\mathbf{v}_{2i_{1}},\mathbf{v}_{2j_{1}-1},\mathbf{v}_{2j_{1}}\\}$
is linearly dependent. There exist $E_{i_{1}}$ and $E_{j_{1}}$ such that
$HE_{i_{1}}=HE_{j_{1}}$. Both equations $HE_{i_{1}}=P_{syn}$ and
$HE_{j_{1}}=P_{syn}$ have a solution. Suppose the error in fact happens on
$S_{j_{1}}-T_{j_{1}}$; the decoder at $T_{i_{1}}$ can then also find a
solution to $HE_{i_{1}}=P_{syn}$ and use that solution to compute $d_{i_{1}}$.
This leads to a decoding error. $\blacksquare$
If there is no error in the network, $P_{syn}=0$ and solving (6) gives
$e_{d_{i}}=e_{u_{i}}=0$. In order to make
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1},\mathbf{v}_{2j}\\}$
independent, we need the length of the vectors to be at least four, i.e.,
$M\geq 4$. In fact, we shall see that several coefficient assignment
strategies ensure that four protection paths are sufficient to make the
condition hold for all $i,j=1,\ldots,n$, $i\neq j$. The condition in Theorem 1
can be stated as requiring that all $M\times 4$ (here $4\times 4$, since
$M=4$) matrices of the form
$[\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1},\mathbf{v}_{2j}],\quad i,j=1,\ldots,n,\ i<j$ (7)
have full rank.
### IV-C Coefficient assignment methods
We shall introduce several ways to assign encoding coefficients, so that (7)
has full rank. Later we will see these schemes also work when protection path
error is possible.
* $(1)$
A simple scheme of coefficient assignment and implementation. Choose $n$ non-
zero distinct elements $\gamma_{1},\ldots,\gamma_{n}$ from $GF(q)$. For all
$i=1,\ldots,n$, $\alpha_{i}^{(1)}=1$, $\alpha_{i}^{(2)}=\gamma_{i}$,
$\beta_{i}^{(3)}=1$, $\beta_{i}^{(4)}=\gamma_{i}$ and all other coefficients
are zero. It can be shown by performing Gaussian elimination that the matrix
(7) has full rank as long as $\gamma$'s are distinct. The minimum field size
needed is $q>n$.
Consider decoding at node $T_{i}$. Table II summarizes the data units
$P_{m},P_{syn}$ that $T_{i}$ gets from the primary path and the protection
paths under different cases, where $P_{syn}^{(k)}$ is the $k^{th}$ component
of $P_{syn}$. The decoding is done as follows (see the sketch at the end of
this subsection for a worked illustration). If $P_{syn}^{(1)}$ and
$P_{syn}^{(2)}$ are both zero, then $e_{d_{l}}=0$ for all $l$, and $T_{i}$
simply picks $d_{i}=P_{m}$. If $P_{syn}^{(1)}$ and $P_{syn}^{(2)}$ are both
non-zero, $T_{i}$ computes $S=P_{syn}^{(2)}\times(P_{syn}^{(1)})^{-1}$. If
$S=\gamma_{i}$, the error happens on $S_{i}-T_{i}$ and the error value is
$e_{d_{i}}=P_{syn}^{(1)}$, so $d_{i}=P_{m}+e_{d_{i}}$. If $S=\gamma_{x}$ for
some $x\neq i$, the error happens on $S_{x}-T_{x}$ and $T_{i}$ picks up
$d_{i}=P_{m}$.
Note that we only used $P_{m},P_{syn}^{(1)},P_{syn}^{(2)}$ to decode $d_{i}$
at $T_{i}$. However, we cannot remove paths
$\mathbf{P}^{(3)},\mathbf{P}^{(4)}$ because at $S_{i}$ we should use
$P_{m},P_{syn}^{(3)},P_{syn}^{(4)}$ to decode.
TABLE II: Data obtained by $T_{i}$ under the simple coefficient assignment.
 | No error | Error on $S_{i}-T_{i}$ | Error on $S_{x}-T_{x},i\neq x$
---|---|---|---
$P_{m}$ | $d_{i}$ | $d_{i}+e_{d_{i}}$ | $d_{i}$
$P_{syn}^{(1)}$ | $0$ | $e_{d_{i}}$ | $e_{d_{x}}$
$P_{syn}^{(2)}$ | $0$ | $\gamma_{i}e_{d_{i}}$ | $\gamma_{x}e_{d_{x}}$
$P_{syn}^{(3)}$ | 0 | $e_{u_{i}}$ | $e_{u_{x}}$
$P_{syn}^{(4)}$ | 0 | $\gamma_{i}e_{u_{i}}$ | $\gamma_{x}e_{u_{x}}$
* $(2)$
Vandermonde matrix. The second way is to choose $2n$ distinct non-zero
elements
$\gamma_{\alpha_{1}},\gamma_{\beta_{1}},\ldots,\gamma_{\alpha_{n}},\gamma_{\beta_{n}}$
from $GF(q)$ (which requires $q>2n$) and let the encoding coefficients be
$\alpha_{i}^{(k)}=\gamma_{\alpha_{i}}^{k-1},\beta_{i}^{(k)}=\gamma_{\beta_{i}}^{k-1}$.
The matrix in equation (7) then becomes a Vandermonde matrix and has full rank.
* $(3)$
Random choice. Besides the structured matrices above, choosing coefficients at
random from a large field also works with high probability due to the
following claim.
###### Claim 1
When all coefficients are randomly, independently and uniformly chosen from
$GF(q)$, for given $i$ and $j$, the probability that
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1},\mathbf{v}_{2j}\\}$
are linearly independent is $p_{1}=(1-1/q^{3})(1-1/q^{2})(1-1/q)$.
_Proof:_ Suppose we have chosen $\mathbf{v}_{2i-1}$, the probability that
$\mathbf{v}_{2i}$ is not in the span of $\mathbf{v}_{2i-1}$ is $(1-q/q^{4})$.
The probability that $\mathbf{v}_{2j-1}$ is not in the span of
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i}\\}$ is $(1-q^{2}/q^{4})$. The
probability that $\mathbf{v}_{2j}$ is not in the span of
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1}\\}$ is
$(1-q^{3}/q^{4})$. Since the coefficients are chosen independently, the
probability that four vectors are linearly independent is the product $p_{1}$,
which approaches 1 when $q$ is large. $\blacksquare$
In (7) we require ${n\choose 2}$ matrices to have full rank. By the union
bound, the probability that the linear independence condition in Theorem 1
holds is at least $1-(1-p_{1}){n\choose 2}$, which is close to 1 when $q$ is
large. In practice, before any transmission, we could generate the
coefficients randomly until they satisfy the condition in Theorem 1, then
transmit those coefficients to all the end nodes in the network. During the
actual transmission of the data units, the encoding coefficients do not change.
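The following sketch (ours, with $GF(2^{8})$ standing in for $GF(q)$, $q>n$) illustrates the simple coefficient assignment and the ratio test $S=P_{syn}^{(2)}\times(P_{syn}^{(1)})^{-1}$ used at $T_{i}$; the field modulus and the toy values are assumptions made for the example.

```python
# A hedged sketch (ours, not the authors') of decoding at T_i under the
# simple coefficient assignment of Section IV-C(1), over GF(2^8).
POLY = 0x11B  # assumed GF(2^8) modulus

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (POLY if a & 0x80 else 0)
        b >>= 1
    return r

def gf_inv(a):
    r, e = 1, 254          # a^254 = a^(-1) in GF(2^8)
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def decode_Ti(i, gammas, P_m, P_syn):
    """Simple-scheme decoding of d_i from P_m and the four syndrome components."""
    if P_syn[0] == 0 and P_syn[1] == 0:
        return P_m                      # no error on any d_l
    S = gf_mul(P_syn[1], gf_inv(P_syn[0]))
    if S == gammas[i]:                  # error on S_i - T_i
        return P_m ^ P_syn[0]           # d_i = P_m + e_{d_i}
    return P_m                          # error on some other connection

# Toy check: n = 3 connections, error e_d on connection x, decoded at T_i.
gammas = [2, 3, 4]                      # distinct non-zero field elements
d = [10, 20, 30]
i, x, e_d = 0, 1, 77
# Syndromes under the simple scheme (cf. Table II): (e_dx, gamma_x*e_dx, 0, 0)
P_syn = [e_d, gf_mul(gammas[x], e_d), 0, 0]
P_m = d[i] ^ (e_d if x == i else 0)
assert decode_Ti(i, gammas, P_m, P_syn) == d[i]
```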
### IV-D Taking protection path error into account
In this subsection, we take protection path errors into account. The error
(assume one error in this section) can happen either on one primary path or
one protection path. Besides $n$ error value vectors $E_{1},\ldots,E_{n}$, we
have $M$ more error value vectors for the protection path error:
$[\mathbf{0}|e_{p_{1}},0,\ldots,0]^{T},\ldots,[\mathbf{0}|0,0,\ldots,e_{p_{M}}]^{T}$,
where $\mathbf{0}$ denote an all-zero vector of length $2n$. Denote them by
$E_{p_{1}},\ldots,E_{p_{M}}$. Using a similar idea to Theorem 1, we have the
following:
###### Theorem 2
If there is one error on one primary path or protection path, the decoding
algorithm works for every node if and only if vectors in the sets
$\displaystyle\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}_{2j-1},\mathbf{v}_{2j}\\},i,j=1,\ldots,n,i\neq
j$ (8)
$\displaystyle\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}^{p}_{l}\\},i=1,\ldots,n,l=1,\ldots,M$
(9)
are linearly independent. Note that $\mathbf{v}^{p}_{l}$ is the $l^{th}$
column in $I_{M\times M}$ in (5).
In fact, $M=4$ suffices and the three coefficient assignment methods we
described in the previous subsection work in this case. The simple coefficient
assignment strategy in Section IV-C(1) makes the vector sets (8) and (9)
linearly independent. A protection path error makes exactly one component of
$P_{syn}$ nonzero, so if $T_{i}$ detects that $P_{syn}$ has only one nonzero
entry, it can just pick up the data unit from the primary path, since the only
error is on the protection path.
In order to see that Vandermonde matrix also works, we shall show that the
vector sets (9) are linearly independent. Suppose that they are linearly
dependent. Since $\mathbf{v}_{2i-1},\mathbf{v}_{2i}$ are linearly independent,
there exist $a$ and $b$ such that (take $\mathbf{v}^{p}_{1}$ for example):
$a\mathbf{v}_{2i-1}+b\mathbf{v}_{2i}=\mathbf{v}^{p}_{1}$. This means
$a[\gamma_{\alpha_{i}}\gamma_{\alpha_{i}}^{2}]^{T}+b[\gamma_{\beta_{i}}\gamma_{\beta_{i}}^{2}]^{T}=\mathbf{0}$.
However, since
$\det\left[\begin{array}[]{cc}\gamma_{\alpha_{i}}&\gamma_{\beta_{i}}\\\ \gamma_{\alpha_{i}}^{2}&\gamma_{\beta_{i}}^{2}\end{array}\right]=\gamma_{\alpha_{i}}\gamma_{\beta_{i}}(\gamma_{\beta_{i}}-\gamma_{\alpha_{i}})\neq 0,$
this forces $a=b=0$ and hence $\mathbf{v}^{p}_{1}=\mathbf{0}$, a
contradiction. Therefore, $\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}^{p}_{1}\\}$ are
linearly independent. A similar argument holds for $\mathbf{v}^{p}_{l}$ when
$l\neq 1$.
When the coefficients are randomly chosen from $GF(q)$, for given $i$ and $l$,
the probability that
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i},\mathbf{v}^{p}_{l}\\}$ are linearly
independent is $p_{2}=(1-1/q^{3})(1-1/q^{2})$. Considering all vector sets in
Theorem 2, the probability of successful decoding at all nodes is at least
$1-(1-p_{1}){n\choose 2}-(1-p_{2})nM$, which approaches 1 when $q$ is large.
### IV-E Remark
We can compare our results with classical results in coding theory. In
classical coding theory, in the presence of two adversarial errors, we need a
code with minimum distance at least five for correct decoding. This means that
to transmit one symbol of information, we need to transmit a codeword with at
least five symbols. In our problem, each connection has a total of five paths
(one primary and four protection). A single error on a bidirectional primary
path induces two errors, one in each direction. Therefore in an approximate
sense we are using almost the optimal number of protection paths. However, a
proof of this statement seems to be hard to arrive at. It is important to note
that the protection paths are shared so the cost of protection per primary
path connection is small.
### IV-F The case when the primary paths are protected by different
protection paths
If the primary paths are protected by different protection paths, the models
are similar. Specifically, consider node $T_{i}$ protected by protection path
$\mathbf{P}_{k}$. If we denote the set of primary paths protected by
protection path $\mathbf{P}_{k}$ by
$N(\mathbf{P}_{k})\subseteq\\{1,\ldots,n\\}$, the equation obtained from
protection path $\mathbf{P}_{k}$ by $T_{i}$ is similar to (4): $\sum_{l\in
N(\mathbf{P}_{k})}(\alpha_{l}^{(k)}e_{d_{l}}+\beta_{l}^{(k)}e_{u_{l}})+e_{p_{k}}=\alpha_{i}^{(k)}P_{m}+P^{(k)^{\prime}}.$
Now, $T_{i}$ obtains $M_{i}$ equations, where $M_{i}$ is the number of
protection paths protecting connection $S_{i}-T_{i}$. The system of equations
it gets is similar to (5), but the $M_{i}\times 2n$ coefficient matrix $H$ may
contain zeros induced by the network topology. If connection $S_{l}-T_{l}$ is
not protected by $\mathbf{P}_{k}$, the corresponding two terms in the $k$th
row are zero. The identity matrix in $H_{ext}$ is $I_{M_{i}\times M_{i}}$. The
model is similar to the case when all connections are protected by the same
protection paths, and the decoding algorithms and the conditions in Theorems 1
and 2 still apply.
The difference comes from the coefficient assignment. $H$ may contain some
zeros depending on the topology. In order to make (8) and (9) linearly
independent, we can use the method of matrix completion [17]. We view the
encoding coefficients in $H$ as indeterminates to be decided. The matrices we
require to have full rank form a collection ${\cal C}_{H}$ of submatrices of
$H_{ext}$, where ${\cal C}_{H}$ depends on the network topology. Each matrix
in ${\cal C}_{H}$ consists of some indeterminates, possibly some zeros due to
the topological constraints, and ones coming from the last $M_{i}$ columns of
$H_{ext}$. The problem of choosing encoding coefficients can be solved by
matrix completion. A simultaneous max-rank completion of ${\cal C}_{H}$ is an
assignment of values from $GF(q)$ to the indeterminates that preserves the
rank of all matrices in ${\cal C}_{H}$; after completion, each matrix has the
maximum possible rank. Note that if $H$ contains too many zeros, it may not be
possible to make the matrices attain the required rank when $M_{i}=4$; thus,
$M_{i}=4$ is a necessary but not in general sufficient condition for
successful recovery. It is known that choosing the indeterminates at random
from a sufficiently large field solves the matrix completion problem with high
probability [18]. Hence, we can choose the encoding coefficients randomly from
a large field, as sketched below. It is clear therefore that the general case
can be treated conceptually in a similar manner to what we discussed earlier.
Thus, we shall mainly focus on the case when the protection paths protect all
the primary paths.
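As a rough illustration of the random-completion approach (ours; over a prime field $GF(p)$ rather than $GF(2^{r})$, purely to keep the arithmetic short), the sketch below fills the indeterminate positions of a zero-patterned matrix with random field elements and retries until all required submatrices have full rank.

```python
# A hedged sketch (ours) of simultaneous max-rank completion by random choice,
# over a prime field GF(p) for simplicity (the paper works over GF(2^r)).
import random

P = 257  # a prime field size; an assumption for this illustration

def rank_mod_p(rows):
    """Rank of a matrix over GF(p) by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c] % P), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][c], P - 2, P)           # Fermat inverse
        rows[rank] = [(inv * x) % P for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][c] % P:
                f = rows[r][c]
                rows[r] = [(x - f * y) % P for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def random_completion(mask, submatrices, tries=100):
    """mask[r][c] is None for an indeterminate, otherwise a fixed value (0 or 1).
    submatrices is a list of (row_indices, col_indices) that must be full rank."""
    for _ in range(tries):
        H = [[random.randrange(1, P) if v is None else v for v in row] for row in mask]
        ok = all(rank_mod_p([[H[r][c] for c in cols] for r in rows])
                 == min(len(rows), len(cols))
                 for rows, cols in submatrices)
        if ok:
            return H
    return None
```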
## V Recovery from multiple errors
Our analysis can be generalized to multiple errors on primary and protection
paths. Assume that $n_{c}$ errors happen on primary paths and
$n_{p}=n_{e}-n_{c}$ errors happen on protection paths. As described in Section
III, a given primary path error corresponds to two specific columns in
$H_{ext}$ while a protection path error corresponds to one specific column in
$H_{ext}$. Recall that we view $H_{ext}$ as a set of column vectors :
$\\{\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{2n-1},\mathbf{v}_{2n},\mathbf{v}^{p}_{1},\mathbf{v}^{p}_{2},\ldots,\mathbf{v}^{p}_{M}\\}$.
An error pattern is specified by the subset of columns of $H_{ext}$
corresponding to the paths in error.
###### Definition 1
A subset of columns of $H_{ext}$ denoted as $A(m_{1},m_{2})$ is an error
pattern with $m_{1}$ errors on primary paths
$\\{c_{1},\ldots,c_{m_{1}}\\}\subseteq\\{1,\ldots,n\\}$ and $m_{2}$ errors on
protection paths $\\{p_{1},\ldots,p_{m_{2}}\\}\subseteq\\{1,\ldots,M\\}$ if it
has the following form: $A(m_{1},m_{2})=A_{c}(m_{1})\cup A_{p}(m_{2})$, where
$A_{c}(m_{1})=\\{\mathbf{v}_{2c_{1}-1},\mathbf{v}_{2c_{1}},~{}\ldots~{},\mathbf{v}_{2c_{m_{1}}-1},\mathbf{v}_{2c_{m_{1}}}\\}$,
$~{}c_{i}\in\\{1,\ldots,n\\},\forall i=1,\ldots,m_{1}$ and
$A_{p}(m_{2})=\\{\mathbf{v}^{p}_{p_{1}},\ldots,\mathbf{v}^{p}_{p_{m_{2}}}\\},p_{i}\in\\{1,\ldots,M\\},\forall
i=1,\ldots,m_{2}$.
Note that $|A(m_{1},m_{2})|=2m_{1}+m_{2}$ and the set of columns in $H_{ext}$
can be expressed as $A(n,M)$. Although our definition of error pattern is
different from the conventional definition in classical coding theory, we
shall find it helpful for the discussion of our algorithms.
We let $\mathbf{A}(m_{1},m_{2})$ denote the family of error patterns with
$m_{1}$ primary path errors and $m_{2}$ protection path errors (for brevity,
henceforth we refer to such errors as $(m_{1},m_{2})$ type errors).
###### Definition 2
Define $\mathbf{A}(m_{1},m_{2})_{i}$, a subset of $\mathbf{A}(m_{1},m_{2})$,
to be the family of $(m_{1},m_{2})$ type error patterns such that each error
pattern includes an error on primary path $S_{i}-T_{i}$, i.e.,
$A(m_{1},m_{2})\in\mathbf{A}(m_{1},m_{2})_{i}$ if and only if
$\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i}\\}\subseteq A(m_{1},m_{2})$.
Note that $|\mathbf{A}(m_{1},m_{2})|={n\choose m_{1}}{M\choose m_{2}}$ and
$|\mathbf{A}(m_{1},m_{2})_{i}|={n-1\choose m_{1}-1}{M\choose m_{2}}$. Denote
the family of error patterns including an error on $S_{i}-T_{i}$ with $n_{e}$
errors in total as:
$\mathbf{A_{i}}(n_{e})=\cup_{n_{c}=1}^{n_{e}}\mathbf{A}(n_{c},n_{e}-n_{c})_{i}$.
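To make Definitions 1 and 2 concrete, the following small sketch (ours, not from the paper) enumerates the $(m_{1},m_{2})$ type error patterns for a toy instance and checks the stated cardinalities.

```python
# Enumerating error patterns as sets of column indices of H_ext and checking
# |A(m1,m2)| = C(n,m1) C(M,m2) and |A(m1,m2)_i| = C(n-1,m1-1) C(M,m2).
from itertools import combinations
from math import comb

n, M = 5, 8
m1, m2, i = 2, 1, 3   # two primary-path errors, one protection-path error

def error_patterns(n, M, m1, m2):
    for prim in combinations(range(1, n + 1), m1):
        for prot in combinations(range(1, M + 1), m2):
            cols = [v for c in prim for v in (2 * c - 1, 2 * c)]   # A_c(m1)
            cols += [2 * n + p for p in prot]                      # A_p(m2), shifted
            yield frozenset(cols)

patterns = list(error_patterns(n, M, m1, m2))
assert len(patterns) == comb(n, m1) * comb(M, m2)

with_i = [A for A in patterns if {2 * i - 1, 2 * i} <= A]          # A(m1,m2)_i
assert len(with_i) == comb(n - 1, m1 - 1) * comb(M, m2)
```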
Our definition of an error pattern has only specified the location of the
error but not the actual values. An error value vector $E$ has the following
form: $[e_{d_{1}},e_{u_{1}},\ldots,e_{d_{n}},e_{u_{n}},e_{p_{1}},\ldots,e_{p_{M}}]^{T}$.
Each entry of the vector corresponds to one column in $H_{ext}$. An error
value vector $E$ corresponds to an error pattern $A(m_{1},m_{2})$ if in $E$,
the entries corresponding to $A(n,M)\backslash A(m_{1},m_{2})$ are zero, while
the other entries may be non-zero and are indeterminates in the decoding
algorithm. We are now ready to present the decoding algorithm in the presence
of multiple errors.
### V-A Multiple errors decoding algorithm at node $T_{i}$ ($S_{i}$ operates
similarly)
1. 1.
Try to solve the system of linear equations specified in (5) according to each
error pattern in $\mathbf{A_{i}}(n_{e})$. This means that for each error
pattern in $\mathbf{A_{i}}(n_{e})$, we replace $E$ in (5) by the error value
vector corresponding to that error pattern, which contains the indeterminates.
2. 2.
Suppose that the decoder finds a solution to one of these systems of
equations. Compute $d_{i}=P_{m}+e_{d_{i}}$, where $e_{d_{i}}$ is recovered as
part of the solution. If none of these systems of equations has a solution,
set $d_{i}=P_{m}$.
This algorithm requires the enumeration of all error patterns in
$\mathbf{A_{i}}(n_{e})$ and has high computational complexity (exponential in
the number of errors). In Section V-C, a low complexity polynomial-time
algorithm will be proposed under the assumption that the errors only happen on
the primary paths.
### V-B Condition for error correction
###### Theorem 3
Suppose that there are at most $n_{e}$ errors in the network (both primary
path error and protection path error are possible). The result of the decoding
algorithm is correct at every node if and only if the column vectors in
$A(m_{1},m_{2})$ are linearly independent for all
$A(m_{1},m_{2})\in\cup_{n_{c},n_{c}^{\prime}\in\\{0,\ldots,n_{e}\\}}\mathbf{A}(n_{c}+n_{c}^{\prime},2n_{e}-(n_{c}+n_{c}^{\prime}))$.
_Proof:_ First we shall show that under the stated condition, the decoding
algorithm works. Suppose $E_{1}$ and $E_{2}$ denote two error value vectors
corresponding to error patterns in $\mathbf{A}(n_{c},n_{e}-n_{c})$ and
$\mathbf{A}(n_{c}^{\prime},n_{e}-n_{c}^{\prime})$ respectively and $E_{1}\neq
E_{2}$. The linear independence condition in the theorem implies that there do
not exist $E_{1}$ and $E_{2}$ such that $HE_{1}=HE_{2}$. To see this, suppose
there exist such $E_{1}$ and $E_{2}$, then, $HE_{sum}=0$, where
$E_{sum}=E_{1}+E_{2}\neq 0$ has at most $n_{c}+n_{c}^{\prime}$ errors on
primary paths and $n_{p}+n_{p}^{\prime}=2n_{e}-(n_{c}+n_{c}^{\prime})$ errors
on protection path. These errors correspond to a member (which is a set of
column vectors)
$A(n_{c}+n_{c}^{\prime},2n_{e}-(n_{c}+n_{c}^{\prime}))\in\mathbf{A}(n_{c}+n_{c}^{\prime},2n_{e}-(n_{c}+n_{c}^{\prime}))$.
$HE_{sum}=0$ contradicts the linear independence of the column vectors in
$A(n_{c}+n_{c}^{\prime},2n_{e}-(n_{c}+n_{c}^{\prime}))$. Thus, there do not
exist $E_{1},E_{2}$ with $HE_{1}=HE_{2}$. This means that if a decoder tries
to solve every system of linear equations according to every possible error
pattern with $n_{e}$ errors, it either gets no solution, or its solution is
the actual error in the network. A decoder at $T_{i}$ is only interested in
error patterns in $\mathbf{A_{i}}(n_{e})$. If in step 1 it finds a solution
$E$ for one system of equations, then $e_{d_{i}}$ in $E$ is the actual error
value for $d_{i}$ and $d_{i}=P_{m}+e_{d_{i}}$; otherwise, no error happens on
$S_{i}-T_{i}$.
Conversely, if there exist some $n_{c},n_{c}^{\prime}$ such that some member
of $\mathbf{A}(n_{c}+n_{c}^{\prime},2n_{e}-(n_{c}+n_{c}^{\prime}))$ contains
linearly dependent vectors, then there exist $E_{1}^{\prime}$ and
$E_{2}^{\prime}$ such that $HE_{1}^{\prime}=HE_{2}^{\prime}$ and
$E_{1}^{\prime}\neq E_{2}^{\prime}$. This implies that there exists an $i_{1}$
such that either $e_{d_{i_{1}}}$ or $e_{u_{i_{1}}}$ differs between the two
vectors. At node $T_{i_{1}}$ or $S_{i_{1}}$, the decoder has no way to
distinguish which one is the actual error value vector and the decoding fails.
$\blacksquare$
The above condition is equivalent to the fact that all vector sets
$A(m_{1},m_{2})\in\cup_{m\in\\{0,\ldots,2n_{e}\\}}\mathbf{A}(m,2n_{e}-m)$ are
linearly independent. $|A(m,2n_{e}-m)|=2n_{e}+m$ and its maximum is $4n_{e}$.
Thus, the length of the vectors should be at least $4n_{e}$. In fact,
$M=4n_{e}$ is sufficient under randomly chosen coefficients. Suppose that the
coefficients are randomly and uniformly chosen from $GF(q)$. For a fixed $m$,
the probability that $A(m,2n_{e}-m)=A_{c}(m)\cup A_{p}(2n_{e}-m)$ is linearly
independent is $p_{1}(m)=\prod_{i=0}^{2m-1}(1-q^{2n_{e}-m+i}/q^{M})$.
Considering all members in $\mathbf{A}(m,2n_{e}-m)$ and all values of $m$, by
union bound, the probability for successful decoding is at least
$1-\sum_{m=0}^{2n_{e}}(1-p_{1}(m)){n\choose m}{M\choose 2n_{e}-m}$, which
approaches 1 when $q$ is large.
### V-C Reed-Solomon like efficient decoding for primary path error only case
If the errors only happen on primary paths, the condition in Theorem 3 becomes
that each member of $\mathbf{A}(2n_{e},0)$ is linearly independent. We can
choose $H$ so that $H_{ij}=(\alpha^{i})^{j-1}$, where $\alpha$ is a primitive
element of $GF(q)$, with $q>2n$. This is a parity check matrix of a
$(2n,2n-M)$ Reed-Solomon code; denote it by $H_{RS}$. Any $M$ ($M=4n_{e}$)
columns of $H_{RS}$ are linearly independent (see the sketch below), which
satisfies the condition in Theorem 3. Thus, (5) becomes
$H_{RS}[e_{d_{1}},e_{u_{1}},\ldots,e_{d_{n}},e_{u_{n}}]^{T}=P_{syn}$, in which
$H_{RS}$ and $P_{syn}$ are known by every node. The decoding problem becomes
that of finding an error pattern with at most $n_{e}$ errors and the
corresponding error value vector; note that in fact there are up to $2n_{e}$
error values to be determined. This problem can be viewed as an RS
hard-decision decoding problem in which the number of errors is bounded by
$2n_{e}$, and $P_{syn}$ can be viewed as the syndrome of a received message.
We can apply the Berlekamp-Massey algorithm (BMA) for decoding; it is an
efficient polynomial-time algorithm, while the algorithm proposed in Section
V-A has exponential complexity. Further details about RS codes and the BMA can
be found in [19].
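The sketch below (ours) builds $H_{RS}$ over $GF(2^{8})$ and spot-checks that randomly chosen sets of $M$ columns are linearly independent; taking $\alpha=$ 0x03 as a primitive element and 0x11B as the field modulus are assumptions made for the example, and a production decoder would use the BMA of [19] rather than this check.

```python
import random

POLY = 0x11B  # assumed GF(2^8) modulus; 0x03 is a generator of its multiplicative group

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (POLY if a & 0x80 else 0)
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_inv(a):
    return gf_pow(a, 254)              # a^254 = a^(-1)

def gf_rank(rows):
    """Rank over GF(2^8) by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = gf_inv(rows[rank][c])
        rows[rank] = [gf_mul(inv, x) for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][c]:
                f = rows[r][c]
                rows[r] = [x ^ gf_mul(f, y) for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

n, n_e = 10, 1                          # q = 256 > 2n, as required
M, alpha = 4 * n_e, 0x03
H_RS = [[gf_pow(gf_pow(alpha, i), j - 1) for j in range(1, 2 * n + 1)]
        for i in range(1, M + 1)]       # H_ij = (alpha^i)^(j-1)

for _ in range(50):                     # any M columns should be independent
    cols = random.sample(range(2 * n), M)
    assert gf_rank([[H_RS[r][c] for c in cols] for r in range(M)]) == M
```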
## VI Recovery from a combination of errors and failures
We now consider a combination of errors and failures on primary and protection
paths. Recall that when a primary path or a protection path fails, all the
nodes are assumed to be aware of the location of the failure.
Assume that there are a total of $n_{f}$ failures in the network, such that
$n_{f_{c}}$ failures are on primary paths and $n_{f_{p}}=n_{f}-n_{f_{c}}$
failures are on protection paths. If a protection path has a failure it is
basically useless and we remove the equation corresponding to it in error
model (5). Thus, we shall mainly work with primary path failures and error
model (5) will have $M^{\prime}=M-n_{f_{p}}$ equations. In our error model,
when a primary path failure happens, $\hat{d}_{i}=0$ ($\hat{u}_{i}=0$
respectively). We can treat a primary path failure as a primary path error
with error value $e_{d_{i}}=d_{i}$ ($e_{u_{i}}=u_{i}$ respectively). In the
failure-only case considered in [7], $n_{f_{c}}$ protection paths are needed
for recovery from $n_{f_{c}}$ primary path failures. However, the coefficients
are chosen such that $\alpha_{i}^{(k)}=\beta_{i}^{(k)},\forall i,k$, which
violates the condition for error correction discussed before. Thus, we need
more paths when faced with a combination of errors and failures.
The decoding algorithm and condition in this case are very similar to the
multiple-error case. An important difference is that the decoder knows the location of
$n_{f}$ failures. To handle the case of failures, we need to modify some
definitions in Section V.
###### Definition 3
A subset of columns of $H$ denoted by $F(n_{f_{c}})$ is said to be a failure
pattern with $n_{f_{c}}$ failures on primary paths
$\\{f_{1},\ldots,f_{n_{f_{c}}}\\}\subseteq\\{1,\ldots,n\\}$ if it has the
following form:
$F(n_{f_{c}})=\\{\mathbf{v}_{2f_{1}-1},\mathbf{v}_{2f_{1}},\ldots,\mathbf{v}_{2{f_{n_{f_{c}}}}-1},\mathbf{v}_{2{f_{n_{f_{c}}}}}\\}$,$f_{i}\in\\{1,\ldots,n\\}$.
###### Definition 4
An error/failure pattern with $m_{1}$ primary path errors, $m_{2}$ protection
path errors and failure pattern $F(n_{f_{c}})$ is defined as
$A^{F}(m_{1},m_{2},F(n_{f_{c}}))=A(m_{1},m_{2})_{\backslash F(n_{f_{c}})}\cup
F(n_{f_{c}})$, where $A(m_{1},m_{2})_{\backslash
F(n_{f_{c}})}\in\mathbf{A}(m_{1},m_{2})$ and is such that
$A(m_{1},m_{2})_{\backslash F(n_{f_{c}})}\cap F(n_{f_{c}})=\emptyset$, i.e.,
$A(m_{1},m_{2})_{\backslash F(n_{f_{c}})}$ is a $(m_{1},m_{2})$ type error, of
which the primary path errors do not happen on failed paths in $F(n_{f_{c}})$.
We let $\mathbf{A}^{F}(m_{1},m_{2},F(n_{f_{c}}))$ denote the family of
error/failure patterns with $m_{1}$ primary path errors, $m_{2}$ protection
path errors ($(m_{1},m_{2})$ type errors) and a fixed failure pattern
$F(n_{f_{c}})$.
###### Definition 5
Define a subset of $\mathbf{A}^{F}(m_{1},m_{2},F(n_{f_{c}}))$, denoted as
$\mathbf{A}^{F}(m_{1},m_{2},F(n_{f_{c}}))_{i}$ to be the family of
error/failure patterns such that each pattern includes an error or failure on
$S_{i}-T_{i}$, i.e.,
$A^{F}(m_{1},m_{2},F(n_{f_{c}}))\in\mathbf{A}^{F}(m_{1},m_{2},F(n_{f_{c}}))_{i}$
if and only if $\\{\mathbf{v}_{2i-1},\mathbf{v}_{2i}\\}\subseteq
A^{F}(m_{1},m_{2},F(n_{f_{c}}))$.
An error/failure value vector $E$ corresponds to an error/failure pattern
$A^{F}(m_{1},m_{2},F(n_{f_{c}}))$ if the entries corresponding to
$A(n,M)\backslash A^{F}(m_{1},m_{2},F(n_{f_{c}}))$ are zero, while the other
entries may be non-zero.
### VI-A Decoding algorithm at node $T_{i}$ for combined failures and errors
($S_{i}$ operates similarly)
1. 1.
Note that $T_{i}$ knows the failure pattern $F(n_{f_{c}})$ for all primary
paths. It tries to solve equations of the form (5) according to each
error/failure pattern in
$\cup_{n_{c}=1}^{n_{e}}\mathbf{A}^{F}(n_{c},n_{e}-n_{c},F(n_{f_{c}}))_{i}$.
The indeterminates are given by the error/failure value vector corresponding
to the pattern.
2. 2.
Suppose that the decoder finds a solution to one of these systems of
equations. Compute $d_{i}=P_{m}+e_{d_{i}}$; if none of these systems of
equations has a solution, set $d_{i}=P_{m}$.
### VI-B Condition for errors/failures correction
###### Theorem 4
Suppose there are at most $n_{e}$ errors and $n_{f_{c}}$ primary path failures
in the network; both primary path errors and protection path errors are
possible. The decoding algorithm works at every node if and only if the column
vectors in $A(m_{1},m_{2})$ are linearly independent for all
$A(m_{1},m_{2})\in\cup_{m\in\\{0,\ldots,2n_{e}\\}}\mathbf{A}(n_{f_{c}}+m,2n_{e}-m)$.
_Proof:_ The condition implies that for all
$n_{c},n_{c}^{\prime}\in\\{0,\ldots,n_{e}\\}$ and all possible failure
patterns $F(n_{f_{c}})$, each member in
$\mathbf{A}^{F}(n_{c}+n_{c}^{\prime},2n_{e}-(n_{c}+n_{c}^{\prime}),F(n_{f_{c}}))$
contains linearly independent vectors. The rest of the proof is similar to
Theorem 3 and is omitted. $\blacksquare$
The maximum number of vectors contained in each such error pattern is
$4n_{e}+2n_{f_{c}}$. Thus, we need at least $M^{\prime}=4n_{e}+2n_{f_{c}}$
equations in (5) which implies in turn that $M\geq
4n_{e}+2n_{f_{c}}+n_{f_{p}}$. Since we do not know $n_{f_{c}},n_{f_{p}}$ a
priori and in the worst-case scenario all failures could happen on the primary
paths, we need at least $M=4n_{e}+2n_{f}$. On the other hand, using a random
choice of coefficients from a large enough field, $M=4n_{e}+2n_{f}$ is
sufficient to guarantee that the linear independence condition in Theorem 4 is
satisfied with high probability.
If we restrict the errors/failures to be only on the primary paths, then the
condition becomes each member of $\mathbf{A}(2n_{e}+n_{f},0)$ is linearly
independent and we can choose $H$ to be the parity-check matrix of a
$(2n,2n-4n_{e}-2n_{f})$ RS code. In the error/failure value vector $E$, the
locations of the failures are known. The decoding problem can be viewed as an
RS hard-decision decoding problem in which the number of error values is
bounded by $2n_{e}$ and the number of failure (erasure) values is bounded by
$2n_{f}$; it can be handled by a modified BMA [19] that works for both errors
and erasures.
## VII Simulation results and comparisons
In this section, we use simulations to show how our network coding-based
protection scheme can save network resources. Under our adversary error
model, when the adversary controls a single link, one simple protection scheme
is to provision three edge-disjoint paths for each primary connection,
analogous to a (3,1) repetition code. This is referred to as a 2+1 scheme,
meaning that two additional paths are used to protect one connection. We call
our proposed scheme 4+n, i.e., four additional paths are used to protect $n$
connections. It is expected that when $n$ becomes large, 4+n will use fewer
resources than 2+1. We provisioned primary and protection paths for both cases
and compared their costs. Our protection scheme can be used in different
networks, including optical networks deployed over a large area, or any
overlay network, regardless of the underlying supporting network and the scale
of the network.
(a) Labnet03 Network
(b) Link costs of Labnet03 network.
Figure 2: Labnet03 network with 20 nodes and 53 edges in North America.
Figure 3: COST239 network with 11 nodes and 26 edges in Europe.
In the simulation, we use two networks: 1) Labnet03 network for North America
[20, 21] (Fig.2), 2) COST239 Network for Europe [20, 22] (Fig.3). Our integer
linear programming (ILP) for the proposed 4+n scheme is formulated as follows.
The network topology is modelled as an undirected graph $G=(V,E)$. Considering
that usually there are multiple optical fibers between two cities, we inflate
the graph $G$ such that each edge is copied several times (four times in our
simulations), i.e., there are four parallel edges between each pair of
adjacent nodes. An
edge $(i,j)$ in $G$ is replaced by edges
$(i,j)_{1},(i,j)_{2},(i,j)_{3},(i,j)_{4}$ in the inflated graph. The set of
unicast connections to be established is given in ${\cal
N}=\\{(S_{1},T_{1}),\ldots,(S_{n},T_{n})\\}$. In order to model the protection
paths as flows, we add a virtual source $s$ and a virtual sink $t$ to the
network and connect $s$ and $t$ with the end nodes of connections in $\cal N$.
This procedure is illustrated in Fig. 4. We call this inflated graph
$G^{\prime}=(V^{\prime},E^{\prime})$. Every edge $(i,j)_{k}$ connecting node
$i$ and $j$ is associated with a positive number $c_{ij}$, the cost per unit
flow on this link, which is proportional to the distance between the
nodes $i$ and $j$. Assume that each link has enough capacity so there is no
capacity constraint. We hope to find the optimal $4+n$ paths in the network
that satisfy appropriate constraints on the topology and minimize the total
cost. (We only provision one set of protection paths for the connections in
$\cal N$. We could optimally partition $\cal N$ into several subsets, each of
which is protected by its own set of protection paths, as in [8]; this would
give a better solution but greatly complicates the ILP. In our simulations,
the 4+n scheme already shows gains under the simpler formulation, so we
simulate under the simpler formulation.) One protection path can be viewed as a
unit flow from $s$ to $t$, while one primary path $S_{i}-T_{i}$ can be viewed
as a unit flow from $S_{i}$ to $T_{i}$. Therefore, the problem can be
formulated as a minimum cost flow problem under certain conditions. Each edge
$(i,j)_{k}$ is associated with $4+n$ binary flow variables $f_{ij,k}^{m},1\leq
m\leq n+4$, which equals 1 if path $m$ passes through edge $(i,j)_{k}$ and 0
otherwise. The ILP is formulated as follows.
$\min\sum_{(i,j)_{k}\in E^{\prime}}\sum_{1\leq m\leq n+4}c_{ij}f_{ij,k}^{m}.$
(10)
The constraints are such that
1. 1.
Flow conservation constraints hold for primary paths and protection paths.
2. 2.
Each protection path should pass through the end nodes of all the connections.
3. 3.
The primary paths are edge-disjoint.
4. 4.
The primary paths and the protection paths are edge-disjoint.
5. 5.
The protection paths are edge-disjoint.
Figure 4: Inflation of $G$. The left one is the original graph $G$. The
unicast connections of interest are ${\cal
N}=\\{(S_{1},T_{1}),(S_{2},T_{2})\\}$. The right one is the inflated graph
$G^{\prime}$.
The minimization is over $f_{ij,k}^{m},(i,j)_{k}\in E^{\prime},1\leq m\leq
4+n$ and some auxiliary variables that are used to mathematically describe the
constraints. We assume that when an adversary attacks an edge in the network
she can control all paths going through that link. Thus, we impose
edge-disjointness constraints so that she can put at most one path in error in
the network. For a detailed mathematical description of the constraints,
please refer to the similar formulation in [8]. We call this formulation ILP1.
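For illustration only, here is a much-simplified flow-based skeleton in PuLP (ours, not the authors' ILP1): it keeps the cost objective, flow conservation, and the requirement that a single protection path visit given end nodes, on a hypothetical toy graph, and omits the edge-disjointness constraints and the multi-path structure of the real formulation.

```python
import pulp

# Toy undirected graph given as edge costs; all names and numbers are assumptions.
edges = {("A", "B"): 3, ("B", "C"): 2, ("A", "C"): 6, ("B", "D"): 4, ("C", "D"): 1}
nodes = {v for e in edges for v in e}
arcs = [(i, j) for (i, j) in edges] + [(j, i) for (i, j) in edges]   # directed copies
cost = {(i, j): c for (i, j), c in edges.items()}
cost.update({(j, i): c for (i, j), c in edges.items()})

s, t = "A", "D"            # stand-ins for the virtual source/sink of the paper
end_nodes = {"B", "C"}     # end nodes the protection path must pass through

prob = pulp.LpProblem("protection_path", pulp.LpMinimize)
f = pulp.LpVariable.dicts("f", arcs, cat="Binary")   # unit flow on each arc

prob += pulp.lpSum(cost[a] * f[a] for a in arcs)     # cost objective, cf. (10)

for v in nodes:                                      # flow conservation
    inflow = pulp.lpSum(f[(i, j)] for (i, j) in arcs if j == v)
    outflow = pulp.lpSum(f[(i, j)] for (i, j) in arcs if i == v)
    if v == s:
        prob += outflow - inflow == 1
    elif v == t:
        prob += inflow - outflow == 1
    else:
        prob += inflow == outflow

for v in end_nodes:                                  # constraint 2: visit end nodes
    prob += pulp.lpSum(f[(i, j)] for (i, j) in arcs if j == v) >= 1

# Note: a full formulation also needs constraints ruling out disconnected
# cycles and enforcing the edge-disjointness of ILP1 (see [8]).
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([a for a in arcs if f[a].value() == 1])
```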
We also provision the paths for 2+1 scheme. The provisioning of the paths also
minimizes the total cost, i.e., the objective is to minimize
$\sum_{(i,j)_{k}\in E^{\prime}}(\sum_{1\leq m\leq n}\sum_{1\leq l\leq
3}c_{ij}f_{ij,k}^{ml})$, where $f_{ij,k}^{ml}$ is the flow variable for the
$l^{th}$ path of the $m^{th}$ primary connection. Furthermore, the three paths
for one connection should be edge-disjoint. We call this formulation ILP2.
However, in general $G^{\prime}$ contains a large number of edges, which
results in a long computation time for ILP1. In order to simulate and compare
efficiently, instead of solving ILP1 directly, we compute an upper bound on
the cost of our proposed 4+n scheme that can be obtained much faster. The
connection set $\cal N$ is chosen as follows. Instead of choosing $n$
connections at random, we choose $n/2$ connections at random (denoted as the
connection set ${\cal N}_{\frac{1}{2}}$) and duplicate those connections to
obtain $\cal N$, so that there are two independent unicast connections between
each chosen pair of cities. We remove the fifth constraint (edge-disjointness
of protection paths) from ILP1 and run the ILP instead on the original graph
$G$ for ${\cal N}_{\frac{1}{2}}$; we call this ILP3. Then, we modify the
optimal solution of ILP3 to obtain a feasible solution of ILP1 for the $n$
connections on $G^{\prime}$. This is illustrated in Fig. 5. The cost of this
feasible solution is an upper bound on the optimal cost of ILP1, and from
simulations with a small number of connections we observe that the bound is
approximately 10% larger than the actual optimal cost. Solving ILP2 turns out
to be fast, so we obtain the actual optimal cost for the 2+1 scheme.
Figure 5: A feasible solution of ILP1 is obtained from the optimal solution
of ILP3. Here, ${\cal N}_{\frac{1}{2}}=\\{(S_{1},T_{1}),(S_{2},T_{2})\\}$ and
${\cal N}=\\{(S_{1},T_{1}),(S_{2},T_{2}),(S_{3},T_{3}),(S_{4},T_{4})\\}$,
where $S_{1}=S_{3},T_{1}=T_{3},S_{2}=S_{4},T_{2}=T_{4}$. Suppose the left
graph is the optimal solution obtained from ILP3 on $G$ for ${\cal
N}_{\frac{1}{2}}$. The bold edges indicate that four protection paths pass
through those edges. The right graph is a feasible solution of ILP1 on
$G^{\prime}$. The protection paths are split across the four parallel copies
of each edge so that the fifth constraint (edge-disjointness of protection
paths) holds, and the paths $S_{1}-T_{1},S_{2}-T_{2}$ are copied to establish
$S_{3}-T_{3},S_{4}-T_{4}$. The solution remains feasible because in
$G^{\prime}$ there are four parallel edges available for each connection and
we only occupy two of them.
In the simulation, we choose $|{\cal N}_{\frac{1}{2}}|$ from 5 to 9 such that
$n$ goes from 10 to 18. The ILPs are solved by CPLEX. The costs for the 4+n
scheme and 2+1 scheme are averaged over five realizations of ${\cal
N}_{\frac{1}{2}}$. The average costs and percentage gains for different
numbers of connections are presented in Tables III and IV. As expected, the
gain of our proposed scheme increases with the number of connections.
TABLE III: Comparison of the average costs for Labnet03 network
$n$ | Average cost for 4+n (upper bound) | Average cost for 2+1 | Percentage gain
---|---|---|---
10 | 1826 | 1916.4 | 4.72%
12 | 2106.4 | 2295.6 | 8.24%
14 | 2339.6 | 2598.8 | 9.97%
16 | 2677.6 | 3049.2 | 12.19%
18 | 3105.2 | 3660 | 15.16%
TABLE IV: Comparison of the average costs for COST239 network
$n$ | Average cost for 4+n (upper bound) | Average cost for 2+1 | Percentage gain
---|---|---|---
10 | 1226 | 1245 | 1.53%
12 | 1548 | 1628.4 | 4.94%
14 | 1742.4 | 1854 | 6.02%
16 | 1810.8 | 1958.4 | 7.54%
18 | 1883.2 | 2114.4 | 10.93%
Intuitively, our proposed scheme will have more gain when the connections are
over long distances, e.g., connections between the east coast and the west
coast of the US. Roughly speaking, the number of paths crossing the long
distance (inducing high cost) is $4+n$ for our scheme, while it is $3n$ for
the 2+1 scheme. We also ran some simulations on the Labnet03 network to verify
this by choosing the connections to cross the American continent. For a
setting with ten connections, we observed a 36.7% gain, and when $n=6$ and
$n=7$, we observed gains of up to 15.5% and 17.8% respectively. We conclude
that our 4+n
scheme is particularly efficient in allocating network resources when the
primary paths are over long distances or have high cost.
## VIII Conclusions and Future Work
In this paper we considered network coding based protection strategies against
adversarial errors for multiple unicast connections that are protected by
shared protection paths. Each unicast connection is established over a primary
path and the protection paths pass through the end nodes of all connections.
We demonstrated suitable encoding coefficient assignments and decoding
algorithms that work in the presence of errors and failures. We showed that
when the adversary introduces $n_{e}$ errors, which may be on primary paths or
protection paths, $4n_{e}$ protection paths are sufficient for data
recovery at all the end nodes. More generally, when there are $n_{e}$ errors
and $n_{f}$ failures on primary or protection paths, $4n_{e}+2n_{f}$
protection paths are sufficient for correct decoding at all the end nodes.
Simulations show that our proposed scheme saves network resources compared to
the 2+1 protection scheme, especially when the number of primary paths is
large or the costs for establishing primary paths are high, e.g., long
distance primary connections.
Future work includes investigating more general topologies for network coding-
based protection. The 2+1 scheme can be viewed as one where there is usually
no sharing of protection resources between different primary connections,
whereas the 4+n scheme enforces full sharing of the protection resources.
Schemes that exhibit a tradeoff between these two are worth investigating. For
example, one could consider provisioning two primary paths for each
connection, instead of one, and design corresponding network coding protocols.
This would reduce the number of protection paths one needs to provision, and
depending on the network topology, potentially have a lower cost. It is also
interesting to further examine the resource savings when we partition the
primary paths into subsets and provision protection resources for each subset
separately. Furthermore, in this paper we considered an adversarial error
model. When errors are random, we could use classical error control codes to
provide protection. But it is interesting to consider schemes that combine
channel coding across time and the coding across the protection paths in a
better manner. A reviewer has pointed out that rank metric codes [15] might
also be useful for this problem.
## References
* [1] D. Zhou and S. Subramaniam, ``Survivability in optical networks,'' _IEEE Network_ , vol. 14, pp. 16–23, Nov./Dec. 2000.
* [2] R. Ahlswede, N. Cai, S.-Y. Li, and R. W. Yeung, ``Network Information Flow,'' _IEEE Trans. on Info. Th._ , vol. 46, no. 4, pp. 1204–1216, 2000.
* [3] A. Kamal, ``1+n protection in optical mesh networks using network coding on p-cycles,'' in _IEEE Globecom_ , 2006.
* [4] ——, ``1+n protection against multiple faults in mesh networks,'' in _IEEE Intl. Conf. on Commu. (ICC)_ , 2007.
* [5] A. E. Kamal, ``1+n network protection for mesh networks: Network coding-based protection using p-cycles,'' _IEEE/ACM Transactions on Networking_ , vol. 18, no. 1, pp. 67–80, Feb. 2010.
* [6] D. Stamatelakis and W. D. Grover, ``IP layer restoration and network planning based on virtual protection cycles,'' _IEEE Journal on Selected Areas in Communications_ , vol. 18,no.10, pp. 1938–1949, 2000.
* [7] A. E. Kamal and A. Ramamoorthy, ``Overlay protection against link failures using network coding,'' in _42nd Conf. on Info. Sci. and Sys. (CISS)_ , 2008.
* [8] A. E. Kamal, A. Ramamoorthy, L. Long, and S. Li, ``Overlay protection against link failures using network coding,'' submitted to _IEEE/ACM Trans. on Networking_ , 2009.
* [9] R. Koetter and M. Médard, ``An Algebraic Approach to Network Coding,'' _IEEE/ACM Transactions on Networking_ , vol. 11, no. 5, pp. 782–795, 2003.
* [10] R. W. Yeung and N. Cai, ``Network error correction, Part I: Basic concepts and upper bounds,'' _Comm. in Info. and Sys._ , pp. 19–36, 2006.
* [11] N. Cai and R. W. Yeung, ``Network error correction, Part II: Lower bounds,'' _Comm. in Info. and Sys._ , pp. 37–54, 2006.
* [12] Z. Zhang, ``Linear network error correction codes in packet networks,'' _IEEE Trans. on Info. Th._ , vol. 54, no. 1, pp. 209–218, Jan. 2008.
* [13] S. Jaggi, M. Langberg, S. Katti, T. Ho, D. Katabi, M. Medard, and M. Effros, ``Resilient network coding in the presence of Byzantine adversaries,'' in _IEEE INFOCOM_ , 2007, pp. 616–624.
* [14] S. Yang, R. W. Yeung, and C. K. Ngai, ``Refined coding bounds and code constructions for coherent network error correction,'' preprint.
* [15] D. Silva, F. Kschischang, and R. Koetter, ``A rank-metric approach to error control in random network coding,'' _IEEE Trans. on Info. Th._ , vol. 54, no. 9, pp. 3951–3967, Sept. 2008.
* [16] Z. Li and B. Li, ``Network coding: the case of multiple unicast sessions,'' in _Proc. of the 42nd Allerton Annual Conference on Communication, Control, and Computing_ , 2004.
* [17] N. J. A. Harvey, D. R. Karger, and K. Murota, ``Deterministic network coding by matrix completion,'' in _SODA '05: Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms_. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2005, pp. 489–498.
* [18] L. Lovász, ``On determinants, matchings and random algorithms,'' in _Fund. Comput. Theory 79, Berlin_ , 1979.
* [19] S. Lin and D. J. Costello, _Error control coding: fundamentals and applications_. Prentice Hall, 2004.
* [20] M. Menth and R. Martin, ``Network resilience through multi-topology routing,'' in _Proc. of 5th Intl. Workshop on Design of Reliable Communication Networks, 2005_ , pp. 271–277.
* [21] U. Walter, ``Autonomous optimization of Next Generation Networks,'' in _2nd International Workshop on Self-Organizing Systems_ , Sep. 2007.
* [22] A. Kodian and W. Grover, ``Failure-independent path-protecting p-cycles: efficient and simple fully preconnected optical-path protection,'' _Journal of Lightwave Technology_ , vol. 23, no. 10, pp. 3241–3259, 2005.
|
arxiv-papers
| 2009-05-14T04:45:05 |
2024-09-04T02:49:02.611201
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Shizheng Li and Aditya Ramamoorthy",
"submitter": "Shizheng Li",
"url": "https://arxiv.org/abs/0905.2248"
}
|
0905.2286
|
# Projective normality of finite group quotients and EGZ theorem
S.S.Kannan, S.K.Pattanayak
Chennai Mathematical Institute, Plot H1, SIPCOT IT Park,
Padur Post Office, Siruseri, Tamilnadu - 603103, India.
kannan@cmi.ac.in, santosh@cmi.ac.in, pranab@cmi.ac.in
###### Abstract
In this note, we prove that for any finite dimensional vector space $V$ over
$\mathbb{C}$, and for a finite cyclic group $G$, the projective variety
$\mathbb{P}(V)/G$ is projectively normal with respect to the descent of
$\mathcal{O}(1)^{\otimes|G|}$ by a method using toric varieties, and deduce the
EGZ theorem as a consequence.
Keywords: GIT quotient, line bundle, normality of a semigroup.
Introduction
Let $V$ be a finite dimensional representation of a cyclic group $G$ over the
field of complex numbers $\mathbb{C}$. Let $\mathcal{L}$ denote the descent of
$\mathcal{O}(1)^{\otimes|G|}$ to the GIT quotient $\mathbb{P}(V)/G$. In [4],
it is shown that $(\mathbb{P}(V)/G,\mathcal{L})$ is projectively normal. The
proof of this uses the well-known arithmetic result due to Erdös, Ginzburg,
and Ziv (see [2]).
In this note, we prove the projective normality of
$(\mathbb{P}(V)/G,\mathcal{L})$ by a method using toric varieties, and deduce
the EGZ theorem (see [2]) as a consequence.
## 1 Erdös-Ginzburg-Ziv theorem:
We first prove a lemma on normality of a semigroup related to a finite cyclic
group.
###### Lemma 1.1.
Let $M$ be the sub-semigroup of $\mathbb{Z}^{n}$ generated by the finite set
$S=\\{(m_{0},m_{1},\cdots,m_{n-1})\in(\mathbb{Z}_{\geq
0})^{n}:\sum_{i=0}^{n-1}m_{i}=n\,\,and\,\,\sum_{i=0}^{n-1}im_{i}\equiv
0\,\,mod\,\,n\\}$ and let $N$ be the subgroup of $\mathbb{Z}^{n}$ generated by
$M$. Then $M=\\{x\in N:qx\in M\,\,for\,\,some\,\,q\in\mathbb{N}\\}$.
###### Proof.
Consider the homomorphism:
$\Phi:\mathbb{Z}^{n}\longrightarrow\frac{\mathbb{Z}}{n\mathbb{Z}}\oplus\frac{\mathbb{Z}}{n\mathbb{Z}}$
of abelian groups given by:
$\Phi(x_{0},x_{1},\cdots,x_{n-1})=(\sum_{i=0}^{n-1}x_{i}+n\mathbb{Z},\sum_{i=0}^{n-1}ix_{i}+n\mathbb{Z})$.
Clearly, $\Phi$ is surjective and $N\subset Ker(\Phi)$. So
$\frac{\mathbb{Z}^{n}}{Ker(\Phi)}\cong\frac{\mathbb{Z}}{n\mathbb{Z}}\oplus\frac{\mathbb{Z}}{n\mathbb{Z}}.$ (1)
Now, we show that $N=Ker(\Phi)$.
Let $\\{e_{i},i=0,1,2,\cdots,n-1\\}$ be the standard basis of
$\mathbb{Z}^{n}$. Then the subgroup of $N$ generated by
$\\{u_{n-2}=ne_{n-2},u_{n-1}=ne_{n-1},u_{r}=e_{r}+(r+1)e_{n-2}+(n-r-2)e_{n-1},r=0,1,2,\cdots,n-3\\}$
is of index $n^{2}$ in $\mathbb{Z}^{n}$.
On the other hand, $N\subset Ker(\Phi)$. Hence, $N=Ker(\Phi)$, by (1).
Now each $(m_{0},m_{1},\cdots,m_{n-1})\in M$ can be written as a
$\mathbb{Z}$-linear combination of $u_{i}$’s:
$(m_{0},m_{1},\cdots,m_{n-1})=\sum_{i=0}^{n-1}d_{i}u_{i}$, where
$d_{i}=m_{i}\in\mathbb{Z}_{\geq 0},i=0,1,2,\cdots,n-3$,
$d_{n-2}=\frac{m_{n-2}-\sum_{i=0}^{n-3}(i+1)m_{i}}{n}$ and
$d_{n-1}=\frac{m_{n-1}-\sum_{i=0}^{n-3}(n-i-2)m_{i}}{n}$. Notice that
$d_{n-2},d_{n-1}\in\mathbb{Z}$ by the conditions on $N$.
Let $x\in N$ be such that $qx\in M$, for some $q\in N$.
Then
$q(\sum_{i=0}^{n-1}a_{i}u_{i})=\sum_{i=0}^{n-1}b_{i}u_{i}+\sum_{j=1}^{m}c_{j}v_{j}$,
where
$\\{v_{j}:j=1,2,\cdots,m\\}=S\setminus\\{u_{i}:i=0,1,2,\cdots,n-1\\},a_{i}\in\mathbb{Z},b_{i},c_{j}\in\mathbb{Z}_{\geq
0}\,\,\forall\,\,i,j$.
Again, we can write
$v_{k}=\sum_{i=0}^{n-1}d_{i,k}u_{i},d_{i,k}\in\mathbb{Z}\,\,\forall\,\,i,d_{i,k}\in\mathbb{Z}_{\geq
0}\,\,\forall\,\,i=0,1,2,\cdots,n-3$ and $\exists\,\,0\leq i\leq(n-3)$ such
that $d_{i,k}>0$.
If $x\notin M$, we may assume that $a_{i}\leq 0\,\,\forall\,\,i$.
If one of the $b_{i}$’s or $c_{j}$’s is nonzero, then there is an $i$ for
which $a_{i}>0$, contradiction to the assumption that $a_{i}\leq 0$. So, $x\in
M$.
∎
We now prove:
###### Theorem 1.2.
Let $G$ be a cyclic group of order $n$ and $V$ be any finite dimensional
representation of $G$ over $\mathbb{C}$. Let $\mathcal{L}$ be the descent of
$\mathcal{O}(1)^{\otimes n}$. Then $(\mathbb{P}(V)/G,\mathcal{L})$ is
projectively normal.
###### Proof.
Let $R:=\oplus_{d\geq 0}R_{d}$; $R_{d}:=(Sym^{dn}V)^{G}$.
Let $G=<g>$. Write $V^{*}=\oplus_{i=0}^{n-1}V_{i}$ where $V_{i}:=\\{v\in
V^{*}:g.v=\xi^{i}.v\\}$, $0\leq i\leq n-1$, where $\xi$ is a primitive $n$th
root of unity. The $\mathbb{C}$-vector space $R_{1}$ is generated by elements
of the form $X_{0}.X_{1}\ldots X_{n-1}$, where $X_{i}\in
Sym^{m_{i}}(V_{i}),\sum_{i=0}^{n-1}m_{i}=n\,\,and\,\,\sum_{i=0}^{n-1}im_{i}\equiv
0\,\,mod\,\,n$.
So, the $\mathbb{C}$-subalgebra of $\mathbb{C}[V]$ generated by $R_{1}$ is the
algebra corresponding to the semigroup $M$ generated by
$\\{(m_{0},m_{1},\cdots,m_{n-1})\in(\mathbb{Z}_{\geq
0})^{n}:\sum_{i=0}^{n-1}m_{i}=n\,\,and\,\,\sum_{i=0}^{n-1}im_{i}\equiv
0\,\,mod\,\,n\\}$.
By lemma (1.1), $M$ is normal (for the definition of normality of a semigroup,
see page 61 of [1]).
Hence, by theorem 4.39 of [1] the $\mathbb{C}$-subalgebra of $\mathbb{C}[V]$
generated by $R_{1}$ is normal.
Thus, by Exercise 5.14(a) of [3], the theorem follows.
∎
We now deduce EGZ-theorem.
###### Corollary 1.3.
Let $\\{a_{1},a_{2},\cdots,a_{m}\\},m\geq 2n-1$ be a sequence of elements of
$\frac{\mathbb{Z}}{n\mathbb{Z}}$. Then there exists a subsequence
$\\{a_{i_{1}},a_{i_{2}},\cdots,a_{i_{n}}\\}$ of length $n$ whose sum is zero.
###### Proof.
Let $G=\frac{\mathbb{Z}}{n\mathbb{Z}}=<g>$ and $V$ be the regular
representation of $G$ over $\mathbb{C}$.
Let $\\{X_{i}:i=0,1,\cdots,n-1\\}$ be a basis of $V^{*}$ given by:
$\hskip 56.9055ptg.X_{i}=\xi^{i}X_{i},\,\,\forall\,\,g\in G$ and
$i=0,1,\cdots,n-1$, where $\xi$ is a primitive $n$th root of unity.
Let $\\{a_{1},a_{2},\cdots,a_{m}\\},m\geq 2n-1$ be a sequence of elements of
$G$. Consider the subsequence $\\{a_{1},a_{2},\cdots,a_{2n-1}\\}$ of length
$2n-1$.
Take $a=-(\sum_{i=1}^{2n-1}a_{i})$.
Then $(\prod_{i=1}^{2n-1}X_{a_{i}}).X_{a}$ is a $G$-invariant monomial of
degree $2n$.
By Theorem (1.2), there exists a subsequence
$\\{a_{i_{1}},a_{i_{2}},\cdots,a_{i_{n}}\\}$ of
$\\{a_{1},a_{2},\cdots,a_{2n-1},a\\}$ of length $n$ such that
$\prod_{j=1}^{n}X_{a_{i_{j}}}$ is $G$-invariant.
So, $\sum_{j=1}^{n}a_{i_{j}}=0$. Hence, the Corollary follows.
∎
## References
* [1] W.Bruns, J.Gubeladze, Polytopes, Rings, and K-Theory. Springer Monographs in Mathematics. Springer. to appear.
* [2] P.Erdös, A.Ginzburg, A.Ziv, A theorem in additive number theory, Bull. Res. Council, Israel, 10 F(1961) 41-43.
* [3] R.Hartshorne, Algebraic Geometry, Springer-Verlag, 1977.
* [4] S.S.Kannan, S.K.Pattanayak, Pranab Sardar, Projective normality of finite groups quotients. Proc. Amer. Math. Soc. 137 (2009), no. 3, pp. 863-867.
* [5] D.Mumford, J.Fogarty and F.Kirwan, Geometric Invariant theory, Springer-Verlag, 1994.
|
arxiv-papers
| 2009-05-14T09:17:09 |
2024-09-04T02:49:02.617684
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "S.S.Kannan, S.K.Pattanayak",
"submitter": "Senthamarai Kannan S",
"url": "https://arxiv.org/abs/0905.2286"
}
|
0905.2448
|
# Infinite operator-sum representation of density operator for a dissipative
cavity with Kerr medium derived by virtue of entangled state representation
Li-yun Hu and Hong-yi Fan
Department of Physics, Shanghai Jiao Tong University, Shanghai, 200030, China
Corresponding author. Address: Department of Physics, Shanghai Jiao Tong
University, Shanghai, 200030, China. Tel./fax: +86 2162932080.E-mail
addresses: hlyun2008@126.com, hlyun@sjtu.edu.cn (L.-y. Hu).
###### Abstract
By using the thermo entangled state representation we solve the master
equation for a dissipative cavity with Kerr medium to obtain density
operators’ infinite operator-sum representation
$\rho\left(t\right)=\sum_{m,n,l=0}^{\infty}M_{m,n,l}\rho_{0}\mathcal{M}_{m,n,l}^{\dagger}.$
It is noticable that $M_{m,n,l}$ is not hermite conjugate to
$\mathcal{M}_{m,n,l}^{\dagger}$, nevertheless the normalization
$\sum_{m,n,l=0}^{\infty}\mathcal{M}_{nm,,l}^{\dagger}M_{m,n,l}=1$ still holds,
i.e., they are trace-preserving in a general sense. This example may stimulate
further studiing if general superoperator theory needs modification.
## 1 Introduction
For an open quantum system interacting with the enviroment, one uses
superoperators to describe the evolution of density operator from an initial
$\rho_{0}$ (pure state or mixed state) into a final state $\rho.$ A
superoperator plays a role of linear map from $\rho_{0}\rightarrow\rho,$ which
has an operator-sum representation [1, 2]
$\rho=\sum_{n}M_{n}\rho_{0}M_{n}^{\dagger},$ (1)
where the operators $M_{n}$ and $M_{n}^{\dagger}$ are usually hermite
conjugate to each other, and obey the normalization condition [3],
$\sum_{n}M_{n}^{\dagger}M_{n}=1.$ (2)
$M_{n}$ is named Kraus operator [4, 5], $M_{n}$ and $M_{n}^{\dagger}$ are
hermite conjugate to each other. Such an operator-sum representation is the
core of POVM (positive operator-valued measure).
In our very recent papers [6, 7], based on the thermo entangled state
representation, we have derived some density operators that are in infinite
dimensional operator-sum forms, for example, those for describing amplitude-
damping channel and laser process, the corresponding Kraus operators are also
obtained and their normalization proved, which implies trace-preserving.
Then an important and interesting question challenges us: is there an infinite
operator-sum representation of density operator in the infinite sum form
$\rho\left(t\right)=\sum_{n=0}^{\infty}M_{n}\rho_{0}\mathcal{M}_{n}^{\dagger},$
where the normalization $\sum_{n=0}^{\infty}\mathcal{M}_{n}^{\dagger}M_{n}=1$
still holds, but $M_{n}$ and $\mathcal{M}_{n}^{\dagger}$ are not hermite
conjugate to each other? The answer is affirmative. In this paper, by using
the thermo entangled state representation we solve the master equation for a
dissipative cavity with Kerr medium to obtain its density operator, and find
its infinite operator-sum representation really possesses such a ”strange
structure”, this may bring attention of theoreticians who might wonder if the
general theory of superoperators should be modified.
## 2 Brief review of the thermo entangled state representation
In this section, we briefly review the thermo entangled state representation
(TESR). Enlightened by the Thermo Dynamic Theory of Takahashi-Umezawa [8, 9,
10], we have constructed the TESR in doubled Fock space [11],
$\left|\eta\right\rangle=\exp\left(-\frac{1}{2}|\eta|^{2}+\eta
a^{\dagger}-\eta^{\ast}\tilde{a}^{\dagger}+a^{\dagger}\tilde{a}^{\dagger}\right)\left|0,\tilde{0}\right\rangle,\;$
(3)
in which $\tilde{a}^{\dagger}$ is a fictitious mode accompanying the real
photon creation operator $a^{\dagger},$
$\left|0,\tilde{0}\right\rangle=\left|0\right\rangle\left|\tilde{0}\right\rangle,$
$\left|\tilde{0}\right\rangle$ is annihilated by $\tilde{a},$
$\left[\tilde{a},\tilde{a}^{\dagger}\right]=1$. Operating $a$ and $\tilde{a}$
on $\left|\eta\right\rangle$ in Eq.(3) we obtain the eigen-equations of
$\left|\eta\right\rangle$,
$\displaystyle(a-\tilde{a}^{\dagger})\left|\eta\right\rangle$
$\displaystyle=\eta\left|\eta\right\rangle,\;\;\;(a^{\dagger}-\tilde{a})\left|\eta\right\rangle=\eta^{\ast}\left|\eta\right\rangle,$
$\displaystyle\left\langle\eta\right|(a^{\dagger}-\tilde{a})$
$\displaystyle=\eta^{\ast}\left\langle\eta\right|,\ \ \
\left\langle\eta\right|(a-\tilde{a}^{\dagger})=\eta\left\langle\eta\right|.$
(4)
Note that $\left[(a-\tilde{a}^{\dagger}),(a^{\dagger}-\tilde{a})\right]=0,$
thus $\left|\eta\right\rangle$ is the common eigenvector of
$(a-\tilde{a}^{\dagger})$ and $(\tilde{a}-a^{\dagger}).$
Using the normally ordered form of vacuum projector
$\left|0,\tilde{0}\right\rangle\left\langle
0,\tilde{0}\right|=\colon\exp\left(-a^{\dagger}a-\tilde{a}^{\dagger}\tilde{a}\right)\colon,$
and the technique of integration within an ordered product (IWOP) of operators
[12, 13], we can easily prove that $\left|\eta\right\rangle$ is complete and
orthonormal,
$\displaystyle\int\frac{d^{2}\eta}{\pi}\left|\eta\right\rangle\left\langle\eta\right|$
$\displaystyle=$
$\displaystyle\int\frac{d^{2}\eta}{\pi}\colon\exp\left(-|\eta|^{2}+\eta
a^{\dagger}-\eta^{\ast}\tilde{a}^{\dagger}+a^{\dagger}\tilde{a}^{\dagger}+\eta^{\ast}a-\eta\tilde{a}+a\tilde{a}--a^{\dagger}a-\tilde{a}^{\dagger}\tilde{a}\right)\colon=1,$
(5) $\displaystyle\left\langle\eta^{\prime}\right.\left|\eta\right\rangle$
$\displaystyle=$
$\displaystyle\pi\delta\left(\eta^{\prime}-\eta\right)\delta\left(\eta^{\prime\ast}-\eta^{\ast}\right).$
(6)
The $\left|\eta=0\right\rangle$ state
$\text{ \
}\left|\eta=0\right\rangle=e^{a^{\dagger}\tilde{a}^{\dagger}}\left|0,\tilde{0}\right\rangle=\sum_{n=0}^{\infty}\left|n,\tilde{n}\right\rangle,$
(7)
possesses the properties
$\displaystyle a\left|\eta=0\right\rangle$ $\displaystyle=$
$\displaystyle\tilde{a}^{\dagger}\left|\eta=0\right\rangle,\;a^{\dagger}\left|\eta=0\right\rangle=\tilde{a}\left|\eta=0\right\rangle,\;$
(8) $\displaystyle\left(a^{\dagger}a\right)^{n}\left|\eta=0\right\rangle$
$\displaystyle=$
$\displaystyle\left(\tilde{a}^{\dagger}\tilde{a}\right)^{n}\left|\eta=0\right\rangle.$
(9)
Note that density operators are functions of ($a^{\dagger}$, $a)$, i.e.,
defined in the original Fock space, so they are commutative with operators of
($\tilde{a}^{\dagger}$, $\tilde{a})$ in the tilde space.
## 3 Infinite operator-sum representation of density operator for a
dissipative cavity with Kerr medium
In the Markov approximation and interaction picture the master equation for a
dissipative cavity with Kerr medium has the form [14, 15, 16]
$\frac{d\rho}{dt}=-i\chi\left[\left(a^{\dagger}a\right)^{2},\rho\right]+\gamma\left(2a\rho
a^{\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a\right),$ (10)
where $\gamma$ is decaying parameter of the dissipative cavity, $\chi$ is
coupling factor depending on the Kerr medium. Next we will solve the master
equation by virtue of the entangled state representation and present the
infinite sum representation of density operator.
Operating the both sides of Eq.(10) on the state $\left|\eta=0\right\rangle,$
letting
$\left|\rho\right\rangle=\rho\left|\eta=0\right\rangle,$ (11)
and using Eq.(8) we have the following state vector equation,
$\frac{d}{dt}\left|\rho\right\rangle=\left\\{-i\chi\left[\left(a^{\dagger}a\right)^{2}-\left(\tilde{a}^{\dagger}\tilde{a}\right)^{2}\right]+\gamma\left(2a\tilde{a}-a^{\dagger}a-\tilde{a}^{\dagger}\tilde{a}\right)\right\\}\left|\rho\right\rangle,$
(12)
the formal solution of $\left|\rho\right\rangle$ is
$\left|\rho\right\rangle=\exp\left\\{-i\chi
t\left[\left(a^{\dagger}a\right)^{2}-\left(\tilde{a}^{\dagger}\tilde{a}\right)^{2}\right]+\gamma
t\left(2a\tilde{a}-a^{\dagger}a-\tilde{a}^{\dagger}\tilde{a}\right)\right\\}\left|\rho_{0}\right\rangle,$
(13)
where $\left|\rho_{0}\right\rangle=\rho_{0}\left|\eta=0\right\rangle,$
$\rho_{0}$ is the initial density operator. By introducing the following
operators,
$K_{0}=a^{\dagger}a-\tilde{a}^{\dagger}\tilde{a},\text{ \
}K_{z}=\frac{a^{\dagger}a+\tilde{a}^{\dagger}\tilde{a}+1}{2},\text{ \
}K_{-}=a\tilde{a},$ (14)
and noticing $\left[K_{0},K_{z}\right]=\left[K_{0},K_{-}\right]=0,$ we can
rewrite Eq.(13) as
$\displaystyle\left|\rho\right\rangle$ $\displaystyle=\exp\left\\{-i\chi
t\left[K_{0}(2K_{z}-1)\right]+\gamma
t\left(2K_{-}-2K_{z}+1\right)\right\\}\left|\rho_{0}\right\rangle$
$\displaystyle=\exp\left[i\chi tK_{0}+\gamma
t\right]\exp\left\\{-2t\left(\gamma+i\chi
K_{0}\right)\left[K_{z}+\frac{-\gamma}{\gamma+i\chi
K_{0}}K_{-}\right]\right\\}\left|\rho_{0}\right\rangle.$ (15)
With the aid of the operator identity [17]
$e^{\lambda\left(A+\sigma B\right)}=e^{\lambda A}\exp\left[\sigma
B\left(1-e^{-\lambda\tau}\right)/\tau\right]=\exp\left[\sigma
B\left(e^{\lambda\tau}-1\right)/\tau\right]e^{\lambda A},$ (16)
which is valid for $\left[A,B\right]=\tau B,$ and noticing
$\left[K_{z},K_{-}\right]=-K_{-},$ we can reform Eq.(15) as
$\left|\rho\right\rangle=\exp\left[i\chi tK_{0}+\gamma
t\right]\exp\left[\Gamma_{z}K_{z}\right]\exp\left[\Gamma_{-}K_{-}\right]\left|\rho_{0}\right\rangle,$
(17)
where
$\Gamma_{z}=-2t\left(\gamma+i\chi K_{0}\right),\text{
}\Gamma_{-}=\frac{\gamma(1-e^{-2t\left(\gamma+i\chi
K_{0}\right)})}{\gamma+i\chi K_{0}}.$ (18)
In order to deprive of the state $\left|\eta=0\right\rangle$ from Eq.(17),
using the completeness relation of Fock state in the enlarged space
$\sum_{m,n=0}^{\infty}\left|m,\tilde{n}\right\rangle\left\langle
m,\tilde{n}\right|=1$ and noticing $a^{\dagger
l}\left|n\right\rangle=\sqrt{\frac{\left(l+n\right)!}{n!}}\left|n+l\right\rangle$,
we have
$\displaystyle\left|\rho\right\rangle$ $\displaystyle=\exp\left[i\chi
tK_{0}+\gamma
t\right]\exp\left[\Gamma_{z}K_{z}\right]\sum_{l=0}^{\infty}\frac{\Gamma_{-}^{l}}{l!}a^{l}\rho_{0}a^{\dagger
l}\left|\eta=0\right\rangle$ $\displaystyle=\exp\left[i\chi tK_{0}+\gamma
t\right]\sum_{l=0}^{\infty}\frac{\Gamma_{-}^{l}}{l!}\exp\left[\Gamma_{z}K_{z}\right]\sum_{m,n=0}^{\infty}\left|m,\tilde{n}\right\rangle\left\langle
m,\tilde{n}\right|a^{l}\rho_{0}a^{\dagger l}\left|\eta=0\right\rangle$
$\displaystyle=\sum_{l=0}^{\infty}\frac{\Lambda_{m,n}^{l}}{l!}\exp\left[-it\chi\left(m^{2}-n^{2}\right)-t\gamma\left(m+n\right)\right]\sum_{m,n=0}^{\infty}\left|m,\tilde{n}\right\rangle\left\langle
m,\tilde{n}\right|a^{l}\rho_{0}a^{\dagger l}\left|\eta=0\right\rangle,$ (19)
where we have set
$\Lambda_{m,n}\equiv\frac{\gamma(1-e^{-2t\left(\gamma+i\chi\left(m-n\right)\right)})}{\gamma+i\chi\left(m-n\right)}.$
(20)
Further, using
$\left\langle
n\right|\left.\eta=0\right\rangle=\left|\tilde{n}\right\rangle,\text{
}\left|m,\tilde{n}\right\rangle=\left|m\right\rangle\left\langle
n\right|\left.\eta=0\right\rangle,$ (21)
which leads to
$\left\langle m,\tilde{n}\right|a^{l}\rho_{0}a^{\dagger
l}\left|\eta=0\right\rangle=\left\langle
m\right|\left\langle\tilde{n}\right|a^{l}\rho_{0}a^{\dagger
l}\left|\eta=0\right\rangle=\left\langle m\right|a^{l}\rho_{0}a^{\dagger
l}\left(\left\langle\tilde{n}\right|\left.\eta=0\right\rangle\right)=\left\langle
m\right|a^{l}\rho_{0}a^{\dagger l}\left|n\right\rangle,$ (22)
then Eq.(19) becomes
$\displaystyle\left|\rho\right\rangle$
$\displaystyle=\sum_{m,n,l=0}^{\infty}\frac{\Lambda_{m,n}^{l}}{l!}\exp\left[-i\chi
t\left(m^{2}-n^{2}\right)-\gamma
t\left(m+n\right)\right]\left|m,\tilde{n}\right\rangle\left\langle
m\right|a^{l}\rho_{0}a^{\dagger l}\left|n\right\rangle$
$\displaystyle=\sum_{m,n,l=0}^{\infty}\sqrt{\frac{\left(n+l\right)!\left(m+l\right)!}{n!m!}}\frac{\Lambda_{m,n}^{l}}{l!}e^{-i\chi
t\left(m^{2}-n^{2}\right)-\gamma
t\left(m+n\right)}\left|m,\tilde{n}\right\rangle\rho_{0,m+l,n+l},$ (23)
where $\rho_{0,m+l,n+l}\equiv\left\langle
m+l\right|\rho_{0}\left|n+l\right\rangle.$ Using Eq.(21) again, we see
$\left|\rho\right\rangle=\sum_{m,n,l=0}^{\infty}\sqrt{\frac{\left(n+l\right)!\left(m+l\right)!}{n!m!}}\frac{\left(\Lambda_{m,n}\right)^{l}}{l!}e^{-i\chi
t\left(m^{2}-n^{2}\right)-\gamma
t\left(m+n\right)}\rho_{0,m+l,n+l}\left|m\right\rangle\left\langle
n\right|\left.\eta=0\right\rangle.$ (24)
After depriving $\left|\eta=0\right\rangle$ from the both sides of Eq.(24),
the solution of master equation (10) appears as infinite operator-sum form
$\displaystyle\rho\left(t\right)$ $\displaystyle=$
$\displaystyle\sum_{m,n,l=0}^{\infty}\sqrt{\frac{\left(n+l\right)!\left(m+l\right)!}{n!m!}}\frac{\Lambda_{m,n}^{l}}{l!}e^{-i\chi
t\left(m^{2}-n^{2}\right)-\gamma
t\left(m+n\right)}\left|m\right\rangle\left\langle
m+l\right|\rho_{0}\left|n+l\right\rangle\left\langle n\right|$ (25)
$\displaystyle=$
$\displaystyle\sum_{m,n,l=0}^{\infty}\frac{\Lambda_{m,n}^{l}}{l!}e^{-i\chi
t\left(m^{2}-n^{2}\right)-\gamma
t\left(m+n\right)}\left|m\right\rangle\left\langle
m\right|a^{l}\rho_{0}a^{\dagger l}\left|n\right\rangle\left\langle n\right|,$
Note that the factor $\left(m-n\right)$ appears in the denominator of
$\Lambda_{m,n}$ (see Eq. (20)), (this is originated from the nonlinear term of
$\left(a^{\dagger}a\right)^{2}$), so moving of all $n-$dependent terms to the
right of $a^{l}\rho_{0}a^{\dagger l}$ is impossible, nevertheless, we can
formally express Eq.(25) as
$\rho\left(t\right)=\sum_{m,n,l=0}^{\infty}M_{m,n,l}\rho_{0}\mathcal{M}_{m,n,l}^{\dagger},$
(26)
where the two operators $M_{m,n,l}$ and $\mathcal{M}_{m,n,l}^{\dagger}$ are
respectively defined as
$\displaystyle M_{m,n,l}$ $\displaystyle\equiv$
$\displaystyle\sqrt{\frac{\Lambda_{m,n}^{l}}{l!}}e^{-i\chi tm^{2}-\gamma
tm}\left|m\right\rangle\left\langle m\right|a^{l},\text{ }$
$\displaystyle\mathcal{M}_{m,n,l}^{\dagger}$ $\displaystyle\equiv$
$\displaystyle\left\\{\sqrt{\frac{\Lambda_{n,m}^{l}}{l!}}e^{-i\chi
tn^{2}-\gamma tn}\left|n\right\rangle\left\langle
n\right|a^{l}\right\\}^{{\dagger}},$ (27)
to one’s regret, $M_{m,n,l}$ is not hermite conjugate to
$\mathcal{M}_{m,n,l}^{\dagger}$. This example may surprise us to wonder if the
general superoperator form in Eq. (1) needs modification.
## 4 Further analysis for $\rho\left(t\right)$
Usng the operator identity $e^{\lambda
a^{\dagger}a}=\colon\exp\left[\left(e^{\lambda}-1\right)a^{\dagger}a\right]\colon$
and the IWOP technique, we can prove that
$\displaystyle\ \sum_{m,n,l=0}^{\infty}\mathcal{M}_{m,n,l}^{\dagger}M_{m,n,l}$
$\displaystyle=\sum_{n,l=0}^{\infty}\frac{\left(n+l\right)!}{n!}\frac{(1-e^{-2t\gamma})^{l}}{l!}e^{-2n\gamma
t}\left|n+l\right\rangle\left\langle n+l\right|$
$\displaystyle=\sum_{n,l=0}^{\infty}\frac{(1-e^{-2t\gamma})^{l}}{l!}a^{{\dagger}l}e^{-2\gamma
ta^{\dagger}a}\left|n\right\rangle\left\langle n\right|a^{l}$
$\displaystyle=\sum_{l=0}^{\infty}\frac{(1-e^{-2t\gamma})^{l}}{l!}\colon\exp\left[\left(e^{-2\gamma
t}-1\right)a^{\dagger}a\right]\left(a^{\dagger}a\right)^{l}\colon=1,$ (28)
from which one can see that the normalization still holds, i.e., they are
trace-preserving in a general sense, so $M_{m,n,l}$ and
$\mathcal{M}_{m,n,l}^{\dagger}$ may be named the generalized Kraus operators.
## 5 Reduction of $\rho\left(t\right)$ in some special cases
In general, Eq. (25) indicates that the Kerr medium causes phase diffusion
while the field in cavity is in amplitude-damping. In Eq. (25), when the
decoherent time $t\rightarrow\infty,$ the main contribution comes from the
$m=n=0$ term, in this case $\Lambda\rightarrow 1$, then
$\rho\left(t\rightarrow\infty\right)\rightarrow\sum_{l=0}^{\infty}\left\langle
l\right|\rho_{0}\left|l\right\rangle\left|0\right\rangle\left\langle
0\right|=\left|0\right\rangle\left\langle 0\right|,$ (29)
since $\mathtt{Tr}\rho_{0}=1,$ which shows that the quantum system reduces to
the vacuum state after a long decoherence interaction, as expected.
In particular, when $\chi=0$, Eq.(25) becomes
$\displaystyle\rho\left(t\right)$
$\displaystyle=\sum_{m,n,l=0}^{\infty}\sqrt{\frac{\left(n+l\right)!\left(m+l\right)!}{n!m!}}\frac{(1-e^{-2\gamma
t})^{l}}{l!}e^{-\gamma
t\left(m+n\right)}\rho_{0,m+l,n+l}\left|m\right\rangle\left\langle n\right|$
$\displaystyle=\sum_{m,n,l=0}^{\infty}\frac{(1-e^{-2\gamma
t})^{l}}{l!}e^{-\gamma ta^{\dagger}a}\left|m\right\rangle\left\langle
m\right|a^{l}\rho_{0}a^{{\dagger}l}\left|n\right\rangle\left\langle
n\right|e^{-\gamma ta^{\dagger}a}$
$\displaystyle=\sum_{l=0}^{\infty}\frac{(1-e^{-2\gamma t})^{l}}{l!}e^{-\gamma
ta^{\dagger}a}a^{l}\rho_{0}a^{{\dagger}l}e^{-\gamma ta^{\dagger}a},$ (30)
which corresponds to the amplitude decaying mode, coinciding with the result
in Ref.[6]. While for $\gamma=0,$ Eq.(25) reduces to
$\rho\left(t\right)=\sum_{m,n=0}^{\infty}e^{-i\chi
t\left(m^{2}-n^{2}\right)}\left|m\right\rangle\left\langle
m\right|\rho_{0}\left|n\right\rangle\left\langle
n\right|=e^{-i\chi\left(a^{{\dagger}}a\right)^{2}t}\rho_{0}e^{i\chi\left(a^{{\dagger}}a\right)^{2}t},$
(31)
which implies a process of phase diffusion, for
$\rho_{0}=\left|n\right\rangle\left\langle n\right|,$ then
$\rho\left(t\right)=e^{-i\chi\left(a^{{\dagger}}a\right)^{2}t}\left|n\right\rangle\left\langle
n\right|e^{i\chi\left(a^{{\dagger}}a\right)^{2}t}=\left|n\right\rangle\left\langle
n\right|,$ (32)
which shows neither decay nor phase diffusion happens in the Kerr medium.
In summary, we have demonstrated through the above example that the nonlinear
Hamiltonian in master equation may demand us to modify Eq. (1), the general
expression of superoperator.
Note added. Recently, we were made aware of Refs.[18] which deal with the Kerr
medium using thermo field dynamic theory.
Acknowledgement Work supported by the National Natural Science Foundation of
China under Grant 10775097 and 10874174. L.Y. Hu acknowledges Professor V.
Srinivasan for his kind attention about Refs.[18].
## References
* [1] W. H. Louisell, Quantum Statistical Properties of Radiation (Wiley, New York, 1973).
* [2] H. J. Carmichael, Statistical Methods in Quantum Optics 1: Master Equations and Fokker-Planck Equations, Springer-Verlag, Berlin, 1999; H. J. Carmichael, Statistical Methods in Quantum Optics 2: Non-Classical Fields, (Springer-Verlag, Berlin, 2008).
* [3] J. Preskill, Lecture Notes for Physics 229, Quantum Information and Computation. CIT, 1998.
* [4] K. Kraus, States, Effects, and Operations: Fundamental Notions of Quantum Theroy. Lecture Notes in Physics, Vol. 190. (Springer-Verlag, Berlin, 1983).
* [5] K. E. Hellwig and K. Kraus, Pure operations and measurements. Commun. Math. Phys. 11, 214 (1969); Pure operations and measurements II. Commun. Math. Phys. 16, 142 (1970).
* [6] H. Y. Fan and L. Y. Hu, Opt. Commun. 281 (2008) 5571; H. Y. Fan and L. Y. Hu, Opt. Commun. 2008 to appear
* [7] for a recent review, see H. Y. Fan and L. Y. Hu, Mod, Phys. Lett. B 22 (2008) 2435
* [8] Memorial Issue for H. Umezawa, Int. J. Mod. Phys. B 10, (1996) 1695 memorial issue and references therein.
* [9] H. Umezawa, Advanced Field Theory – Micro, Macro, and Thermal Physics (AIP 1993)
* [10] Y. Takahashi and H. Umezawa, Collecive Phenomena 2, (1975) 55.
* [11] Hong-yi Fan and Yue Fan, Phys. Lett. A 246, (1998) 242; ibid, 282, (2001) 269.
* [12] Hong-yi Fan, Hai-liang Lu and Yue Fan, Ann. Phys _._ 321, (2006) 480.
* [13] A. Wünsche, J. Opt. B: Quantum Semiclass. Opt. 1, (1999) R11.
* [14] G. J. Milburn and C. A. Kolmes, Phys. Rev. Lett. 56, (1986) 2237; D. J. Daniel and G. J. Milburn, Phys. Rev. A 39, (1989) 4628
* [15] S. Chaturvedi, V. Srinivasan and G. S. Agarwal, J. Phys. A. Math. Gen 32 (1999) 1909.
* [16] Wolfgang P. Schleich, Quantum Optics in Phase Space (Wiley-VCH, Birlin, 2001).
* [17] Hong-yi Fan, Representation and Transformation Theory in Quantum Mechanics, (Shanghai Scientific & Technical, Shanghai, 1997) (in Chinese).
* [18] S. Chaturvedi and V. Srinivasan J. Mod. Opt. 38, (1991) 777; S. Chaturvedi and V. Srinivasan Phys. Rev. A 43, (1991) 4054.
|
arxiv-papers
| 2009-05-15T00:03:01 |
2024-09-04T02:49:02.622626
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Li-yun Hu and Hong-yi Fan",
"submitter": "Liyun Hu",
"url": "https://arxiv.org/abs/0905.2448"
}
|
0905.2459
|
# On Design and Implementation of Distributed Modular Audio Recognition
Framework
Requirements and Specification Design DocumentConcordia University
Department of Computer Science and Software Engineering
MARF Research & Development Group:
Serguei A. Mokhov
mokhov@cse.concordia.ca http://marf.sf.net
(Montreal, Quebec, Canada
August 12, 2006)
###### Contents
1. 1 Executive Summary
1. 1.1 Brief Introduction and Goals
2. 1.2 Implemented Features So Far
3. 1.3 Some Design Considerations
4. 1.4 Transactions, Recoverablity, and WAL Design
5. 1.5 Configuration and Deployment
6. 1.6 Testing
7. 1.7 Known Issues and Limitations
8. 1.8 Partially Implemented Planned Features
9. 1.9 NOT Implemented Planned Features
10. 1.10 Conclusion
11. 1.11 Future Work
2. 2 Introduction
1. 2.1 Requirements
2. 2.2 Scope
3. 2.3 Definitions and Acronyms
3. 3 System Overview
1. 3.1 Architectural Strategies
2. 3.2 System Architecture
1. 3.2.1 Module View
2. 3.2.2 Execution View
3. 3.3 Coding Standards and Project Management
4. 3.4 Proof-of-Concept Prototype Assumptions
5. 3.5 Software Interface Design
1. 3.5.1 User Interface
2. 3.5.2 Software Interface
3. 3.5.3 Hardware Interface
4. 4 Detailed System Design
1. 4.1 Directory and Package Organization
2. 4.2 Class Diagrams
3. 4.3 Data Storage Format
1. 4.3.1 Log File Format
4. 4.4 Synchronization
5. 4.5 Write-Ahead Logging and Recovery
6. 4.6 Replication
5. 5 Testing
6. 6 Conclusion
1. 6.1 Summary of Technologies Used
2. 6.2 Future Work and Work-In-Progress
3. 6.3 Acknowledgments
###### List of Figures
1. 2.1 The Core MARF Pipeline Data Flow
2. 2.2 The Distributed MARF Pipeline
3. 3.1 SpeakerIdenApp Client GUI Prototype
4. 3.2 MARF Service Status Monitor GUI Prototype
5. 4.1 Package Structure of the Project
6. 4.2 Sequence Diagram of the Pipeline Of Invocations
7. 4.3 General Architecture Class Diagram of marf.net
8. 4.4 Storage Class Diagram
###### List of Tables
1. 4.1 Details on Main Directory Structure
2. 4.2 DMARF’s Package Organization
## Chapter 1 Executive Summary
This chapter highlights some details for the inpatient readers, while the rest
of the document provides a lot more details.
### 1.1 Brief Introduction and Goals
* •
An open-source project – MARF (http://marf.sf.net, [Gro06]), which stands for
Modular Audio Recognition Framework – originally designed for the pattern
recognition course.
* •
MARF has several applications. Most revolve around its recognition pipeline –
sample loading, preprocessing, feature extraction, training/classifcation. One
of the applications, for example, is Text-Independed Speaker Identification
Application. The pipeline and the application, as they stand, are purely
sequential with even little or no concurrency when processing a bulk of voice
samples.
* •
The classical MARF’s pipeline is in Figure 2.1. The goal of this work is to
distribute the shown stages of the pipeline as services as well as stages that
are not directly present in the figure – sample loading, front-end application
service (e.g. speaker identification service, etc.) and implement some
disaster recovery and replication techniques in the distributed system.
* •
In Figure 2.2 the design of the distributed version of the pipeline is
presented. It indicates different levels of basic front-ends, from higher to
lower which client applications may invoke as well as services may invoke
other services through their front-ends while executing in the pipeline mode.
The back-ends are in charge of providing the actual servant implementations as
well as the features like primary-backup replication, monitoring, and disaster
recovery modules through delegates.
### 1.2 Implemented Features So Far
* •
As of this writing the following are implemented. Most, but not all modules
work:
* •
Out of the following six services:
1. 1.
SpeakerIdent Front-end Service (invokes MARF)
2. 2.
MARF Pipeline Service (invokes the remaining four)
3. 3.
Sample Loader Service
4. 4.
Preprocessing Service
5. 5.
Feature Extraction Service (may invoke Preprocessing for preprocessed sample)
6. 6.
Classification (may invoke Feature Extraction for features)
all the six work in the stand-alone and pipelined modes in CORBA, RMI, and WS.
At the demo time, the RMI and as a consequence in Web Services implementation
of the Sample Loader and Preprocessing stages were not functional (other nodes
were, but could not work as a pipeline) because of the design flaw in the MARF
itself (the Sample class data structure while itself was Serializable, one of
its members, that inherits from a standard Java class, has non-serializable
members in the parent) causing marshalling/unmarshalling to fail. This has
been addressed until after demo.
* •
There are three clients: one for each communication technology type (CORBA,
WS, RMI).
* •
MARF vs. CORBA vs. RMI object adapters to convert serializable objects
understood by the technologies to the MARF native and back.
### 1.3 Some Design Considerations
* •
For WS there are no remote object references, so a class was created called
RemoteObjectReference encapsulating nothing but a type (int) and an URL
(String) as a reference that can be passed around modules, which can later use
it to connect (using WSUtils).
* •
All communication modules rely on their delegates for business and mosf of the
transaction logic, thus remapping remote operations to communication-
technology idependent logic and enabling cross-technology commuincation
through message passing. There are two types of delegates – basic and
recoverable. The basic delegates just merely redirect the business logic and
provide basis for transaction logs while not actually implementing the
transaction routines. They don’t endure the transactions overhead and just
allow to test the distributed business logic. The recoverable delegates are
extension of the basic with the transactionaly on top of the basic operations.
* •
All modules also have utility classes like ORBUtils, RMIUtils, and WSUtils.
These are used by the distributed modules for common registration of services
and their look up. Due to the common design, these can be looked up at run-
time through a reflection by loading the requested module classes. The utility
modules are also responsible for loading the initial service location
information from the dmarf-hosts.properties when available.
### 1.4 Transactions, Recoverablity, and WAL Design
* •
Write-Ahead Log (WAL) consists of entries called “Transactions”. The idea is
that you write to the log first, ahead of committing anything, and once write
call (dump) returns, we commit the transaction.
* •
A Transaction is a data structure maintaining transaction ID (long), a
filename of the object (not of the log, but where the object is normally
permanently stored to distinguish different configurations), the Serializable
value itself (a Message, TrainingSet, or an entire Serializable business-logic
module), and timestamps.
* •
The WAL’s max size is set to empirical 1000 entries before clean up is needed.
Advantage of keeping such entries is to allow a future feature called point-
in-time recovery (PITR), backup, or replication.
* •
MARF-specific note: since MARF core operations are treated as kind of a
business logic black box, the “transactions” are similar to the “before” and
“after” snapshots of serialized data (maybe a design flaw in MARF itself, to
be determined).
* •
Checkpointing in the log is done periodically, by default every second. A
checkpoint is set to be a transaction ID latest committed. Thus, in the event
of a crash, to recover, only committed transactions with the ID greater than
the checkpoint are recovered.
### 1.5 Configuration and Deployment
All CORBA, RMI, and WS use a dmarf-hosts.properties files at startup if
available to locate where the other services are and where to register
themselves.
Web Services have Tomcat context XML files for hosting as well as web.xml and
related WSDL XML files.
All such things are scripted in the GNU Make [SMSP00] Makefile and Ant [Con05]
dmarf-build.xml makefiles.
### 1.6 Testing
A Makefile target marf-client-test for a single wave file and a batch.sh shell
script test mostly CORBA pipeline with 295 testing samples and 31 testing wave
samples x 4 training configs x 16 testing configs.
The largest demo experiment invovled only four machines in two different
buildings running the 6 services and a client (a some machines ran more than
one service of each kind). Killing any of the single services in batch mode
and then restarting it, recovered the ability of a pipeline to operate
normally.
### 1.7 Known Issues and Limitations
* •
After long runs of all six CORBA services on the same machine runs out of file
(and socket) descriptors reaching default kernel limits. (Probably due to
large number of log files opened and not closed while the containing JVM does
not exit and which accumulate over time after lots of rigorous testing).
* •
Main MARF’s design flaws making the pipeline rigid and less concurrent (five-
layer nested transaction, see startRecognitionPipeline() of MARFServerCORBA,
MARFServerRMI, or MARFServerWS for examples.
* •
Transaction ID “wrap-around” for long-running system and transactions with
lots of message passing and other operations. MARF does a lot of writes
(dumps) and long-running servers have a potential to have their transaction
IDs be recycled after an overflow. At the time of this writing, there is no an
estimate of how log it might take when this happens.
* •
All services are single-threaded in the proof-of-concept implementation, so
the concurrency is far from being fully exploited per server instance. This is
to be overcome in the near future.
### 1.8 Partially Implemented Planned Features
* •
WAL logging and recovery.
* •
Message passing (for gossip, TPC or UDP + FIFO) is to be added to the basic
delegates.
* •
Application and Status Monitor GUI – the rudiments are there, but not fully
integrated yet.
### 1.9 NOT Implemented Planned Features
* •
Primary-backup replication with a “warm stanby”.
* •
Lazy, gossip-based replication for Classification training sets.
* •
Two-phase commit for nested MARF Service transactions (covering the entire
pipeline run.
* •
Distributed System-ware NLP-related applications.
* •
Thin test clients and their GUI.
### 1.10 Conclusion
This proof-of-concept implementation of Distributed MARF has proven a
possibility for the pipeline stages and not only to be executed in a pipeline
and stand-alone modes on several computers. This can be useful in providing
any of the mentioned services to clients that have low computational power or
no required environment to run the whole pipeline locally or cannot afford
long-running processing (e.g. collecting samples with a laptop or any mobile
device and submitting them to the server). Additionally, there were discovered
some show-stopping design flaws in the classical MARF itself that have to be
corrected, primarily related to the storage and parameter passing among
modules.
### 1.11 Future Work
Address the design flaws, limitations, and not-implemented features and
release the code (for future improvements). You may volunteer to help to
contribute these ;-) as well as addressing the bugs and limitations when there
is a time and desire. Please email to mokhov@cse.concordia.ca if you are
intrested in contributing to the Distributed MARF project.
## Chapter 2 Introduction
$Revision:1.3$
This chapter briefly presents the purpose and the scope of the work on the
Distributed MARF project with a subset of relevant requirements, definitions,
and acronyms. All these aspects are detailed to some extent later through the
document. The application ideas in small part are coming from [CDK05, WW05,
Mic04, Mic05b, Mic06, Gro06, Mok06b].
### 2.1 Requirements
I have an open-source project – MARF (http://marf.sf.net, [Gro06]), which
stands for Modular Audio Recognition Framework. Originally designed for the
pattern recognition course back in 2002, it had addons from other courses I’ve
taken and maintained and released it relatively regularly.
MARF has several applications. Most revolve around its recognition pipeline –
sample loading, preprocessing, feature extraction, training/classifcation. One
of the applications, for example is Text-Independed Speaker Identification.
The pipeline and the application as they stand are purely sequential with even
little or no concurrency when processing a bulk of voice samples. Thus, the
purpose of this work is to make the pipeline distributed and run on a cluster
or a just a set of distinct computers to compare with the traditional version
and add disaster recovery and service replication, communication technology
indepedence, and so on.
Figure 2.1: The Core MARF Pipeline Data Flow
The classical MARF’s pipeline is in Figure 2.1. The goal of this work is to
distribute the shown stages of the pipeline as services as well as stages that
are not directly present in the figure – sample loading, front-end application
service (e.g. speaker identification service, etc.) and implement some
disaster recovery and replication techniques in the distributed system.
Figure 2.2: The Distributed MARF Pipeline
In Figure 2.2 the distributed version of the pipeline is presented. It
indicates different levels of basic front-ends, from higher to lower, which a
client application may invoke as well as services may invoke other services
through their front-ends while executing in a pipeline-mode. The back-ends are
in charge of providing the actual servant implementations as well as the
features like primary-backup replication, monitoring, and disaster recovery
modules.
There are several distributed services, some are more general, and some are
more specific. The services can and have to intercommunicate. These include:
* •
General MARF Service that exposes MARF’s pipeline to clients and other
services and communicates with the below.
* •
Sample Loading Service knows how to load certain file or stream types (e.g.
WAVE) and convert them accordingly for further preprocessing.
* •
Preprocessing Service accepts incoming voice or text samples and does the
requested preprocessing (all sorts of filters, normalization, etc.).
* •
Feature Exraction Service accepts data, presumably preprocessed, and attempts
to extract features out of it given requested algorithm (out of currently
implemented, like FFT, LPC, MinMax, etc.) and may optionally query the
preprocessed data from the Preprocessing Service.
* •
Classifcation and Training Service accepts feature vectors and either updates
its database of training sets or performs classification against existing
training sets. May optionall query the Feature Extraction Service for the
features.
* •
Natural Language Processing Service accepts natural language texts and
performs also some statistical NLP operations, such as probabilistic parsing,
Zipf’s Law stats, etc.
Some more application-specific front-end services (that are based on the
existing currently non-distributed apps) include but not limited to:
* •
Speaker Identification Service (a front-end) that will communicate with the
MARF service to carry out application tasks.
* •
Language Identification Service would communicate with MARF/NLP for the
similar purpose.
* •
Some others (front-ends for Zipf’s Law, Probabilistic Parsing, and test
applications).
The clients are so-called “thin” clients with GUI or a Web Form allowing users
to upload the samples for training/classification and set the desired
configuration for each run, either for individual samples or batch.
Like it was done in the Distributed Stock Broker [Mok06b], the architecture is
general and usable enough to enable one or more services using CORBA, RMI, Web
Services (WS), Jini, JMS, sockets, whatever (well, actually, Jini, JMS were
not implemented in either applications, but it is not a problem to add with
little or no “disturbance” of the rest of the architecture).
### 2.2 Scope
In the Distributed MARF, if any pipeline stage process crashes access to
information about the pending transactions and computatiion in module is not
only lost while the process remains unavailable but can also be lost forever.
Use of a message logging protocol is one way that a module could recover
information concerning that module’s data after a faulty processor has been
repaired. A WAL message-logging protocol is developed for DMARF. The former is
for the disaster recovery of uncommitted transactions and to avoid data loss.
It also allows for backup replication and point-in-time recovery if WAL logs
are shipped off to a backup storage or a replica manager and can be used to
reconstruct the replica state via gossip or any other replication scheme.
The DMARF is also extended by adding a “warm standby”. The “warm standby” is a
MARF module that is running in the background (normally on a different
machine), receiving operations from the primary server to update its state and
hence ready to jump in if the primary server fails. Thus, when the primary
server receives a request from a client which will change its state, it sends
the request to the backup server, performs the request, receives the response
from the backup server and then sends the reply back to the client. The main
purpose of the “warm stand by” is to minimise the downtime for subsequent
transactions while the primary is in disaster recovery. The primary and backup
servers communicate using either the reliable TCP protocol (over WAN) or a
FIFO-ordered UDP on a LAN. Since this is a secondary feature and the load in
this project will be more than average, we simply might not have time to do
and debug this stuff to be reliable over UDP, so we choose TCP do it for us,
like we did in StockBroker Assignment 2. IFF we have time, we can try to make
a FIFO UDP communication.
* •
Design and implement the set of required interfaces in RMI, CORBA, and WS for
the main MARF’s pipeline stages to run distributedly, including any possible
application front-end and client applications.
* •
Assuming that processor failures are benign (i.e. crash failures) and not
Byzantine, analysis of the classical MARF was done to determine the
information necessary for the proper recovery of a MARF module (that is,
content of the log) and the design of the “warm standby” replication system.
* •
Modify MARF implementation so that it logs the required information using the
WAL message-logging protocol.
* •
Design and implement a recovery module which restarts a MARF module using the
log so that the restarted module can process subsequent requests for the
various operations.
* •
Design and implement the primary server which receives requests from clients,
sends the request to the backup server, performs the request, and sends the
response back to the client only after the request has been completed
correctly both in the primary and the backup servers. When the primary notices
that the backup does not respond within a reasonable time, it assumes and
informs the MARF monitor that the backup has failed so that a new backup
server can be created and initialized.
* •
Design and implement a monitor module which periodically checks the module
process and restarts it if necessary. application. This monitor initializes
the primary and backup servers at the beginning, creates and initializes a
backup server when the primary fails (and the original backup server takes
over as the primary), and creates and initializes a backup server when the
original backup server fails.
* •
Design and implement the backup server which receives requests from the
primary, performs the request and sends the reply back to the primary. If the
backup server does not receive any request from the primary for a reasonable
time, it sends a request to the primary to check if the latter is working. If
the primary server does not reply in a reasonable time, the backup server
assumes that the primary has failed and takes over by configuring itself as
the primary so that it can receive and handle all client requests from that
point onwards; and also informs the broker monitor of the switch over so that
the latter can create and initialize another backup server.
* •
Integrate all the modules properly, deploy the application on a local area
network, and test the correct operation of the application using properly
designed test runs. One may simulate a process failure by killing that process
from the command line while the application is running.
### 2.3 Definitions and Acronyms
API
Application Programmers Interface – a common convenience collection of
objects, methods, and other object members, typically in a library, available
for an application programmer to use.
CORBA
Common Object Request Broker Architecture – a language model independent
platform for distributed execution of applications possibly written in
different languages, and, is, therefore, heterogeneous type of RPC (unlike
Java RMI, which is Java-specific).
HTML
HyperText Markup Language – a tag-based language for defining the layout of
web pages.
IDL
Interface Definition Language – a CORBA interface language to “glue” most
common types and data structures in a specific programming language-
independent way. Interfaces written in IDL are compiled to a language specific
definitions using defined mapping between constructs in IDL and the target
language, e.g. IDL-to-Java compiler (idlj) is used for this purpose in this
assignment.
CVS
Concurrent Versions System – a version and revision control system to manage
source code repository.
DSB
Distributed Stock Broker application.
DMARF
Distributed MARF.
J2SE
Java 2 Standard Edition.
J2EE
Java 2 Entreprise Edition.
JAX-RPC
Java XML-based RPC way of implementing Web Services.
JAX-WS
The new and re-engineered way of Java Web Services implementation as opposed
to the older and being phased-out Java XML-RPC.
JDK
The Java Development Kit. Provides the JRE and a set of tools (e.g. the javac,
idlj, rmic compilers, javadoc, etc.) to develop and execute Java applications.
JRE
The Java Runtime Environment. Provides the JVM and required libraries to
execute Java applications.
JVM
The Java Virtual Machine. Program and framework allowing the execution of
program developed using the Java programming language.
MARF
Modular Audio Recognition Framework [Gro06] has a variety of useful general
purpose utility and storage modules employed in this work, from the same
author.
RMI
Remote Method Invocation – an object-oriented way of calling methods of
objects possibly stored remotely with respect to the calling program.
RPC
A concept of Remote Procedure Call, introduced early by Sun, to indicate that
an implementation certain procedure called by a client may in fact be located
remotely from a client on another machine
SOAP
Simple Object Access Protocol – a protocol for XML message exchange over HTTP
often used for Web Services.
STDOUT
Standard output – an output data stream typically associated with a screen.
STDERR
Standard error – an output data stream typically associated with a screen to
output error information as opposed to the rest of the output sent to STDOUT.
WS
Web Services – another way of doing RPC among even more heterogeneous
architectures and languages using only XML and HTTP as a basis.
WSDL
Web Services Definition Language, written in XML notation, is a language to
describe types and message types a service provides and data exchanged in
SOAP. WSDL’s purpose is similar to IDL and it can be used to generate endpoint
interfaces in different programming languages.
## Chapter 3 System Overview
$Revision:1.2$
In this chapter, we examine the system architecture of the implementation of
the DMARF application and software interface design issues.
### 3.1 Architectural Strategies
The main principles are:
Platform-Independence
where one targets systems that are capable of running a JVM.
Database-Independent API
will allow to swap database/storage engines on-the-fly. The appropriate
adapters will be designed to feed upon required/available data source (binary,
CSV file, XML, or SQL) databases.
Communication Technology Independence
where the system design evolves such that any communication technologies
adapters or plug-ins (e.g. RMI, CORBA, DCOM+, Jini, JMS, Web Services) can be
added with little or no change to the main logic and code base.
Reasonable Efficiency
where one architects and implements an efficient system, but will avoid
advanced programming tricks that improve the efficiency at the cost of
maintainability and readability.
Simplicity and Maintainability
where one targets a simplistic and easy to maintain organization of the
source.
Architectural Consistency
where one consistently implements the chosen architectural approach.
Separation of Concern
where one isolates separate concerns between modules and within modules to
encourage re-use and code simplicity.
### 3.2 System Architecture
#### 3.2.1 Module View
##### Layering
The DMARF application is divided into layers. The top level has a front-end
and a back-end. The front-end itself exists on the client side and on the
server side. The client side is either text-interactive, non-interactive
client classes that connect and query the servers. The front-end on the server
side are the MARF pipeline itself, the application-specific frontend, and
pipeline stage services. All pipeline stages somehow involved to the database
and other storage management subfunctions. At the same time the services are a
back-end for the client connecting in.
#### 3.2.2 Execution View
##### Runtime Entities
In the case of the DMARF application, there is hosting run-time environment of
the JVM and on the server side there must be the naming and implementation
repository service running, in the form of orbd and rmiregistry. For the WS
aspect of the application, there ought to be DNS running and a web servlet
container. The DBS uses Tomcat [Fou05] as a servlet container for MARF WS. The
client side for RMI and CORBA clients just requires a JRE (1.4 is the
minimum). The WS client in addition to JRE may require a servlet container
environment (here Tomcat) and a browser to view and submit a web form. Both
RMI and CORBA client and server applications are stand-alone and non-
interactive. A GUI is projected for the client (and possibly server to
administer it) in one of the follow up versions.
##### Communication Paths
It was resolved that the modules would all communicate through message passing
between methods. CORBA is one of the networking technologies used for remote
invocation. RMI is the base-line technology used for remote method calls.
Further, a JAX-RPC over SOAP is used for Web Services (while a more modern
JAX-WS alternative to JAX-RPC was released, this project still relies on JAX-
RPC 1.1 as it’s not using J2EE and the author found it simpler and faster to
use given the timeframe and more accurate tutorial and book material
available). All: RMI, CORBA, and WS influenced some technology-specific design
decisions, but it was possible to abstract them as RMI and CORBA “agents” and
delegate the business logic to delegate classes enabling all three types of
services to communicate in the future and implement transactions similarly.
Communication to the database depends on the storage manager (each terminal
business logic module in the classical MARF is a StorageManager).
Additionally, Java’s reflection [Gre05] is used to discover instantiation
communication paths at run-time for pluggable modules.
##### Execution Configuration
The execution configuration of the DMARF has to do with where its data/ and
policies/ directories are. The data/ directory is always local to where the
application was ran from. In the case of WS, it has to be where Tomcat’s
current directory is; often is in the logs/ directory of `${catalina.base}`.
The data directory contains the service-assigned databases in the XXX.gzbin
(generated on the first run of the servers). The “XXX” corresponds to the
either training set or a module name that saved their state. Next, orbd keeps
its data structures and logs in the orb.db/ directory also found in the
current directory Additionally, the RMI configuration for application’s (both
client and server) policy files is located in allallow.policy (testing with
all permissions enabled). As for the WS, for deployment two directories META-
INF/ and WEB-INF/ are used. The former contains the Tomcat’s contex file for
deployment that ought to be placed in
`${catalina.base}/conf/Catalina/localhost/` and the latter typically goes to
local/marf as the context describes. It contains web.xml and other XML files
prduced to describe servlet to SOAP mapping when generating .war files with
wscompile and wsdeploy.
The build-and-run files include the Ant [Con05] dmarf-build.xml and the GNU
make [SMSP00] Makefile files. The Makefile is the one capable of starting the
orbd, rmiregistry, the servers, and the clients in various modes. The
execution configuration targets primarily Linux FC4 platform (if one intends
to use gmake), but is not restricted to it.
A hosts configuration file dmarf-hosts.properties is used to tell the services
of how to initialize and where to find other services initially. If the file
is not present, the default of host for all is assumed to be localhost.
### 3.3 Coding Standards and Project Management
In order to produce higher-quality code, it was decided to normalize on
Hungarian Notation coding style used in MARF [Mok06a]. Additionally, javadoc
is used as a source code documentation style for its completeness and the
automated tool support. CVS (cvs) [BddzzP+05] was employed in order to manage
the source code, makefile, and documentation revisions.
### 3.4 Proof-of-Concept Prototype Assumptions
Since this is a prototype application within a timeframe of a course, some
simplifying assumptions took place that were not a part of, explicit or
implied, of the specification.
1. 1.
There is no garbage collection done on the server side in terms of fully
limiting the WAL size.
2. 2.
WAL functinality has not been at all implemented for the modules other than
Classification.
3. 3.
MARF services does not implement nested transaction while pipelining.
4. 4.
Services don’t intercommunicate (TCP or UDP) other than through the pipeline
mode of operation.
5. 5.
No primary-backup or otherwise replication is present.
### 3.5 Software Interface Design
Software interface design comprises both user interfaces and communication
interfaces (central topic of this work) between modules.
#### 3.5.1 User Interface
For the RMI and CORBA clients and servers there is a GUI designed for status
and control as time did not permit to properly integrate one. Therefore, they
use a command-line interface that is typically invoked from a provided
Makefile. GUI integration is projected in the near future. See the interface
prototypes in Figure 3.1 and in Figure 3.2.
Figure 3.1: SpeakerIdenApp Client GUI Prototype Figure 3.2: MARF Service
Status Monitor GUI Prototype
#### 3.5.2 Software Interface
Primary communication-related software interfaces are briefly described below.
A few other interfaces are omitted for brevity (of storage and classical
MARF).
##### RMI
The main RMI interfaces the RMI servants implement are ISpeakerIdentRMI,
IMARFServerRMI, ISampleLoaderRMI, IPreprocessingRMI, IFeatureExtractionRMI,
and IClassificationRMI. They are located in the marf.server.rmi.* and
marf.client.rmi.* packages. There also are the generated files off this
interface for stubs with rmic and the servant implementation.
##### CORBA
The main CORBA IDL interfaces the servants implement are ISpeakerIdentCORBA,
IMARFServerCORBA, ISampleLoaderCORBA, IPreprocessingCORBA,
IFeatureExtractionCORBA, and IClassificationCORBA. The IDL files are located
in the marf.server.corba.* package and are called MARF.idl and Frontends.idl.
There also are the generated files off this interface definition for stub,
skeleton, data types holders and helpers with idlj and the servant
implementation and a data type adapter (described later).
##### WS
The main WS interface the WS “servants” (servlets) implement is
ISpeakerIdentWS, IMARFServerWS, ISampleLoaderWS, IPreprocessingWS,
IFeatureExtractionWS, and IClassificationWS. They are located in the
marf.server.ws.*. There are also the generated files off this interface for
stub and skeleton serializers and builders for each method and non-primitive
data type of Java with wscompile and wsdeploy and the “servant”
implementations. There are about 8 files generated for SOAP XML messages per
method or a data type for requests, responses, faults, building, and
serialization.
##### Delegate
The DMARF is flexible here and allows any delegate implementation as long as
IDelegate in marf.net.server.delelegates is implemented. A common
implementation of it is also there provided with the added value benefit that
all three types of servants of the above can use the same delegate
implementations and therefore can share all of functionality, transactions,
and communication.
#### 3.5.3 Hardware Interface
The hardware interface is fully abstracted by the JVM and the underlying operating
system, making the DMARF application fully architecture-independent. References
to STDOUT and STDERR (by default the screen or a file) are handled through the
System.out and System.err streams. Likewise, STDIN (by default associated with
the keyboard) is abstracted by Java’s System.in.
## Chapter 4 Detailed System Design
$Revision:1.2$
This chapter briefly presents the design considerations and assumptions in the
form of directory structure, class diagrams as well as storage organization.
### 4.1 Directory and Package Organization
In this section, the directory structure is introduced. Please note that Java,
by default, maps sub-packages onto subdirectories, which is what we see in
Figure 4.1. Please also refer to Table 4.1 and Table 4.2 for descriptions of
the data contained in the directories and of the package organization,
respectively.
Figure 4.1: Package Structure of the Project
Directory | Description
---|---
bin/ | compiled class files are kept here; the sub-directory structure mimics that of src/.
data/ | contains the database as well as the stocks file.
logs/ | contains the client and server log files as “screenshots”.
orb.db/ | contains the naming database as well as logs for the orbd.
doc/ | project’s API and manual documentation (as well as theory).
lib/ | meant for libraries, but for now there are none.
src/ | contains the source code files and follows the described package hierarchy.
dist/ | contains the distributable service .jar and .war files.
policies/ | access policies for the RMI client and server granting various permissions.
META-INF/ | Tomcat’s context file (and later manifest) for the deployment .war.
WEB-INF/ | WS WSDL servlet-related deployment information and classes.
Table 4.1: Details on Main Directory Structure
Package | Description
---|---
marf | root directory of the MARF project; below are the packages most pertinent to the DMARF
marf.net.*.* | MARF’s directory for generic networking code
marf.net.client | client application code and subpackages
marf.net.client.corba.* | Distributed MARF CORBA clients
marf.net.client.rmi.* | Distributed MARF RMI clients
marf.net.client.ws.* | Distributed MARF WS clients
marf.net.messenging.* | Reserved for message-passing protocols
marf.net.protocol.* | Reserved for other protocols, like two-phase commit
marf.net.server.* | main server code and interfaces are placed here
marf.net.server.rmi.* | RMI-specific services implementation
marf.net.server.corba.* | CORBA-specific services implementation
marf.net.server.ws.* | WS-specific services implementation
marf.net.server.delegates.* | service delegate implementations are here
marf.net.server.frontend.* | root of the service front-ends
marf.net.server.frontend.rmi.* | RMI-specific service front-ends
marf.net.server.frontend.corba.* | CORBA-specific service front-ends
marf.net.server.frontend.ws.* | WS-specific service front-ends
marf.net.server.frontend.delegates.* | service front-ends delegate implementations
marf.net.server.gossip | reserved for the gossip replication implementation
marf.net.server.gui | server status GUI
marf.net.server.monitoring | reserved for various service monitors and their bootstrap
marf.net.server.persistence | reserved for WAL and Transaction storage management
marf.net.server.recovery | reserved for WAL recovery and logging
marf.Storage | MARF’s storage-related utility classes
marf.util | MARF’s general utility classes (threads, loggers, array processing, etc.)
marf.gui | general-purpose GUI utilities to be used in the MARF apps, clients, and server status monitors
Table 4.2: DMARF’s Package Organization
### 4.2 Class Diagrams
At this stage, the entire design is summarized in five class diagrams
representing the major modules and their relationships. The diagrams of the
overall architecture and its storage subsystem are in Figure 4.3 and Figure
4.4, respectively. Some details of the CORBA, RMI, and WS implementations are
in Figure LABEL:fig:corba, Figure LABEL:fig:rmi, and Figure LABEL:fig:ws,
respectively. The detailed descriptions of the modules can be found in the API
HTML generated from the javadoc comments, or in the comments themselves, in the
doc/api directory. Some of that description appears here as well in the form of
the interaction between classes.
Figure 4.2: Sequence Diagram of the Pipeline of Invocations
Figure 4.3: General Architecture Class Diagram of marf.net
At the top of the hierarchy are the IClient and IServer interfaces, which are
independent of any communication technology and “mark” the would-be classes of
either type. This is the design of a system in which one will be able to pick
and choose, either manually or automatically, which communication technology to
use. These interfaces are defined in marf.net and are used by the
reflection-based instantiation utilities.
Next, the hierarchy branches into the CORBA-, RMI-, and WS-marked server and
client interfaces: ICORBAServer and ICORBAClient, IRMIServer and IRMIClient,
and IWSServer and IWSClient. The specificity of IRMIServer is that it extends
the Remote interface required by the RMI specification. ICORBAServer allows
setting and getting the root POA. IWSServer allows setting and getting an
in-house RemoteObjectReference (which is not a true object reference as in RMI
or CORBA, but encapsulates the necessary service location information).
The diagram then shows only the CORBA details (RMI and WS are similar, but the
diagram is already cluttered, so they were omitted). It shows all six servants
and their relationships with the interfaces, as well as how WAL logging and
transaction recovery blend in. Some monitoring modules are designed as well.
The clients for the respective technologies are in the marf.net.client.corba,
marf.net.client.rmi, and marf.net.client.ws packages.
When implementing the CORBA services, a data type adapter had to be made to
adapt certain data structures that come from MARF.idl to the common storage
data structures (e.g., Sample, Result, CommunicationException, ResultSet,
etc.). Thus, the MARFObjectAdapter class was provided to convert these data
structures back and forth for the generic delegate when needed.
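As a sketch of the adapter idea (the shape of the IDL-generated type and the accessors of the common Sample class are assumed here, not copied from the sources):

```java
// Sketch of converting between a hypothetical idlj-generated struct and the
// common storage type used by the generic delegate; field and method names
// are assumptions for illustration only.
final class MARFObjectAdapterSketch {

    static class CorbaSample {          // stand-in for an IDL-generated struct
        double[] data;
    }

    static class Sample {               // stand-in for the common marf.Storage type
        private double[] data;
        void setData(double[] d) { data = d; }
        double[] getData() { return data; }
    }

    static Sample toCommon(CorbaSample corba) {
        Sample sample = new Sample();
        sample.setData(corba.data);
        return sample;
    }

    static CorbaSample toCorba(Sample sample) {
        CorbaSample corba = new CorbaSample();
        corba.data = sample.getData();
        return corba;
    }
}
```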
The servers for the respective technologies are in the marf.net.server.corba,
marf.net.server.rmi, and marf.net.server.ws packages.
Finally, on the server side, the RecoverableClassificationDelegate interacts
with the WriteAheadLogger for transaction information. The storage manager
here serializes the WAL entries.
Figure 4.4: Storage Class Diagram
More design details are revealed in the class diagram of the storage-related
aspects in Figure 4.4. The Database contains the classification statistics and
is only written to by the SpeakerIdent front-end. Database, Sample, Result,
ResultSet, and TrainingSet all implement Serializable so that they can be
stored on disk or transferred over a network.
The serialization of the WAL instance into a file is handled by the
WALStorageManager class. The IStorageManager interface and its most generic
implementation, StorageManager, also come from MARF’s marf.Storage package.
The StorageManager class implements serialization of classes in plain binary
as well as compressed binary formats. (It also has facilities to plug in other
storage or output formats, such as CSV, XML, HTML, and SQL, which derived
classes must implement if they wish to support them.)
### 4.3 Data Storage Format
This section covers data storage issues, the chosen underlying implementation,
and the ways of addressing those issues. For details on the classical MARF
storage subsystem, please refer to the Storage chapter in [Gro06].
#### 4.3.1 Log File Format
The log is saved in module-technology.log files, for the server and client
respectively, in the application’s current directory. As of this version, the
file is produced with the help of the Logger class from marf.util.
(Another logging facility that was considered, but is so far only used in WS
with Tomcat, is the Log4J tool [AGS+06], which has a full-fledged logging
engine.) The log file produced by Logger has the classical format of “`[ time
stamp ]: message`”. The logger intercepts all attempts to write to STDOUT or
STDERR and makes a copy of them in the file. The output to STDOUT and STDERR
is also preserved. If the file remains between runs, the log data is appended.
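A minimal sketch of this STDOUT interception (not the actual marf.util.Logger code) could look as follows; the log file name merely follows the module-technology.log pattern.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;
import java.util.Date;

// Minimal sketch: copy everything written to STDOUT into a log file while
// preserving the original console output. The real marf.util.Logger also
// covers STDERR and does more bookkeeping.
public class TeeLoggerSketch {
    public static void main(String[] args) throws IOException {
        final PrintStream console = System.out;
        final OutputStream logFile =
            new FileOutputStream("classification-rmi.log", true); // append mode

        System.setOut(new PrintStream(new OutputStream() {
            public void write(int b) throws IOException {
                console.write(b); // keep the console output
                logFile.write(b); // and copy it to the file
            }
        }, true));

        // Entries follow the classical "[ time stamp ]: message" format.
        System.out.println("[ " + new Date() + " ]: server started");
    }
}
```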
### 4.4 Synchronization
The notion of synchronization is crucial in an application that allows access
to a shared resource or data structure by multiple clients, and DMARF is such
an application. On the server side, synchronization must be maintained when the
Database or TrainingSet objects are accessed through the server, possibly by
multiple clients. As implemented in this version, the Database class becomes
its own object monitor and all its relevant methods are made synchronized,
thus locking the entire object while it is accessed by a thread and thereby
providing data integrity. The whole-instance locking may be a bit inefficient,
but it can later be carefully redone by marking only the critical paths rather
than the entire object.
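A condensed sketch of this whole-instance locking (with illustrative fields only, written in the J2SE 1.4 style used by the project):

```java
import java.util.HashMap;
import java.util.Map;

// Condensed sketch of the whole-instance locking described above: marking
// the methods synchronized makes the Database instance its own monitor, so
// only one thread at a time can execute any of them.
public class DatabaseSketch {
    private final Map stats = new HashMap(); // subject name -> Integer count

    public synchronized void addResult(String subject) {
        Integer count = (Integer) stats.get(subject);
        stats.put(subject, new Integer(count == null ? 1 : count.intValue() + 1));
    }

    public synchronized int getCount(String subject) {
        Integer count = (Integer) stats.get(subject);
        return count == null ? 0 : count.intValue();
    }
    // A finer-grained alternative would synchronize only the critical
    // sections, e.g. synchronized (stats) { ... }, instead of whole methods.
}
```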
Furthermore, multiple servers keep copies of their own data structures,
including the stock data, making the system more concurrent. On top of that,
the WS, RMI, and CORBA brokers act through a delegate implementation, which
keeps all the synchronization and business logic in one place and decouples
communication from the logic. The rest is taken care of by the WAL.
### 4.5 Write-Ahead Logging and Recovery
The recovery log design is based on the principle of write-ahead logging.
This means the transaction data is written to the log first, and upon
successful return from writing the log, the transaction is committed.
Checkpointing is done periodically by flushing all the transactions to disk
and recording the latest committed transaction ID as a checkpoint. In the
event of a crash, upon restart, the WAL is read and the object states are
recovered from the latest checkpoint.
The design of the WAL algorithm in DMARF is modified such that the logged
transaction data contains the “before” and “after” snapshots of the object in
question (a training set, message, or the whole module itself). In part this
is because the transactions are wrapped around classical business logic that
does alter the objects on disk; so, in the event of a failure, the “before”
snapshot is used to revert the on-disk object state to the way it was before
the transaction in question began.
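The following is a simplified sketch of such an entry; the real records kept by WriteAheadLogger and WALStorageManager carry additional bookkeeping, and the field names here are illustrative.

```java
import java.io.Serializable;

// Simplified sketch of a WAL record holding "before" and "after" snapshots
// of the object touched by a transaction. On recovery, the "before" image
// of any uncommitted transaction is the state to put back on disk.
public class WalEntrySketch implements Serializable {
    private static final long serialVersionUID = 1L;

    long transactionId;
    long timestamp;
    boolean committed;
    Serializable beforeSnapshot; // object state before the transaction began
    Serializable afterSnapshot;  // object state after the business logic ran

    Serializable stateToRestore() {
        return committed ? afterSnapshot : beforeSnapshot;
    }
}
```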
The WAL grows up to a certain number of committed transactions. Periodic
garbage collection and checkpointing are performed on it: during garbage
collection the oldest aborted transactions are removed, as well as up to 1000
committed transactions. The WAL can be periodically backed up and shipped to
another server for replication or point-in-time recovery (PITR), and there are
timestamps associated with each serialized transaction.
For the most part, the WAL is pertinent to the Classification service, as this
is where most of the writes are done, during the training phase (the
classification phase only reads). The sample loading, preprocessing, and
feature extraction services can also perform intermediate writes if asked, but
most of the time they just crunch the data and pass it along. The
classification statistics are maintained at the application-specific front-end
for now, and writes there are serialized.
### 4.6 Replication
Replication is done either by means of the WAL (shipping the WAL to another
host and “replaying” it along a certain timeline), or by lazy update through
the gossip architecture among replicas. Delegates broadcast “whoHas(config)”
requests before computing anything themselves; if no response is received
shortly after, the delegate that issued the request starts computing the
configuration itself; otherwise a transfer is initiated from another delegate
that has computed an identical configuration.
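A highly simplified sketch of that ask-then-fall-back step is shown below; the transport, the timeout value, and the result type are placeholders rather than DMARF classes.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Highly simplified sketch of the "whoHas(config)" gossip step: ask the
// replicas first and compute locally only if nobody answers in time.
public class GossipSketch {

    interface Replicas {
        // Broadcast whoHas(config); any answers arrive on the given queue.
        void whoHas(String config, BlockingQueue<byte[]> answers);
    }

    static byte[] obtain(String config, Replicas replicas) throws InterruptedException {
        BlockingQueue<byte[]> answers = new ArrayBlockingQueue<byte[]>(1);
        replicas.whoHas(config, answers);

        // Wait briefly for a replica that already has this configuration.
        byte[] result = answers.poll(500, TimeUnit.MILLISECONDS);
        if (result != null) {
            return result;                // transfer from the answering delegate
        }
        return computeLocally(config);    // nobody answered: compute it ourselves
    }

    private static byte[] computeLocally(String config) {
        return new byte[0];               // placeholder for the pipeline computation
    }
}
```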
## Chapter 5 Testing
The testing conducted covered the (mostly CORBA) pipeline, including a single
training test and a batch training on a maximum of four computers in separate
buildings. The Makefile and batch.sh serve this purpose. If you intend to use
them, make sure you have the server jars in dist/ and a properly configured
dmarf-hosts.properties.
The tests were quite successful: terminating any of the service replicas and
restarting it resumed normal operation of the pipeline in the batch mode. More
thorough testing is to be conducted as the project evolves from a
proof-of-concept into a cleaner solution.
## Chapter 6 Conclusion
$Revision:1.2$
Of the three main distributed technologies learned and used throughout the
course (RMI, CORBA, and Web Services), I managed to implement the MARF
services with all three.
The Java RMI technology seems to be the lowest-level remote method invocation
tool for programmers to use; tools like Jini and JMS tend to be more
programmer-friendly. An additional limitation of RMI is the requirement for
remote methods to throw RemoteException, and, when generating stubs, an
RMI-independent interface hierarchy does not work well.
A similar problem exists for CORBA, which even generates CORBA-specific data
structures from the struct definitions that cannot be easily linked, through
inheritance or interfaces, to the data structures used elsewhere in the
program.
The WS implementation, derived from the Java-endpoint-provided interface and a
couple of XML files, was a natural extension of the RMI implementation but with
somewhat different semantics. The implementation aspect was not hard, but the
deployment within a servlet container and the WSDL compilation were a large
headache.
However, the highly modular design allows swapping module implementations from
one technology to another if need be, making the system very extensible by
delegating the actual business logic to the delegate classes. As an added
bonus of that implementation, the RMI, CORBA, and WS services can communicate
through TCP or UDP and perform transactions. Likewise, all the synchronization
efforts are undertaken by the delegate, and the delegate is the single place
to fix if something is broken. Aside from the delegate class, a data adapter
class for CORBA also contributes here by translating the data structures.
### 6.1 Summary of Technologies Used
The following were the most prominent technologies used throughout the
implementation of the project:
* •
J2SE (primarily 1.4)
* •
Java IDL [Mic04]
* •
Java RMI [WW05]
* •
Java WS with JAX-RPC [Mic06]
* •
Java Servlets [Mic05a]
* •
Java Networking [Mic05b]
* •
Eclipse IDE [c+04]
* •
Apache Ant [Con05]
* •
Apache Jakarta Tomcat 5.5.12 [Fou05]
* •
GNU Make [SMSP00]
### 6.2 Future Work and Work-In-Progress
Extend the remote framework to include other communication technologies (Jini,
JMS, DCOM+, .NET Remoting) in a communication-independent fashion and
transplant all of that for use in MARF [Gro06]. Additionally, complete the
application GUI for the client and possibly server implementations. Finally,
complete the advanced features of distributed systems, such as disaster
recovery, fault tolerance, high availability, and replication, with a great
deal of thorough testing.
### 6.3 Acknowledgments
* •
The authors of the Java RMI [WW05], Java IDL [Mic04], Java Web Services
[Mic06] reference material from Sun.
* •
The authors of the textbook [CDK05].
* •
Dr. Rajagopalan Jayakumar for the Distributed Systems Design Course
* •
Dr. Peter Grogono for the LaTeX introductory tutorial [Gro01]
* •
Nick Huang, the TA
## Bibliography
* [AGS+06] N. Asokan, Ceki Gulcu, Michael Steiner, IBM Zurich Research Laboratory, and OSS Contributors. log4j, Hierachical Logging Service for Java. apache.org, 2006. http://logging.apache.org/log4j/.
* [BddzzP+05] Brian Berliner, david d ‘zoo’ zuhn, Jeff Polk, Larry Jones, Derek R. Price, Mark D. Baushke, and other authors. Concurrent Versions System (CVS). Free Software Foundation, Inc., 1989-2005. http://www.nongnu.org/cvs/.
* [c+04] Eclipse contributors et al. Eclipse Platform. IBM, 2000-2004. http://www.eclipse.org/platform.
* [CDK05] George Coulouris, Jean Dollimore, and Tim Kindberg. Distributed Systems Concepts and Design. Addison-Wesley, 2005. ISBN: 0-321-26354-5.
* [Con05] Ant Project Contributors. Apache Ant. The Apache Software Foundation, 2000-2005. http://ant.apache.org/.
* [Fou05] Apache Foundation. Apache Jakarta Tomcat. apache.org, 1999-2005. http://jakarta.apache.org/tomcat/index.html.
* [Gre05] Dale Green. Java Reflection API. Sun Microsystems, Inc., 2001-2005. http://java.sun.com/docs/books/tutorial/reflect/index.html.
* [Gro01] Peter Grogono. A LaTeX2e Gallimaufry. Techniques, Tips, and Traps. Department of Computer Science and Software Engineering, Concordia University, March 2001. http://www.cse.concordia.ca/~grogono/documentation.html.
* [Gro06] MARF Research & Development Group. Modular Audio Recognition Framework and Applications. SourceForge.net, 2002-2006. http://marf.sf.net.
* [Mic04] Sun Microsystems. Java IDL. Sun Microsystems, Inc., 2004. http://java.sun.com/j2se/1.5.0/docs/guide/idl/index.html.
* [Mic05a] Sun Microsystems. Java Servlet Technology. Sun Microsystems, Inc., 1994-2005. http://java.sun.com/products/servlets.
* [Mic05b] Sun Microsystems. Custom Networking. Sun Microsystems, Inc., 1995-2005. http://java.sun.com/docs/books/tutorial/networking/index.html.
* [Mic06] Sun Microsystems. The Java Web Services Tutorial (For Java Web Services Developer’s Pack, v2.0). Sun Microsystems, Inc., February 2006. http://java.sun.com/webservices/docs/2.0/tutorial/doc/index.html.
* [Mok06a] Serguei Mokhov. MARF Coding Conventions, 2005-2006. http://marf.sf.net/coding.html.
* [Mok06b] Serguei A. Mokhov. On Design and Implementation of an Heterogeneous Web Services, CORBA, RMI, and TCP/IP-based Distributed Stock Broker System. Technical report, Department of Computer Science and Software Engineering, Concordia University, August 2006.
* [SMSP00] Richard Stallman, Roland McGrath, Paul Smith, and GNU Project. GNU Make. Free Software Foundation, Inc., 1997-2000. http://www.gnu.org/software/make/.
* [WW05] Ann Wollrath and Jim Waldo. Java RMI Tutorial. Sun Microsystems, Inc., 1995-2005. http://java.sun.com/docs/books/tutorial/rmi/index.html.
## Index
* API
* CommunicationException §4.2
* Database §4.2, §4.2, §4.4, §4.4
* IClassificationCORBA §3.5.2
* IClassificationRMI §3.5.2
* IClassificationWS §3.5.2
* IClient §4.2
* ICORBAClient §4.2
* ICORBAServer §4.2, §4.2
* IDelegate §3.5.2
* IFeatureExtractionCORBA §3.5.2
* IFeatureExtractionRMI §3.5.2
* IFeatureExtractionWS §3.5.2
* IMARFServerCORBA §3.5.2
* IMARFServerRMI §3.5.2
* IMARFServerWS §3.5.2
* int 1st item
* IPreprocessingCORBA §3.5.2
* IPreprocessingRMI §3.5.2
* IPreprocessingWS §3.5.2
* IRMIClient §4.2
* IRMIServer §4.2, §4.2
* ISampleLoaderCORBA §3.5.2
* ISampleLoaderRMI §3.5.2
* ISampleLoaderWS §3.5.2
* IServer §4.2
* ISpeakerIdentCORBA §3.5.2
* ISpeakerIdentRMI §3.5.2
* ISpeakerIdentWS §3.5.2
* IStorageManager §4.2
* IWSClient §4.2
* IWSServer §4.2, §4.2
* Logger §4.3.1, §4.3.1
* long 2nd item
* MARFObjectAdapter §4.2
* MARFServerCORBA 2nd item
* MARFServerRMI 2nd item
* MARFServerWS 2nd item
* Message 2nd item
* ORBUtils 3rd item
* Packages
* marf Table 4.2
* marf.client.rmi.* §3.5.2
* marf.gui Table 4.2
* marf.net §4.2
* marf.net.*.* Table 4.2
* marf.net.client Table 4.2
* marf.net.client.corba §4.2
* marf.net.client.corba.* Table 4.2
* marf.net.client.rmi §4.2
* marf.net.client.rmi.* Table 4.2
* marf.net.client.ws §4.2
* marf.net.client.ws.* Table 4.2
* marf.net.messenging.* Table 4.2
* marf.net.protocol.* Table 4.2
* marf.net.server.* Table 4.2
* marf.net.server.corba §4.2
* marf.net.server.corba.* Table 4.2
* marf.net.server.delegates.* Table 4.2
* marf.net.server.delelegates §3.5.2
* marf.net.server.frontend.* Table 4.2
* marf.net.server.frontend.corba.* Table 4.2
* marf.net.server.frontend.delegates.* Table 4.2
* marf.net.server.frontend.rmi.* Table 4.2
* marf.net.server.frontend.ws.* Table 4.2
* marf.net.server.gossip Table 4.2
* marf.net.server.gui Table 4.2
* marf.net.server.monitoring Table 4.2
* marf.net.server.persistence Table 4.2
* marf.net.server.recovery Table 4.2
* marf.net.server.rmi §4.2
* marf.net.server.rmi.* Table 4.2
* marf.net.server.ws §4.2
* marf.net.server.ws.* Table 4.2
* marf.server.corba.* §3.5.2
* marf.server.rmi.* §3.5.2
* marf.server.ws.* §3.5.2
* marf.Storage §4.2, Table 4.2
* marf.util §4.3.1, Table 4.2
* RecoverableClassificationDelegate §4.2
* Remote §4.2
* RemoteException Chapter 6
* RemoteObjectReference 1st item, §4.2
* Result §4.2, §4.2
* ResultSet §4.2, §4.2
* RMIUtils 3rd item
* Sample 2nd item, §4.2, §4.2
* Serializable 2nd item, 2nd item, 2nd item, §4.2
* startRecognitionPipeline() 2nd item
* StorageManager §4.2, §4.2
* String 1st item
* synchronized §4.4
* System.err §3.5.3
* System.in §3.5.3
* System.out §3.5.3
* TrainingSet 2nd item, §4.2, §4.4
* Transaction 2nd item
* WAL §4.2
* WALStorageManager §4.2
* WriteAheadLogger §4.2
* WSUtils 1st item, 3rd item
* CORBA 2nd item, §1.5, item Communication Technology Independence, §3.2.2, §4.2, §4.2, Chapter 6, Chapter 6
* Data Storage Format §4.3
* Log File Format §4.3.1
* Design Chapter 4
* Class Diagrams §4.2
* Data Storage Format §4.3
* Detailed System Design Chapter 4
* Directory Structure §4.1
* Files
* .jar Table 4.1
* .war §3.2.2, Table 4.1, Table 4.1
* allallow.policy §3.2.2
* batch.sh §1.6, Chapter 5
* bin/ Table 4.1
* data/ §3.2.2, Table 4.1
* dist/ Table 4.1, Chapter 5
* dmarf-build.xml §1.5, §3.2.2
* dmarf-hosts.properties 3rd item, §1.5, §3.2.2, Chapter 5
* doc/ Table 4.1
* doc/api §4.2
* Frontends.idl §3.5.2
* lib/ Table 4.1
* local/marf §3.2.2
* logs/ §3.2.2, Table 4.1
* Makefile §1.5, §3.2.2, §3.5.1, Chapter 5
* MARF.idl §3.5.2, §4.2
* META-INF/ §3.2.2, Table 4.1
* module-technology.log §4.3.1
* orb.db/ §3.2.2, Table 4.1
* policies/ §3.2.2, Table 4.1
* src/ Table 4.1, Table 4.1
* stocks Table 4.1
* WEB-INF/ §3.2.2, Table 4.1
* web.xml §1.5, §3.2.2
* XXX.gzbin §3.2.2
* Introduction Chapter 2
* Definitions and Acronyms §2.3, §2.3
* Requirements §2.1
* Scope §2.2
* Java §4.1
* Libraries
* Log4J §4.3.1
* MARF 1st item, 2nd item, 3rd item, 2nd item, Chapter 2, 1st item, §2.1, §2.1, §2.1, §2.2, item DMARF, §3.3, §3.5.2, §4.2, Table 4.2, Table 4.2, Table 4.2, Table 4.2, Table 4.2, §6.2
* MARF
* Core Pipeline Figure 2.1
* Distributed Pipeline Figure 2.2
* Package Organization §4.1
* Packages
* marf Table 4.2
* marf.client.rmi.* §3.5.2
* marf.gui Table 4.2
* marf.net §4.2
* marf.net.*.* Table 4.2
* marf.net.client Table 4.2
* marf.net.client.corba §4.2
* marf.net.client.corba.* Table 4.2
* marf.net.client.rmi §4.2
* marf.net.client.rmi.* Table 4.2
* marf.net.client.ws §4.2
* marf.net.client.ws.* Table 4.2
* marf.net.messenging.* Table 4.2
* marf.net.protocol.* Table 4.2
* marf.net.server.* Table 4.2
* marf.net.server.corba §4.2
* marf.net.server.corba.* Table 4.2
* marf.net.server.delegates.* Table 4.2
* marf.net.server.delelegates §3.5.2
* marf.net.server.frontend.* Table 4.2
* marf.net.server.frontend.corba.* Table 4.2
* marf.net.server.frontend.delegates.* Table 4.2
* marf.net.server.frontend.rmi.* Table 4.2
* marf.net.server.frontend.ws.* Table 4.2
* marf.net.server.gossip Table 4.2
* marf.net.server.gui Table 4.2
* marf.net.server.monitoring Table 4.2
* marf.net.server.persistence Table 4.2
* marf.net.server.recovery Table 4.2
* marf.net.server.rmi §4.2
* marf.net.server.rmi.* Table 4.2
* marf.net.server.ws §4.2
* marf.net.server.ws.* Table 4.2
* marf.server.corba.* §3.5.2
* marf.server.rmi.* §3.5.2
* marf.server.ws.* §3.5.2
* marf.Storage §4.2, Table 4.2
* marf.util §4.3.1, Table 4.2
* RMI 2nd item, 2nd item, §1.5, item Communication Technology Independence, §3.2.2, §4.2, §4.2, Chapter 6, Chapter 6, Chapter 6
* Synchronization §4.4
* System Overview Chapter 3
* Architectural Strategies §3.1
* Coding Standards and Project Management §3.3
* Communication Paths §3.2.2
* Execution Configuration §3.2.2
* Execution View §3.2.2
* Hardware Interface §3.5.3
* Layering §3.2.1
* Module View §3.2.1
* Prototype Assumptions §3.4
* Runtime Entities §3.2.2
* Software Interface §3.5.2
* Software Interface Design §3.5
* System Architecture §3.2
* User Interface §3.5.1
* Testing Chapter 5
* Tools
* cvs §3.3
* gmake §3.2.2
* idlj item JDK, item IDL, §3.5.2
* javac item JDK
* javadoc item JDK, §3.3
* MARF 1st item, 2nd item, 3rd item, 2nd item, Chapter 2, 1st item, §2.1, §2.1, §2.1, §2.2, item DMARF, §3.3, §3.5.2, §4.2, Table 4.2, Table 4.2, Table 4.2, Table 4.2, Table 4.2, §6.2
* orbd §3.2.2, §3.2.2, §3.2.2, Table 4.1
* rmic item JDK, §3.5.2
* rmiregistry §3.2.2, §3.2.2
* wscompile §3.2.2, §3.5.2
* wsdeploy §3.2.2, §3.5.2
|
arxiv-papers
| 2009-05-15T02:52:28 |
2024-09-04T02:49:02.627398
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Serguei A. Mokhov",
"submitter": "Serguei Mokhov",
"url": "https://arxiv.org/abs/0905.2459"
}
|
0905.2462
|
# Storage of polarization-encoded cluster states in an atomic system
Chun-Hua Yuan Li-Qing Chen Weiping Zhang State Key Laboratory of Precision
Spectroscopy, Department of Physics, East China Normal University, Shanghai
200062, P. R. China
###### Abstract
We present a scheme for entangling macroscopic atomic ensembles, which are
four spatially separate regions of an atomic cloud, using cluster-correlated
beams. We show that the cluster-type polarization-encoded entanglement can be
mapped onto the long-lived collective ground state of the atomic ensembles,
and that the stored entanglement can be retrieved using the technique of
electromagnetically induced transparency. We also discuss the efficiency, the
lifetime, and some quantitative restrictions of the proposed quantum memory.
###### pacs:
03.67.-a, 42.50.Gy, 42.50.Dv
## I Introduction
Multiparticle graph states, so-called cluster states cluster , have attracted
much attention for their potential applications as a basic resource in the
“one-way” quantum computing scheme briegel ; Nielsen . The cluster state
encoded in the polarization states of photons has been demonstrated
experimentally clusterexp ; Kiesel ; Zhang ; Prevedel ; Lu ; Vallone ; Chenprl
; Tokunaga . Meanwhile, the combination of optical techniques with quantum
memory using atoms has shown apparent usefulness for scalable all-optical
quantum computing networks Knill and long-distance quantum communication Duan .
Several experiments in these areas have been realized, such as the storage and
retrieval of coherent states Julsgaard , single-photon wave packets Chaneli ;
Eisaman , and squeezed vacuum states Honda ; Appel . With these achievements,
it is worth initiating a study of a reversible memory for a cluster state.
Light is a natural carrier of classical and quantum information, and
macroscopic atomic systems can be used efficiently for its storage, with the
long-lived electronic ground states serving as storage units. Based on light
and atomic ensembles, two types of quantum memories have been put forward: one
based on the quantum Faraday effect supplemented by measurement and feedback
Julsgaard , and the other involving electromagnetically induced transparency
(EIT) Liu01 ; Fleischhauer00 and Raman processes Kozhekin ; Laurat06 . In
addition, the photon echo technique has also been proposed for quantum memory
Moiseev . EIT is probably the most actively studied technique to achieve a
quantum memory. EIT polaritons (dark state) were first considered
theoretically by Mazets and Matisov Mazets , and later by Fleischhauer and
Lukin Fleischhauer00 ; Fleischhauer2 who suggested storing a probe pulse
(stopping the polariton) by adiabatically switching off the control laser.
Extending the analysis to a double-$\Lambda$ system Raczy ; Li ; Chong , it is
possible to simultaneously propagate and store two optical pulses in the
medium based on a dark state polariton (DSP) consisting of low-lying atomic
excitations and photon states of two frequencies. The existence of the DSP in
the double-$\Lambda$ atomic system studied by Chong _et al._ Chong required
that the fields obey certain conditions for frequency, amplitude, and phase
matchings. Quantitative relations in the case of double-$\Lambda$ type atoms
are essentially more complicated than for the standard $\Lambda$
configuration. If one of the conditions breaks, such as the phase being
mismatched, then one of the two pulses will be absorbed and lost Kang . In
this sense, the double-$\Lambda$ system is of limited use for the storage of
two or more optical pulses. Recently, Yoshikawa _et al._ Yoshikawa demonstrated
holographic storage of two coherence gratings in a single BEC cloud. Choi _et
al._ Choi demonstrated that the entanglement of two light fields survives
quantum storage using two atomic ensembles in a cold cloud, where they
realized the coherent conversion of photonic entanglement into and out of a
quantum memory. Hence, in principle, it is now achievable to store more than
two optical fields in atomic ensembles for different purposes.
In this paper, we propose a scheme that can store a polarization-encoded
cluster state reversibly in a cold atomic cloud based on the EIT technique.
The storage of a polarization-entangled state is very useful in
polarization-encoded quantum computing schemes, such as “one-way” quantum
computing and quantum coding. On the other hand, our scheme also presents a
natural extension of existing work Choi ; Lukin00 .
Our paper is organized as follows. In Sec. II, we describe how the
polarization-encoded cluster state can be stored and retrieved, and the method
of the measurement and verification of entanglement storage. In Sec. III, we
analyze and evaluate the efficiency and the fidelity. In Sec. IV, we evaluate
the memory lifetime. In Sec. V, we discuss some restrictions of the proposed
quantum memory. Finally, we conclude with a summary of our results.
## II Storage of Polarization-encoded Cluster State
Figure 1: (Color online) (a) Schematic diagram of the proposed experiment.
The polarization-encoded four-photon cluster state is inputted with a common
perpendicularly-propagating control field $E_{c}$. Four atomic sub-ensembles
(or channels) are represented by four spatially separate and symmetric regions
in a single cloud of cold atoms. The cold atomic cloud is initially prepared
in a magneto-optical trap (MOT) and the MOT fields are turned off during the
period of the storage and retrieval process. The quantization axis $z$ is set
by the trapping magnetic field $\vec{B}$ in the preparation of the cold atomic
cloud. The horizontally and vertically polarized single-photon pulses pass
through $\lambda/4$ plates and are converted into circularly left- and right-
polarized photons, respectively, i.e.,
$|H\rangle\rightarrow|\sigma^{-}\rangle$, and
$|V\rangle\rightarrow|\sigma^{+}\rangle$. (b) The atomic level configuration
for the proposal. $E_{c}$ is the control light with $\pi$-polarization, and
$\hat{E}_{pL}$ and $\hat{E}_{pR}$ are left and right circularly-polarized
lights, respectively. (c) The corresponding decompositions of atom-photon
couplings in (b) according to the photon polarizations (I for $\sigma^{-}$
polarization, II for $\sigma^{+}$ polarization).
In this section, we show the EIT technique can be used to realize a reversible
memory for the polarization encoded cluster state cluster
$|\phi_{\text{in}}\rangle=\frac{1}{2}[|H\rangle_{1}|H\rangle_{2}|H\rangle_{3}|H\rangle_{4}+|V\rangle_{1}|V\rangle_{2}|H\rangle_{3}|H\rangle_{4}+|H\rangle_{1}|H\rangle_{2}|V\rangle_{3}|V\rangle_{4}-|V\rangle_{1}|V\rangle_{2}|V\rangle_{3}|V\rangle_{4}],$
(1)
where $|H\rangle$ and $|V\rangle$ stand for the single-photon states with the
photon polarized horizontally and vertically, respectively. The four-photon
cluster state shown above has been demonstrated experimentally clusterexp ;
Kiesel ; Zhang ; Prevedel ; Lu ; Vallone ; Chenprl ; Tokunaga using different
methods. Without loss of generality, here we consider the simple case that the
frequencies of all four photons in the state are degenerate. The schematic
diagrams of the proposed experimental system are shown in Fig. 1(a) (hereafter
noted case $1$) and Fig. 2(a) (hereafter noted case $2$), where four atomic
sub-ensembles (or channels) are used. These four sub-ensembles form four
equivalent channels, each of which is used to store the corresponding
polarization-encoded single-photon state $(|H\rangle_{i}$ or $|V\rangle_{i}$;
$i=1,2,3,4)$ in the four-photon cluster state.
### II.1 Quantum memory for a polarization-encoded single-photon state
Figure 2: (Color online) (a) The figure is the same as Fig. 1(a) except that
a common collinearly-propagating control field $E_{c}$ is used. (b) and (c)
are the same as Figs. 1(b) and 1(c), respectively, except the control field
$E_{c}$ is now replaced with right circularly-polarized light in the
collinearly-propagating case. The right circularly-polarized light $E_{c}$ is
converted by $\lambda/4$ plates using the vertically polarized light $E_{c}$.
In order to understand the physics behind the schemes shown in Figs. 1 and 2,
first we discuss the quantum memory for a polarization-encoded single-photon
state. An atomic ensemble containing $N$ atoms for memory using the
$\Lambda$-type atomic level configuration with the excited state $|a\rangle$
and ground states $|b\rangle$ and $|c\rangle$ based on EIT was studied in
detail by Fleischhauer and Lukin Fleischhauer00 and also reviewed by a few
authors, such as Petrosyan Petrosyan . Here, we focus on the case of a single-
photon probe field with horizontal or vertical polarization and describe the
dark state of the system. In the frame rotating with the probe and the driving
field frequencies, the interaction Hamiltonian is given by Fleischhauer00 ;
Fleischhauer2 ; Petrosyan
$\hat{H}=\hbar\sum_{j=1}^{N}[-g\hat{\sigma}_{ab}^{j}\hat{\varepsilon}(z_{j})e^{ik_{p}z_{j}}-\Omega_{c}(t)\hat{\sigma}_{ac}^{j}e^{ik_{c}^{||}z_{j}}+\text{H.c.}],$
(2)
where $\hat{\sigma}_{\mu\nu}^{j}=|\mu\rangle_{jj}\langle\nu|$ is the
transition operator of the $j$th atom between states $|\mu\rangle$ and
$|\nu\rangle$, and we consider the single- and two-photon resonance cases.
$g=\wp\sqrt{\omega/2\hbar\epsilon_{0}V}$ is the coupling constant between the
atoms and the quantized field mode which for simplicity is assumed to be equal
for all atoms. $k_{p}$ and
$k_{c}^{||}=\overrightarrow{k}_{c}\cdot\overrightarrow{e}_{z}$ are the wave
vectors of the probe field and the control field along the propagation axis
$z$, respectively. The traveling-wave quantum field operator
$\hat{\varepsilon}(z,t)=\sum_{q}\hat{a}_{q}(t)e^{iqz}$ is expressed through
the superposition of bosonic operators $\hat{a}_{q}(t)$ for the longitudinal
field modes $q$ with wavevectors $k+q$, where the quantization bandwidth
$\delta q$ $[q\in\\{-\delta q/2,\delta q/2\\}]$ is narrow and is restricted by
the width of the EIT window $\Delta\omega_{tr}$ ($\delta
q\leq\Delta\omega_{tr}/c$) Lukin2000 .
Hamiltonian (2) has a family of dark eigen-states $|D_{n}^{q}\rangle$ with
zero eigenvalue $\hat{H}|D_{n}^{q}\rangle=0$, which are decoupled from the
rapidly decaying excited state $|a\rangle$. For a single-photon probe field
$n=1$, the dark eigen-states $|D_{1}^{q}\rangle$ are given by
$|D_{1}^{q}\rangle=\cos\theta|1^{q}\rangle|c^{(0)}\rangle-\sin\theta|0^{q}\rangle|c^{(1)}\rangle,$
(3)
where $\theta(t)$ is the mixing angle and
$\tan^{2}\theta(t)=g^{2}N/|\Omega|^{2}$, and $\theta$ is independent of the
mode $q$. $|n^{q}\rangle$ denotes the state of the quantum field with $n$
photons in mode $q$, and $|c^{(n)}\rangle$ is a symmetric Dicke-type state of
the atomic ensemble with $n$ Raman (spin) excitations, i.e., atoms in state
$|c\rangle$, defined as
$|c^{(0)}\rangle=|b_{1},...,b_{N}\rangle,$
(4)
$|c^{(1)}\rangle=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}e^{i(k+q-k_{c}^{||})z_{j}}|b_{1},b_{2},...,c_{j},...,b_{N}\rangle.$
(5)
Figure 3: (Color online) Transverse section of a cold atom cloud. Once four
polarization-entangled single-photon fields have entered the EIT medium, each
field is converted into corresponding single-excitation polariton $\psi_{j}$
$(j=1,2,3,4)$ representing a coupled excitation of the field and atomic
coherence.
Now, we consider a memory for a single photon with horizontal or vertical
polarization. By the $\lambda/4$ plate, its polarization is converted into the
left circular or right circular polarization. Due to the propagating
directions of the probe field and the control field, the corresponding five-
level structures are shown in Fig. 1(b) for case $1$ and Fig. 2(b) for case
$2$ instead of the simple $\Lambda$-type level configuration, where the fields
$E_{pL\text{ }}$and $E_{pR}$ interact with the atoms on the respective
transitions $|b\rangle\rightarrow|a_{-}\rangle$ and
$|b\rangle\rightarrow|a_{+}\rangle$, and the excited states $|a_{\pm}\rangle$
couple to the metastable states $|c_{\pm}\rangle$ by the same control field
$E_{c}$. According to mapping to the different metastable states
$|c_{+}\rangle$ and $|c_{-}\rangle$, collective state (5) is written as
$|c_{+}^{(1)}\rangle=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}e^{i(k+q-k_{c}^{||})z_{j}}|b_{1},b_{2},...,(c_{+})_{j},...,b_{N}\rangle,$
$|c_{-}^{(1)}\rangle=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}e^{i(k+q-k_{c}^{||})z_{j}}|b_{1},b_{2},...,(c_{-})_{j},...,b_{N}\rangle.$
When $\theta=0$ $(|\Omega|^{2}\gg g^{2}N)$, dark state (3) consists of a
purely photonic excitation, i.e., $|D_{1}^{q}\rangle=|1^{q}\rangle|c^{(0)}\rangle$,
while in the opposite limit of $\theta=\pi/2$ $(|\Omega|^{2}\ll g^{2}N)$, it
becomes the collective atomic excitation
$|D_{1}^{q}\rangle=-|0^{q}\rangle|c_{-}^{(1)}\rangle$ or
$|D_{1}^{q}\rangle=-|0^{q}\rangle|c_{+}^{(1)}\rangle$ for mapping a left or a
right circularly polarized single-photon. For intermediate values of mixing
angle $0<\theta<\pi/2$, the dark state represents a coherent superposition of
photonic and atomic Raman excitations (polariton) Fleischhauer00 ;
Fleischhauer2 . The left or right circularly polarized single-photon is
converted into respective I class or II class polariton:
I class: $\hat{\Psi}_{\text{I}}=\cos\theta_{1}(t)\hat{\varepsilon}_{\sigma_{-}}-\sin\theta_{1}(t)\sqrt{N}\hat{\sigma}_{bc_{-}},$
(6)
II class: $\hat{\Psi}_{\text{II}}=\cos\theta_{2}(t)\hat{\varepsilon}_{\sigma_{+}}-\sin\theta_{2}(t)\sqrt{N}\hat{\sigma}_{bc_{+}},$
(7)
where $\hat{\sigma}_{bc_{j}}$ $(j=-,+)$ are slowly varying operators which are
defined by
$\hat{\sigma}_{\mu\nu}(z,t)=1/N_{z}\sum_{j=1}^{N_{z}}\hat{\sigma}_{\mu\nu}^{j}$
with $N_{z}=(N/L)dz\gg 1$.
### II.2 Quantum memory for polarization-encoded four-photon cluster state
In this subsection, we describe the memory for the polarization-encoded four-
photon cluster state. The cold atomic cloud is released from a magneto-optical
trap (MOT), and the quantization axis $z$ is set along the long axis of the
cloud by a small axial magnetic field. The $\pi$-polarized and the right
circularly polarized control fields are respectively used in case $1$ and case
$2$. As a specific example for realization of our scheme proposed here, we
consider hyperfine levels of 87Rb. For case $1$ [see Fig. 1(b)]: the ground
state $|b\rangle$ corresponds to the $m_{F}=0$ sublevel of the $F=1$, and the
states $|c_{-}\rangle$ and $|c_{+}\rangle$ correspond to the $m_{F}=-1$ and
$m_{F}=1$ sublevels of the $F=2$, respectively. The excited states
$|a_{-}\rangle$ and $|a_{+}\rangle$ correspond to the $m_{F}=-1$ and $m_{F}=1$
sublevels of the $F^{\prime}=1$, respectively. Differently from case $1$, in
case $2$ [see Fig. 2(b)], because the control field drives the
$\sigma^{+}$ transitions, the states $|c_{-}\rangle$ and $|c_{+}\rangle$
correspond to the $m_{F}=-2$ and $m_{F}=0$ sublevels of the $F=2$,
respectively.
The outline of our scheme is as follows. All the atoms are initially prepared
in the ground state $|b\rangle$ by optical pumping. The cold cloud is
illuminated by a resonant control laser from the radial and axial directions
for cases $1$ and $2$, respectively. The excited states $|a_{-}\rangle$ and
$|a_{+}\rangle$ are resonantly coupled by the same control field $E_{c}$. Then
the polarization-encoded four-photon cluster state is sent to the four atomic
ensembles $A$, $B$, $C$ and $D$ which are represented by four spatially
separate and symmetric regions in a single cloud of cold atoms Choi ;
Matsukevich ; Chou07 ; Simon ; Yoshikawa09 . Before passing into the EIT
medium, the vertically and horizontally polarized single-photons first pass
through $\lambda/4$ plates and are converted into circularly right- and left-
polarized single-photons, i.e., $|V\rangle\longrightarrow|\sigma^{+}\rangle$
and $|H\rangle\longrightarrow|\sigma^{-}\rangle$. Once four polarization-
entangled single-photon fields have entered the EIT medium, these single-
photons propagate along the $z$ axis resonantly interacting with the atoms and
making transitions $|b\rangle\rightarrow|a_{-}\rangle$ or
$|b\rangle\rightarrow|a_{+}\rangle$, and their group velocities are strongly
modified by the control field $E_{c}$. The interaction of the different-
polarization single-photon probe fields with four atomic sub-ensembles
separates into two classes of $\Lambda$-type EIT, which is illustrated in
Figs. 1(c) and 2(c). Each probe field is converted into corresponding single-
excitation polariton $\psi_{j}$ $(j=1,2,3,4)$ representing a coupled
excitation of the field and atomic coherence, and each polariton $\psi_{j}$ is
described by polariton $\hat{\Psi}_{\text{I}}$ or $\hat{\Psi}_{\text{II}}$.
The corresponding transverse section of the four atomic sub-ensembles is shown
in Fig. 3. By switching off the control field adiabatically, these coupled
excitations are converted into the spin wave excitations with a dominant DSP
component, i.e., the cluster state is stored. After a storage period $\tau$,
the stored field state can be retrieved by turning on $E_{c}$ adiabatically.
Once the four single-photons completely enter the EIT medium, under the
adiabatic condition, the state of atomic ensembles $A$, $B$, $C$ and $D$ will
adiabatically follow the specific eigen states of the Hamiltonian (dark
states). Then the dark states of the system are the direct products of those
corresponding to the subsystems $A$, $B$, $C$ and $D$, and the system state
vector $|\Phi(t)\rangle$ is given by
$\underset{\{q_{1},q_{2},q_{3},q_{4}\}}{|\Phi(t)\rangle}=|\mathbf{D}_{1}^{q_{1}}\rangle_{A}|\mathbf{D}_{1}^{q_{2}}\rangle_{B}|\mathbf{D}_{1}^{q_{3}}\rangle_{C}|\mathbf{D}_{1}^{q_{4}}\rangle_{D}$
$=(\cos\theta_{1}|1^{q_{1}}\rangle|c^{(0)}\rangle_{A}-\sin\theta_{1}|0^{q_{1}}\rangle|c^{(1)}\rangle_{A})\times(\cos\theta_{2}|1^{q_{2}}\rangle|c^{(0)}\rangle_{B}-\sin\theta_{2}|0^{q_{2}}\rangle|c^{(1)}\rangle_{B})\times(\cos\theta_{3}|1^{q_{3}}\rangle|c^{(0)}\rangle_{C}-\sin\theta_{3}|0^{q_{3}}\rangle|c^{(1)}\rangle_{C})\times(\cos\theta_{4}|1^{q_{4}}\rangle|c^{(0)}\rangle_{D}-\sin\theta_{4}|0^{q_{4}}\rangle|c^{(1)}\rangle_{D})$
$=\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}\cos\theta_{4}|1^{q_{1}},1^{q_{2}},1^{q_{3}},1^{q_{4}}\rangle\otimes|c^{(0)}\rangle_{A}|c^{(0)}\rangle_{B}|c^{(0)}\rangle_{C}|c^{(0)}\rangle_{D}-\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}\sin\theta_{4}|1^{q_{1}},1^{q_{2}},1^{q_{3}},0^{q_{4}}\rangle\otimes|c^{(0)}\rangle_{A}|c^{(0)}\rangle_{B}|c^{(0)}\rangle_{C}|c^{(1)}\rangle_{D}+\cdots+\sin\theta_{1}\sin\theta_{2}\sin\theta_{3}\sin\theta_{4}|0^{q_{1}},0^{q_{2}},0^{q_{3}},0^{q_{4}}\rangle\otimes|c^{(1)}\rangle_{A}|c^{(1)}\rangle_{B}|c^{(1)}\rangle_{C}|c^{(1)}\rangle_{D}.$
(8)
When the control field $E_{c}$ is adiabatically switched off
($\theta_{i}=\pi/2$, $i=1,2,3,4$), the states of the photonic components of
the four pulses are homogeneously and coherently mapped onto the collective
atomic excitations
$|1^{q_{1}}\rangle_{1}|1^{q_{2}}\rangle_{2}|1^{q_{3}}\rangle_{3}|1^{q_{4}}\rangle_{4}\longrightarrow|c^{(1)}\rangle_{A}|c^{(1)}\rangle_{B}|c^{(1)}\rangle_{C}|c^{(1)}\rangle_{D}.$
(9)
According to the polarizations of the four input single photons, from Eq. (9)
we have the following one-to-one mappings:
$\left(\begin{array}[]{c}|H\rangle_{1}\longrightarrow|c_{-}^{(1)}\rangle_{A}\\\
|H\rangle_{2}\longrightarrow|c_{-}^{(1)}\rangle_{B}\\\
|H\rangle_{3}\longrightarrow|c_{-}^{(1)}\rangle_{C}\\\
|H\rangle_{4}\longrightarrow|c_{-}^{(1)}\rangle_{D}\end{array}\right),\ \ \
\left(\begin{array}[]{c}|V\rangle_{1}\longrightarrow|c_{+}^{(1)}\rangle_{A}\\\
|V\rangle_{2}\longrightarrow|c_{+}^{(1)}\rangle_{B}\\\
|V\rangle_{3}\longrightarrow|c_{+}^{(1)}\rangle_{C}\\\
|V\rangle_{4}\longrightarrow|c_{+}^{(1)}\rangle_{D}\end{array}\right).$ (10)
Hence, the state $|\psi\rangle_{\text{atom}}$ of four atomic sub-ensembles
($A$, $B$, $C$ and $D$) will depend on the polarization of the input photons.
When the input state is a polarization-encoded cluster state, after
adiabatically turning off the control field, the state of the four atomic
ensembles is a cluster-type state:
$|\psi\rangle_{\text{atom}}=\frac{1}{2}[|c_{-}^{(1)}\rangle_{A}|c_{-}^{(1)}\rangle_{B}|c_{-}^{(1)}\rangle_{C}|c_{-}^{(1)}\rangle_{D}+|c_{+}^{(1)}\rangle_{A}|c_{+}^{(1)}\rangle_{B}\otimes|c_{-}^{(1)}\rangle_{C}|c_{-}^{(1)}\rangle_{D}+|c_{-}^{(1)}\rangle_{A}|c_{-}^{(1)}\rangle_{B}|c_{+}^{(1)}\rangle_{C}|c_{+}^{(1)}\rangle_{D}-|c_{+}^{(1)}\rangle_{A}|c_{+}^{(1)}\rangle_{B}|c_{+}^{(1)}\rangle_{C}|c_{+}^{(1)}\rangle_{D}].$
(11)
That is to say that the entangled photon state $|\phi_{\text{in}}\rangle$ is
coherently mapped to the entangled atomic state $|\psi\rangle_{\text{atom}}$.
At a later time, the entangled photon state can be retrieved on demand from
the entangled atomic state by turning on $E_{c}$ ($\theta_{i}=0$,
$i=1,2,3,4$). After passing through the $\lambda/4$ plates again, the
retrieved polarization-encoded cluster state is
$|\phi_{\text{out}}\rangle=\frac{1}{2}[|H\rangle_{1}|H\rangle_{2}|H\rangle_{3}|H\rangle_{4}+|V\rangle_{1}|V\rangle_{2}|H\rangle_{3}|H\rangle_{4}+|H\rangle_{1}|H\rangle_{2}|V\rangle_{3}|V\rangle_{4}-|V\rangle_{1}|V\rangle_{2}|V\rangle_{3}|V\rangle_{4}].$
(12)
We have described an ideal transfer of a polarization-encoded cluster state
between light fields and metastable states of atoms Twocloud . In the ideal
case, the retrieved pulses are identical to the input pulses, provided that
the same control power is used at the storage and the retrieval stages.
However, to realize the ideal storage, two conditions must be met: (1) the
whole pulse must be spatially compressed into the atomic ensemble, and (2) all
spectral components of the pulse must fit inside the EIT transparency window.
In Secs. III, IV and V, we consider realistic parameters for realizing the
proposal.
Next, we describe the measurement and verification processes for the retrieved
cluster state. Recently, Enk _et al._ Enk discussed a number of different
entanglement-verification protocols: teleportation, violation of
Bell-Clauser-Horne-Shimony-Holt (CHSH) inequalities, quantum state tomography,
entanglement witnesses Toth , and direct measurements of entanglement. As for
the four-
qubit cluster state, the entanglement is verified by measuring the
entanglement witness $\mathcal{W}$. The expectation value of $\mathcal{W}$ is
positive for any separable state, whereas its negative value detects four-
party entanglement close to the cluster state. The theoretically optimal
expectation value of $\mathcal{W}$ is
Tr($\mathcal{W}\rho_{\text{theory}}$)$=-1$ for the cluster state Toth .
## III Analysis of Efficiency and Fidelity
In this section, we analyze the efficiency and the fidelity of the memory. The
memory efficiency is defined as the ratio of the number of retrieved photons
to the number of incident photons Gorshkov07PRL ; Gorshkov07PRA ;
Gorshkov08PRA ; Phillips :
$\eta=\int_{T+\tau}^{\infty}|\hat{\mathcal{E}}_{\text{out}}(t)|^{2}dt/\int_{0}^{T}|\hat{\mathcal{E}}_{\text{in}}(t)|^{2}dt,$
(13)
where $\tau$ is a storage period. Recently, several proposals have been presented
Gorshkov07PRL ; Gorshkov07PRA ; Gorshkov08PRA for the optimal efficiency of
light storage and retrieval under the restrictions and limitations of a given
system. Based on these proposals, two optimal protocols have been demonstrated
experimentally Novikova07 ; Novikova08 . The first protocol iteratively
optimizes the input pulse shape for any given control field Novikova07 , while
the second protocol uses optimal control fields calculated for any given input
pulse shape Novikova08 . As for our cluster storage situation, it is difficult
to shape the input signal pulses. Then the second protocol Novikova08 ;
Phillips could be used to improve the efficiency of storage and retrieval of
given signal pulses.
Using the method introduced by Gorshkov _et al._ Gorshkov07PRA , we plot the
optimal efficiency $\eta$ of any one of the four input fields, since the four
fields are theoretically equivalent. When considering the spin wave decay, one
should simply multiply the efficiency by $\exp(-2\gamma_{s}\tau)$
Gorshkov07PRA , which decreases the efficiency. Figure 4 shows the optimal
efficiency $\eta$ versus the optical depth $d$ for different spin wave decays
$\gamma_{s}$. It is desirable to read out as fast as possible because of the
spin wave decay ($\tau\ll 1/\gamma_{s}$).
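For instance, a dimensionless storage period of $\gamma_{s}\tau=0.1$ already multiplies the retrieved efficiency by $e^{-0.2}\approx 0.82$, i.e., it costs roughly 18% of the signal.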
Next, we analyze the fidelity of the memory. For the density matrices
$\rho_{\text{in}}$ and $\rho_{\text{out}}$ of the input and output quantum
states, the fidelity is defined as Uhlmann ; NielsenBook :
$F=(\mathtt{Tr}\sqrt{\rho_{\text{in}}^{1/2}\rho_{\text{out}}\rho_{\text{in}}^{1/2}})^{2}.$
(14)
For two pure states, their fidelity coincides with the overlap. This number is
equal to one only for the ideal channel transmitting or storing the quantum
state perfectly. We describe a method of the projector-based entanglement
witness (Tr($\mathcal{W}\rho_{\text{exp}}$)) in Sec. II to verify the cluster
state, which is also used to obtain information about the fidelity
$F=\langle\phi_{\text{in}}|\rho_{\text{exp}}|\phi_{\text{in}}\rangle$ Toth ;
Kiesel ; Vallone from the measurement process. The observed fidelity $F>1/2$
assures that the retrieved state has genuine four-qubit entanglement Toth ;
Tokunaga . A high-fidelity quantum memory is necessary for quantum information
and quantum communication. The influence of nonsimultaneous retrieval on the
fidelity will be discussed in Sec. V.
## IV Lifetime of quantum memory
Figure 4: (Color online) $\eta$ is the optimal efficiency for adiabatic
storage and retrieval with dimensionless spin wave decay $\gamma_{s}\tau$.
In this section, we discuss the lifetime of the proposed memory. Quantum
memories for storage and retrieval of quantum information are extremely
sensitive to environmental influences, which limits their storage time. In
many experiments with cold atoms in a MOT, the quadrupole magnetic field is
the main source of the atomic memory decoherence Matsukevich ; Polyakov .
Using free cold rubidium atoms released from the MOT MatsukevichPRL06 , the
coherence time can be increased MatsukevichPRL05 , and the longest quantum
memory time reported to date for such a system is $32$ $\mu$s MatsukevichPRL06
without using clock states, where the time is limited by dephasing of
different Zeeman components in the residual magnetic field. Recently, using
magnetic-field-insensitive states (clock states), the storage time of a
quantum memory storing a single excitation has been improved to $1$ ms Zhaobo . Zhao
et al. Zhao also used the magnetically insensitive clock transition for the
quantum memory, but they confined the cold rubidium atomic ensemble in a one-
dimensional optical lattice, and the memory lifetime improved to exceed
$6$ ms.
In our scheme, all the light fields responsible for trapping and cooling, as
well as the quadrupole magnetic field in the MOT, are shut off during the
period of the storage and retrieval process; ballistic expansion of the freely
falling gas then provides a longer limit on the memory time. Assuming a
one-dimensional case and a Maxwell-Boltzmann velocity distribution
$f(v_{z})=\sqrt{M/2\pi k_{B}T}e^{-Mv_{z}^{2}/2k_{B}T}$, the atomic mean speed
is $\langle v\rangle=\sqrt{k_{B}T/M}$, where $k_{B}$ is the Boltzmann
constant, $T$ is the temperature, and $M$ is the atomic mass. After a storage
period $\tau$, the
collective state describing the spin wave $|c^{(1)}(t)\rangle$ evolves to
$|c^{(1)}(t+\tau)\rangle=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}e^{i\Delta
k(z_{j}+v_{j}\tau)}|b_{1},b_{2},...,c_{j},...,b_{N}\rangle,$ (15)
where $\Delta k=k_{p}-k_{c}^{||}+q$ is the wavevector of the spin wave. For a
narrow transparency window $\Delta\omega_{tr}$, $\left|q\right|\ll k$, and the
influence of the field mode $q$ on the storage time can be ignored. Due to
atomic random motion the stored information begins to decrease, and the
information obtainable by the retrieval process is proportional to
$R=\left|\langle c^{(1)}(t)|c^{(1)}(t+\tau)\rangle\right|^{2}=\left|\frac{1}{N}\sum_{j=1}^{N}e^{i\Delta kv_{j}\tau}\right|^{2}=\left|\int f(v)e^{i\Delta kv\tau}dv\right|^{2}\approx\exp\left(-\frac{\tau^{2}}{\tau_{s}^{2}}\right),$
(16)
where the integration limits $\pm\infty$ are used to obtain the analytic
solution and $\tau_{s}$ is the $e^{-1}$-coherence time given as Yoshikawa05
$\tau_{s}=\frac{1}{\Delta k\langle v\rangle}.$ (17)
The dephasing induced by atomic random motion can also be described by the
grating wavelength of the spin wave Zhaobo ; Zhao
$\lambda_{s}=\frac{2\pi}{\Delta k}.$ (18)
The parameters of a suitably cold atomic cloud provided by a MOT
MatsukevichPRL05 ; MatsukevichPRL05 for our proposal are as follows: an
optical depth of about $d=10$, a temperature of $T=70$ $\mu$K, and an atomic
mean speed $\langle v\rangle=\sqrt{k_{B}T/M}\simeq 8$ cm/s for the rubidium
mass $M$. From Eq. (17), the lifetime of the spin wave can be tuned by varying
its wave vector.
In case $1$, where the control field and the probe fields propagate in
orthogonal directions, $\Delta k\simeq k_{p}$: the spin wave grating
wavelength is about the light wavelength and the spin wave would dephase
rapidly due to atomic motion, so $\tau_{s}\simeq 1.5$ $\mu$s. Considering the
efficiency of the memory, the storage time is less than 1 $\mu$s.
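As a rough consistency check of this estimate (assuming a probe wavelength near the rubidium D lines, $\lambda_{p}\approx 0.78$ $\mu$m, which is our assumption rather than a value quoted above): $\Delta k\simeq k_{p}=2\pi/\lambda_{p}\approx 8\times 10^{6}$ m$^{-1}$, so Eq. (17) gives $\tau_{s}=1/(\Delta k\langle v\rangle)\approx 1/(8\times 10^{6}\times 0.08)$ s $\approx 1.6$ $\mu$s, consistent with the quoted $\tau_{s}\simeq 1.5$ $\mu$s.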
In case $2$, the Doppler-free collinear propagation configuration, almost no
photonic momentum is transferred to the atoms. The atomic coherence is
localized in the longitudinal direction, in which the oscillations have the
small beat frequency $\Delta\omega\simeq(k_{p}-k_{c}^{||})c=6.8$ GHz between
the probe and the control fields, and the spin-wave wavelength calculated from
Eq. (18) is $\lambda_{s}\approx 27$ cm. The dephasing induced by the atomic
motion in this localized region is very slow due to the large spin-wave
wavelength $\lambda_{s}$, and the computed lifetime $\tau_{s}$ is large.
However, atomic random motion would spread the localized excitation from one
ensemble to another, which would result in the stored information being
quickly lost. Considering the atoms flying out of the localized atomic
ensemble and the laser beam waist $D=100$ $\mu$m, the lifetime of the memory
can be estimated as $\tau=D/(2v)\sim 300$ $\mu$s.
Another factor that influences the memory time is the decoherence of the
excited state. The DSP is protected against incoherent decay processes acting
on the excited states because of the adiabatic elimination of the excited
states. Although the collective state $|c^{(1)}\rangle$ is an entangled state
of $N$ atoms, its decoherence time is not much different from that of the
quantum state stored in an individual atom and it is quite stable against one-
atom (or few-atom) losses Mewes . Due to the memory efficiency, the maximum
storage time of our proposal must be far smaller than $\tau_{s}$ ($\tau\ll
1/\gamma_{s}$), or else the efficiency will be low. From Fig. 4, in order to
balance the efficiency and preserve the entanglement, suitable storage times
for cases $1$ and $2$ are less than or equal to $0.15$ and $30$ $\mu$s,
respectively.
## V Discussion
In this section, we discuss some restrictions. First, we assume that the input
the four-photon cluster state $|\phi_{\mathtt{in}}\rangle$ generated by
experiment clusterexp ; Kiesel ; Zhang ; Prevedel ; Lu ; Vallone ; Chenprl ;
Tokunaga has high fidelity. Then the four-photon cluster state should be sent
into the atomic cloud simultaneously, which requires that the four incidence
points are symmetric about major axis of ellipsoid. If there is a large
difference, one single-photon probe field is not synchronously sent into the
atomic ensemble with other fields. Then a fraction of single-photon wave
packet, captured in the form of a spin wave, is stored for a time period
$\tau$. The efficiency of light storage will decrease, and the shape of the
output pulse is different from the initial pulse. This case can be avoided
using the perfect cluster state and choosing symmetric atomic ensembles.
Second, in the retrieval process, we assume that one stored field is not
simultaneously retrieved or is not retrieved even. If the four fields are
retrieved non-simultaneously and only one field is retrieved with a little
delay, the entanglement is preserved and the entanglement degree decreases
Curtis . If one field is retrieved with a certain probability, without loss of
generality, we also choose field $E_{1}$. After the retrieval, the field
$E_{1}$ can be written as
$(|0\rangle_{1}+\beta_{1}|1\rangle_{1})/\sqrt{1+|\beta_{1}|^{2}}$, so the
fidelity is
$F=|\langle\phi_{\text{in}}|\phi_{\text{out}}^{\prime}\rangle|^{2}=|\beta_{1}|^{2}/(1+|\beta_{1}|^{2})$.
The retrieval state is a cluster state provided that $|\beta_{1}|^{2}>1$ Toth
; Tokunaga . In order to retrieve the frequency-entangled state with high
fidelity $F>95\%$, then the coefficient $|\beta_{1}|^{2}$ should be
necessarily more than $20$. That is to say that the four fields would be
retrieved nearly simultaneously, or else the fidelity is low. The third factor
that limits the performance of the storage is adiabatic condition. Adiabatic
following occurs when the population in the excited and bright states is small
at all times. For a pulse duration $T$ and a line-width of the excited state
$\gamma$, the adiabatic condition is $g^{2}N\gg\gamma/T$.
## VI Conclusion
In conclusion, we present a scheme for realizing quantum memory for the four-
photon polarization encoded cluster state. Our proposal can be realized by
current technology Choi ; Simon ; MatsukevichPRL06 ; MatsukevichPRL062 ;
Chen07 . The quantum memory of the cluster is essential for “one-way” quantum
computing, and we also expect the ability to store multiple optical modes to
be useful for the quantum information and all-optical quantum computing
network.
## References
* (1) H. J. Briegel and R. Raussendorf, Phys. Rev. Lett. 86, 910 (2001).
* (2) R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001); R. Raussendorf, D. E. Browne, and H. J. Briegel, Phys. Rev. A 68, 022312 (2003)
* (3) M. A. Nielsen, Phys. Rev. Lett. 93, 040503 (2004).
* (4) P. Walther, K. J. Resch, T. Rudolph, E. Schenck, H. Weinfurter, V. Vedral, M. Aspelmeyer, and A. Zeilinger, Nature 434, 169 (2005); P. Walther, M. Aspelmeyer, K. J. Resch, and A. Zeilinger, Phys. Rev. Lett. 95, 020403 (2005).
* (5) N. Kiesel, C. Schmid, U. Weber, G. Tóth, O. Gühne, R. Ursin, and H. Weinfurter, Phys. Rev. Lett. 95, 210502 (2005).
* (6) A. N. Zhang, C. Y. Lu, X. Q. Zhou, Y. A. Chen, Z. Zhao, T. Yang, and J. W. Pan, Phys. Rev. A 73, 022330 (2006).
* (7) R. Prevedel, P. Walther, F. Tiefenbacher, P. Böhi, R. Kaltenbaek, T. Jennewein, A. Zeilinger, Nature 445, 65 (2007).
* (8) C. Y. Lu, X. Q. Zhou, O. Gühne, W. B. Gao, J. Zhang, Z. S. Yuan, A. Goebel, T. Yang and J. W. Pan, Nature Phys. 3, 91 (2007).
* (9) G. Vallone, E. Pomarico, P. Mataloni, F. De Martini, and V. Berardi, Phys. Rev. Lett. 98, 180502 (2007).
* (10) K. Chen, C. M. Li, Q. Zhang, Y. A. Chen, A. Goebel, S. Chen, A. Mair, and J. W. Pan, Phys. Rev. Lett. 99, 120503 (2007).
* (11) Y. Tokunaga, S. Kuwashiro, T. Yamamoto, M. Koashi, and N. Imoto, Phys. Rev. Lett. 100, 210501 (2008).
* (12) E. Knill, R. Laflamme, G. J. Milburn, Nature 409, 46 (2001).
* (13) L. M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, Nature 414, 413 (2001).
* (14) B. Julsgaard, J. Sherson, J. I. Cirac, J. Fiurášek, E. S. Polzik, Nature (London) 432, 482 (2004).
* (15) T. Chanelière D. N. Matsukevich, S. D. Jenkins, S.-Y. Lan, T. A. B. Kennedy, and A. Kuzmich, Nature 438, 833 (2005).
* (16) M. D. Eisaman, A. Andrè, F. Massou, M. Fleischhauer, A. S. Zibrov, and M. D. Lukin, Nature 438, 837 (2005).
* (17) K. Honda, D. Akamatsu, M. Arikawa, Y. Yokoi, K. Akiba, S. Nagatsuka, T. Tanimura, A. Furusawa, and M. Kozuma, Phys. Rev. Lett. 100, 093601 (2008).
* (18) J. Appel, E. Figueroa, D. Korystov, M. Lobino, and A. I. Lvovsky, Phys. Rev. Lett. 100, 093602 (2008).
* (19) C. Liu, Z. Dutton, C. H. Behroozi, L. V. Hau, Nature 409, 490 (2001).
* (20) M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. 84, 5094 (2000).
* (21) A. E. Kozhekin, K. Molmer, and E. Polzik, Phys. Rev. A, 62, 033809 (2000).
* (22) J. Laurat, H. de Riedmatten, D. Felinto, C. W. Chou, E. W. Schomburg, and H. J. Kimble, Opt. Express 14, 6912 (2006).
* (23) S. A. Moiseev and S. Kroll, Phys. Rev. Lett. 87, 173601 (2001).
* (24) I. E. Mazets and B. G. Matisov, Pis’ma Zh. Eksp. Theor. Fiz. 64, 473 (1996) [JETP Lett. 64, 515 (1996)].
* (25) M. Fleischhauer and M. D. Lukin, Phys. Rev. A 65, 022314 (2002).
* (26) A. Raczyński, J. Zaremba, and S. Zielinśka-Kaniasty, Phys. Rev. A 69, 043801 (2004).
* (27) Z. Li, L. Xu, and K. Wang, Phys. Lett. A 346, 269 (2005).
* (28) Y. D. Chong and M. Soljac̆ić, Phys. Rev. A 77, 013823 (2008).
* (29) H. Kang, G. Hernandez, J. P. Zhang, and Y. F. Zhu, Phys. Rev. A 73, 011802(R) (2006).
* (30) Y. Yoshikawa, K. Nakayama, Y. Torii, and T. Kuga, Phys. Rev. Lett. 99, 220407 (2007).
* (31) K. S. Choi, H. Deng, J. Laurat, and H. J. Kimble, Nature 452, 67 (2008).
* (32) M. D. Lukin, S. F. Yelin, and M. Fleischhauer, Phys. Rev. Lett. 84, 4232 (2000).
* (33) D. Petrosyan, J. Opt. B: Quantum Semiclass. Opt. 7, S141 (2005).
* (34) M. D. Lukin, and A. Imamoǧlu, Phys. Rev. Lett. 84, 1419 (2000).
* (35) D. N. Matsukevich and A. Kuzmich, Science 306, 663 (2004).
* (36) C. W. Chou, J. Laurat, H. Deng, K. S. Choi, H. de Riedmatten, D. Felinto, H. J. Kimble, Science 316, 1316 (2007).
* (37) J. Simon, H. Tanji, S. Ghosh, and V. Vuletić, Nature Phys. 3, 765 (2007).
* (38) Y. Yoshikawa, K. Nakayama, Y. Torii, and T. Kuga, Phys. Rev. A 79, 025601 (2009).
* (39) This cluster state memory can also be realized by mapping the four pulses in and out by extending one cold cloud to two cold clouds in order to avoid the light fields spatial overlapping, each cloud mapping two pulses.
* (40) S. J. van Enk, N. Lütkenhaus, and H. J. Kimble, Phys. Rev. A 75, 052318 (2007).
* (41) G. Tóth and O. Gühne, Phys. Rev. Lett. 94, 060501 (2005); G. Tóth and O. Gühne, Phys. Rev. A 72, 022340 (2005).
* (42) A. V. Gorshkov, A. André, M. Fleischhauer, A. S. Sørensen, and M. D. Lukin, Phys. Rev. Lett. 98, 123601 (2007).
* (43) A. V. Gorshkov, A. André, M. D. Lukin, and A. S. Sørensen, Phys. Rev. A 76, 033804 (2007); ibid. 76, 033805 (2007); ibid. 76, 033806 (2007).
* (44) N. B. Phillips, A. V. Gorshkov, and I. Novikova, Phys. Rev. A 78, 023801 (2008).
* (45) A. V. Gorshkov, T. Calarco, M. D. Lukin, and A. S. Sørensen, Phys. Rev. A textbf77, 043806 (2008).
* (46) I. Novikova, A. V. Gorshkov, D. F. Phillips, A. S. Sørensen, M. D. Lukin, and R. L. Walsworth, Phys. Rev. Lett. 98, 243602 (2007).
* (47) I. Novikova, N. B. Phillips, and A. V. Gorshkov, Phys. Rev. A 78, 021802(R) (2008).
* (48) A. Uhlmann, Rep. Math. Phys. 9, 273 (1976).
* (49) M. A. Nielsen and I. L. Chuang, _Quantum computing and Quantum Information_ (Cambridge University Press, Cambridge, 2000).
* (50) S. V. Polyakov, C.W. Chou, D. Felinto, and H. J. Kimble, Phys. Rev. Lett. 93, 263601 (2004).
* (51) D. N. Matsukevich, T. Chanelière, S. D. Jenkins, S.-Y. Lan, T. A. B. Kennedy, and A. Kuzmich, Phys. Rev. Lett. 97, 013601 (2006).
* (52) D. N. Matsukevich, T. Chanelière, M. Bhattacharya, S.-Y. Lan, S. D. Jenkins, T. A. B. Kennedy, and A. Kuzmich, Phys. Rev. Lett. 95, 040405 (2005).
* (53) B. Zhao, Y. A. Chen, X. H. Bao, T. Strassel, C. S. Chuu, X. M. Jin, J. Schmiedmayer, Z. S. Yuan, S. Chen, J. W. Pan, Nature Phys. 5, 195 (2009).
* (54) R. Zhao, Y. O. Dudin, S. D. Jenkins1, C. J. Campbell, D. N. Matsukevich, T. A. B. Kennedy and A. Kuzmich, Nature Phys. 5, 100 (2009).
* (55) Y. Yoshikawa, Y. Torii, and T. Kuga, Phys. Rev. Lett. 94, 083602 (2005).
* (56) C. Mewes and M. Fleischhauer, Phys. Rev. A 72, 022327 (2005).
* (57) C. J. Broadbent, R. M. Camacho, R. Xin, and J. C. Howell, Phys. Rev. Lett. 100, 133602 (2008).
* (58) D. N. Matsukevich, T. Chanelière, S. D. Jenkins, S.-Y. Lan, T. A. B. Kennedy, and A. Kuzmich, Phys. Rev. Lett. 96, 033601 (2006).
* (59) Y. A. Chen, S. Chen, Z. S. Yuan, B. Zhao, C. S. Chuu, J. Schmiedmayer, J. W. Pan, Nature Phys. 4, 103 (2008).
|
arxiv-papers
| 2009-05-15T03:25:02 |
2024-09-04T02:49:02.634384
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Chun-Hua Yuan, Li-Qing Chen, Weiping Zhang",
"submitter": "Chun-Hua Yuan",
"url": "https://arxiv.org/abs/0905.2462"
}
|
0905.2472
|
arxiv-papers
| 2009-05-15T05:09:34 |
2024-09-04T02:49:02.640675
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "L. Shen, J. B. Yi, B. C. Zhao, S. C. Xiang, B. L. Chen, M.-F. Ng,\n S.-W. Yang, L. Wang, J. Ding, and Y. P. Feng",
"submitter": "Lei Shen",
"url": "https://arxiv.org/abs/0905.2472"
}
|
|
0905.2637
|
So-called $N$-body problems arise in many areas (e.g. astrophysics, molecular
dynamics, vortex methods, electrostatics). In these problems, the system is
described by a set of $N$ particles, and the dynamics of the system is the
result of the interactions that occur for every pair of particles. To
calculate all such interactions, the total number of operations required
normally scales as $N^{2}$. One useful way to mathematically express an
$N$-body problem is by means of a matrix-vector multiplication, where the
matrix is dense and represents the particle interactions, and the vector
corresponds to the weights of the particles. Thus, the mat-vec product
corresponds to the evaluation of all pairwise interactions. In a naive
implementation, we would directly create the matrix in computer memory and
then perform the multiplication with the vector. This naive approach would
prove feasible only for small $N$, as the computational requirements in
processing power and memory both grow as $N^{2}$. For this reason, many
efforts have been directed at producing efficient implementations, capable of
performing the mat-vec operation at reduced memory requirements and operation
counts.
The various methods for more efficient calculation of the particle interaction
problem can be broadly classified in two types: mesh-based interpolation
methods, and hierarchical or multiresolution methods. The basic mesh-based
method is the particle-mesh (pm) approach, in which particle information is
interpolated onto a lattice, the field of interest is solved on the mesh by
means of some grid-based method such as finite difference, and the field
information is finally interpolated back to the particle locations
HockneyEastman81. (This method is also called particle-in-cell, pic.) In some
applications, such as molecular dynamics, the smoothing at the short-range
introduced by the interpolations in the pm method is unacceptable. An
alternative mesh-based method can then be used in which all near-field
potentials are calculated directly, while far-field effects are calculated
with the pm method; this is called the particle-particle/particle-mesh method
(p3m).
The second class of methods provides efficient computation of the interactions
by means of a hierarchical or multilevel approach. The main subset of this
class performs a hierarchical subdivision of the computational domain, which
is used to group particles at different length scales, and then approximates
the interactions of clusters of particles using series expansions. The
approximation is applied to far-field interactions, while near-field
interactions are summed directly. This type of methods can be considered
meshfree, and includes: tree codes BarnesHut1986,Appel1985, the fast multipole
method () greengard+rokhlin1987 and its variations
CarrierGreengardRokhlin88,Anderson92,YingBirosZorin2004,GimbutasRokhlin2003.
An alternative multilevel method has been presented by Skeel
SkeelTezcanHardy02, which instead of using series expansions for the far field
approximations utilizes a multigrid approach. The approximation of the field
is in this case performed after a splitting of the particle potentials into a
smooth part (matching the far field) and a nonsmooth part (acting only in the
near field). While the nonsmooth (purely local) part can be calculated
directly at low cost, a multigrid method is used to approximate the smooth
component. For this approximation, the method relies on gridded basis
functions of compact support, like in p3m, but unlike p3m it provides
acceleration via multiple grid levels, achieving complexity. Thus, this method
applies the multilevel approach via field decomposition, rather than spatial
decomposition; it can perhaps be viewed as a hybrid of the mesh-based and
hierarchical methods.
In this paper, we present the theory and development of a parallel fast
multipole library, ††stands for ‘portable extensible toolkit for ’, as in ,
which is ‘portable extensible toolkit for scientific computing’. , belonging
to the meshfree type of methods described above. The overarching goal is to
unify the efforts in the development of -like methods into an open-source
library, that provides a framework with the capacity to accommodate memory
efficiency, parallelism and the data structures used for spatial
decomposition. The software is implemented utilizing
http://www.mcs.anl.gov/petsc/petsc-as/, the parallel library for scientific
computing developed over more than 17 years at Argonne National Laboratory
petsc-manual. At this point, we have a complete implementation of the in
parallel, with dynamic load balancing provided by means of an optimization
approach—minimizing inter-node communications and per-node computational work.
But a prominent feature of this implementation is that it is designed to be
extensible, so that it can effectively unify efforts involving many algorithms
which are based on the same principles of the . The perspectives for
extensibility are described in this paper as well.
The development of this extensible parallel library for $N$-body interactions
is important due to the programming difficulty associated with the algorithm,
which has been a barrier for adoption and arguably diminished its potential
impact. A critical stage in the maturation of a computational field begins
with the widespread availability of community software for the central
algorithms. A good example is molecular dynamics, which blossomed after
introduction of freely available packages, such as charmm CHARMM1983 and namd
NAMD2005. Such game-changing community software does not exist for particle
methods or for $N$-body computations in general.
The present paper does not merely describe a new parallel strategy and
implementation, it also details an exhaustive model for the computation of
tree-based $N$-body algorithms in parallel. Our model is a significant
extension of the time model developed in GreengardGropp1990, which assumed a
uniform distribution of the particles among processors and does not address
load balancing or communication overheads. With our model, which includes both
work estimates and communication estimates, we are able to implement a method
to provide a priori, automatic load balancing.
The first parallel implementations of the was that of Greengard and Gropp
GreengardGropp1990, on a shared memory computer. They also presented a timing
model for a perfectly balanced implementation, of which we say more in §work.
More recent versions of the parallel have been presented in
Ying03anew,Have2003,Ogata2003,kurzakPettit2005. Many of these codes produce a
partition of the data among processors based upon a space-filling curve, as
previously introduced for the case of parallel treecodes in warren+salmon93.
Only one of these parallel codes represents a supported, open source code
available for community use. The kifmm3d code can be downloaded and modified
under the gpl license. It is not an implementation of the classic algorithm,
however, but rather a kernel-independent version developed by the authors.
This algorithm does not utilize series expansions, but instead uses values on
a bounding surface of each subdomain obtained through an iterative solution
method; these ideas are based on Anderson92. The code does not appear to allow
easy extension to traditional , or to the other related algorithms alluded to
above. Moreover, it does not appear to be designed as an embedded library
component, part of a larger multiphysics simulation.
This paper is organized as follows. We present first a brief overview of the
algorithm; this presentation is necessarily cursory, as a large body of
literature has been written about this method. A basic description, however,
is necessary to agree on a common terminology for the rest of the paper. Our
approach is to illustrate the method using graphical representations. The
following section (§particle) describes our client application code, the
vortex particle method for simulation of incompressible flow at high Reynolds
numbers. In this method, the is one of two approaches commonly used to obtain
the velocity of particles from the vorticity field information; the second
approach (as in other $N$-body problems) is to interpolate information back
and forth from a mesh, while solving for the field of interest on the mesh.
Although equally efficient—both can be —the extensive use of interpolations
may introduce numerical diffusion, which is undesirable in certain
applications. Next, §parallel discusses our parallelization strategy. The goal
is achieving optimal distribution of the computational work among processors
and minimal communication requirement. Our approach to parallelization is
original in the use of an optimization method to obtain the parallel
partitioning. To be able to apply such an optimization, there is need for good
estimates of the computational work required for algorithmic components, as
well as communication requirements. The development of these estimates, in
addition to memory estimates, is presented in §work. The subsequent section
(§software) presents details of our software design, implementation and
verification carried out. Results of computational experiments with the
parallel software are presented in §results, and we end with some conclusions
and remarks about future work.
Note that a very simple and flexible, yet efficient and scalable code has been
produced. The entire implementation of is only 2600 lines of C++, including
comments and blank lines. It is already freely downloadable from
http://petsc.cs.iit.edu/petsc/ and we welcome correspondence with potential
users or those who wish to extend it for specific purposes.
|
arxiv-papers
| 2009-05-15T23:46:14 |
2024-09-04T02:49:02.647223
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Felipe A. Cruz, Matthew G. Knepley, L. A. Barba",
"submitter": "Felipe Cruz",
"url": "https://arxiv.org/abs/0905.2637"
}
|
0905.2648
|
# Statistical properties and decoherence of two-mode photon-subtracted
squeezed vacuum††thanks: Work supported by the the National Natural Science
Foundation of China under Grant Nos.10775097 and 10874174.
Li-yun Hu1,2, Xue-xiang Xu1,2 and Hong-yi Fan2 Corresponding author. Email:
hlyun@sjtu.edu.cn; hlyun2008@126.com. 1College of Physics and Communication
Electronics, Jiangxi Normal University, Nanchang 330022, China 2Department of
Physics, Shanghai Jiao Tong University, Shanghai, 200240, China
###### Abstract
We investigate the statistical properties of the photon-subtractions from the
two-mode squeezed vacuum state and its decoherence in a thermal environment.
It is found that the state can be considered as a squeezed two-variable
Hermite polynomial excitation vacuum and the normalization of this state is
the Jacobi polynomial of the squeezing parameter. The compact expression for
Wigner function (WF) is also derived analytically by using the Weyl ordered
operators’ invariance under similar transformations. Especially, the
nonclassicality is discussed in terms of the negativity of WF. The effect of
decoherence on this state is then discussed by deriving the analytical time
evolution results of WF. It is shown that the WF is always positive for any
squeezing parameter and photon-subtraction number if the decay time exceeds an
upper bound ($\kappa t>\frac{1}{2}\ln\frac{2\bar{n}+2}{2\bar{n}+1}).$
Key Words: photon-subtraction; nonclassicality; Wigner function; negativity;
two-mode squeezed vacuum state
PACS numbers: 03.65.Yz, 42.50.Dv
## I Introduction
Entanglement is an important resource for quantum information 1 . In a quantum
optics laboratory, Gaussian states, being characteristic of Gaussian Wigner
functions, have been generated but there is some limitation in using them for
various tasks of quantum information procession 2 . For example, in the first
demonstration of continuous variables quantum teleportation (two-mode squeezed
vacuum state as a quantum channel), the squeezing is low, thus the
entanglement of the quantum channel is such low that the average fidelity of
quantum teleportation is just more $\left(8\pm 2\right)\%$ than the classical
limits. In order to increase the quantum entanglement there have been
suggestions and realizations to engineering the quantum state by subtracting
or adding photons from/to a Gaussian field which are plausible ways to
conditionally manipulate nonclassical state of optical field 2a ; 2b ; 2c ; 2d
; 2e ; 2f ; 2g . In fact, such methods allowed the preparation and analysis of
several states with negative Wigner functions, including one- and two-photon
Fock states 3 ; 4 ; 5 ; 6 , delocalized single photons 7 ; 8 , and photon-
subtracted squeezed vacuum states (PSSV), very similar to quantum
superpositions of coherent states with small amplitudes (a Schrödinger kitten
state 9 ; 10 ; 11 ; 12 ) for single-mode case.
Recently, the two-mode PSSVs (TPSSVs) have been paid enough attention by both
experimentalists and theoreticians 3 ; 4 ; 13 ; 14 ; 15 ; 16 ; 17 ; 18 ; 19 ;
20 ; 21 ; 22 . Olivares et al 13 ; 14 considered the photon subtraction using
on–off photodetectors and showed the improvement of quantum teleportation
depending on various parameters involved. Then they further studied the
nonlocality of photon subtraction state in the presence of noise 15 . Kitagawa
et al 16 , on the other hand, investigated the degree of entanglement for the
TPSSV by using an on-off photondetector. Using operation with single photon
counts, Ourjoumtsev et al .3 ; 4 have demonstrated experimentally that the
entanglement between Gaussian entangled states, can be increased by
subtracting only one photon from two-mode squeezed vacuum states. The resulted
state is a complex quantum state with a negative two-mode Wigner function.
However, so far as we know, there is no report about the nonclassicality and
decoherence of TPSSV for arbitrary number PSSV in literature before.
In this paper, we will explore theoretically the statistical properties and
decoherence of arbitrary number TPSSV. This paper is arranged as follows: in
Sect. II we introduce the TPSSV, denoted as
$a^{m}b^{n}S_{2}(\lambda)\left|00\right\rangle,$ where $S_{2}(\lambda)$ is
two-mode squeezing operator with $\lambda$ being squeezing parameter and $m,n$
are the subtracted photon number from $S_{2}(\lambda)\left|00\right\rangle$
for mode $a$ and $b$, respectively. It is found that it is just a squeezed
two-variable Hermite polynomial excitation on the vacuum state, and then the
normalization factor for $a^{m}b^{n}S_{2}(\lambda)\left|00\right\rangle$ is
derived, which turns out to be a Jacobi polynomial, a remarkable result. In
Sec. III, the quantum statistical properties of the TPSSV, such as
distribution of photon number, squeezing properties, cross-correlation
function and antibunching, are calculated analytically and then be discussed
in details. Especially, in Sec. IV, the explicit analytical expression of
Wigner function (WF) of the TPSSV is derived by using the Weyl ordered
operators’ invariance under similar transformations, which is related to the
two-variable Gaussian-Hermite polynomials, and then its nonclassicality is
discussed in terms of the negativity of WF which implies the highly
nonclassical properties of quantum states. Sec. V is devoted to studying the
effect of the decoherence on the TPSSV in a thermal environment. The
analytical expressions for the time-evolution of the state and its WF are
derived, and the loss of nonclassicality is discussed in reference of the
negativity of WF due to decoherence. We find that the WF for TPSSV has no
chance to present negative value for all parameters $\lambda$ and $m,n$ if the
decay time $\kappa t>\frac{1}{2}\ln\frac{2\bar{n}+2}{2\bar{n}+1}\ $(see
Eq.(46) below), where $\bar{n}$ denotes the average thermal photon number in
the environment with dissipative coefficient $\kappa$.
## II Two-mode photon-subtracted squeezed vacuum states
### II.1 TPSSV as the squeezed two-variable Hermite polynomial excitation
state
The definition of the two-mode squeezed vacuum state is given by
$S_{2}(\lambda)\left|00\right\rangle=\text{sech}\lambda\exp(a^{\dagger}b^{\dagger}\tanh\lambda)\left|00\right\rangle,$
(1)
where $S_{2}(\lambda)=\exp[\lambda(a^{\dagger}b^{\dagger}-ab)]$ is the two-
mode squeezing operator 23 ; 24 ; 25 with $\lambda$ being a real squeezing
parameter, and $a$, $b$ are the Bose annihilation operators,
$[a,a^{\dagger}]=[b,b^{\dagger}]=1$. Theoretically, the TPSSV can be obtained
by repeatedly operating the photon annihilation operator $a$ and $b$ on
$S_{2}(\lambda)\left|00\right\rangle$, defined as
$\left|\lambda,m,n\right\rangle=a^{m}b^{n}S_{2}(\lambda)\left|00\right\rangle,$
(2)
where $\left|\lambda,m,n\right\rangle$ is an un-normalization state. Noticing
the transform relations,
$\displaystyle S_{2}^{\dagger}(\lambda)aS_{2}(\lambda)$
$\displaystyle=a\cosh\lambda+b^{\dagger}\sinh\lambda,$ $\displaystyle
S_{2}^{\dagger}(\lambda)bS_{2}(\lambda)$
$\displaystyle=b\cosh\lambda+a^{\dagger}\sinh\lambda,$ (3)
we can reform Eq.(2) as
$\displaystyle\left|\lambda,m,n\right\rangle$
$\displaystyle=S_{2}(\lambda)S_{2}^{\dagger}(\lambda)a^{m}b^{n}S_{2}(\lambda)\left|00\right\rangle$
$\displaystyle=S_{2}(\lambda)(a\cosh\lambda+b^{\dagger}\sinh\lambda)^{m}(b\cosh\lambda+a^{\dagger}\sinh\lambda)^{n}\left|00\right\rangle$
$\displaystyle=S_{2}(\lambda)\sinh^{n+m}\lambda\sum_{l=0}^{m}\frac{m!\coth^{l}\lambda}{l!\left(m-l\right)!}b^{\dagger
m-l}a^{l}a^{\dagger n}\left|00\right\rangle.$ (4)
Further note that $a^{\dagger
n}\left|0\right\rangle=\sqrt{n!}\left|n\right\rangle$ and
$a^{l}\left|n\right\rangle=\frac{\sqrt{n!}}{\sqrt{(n-l)!}}\left|n-l\right\rangle=\frac{\sqrt{n!}}{(n-l)!}a^{\dagger
n-l}\left|0\right\rangle$, leading to $a^{l}a^{\dagger
n}\left|00\right\rangle=\frac{n!}{(n-l)!}a^{\dagger
n-l}\left|00\right\rangle$, thus Eq.(4) can be re-expressed as
$\displaystyle\left|\lambda,m,n\right\rangle$
$\displaystyle=S_{2}(\lambda)\sinh^{n+m}\lambda\sum_{l=0}^{\min\left(m,n\right)}\frac{m!n!\coth^{l}\lambda}{l!\left(m-l\right)!(n-l)!}a^{\dagger
n-l}b^{\dagger m-l}\left|00\right\rangle$
$\displaystyle=\frac{\sinh^{\left(n+m\right)/2}2\lambda}{\left(i\sqrt{2}\right)^{n+m}}S_{2}(\lambda)\sum_{l=0}^{\min(m,n)}\frac{(-1)^{l}n!m!\left(i\sqrt{\tanh\lambda}b^{\dagger}\right)^{m-l}\left(i\sqrt{\tanh\lambda}a^{\dagger}\right)^{n-l}}{l!(n-l)!\left(m-l\right)!}\left|00\right\rangle$
$\displaystyle=\frac{\sinh^{\left(n+m\right)/2}2\lambda}{\left(i\sqrt{2}\right)^{n+m}}S_{2}(\lambda)H_{m,n}\left(i\sqrt{\tanh\lambda}b^{\dagger},i\sqrt{\tanh\lambda}a^{\dagger}\right)\left|00\right\rangle,$
(5)
where in the last step we have used the definition of the two variables
Hermitian polynomials 26 ; 27 , i.e.,
$H_{m,n}\left(\epsilon,\varepsilon\right)=\sum_{k=0}^{\min\left(m,n\right)}\frac{\left(-1\right)^{k}m!n!\epsilon^{m-k}\varepsilon^{n-k}}{k!(m-k)!(n-k)!}.$
(6)
From Eq.(5) one can see clearly that the TPSSV
$\left|\lambda,m,n\right\rangle$ is equivalent to a two-mode squeezed two-
variable Hermite-excited vacuum state and exhibits the exchanging symmetry,
namely, interchanging $m\Leftrightarrow n$ is equivalent to
$a^{\dagger}\Leftrightarrow b^{\dagger}$. It is clear that, when $m=n=0,$
Eq.(5) just reduces to the two-mode squeezed vacuum state due to $H_{0,0}=1$;
while for $n\neq 0$ and $m=0,$ noticing $H_{0,n}\left(x,y\right)=y^{n},$
Eq.(5) becomes ($N_{\lambda,0,n}^{-1}=n!\sinh^{2n}\lambda)$, see Eq.(11)
below)
$\left|\lambda,0,n\right\rangle=S_{2}(\lambda)\left|n,0\right\rangle,$which is
just a squeezed number state, corresponding to a pure negative binomial state
28 .
### II.2 The normalization of $\left|\lambda,m,n\right\rangle$
To know the normalization factor $N_{\lambda,m,n}$ of
$\left|\lambda,m,n\right\rangle$, let us first calculate the overlap
$\left\langle\lambda,m+s,n+t\right.\left|\lambda,m,n\right\rangle$. For this
purpose, using the first equation in Eq.(5) one can express
$\left|\lambda,m,n\right\rangle$ as
$\left|\lambda,m,n\right\rangle=S_{2}(\lambda)\sum_{l=0}^{\min(m,n)}\frac{m!n!\sinh^{n+m}\lambda\coth^{l}\lambda}{l!\sqrt{(m-l)!(n-l)!}}\left|n-l,m-l\right\rangle,$
(7)
which leading to
$\displaystyle\left\langle\lambda,m+s,n+t\right.\left|\lambda,m,n\right\rangle$
$\displaystyle=m!\left(n+s\right)!\delta_{s,t}\sinh^{2n+2m+2s}\lambda$
$\displaystyle\times\sum_{l=0}^{\min(m,n)}\frac{\left(m+s\right)!n!\coth^{2l+s}\lambda}{l!(m-l)!(n-l)!(l+s)!},$
(8)
where $\delta_{s,t}$ is the Kronecker delta function. Without lossing the
generality, supposing $m<n$ and comparing Eq.(8) with the standard expression
of Jacobi polynomials 29
$P_{m}^{(\alpha,\beta)}(x)=\left(\frac{x-1}{2}\right)^{m}\sum_{k=0}^{m}\left(\begin{array}[c]{c}m+\alpha\\\
k\end{array}\right)\left(\begin{array}[c]{c}m+\beta\\\
m-k\end{array}\right)\left(\frac{x+1}{x-1}\right)^{k},$ (9)
one can put Eq.(8) into the following form
$\left\langle\lambda,m+s,n+t\right.\left|\lambda,m,n\right\rangle=m!\left(n+s\right)!\delta_{s,t}\sinh^{2n+s}\lambda\cosh^{s}\lambda
P_{m}^{(n-m,s)}(\allowbreak\cosh 2\lambda),$ (10)
which is just related to Jacobi polynomials. In particular, when $s=t=0$, the
normalization constant $\emph{N}_{m,n,\lambda}$ for the state
$\left|\lambda,m,n\right\rangle$ is given by
$N_{\lambda,m,n}=\left\langle\lambda,m,n\right.\left|\lambda,m,n\right\rangle=m!n!\sinh^{2n}\lambda
P_{m}^{(n-m,0)}(\cosh 2\lambda),$ (11)
which is important for further studying analytically the statistical
properties of the TPSSV. For the case $m=n$, it becomes Legendre polynomial of
the squeezing parameter $\lambda$, because of $P_{n}^{(0,0)}(x)=P_{n}(x),$
$P_{0}(x)=1$; while for $n\neq 0$ and $m=0,$ noticing that
$P_{0}^{(n,0)}(x)=1$ then $N_{\lambda,0,n}=n!\sinh^{2n}\lambda.$ Therefore,
the normalized TPSSV is
$\left|\left|\lambda,m,n\right\rangle\right.\equiv\left[m!n!\sinh^{2n}\lambda
P_{m}^{(n-m,0)}(\cosh
2\lambda)\right]^{-1/2}a^{m}b^{n}S_{2}(\lambda)\left|00\right\rangle.$ (12)
From Eqs. (2) and (11) we can easily calculate the average photon number in
TPSSV (denoting $\tau=\cosh 2\lambda$),
$\displaystyle\left\langle a^{\dagger}a\right\rangle$
$\displaystyle=N_{\lambda,m,n}^{2}\left\langle
00\right|S_{2}^{{\dagger}}(\lambda)a^{{\dagger}m+1}b^{{\dagger}n}a^{m+1}b^{n}S_{2}(\lambda)\left|00\right\rangle$
$\displaystyle=\left(m+1\right)\frac{P_{m+1}^{(n-m-1,0)}(\tau)}{P_{m}^{(n-m,0)}(\tau)},$
(13) $\displaystyle\left\langle b^{\dagger}b\right\rangle$
$\displaystyle=\left(n+1\right)\sinh^{2}\lambda\frac{P_{m}^{(n-m+1,0)}(\tau)}{P_{m}^{(n-m,0)}(\tau)}.$
(14)
In a similar way we have
$\left\langle
a^{\dagger}b^{\dagger}ab\right\rangle=\left(m+1\right)\left(n+1\right)\sinh^{2}\lambda\frac{P_{m+1}^{(n-m,0)}(\tau)}{P_{m}^{(n-m,0)}(\tau)}.$
(15)
Thus the cross-correlation function $g_{12}^{(2)}$ can be obtained by 30 ; 31
; 32
$\displaystyle g_{12}^{(2)}(\lambda)$ $\displaystyle=\frac{\left\langle
a^{\dagger}b^{\dagger}ab\right\rangle}{\left\langle
a^{\dagger}a\right\rangle\left\langle b^{\dagger}b\right\rangle}$
$\displaystyle=\frac{P_{m+1}^{(n-m,0)}(\tau)}{P_{m+1}^{(n-m-1,0)}(\tau)}\frac{P_{m}^{(n-m,0)}(\tau)}{P_{m}^{(n-m+1,0)}(\tau)}.$
(16)
Actually, the cross-correlation between the two modes reflects correlation
between photons in two different modes, which plays a key role in rendering
many two-mode radiations nonclassically. In Fig.1, we plot the graph of
$g_{12}^{(2)}\left(\lambda\right)$ as the function of $\lambda$ for some
different ($m,n$) values. It is shown that $g_{12}^{(2)}\left(\lambda\right)$
are always larger than unit, thus there exist correlations between the two
modes. We emphasize that the WF has negative region for all $\lambda,$ and
thus the TPSSV is nonclassical. In our following work, we pay attention to the
ideal TPSSV.
Figure 1: (Color online) Cross-correlation function between the two modes $a$
and $b$ as a function of $\lambda$ for different parameters ($m,n$). The
number 1,2,3,4,5,6 in (a) denote that ($m,n$) are eauql to (1,2), (3,4),
(2,4),(6,8),(3,6) and (7,10) respectively.
## III Quantum statistical properties of the TPSSV
### III.1 Squeezing properties
For a two-mode system, the optical quadrature phase amplitudes can be
expressed as follows:
$Q=\frac{Q_{1}+Q_{2}}{\sqrt{2}},\text{ }P=\frac{P_{1}+P_{2}}{\sqrt{2}},\text{
}[Q,P]=\mathtt{i},$ (17)
where $Q_{1}=(a+a^{\dagger})/\sqrt{2}$,
$P_{1}=(a-a^{\dagger})/(\sqrt{2}\mathtt{i})$, $Q_{2}=(b+b^{\dagger})/\sqrt{2}$
and $P_{2}=(b-b^{\dagger})/(\sqrt{2}\mathtt{i})$ are coordinate- and momentum-
operator, respectively. Their variances are $(\Delta Q)^{2}=\left\langle
Q^{2}\right\rangle-\left\langle Q\right\rangle^{2}$ and $(\Delta
P)^{2}=\left\langle P^{2}\right\rangle-\left\langle P\right\rangle^{2}$. The
phase amplifications satisfy the uncertainty relation of quantum mechanics
$\Delta Q\Delta P\geq 1/2$. By using Eqs.(10) and (11), it is easy to see that
$\left\langle a\right\rangle=\left\langle
a^{\dagger}\right\rangle=\left\langle b\right\rangle=\left\langle
b^{\dagger}\right\rangle=0\ $and $\left\langle a^{2}\right\rangle=\left\langle
a^{\dagger 2}\right\rangle=\left\langle b^{2}\right\rangle=\left\langle
b^{\dagger 2}\right\rangle=0$ as well as $\left\langle
ab^{\dagger}\right\rangle=\left\langle a^{\dagger}b\right\rangle=0,$ which
leads to $\left\langle Q\right\rangle=0$, $\left\langle P\right\rangle=0$.
Moreover, using Eq.(10) one can see
$\left\langle a^{\dagger}b^{\dagger}\right\rangle=\left\langle
ab\right\rangle=\frac{n+1}{2}\frac{P_{m}^{(n-m,1)}(\tau)}{P_{m}^{(n-m,0)}(\tau)}\sinh
2\lambda.$ (18)
From Eqs.(13), (14) and (18) it then follows that
$\displaystyle(\Delta Q)^{2}$ $\displaystyle=\frac{1}{2}(\left\langle
a^{\dagger}a\right\rangle+\left\langle b^{\dagger}b\right\rangle+\left\langle
ab\right\rangle+\left\langle a^{\dagger}b^{\dagger}\right\rangle+1)$
$\displaystyle=\frac{1}{2P_{m}^{(n-m,0)}(\tau)}[\left(m+1\right)P_{m+1}^{(n-m-1,0)}(\tau)+\left(n+1\right)P_{m}^{(n-m+1,0)}(\tau)\sinh^{2}\lambda$
$\displaystyle+\left(n+1\right)P_{m}^{(n-m,1)}(\tau)\sinh
2\lambda+P_{m}^{(n-m,0)}(\tau)],$ (19)
and
$\displaystyle(\Delta P)^{2}$ $\displaystyle=\frac{1}{2}(\left\langle
a^{\dagger}a\right\rangle+\left\langle b^{\dagger}b\right\rangle-\left\langle
ab\right\rangle-\left\langle a^{\dagger}b^{\dagger}\right\rangle+1)$
$\displaystyle=\frac{1}{2P_{m}^{(n-m,0)}(\tau)}[\left(m+1\right)P_{m+1}^{(n-m-1,0)}(\tau)+\left(n+1\right)P_{m}^{(n-m+1,0)}(\tau)\sinh^{2}\lambda$
$\displaystyle-\left(n+1\right)P_{m}^{(n-m,1)}(\tau)\sinh
2\lambda+P_{m}^{(n-m,0)}(\tau)].$ (20)
Next, let us analyze some special cases. When $m=n=0,$ corresponding to the
two-mode squeezed state, Eqs. (19) and (20) becomes, respectively, to
$\left.(\Delta Q)^{2}\right|_{m=n=0}=\frac{1}{2}\allowbreak
e^{2\lambda},\left.(\Delta P)^{2}\right|_{m=n=0}=\frac{1}{2}\allowbreak
e^{-2\lambda},\Delta Q\Delta P=\frac{1}{2},$ (21)
which is just the standard squeezing case; while for $m=0,$ $n=1,$ Eqs. (19)
and (20) reduce to
$\left.(\Delta Q)^{2}\right|_{m=0,n=1}=\allowbreak e^{2\lambda},\left.(\Delta
P)^{2}\right|_{m=0,n=1}=\allowbreak\allowbreak e^{-2\lambda},\Delta Q\Delta
P=1,$ (22)
from which one can see that the state
$\left|\left|\lambda,m,n\right\rangle\right.$ is squeezed at the “p-direction”
when $\allowbreak\allowbreak e^{-2\lambda}<\frac{1}{2},$ i.e.,
$\lambda>\frac{1}{2}\ln 2$. In addition, when $m=n=1,$ in a similar way, one
can get
$\displaystyle\left.(\Delta Q)^{2}\right|_{m=n=1}$
$\displaystyle=\frac{1}{2}e^{2\lambda}\left(1+\frac{2e^{2\lambda}-2}{e^{2\lambda}+e^{-2\lambda}}\right),$
$\displaystyle\left.(\Delta P)^{2}\right|_{m=n=1}$
$\displaystyle=\frac{1}{2}\frac{1-e^{2\lambda}-3e^{-2\lambda}\left(1-\allowbreak
e^{-2\lambda}\right)}{e^{2\lambda}+e^{-2\lambda}}+\frac{1}{2}<\frac{1}{2},$
(23)
which indicates that, for any squeezing parameter $\lambda$, there always
exist squeezing effect for state $|\left|\lambda,1,1\right\rangle$ at the
“p-direction”.
In order to see clearly the fluctuations of $(\Delta P)^{2}$ with other
parameters $m,n$ values, the figures are ploted in Fig.2. From Fig.2(a) one
can see that the fluctuations of $(\Delta P)^{2}$ are always less than
$\frac{1}{2}$ when $m=n,$ say, the state $|\left|\lambda,m,m\right\rangle$ is
always squeezed at the “p-direction”; for given $m$ values, there exist the
squeezing effect only when the squeezing parameter exceeds a certain threshold
value that increases with the increasement of $n$ (see Fig.2(b)).
Figure 2: (Color online) The fluctuations variation of $(\Delta P)^{2}$ with
$\lambda$ for several different parameters $m,n$ values: (a) $m=n=1,2,8,35$
from down to up; (b) (1) and (2) denote $m=0$ and $m=1$, respectively, and
$n=2,5,12$ from down to up.
### III.2 Distribution of photon number
In order to obtain the photon number distribution of the TPSSV, we begin with
evaluating the overlap between two-mode number state $\left\langle
n_{a},n_{b}\right|$ and $\left|\lambda,m,n\right\rangle.$ Using Eq.(1) and the
un-normalized coherent state 30 ; 31 ,
$\left|z\right\rangle=\exp\left(za^{\dagger}\right)\left|0\right\rangle$,
leading to $\left\langle
n\right|=\frac{1}{\sqrt{n!}}\left.\frac{\partial^{n}}{\partial z^{\ast
n}}\left\langle z\right|\right|_{z^{\ast}=0}$, it is easy to see that
$\displaystyle\left\langle n_{a},n_{b}\right.\left|\lambda,m,n\right\rangle$
$\displaystyle=\text{sech}\lambda\left\langle
n_{a},n_{b}\right|a^{m}b^{n}e^{a^{\dagger}b^{\dagger}\tanh\lambda}\left|00\right\rangle$
$\displaystyle=\frac{\left(m+n_{a}\right)!}{\sqrt{n_{a}!n_{b}!}}\text{sech}\lambda\tanh^{m+n_{a}}\lambda\delta_{m+n_{a},n+n_{b}}.$
(24)
It is easy to follow that the photon number distribution of
$|\left|\lambda,m,n\right\rangle$, i.e.,
$\displaystyle P(n_{a},n_{b})$
$\displaystyle=N_{\lambda,m,n}^{-1}\left|\left\langle
n_{a},n_{b}\right.\left|\lambda,m,n\right\rangle\right|^{2}$
$\displaystyle=\frac{\left[\left(m+n_{a}\right)!\text{sech}\lambda\tanh^{m+n_{a}}\lambda\delta_{m+n_{a},n+n_{b}}\right]^{2}}{n_{a}!n_{b}!m!n!\sinh^{2n}\lambda
P_{m}^{(n-m,0)}(\cosh 2\lambda)}.$ (25)
From Eq.(25) one can see that there exists a constrained condition,
$m+n_{a}=n+n_{b},$ for the photon number distribution (see Fig. 3). In
particular, when $m=n=0,$ Eq.(25) becomes
$P(n_{a},n_{b})=\left\\{\begin{array}[c]{cc}\text{sech}^{2}\lambda\tanh^{2n_{a}}\lambda,&n_{a}=n_{b}\\\
0,&n_{a}\neq n_{b}\end{array}\right.,$ (26)
which is just the photon number distribution (PND) of two-mode squeezed vacuum
state.
In Fig. 3, we plot the distribution $P(n_{a},n_{b})$ in the Fock space
($n_{a},n_{b}$) for some given $m,n$ values and squeezing parameter $\lambda$.
From Fig. 3 it is found that the PND is constrained by $m+n_{a}=n+n_{b},$
resulting from the paired-present of photons in two-mode squeezed state. By
subtracting photons, we have been able to move the peak from zero photons to
nonzero photons (see Fig.3 (a) and (c)). The position of peak depends on how
many photons are annihilated and how much the state is squeezed initially. In
addition, for example, the PND mainly shifts to the bigger number states and
becomes more “flat” and “wide” with the increasing parameter $\lambda$ (see
Fig.3 (b) and (c)).
Figure 3: (Color online) Photon number distribution $P(n_{a},n_{b})$ in the
Fock space ($n_{a},n_{b}$) for some given $m=n$ values: (a) $m=n=0,$
$\lambda=1$, (b) $m=n=1,$ $\lambda=0.5,$(c) $m=n=1,$ $\lambda=1,$(d)
$m=2,n=5,$ $\lambda=1.$
### III.3 Antibunching effect of the TPSSV
Next we will discuss the antibunching for the TPSSV. The criterion for the
existence of antibunching in two-mode radiation is given by 33
$R_{ab}\equiv\frac{\left\langle a^{\dagger 2}a^{2}\right\rangle+\left\langle
b^{\dagger 2}b^{2}\right\rangle}{2\left\langle
a^{\dagger}ab^{\dagger}b\right\rangle}-1<0.$ (27)
In a similar way to Eq.(13) we have
$\left\langle a^{\dagger
2}a^{2}\right\rangle=\left(m+1\right)\left(m+2\right)\frac{P_{m+2}^{(n-m-2,0)}(\tau)}{P_{m}^{(n-m,0)}(\tau)},$
(28)
and
$\left\langle b^{\dagger
2}b^{2}\right\rangle=\left(n+1\right)\left(n+2\right)\sinh^{4}\lambda\frac{P_{m}^{(n-m+2,0)}(\tau)}{P_{m}^{(n-m,0)}(\tau)},$
(29)
For the state $|\left|\lambda,m,n\right\rangle$, substituting Eqs.(15),
(28)and (29) into Eq.(27), we can recast $R_{ab}$ to
$R_{ab}=\frac{\left(m+1\right)\left(m+2\right)P_{m+2}^{(n-m-2,0)}(\tau)+\left(n+1\right)\left(n+2\right)\sinh^{4}\lambda
P_{m}^{(n-m+2,0)}(\tau)}{2\left(m+1\right)\left(n+1\right)\sinh^{2}\lambda
P_{m+1}^{(n-m,0)}(\tau)}-1.$ (30)
In particular, when $m=n=0$ (corresponding to two-mode squeezed vacuum state),
Eq.(30) reduces to $R_{ab,m=n=0}=-\operatorname{sech}2\lambda<0,$ which
indicates that there always exist antibunching effect for two-mode squeezed
vacuum state. In addition, when $m=n,$ the TPSSV is always antibunching.
However, for any parameter values $m,n(m\neq n)$, the case is not true. The
$R_{ab}$ as a function of $\lambda$ and $m,n$ is plotted in Fig. 4. It is easy
to see that, for a given $m$ the TPSSV presents the antibunching effect when
the squeezing parameter $\lambda$ exceeds to a certain threshold value. For
instance, when $m=0\ $and $n=2$ then $R_{ab}=\frac{5-3\cosh
2\lambda}{6\left(1+2\cosh 2\lambda\right)}$csch${}^{2}\lambda$ may be less
than zero with $\lambda>0.549$ about.
Figure 4: (Color online) The $R_{ab}$ as a function of $\lambda$ and $m,n$.
(1) and (2) in Fig.4 (a) denote$m=0$ and $m=1$ respectively, and $n=2,3,12$
from down to up.
## IV Wigner function of the TPSSV
The Wigner function (WF)25 ; 34 ; 35 is a powerful tool to investigate the
nonclassicality of optical fields. Its partial negativity implies the highly
nonclassical properties of quantum states and is often used to describe the
decoherence of quantum states, e.g., the excited coherent state in both
photon-loss and thermal channels 36 ; 37 , the single-photon subtracted
squeezed vacuum (SPSSV) state in both amplitude decay and phase damping
channels 2d , and so on 4 ; 10 ; 38 ; 39 ; 40 . In this section, we derive the
analytical expression of WF for the TPSSV. For this purpose, we first recall
that the Weyl ordered form of single-mode Wigner operator 41 ; 42 ; 43 ,
$\Delta_{1}\left(\alpha\right)=\frac{1}{2}\genfrac{}{}{0.0pt}{}{\colon}{\colon}\delta\left(\alpha-a\right)\delta\left(\alpha^{\ast}-a^{{\dagger}}\right)\genfrac{}{}{0.0pt}{}{\colon}{\colon},$
(31)
where $\alpha=\left(q_{1}+ip_{1}\right)/\sqrt{2}$ and the symbol
$\genfrac{}{}{0.0pt}{}{\colon}{\colon}\genfrac{}{}{0.0pt}{}{\colon}{\colon}$
denotes Weyl ordering. The merit of Weyl ordering lies in the Weyl ordered
operators’ invariance under similar transformations proved in Ref.41 , which
means
$S\genfrac{}{}{0.0pt}{}{:}{:}\left(\circ\circ\circ\right)\genfrac{}{}{0.0pt}{}{:}{:}S^{-1}=\genfrac{}{}{0.0pt}{}{:}{:}S\left(\circ\circ\circ\right)S^{-1}\genfrac{}{}{0.0pt}{}{:}{:},$
(32)
as if the “fence” $\genfrac{}{}{0.0pt}{}{:}{:}\genfrac{}{}{0.0pt}{}{:}{:}$did
not exist, so $S$ can pass through it.
Following this invariance and Eq.(3) we have
$\displaystyle
S_{2}^{{\dagger}}\left(\lambda\right)\Delta_{1}\left(\alpha\right)\Delta_{2}\left(\beta\right)S_{2}\left(\lambda\right)$
$\displaystyle=\frac{1}{4}S_{2}^{{\dagger}}\left(\lambda\right)\genfrac{}{}{0.0pt}{}{\colon}{\colon}\delta\left(\alpha-a\right)\delta\left(\alpha^{\ast}-a^{{\dagger}}\right)\delta\left(\beta-b\right)\delta\left(\beta^{\ast}-b^{{\dagger}}\right)\genfrac{}{}{0.0pt}{}{\colon}{\colon}S_{2}\left(\lambda\right)$
$\displaystyle=\frac{1}{4}\genfrac{}{}{0.0pt}{}{\colon}{\colon}\delta\left(\alpha-a\cosh\lambda-b^{\dagger}\sinh\lambda\right)\delta\left(\alpha^{\ast}-a^{\dagger}\cosh\lambda-b\sinh\lambda\right)$
$\displaystyle\delta\left(\beta-b\cosh\lambda-a^{\dagger}\sinh\lambda\right)\delta\left(\beta^{\ast}-b^{\dagger}\cosh\lambda-a\sinh\lambda\right)\genfrac{}{}{0.0pt}{}{\colon}{\colon}$
$\displaystyle=\frac{1}{4}\genfrac{}{}{0.0pt}{}{\colon}{\colon}\delta\left(\bar{\alpha}-a\right)\delta\left(\bar{\alpha}^{\ast}-a^{{\dagger}}\right)\delta\left(\bar{\beta}-b\right)\delta\left(\bar{\beta}^{\ast}-b^{{\dagger}}\right)\genfrac{}{}{0.0pt}{}{\colon}{\colon}$
$\displaystyle=\Delta_{1}\left(\bar{\alpha}\right)\Delta_{2}\left(\bar{\beta}\right),$
where $\bar{\alpha}=\alpha\cosh\lambda-\beta^{\ast}\sinh\lambda,$
$\bar{\beta}=\beta\cosh\lambda-\alpha^{\ast}\sinh\lambda,$ and
$\beta=\left(q_{2}+ip_{2}\right)/\sqrt{2}$. Thus employing the squeezed two-
variable Hermite-excited vacuum state of the TPSSV in Eq.(5) and the coherent
state representation of single-mode Wigner operator 44 ,
$\Delta_{1}(\alpha)=e^{2\left|\alpha\right|^{2}}\int\frac{d^{2}z_{1}}{\pi^{2}}\left|z_{1}\right\rangle\left\langle-
z_{1}\right|e^{-2(z_{1}\alpha^{\ast}-\alpha z_{1}^{\ast})},$ (33)
where
$\left|z_{1}\right\rangle=\exp\left(z_{1}a^{{\dagger}}-z_{1}^{\ast}a\right)\left|0\right\rangle$
is Glauber coherent state 30 ; 31 , we finally can obtain the explicit
expression of WF for the TPSSV (see Appendix A),
$\displaystyle W(\alpha,\beta)$
$\displaystyle=\frac{1}{\pi^{2}}\frac{\sinh^{n+m}2\lambda}{2^{n+m}N_{\lambda,m,n}}e^{-2\left|\bar{\alpha}\right|^{2}-2\left|\bar{\beta}\right|^{2}}\sum_{l=0}^{m}\sum_{k=0}^{n}$
$\displaystyle\times\frac{\left[m!n!\right]^{2}\left(-\tanh\lambda\right)^{l+k}}{l!k!\left[\left(m-l\right)!\left(n-k\right)!\right]^{2}}\left|H_{m-l,n-k}\left(B,A\right)\right|^{2},$
(34)
where we have set
$A=-2i\bar{\alpha}\sqrt{\tanh\lambda},B=-2i\bar{\beta}\sqrt{\tanh\lambda}.$
Obviously, the WF $W(\alpha,\beta)$ in Eq.(34) is a real function and is non-
Gaussian in phase space due to the presence of $H_{m-l,n-k}\left(B,A\right)$,
as expected.
In particular, when $m=n=0,$ Eq.(34) reduces to
$W(\alpha,\beta)=\frac{1}{\pi^{2}}e^{-2\left|\bar{\alpha}\right|^{2}-2\left|\bar{\beta}\right|^{2}}=\frac{1}{\pi^{2}}e^{2\left(\alpha^{\ast}\beta^{\ast}+\alpha\allowbreak\beta\right)\sinh
2\lambda-2\left(\alpha\alpha^{\ast}+\beta\beta^{\ast}\allowbreak\right)\cosh
2\lambda}$ corresponding to the WF of two-mode squeezed vacuum state; while
for the case of $m=0$ and $n\neq 0,$ noticing $H_{0,n}\left(x,y\right)=y^{n}$
and $N_{\lambda,0,n}=n!\sinh^{2n}\lambda,$ Eq.(34) becomes
$W(\alpha,\beta)=\frac{\left(-1\right)^{n}}{\pi^{2}}e^{-2\left|\bar{\alpha}\right|^{2}-2\left|\bar{\beta}\right|^{2}}L_{n}\left(4\left|\bar{\alpha}\right|^{2}\right),$
(35)
where $L_{n}$ is $m$-order Laguerre polynomials and Eq.(35) is just the WF of
the negative binomial state $S_{2}(\lambda)\left|n,0\right\rangle$ 28 . In
Figs. 5-7, the phase space Wigner distributions are depicted for several
different parameter values $m,n$, and $\lambda$. As an evidence of
nonclassicality of the state, squeezing in one of quadratures is clear in the
plots. In addition, there are some negative region of the WF in the phase
space which is another indicator of the nonclassicality of the state. For the
case of $m=0$ and $n=1,$ it is easily seen from (35) that at the center of the
phase space ($\alpha=\beta=0$), the WF is always negative in phase space.
Fig.5 shows that the negative region becomes more and visible as the
increasement of photon number subtracted $m(=n)$, which may imply the
nonclassicality of the state can be enhanced due to the augment of photon-
subtraction number. For a given value $m$ and several different values $n$
($\neq m$), the WF distributions are presented in Fig.7, from which it is
interseting to notice that there are around $\left|m-n\right|$ wave valleys
and $\left|m-n\right|+1$ wave peaks.
Figure 5: (Color online) The Wigner function W($\alpha,\beta$) in phase space
($0,0,p_{1},p_{2}$) for several different parameter values $m=n$ with
$\lambda=0.5.$ (a) m=n=0; (b) m=n=1 and (c) m=n=5. Figure 6: (Color online)
The Wigner function W($\alpha,\beta$) in three different phase spaces for
$m=0,n=1$ with $\lambda=0.3$ (first row) and $\lambda=0.5\ $(second low).
Figure 7: (Color online) The Wigner function W($\alpha,\beta$) in phase space
($0,0,p_{1},p_{2}$) for several parameter values $m,n$ with $\lambda=0.5.$ (a)
m1,=n=2; (b) m=1,n=3 and (c) m=1,n=5.
## V Decoherence of TPSSV in thermal environments
In this section, we next consider how this state evolves at the presence of
thermal environment.
### V.1 Model
When the TPSSV evolves in the thermal channel, the evolution of the density
matrix can be described by the following master equation in the interaction
picture45
$\frac{d}{dt}\rho\left(t\right)=\left(L_{1}+L_{2}\right)\rho\left(t\right),$
(36)
where
$\displaystyle L_{i}\rho$
$\displaystyle=\kappa\left(\bar{n}+1\right)\left(2a_{i}\rho
a_{i}^{{\dagger}}-a_{i}^{{\dagger}}a_{i}\rho-\rho
a_{i}^{{\dagger}}a_{i}\right)$
$\displaystyle+\kappa\bar{n}\left(2a_{i}^{{\dagger}}\rho
a_{i}-a_{i}a_{i}^{{\dagger}}\rho-\rho a_{i}a_{i}^{{\dagger}}\right),\text{
}\left(a_{1}=a,a_{2}=b\right),$ (37)
and $\kappa$ represents the dissipative coefficient and $\bar{n}$ denotes the
average thermal photon number of the environment. When $\bar{n}=0,$ Eq.(36)
reduces to the master equation (ME) describing the photon-loss channel. The
two thermal modes are assumed to have the same average energy and coupled with
the channel in the same strength and have the same average thermal photon
number $\bar{n}$. This assumption is reasonable as the two-mode of squeezed
state are in the same frequency and temperature of the environment is normally
the same 46 ; 47 . By introducing two entangled state representations and
using the technique of integration within an ordered product (IWOP) of
operators, we can obtain the infinite operator-sum expression of density
matrix in Eq.(36) (see Appendix B):
$\rho\left(t\right)=\sum_{i,j,r,s=0}^{\infty}M_{i,j,r,s}\rho_{0}M_{i,j,r,s}^{{\dagger}},$
(38)
where $\rho_{0}$ denotes the density matrix at initial time, $M_{i,j,r,s}$ and
$M_{i,j,r,s}^{{\dagger}}$ are Hermite conjugated operators (Kraus operator)
with each other,
$M_{i,j,r,s}=\frac{1}{\bar{n}T+1}\sqrt{\frac{\left(T_{1}\right)^{r+s}\left(T_{3}\right)^{i+j}}{r!s!i!j!}}a^{\dagger
r}b^{\dagger s}e^{\left(a^{\dagger}a+b^{\dagger}b\right)\ln T_{2}}a^{i}b^{j},$
(39)
and we have set $T=1-e^{-2\kappa t}$, as well as
$T_{1}=\frac{\bar{n}T}{\bar{n}T+1},T_{2}=\frac{e^{-\kappa
t}}{\bar{n}T+1},T_{3}=\frac{\left(\bar{n}+1\right)T}{\bar{n}T+1}.$ (40)
It is not difficult to prove the $M_{i,j,r,s}$ obeys the normalization
condition $\sum_{i,j,r,s=0}^{\infty}M_{i,j,r,s}^{{\dagger}}M_{i,j,r,s}=1$ by
using the IWOP technique.
### V.2 Evolution of Wigner function
By using the thermal field dynamics theory 48 ; 49 and thermal entangled
state representation, the time evolution of Wigner function at time $t$ to be
given by the convolution of the Wigner function at initial time and those of
two single-mode thermal state (see Appendix C), i.e.,
$W\left(\alpha,\beta,t\right)=\frac{4}{\left(2\bar{n}+1\right)^{2}T^{2}}\int\frac{d^{2}\zeta
d^{2}\eta}{\pi^{2}}W\left(\zeta,\eta,0\right)e^{-2\frac{\left|\alpha-\zeta
e^{-\kappa t}\right|^{2}+\left|\beta-\eta e^{-\kappa
t}\right|^{2}}{\left(2\bar{n}+1\right)T}}.$ (41)
Eq.(41) is just the evolution formula of Wigner function of two-mode quantum
state in thermal channel. Thus the WF at any time can be obtained by
performing the integration when the initial WF is known.
In a similar way to deriving Eq.(34), substituting Eq.(34) into Eq.(41) and
using the generating function of two-variable Hermite polynomials (A2), we
finally obtain
$\displaystyle W\left(\alpha,\beta,t\right)$
$\displaystyle=\frac{N_{\lambda,m,n}^{-1}\left(E\sinh
2\lambda\right)^{m+n}}{\pi^{2}2^{n+m}\left(2\bar{n}+1\right)^{2}T^{2}D}e^{-\frac{\left|\alpha-\beta^{\ast}\right|^{2}}{e^{-2\lambda-2\kappa
t}+\left(2\bar{n}+1\right)T}-\frac{\left|\alpha+\beta^{\ast}\right|^{2}}{e^{2\lambda-2\kappa
t}+\left(2\bar{n}+1\right)T}}$
$\displaystyle\times\sum_{l=0}^{n}\sum_{k=0}^{m}\frac{\left[m!n!\right]^{2}\left(-\frac{F}{E}\tanh\lambda\right)^{l+k}}{l!k!\left[\left(m-k\right)!\left(n-l\right)!\right]^{2}}\left|H_{m-k,n-l}\left(G/\sqrt{E},K/\sqrt{E}\right)\right|^{2},$
(42)
where we have set
$\displaystyle C$ $\displaystyle=\frac{e^{-2\kappa
t}}{\left(2\bar{n}+1\right)T},\text{
}D=\left(1+Ce^{-2\lambda}\right)\left(1+Ce^{2\lambda}\right),$ $\displaystyle
E$ $\displaystyle=\allowbreak\frac{e^{4\kappa
t}}{D}\left(2\bar{n}T+1\right)^{2}C^{2},\text{ }F=\frac{C^{2}-1}{D},$
$\displaystyle G$ $\displaystyle=\frac{Ce^{\kappa
t}}{D}\left(\bar{B}+B^{\ast}\allowbreak C\right),\text{ }\bar{B}=\allowbreak
i2\sqrt{\tanh\lambda}\left(\beta^{\ast}\cosh\lambda+\alpha\allowbreak\sinh\lambda\right),$
$\displaystyle K$ $\displaystyle=\frac{Ce^{\kappa
t}}{D}\left(\bar{A}+A^{\ast}C\right),\text{
}\bar{A}=i2\sqrt{\tanh\lambda}\left(\alpha^{\ast}\cosh\lambda+\beta\sinh\lambda\right).$
(43)
Eq.(42) is just the analytical expression of WF for the TPSSV in thermal
channel. It is obvious that the WF loss its Gaussian property due to the
presence of two-variable Hermite polynomials.
In particular, at the initial time ($t=0$), noting $E\rightarrow 1$,
$\left(2\bar{n}+1\right)^{2}T^{2}D\rightarrow 1$, $\frac{F}{E}\rightarrow 1$
and $\frac{C^{2}}{D}\rightarrow 1,$ $\frac{C}{D}\rightarrow 0$ as well as
$K\rightarrow A^{\ast}$, $G\rightarrow B^{\ast}$, Eq.(42) just dose reduce to
Eq.(34), i.e., the WF of the TPSSV. On the other hand, when $\kappa
t\rightarrow\infty,$ noticing that $C\rightarrow 0,D\rightarrow 1,E\rightarrow
1,F\rightarrow-1,$ and $G/\sqrt{E}\rightarrow 0,K/\sqrt{E}\rightarrow 0,$ as
well as $H_{m,n}\left(0,0\right)=\left(-1\right)^{m}m!\delta_{m,n},$ as well
as the definition of Jacobi polynomials in Eq.(9), then Eq.(42) becomes
$W\left(\alpha,\beta,\infty\right)=\frac{1}{\pi^{2}\left(2\bar{n}+1\right)^{2}}e^{-\frac{2}{2\bar{n}+1}(\left|\alpha\right|^{2}+\left|\beta\right|^{2})},$
(44)
which is independent of photon-subtraction number $m$ and $n$ and corresponds
to the product of two thermal states with mean thermal photon number
$\bar{n}$. This implies that the two-mode system reduces to two-mode thermal
state after a long time interaction with the environment. Eq.(44) denotes a
Gaussian distribution. Thus the thermal noise causes the absence of the
partial negative of the WF if the decay time $\kappa t$ exceeds a threshold
value. In addition, for the case of $m=n=0$, corresponding to the case of two-
mode squeezed vacuum, Eq.(42) just becomes
$W_{m=n=0}\left(\alpha,\beta,t\right)=\mathfrak{N}^{-1}e^{-\frac{\mathfrak{E}}{\mathfrak{D}}\left(\left|\alpha\right|^{2}+\left|\beta\right|^{2}\right)+\frac{\mathfrak{F}}{\mathfrak{D}}\left(\alpha\beta+\alpha^{\ast}\beta^{\ast}\right)},$
(45)
where $\mathfrak{N}=\pi^{2}\left(2\bar{n}+1\right)^{2}T^{2}D$ is the
normalization factor, $\mathfrak{D}=\left(2\bar{n}+1\right)^{2}T^{2}D,$
$\mathfrak{E}=2\left(2\bar{n}+1\right)T+e^{-2\kappa t}\cosh 2\lambda,$ and
$\mathfrak{F}=2e^{-2\kappa t}\sinh 2\lambda$. Eq.(45) is just the result in
Eq.(14) of Ref. 47 .
In Fig.8, the WFs of the TPSSV for ($m=0,n=1$) are depicted in phase space
with $\lambda=0.3$ and $\bar{n}=1$ for several different values of $\kappa t.$
It is easy to see that the negative region of the WF gradually disappears as
the time $\kappa t$ increases. Actually, from Eq.(43) one can see that $D>0$
and $E>0$, while $F<0$ holds precisely when
$\kappa t>\kappa t_{c}\equiv\frac{1}{2}\ln\frac{2\bar{n}+2}{2\bar{n}+1};$ (46)
in this case every term of the sum in Eq.(42) is non-negative, so the WF of the
TPSSV has no chance to be negative anywhere in phase space once $\kappa t$
exceeds the threshold value $\kappa t_{c}$. Here we should point out that the
effective threshold value of the decay time, corresponding to the transition of
the WF from partially negative to fully positive definite, depends on $m$ and
$n.$ When $\kappa t=\kappa t_{c},$ it then follows from Eq.(42) that
$\displaystyle W\left(\alpha,\beta,t_{c}\right)$
$\displaystyle=\frac{\tanh^{m+n}\lambda\operatorname{sech}^{2}\lambda}{4\pi^{2}N_{m,n,\lambda}e^{-4\kappa
t_{c}}}e^{-e^{2\kappa
t_{c}}\left[\left|\alpha\right|^{2}+\left|\beta\right|^{2}-\left(\alpha^{\ast}\beta^{\ast}+\alpha\beta\right)\tanh\lambda\right]}$
$\displaystyle\times\left|H_{m,n}\left(i\sqrt{\tanh\lambda}\beta^{\ast}e^{\kappa
t_{c}},\allowbreak i\sqrt{\tanh\lambda}\alpha^{\ast}e^{\kappa
t_{c}}\right)\right|^{2},$ (47)
which is an Hermite-Gaussian function and positive definite, as expected.
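As a quick numerical illustration of this threshold, the sketch below (illustrative only; it uses $T=1-e^{-2\kappa t}$, the abbreviation consistent with the derivation of Eq.(46) from $F<0$) evaluates the coefficients of Eq.(43) and confirms that $F$ changes sign precisely at $\kappa t_{c}$:

```python
import numpy as np

def coefficients(kt, nbar, lam):
    """Coefficients C, D, E, F of Eq. (43); T = 1 - exp(-2*kt) is assumed."""
    T = 1.0 - np.exp(-2.0 * kt)
    C = np.exp(-2.0 * kt) / ((2.0 * nbar + 1.0) * T)
    D = (1.0 + C * np.exp(-2.0 * lam)) * (1.0 + C * np.exp(2.0 * lam))
    E = np.exp(4.0 * kt) * (2.0 * nbar * T + 1.0) ** 2 * C ** 2 / D
    F = (C ** 2 - 1.0) / D
    return C, D, E, F

for nbar in (0.0, 1.0, 2.0, 7.0):
    kt_c = 0.5 * np.log((2 * nbar + 2) / (2 * nbar + 1))     # Eq. (46)
    for kt in (0.5 * kt_c, kt_c, 2.0 * kt_c):
        _, _, _, F = coefficients(kt, nbar, lam=0.3)
        print(f"nbar={nbar}: kt={kt:.4f} (kt_c={kt_c:.4f})  F={F:+.4f}")
# F > 0 below kt_c, F = 0 at kt_c, and F < 0 above kt_c, so for kt > kt_c every
# term of the sum in Eq. (42) is non-negative and the WF is positive definite.
```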
In Figs. 9 and 10, we present the time evolution of the WF in phase space for
different $\bar{n}$ and $\lambda,$ respectively. One can see clearly that the
partial negativity of the WF decreases gradually as $\bar{n}$ (or $\lambda$)
increases for a given time. The same behavior is found for a given $\bar{n}$
(or $\kappa t$) as $\kappa t$ (or $\bar{n}$) increases. The squeezing effect in
one of the quadratures is shown in Fig.10. In principle, by using the explicit
expression of the WF in Eq.(42), we can draw its distributions in phase space.
For the case of $m=0,n=2$, there are two negative regions of the WF, in
contrast to the case of $m=0,n=1$ (see Fig.11). The absolute value of the
negative minimum of the WF decreases as $\kappa t$ increases, which eventually
leads to the complete disappearance of the negative region.
Figure 8: (Color online) The time evolution of the WF $\left(m=0,n=1\right)$ in $\left(q_{1},q_{2},0,0\right)$ phase space for $\bar{n}=1,\lambda=0.3$: (a) $\kappa t=0.05$, (b) $\kappa t=0.1$, (c) $\kappa t=0.12$, (d) $\kappa t=0.2$.
Figure 9: (Color online) The time evolution of the WF $\left(m=0,n=1\right)$ in $\left(q_{1},q_{2},0,0\right)$ phase space for $\lambda=0.3$ and $\kappa t=0.05$ with (a) $\bar{n}=0$, (b) $\bar{n}=1$, (c) $\bar{n}=2$, (d) $\bar{n}=7$.
Figure 10: (Color online) The time evolution of the WF $\left(m=0,n=1\right)$ in $\left(q_{1},q_{2},0,0\right)$ phase space for $\bar{n}=1$ and $\kappa t=0.05$ with (a) $\lambda=0.03$, (b) $\lambda=0.5$, (c) $\lambda=0.8$, (d) $\lambda=1.2$.
Figure 11: (Color online) The time evolution of the WF for $m=0,n=2$ in $\left(q_{1},q_{2},0,0\right)$ phase space with (a) $\kappa t=0$, (b) $\kappa t=0.05$, (c) $\kappa t=0.1$, (d) $\kappa t=0.2$.
## VI Conclusions
In summary, we have investigated the statistical properties of the two-mode
photon-subtracted squeezed vacuum state (TPSSV) and its decoherence in a thermal
channel with average thermal photon number $\bar{n}$ and dissipative
coefficient $\kappa$. For a TPSSV with arbitrary photon-subtraction numbers, we
have for the first time calculated the normalization factor, which turns out to
be a Jacobi polynomial of the squeezing parameter $\lambda$, a remarkable
result. We have also shown that the TPSSV can be treated as a squeezed
two-variable Hermite-polynomial excitation on the vacuum. Based on the behavior
of the Jacobi polynomials, the statistical properties of the field, such as the
photon number distribution, squeezing properties, cross-correlation function
and antibunching, are derived analytically. In particular, the nonclassicality
of the TPSSV is discussed in terms of the negativity of the WF after deriving
its explicit expression. The decoherence of the TPSSV in the thermal channel is
then demonstrated using the compact expression for the WF. The threshold value
of the decay time, corresponding to the transition of the WF from partially
negative to completely positive, is presented. It is found that the WF of the
TPSSV has no chance to take negative values for any squeezing parameter
$\lambda$ and any photon-subtraction numbers ($m,n$) if $\kappa
t>\frac{1}{2}\ln\frac{2\bar{n}+2}{2\bar{n}+1}$. The technique of integration
within an ordered product (IWOP) of operators brings convenience to our
derivations.
Acknowledgments: This work was supported by the National Natural Science Foundation
of China under Grant Nos. 10775097 and 10874174.
Appendix A: Derivation of the Wigner function Eq.(34) of the TPSSV
The definition of the WF of a two-mode quantum state $\left|\Psi\right\rangle$ is
$W(\alpha,\beta)=\left\langle\Psi\right|\Delta_{1}\left(\alpha\right)\Delta_{2}\left(\beta\right)\left|\Psi\right\rangle$;
thus, by using Eqs.(5) and (33), the WF of the TPSSV can be calculated as
$\displaystyle W(\alpha,\beta)$
$\displaystyle=\left\langle\lambda,m,n\right|\Delta_{1}\left(\alpha\right)\Delta_{2}\left(\beta\right)\left|\lambda,m,n\right\rangle$
$\displaystyle=\frac{\sinh^{n+m}2\lambda}{2^{n+m}N_{\lambda,m,n}}\left\langle
00\right|H_{m,n}\left(-i\sqrt{\tanh\lambda}b,-i\sqrt{\tanh\lambda}a\right)\Delta_{1}\left(\bar{\alpha}\right)$
$\displaystyle\otimes\Delta_{2}\left(\bar{\beta}\right)H_{m,n}\left(i\sqrt{\tanh\lambda}b^{\dagger},i\sqrt{\tanh\lambda}a^{\dagger}\right)\left|00\right\rangle$
$\displaystyle=\frac{\sinh^{n+m}2\lambda}{2^{n+m}N_{\lambda,m,n}}e^{2\left|\bar{\alpha}\right|^{2}+2\left|\bar{\beta}\right|^{2}}\int\frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{4}}e^{-\left|z_{1}\right|^{2}-\left|z_{2}\right|^{2}-2(z_{1}\bar{\alpha}^{\ast}-\bar{\alpha}z_{1}^{\ast})-2(z_{2}\bar{\beta}^{\ast}-\bar{\beta}z_{2}^{\ast})}$
$\displaystyle\times
H_{m,n}\left(-i\sqrt{\tanh\lambda}z_{2},-i\sqrt{\tanh\lambda}z_{1}\right)H_{m,n}\left(-i\sqrt{\tanh\lambda}z_{2}^{\ast},-i\sqrt{\tanh\lambda}z_{1}^{\ast}\right).$
(A1)
Further noticing the generating function of the two-variable Hermite
polynomials,
$H_{m,n}\left(\epsilon,\varepsilon\right)=\frac{\partial^{m+n}}{\partial
t^{m}\partial t^{\prime n}}\left.\exp\left[-tt^{\prime}+\epsilon t+\varepsilon
t^{\prime}\right]\right|_{t=t^{\prime}=0},$ (A2)
Eq.(A1) can be further rewritten as
$\displaystyle W(\alpha,\beta)$
$\displaystyle=\frac{\sinh^{n+m}2\lambda}{2^{n+m}N_{\lambda,m,n}}e^{2\left|\bar{\alpha}\right|^{2}+2\left|\bar{\beta}\right|^{2}}\frac{\partial^{m+n}}{\partial
t^{m}\partial\tau^{n}}\frac{\partial^{m+n}}{\partial t^{\prime
m}\partial\tau^{\prime n}}e^{-t\tau-t^{\prime}\tau^{\prime}}$
$\displaystyle\times\int\frac{d^{2}z_{1}}{\pi^{2}}\left.e^{-\left|z_{1}\right|^{2}+\left(-2\bar{\alpha}^{\ast}-i\sqrt{\tanh\lambda}\tau\right)z_{1}+\left(2\bar{\alpha}-i\sqrt{\tanh\lambda}\tau^{\prime}\right)z_{1}^{\ast}}\right|_{t=\tau=0}$
$\displaystyle\times\int\frac{d^{2}z_{2}}{\pi^{2}}\left.e^{-\left|z_{2}\right|^{2}+\left(-2\bar{\beta}^{\ast}-i\sqrt{\tanh\lambda}t\right)z_{2}+\left(2\bar{\beta}-i\sqrt{\tanh\lambda}t^{\prime}\right)z_{2}^{\ast}}\right|_{t^{\prime}=\tau^{\prime}=0}$
$\displaystyle=\frac{\sinh^{n+m}2\lambda}{2^{n+m}N_{\lambda,m,n}}e^{-2\left|\bar{\alpha}\right|^{2}-2\left|\bar{\beta}\right|^{2}}\frac{\partial^{m+n}}{\partial
t^{m}\partial\tau^{n}}\frac{\partial^{m+n}}{\partial t^{\prime
m}\partial\tau^{\prime n}}$
$\displaystyle\times\left.e^{-t\tau-t^{\prime}\tau^{\prime}+A^{\ast}\tau^{\prime}+B^{\ast}t^{\prime}+A\tau+Bt-\left(tt^{\prime}+\tau\tau^{\prime}\right)\tanh\lambda}\right|_{t=\tau=t^{\prime}=\tau^{\prime}=0},$
(A3)
where we have set
$B=-2i\bar{\beta}\sqrt{\tanh\lambda},A=-2i\bar{\alpha}\sqrt{\tanh\lambda},$
(A4)
and have used the following integration formula
$\int\frac{d^{2}z}{\pi}e^{\zeta\left|z\right|^{2}+\xi z+\eta
z^{\ast}}=-\frac{1}{\zeta}e^{-\frac{\xi\eta}{\zeta}},\text{Re}\left(\zeta\right)<0.$
(A5)
Expanding the exponential term
$\exp\left[-\left(tt^{\prime}+\tau\tau^{\prime}\right)\tanh\lambda\right],$
and using Eq.(A2), we have
$\displaystyle W(\alpha,\beta)$
$\displaystyle=\frac{\sinh^{n+m}2\lambda}{2^{n+m}N_{\lambda,m,n}}e^{-2\left|\bar{\alpha}\right|^{2}-2\left|\bar{\beta}\right|^{2}}\sum_{l=0}^{\infty}\sum_{k=0}^{\infty}\frac{\left(-\tanh\lambda\right)^{l+k}}{l!k!}$
$\displaystyle\allowbreak\times\frac{\partial^{l+k}}{\partial B^{l}\partial
A^{k}}\frac{\partial^{l+k}}{\partial B^{\ast l}\partial A^{\ast
k}}\frac{\partial^{m+n}}{\partial t^{m}\partial\tau^{n}}$
$\displaystyle\times\left.\frac{\partial^{m+n}}{\partial t^{\prime m}\partial\tau^{\prime n}}e^{-t\tau+A\tau+Bt-t^{\prime}\tau^{\prime}+A^{\ast}\tau^{\prime}+B^{\ast}t^{\prime}}\right|_{t=\tau=t^{\prime}=\tau^{\prime}=0}$
$\displaystyle=\frac{\sinh^{n+m}2\lambda}{2^{n+m}N_{\lambda,m,n}}e^{-2\left|\bar{\alpha}\right|^{2}-2\left|\bar{\beta}\right|^{2}}\sum_{l=0}^{\infty}\sum_{k=0}^{\infty}\frac{\left(-\tanh\lambda\right)^{l+k}}{l!k!}$
$\displaystyle\times\frac{\partial^{l+k}}{\partial B^{l}\partial
A^{k}}\frac{\partial^{l+k}}{\partial B^{\ast l}\partial A^{\ast
k}}H_{m,n}\left(B,A\right)H_{m,n}\left(B^{\ast},A^{\ast}\right).$ (A6)
Noticing the well-known differential relations of
$H_{m,n}\left(\epsilon,\varepsilon\right),$
$\frac{\partial^{l+k}}{\partial\epsilon^{l}\partial\varepsilon^{k}}H_{m,n}\left(\epsilon,\varepsilon\right)=\frac{m!n!H_{m-l,n-k}\left(\epsilon,\varepsilon\right)}{\left(m-l\right)!\left(n-k\right)!},$
(A7)
we can further recast Eq.(A6) into Eq.(34).
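As an independent cross-check of the manipulations in this appendix, the short sympy sketch below (purely illustrative) builds $H_{m,n}$ directly from the generating function (A2) and verifies both the special value $H_{m,n}\left(0,0\right)=\left(-1\right)^{m}m!\delta_{m,n}$ used above and the differential relation (A7):

```python
import sympy as sp

t, tp, x, y = sp.symbols('t tp x y')

def H(m, n, a, b):
    """Two-variable Hermite polynomial H_{m,n}(a,b) from the generating function (A2)."""
    gen = sp.exp(-t * tp + a * t + b * tp)
    d = sp.diff(gen, *([t] * m + [tp] * n)) if (m or n) else gen
    return sp.expand(d.subs({t: 0, tp: 0}))

# special value H_{m,n}(0,0) = (-1)^m m! delta_{m,n}
for m in range(3):
    for n in range(3):
        expected = (-1) ** m * sp.factorial(m) if m == n else 0
        assert sp.simplify(H(m, n, 0, 0) - expected) == 0

# differential relation (A7) for a sample case (m, n, l, k) = (2, 2, 1, 1)
m, n, l, k = 2, 2, 1, 1
lhs = sp.diff(H(m, n, x, y), x, l, y, k)
rhs = sp.factorial(m) * sp.factorial(n) * H(m - l, n - k, x, y) \
      / (sp.factorial(m - l) * sp.factorial(n - k))
assert sp.expand(lhs - rhs) == 0
print("Eq.(A2) definition, H_{m,n}(0,0), and Eq.(A7) verified")
```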
Appendix B: Derivation of the solution of Eq.(36)
To solve the ME in Eq.(36), we first introduce two entangled state
representations 49a :
$\displaystyle\left|\eta_{a}\right\rangle$
$\displaystyle=\exp\left[-\frac{1}{2}|\eta_{a}|^{2}+\eta_{a}a^{\dagger}-\eta_{a}^{\ast}\tilde{a}^{\dagger}+a^{\dagger}\tilde{a}^{\dagger}\right]\left|0\tilde{0}\right\rangle,$
(B1) $\displaystyle\left|\eta_{b}\right\rangle$
$\displaystyle=\exp\left[-\frac{1}{2}|\eta_{b}|^{2}+\eta_{b}b^{\dagger}-\eta_{b}^{\ast}\tilde{b}^{\dagger}+b^{\dagger}\tilde{b}^{\dagger}\right]\left|0\tilde{0}\right\rangle,$
(B2)
which satisfy the following eigenvector equations, for instance,
$\begin{array}[c]{c}(a-\tilde{a}^{\dagger})\left|\eta_{a}\right\rangle=\eta_{a}\left|\eta_{a}\right\rangle,\;(a^{\dagger}-\tilde{a})\left|\eta_{a}\right\rangle=\eta_{a}^{\ast}\left|\eta_{a}\right\rangle,\\\
\left\langle\eta_{a}\right|(a^{\dagger}-\tilde{a})=\eta_{a}^{\ast}\left\langle\eta_{a}\right|,\
\left\langle\eta_{a}\right|(a-\tilde{a}^{\dagger})=\eta_{a}\left\langle\eta_{a}\right|.\end{array}$
(B3)
which imply that the operators $(a-\tilde{a}^{\dagger})$ and $(a^{\dagger}-\tilde{a})$
can be replaced by the c-numbers $\eta_{a}$ and $\eta_{a}^{\ast},$ since
$\left[(a-\tilde{a}^{\dagger}),(a^{\dagger}-\tilde{a})\right]=0.$ Acting with
both sides of Eq.(36) on the vector
$\left|I_{a},I_{b}\right\rangle\equiv\left|\eta_{a}=0\right\rangle\otimes\left|\eta_{b}=0\right\rangle$
(denoting
$\left|\rho\left(t\right)\right\rangle\equiv\rho\left(t\right)\left|I_{a},I_{b}\right\rangle$),
and noticing the correspondence relations:
$\begin{array}[c]{c}a\left|I_{a},I_{b}\right\rangle=\tilde{a}^{\dagger}\left|I_{a},I_{b}\right\rangle,\text{
}a^{\dagger}\left|I_{a},I_{b}\right\rangle=\tilde{a}\left|I_{a},I_{b}\right\rangle,\\\
b\left|I_{a},I_{b}\right\rangle=\tilde{b}^{\dagger}\left|I_{a},I_{b}\right\rangle,\text{
}b^{\dagger}\left|I_{a},I_{b}\right\rangle=\tilde{b}\left|I_{a},I_{b}\right\rangle,\end{array}$
(B4)
we can put Eq.(36) into the following form:
$\displaystyle\frac{d}{dt}\left|\rho\left(t\right)\right\rangle$
$\displaystyle=\left[\kappa\left(\bar{n}+1\right)\left(2a\tilde{a}-a^{{\dagger}}a-\tilde{a}^{\dagger}\tilde{a}\right)+\kappa\bar{n}\left(2a^{{\dagger}}\tilde{a}^{\dagger}-aa^{{\dagger}}-\tilde{a}\tilde{a}^{{\dagger}}\right)\right.$
$\displaystyle\left.+\kappa\left(\bar{n}+1\right)\left(2b\tilde{b}-b^{{\dagger}}b-\tilde{b}^{{\dagger}}\tilde{b}\right)+\kappa\bar{n}\left(2b^{{\dagger}}\tilde{b}^{{\dagger}}-bb^{{\dagger}}-\tilde{b}\tilde{b}^{{\dagger}}\right)\right]\left|\rho\left(t\right)\right\rangle.$
(B5)
Its formal solution is given by
$\displaystyle\left|\rho\left(t\right)\right\rangle$
$\displaystyle=\exp\left[\kappa
t\left(\bar{n}+1\right)\left(2a\tilde{a}-a^{{\dagger}}a-\tilde{a}^{\dagger}\tilde{a}\right)+\kappa
t\bar{n}\left(2a^{{\dagger}}\tilde{a}^{\dagger}-aa^{{\dagger}}-\tilde{a}\tilde{a}^{{\dagger}}\right)\right.$
$\displaystyle\left.+\kappa
t\left(\bar{n}+1\right)\left(2b\tilde{b}-b^{{\dagger}}b-\tilde{b}^{{\dagger}}\tilde{b}\right)+\kappa
t\bar{n}\left(2b^{{\dagger}}\tilde{b}^{{\dagger}}-bb^{{\dagger}}-\tilde{b}\tilde{b}^{{\dagger}}\right)\right]\left|\rho_{0}\right\rangle,$
(B6)
where
$\left|\rho_{0}\right\rangle\equiv\rho_{0}\left|I_{a},I_{b}\right\rangle$. In
order to solve Eq.(B6), noticing that, for example,
$2a\tilde{a}-a^{{\dagger}}a-\tilde{a}^{\dagger}\tilde{a}=-\left(a^{{\dagger}}-\tilde{a}\right)\left(a-\tilde{a}^{{\dagger}}\right)+\tilde{a}a-\tilde{a}^{\dagger}a^{{\dagger}},$
(B7)
we have
$\displaystyle\left|\rho\left(t\right)\right\rangle$
$\displaystyle=\exp\left[\left(a\tilde{a}-\tilde{a}^{\dagger}a^{\dagger}+1\right)\kappa
t\right]$ $\displaystyle\times\exp\left[\frac{2\bar{n}+1}{2}\left(1-e^{2\kappa
t}\right)\left(a^{\dagger}-\tilde{a}\right)\left(a-\tilde{a}^{\dagger}\right)\right]$
$\displaystyle\times\exp\left[\left(b\tilde{b}-\tilde{b}^{\dagger}b^{\dagger}+1\right)\kappa
t\right]$ $\displaystyle\times\exp\left[\frac{2\bar{n}+1}{2}\left(1-e^{2\kappa
t}\right)\left(b^{\dagger}-\tilde{b}\right)\left(b-\tilde{b}^{\dagger}\right)\right]\left|\rho_{0}\right\rangle,$
(B8)
where we have used the operator identity $\exp[\lambda(A+\sigma
B)]=e^{\lambda A}\exp[\sigma B(1-e^{-\lambda\tau})/\tau]$, valid for
$[A,B]=\tau B.$
Thus the matrix element of $\rho\left(t\right)$ between
$\left\langle\eta_{a},\eta_{b}\right|$ and $\left|I_{a},I_{b}\right\rangle$ is
$\left\langle\eta_{a},\eta_{b}\right|\left.\rho\left(t\right)\right\rangle=\exp\left[-\frac{2\bar{n}+1}{2}T\left(|\eta_{a}|^{2}+|\eta_{b}|^{2}\right)\right]\left\langle\eta_{a}e^{-\kappa t},\eta_{b}e^{-\kappa t}\right|\left.\rho_{0}\right\rangle,$ (B9)
from which one can see clearly the attenuation caused by the environment.
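The disentangling identity quoted above depends only on the commutation relation $[A,B]=\tau B$ and is easy to check with truncated matrices. In the sketch below (illustrative only) $A$ is the number operator and $B$ the annihilation operator on a truncated Fock space, for which $[A,B]=-B$ holds exactly, i.e. $\tau=-1$:

```python
import numpy as np
from scipy.linalg import expm

N = 12                                          # Fock-space truncation
B = np.diag(np.sqrt(np.arange(1.0, N)), k=1)    # annihilation operator a
A = np.diag(np.arange(N, dtype=float))          # number operator a^dagger a
tau = -1.0                                      # [A, B] = tau * B holds exactly here
assert np.allclose(A @ B - B @ A, tau * B)

lam, sigma = 0.7, 0.3
lhs = expm(lam * (A + sigma * B))
rhs = expm(lam * A) @ expm(sigma * B * (1.0 - np.exp(-lam * tau)) / tau)
print("max deviation:", np.abs(lhs - rhs).max())   # agrees to machine precision
```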
Further, using the completeness relation of
$\left|\eta_{a},\eta_{b}\right\rangle$,
$\int\frac{d^{2}\eta_{a}d^{2}\eta_{b}}{\pi^{2}}\left|\eta_{a},\eta_{b}\right\rangle\left\langle\eta_{a},\eta_{b}\right|=1$
and the IWOP technique 50 ; 51 , we see
$\displaystyle\left|\rho\left(t\right)\right\rangle$
$\displaystyle=\int\frac{d^{2}\eta_{a}d^{2}\eta_{b}}{\pi^{2}}\left|\eta_{a},\eta_{b}\right\rangle\left\langle\eta_{a},\eta_{b}\right|\left.\rho\left(t\right)\right\rangle$
$\displaystyle=\frac{1}{\left(\bar{n}T+1\right)^{2}}\exp\left[T_{1}\left(a^{\dagger}\tilde{a}^{\dagger}+b^{\dagger}\tilde{b}^{\dagger}\right)\right]$
$\displaystyle\times\exp\left[\left(a^{\dagger}a+b^{\dagger}b+\tilde{a}^{\dagger}\tilde{a}+\tilde{b}^{\dagger}\tilde{b}\right)\ln
T_{2}\right]$
$\displaystyle\times\exp\left[T_{3}\left(a\tilde{a}+b\tilde{b}\right)\right]\rho_{0}\left|I_{a},I_{b}\right\rangle,$
(B10)
where $T_{1},T_{2}$ and $T_{3}$ are defined in Eq.(40). Noticing Eq.(B4), we
can rewrite Eq.(B10) as
$\rho\left(t\right)=\sum_{i,j,r,s=0}^{\infty}M_{i,j,r,s}\rho_{0}M_{i,j,r,s}^{{\dagger}},$ where
$M_{i,j,r,s}$ and $M_{i,j,r,s}^{{\dagger}}$ are defined in Eq.(39).
Appendix C: Derivation of Eq.(41) by using thermo-field dynamics and the
entangled state representation
In this appendix, we derive the evolution formula of the WF, i.e., the
relation between the WF at an arbitrary time and the WF at the initial time.
The WF of a density operator $\rho$ is defined as
$W\left(\alpha\right)=\mathtt{Tr}\left[\Delta\left(\alpha\right)\rho\right]$,
where $\Delta\left(\alpha\right)$ is the single-mode Wigner operator,
$\Delta\left(\alpha\right)=\frac{1}{\pi}D\left(2\alpha\right)\left(-1\right)^{a^{{\dagger}}a}$.
By using
$\left\langle\tilde{n}\right.\left|\tilde{m}\right\rangle=\delta_{m,n}$ we can
rewrite $W\left(\alpha\right)$ as 52
$W\left(\alpha\right)=\sum_{m,n}^{\infty}\left\langle
n,\tilde{n}\right|\Delta\left(\alpha\right)\rho\left|m,\tilde{m}\right\rangle=\frac{1}{\pi}\left\langle\xi_{=2\alpha}\right|\left.\rho\right\rangle,$
(C1)
where $\left\langle\xi\right|$ is the conjugate state of
$\left\langle\eta\right|$, with overlap
$\left\langle\eta\right|\left.\xi\right\rangle=\frac{1}{2}\exp\left[\frac{1}{2}\left(\xi\eta^{\ast}-\xi^{\ast}\eta\right)\right],$
a Fourier transformation kernel. Similarly, for a two-mode quantum system the
WF is given by
$W\left(\alpha,\beta\right)=\mathtt{Tr}\left[\Delta_{a}\left(\alpha\right)\Delta_{b}\left(\beta\right)\rho\right]=\frac{1}{\pi^{2}}\left\langle\xi_{a=2\alpha},\xi_{b=2\beta}\right|\left.\rho\right\rangle.$
(C2)
Employing the above overlap relation, Eq.(C2) can be recast into the following
form:
$W\left(\alpha,\beta,t\right)=\int\frac{d^{2}\eta_{a}d^{2}\eta_{b}}{4\pi^{4}}e^{\alpha^{\ast}\eta_{a}-\alpha\eta_{a}^{\ast}+\beta^{\ast}\eta_{b}-\beta\eta_{b}^{\ast}}\left\langle\eta_{a},\eta_{b}\right|\left.\rho\left(t\right)\right\rangle.$
(C3)
Then substituting Eq.(B9) into Eq.(C3) and using the completeness of
$\left\langle\xi\right|$,
$\int\frac{d^{2}\xi}{\pi}\left|\xi\right\rangle\left\langle\xi\right|=1,$ we
have
$\displaystyle W\left(\alpha,\beta,t\right)$
$\displaystyle=\int\frac{d^{2}\eta_{a}d^{2}\eta_{b}}{4\pi^{4}}e^{-\frac{2\bar{n}+1}{2}T\left(|\eta_{a}|^{2}+|\eta_{b}|^{2}\right)}$
$\displaystyle\times
e^{\alpha^{\ast}\eta_{a}-\alpha\eta_{a}^{\ast}+\beta^{\ast}\eta_{b}-\beta\eta_{b}^{\ast}}\left\langle\eta_{a}e^{-\kappa
t},\eta_{b}e^{-\kappa t}\right|\left.\rho_{0}\right\rangle$
$\displaystyle=\int\frac{d^{2}\xi_{a}d^{2}\xi_{b}}{\pi^{2}}W\left(\zeta,\eta,0\right)\int\frac{d^{2}\eta_{a}d^{2}\eta_{b}}{4\pi^{2}}e^{-\frac{2\bar{n}+1}{2}T\left(|\eta_{a}|^{2}+|\eta_{b}|^{2}\right)}$
$\displaystyle\times
e^{\alpha^{\ast}\eta_{a}-\alpha\eta_{a}^{\ast}+\beta^{\ast}\eta_{b}-\beta\eta_{b}^{\ast}}\left\langle\eta_{a}e^{-\kappa
t},\eta_{b}e^{-\kappa
t}\right|\left.\xi_{a=2\zeta},\xi_{b=2\eta}\right\rangle.$ (C4)
Performing the integration in Eq.(C4) over $d^{2}\eta_{a}d^{2}\eta_{b}$, we
obtain Eq.(41).
Making the variable replacements $\frac{\alpha-\zeta e^{-\kappa
t}}{\sqrt{T}}\rightarrow\zeta$ and $\frac{\beta-\eta e^{-\kappa
t}}{\sqrt{T}}\rightarrow\eta,$ Eq.(41) can be recast as
$\displaystyle W\left(\alpha,\beta,t\right)$ $\displaystyle=4e^{4\kappa t}\int
d^{2}\zeta d^{2}\eta W_{a}^{th}\left(\zeta\right)W_{b}^{th}\left(\eta\right)$
$\displaystyle\times W\left\\{e^{\kappa
t}\left(\alpha-\sqrt{T}\zeta\right),e^{\kappa
t}\left(\beta-\sqrt{T}\eta\right),0\right\\},$ (C5)
where $W^{th}\left(\zeta\right)$ is the Wigner function of a thermal state with
average thermal photon number $\bar{n}$:
$W^{th}\left(\zeta\right)=\frac{1}{\pi\left(2\bar{n}+1\right)}e^{-\frac{2\left|\zeta\right|^{2}}{2\bar{n}+1}}.$
Eq.(C5) is another expression for the evolution of the WF and is in agreement
with the results of Refs. 46 ; 47 .
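Since $2W^{th}$ is a normalized Gaussian probability density, the convolution in Eq.(C5) can be estimated by Monte Carlo sampling. The sketch below is purely illustrative: it assumes $T=1-e^{-2\kappa t}$ and, instead of the TPSSV Wigner function, it uses a simple Gaussian test function (the two-mode vacuum in the same normalization convention as $W^{th}$) just to exhibit the smearing toward the thermal value of Eq.(44):

```python
import numpy as np

rng = np.random.default_rng(0)

def evolved_wf(W0, alpha, beta, kt, nbar, samples=200_000):
    """Monte Carlo estimate of Eq. (C5); T = 1 - exp(-2*kt) is assumed."""
    T = 1.0 - np.exp(-2.0 * kt)
    # 2*W^th(zeta) is a Gaussian pdf with variance (2*nbar+1)/4 per quadrature
    s = np.sqrt((2.0 * nbar + 1.0) / 4.0)
    zeta = rng.normal(0, s, samples) + 1j * rng.normal(0, s, samples)
    eta = rng.normal(0, s, samples) + 1j * rng.normal(0, s, samples)
    # 4 e^{4 kt} * (1/2) * (1/2) * E[...]  =  e^{4 kt} * E[...]
    vals = W0(np.exp(kt) * (alpha - np.sqrt(T) * zeta),
              np.exp(kt) * (beta - np.sqrt(T) * eta))
    return np.exp(4.0 * kt) * vals.mean()

# Gaussian test function standing in for W(alpha, beta, 0)
W0 = lambda a, b: np.exp(-2.0 * np.abs(a) ** 2 - 2.0 * np.abs(b) ** 2) / np.pi ** 2

nbar = 1.0
for kt in (0.0, 0.1, 0.5, 2.0):
    print(f"kt={kt:.1f}  W(0,0,t) ~ {evolved_wf(W0, 0.0, 0.0, kt, nbar):.4f}")
# the peak decreases from 1/pi^2 ~ 0.101 toward 1/(pi^2 (2*nbar+1)^2) ~ 0.011, cf. Eq. (44)
```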
## References
* (1) D. Bouwmeester, A. Ekert and A. Zeilinger, The Physics of Quantum Information (Springer-Verlag, 2000).
* (2) M. S. Kim, “Recent developments in photon-level operations on travelling light fields,” J. Phys. B: At. Mol. Opt. Phys. 41, 133001-133018 (2008).
* (3) T. Opatrný, G. Kurizki and D-G. Welsch, Phys. Rev. A 61, 032302 (2000).
* (4) A. Zavatta, S. Viciani, and M. Bellini, “Quantum-to-classical transition with single-photon-added coherent states of light,” Science, 306, 660-662 (2004)
* (5) A. Zavatta, S. Viciani, and M. Bellini, ”Single-photon excitation of a coherent state: Catching the elementary step of stimulated light emission,” Phys. Rev. A 72, 023820-023828. (2005).
* (6) A. Biswas and G. S. Agarwal, “Nonclassicality and decoherence of photon- subtracted squeezed states,” Phys. Rev. A 75, 032104-032111 (2007).
* (7) P. Marek, H. Jeong and M. S. Kim, “Generating ‘squeezed’ superposition of coherent states using photon addition and subtraction,” Phys. Rev. A 78, 063811-063818 (2008).
* (8) L. Y. Hu and H. Y. Fan, ”Statistical properties of photon-subtracted squeezed vacuum in thermal environment,” J. Opt. Soc. Am. B, 25, 1955-1964(2008).
* (9) H. Nha and H. J. Carmichael, “Proposed Test of Quantum Nonlocality for Continuous Variables,” Phys. Rev. Lett. 93, 020401-020404 (2004).
* (10) A. Ourjoumtsev, R. Tualle-Brouri and P. Grangier, “Quantum homodyne tomography of a two-photon Fock state,” Phys. Rev. Lett. 96, 213601-213604 (2006).
* (11) A. Ourjoumtsev, A. Dantan, R. Tualle-Brouri and P. Grangier, “Increasing entanglement between Gaussian states by coherent photon subtraction”, Phys. Rev. Lett. 98, 030502-030505 (2007).
* (12) A. I. Lvovsky et al., Phys. Rev. Lett. 87, 050402-050405 (2001).
* (13) A. Zavatta, S. Viciani, and M. Bellini, “Tomographic reconstruction of the single-photon Fock state by high-frequency homodyne detection,” Phys. Rev. A. 70, 053821-053826 (2004).
* (14) M. D’ Angelo, A. Zavatta, V. Parigi, and M. Bellini, “Tomographic test of Bell’s inequality for a time-delocalized single photon,” Phys. Rev. A. 74, 052114-052119 (2004).
* (15) S. A. Babichev, J. Appel and A. I. Lvovsky, “Homodyne tomography characterization and nonlocality of a dual-mode optical qubit,” Phys. Rev. Lett. 92, 193601-193604 (2004).
* (16) J. S. Neergaard-Nielsen, B. Melholt Nielsen, C. Hettich, K. Mølmer, and E. S. Polzik, “Generation of a Superposition of Odd Photon Number States for Quantum Information Networks,” Phys. Rev. Lett. 97, 083604-083607 (2006).
* (17) A. Ourjoumtsev, R. Tualle-Brouri, J. Laurat, Ph. Grangier, “Generating optical Schrödinger kittens for quantum information processing,” Science 312, 83-86 (2006).
* (18) M. Dakna, T. Anhut, T. Opatrny, L. Knoll, and D.-G. Welsch, “Generating Schrödinger-cat-like states by means of conditional measurements on a beam splitter,” Phys. Rev. A 55, 3184-3194 (1997).
* (19) S. Glancy and H. M. de Vasconcelos, “Methods for producing optical coherent state superpositions,” J. Opt. Soc. Am. B 25, 712-733 (2008).
* (20) S. Olivares and Matteo G. A. Paris, ”Photon subtracted states and enhancement of nonlocality in the presence of noise,” J. Opt. B: Quantum Semiclass. Opt. 7, S392-S397(2005).
* (21) S. Olivares and M. G. A. Paris, “Enhancement of nonlocality in phase space,” Phys. Rev. A 70, 032112-032117 (2004).
* (22) S. Olivares, M. G. A. Paris and R. Bonifacio, “Teleportation improvement by inconclusive photon subtraction,” Phys. Rev. A 67, 032314-032318 (2003).
* (23) A. Kitagawa, M. Takeoka, M. Sasaki and A. Chefles, “Entanglement evaluation of non-Gaussian states generated by photon subtraction from squeezed states,” Phys. Rev. A 73, 042310-042321 (2006).
* (24) P. T. Cochrane, T. C. Ralph, and G. J. Milburn, “Teleportation improvement by condition measurements on the two-mode squeezed vacuum,” Phys. Rev. A 65, 062306-062311 (2002).
* (25) T. Opatrny, G. Kurizki, and D.-G. Welsch, “Improvement on teleportation of continuous variables by photon subtraction via conditional measurement,” Phys. Rev. A 61, 032302-032308 (2000).
* (26) S. D. Bartlett and B. C. Sanders, “Universal continuous-variable quantum computation: Requirement of optical nonlinearity for photon counting,” Phys. Rev. A 65, 042304-042308 (2002).
* (27) M. Sasaki and S. Suzuki, “Multimode theory of measurement-induced non-Gaussian operation on wideband squeezed light: Analytical formula,” Phys. Rev. A 73, 043807-043824 (2006).
* (28) M. S. Kim, E. Park, P. L. Knight and H. Jeong, “Nonclassicality of a photon-subtracted Gaussian field,” Phys. Rev. A 71, 043805-043809 (2005).
* (29) C. Invernizzi, S. Olivares, M. G. A. Paris and K. Banaszek, “Effect of noise and enhancement of nonlocality in on/off photodetection,” Phys. Rev. A 72, 042105-042116 (2005).
* (30) V. Buzek, “SU(1,1) Squeezing of SU(1,1) Generalized Coherent States,” J. Mod. Opt. 34, 303-316 (1990).
* (31) R. Loudon and P. L. Knight, ”Squeezed light,” J. Mod. Opt. 34, 709-759(1987).
* (32) P. Schleich Wolfgang, Quantum Optics in Phase Space, (Wiley-VCH, 2001).
* (33) A. Wünsche, “Hermite and Laguerre 2D polynomials,” J. Computational and Appl. Math. 133, 665-678 (2001).
* (34) A. Wünsche, “General Hermite and Laguerre two-dimensional polynomials, ”J. Phys. A: Math. and Gen. 33, 1603-1629 (2000).
* (35) G. S. Agarwal, “Negative binomial states of the field-operator representation and production by state reduction in optical processes,” Phys. Rev. A 45, 1787-1792 (1992).
* (36) W. Magnus et al., Formulas and theorems for the special functions of mathematical physics (Springer, 1996).
* (37) R. Glauber,”Coherent and Incoherent States of the Radiation Field,” Phys. Rev. 131, 2766-2788 (1963).
* (38) J. R. Klauder and B. S. Skargerstam, Coherent States (World Scientific, 1985).
* (39) W. M. Zhang, D. F. Feng, and R. Gilmore, “Coherent states: theory and some applications,” Rev. Mod. Phys. 62, 867-927 (1990).
* (40) C. T. Lee, ”Many-photon antibunching in generalized pair coherent states,” Phys. Rev. A , 41, 1569-1575 (1990).
* (41) E. P. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749-759 (1932).
* (42) G. S. Agarwal, E. Wolf, ”Calculus for Functions of Noncommuting Operators and General Phase-Space Methods in Quantum Mechanics. I. Mapping Theorems and Ordering of Functions of Noncommuting Operators,” Phys. Rev. D 2, 2161-2186 (1970).
* (43) M. S. Kim and V. Bužek, “Schrödinger-cat states at finite temperature: Influence of a finite-temperature heat bath on quantum interferences,” Phys. Rev. A 46, 4239-4251 (1992).
* (44) L. Y. Hu and H. Y. Fan, “Statistical properties of photon-added coherent state in a dissipative channel,” Phys. Scr. 79, 035004-035011 (2009).
* (45) H. Jeong, A. P. Lund, and T. C. Ralph, “Production of superpositions of coherent states in traveling optical fields with inefficient photon detection,” Phys. Rev. A 72, 013801-013812 (2005).
* (46) J. S. Neergaard-Nielsen, B. Melholt Nielsen, C. Hettich, K. Mølmer, and E. S. Polzik, “Generation of a Superposition of Odd Photon Number States for Quantum Information Networks,” Phys. Rev. Lett. 97, 083604-083607 (2006).
* (47) H. Jeong, J. Lee and H. Nha, “Decoherence of highly mixed macroscopic quantum superpositions,” J. Opt. Soc. Am. B 25, 1025-1030 (2008).
* (48) H. Y. Fan, ”Weyl ordering quantum mechanical operators by virtue of the IWWP technique,” J. Phys. A 25 3443 (1992).
* (49) H. Y. Fan, J. S. Wang, ”On the Weyl ordering invariant under general n-mode similar transformations,” Mod. Phys. Lett. A 20, 1525 (2005).
* (50) H. Y. Fan, “Newton-Leibniz integration for ket-bra operators in quantum mechanics (IV)—integrations within Weyl ordered product of operators and their applications,” Ann. Phys. 323, 500-526 (2008).
* (51) H. Y. Fan and H. R. Zaidi, Phys. Lett. A 124, 343 (1987).
* (52) C. Gardiner and P. Zoller, Quantum Noise (Springer, Berlin, 2000).
* (53) H. Jeong, J. Lee and M. S. Kim, “Dynamics of nonlocality for a two-mode squeezed state in a thermal environment,” Phys. Rev. A 61, 052101-052105 (2000).
* (54) J. Lee, M. S. Kim and H. Jeong, “Transfer of nonclassical features in quantum teleportation via a mixed quantum channel,” Phys. Rev. A 61, 052101-052105 (2000).
* (55) Y. Takahashi and H. Umezawa, Collective Phenomena 2, 55 (1975); memorial issue for H. Umezawa, Int. J. Mod. Phys. B 10, 1695 (1996), and references therein.
* (56) H. Umezawa, Advanced Field Theory – Micro, Macro, and Thermal Physics (AIP 1993).
* (57) H. Y. Fan and L. Y. Hu, “New approach for analyzing time evolution of density operator in a dissipative channel by the entangled state representation,” Opt. Commun. 281, 5571-5573 (2008).
* (58) H. Y. Fan, H. L. Lu and Y. Fan, ”Newton–Leibniz integration for ket–bra operators in quantum mechanics and derivation of entangled state representations,” Ann. Phys. 321, 480-494 (2006) and references therein.
* (59) A. Wünsche, ”About integration within ordered products in quantum optics,” J. Opt. B: Quantum Semiclass. Opt. 1, R11-R21 (1999).
* (60) L. Y. Hu and H. Y. Fan, “Time evolution of Wigner function in laser process derived by entangled state representation,” quant-ph: arXiv:0903.2900
|
arxiv-papers
| 2009-05-16T02:13:01 |
2024-09-04T02:49:02.652495
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Li-yun Hu, Xue-xiang and Hong-yi Fan",
"submitter": "Liyun Hu",
"url": "https://arxiv.org/abs/0905.2648"
}
|
0905.2694
|
# Manipulation of Vortices by Localized Impurities in Bose-Einstein
Condensates
M.C. Davis1, R. Carretero-González1, Z. Shi2, K.J.H. Law2, P.G. Kevrekidis2,
and B.P. Anderson3 1Nonlinear Dynamical Systems Group (URL:
http://nlds.sdsu.edu/), Department of Mathematics and Statistics, and
Computational Science Research Center (URL: http://www.csrc.sdsu.edu/), San
Diego State University, San Diego CA, 92182-7720, USA.
2Department of Mathematics and Statistics, University of Massachusetts,
Amherst MA 01003, USA
3College of Optical Sciences and Department of Physics, University of Arizona,
Tucson AZ, 85721, USA.
###### Abstract
We consider the manipulation of Bose-Einstein condensate vortices by optical
potentials generated by focused laser beams. It is shown that for appropriate
choices of the laser strength and width it is possible to successfully
transport vortices to various positions inside the trap confining the
condensate atoms. Furthermore, the full bifurcation structure of possible
stationary single-charge vortex solutions in a harmonic potential with this
type of impurity is elucidated. The case when a moving vortex is captured by a
stationary laser beam is also studied, as well as the possibility of dragging
the vortex by means of periodic optical lattices.
## I Introduction
Interactions between localized impurities, or pinning centers, and flux lines
in type-II superconductors have long been of interest in condensed matter
physics Anderson1962a ; Campbell1972a ; Daldini1974a ; Civale1991a , with much recent
work focusing on the pinning effects of arrays of impurities
Baert1995a ; Reichhardt2001a ; Grigorenko2003a . Similar studies of the interactions
between a vortex array in a rotating Bose-Einstein condensate (BEC) and a co-
rotating optical lattice Reijnders2004a ; Pu2005a ; Reijnders2005a have further
contributed to the interest in the physics of manipulating one array of
topological structures with a second array of pinning sites. Depending on the
configuration, depth, and rotation rate of the optical lattice, structural
changes to the vortex array may be induced, and have now been experimentally
observed Tung2006a . Furthermore, combining an optical lattice with a rotating
BEC may enable new investigations of other interesting phenomena, such as for
example, alterations to the superfluid to Mott-insulator transition
Bhat2006aGoldbaum2008a , production of vortex liquids with broken
translational symmetry Dahl2008a , and the existence of stable vortex
molecules and multi-quantum vortices Geurts2008a . Yet despite these
significant advances, the interactions between a _single_ vortex and a single
pinning site within a BEC, and the associated vortex dynamics, are not fully
understood and many problems remain unexplored. A more complete understanding
of such basic interactions may be important for the further development of
many ideas and experiments regarding vortex pinning and manipulation, even for
the case of vortex arrays. Here we undertake a theoretical and numerical study
that examines the possibility of vortex capture and pinning at a localized
impurity within the BEC, and the possibility of vortex manipulation and
dragging by a moving impurity.
Manipulation of coherent nonlinear matter-wave structures BECBOOK ;
NonlinearityReview in trapped BECs has indeed received some examination
Manipulation-SPIE . For example, in the case of negative scattering length
(attractive) BECs in a quasi-one-dimensional (1D) scenario, numerical analysis
shows that it is possible to pin bright solitons away from the center of the
harmonic trap. More importantly, pinned bright solitons may be adiabatically
dragged and repositioned within the trap by slowly moving an external impurity
generated by a focused laser beam BS-Imp . Alternatively, bright solitons
might be pinned and dragged by the effective local minima generated by
adiabatically moving optical lattices and superlattices BS-OL1 ; BS-OSL2 . The
case of repulsive interactions has also drawn considerable attention. In the
1D setting, the effect of localized impurities on dark solitons was described
in Ref. DS-Imp , by using direct perturbation theory vvkve , and later in Ref.
fr1 , by the adiabatic perturbation theory for dark solitons KY . Also, the
effects and possible manipulation of dark solitons by optical lattices have
been studied in Refs. DS-OL1 ; DS-OL2 ; DS-OL3 .
In the present work, we limit our study of vortex-impurity interactions and
vortex manipulation to the case of a positive scattering length (repulsive)
pancake-shaped BEC that is harmonically trapped. We envision a single
localized impurity created by the addition of a focused laser beam BECBOOK ,
which may in principle be tuned either above or below the atomic resonance,
thereby creating a repulsive or attractive potential with blue or red
detunings, respectively. We concentrate on the dynamics of a blue-detuned beam
interacting with a single vortex.
Our manuscript is organized as follows. In the next section we describe the
physical setup and its mathematical model. In Sec. III we study the static
scenario of vortex pinning by the localized laser beam by describing in detail
the full bifurcation structure of stationary vortex solutions and their
stability as a function of the laser properties and the pinning position
inside a harmonic trap. In Sec. IV we study vortex dragging by an
adiabatically moving impurity. We briefly describe our observations also for
the case of single vortex manipulation using an optical lattice, and touch
upon the possibility of capturing a precessing vortex by a fixed impurity.
Finally, in Sec. V we summarize our results and discuss some possible
generalizations and open problems.
## II Setup
In the context of BECs at nano-Kelvin temperatures, mean-field theory can be
used to accurately approximate the behavior of matter-waves BECBOOK . The
resulting mathematical model is a particular form of the nonlinear Schrödinger
equation (NLS) known as the Gross-Pitaevskii equation (GPE) Gross:61 ;
Pitaevskii:61 . The GPE in its full dimensional form is as follows:
$i\hbar\psi_{t}=-\frac{\hbar^{2}}{2m}\nabla^{2}\psi+g|\psi|^{2}\psi+V(x,y,z,t)\,\psi,$
(1)
where $\psi(x,y,z,t)$ is the wavefunction describing the condensate, $m$ is
the mass of the condensed atoms, $g={4\pi\hbar^{2}a}/{m}$ is the interaction
strength, and $a$ is the $s$-wave scattering length of the atoms. The
time-dependent external potential $V(x,y,z,t)$
acting on the condensate is taken to be a combination of a static harmonic
trap (HT) holding the condensed atoms, and a localized impurity (Imp) provided
by a narrowly focused laser beam:
$V(x,y,z,t)=V_{\rm HT}(x,y,z)+V_{\rm Imp}(x,y,z,t).$ (2)
Herein we consider a harmonic trap potential
$V_{\rm
HT}(x,y,z)=\frac{m}{2}\omega_{r}^{2}\left(x^{2}+y^{2}\right)+\frac{m}{2}\omega_{z}^{2}z^{2},$
(3)
with trapping frequencies $\omega_{r}$ and $\omega_{z}$ in the radial and $z$
directions respectively. In general, $V_{\rm Imp}$ can be a negative or
positive quantity, corresponding to an impurity that is an attractive or
repulsive potential for the trapped atoms.
In the present study we further limit our attention to quasi-two-dimensional
condensates, the so-called pancake-shaped condensates, by considering that
$\omega_{z}\gg\omega_{r}$ and that the tight ($z$) direction condensate
profile is described by the harmonic trap ground state in that direction
BECBOOK ; NonlinearityReview . We also consider only cases where $V_{\rm Imp}$
is only a function of $x$ and $y$, and possibly $t$, and hereafter remove the
$z$ dependence from our notation. Under this assumption it is possible to
reduce the three-dimensional GPE (1) to an effective two-dimensional equation
that has the same form as its three-dimensional counterpart but with $g$
replaced by $g_{\rm 2D}=g/\sqrt{2\pi}a_{z}$, where
$a_{z}=\sqrt{\hbar/(m\omega_{z})}$ is the transverse harmonic oscillator
length BECBOOK ; NonlinearityReview .
Furthermore, by measuring, respectively, two-dimensional density, length,
time, and energy in units of $\hbar\omega_{z}/g_{\rm 2D}$, $a_{z}$,
$\omega_{z}^{-1}$, and $\hbar\omega_{z}$, one obtains the standard form for
the adimensionalized GPE in two dimensions:
$iu_{t}=-\frac{1}{2}(u_{xx}+u_{yy})+|u|^{2}u+V(x,y,t)u,$ (4)
where the harmonic potential now reads
$V_{\rm HT}(x,y)=\frac{\Omega^{2}}{2}\left(x^{2}+y^{2}\right),$ (5)
and $\Omega\equiv\omega_{r}/\omega_{z}$ is the adimensionalized harmonic trap
strength. We use throughout this work a typical value for the harmonic trap
strength of $\Omega=0.065$ unless stated otherwise. Other harmonic trap
strengths gave qualitatively similar results. In addition to the harmonic trap
we impose a localized potential stemming from an external localized laser beam
centered about $(x_{a}(t),y_{a}(t))$ that in adimensional form reads
$V_{\rm Imp}(x,y,t)=V^{(0)}_{\rm Imp}\
\exp\left(-\frac{[x-x_{a}(t)]^{2}+[y-y_{a}(t)]^{2}}{\varepsilon^{2}}\right).$
(6)
In this equation, $V^{(0)}_{\rm Imp}$ is proportional to the peak laser
intensity divided by the detuning of the laser from the atomic resonance, and
$\varepsilon=w_{0}/\sqrt{2}$ where $2w_{0}$ is the adimensional Gaussian beam
width. A positive (negative) $V^{(0)}_{\rm Imp}$ corresponds to the intensity
of a blue-(red-)detuned, repulsive (attractive) potential.
Steady-state solutions of the GPE are obtained by separating spatial and
temporal dependencies as $u(x,y,t)=\Psi(x,y)\ e^{-i\mu t}$, where $\Psi$ is
the steady-state, time-independent amplitude of the wavefunction and $\mu$ is
its chemical potential (taken here as $\mu=1$ in adimensional units). Under
the conditions that the location of the impurity is time-independent such that
$x_{a}$ and $y_{a}$ are constant, this leads to the steady-state equation
$\mu\Psi=-\frac{1}{2}\left(\Psi_{xx}+\Psi_{yy}\right)+|\Psi|^{2}\Psi+V(x,y)\Psi.$
(7)
The initial condition used in this study was one that closely approximates a
vortex solution of unit charge $s=\pm 1$ centered at $(x_{0},y_{0})$:
$\begin{array}[]{rcl}\Psi(x,y)&=&\Psi_{\rm
TF}(x,y)\,\tanh[(x-x_{0})^{2}+(y-y_{0})^{2}]\\\\[8.61108pt]
&&\times\exp\left[is\
\tan^{-1}\left\\{(y-y_{0})/(x-x_{0})\right\\}\right],\end{array}$ (8)
where $\Psi_{\rm TF}(x,y)=\sqrt{\max(\mu-V(x,y),0)}$ represents the shape of
the Thomas-Fermi (TF) cloud formed in the presence of the relevant external
potentials BECBOOK . Subsequently, this approximate initial condition was
allowed to converge to the numerically “exact” solutions by means of fixed
point iterations.
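For reference, the sketch below (illustrative only; the grid size is a placeholder, while $\mu=1$, $\Omega=0.065$ and the beam parameters follow values quoted in this work) assembles the potential of Eqs. (5)-(6) and the vortex ansatz of Eq. (8), which can then be handed to a fixed-point or imaginary-time relaxation routine:

```python
import numpy as np

# grid (adimensional units); the Thomas-Fermi radius is sqrt(2*mu)/Omega ~ 21.8
L, Npts = 50.0, 256
x = np.linspace(-L / 2, L / 2, Npts)
X, Y = np.meshgrid(x, x, indexing='ij')

mu, Omega = 1.0, 0.065
V0_imp, eps, xa, ya = 5.0, 1.0, 0.0, 0.0        # impurity strength, width, position
x0, y0, s = 0.0, 0.0, 1                         # vortex position and charge

# harmonic trap, Eq. (5), plus Gaussian impurity, Eq. (6)
V = 0.5 * Omega ** 2 * (X ** 2 + Y ** 2) \
    + V0_imp * np.exp(-((X - xa) ** 2 + (Y - ya) ** 2) / eps ** 2)

# Thomas-Fermi background and vortex ansatz, Eq. (8)
psi_TF = np.sqrt(np.maximum(mu - V, 0.0))
psi = psi_TF * np.tanh((X - x0) ** 2 + (Y - y0) ** 2) \
      * np.exp(1j * s * np.arctan2(Y - y0, X - x0))

dx = x[1] - x[0]
print("normalized number of atoms:", np.sum(np.abs(psi) ** 2) * dx ** 2)
```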
## III The Static Picture: Vortex Pinning and the bifurcations beneath
### III.1 Vortex Pinning by the Impurity
It is well-known that a vortex interacting with a harmonic trap undergoes a
precession based upon the healing length of the vortex and the parameters
which define the trap precession1 ; precession2 ; precession3 ; precession4 ;
precession5 ; precession6 ; precession7 ; precession8 . Since we are
introducing a localized impurity into the trap it is worthwhile to first
observe the behavior of a vortex interacting with only the localized impurity,
in the absence of the harmonic potential. We note that in this case, the
parameters $x_{a}$ and $y_{a}$ may be neglected and the parameters $x_{0}$ and
$y_{0}$ can be interpreted as the coordinates of the vortex relative to the
impurity.
Figure 1: (Color online) Density plot showing a snapshot of the interaction of
the vortex with a localized impurity in the absence of the harmonic trap
(i.e., $\Omega=0$). The presence of the impurity (wider field depression)
induces a clockwise rotation of the vortex (narrower field depression) along a
path depicted by the dark dots. The parameters are as follows:
$(\mu,\Omega,s,V_{\rm Imp}^{(0)},\varepsilon)=(1,0,1,5,1)$. The colorbar shows
the condensate density in adimensional units.
Figure 2: (Color online) Phase diagram depicting the vortex pinning by the
localized impurity of strength $V_{\rm imp}^{(0)}$ and width $\varepsilon$.
Each curve represents a different pinning location at the indicated radii
(i.e., distance from the center of the harmonic trap). Parameters are as
follows: $(\mu,\Omega,s)\ =\ (1,0.065,1)$.
By symmetry, a vortex placed at the center of an impurity (i.e.,
$x_{0}=y_{0}=0$) will result in a steady state without precessing. However, a
vortex placed off center with respect to the impurity will precess at constant
speed around the impurity due to the gradient in the background field induced
by the impurity Kivshar98 . In order to study this behavior in a simple
physically meaningful setting we start with a positive-charge ($s=1$) vortex
without the impurity and then the impurity is adiabatically switched on at a
prescribed distance away from the center of the vortex. We find that for
$V_{\rm Imp}^{(0)}>0$ the vortex then begins to precess around the impurity in
a clockwise direction. Reversing the sign of $V_{\rm Imp}^{(0)}$ in order to
create an attractive impurity induces a counter-clockwise precession with
respect to the impurity. An example of the vortex precession induced by the
impurity is shown in Fig. 1. It is crucial to note that if the impurity is
turned on “close enough” to the steady-state vortex such that the impurity is
within the vortex funnel then the vortex would begin its usual rotation but
would be drawn into the center of the impurity, effectively pinning the
vortex. This effective attraction is related to the emission of sound by the
vortex when it is inside the funnel of the impurity as described in Ref.
Parker:04 .
Throughout this work we follow the center of the vortices by detecting the
extrema of the superfluid vorticity $\omega$ defined as
$\mbox{\boldmath$\omega$}=\nabla\times{\mathbf{v}}_{s}$ where the superfluid
velocity in dimensional units is Jackson:98 ; BECBOOK
${\mathbf{v}}_{s}=-\frac{i\hbar}{2m}\frac{\psi^{*}\nabla\psi-\psi\nabla\psi^{*}}{|\psi|^{2}},$
(9)
where $(\cdot)^{*}$ stands for complex conjugation.
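In adimensional units ($\hbar=m=1$) Eq. (9) reduces to $\mathbf{v}_{s}=\mathrm{Im}\left(\psi^{*}\nabla\psi\right)/|\psi|^{2}$, which is straightforward to evaluate with finite differences. The sketch below (illustrative; the density is regularized at the core, where $|\psi|^{2}\rightarrow 0$) computes $\mathbf{v}_{s}$ and the vorticity and locates a vortex from the extremum of $|\omega|$:

```python
import numpy as np

def superfluid_vorticity(psi, dx):
    """v_s = Im(psi* grad psi)/|psi|^2 (adimensional Eq. (9)) and its curl."""
    dens = np.abs(psi) ** 2 + 1e-12              # regularize the vortex core
    dpsi_dx, dpsi_dy = np.gradient(psi, dx, dx)
    vx = np.imag(np.conj(psi) * dpsi_dx) / dens
    vy = np.imag(np.conj(psi) * dpsi_dy) / dens
    w = np.gradient(vy, dx, axis=0) - np.gradient(vx, dx, axis=1)
    return vx, vy, w

# quick self-test: a unit-charge vortex placed at (1.0, -0.5) on a uniform background
dx = 0.1
x = np.arange(-5.0, 5.0, dx)
X, Y = np.meshgrid(x, x, indexing='ij')
psi = np.tanh(np.hypot(X - 1.0, Y + 0.5)) * np.exp(1j * np.arctan2(Y + 0.5, X - 1.0))
_, _, w = superfluid_vorticity(psi, dx)
i, j = np.unravel_index(np.argmax(np.abs(w)), w.shape)
print("vorticity extremum near (x, y) =", (round(X[i, j], 2), round(Y[i, j], 2)))
```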
We now consider the net effect of the pinning induced by the impurity and the
precession induced by the harmonic trap. Since one of our main goals is to
find conditions needed for the manipulation of vortices using the repulsive
impurity, a _minimum_ requirement would be that the impurity’s pinning
strength is sufficient to overcome the precession inside the trap and thus pin
the vortex very close to the location of the impurity (i.e.,
$(x_{0},y_{0})\approx(x_{a},y_{a})$). Therefore, we seek to find the minimum
conditions that an off-center vortex, at a particular radius measured from the
center of the harmonic trap, could be pinned by a localized impurity at that
same location. For certain combinations of beam parameters (strong beam
intensity, or large beam widths), the vortex will remain localized near this
point. For other parameters (weak intensity, small beam widths), the beam
cannot overcome the vortex precession induced by the harmonic trap and the
vortex would not remain localized near the beam position. This would give us a
lower bound for the possible beam intensities and widths for which a vortex
might be dragged to the corresponding position within the BEC. The existence
of such pinned states was identified by searching in the impurity parameter
space (strength $V_{\rm imp}^{(0)}$ and width $\varepsilon$) for several off-
center radii, i.e., distances measured from the center of the trap. The
results are shown in the phase diagrams of Fig. 2 where each curve
corresponding to a different radius (decreasing from top to bottom) depicts
the boundary in $(V_{\rm imp}^{(0)},\varepsilon)$ parameter space for which
pinning is possible. In other words, for the points in parameter space below a
given curve, one gets primarily vortex precession dynamics induced by the
harmonic trap, whereas above these curves (i.e., for strong or wide enough
impurities), the vortex is trapped by the impurity and stays very close to it.
### III.2 Steady-state bifurcation structure
Figure 3: (Color online) The top panel is a one-dimensional slice (for
$y_{a}=0$) of the solution surfaces (represented in the inset panel) as a
function of the position of the impurity $(x_{a},y_{a})$ for fixed $V_{\rm
Imp}^{(0)}$. The vertical axis corresponds to the $L^{2}$-norm squared of the
solution (i.e., the normalized number of atoms). The dashed lines (yellow
(lighter) surface) correspond to unstable solutions, while the solid lines
(red (darker) surfaces) correspond to stable solutions. The critical value of
the radius, $x_{a,\rm{cr}}=3.72$ as well as the characteristic value $x_{a}=2$
and $x_{a}=0$ are represented by vertical lines. The bottom left panel shows
the squared bifurcating eigenvalues, $\lambda^{2}$, along these branches
($\lambda^{2}>0$ corresponds to an instability), while the bottom right shows
the saddle-node bifurcation as a function of $V_{\rm Imp}^{(0)}$ for $x_{a}=2$
fixed (there are lines at $V_{\rm Imp}^{\rm(cr)}=0.57$, $V_{\rm
Imp}^{(0)}=0.8$, and $V_{\rm Imp}^{(0)}=1$). The solutions and spectra for
each of the three branches, represented by circles (and letters) for each of
the characteristic values of $x_{a}$ and $V_{\rm Imp}^{(0)}$ are presented in
Fig. 4. For these branches $(\mu,s,\varepsilon)=(1,1,0.5)$.
In this section we elaborate our investigation of the pinning statics and the
associated dynamical stability picture. In particular, we thoroughly analyze
the bifurcation structure of the steady states with single-charge vorticity in
the setting investigated above (i.e. solutions to Eq. (7)) including their
stability. The latter will be examined by the eigenvalues of the linearization
around the steady state. Upon obtaining a steady state solution $\Psi$ of Eq.
(7) and considering a separable complex valued perturbation
$\tilde{u}=a(x,y)e^{\lambda t}+b^{*}(x,y)e^{\lambda^{*}t}$ of the steady
state, we arrive at the following eigenvalue problem for the growth rate,
$\lambda$, of the perturbation:
$\begin{pmatrix}\mathcal{L}_{1}&\mathcal{L}_{2}\\ -\mathcal{L}_{2}^{*}&-\mathcal{L}_{1}^{*}\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix}=i\lambda\begin{pmatrix}a\\ b\end{pmatrix},$
where
$\displaystyle\mathcal{L}_{1}$ $\displaystyle=$
$\displaystyle-\mu-\frac{1}{2}(\partial_{x}^{2}+\partial_{y}^{2})+V+2|\Psi|^{2}$
$\displaystyle\mathcal{L}_{2}$ $\displaystyle=$ $\displaystyle\Psi^{2}.$
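A minimal numerical sketch of this eigenvalue problem is given below. It is only illustrative: the operators $\mathcal{L}_{1,2}$ are discretized with a five-point Laplacian on a coarse grid, and the vortex ansatz of Eq. (8) is used as a stand-in for a fully converged steady state, so the resulting spectrum is qualitative at best.

```python
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import eigs

# coarse grid and a stand-in "steady state" (the ansatz of Eq. (8), mu = 1, Omega = 0.065)
n, L = 64, 50.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing='ij')
mu, Omega = 1.0, 0.065
V = 0.5 * Omega ** 2 * (X ** 2 + Y ** 2)
Psi = np.sqrt(np.maximum(mu - V, 0.0)) * np.tanh(X ** 2 + Y ** 2) \
      * np.exp(1j * np.arctan2(Y, X))

# five-point Laplacian with Dirichlet boundaries
one = np.ones(n)
D2 = sps.diags([one[:-1], -2.0 * one, one[:-1]], [-1, 0, 1]) / dx ** 2
lap = sps.kron(D2, sps.identity(n)) + sps.kron(sps.identity(n), D2)

L1 = -mu * sps.identity(n * n) - 0.5 * lap \
     + sps.diags(V.ravel() + 2.0 * np.abs(Psi.ravel()) ** 2)
L2 = sps.diags(Psi.ravel() ** 2)
M = sps.bmat([[L1, L2], [-L2.conj(), -L1.conj()]]).tocsc()

# a few eigenvalues of M near the origin; the growth rates are lambda = -i * eig(M)
vals = eigs(M, k=6, sigma=0.05, return_eigenvectors=False)
lam = -1j * vals
print("Re(lambda):", np.round(lam.real, 4))   # positive real parts signal instability
```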
Figure 4: (Color online) The solutions (left, zoomed in to the region where
the vortices live) and corresponding linearization spectra in a neighborhood
of the origin (right), for the various parameter values indicated by the
circles in Fig. 3. The top (bottom) row corresponds to $V_{\rm Imp}^{(0)}=1$
($V_{\rm Imp}^{(0)}=0.8$).
As evidenced by the bifurcation diagrams presented in Fig. 3, there exist
three solutions (i.e., steady-state vortex positions) for any impurity
displacement radius in the interval $(0,x_{a,\rm{cr}})$, letting $y_{a}=0$
without any loss of generality. At the low end of the interval (i.e., for an
impurity at the center of the trap, $(x_{a},y_{a})=(0,0)$) there is a
transcritical bifurcation where the left (i.e., ones for negative $x_{a}$) and
right (i.e., for positive $x_{a}$) solutions collide (see below for further
explanation) and exchange stability. At the other end of the interval (i.e.,
for an impurity at $(x_{a},y_{a})=(x_{a,\rm{cr}},0)$) there exists a saddle-
node bifurcation where two new steady-state vortex solutions can be thought of
as emerging as $x_{a}$ decreases to values $x_{a}<x_{a,\rm{cr}}$, or
conversely can be thought of as disappearing as $x_{a}$ increases to values
$x_{a}>x_{a,\rm{cr}}$. Among them, one stable vortex position is found very
close to the impurity (see cases “A” and “D” in Figs. 3 and Fig. 4), and
another unstable vortex position further away from the trap center (see cases
“B” and “E” in Figs. 3 and Fig. 4). Considering only $x_{a}$, with $y_{a}=0$,
two solution branches are stable: one with the vortex sitting very close to
the impurity (see cases “A” and “D” in Figs. 3 and Fig. 4), and one with the
vortex close to the center of the harmonic trap (see cases “C” and “F” in
Figs. 3 and Fig. 4). The other solution is unstable (see cases “B” and “E” in
Figs. 3 and Fig. 4), with the vortex sitting in the small effective potential
minimum on the side of the impurity opposite the center of the harmonic trap.
This branch of solutions collides with the one that has a vortex pinned at the
impurity and the two disappear in a saddle-node bifurcation as the attraction
of the impurity becomes too weak to “hold” the vortex (this bifurcation
corresponds to the curves of critical parameter values in Fig. 2). At
$x_{a}=0$, the potential becomes radially symmetric and the solution with the
vortex on the outside of the impurity becomes identical (up to rotation) to
the one close to the origin. Indeed, at this point there is a single one-
parameter family of invariant solutions with the vortex equidistant from the
origin (i.e., for any angle in polar coordinates), in addition to the single
solution with the vortex centered at $(0,0)$. The solutions in this invariant
family, not being radially symmetric and being in a radially symmetric trap,
necessarily have an additional pair of zero eigenvalues in the linearization
spectrum to account for the additional invariance, i.e. they have 4 instead of
2 zero eigenvalues (note that the solution with the vortex in the center for
$x_{a}=0$ only has a single pair of zero eigenvalues due to its radial
symmetry). For $x_{a}<0$ the branches exchange roles (transcritical
bifurcation) as the previously imaginary pair of eigenvalues for the stable
$x_{a}>0$ branch (see cases “C” and “F” in Figs. 3 and Fig. 4) emerges on the
real axis and the pair of previously real eigenvalues from the unstable
$x_{a}>0$ branch (see cases “B” and “E” in Figs. 3 and Fig. 4) emerges on the
imaginary axis, i.e. the branches exchange their stability properties,
becoming the reflected versions of one another; see the bottom left panel of
Fig. 3 and the right column of Fig. 4. In summary, for an impurity that is
strong enough and close enough to the trap center (specifically, $0\leq
x_{a}<x_{a,\rm{cr}}$, where $x_{a,\rm{cr}}$ depends on the strength of the
impurity) it is possible to stably pin the vortex very close to the impurity.
However, if the impurity is too far away from the trap’s center (specifically,
$x_{a}>x_{a,\rm{cr}}$) the vortex can no longer be pinned by the impurity. The
top image of Fig. 3 depicts the bifurcations via the $L^{2}$-norm squared
(i.e., the normalized number of atoms) of the solution as a function of the
radius, $x_{a}$ (for $y_{a}=0$), and as a function of arbitrary impurity
location $(x_{a},y_{a})$ (inset).
In fact, the above picture holds for any fixed, sufficiently large $V_{\rm
Imp}^{(0)}$. Conversely, the same bifurcation structure can be represented by
a continuation in the amplitude of the impurity, $V_{\rm Imp}^{(0)}$ (see
bottom right panel of Fig. 3). In particular, for a fixed radius of $x_{a}=2$,
a continuation was performed and a saddle-node bifurcation appears for $V_{\rm
Imp}^{\rm(0,cr)}\approx 0.57$ where an unstable and a stable branch emerge
(corresponding exactly to those presented in the continuation in $x_{a}$). One
can then infer that the critical $V_{\rm Imp}^{(0)}$ decreases as $x_{a}$
decreases.
Figure 5: (Color online) Parameter regions for successful manipulation of a
single vortex inside a harmonic trap. The area above each curve corresponds to
the successful dragging region. The bottom panel depicts a zoomed region of
the top panel where the asterisk and cross correspond, respectively, to the
manipulation success and failure depicted in Fig. 6. These panels indicate
that higher intensity beams, and broader beams, can successfully drag vortices
over shorter timescales than weaker, narrower beams.
Figure 6: Successful (top row) and failed (bottom row) vortex dragging cases
by a moving laser impurity corresponding to the parameters depicted,
respectively, by an asterisk and a cross in the bottom panel of Fig. 5. In
both cases, the laser impurity, marked by a cross, with $(V^{(0)}_{\rm
Imp},\varepsilon)=(5,1)$ is moved adiabatically from $(0,0)$ to $(5.43,-5.43)$
and the snapshots of the density are shown every $0.5t^{*}$. The top row
corresponds to a successful manipulation for $\tau=18.5$ while the bottom row
depicts a failed dragging for a slightly lower adiabaticity of $\tau=18$.
Figure 7: (Color online) Vortex dragging by an optical lattice potential as in
Eq. (12). The central high-intensity maximum of the optical lattice (depicted
by a cross in the panels) is moved adiabatically from the initial position
$(x_{a}(0),y_{a}(0))=(0,0)$ to the final position $(5.43,5.43)$. The top row
depicts the BEC density (the colorbar shows the density in adimensional units)
while the bottom row depicts the phase of the condensate where the vortex
position (“plus” symbol in the right column) can be clearly inferred from the
$2\pi$ phase jump around its core. Observe how the vortex loses its guiding
well and jumps to a neighboring well in the right column. The left, middle and
right columns correspond, respectively, to times $t=0$, $t=0.5t^{*}$ and
$t=2.5t^{*}$. The remaining parameters are as follows:
$(V_{0},k,\tau,\theta_{x},\theta_{y})=(1.4,0.3215,20,0,0)$.
## IV The dynamical picture: Dragging and Capturing
### IV.1 Vortex Dragging
We would like now to take a pinned vortex and adiabatically drag it with the
impurity in a manner akin to what has been proposed for bright BS-Imp ; BS-
OL1 ; BS-OSL2 and dark solitons DS-OL1 ; DS-OL2 ; DS-OL3 in the quasi-1D
configuration. Manipulation of the vortex begins with the focused laser beam
at the center of the vortex. The laser is then adiabatically moved to a
desired location while continually tracking the position of the vortex.
Adiabaticity for the motion of the impurity is controlled by the adiabaticity
parameter $\tau$ controlling the acceleration of the center
$(x_{a}(t),y_{a}(t))$ of the impurity as:
$\displaystyle x_{a}(t)$ $\displaystyle=$ $\displaystyle
x_{i}-\frac{1}{2}(x_{i}-x_{f})\left(1+\tanh\left[\frac{t-t^{*}}{\tau}\right]\right),$
$\displaystyle y_{a}(t)$ $\displaystyle=$ $\displaystyle
y_{i}-\frac{1}{2}(y_{i}-y_{f})\left(1+\tanh\left[\frac{t-t^{*}}{\tau}\right]\right),$
(10)
where the initial and final positions of the impurity are, respectively,
$(x_{i},y_{i})$ and $(x_{f},y_{f})$. We will assume $y_{a}(t)=0$ (i.e.,
$y_{i}=y_{f}=0$) for the discussion below. The instant of maximum acceleration
is
$t^{*}=\tanh^{-1}\left(\sqrt{1-\delta\tau}\right)\tau,$ (11)
where $\delta$ is a small parameter, $\delta=0.001$, such that the initial
velocity of the impurity is negligible and that $x_{a}(0)\approx x_{i}$ and
$x_{a}(2t^{*})\approx x_{f}$ (and the same for $y$). This condition on $t^{*}$
allows for the reduction of parameters and allows us to ensure that we begin
with a localized impurity very close to the center of the trap [i.e.,
$(x_{a}(0),y_{a}(0))\approx(0,0)$] and that we will drag it adiabatically to
$(x_{f},y_{f})$ during the time interval $[0,2t^{*}]$. The next objective is
to determine the relation between adiabaticity and the various parameters such
as strength ($V^{(0)}_{\rm Imp}$) or the width ($\varepsilon$) of the impurity
in order to successfully drag a vortex outward to a specific distance from the
center of the harmonic trap. In our study we set this distance to be half of
the radius of the cloud (half of the Thomas-Fermi radius). We use the value
$t^{*}$ to also define when to stop dynamically evolving our system. In
particular, we opt to continue monitoring the system’s evolution until
$t_{f}=3t^{*}$. This choice ensures that a vortex that might have been
lingering close to the impurity at earlier times would have either been
“swallowed up” by the impurity and remain pinned for later times, or will have
drifted further away due to the precession induced by the trap.
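The dragging protocol of Eqs. (10)-(11) is simple to implement; the sketch below (illustrative, with placeholder values of $\tau$ and of the target position taken from the caption of Fig. 6) builds $x_{a}(t)$, computes $t^{*}$, and checks that the impurity starts near $x_{i}$ and ends near $x_{f}$ over the window $[0,2t^{*}]$. The resulting trajectory would then be fed to a time integrator of Eq. (4) with the potential of Eqs. (5)-(6):

```python
import numpy as np

def impurity_path(t, xi, xf, tau, delta=0.001):
    """Adiabatic trajectory, Eqs. (10)-(11), for one coordinate of the impurity."""
    t_star = np.arctanh(np.sqrt(1.0 - delta * tau)) * tau      # Eq. (11)
    return xi - 0.5 * (xi - xf) * (1.0 + np.tanh((t - t_star) / tau)), t_star

tau, xi, xf = 18.5, 0.0, 5.43          # placeholder values (cf. the caption of Fig. 6)
_, t_star = impurity_path(0.0, xi, xf, tau)
t = np.linspace(0.0, 3.0 * t_star, 1000)          # monitor the dynamics until t_f = 3 t*
xa, _ = impurity_path(t, xi, xf, tau)
x_end, _ = impurity_path(2.0 * t_star, xi, xf, tau)
print(f"t* = {t_star:.2f},  x_a(0) = {xa[0]:.4f},  x_a(2t*) = {x_end:.4f}")
```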
Applying this technique, along with a bisection method (successively dividing
the parameter step in half and changing the sign of the parameter stepping
once the threshold pinning value is reached) within the span of relevant
parameters yields the phase diagram depicted in Fig. 5. The various curves in
the figure represent the parameter boundaries for successful dragging of the
vortices for different impurity widths (increasing widths from top to bottom).
All the curves for different widths are qualitatively similar, with higher
values of the adiabaticity parameter required as the width is decreased. This
trend continues as $\varepsilon$ approaches the existence threshold
established in Fig. 2. In Fig. 6 we depict snapshots for the two cases
depicted by an asterisk (successful dragging) and a cross (failed dragging) in
the lower panel of Fig. 5.
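The threshold search itself can be written generically. In the sketch below (illustrative only; `drag_succeeds` is a hypothetical stand-in for a full dragging simulation that reports success or failure), the parameter step is halved and its sign reversed each time the outcome changes, bracketing the threshold value:

```python
def find_threshold(drag_succeeds, param0, step0, n_iter=12):
    """Step-halving bisection for the boundary between failed and successful dragging."""
    param, step = param0, step0
    success = drag_succeeds(param)
    for _ in range(n_iter):
        param += step if not success else -step    # step toward the threshold
        new_success = drag_succeeds(param)
        if new_success != success:                 # outcome flipped: halve the step
            step *= 0.5
        success = new_success
    return param

# stub standing in for a full GPE dragging run (threshold placed at tau = 18.2, purely illustrative)
threshold = find_threshold(lambda tau: tau >= 18.2, param0=15.0, step0=4.0)
print(f"estimated threshold adiabaticity: {threshold:.3f}")
```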
All of the numerical simulations discussed above deal with dragging the vortex
by means of the localized impurity. As with previous works of vortex
manipulations we also attempted to produce similar results via an optical
lattice (OL) potential generated by counter-propagating laser beams BECBOOK .
In one dimension, the case of bright solitons manipulated by OLs has been
studied in Refs. BS-OL1 ; BS-OSL2 while the dark soliton case has been
treated in Refs. DS-OL1 ; DS-OL2 ; DS-OL3 . For a 1D OL, simply described by
$V^{1D}_{\rm OL}(x)=V_{0}\cos^{2}(kx+\theta_{x})$, where $k$ and
$\theta_{x}$ are the wavenumber and phase of the OL, the potential minima (or
maxima) are isolated from each other providing good effective potential minima
for pinning and dragging. On the other hand, when expanded to 2D, the OL
reads:
$V^{2D}_{\rm
OL}(x,y)=V_{0}\left[\cos^{2}\!\big(k[x-x_{a}(t)]+\theta_{x}\big)+\cos^{2}\!\big(k[y-y_{a}(t)]+\theta_{y}\big)\right],$
(12)
where $k$ and $\theta_{x,y}$ are, respectively, the wavenumber and phase of
the OL in the $x$ and $y$ direction. Here we observe that each 2D minimum (or
maximum) is no longer isolated, and that between two minima (or maxima) there
are areas for which the vortex can escape (near the saddle points of the
potential). This is exactly what we observed when attempting to drag a vortex
using the 2D OL (12) without sufficient adiabaticity. The vortex would meander
around the various facets of the lattice outside of our control. To overcome
this, one needs to displace the potential with a high degree of adiabaticity.
In doing so, we were successful in dragging the vortices under some restrictions
(relatively small displacements from the trap center). An example of a
partially successful vortex dragging by an OL with potential (12) is presented
in Fig. 7. As can be observed from the figure, the vortex (whose center is
depicted by a “plus”) is dragged by the OL (whose center is depicted by a
cross) for some time. However, before the OL reaches its final destination,
the pinning is lost and the vortex jumps to the neighboring OL well to the
right. This clearly shows that vortex dragging with an OL is a delicate issue
due to the saddle points of the OL that allow the vortex to escape.
Nonetheless, for sufficient adiabaticity, with a strong enough OL and for
small displacements from the trap center, it is possible to successfully drag
the vortex. A more detailed study of the parameters that allow for a
successful dragging with the OL (i.e., relative strength and frequency of the
lattice and adiabaticity) falls outside of the scope of the present manuscript
and will be addressed in a future work.
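The escape mechanism can be seen directly from the potential (12). The following minimal Python sketch (stationary lattice, $x_a=y_a=0$, $\theta_x=\theta_y=0$, with illustrative values of $V_0$ and $k$) shows that the barrier at a saddle point is only half of that at a lattice maximum, which is the channel through which the vortex escapes.

```python
import numpy as np

# Stationary 2D optical lattice of Eq. (12); V0 and k are illustrative values
# chosen so that the lattice period is 1 in these units.
V0, k = 1.0, np.pi

def V_ol(x, y):
    return V0 * (np.cos(k * x) ** 2 + np.cos(k * y) ** 2)

# Adjacent potential minima sit at (0.5, 0.5) and (1.5, 0.5) in these units.
print("minimum :", V_ol(0.5, 0.5))   # ~0
print("saddle  :", V_ol(1.0, 0.5))   # V0  -- the low barrier between wells
print("maximum :", V_ol(1.0, 1.0))   # 2*V0
```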
Figure 8: (Color online) Capturing a precessing vortex by a stationary
impurity. The different paths correspond to isolated vortices that are
released by adiabatically turning off a pinning impurity at the following off-
center locations: $(5,0)$ (thin black line), $(6.5,0)$ (thick red line) and
$(8,0)$ (blue dashed line). The capturing impurity is located at $(-4,0)$. The
first and third cases fail to produce capturing while the second case manages
to capture the vortex. One interesting feature that we observed in the case of
successful capture is that, before it gets captured, the vortex is drawn
into the impurity potential, but then almost gets knocked back out by the
phonon radiation waves created by the capture, which bounce around within the
condensate.
### IV.2 Vortex Capturing
A natural extension of the above results is to investigate whether it is
possible to capture a vortex that is already precessing by an appropriately
located and crafted impurity. This idea of capturing, paired with the dragging
ability, suggests that a vortex created off-center, which is typically the
case in an experimental setting, can be captured, pinned and dragged to a
desired location either at the center of the trap or at some other distance
off-center. We now give a few examples demonstrating that it is indeed
possible for a localized impurity to capture a moving vortex. The simulation
begins with a steady-state solution of a vortex pinned by an impurity at a
prescribed radius and a second impurity on the opposite side of the trap at a
different radius. Initial numerical experiments have been done to determine
the importance of the difference in these distances from the trap center.
As is shown in Fig. 8, the capturing impurity must be located sufficiently
closer to the trap center than the pinning impurity in order
for the vortex to be pulled from its precession and be captured by the
impurity. Intuitively one might come to the conclusion that if the vortex and
impurity were located the same distance away from the center of the trap, then
the vortex should be captured. But due to the interaction between the vortex
and the impurity that was discussed earlier, as the vortex approaches the
impurity, it begins to interact with it by precessing clockwise around the
impurity. Thus the orientation of the vortex and impurity with respect to the
trap center greatly determines the dynamics. This combination of the
interactions of the vortex with the trap and the vortex with the impurity then
dictates that for a vortex to be captured by the impurity while precessing
around the harmonic trap and rotating around the impurity, the impurity must
be positioned at a distance from the trap center that is smaller than the
initial distance between the vortex and the trap center.
## V Conclusions
In summary, we studied the effects on isolated vortices of a localized
impurity generated by a narrowly focused laser beam inside a parabolic
potential in the context of Bose-Einstein condensates (BECs). We not only
examined the dynamics (dragging and capture) of the vortex solutions in this
setting, but also analyzed in detail the stationary (pinned vortex) states,
their linear stability and the underlying bifurcation structure of the
problem.
As is already well known, the harmonic trap is responsible for the precession
of the vortex around the condensed cloud. We have further demonstrated that a
narrowly focused blue-detuned laser beam induces a local attractive potential
that is able to pin the vortex at various positions within the BEC, and we
investigated the dependence of pinning as a function of the laser beam
parameters (width and power) for different locations in the condensed cloud.
For a fixed beam width, we then explored the underlying bifurcation structure
of the stationary solutions in the parameter space of pinning position and
beam power. We found that for sufficiently high beam intensity it is possible
to overcome the vortex precession and to stably pin the vortex at a desired
position inside the condensed cloud. We also studied the conditions for a
vortex to be dragged by an adiabatically moving beam and concluded that for
sufficiently high intensity beams and for sufficient adiabaticity it is
possible to drag the vortex to almost any desired position within the BEC
cloud. The possibility of vortex dragging using periodic, two-dimensional,
optical lattices was also briefly investigated. Due to the lattice’s saddle
points between consecutive wells, the vortex is prone to escape to neighboring
wells and, therefore, dragging with optical lattices is arguably less robust
than its counterpart with focused laser beams. Finally, we presented the
possibility of capturing a precessing vortex by a stationary laser beam. Due
to the combined action of the precession about the harmonic trap and the
precession about the localized impurity, the stationary laser must be
carefully positioned to account for both precessions so that the vortex can be
successfully captured by the laser beam.
This work paves the way for a considerable range of future studies on the
topic of vortex-impurity interactions. Among the many interesting
possibilities that can be considered, we mention the case of more complex
initial conditions, such as higher topological charge ($\pm s$) vortices, and
that of complex dynamics induced by the effects of multiple laser beams. For
example, in the latter setting, we might envision a situation in which a
single vortex is localized to a region within a BEC by appropriate dynamical
manipulation of multiple laser beams _without_ relying on vortex pinning. Such
additional studies may provide a more complete understanding of the physics of
manipulating vortex arrays by optical lattices. Additional investigations will
also need to consider the role of finite temperature and damping, as well as
the consequences of moving impurities located near the Thomas-Fermi radius
where density is low and critical velocities for vortex shedding are much
lower than near the BEC center.
Another natural extension of our work is to study the manipulation of vortex
lines in three-dimensional condensates. It would be interesting to test
whether the beam could stabilize a whole vortex line (suppression of the so-
called Kelvin modes Bretin2003a ) and, moreover, change the orientation of a
vortex Haljan2001a . Along this vein, a more challenging problem would be to
study the pinning and manipulation of vortex rings by laser sheets; see e.g.
hau . These settings would also present the possibility of identifying a
richer and higher-dimensional bifurcation structure.
Acknowledgements. PGK gratefully acknowledges support from the NSF-CAREER
program (NSF-DMS-0349023), from NSF-DMS-0806762 and from the Alexander von
Humboldt Foundation. RCG gratefully acknowledges support from NSF-DMS-0806762.
## References
* (1) P.W. Anderson, Phys. Rev. Lett. 9, 309 (1962); A.M. Campbell and J.E. Evetts, Advan. Phys. 21, 199 (1972); O. Daldini, P. Martinoli, and J.L. Olsen, Phys. Rev. Lett. 32, 218 (1974); L. Civale _et al._ , Phys. Rev. Lett. 67, 648 (1991).
* (2) M. Baert, V.V. Metlushko, R. Jonckheere, V.V. Moshchalkov, and Y. Bruynseraede, Phys. Rev. Lett. 74, 3269 (1995); C. Reichhardt, C.J. Olson, R.T. Scalettar, and G.T. Zimányi, Phys. Rev. B 64, 144509 (2001); A.N. Grigorenko _et al._ , Phys. Rev. Lett. 90, 237001 (2003).
* (3) J.W. Reijnders and R.A. Duine, Phys. Rev. Lett. 93, 060401 (2004); H. Pu, L.O. Baksmaty, S. Yi, and N.P. Bigelow, Phys. Rev. Lett. 94, 190401 (2005); J.W. Reijnders and R.A. Duine, Phys. Rev. A 71, 063607 (2005).
* (4) S. Tung, V. Schweikhard, and E.A. Cornell, Phys. Rev. Lett. 97, 240402 (2006).
* (5) Rajiv Bhat, L.D. Carr, and M.J. Holland, Phys. Rev. Lett. 96, 060405 (2006); Daniel S. Goldbaum and Erich J. Mueller, Phys. Rev. A 77, 033629 (2008).
* (6) E.K. Dahl, E. Babaev, and A. Sudbø, Phys. Rev. Lett. 101, 255301 (2008).
* (7) R. Geurts, M.V. Milošević, and F.M. Peeters, Phys. Rev. A 78, 053610 (2008).
* (8) P.G. Kevrekidis, D.J. Frantzeskakis, and R. Carretero-González (eds). Emergent Nonlinear Phenomena in Bose-Einstein Condensates: Theory and Experiment. Springer Series on Atomic, Optical, and Plasma Physics, Vol. 45, 2008.
* (9) R. Carretero-González, D.J. Frantzeskakis and P.G. Kevrekidis. Nonlinearity, 21 R139 (2008).
* (10) R. Carretero-González, P.G. Kevrekidis, D.J. Frantzeskakis, and B.A. Malomed. Proc. SPIE Int. Soc. Opt. Eng. 5930 (2005) 59300L.
* (11) G. Herring, P.G. Kevrekidis, R. Carretero-González, B.A. Malomed, D.J. Frantzeskakis, and A.R. Bishop. Phys. Lett. A, 345 (2005) 144.
* (12) P.G. Kevrekidis, D.J. Frantzeskakis, R. Carretero-González, B.A. Malomed, G. Herring, and A.R. Bishop. Phys. Rev. A, 71 (2005) 023614.
* (13) M.A. Porter, P.G. Kevrekidis, R. Carretero-González, and D.J. Frantzeskakis. Phys. Lett. A, 352 (2006) 210.
* (14) V.V. Konotop, V.M. Pérez-García, Y.-F. Tang and L. Vázquez, Phys. Lett. A 236, 314 (1997).
* (15) V.V. Konotop and V.E. Vekslerchik, Phys. Rev. E 49, 2397 (1994).
* (16) D.J. Frantzeskakis, G. Theocharis, F.K. Diakonos, P. Schmelcher, and Yu.S. Kivshar, Phys. Rev. A 66, 053608 (2002).
* (17) Yu.S. Kivshar and X. Yang, Phys. Rev. E 49, 1657 (1994).
* (18) G. Theocharis, D.J. Frantzeskakis, R. Carretero-González, P.G. Kevrekidis and B.A. Malomed. Math. Comput. Simulat. 69 (2005) 537.
* (19) P.G. Kevrekidis, R. Carretero-González, G. Theocharis, D.J. Frantzeskakis and B.A. Malomed. Phys. Rev. A, 68 035602 (2003).
* (20) G. Theocharis, D.J. Frantzeskakis, R. Carretero-González, P.G. Kevrekidis, and B.A. Malomed. Phys. Rev. E, 71 (2005) 017602.
* (21) E.P. Gross. Nuovo Cim., 20 (1961) 454.
* (22) L.P. Pitaevskii. Sov. Phys. JETP, 13 (1961) 451.
* (23) A.L. Fetter, J. Low Temp. Phys. 113, 189 (1998).
* (24) A.A. Svidzinsky and A.L. Fetter, Phys. Rev. Lett. 84, 5919 (2000).
* (25) E. Lundh and P. Ao, Phys. Rev. A 61, 063612 (2000).
* (26) J. Tempere and J.T. Devreese, Solid State Comm. 113, 471 (2000).
* (27) D.S. Rokhsar, Phys. Rev. Lett. 79, 2164 (1997).
* (28) S.A. McGee and M.J. Holland, Phys. Rev. A 63, 043608 (2001).
* (29) B.P. Anderson, P.C. Haljan, C.E. Wieman and E.A. Cornell, Phys. Rev. Lett. 85, 2857 (2000).
* (30) P.O. Fedichev and G.V. Shlyapnikov, Phys. Rev. A 60, R1779 (1999).
* (31) Yu.S. Kivshar, J. Christou, V. Tikhonenko, B. Luther-Davies, and L.M. Pismen, Opt. Comm. 152, 198 (1998).
* (32) N.G. Parker, N.P. Proukakis, C.F. Barenghi, and C.S. Adams, Phys. Rev. Lett. 92, 160403 (2004).
* (33) B. Jackson, J.F. McCann, and C.S. Adams. Phys. Rev. Lett. 80, 3903 (1998).
* (34) V. Bretin, P. Rosenbusch, F. Chevy, G.V. Shlyapnikov, and J. Dalibard, Phys. Rev. Lett. 90, 100403 (2003).
* (35) P.C. Haljan, B.P. Anderson, I. Coddington, and E.A. Cornell, Phys. Rev. Lett. 86, 2922 (2001).
* (36) N.S. Ginsberg, J. Brand and L.V. Hau, Phys. Rev. Lett. 94, 040403 (2005).
The idiots guide to Quantum Error Correction.
Simon J. Devitt^[electronic address: devitt@nii.ac.jp]
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8340, Japan.
William J. Munro
Hewlett-Packard Laboratories, Filton Road, Stoke Gifford, Bristol BS34 8QZ, UK
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
Kae Nemoto
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8340, Japan.
Quantum Error Correction and fault-tolerant quantum computation represent arguably the most vital
theoretical aspect of quantum information processing.
It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a
catastrophic obstacle to the development of large scale quantum computers. The introduction of Quantum Error Correction in
1995 showed that active techniques could be employed to mitigate this fatal problem. However, Quantum Error
Correction and fault-tolerant computation now constitute a much more mature field, and many new codes, techniques and
methodologies have been developed to implement error correction for large scale quantum algorithms. This development
has been so pronounced that many in the field of quantum information, specifically those new to quantum information or those focused on the many other important
issues in quantum computation, have not been able to keep up with the general formalisms and methodologies employed in this area.
In response we have attempted to summarize the basic aspects of Quantum Error Correction and fault-tolerance, not as a
detailed guide, but rather as a basic introduction. Rather than introducing these concepts from a rigorous mathematical and
computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, progressing
from basic examples of the 3-qubit code through to the stabilizer formalism which is now extremely important when understanding large
scale concatenated code structures, quantum circuit synthesis and the more recent developments in subsystem and
topological codes.
03.67.Lx, 03.67.Pp
§ INTRODUCTION
The micro-computer revolution of the late 20th century has arguably had a greater impact on
the world than any other technological revolution in history. The advent of transistors, integrated
circuits, and the modern microprocessor has spawned literally hundreds of devices from
pocket calculators to the iPod, all now integrated through an extensive worldwide communications
system. However, as we enter the 21st century, the rate at which computational power is increasing
is driving us very quickly to the realm of quantum physics. The component size of
individual transistors on modern microprocessors is becoming so small that quantum effects will soon
begin to dominate over classical electronic properties. Unfortunately the current designs for
micro-electronics mean that quantum mechanical behavior will tend to result in unpredictable and
unwanted behavior. Therefore, we have two choices: to keep trying to suppress quantum effects
in classically fabricated electronics or to move to
the field of quantum information processing (QIP) where we instead exploit them. This leads to
a paradigm shift in the way we view and process information and has led to considerable interest from
physicists, engineers, computer scientists and mathematicians. The counter-intuitive and strange
rules of quantum physics offer enormous possibilities for information processing and the development
of a large scale quantum computer is the holy grail of many groups worldwide.
While the advent of Shor's algorithm [92]
certainly spawned great interest in quantum information processing
and demonstrated that the utilization of a quantum computer could lead to algorithms far more
efficient than those used in classical computing, there was a great deal of debate surrounding the
practicality of building a large scale, controllable, quantum system.
It was well known even before the introduction of
quantum information that coherent quantum states were extremely fragile and many believed that
to maintain large, multi-qubit, coherent quantum states for a long enough time to
complete any quantum algorithm was unrealistic [103]. Additionally,
classical error correction techniques are intrinsically based on a digital framework. Hence,
can the vast amount of knowledge gained from classical coding theory be adapted to the
quantum regime, where the readout of qubits is digital but the actual manipulations are analogue?
Starting in 1995, several papers appeared, in rapid succession, proposing codes which were
appropriate to perform error correction on quantum data [90, 96, 24, 68].
This was the last theoretical aspect needed
to convince the general community that quantum computation was indeed a possibility.
Since this initial introduction, the progress in this field has been extensive.
Initial work on error correction focused heavily on developing
quantum codes [97, 20, 49, 81], introducing a more rigorous theoretical framework
for the structure and properties of Quantum Error Correction
(QEC) [60, 23, 54, 64, 63] and the introduction of
concepts such as fault-tolerant quantum computation [91, 33, 51] which
leads directly to the threshold theorem for concatenated QEC [65, 1].
In more recent years QEC protocols have been developed for various systems, such as
continuous variables [69, 19, 108, 8], ion-traps and other systems containing motional
degrees of freedom [71, 95], adiabatic computation [58] and
globally controlled quantum computers [11]. Additionally, work still continues on not only developing
more complicated (and in some ways, more technologically useful) protocols such as subsystem codes [10] and
topological codes [59, 30, 85, 42] but also advanced
techniques to implement error correction in a fault-tolerant
manner [98, 101, 25].
Along with QEC, other methods of protecting quantum information were also developed. These other techniques
would technically be placed in a separate category of error avoidance rather than error correction. The most well
known error avoidance techniques are protocols such as decoherence free subspaces (DFS) [27, 29, 115, 114, 28, 70].
While this protocol has the mathematical structure of a self correcting quantum code, it is largely a technique to suppress
certain, well structured, noise models. As with QEC, this field of error avoidance is vast, now incorporating ideas from
optimal control to create specially designed control sequences to counteract the effect of errors induced from environmental coupling.
These new methods of dynamical decoupling can take simple structures such as Bang-Bang control [107, 109, 113],
to more complicated and generalized protocols to help decouple qubits from the environment [106, 40, 104, 105].
This review deals exclusively with the concepts of QEC and fault-tolerant
quantum computation. Many papers have reviewed error correction and fault-tolerance [50, 76, 52, 61, 100, 53]; however, to cater to
a large audience, we attempt to describe QEC and fault-tolerance in a much more basic manner, largely through examples. Instead of providing a
more rigorous review of error correction, we instead try to focus on more practical
issues involved when working with these ideas.
For those who have recently begun investigating quantum information processing or those who are focused on other important theoretical
and/or experimental aspects related to quantum computing, searching through this enormous collection of work is daunting especially
if a basic working knowledge of QEC is all that is required. We hope that this review of the basic
aspects of QEC and fault-tolerance
will allow those with little knowledge of the field to quickly become accustomed to the various techniques and tricks that are commonly used.
We begin the discussion in section <ref> where we share some preliminary thoughts on the required properties of any
quantum error correcting protocol. In section <ref> we review some basic noise models from the context of
how they influence quantum algorithms. Section <ref>
introduces quantum error correction through the traditional
example of the 3-qubit code, illustrating the circuits used for encoding and correction and why the principle of
redundant encoding suppresses the failure of encoded qubits. Section <ref> then introduces the
stabilizer formalism [50],
demonstrating how QEC circuits are synthesized once the structure of the code is known.
In section <ref> we then briefly return to the noise models and relate the
abstract analysis of QEC, where errors are assumed to be discrete and probabilistic, to some of the
physical mechanisms which can cause errors. Sections <ref> and <ref>
introduce the concept of fault-tolerant error correction, the
threshold theorem and how logical gate operations can be applied directly to quantum data.
We then move on to circuit synthesis in section <ref> presenting a basic
fault-tolerant circuit design for logical state
preparation using the $[[7,1,3]]$ Steane code as a representative example of how to synthesize fault-tolerant
circuits from the stabilizer structure of quantum codes.
Finally in section <ref> we review specific codes for qubit loss and
examine two of the more modern techniques for error correction. We briefly examine
quantum subsystem codes [10] and topological surface codes [30, 42] due to both their
theoretical elegance and their increasing relevance in quantum architecture designs [26].
§ PRELIMINARIES
Before discussing specifically the effect of errors and the basics of Quantum Error Correction (QEC)
we first briefly review the very basics of qubits and quantum gates. We assume a basic
working knowledge of quantum information [37, 76]
and this brief discussion is used simply to define
our notation for the remainder of this review.
The fundamental unit of quantum information is the qubit, which unlike classical bits can exist in
coherent superpositions of two states, denoted $\ket{0}$ and $\ket{1}$. These basis states
can be photonic polarization states, spin states, electronic states of an ion or charge states of
superconducting systems. An arbitrary state of an individual qubit, $\ket{\phi}$, can
be expressed as,
\begin{equation}
\ket{\phi} = \alpha\ket{0} + \beta\ket{1}
\end{equation}
where normalization requires $|\alpha|^2+|\beta|^2 = 1$. Quantum gate operations are
represented by unitary operations acting on the Hilbert space of a qubit array. Unlike
classical information processing, conservation of probability for quantum states requires that all
operations be reversible and hence unitary.
When describing a quantum gate on an individual qubit, any dynamical operation, $G$, is
a member of the unitary group $U(2)$, which consists of all $2\times 2$ matrices where
$G^{\dagger} = G^{-1}$. Up to a global (and unphysical) phase factor, any single qubit operation
can be expressed as a linear combination of the generators of $SU(2)$ as,
\begin{equation}
G = c_I \sigma_I + c_x \sigma_x + c_y\sigma_y + c_z\sigma_z
\end{equation}
where
\begin{equation}
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\end{equation}
are the Pauli matrices, $\sigma_I$ is the $2\times 2$ identity matrix and the coefficients
$(c_I,c_x,c_y,c_z) \in \mathbb{C}$ satisfy $|c_I|^2+|c_x|^2+|c_y|^2+|c_z|^2 = 1$.
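As a concrete illustration (not part of the formal development), the following short Python/numpy sketch extracts these coefficients for a sample single-qubit rotation using $c_a = \text{Tr}(\sigma_a G)/2$; the rotation angle is arbitrary.

```python
import numpy as np

# Pauli basis for single-qubit operators.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Example gate: a rotation by an angle theta about the x-axis of the Bloch sphere.
theta = 0.3
G = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

# Coefficients in G = c_I*I + c_x*X + c_y*Y + c_z*Z, via c_a = Tr(sigma_a G)/2.
coeffs = {name: np.trace(P @ G) / 2 for name, P in (("I", I), ("x", X), ("y", Y), ("z", Z))}
print(coeffs)
print("sum of |c|^2 =", sum(abs(c) ** 2 for c in coeffs.values()))  # equals 1 for any unitary G
```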
The concept of Quantum Error Correction (QEC) is fundamental to the large scale viability of quantum information
processing. Although the field is largely based on classical coding theory, there are several issues that need to
be considered when transferring classical error correction to the quantum regime.
First, coding based on data-copying, which
is extensively used in classical error correction cannot be used
due to the no-cloning theorem of quantum
mechanics [112]. This result implies that there exists no transformation resulting in the
following mapping,
\begin{equation}
U\left(\ket{\phi} \otimes \ket{\psi}\right) = \ket{\phi} \otimes \ket{\phi},
\end{equation}
i.e. it is impossible to perfectly copy an unknown quantum state. This means that quantum data cannot be
protected from errors by simply making multiple copies.
Secondly, direct measurement cannot be used to effectively protect against errors, since this will
act to destroy any quantum superposition that is being used for computation. Error correction protocols
must therefore
be employed which can detect and correct errors without determining any information regarding the
qubit state.
Finally, qubits can experience traditional bit errors,
$\ket{0} \leftrightarrow \ket{1}$,
but unlike classical bits, qubits are also susceptible to
phase errors, $\ket{1} \leftrightarrow -\ket{1}$.
Hence any error correcting procedure needs to be able to
simultaneously correct for both.
At its most basic level, QEC utilizes the idea of redundant encoding
where quantum data is protected by extending the size of the Hilbert space
for a single, logically encoded qubit and essentially spreading out the information over
multiple qubits.
This way, errors only perturb codeword states by small amounts which can then be
detected and corrected, without directly measuring the
quantum state of any qubit.
§ QUANTUM ERRORS: CAUSE AND EFFECT
Before we even begin discussing the details of quantum error correction, we first examine some of the
common sources of errors in quantum information processing and contextualize what they imply
for computation. We will consider several important sources of errors and how they influence a
trivial, single qubit, quantum algorithm.
This trivial algorithm will be a computation consisting of a single qubit, initialized in the $\ket{0}$ state,
undergoing $N$ identity operations, such that the final, error free state is,
\begin{equation}
\ket{\psi}_{\text{final}} = \prod^N \sigma_I\ket{0} = \ket{0},
\end{equation}
A measurement of the qubit in the $\ket{0}$, $\ket{1}$ basis
will consequently yield the result 0 with a probability of unity. We examine, independently, several
common sources of error in terms of the effect they have on this simple quantum algorithm. Hopefully,
this introductory section will show that while quantum errors are complicated physical effects,
in QIP the relevant measure is the theoretical success probability of a given quantum algorithm.
§.§ Coherent Quantum Errors: You don't know what you are doing!
The first possible source of error is coherent, systematic control errors. This type of
error is typically associated with bad system control and/or characterization where imprecise
manipulation of the qubit introduces inaccurate Hamiltonian dynamics. As this source of
error is produced by inaccurate control of the system dynamics it does not produce mixed
states from pure states (i.e. it is a coherent, unitary error and does not destroy the quantum coherence of
the qubit but instead causes you to apply an undesired gate operation).
In our trivial algorithm, we are able to model this in several different ways. To
keep things simple, we assume that incorrect characterization of the control dynamics leads to an
identity gate which is not $\sigma_I$, but instead introduces a small rotation around the
$X$-axis of the Bloch sphere, i.e.
\begin{equation}
\ket{\psi}_{\text{final}} = \prod^N e^{i\epsilon \sigma_x}\ket{0} = \cos(N\epsilon)\ket{0} + i\sin(N\epsilon)\ket{1}.
\end{equation}
We now measure the system in the $\ket{0}$ or $\ket{1}$ state.
In the ideal case, the computer should collapse to the state $\ket{0}$ with a probability of
one, $P(\ket{0})=1$. However we now find,
\begin{equation}
\begin{aligned}
&P(\ket{0}) = \cos^2(N\epsilon) \approx 1- (N\epsilon)^2, \\
&P(\ket{1}) = \sin^2(N\epsilon) \approx (N\epsilon)^2.
\end{aligned}
\end{equation}
Hence, the probability of error in this trivial quantum algorithm is given by $p_{\text{error}} \approx (N\epsilon)^2$, which will be
small given that $N\epsilon \ll 1$. The systematic error in this system is proportional to both the small systematic over-rotation
and the total number of applied identity operations.
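This accumulation can be checked directly with a few lines of numpy; the values of $\epsilon$ and $N$ below are purely illustrative.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

eps, N = 1e-3, 100                                  # illustrative values only
U_err = np.cos(eps) * I + 1j * np.sin(eps) * X      # faulty "identity": exp(i*eps*sigma_x)

psi = np.array([1, 0], dtype=complex)               # start in |0>
for _ in range(N):
    psi = U_err @ psi

p_error = abs(psi[1]) ** 2
print(p_error, np.sin(N * eps) ** 2, (N * eps) ** 2)
# Coherent over-rotations add up in amplitude: the error probability grows
# as (N*eps)^2 rather than N*eps^2.
```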
§.§ Decoherence: The devil is in the environment
Environmental decoherence is another important source of errors in quantum systems. Once again we will take a very basic example of
a decoherence model and examine how it influences our trivial algorithm. Later in section <ref> we will illustrate a more
complicated decoherence model that arises from standard mechanisms.
Consider a very simple environment, which is another two level quantum system. This environment has two basis states, $\ket{e_0}$
and $\ket{e_1}$ which satisfies the completeness relations,
\begin{equation}
\langle e_i | e_j\rangle = \delta_{ij}, \quad \ket{e_0}\bra{e_0} + \ket{e_1}\bra{e_1} = I.
\end{equation}
We will also assume that the environment couples to the qubit in a specific way. When the qubit is in the $\ket{1}$ state, the coupling
flips the environmental state while if the qubit is in the $\ket{0}$ state nothing happens to the environment.
Additionally, as we anticipate the effect of this decoherence model
we will slightly alter our trivial algorithm.
Rather than considering a qubit prepared in the $\ket{0}$ state and applying $N$ identity operations, we instead modify
the algorithm to the following,
\begin{equation}
\begin{aligned}
\ket{\psi}_{\text{final}} = H\sigma_IH\ket{0} &=H\sigma_I \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \\
&= H \frac{1}{\sqrt{2}}(\ket{0}+\ket{1}) = \ket{0}.
\end{aligned}
\end{equation}
Essentially we are performing two $H \equiv $ Hadamard operations separated by a wait stage, represented by the Identity gate.
Finally, this model assumes the system/environment interaction only occurs during this
wait stage of the algorithm. As with
the previous algorithm, we should measure the state $\ket{0}$ with probability one. The reason for
modifying our trivial algorithm is that this specific decoherence model acts to reduce coherence between the $\ket{0}$ and $\ket{1}$
basis states and hence we require a coherent superposition to observe any effect from the environmental coupling.
We now assume that the environment starts in the pure state, $\ket{E} = \ket{e_0}$, and couples to the system such that,
\begin{equation}
H\sigma_I H\ket{0}\ket{E} = \frac{1}{2}(\ket{0}+\ket{1})\ket{e_0} + \frac{1}{2}(\ket{0}-\ket{1})\ket{e_1}
\end{equation}
As we are considering environmental decoherence, pure states will be transformed into classical mixtures; hence we now move
into the density matrix representation for the state $H\sigma_I H\ket{0}\ket{E}$,
\begin{equation}
\begin{aligned}
\rho_{f} &= \frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}+\ket{1}\bra{0}+\ket{1}\bra{1})\ket{e_0}\bra{e_0}\\
&+\frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}-\ket{1}\bra{0}+\ket{1}\bra{1})\ket{e_1}\bra{e_1} \\
&+ \frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}+\ket{1}\bra{0}-\ket{1}\bra{1})\ket{e_0}\bra{e_1}\\
&+\frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}-\ket{1}\bra{0}-\ket{1}\bra{1})\ket{e_1}\bra{e_0}.
\end{aligned}
\end{equation}
Since we do not measure the environmental degrees of freedom, we trace over this part of the system, giving,
\begin{equation}
\begin{aligned}
\text{Tr}_{E}(\rho_f) &= \frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}+\ket{1}\bra{0}+\ket{1}\bra{1})\\
&+\frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}-\ket{1}\bra{0}+\ket{1}\bra{1}) \\
&= \frac{1}{2}(\ket{0}\bra{0}+\ket{1}\bra{1}).
\end{aligned}
\end{equation}
Measurement of the system will consequently return $\ket{0}$ 50% of the time and $\ket{1}$ 50% of the time. This final state
is a complete mixture of the qubit states and is consequently a classical system. The coupling to the environment removed all the
coherence between the $\ket{0}$ and $\ket{1}$ states and consequently the second Hadamard transform,
intended to rotate $(\ket{0}+\ket{1})/\sqrt{2} \rightarrow \ket{0}$, has no effect.
Since we assumed that the system/environment
coupling during the wait stage causes the environmental degree of freedom to “flip” when the qubit is in the $\ket{1}$ state, this decoherence model implicitly incorporates a temporal effect.
The temporal interval of our identity gate in the above algorithm is long enough to enact this full controlled-flip operation. If we
assumed a controlled rotation that is not a full flip on the environment, the final mixture would not be 50/50. Instead there
would be a residual coherence between the qubit states and an increased probability of our algorithm returning a
$\ket{0}$. Section <ref>
revisits the decoherence model and illustrates how time-dependence is explicitly incorporated.
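For readers who prefer to see the algebra numerically, the following numpy sketch reproduces the full controlled-flip version of this model and the resulting 50/50 mixture; the qubit/environment ordering and the representation of the coupling as a CNOT are our own illustrative choices.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
# "Wait" stage: flip the environment iff the qubit is |1>
# (a CNOT with the qubit as control and the environment as target).
CFLIP = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=complex)

psi0 = np.kron([1, 0], [1, 0]).astype(complex)      # |0>_qubit |e_0>_env
# Circuit H -> wait -> H (rightmost operator acts first on the ket).
psi = np.kron(H, I) @ CFLIP @ np.kron(H, I) @ psi0

rho = np.outer(psi, psi.conj())
rho_qubit = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # trace out the environment
print(np.round(rho_qubit.real, 3))
# -> 0.5*|0><0| + 0.5*|1><1| : all coherence is lost, so the final Hadamard has no effect.
```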
§.§ Loss, Leakage, Measurement and Initialization: Variations of the above
Other sources of error such as qubit initialization, measurement errors, qubit loss and qubit leakage are modeled in
a very similar manner. Measurement errors and qubit loss are modeled in the same way as environmental decoherence.
Measurement errors are described by utilizing the following measurement projection onto a qubit space,
\begin{equation}
A = (1-p_M)\ket{0}\bra{0} + p_M\ket{1}\bra{1}
\end{equation}
where $p_M \in [0,1]$ is the probability of measurement error. If we have a pure state $\rho = \ket{0}\bra{0}$, the probability of
measuring a $\ket{0}$ is,
\begin{equation}
P(\ket{0}) = \text{Tr}(A\rho) = (1-p_M)
\end{equation}
indicating that the correct result is observed with probability $1-p_M$.
Qubit loss is modeled in a similar manner. When a qubit is lost, it is essentially coupled to the environment which acts to measure
the system, with the classical information lost. This coupling follows the decoherence analysis shown earlier, where a 50/50 mixed
state of the qubit results. Therefore the projector onto the qubit space is given by
$A = \frac{1}{2}(\ket{0}\bra{0} + \ket{1}\bra{1})$, which is identical to simply tracing over the lost qubit and equivalent to
a measurement error of probability $p=0.5$.
With this type of error channel, not only is the physical object lost (and hence cannot be directly measured), but an initially
pure qubit is converted to a completely mixed state. While this model of qubit loss is equivalent to environmental
coupling, correcting this type of error requires additional machinery on top of standard
QEC protocols. The
difficulty with qubit loss is the initial detection of whether the qubit is actually present. While standard correction
protocols can protect against the loss of information on a qubit, this still
assumes that the physical object still exists in the
computer. Hence in loss correction protocols, an initial non-demolition detection method must be employed (which
determines if the qubit is actually present without performing a projective measurement on the computational
state) before standard correction can be utilized to correct the error.
Initialization of the qubit can be modeled either using a coherent systematic error model or using the decoherence model. The
specific methodology depends largely on the physical mechanisms used to initialize the system. If a decoherence model is
employed, initialization is modeled exactly the same way as imperfect measurement. If we have a probability $p_I$ of
initialization error, the initial state of the system is given by the mixture,
\begin{equation}
\rho_i = (1-p_I)\ket{0}\bra{0} + p_I\ket{1}\bra{1}.
\end{equation}
In contrast, we could consider an initialization model which is achieved via a coherent unitary operation where the
target is the desired initial state. In this case, the initial state is pure, but contains a non-zero amplitude
of the undesired target, for example,
\begin{equation}
\ket{\psi}_i = \alpha\ket{0} + \beta\ket{1}
\end{equation}
where $|\alpha|^2+|\beta|^2 = 1$ and $|\beta|^2 \ll 1$. The interpretation of these two types of initialization models is identical
to the coherent and incoherent models presented. Again, the effect of these types of errors relates
to the probability of measuring the system in an erroneous state.
One final type of error that we can briefly mention is the problem of qubit leakage. Qubit leakage manifests itself due to the fact
that most systems utilized for qubit applications are not simple two level quantum systems. For example, Fig <ref> (from Ref. [99])
illustrates the energy level structure for a $^{43}$Ca$^+$ ion utilized for ion trap quantum computing
at Oxford.
(from Ref. [99]) Energy level structure for the $^{43}$Ca$^+$ investigated by the Oxford ion-trapping group.
The structure of this ion is clearly not a 2-level quantum system. Hence leakage into non-qubit states is an important factor to consider.
The qubit in this system is
defined using only two electronic states; however, the system itself contains many more levels
(including some which are used for qubit readout and initialization through optical pumping and photo-luminescence).
As with systematic errors, leakage can occur when improper control is applied to such a system. In the case of ion-traps,
qubit transitions are performed by focusing finely tuned lasers resonant on the relevant transitions. If the laser frequency
fluctuates or additional levels are not sufficiently detuned from the qubit resonance, the following transformation could occur:
\begin{equation}
U\ket{0} = \alpha\ket{0} + \beta\ket{1} + \gamma\ket{2},
\end{equation}
where the state $\ket{2}$ is a third level which is now populated due to improper control. The actual effect of this type of
error can manifest in several different ways. The primary problem with leakage is that it violates the basic assumption of a
qubit structure to the computer. As quantum circuits and algorithms are fundamentally designed assuming the
computational array is a collection of 2-level systems, operators of the above form (which in this case is operating over
a 3-level space) will naturally induce unwanted dynamics. Another important implication of applying non-qubit operations
is how these levels interact with the environment and hence how decoherence affects the system.
For example, in the above case, the unwanted level, $\ket{2}$, may be extremely short lived leading to an
emission of a photon and the system relaxing back to the ground state. For these reasons, leakage is one of
the most problematic error channels to correct using QEC. In general, leakage induced errors need to be
corrected via the non-demolition detection of a leakage event
(i.e. determining if the quantum system is confined
to a qubit without performing a measurement discriminating the $\ket{0}$ and $\ket{1}$ states [80, 45, 110]) or through the
use of complicated pulse control which acts to re-focus an improperly confined quantum gate back to the
qubit subspace [111, 16]. In the context of mass manufacturing of qubit systems, leakage would be
quantified immediately after the fabrication of a device, using intrinsic characterization protocols such as those
discussed in Ref. [35]. If a particular system is found to be improperly confined to the qubit subspace it would simply be discarded. Employing characterization at this stage would then eliminate the need to
implement pulse control of leakage, shortening gate times and ultimately reducing error rates in the computer.
In this section we introduced the basic ideas of quantum errors and how they affect the success of a
quantum algorithm. Section <ref> will return in a more focused manner to error models and how they relate
to error correction in a quantum computer.
§ QEC, A GOOD STARTING POINT: THE 3-QUBIT CODE
The 3-qubit bit-flip code is traditionally used as a basic introduction to the concept of Quantum Error Correction.
However, it should be emphasized that the 3-qubit code does not represent a full quantum code. This is
due to the fact that the code cannot simultaneously correct for both bit and phase flips (see
section <ref>); the ability to correct both is
a sufficient condition for correcting an arbitrary error mapping on a single qubit.
This code is a standard repetition code which was extended by
Shor [90] to the full 9-qubit quantum code which was the
first demonstration that QEC was possible.
The 3-qubit code encodes a single logical qubit into three physical qubits with the property that
it can correct for a single $\sigma_x \equiv X$ bit-flip error. The two logical basis states
$\ket{0}_L$ and $\ket{1}_L$ are defined as,
\begin{equation}
\ket{0}_L = \ket{000}, \quad \quad \ket{1}_L = \ket{111},
\end{equation}
such that an arbitrary single qubit state $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$ is mapped to,
\begin{equation}
\begin{aligned}
\alpha\ket{0} + \beta\ket{1} &\rightarrow \alpha\ket{0}_L + \beta\ket{1}_L \\
&= \alpha\ket{000} +
\beta\ket{111} = \ket{\psi}_L.
\end{aligned}
\end{equation}
Fig. <ref> illustrates the quantum circuit required to encode a single logical qubit via the initialization of
two ancilla qubits and two CNOT gates.
Quantum Circuit to prepare the $\ket{0}_L$ state for the 3-qubit code where an arbitrary
single qubit state, $\ket{\psi}$ is coupled to two freshly initialized ancilla qubits via
CNOT gates to prepare $\ket{\psi}_L$.
The reason why this code is able to correct for a single bit flip error is the binary distance between the two
codeword states. Notice that three individual bit flips are required to take
$\ket{0}_L \leftrightarrow \ket{1}_L$, hence
if we assume $\ket{\psi} = \ket{0}_L$, a single bit flip on any qubit leaves the final state closer to $\ket{0}_L$
than $\ket{1}_L$. The distance between two codeword states, $d$, defines the number of errors that
can be corrected, $t$, as, $t = \lfloor(d-1)/2\rfloor$. In this case, $d=3$, hence $t=1$.
How are we able to correct errors using this code without directly measuring or obtaining information
about the logical state?
Two additional ancilla qubits are introduced, which are used to extract syndrome information (information
regarding possible errors) from the data block
without discriminating the exact state of any qubit, as Fig. <ref> illustrates.
Circuit required to encode and correct for a single $X$-error. We assume that after encoding a single
bit-flip occurs on one of the three qubits (or no error occurs). Two initialized ancilla are then
coupled to the data block
which only checks the parity between qubits. These ancilla are then measured, with the measurement result
indicating where (or if) an error has occurred, without directly measuring any of the data qubits. Using this
syndrome information, the error can be corrected with a classically controlled $X$ gate.
For the sake of simplicity we assume that all gate operations are perfect and the only place where the
qubits are susceptible to error is the region between encoding and correction. We will return to
this issue in section <ref> when we discuss Fault-tolerance. We also assume that
at most, a single, complete bit flip error occurs on one of the three data qubits. Correction
proceeds by introducing two ancilla qubits and performing a sequence of CNOT gates, which checks the
parity of the three qubits. Table <ref> summarizes the state of the whole system, for each possible error, just prior to measurement.
Error Location Final State, $\ket{\text{data}}\ket{\text{ancilla}}$
No Error $\alpha\ket{000}\ket{00} + \beta\ket{111}\ket{00}$
Qubit 1 $\alpha\ket{100}\ket{11} + \beta\ket{011}\ket{11}$
Qubit 2 $\alpha\ket{010}\ket{10} + \beta\ket{101}\ket{10}$
Qubit 3 $\alpha\ket{001}\ket{01} + \beta\ket{110}\ket{01}$
Final state of the five qubit system prior to the syndrome measurement for no error or a single
$X$ error on one of the qubits. The last two qubits represent the state of the ancilla. Note that each possible
error will result in a unique measurement result (syndrome)
of the ancilla qubits. This allows for an $X$ correction gate to be applied
to the data block which is classically controlled from the syndrome result. At no point during correction do
we learn anything about $\alpha$ or $\beta$.
For each possible situation, either no error or a single bit-flip error, the ancilla qubits are flipped to a unique state based on
the parity of the data block. These qubits are then measured to obtain the classical syndrome result. The
result of the measurement will then dictate if an $X$ correction gate needs to be applied to a specific qubit, i.e.
\begin{equation}
\begin{aligned}
&\text{Ancilla Measurement:} \quad \ket{00}, \quad \text{Collapsed State:} \quad \alpha\ket{000} + \beta\ket{111} \quad \therefore \text{Clean State} \\
&\text{Ancilla Measurement:} \quad \ket{01}, \quad \text{Collapsed State:} \quad \alpha\ket{001} + \beta\ket{110} \quad \therefore \text{Bit Flip on Qubit 3} \\
&\text{Ancilla Measurement:} \quad \ket{10}, \quad \text{Collapsed State:} \quad \alpha\ket{010} + \beta\ket{101} \quad \therefore \text{Bit Flip on Qubit 2} \\
&\text{Ancilla Measurement:} \quad \ket{11}, \quad \text{Collapsed State:} \quad \alpha\ket{100} + \beta\ket{011} \quad \therefore \text{Bit Flip on Qubit 1} \\
\end{aligned}
\end{equation}
Provided that only a single error has occurred, the data block is restored. Notice that at no point during
correction do we gain any information regarding the co-efficients $\alpha$ and $\beta$, hence the computational
wave-function will remain intact during correction.
This code will only work if a maximum of one error occurs.
If two $X$ errors occur, then by tracking the circuit through
you will see that the syndrome result becomes ambiguous. For example, if an $X$ error occurs on both
qubits one and two, then the syndrome result will be $\ket{01}$.
This will cause us to mis-correct by applying an $X$ gate
to qubit 3. Therefore, two errors will induce a logical bit flip and cause the code to fail, as expected.
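The full encode/error/measure/correct cycle can be simulated in a few lines of numpy. The sketch below uses the parity checks implied by Table <ref> (the first ancilla comparing qubits 1 and 2, the second comparing qubits 1 and 3); the amplitudes (denoted a and b in the code) are arbitrary and, importantly, never appear in the syndrome.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def kron(*ops):
    return reduce(np.kron, ops)

# Encode |psi>_L = a|000> + b|111>; (a, b) are illustrative and never read out.
a, b = 0.6, 0.8
psi_L = np.zeros(8, dtype=complex)
psi_L[0b000], psi_L[0b111] = a, b

# Parity checks implied by the syndrome table (+1 eigenvalue -> bit 0, -1 -> bit 1).
checks = [kron(Z, Z, I2), kron(Z, I2, Z)]
X_on = [kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]
recover = {(0, 0): np.eye(8), (1, 1): X_on[0], (1, 0): X_on[1], (0, 1): X_on[2]}

for label, err in [("no error", np.eye(8)), ("X on qubit 1", X_on[0]),
                   ("X on qubit 2", X_on[1]), ("X on qubit 3", X_on[2])]:
    corrupted = err @ psi_L
    syndrome = tuple(int(round((1 - np.real(corrupted.conj() @ S @ corrupted)) / 2))
                     for S in checks)
    restored = recover[syndrome] @ corrupted
    print(f"{label}: syndrome = {syndrome}, recovered = {np.allclose(restored, psi_L)}")
```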
To be absolutely clear on how QEC acts to restore the system and protect against errors, let us now consider a different and
more physically realistic error mapping. We will
assume that the errors acting on the qubits are coherent rotations of the form
$U = \exp (i\epsilon \sigma_x)$ on each qubit, with $\epsilon \ll 1$. We
choose coherent rotations so that we can remain in the state vector representation. This is not a necessary requirement; however, more
general incoherent mappings would require us to move to density matrices.
We assume that each qubit experiences the same error, hence the error operator acting on the state is,
\begin{equation}
\begin{aligned}
\ket{\psi}_E = E&\ket{\psi}_L,\\
E = U^{\otimes 3} &= (\cos(\epsilon)I + i\sin(\epsilon)\sigma_x)^{\otimes 3} \\
&= c_0III+ c_1 (\sigma_x\sigma_I\sigma_I+\sigma_I\sigma_x\sigma_I+\sigma_I\sigma_I\sigma_x) \\
&+ c_2 (\sigma_x\sigma_x\sigma_I+\sigma_I\sigma_x\sigma_x+\sigma_x\sigma_I\sigma_x) \\
&+ c_3 \sigma_x\sigma_x\sigma_x.
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
&c_0 = \cos^3(\epsilon), \\
&c_1 = i\cos^2(\epsilon)\sin(\epsilon) \\
&c_2 = -\cos(\epsilon)\sin^2(\epsilon)\\
&c_3 = -i\sin^3(\epsilon).
\end{aligned}
\end{equation}
Now let's examine
the transformation that occurs when we run the error correction circuit in Fig. <ref>, which we denote via the unitary transformation,
$U_{QEC}$, over both the data and ancilla qubits,
\begin{equation}
\begin{aligned}
U_{QEC} E\ket{\psi}_L\ket{00} &= c_0\ket{\psi}_L\ket{00} \\
&+c_1 (\sigma_x\sigma_I\sigma_I\ket{\psi}_L\ket{11} + \sigma_I\sigma_x\sigma_I\ket{\psi}_L\ket{10} + \sigma_I\sigma_I\sigma_x\ket{\psi}_L\ket{01}) \\
&+c_2 (\sigma_x\sigma_x\sigma_I \ket{\psi}_L\ket{01} + \sigma_I\sigma_x\sigma_x\ket{\psi}_L\ket{11} + \sigma_x\sigma_I\sigma_x\ket{\psi}_L\ket{10}) \\
&+c_3\,\sigma_x\sigma_x\sigma_x\ket{\psi}_L\ket{00}.
\end{aligned}
\end{equation}
Once again, the ancilla block is measured and the appropriate correction operator is applied, yielding the results (up to renormalization),
\begin{equation}
\begin{aligned}
&\text{Ancilla Measurement:} \quad \ket{00}, \quad \text{Collapsed State (with correction) :} \quad c_0\ket{\psi}_L + c_3\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\
&\text{Ancilla Measurement:} \quad \ket{01}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\
&\text{Ancilla Measurement:} \quad \ket{10}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\
&\text{Ancilla Measurement:} \quad \ket{11}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\
\end{aligned}
\end{equation}
In each case, after correction (based on the syndrome result), we are left with approximately the same state: a superposition
of a “clean state” with the logically flipped state, $\sigma_x\sigma_x\sigma_x\ket{\psi}_L$.
The important thing to notice is the amplitudes of the
terms in the superposition.
If we consider the unitary $U$ acting on a single, unencoded qubit, the rotation takes,
\begin{equation}
U\ket{\psi} = \cos(\epsilon)\ket{\psi} + i\sin(\epsilon)\sigma_x\ket{\psi},
\end{equation}
Consequently, the fidelity of the single qubit state is,
\begin{equation}
F_{\text{unencoded}} = |\bra{\psi}U\ket{\psi}|^2 = \cos^2{\epsilon} \approx 1-\epsilon^2
\end{equation}
In contrast, the fidelity of the encoded qubit state after a cycle of error correction is,
\begin{equation}
\begin{aligned}
F_{\text{no detection}} = \frac{|c_0|^2}{|c_0|^2+|c_3|^2} &= \frac{\cos^6(\epsilon)}{\cos^6(\epsilon)+\sin^6(\epsilon)} \\ &\approx 1-\epsilon^6,
\end{aligned}
\end{equation}
with probability $1-3\epsilon^2+O(\epsilon^4)$ and
\begin{equation}
\begin{aligned}
F_{\text{error detected}} &= \frac{|c_1|^2}{|c_1|^2+|c_2|^2} \\ &= \frac{\cos^4(\epsilon)\sin^2(\epsilon)}{\cos^4(\epsilon)\sin^2(\epsilon)+\sin^4(\epsilon)\cos^2(\epsilon)} \\ &\approx 1-\epsilon^2.
\end{aligned}
\end{equation}
with probability $3\epsilon^2 + O(\epsilon^4)$.
This is the crux of how QEC suppresses errors at the logical level. During a round of error correction, if no error is detected (which
if the error rate is small, occurs with high probability), the error on the resulting state is suppressed from $O(\epsilon^2)$ to
$O(\epsilon^6)$, while if a single error is detected, the fidelity of the resulting state remains the same.
This is expected, as the 3-qubit code is a single error correcting code. If one error has already been corrected
then the failure rate of the logical system is conditional on experiencing one further error
(which will be proportional to $\epsilon^2$). As $\epsilon \ll 1$
the majority of correction cycles will detect no error and the fidelity of the resulting
encoded state is higher than when
unencoded. Note that as $\epsilon^2 \rightarrow 1/3$ the benefit of the code disappears,
as every correction cycle detects an error and the resulting fidelity is no better than that of an
unencoded qubit.
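These scalings are easy to verify numerically; the following sketch evaluates the fidelities and branching probabilities above for an illustrative value of $\epsilon$.

```python
import numpy as np

eps = 0.05   # illustrative value, eps << 1

c0 = np.cos(eps) ** 3
c1 = 1j * np.cos(eps) ** 2 * np.sin(eps)
c2 = -np.cos(eps) * np.sin(eps) ** 2
c3 = -1j * np.sin(eps) ** 3

F_unencoded = np.cos(eps) ** 2
F_no_detect = abs(c0) ** 2 / (abs(c0) ** 2 + abs(c3) ** 2)
F_detected = abs(c1) ** 2 / (abs(c1) ** 2 + abs(c2) ** 2)
p_no_detect = abs(c0) ** 2 + abs(c3) ** 2              # probability of syndrome |00>
p_detected = 3 * (abs(c1) ** 2 + abs(c2) ** 2)          # probability of a single-error syndrome

print("unencoded infidelity    :", 1 - F_unencoded, " ~ eps^2 =", eps ** 2)
print("no-detection infidelity :", 1 - F_no_detect, " ~ eps^6 =", eps ** 6)
print("detected infidelity     :", 1 - F_detected, " ~ eps^2 =", eps ** 2)
print("probabilities sum to    :", p_no_detect + p_detected)
```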
It should be stressed that no error correction scheme will, in general, restore a corrupted state to a perfectly clean code-state. The resulting state will
contain a superposition of a clean state and corrupted states; the point is that the fidelity of the corrupted states, at the logical level,
is greater than the corresponding fidelity for unencoded qubits. Consequently the probability of measuring the correct result at the
end of a specific algorithm increases when the system is encoded.
This example shows the basic principles of error correction. As mentioned earlier, the 3-qubit code does not represent a full quantum code and the
error model that we considered neglected imperfect gates and the possibility of errors occurring during state preparation and/or correction.
In the coming sections we will briefly take a look at several full quantum codes, both used for quantum memory and computation and we
will introduce the concept of full QEC using stabilizer codes. This will then lead to a description of full fault-tolerant quantum error correction.
§ THE NINE QUBIT CODE: THE FIRST FULL QUANTUM CODE
The nine qubit error correcting code was first developed by Shor [90] in 1995 and is based largely on the 3-qubit repetition code. The Shor code is a degenerate single error correcting code able to correct a logical qubit from one discrete bit flip, one discrete phase flip or
one of each on any of the nine physical qubits and is therefore
sufficient to correct for any continuous linear combination of errors on a single qubit.
The two basis states for the code are,
\begin{equation}
\begin{aligned}
\ket{0}_L = \frac{1}{\sqrt{8}}(\ket{000}+\ket{111})(\ket{000}+\ket{111})(\ket{000}+\ket{111}) \\
\ket{1}_L = \frac{1}{\sqrt{8}}(\ket{000}-\ket{111})(\ket{000}-\ket{111})(\ket{000}-\ket{111}) \\
\end{aligned}
\end{equation}
and the circuit to perform the encoding is shown in Fig. <ref>.
Circuit required to encode a single qubit with Shor's nine qubit code.
Correction for $X$ errors, for each block of three qubits encoded to $(\ket{000}\pm \ket{111})/\sqrt{2}$
is identical to the three qubit code shown earlier. By performing the correction circuit shown in Fig. <ref> for each block of three
qubits, single $\sigma_x \equiv X$ errors can be detected and corrected. Phase errors
($\sigma_z \equiv Z$) are corrected by examining the sign differences between the
three blocks. The circuit shown in Fig. <ref> achieves this.
Circuit required to perform phase correction for the 9-qubit code.
The first set of six CNOT gates compares the sign of blocks one and two of the code state and the second set of CNOT gates compares the
sign for blocks two and three. Note that a phase flip on any one qubit in a block of three has the same effect; this is why the 9-qubit
code is referred to as a degenerate code. In other error correcting codes, such as the 5- or 7-qubit codes [96, 68], there is a one-to-one
mapping between correctable errors and unique states; in degenerate codes such as this, the mapping is not unique. Hence,
provided we know in which block the error occurs, it does not matter which qubit we apply the correction operator to.
As the 9-qubit code can correct for single $X$ errors in any one block of three and a single phase error on any of the nine qubits, this
code is a full quantum error correcting code (we will detail in section <ref> why phase and bit correction is
sufficient for the correction
of arbitrary qubit errors). Even if a bit and phase error occurs on the same qubit, the $X$ correction circuit will detect and correct for
bit flips while the $Z$ correction circuit will detect and correct for phase flips. As mentioned, the $X$ error correction does
have the ability to correct for up to three individual bit flips (provided each bit flip occurs in a different block of three). However, in general
the 9-qubit code is only a single error correcting code as it cannot handle multiple errors if they occur in certain locations.
The 9-qubit code is in fact a member of a broader class of error correcting codes known as Bacon-Shor or subsystem codes [10].
Subsystem codes have the property that certain subgroups of error operators do not corrupt the logical space. This can be seen by
considering phase errors that occur in pairs for any block of three. For example, a phase flip on qubits one, two, four and five will leave
both logical states unchanged. Subsystem codes are attractive from an architectural point of view. Error correction circuits and
gates are generally simpler than for non-subsystem codes, allowing for circuit structures more amenable to the physical restrictions of
a computer architecture [2]. Additionally, as subsystem codes that correct larger numbers of errors share a similar structure, it is possible
to switch dynamically between codes in a fault-tolerant manner, adapting the error protection in the computer
to the noise present at the physical level [88]. We will revisit subsystem codes in section <ref>.
§ QUANTUM ERROR DETECTION
So far we have focused not only on the ability to detect errors, but also to correct them. Another approach is to drop the correction requirement. Post-selected quantum computation, developed by Knill [66], demonstrated that large scale quantum
computing could be achieved with much higher noise rates when error detection is employed instead of more costly correction protocols.
The basic idea in post-selected schemes is to encode the computer using error detecting codes and, if errors are detected, to reset and re-run the relevant subroutine of the quantum algorithm instead of performing active correction. One of the downsides
to these types of schemes is that, although they lead to large tolerable error rates, the resource requirements are
unrealistically high.
The simplest error detecting code is the 4-qubit code [45]. This encodes two logical qubits into four physical qubits with the ability to detect a single
error on either of the two logical qubits. The four basis states for the code are,
\begin{equation}
\begin{aligned}
&\ket{00} = \frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111}), \\
&\ket{01} = \frac{1}{\sqrt{2}}(\ket{1100}+\ket{0011}), \\
&\ket{10} = \frac{1}{\sqrt{2}}(\ket{1010}+\ket{0101}), \\
&\ket{11} = \frac{1}{\sqrt{2}}(\ket{0110}+\ket{1001}).
\end{aligned}
\end{equation}
Fig. <ref> illustrates the error detection circuit that can be utilized to detect a single bit and/or phase flip on one of
these encoded qubits.
Circuit required to detect errors in the 4-qubit error detection code. If both ancilla measurements return $\ket{0}$, then
the code state is error free. If either measurement returns $\ket{1}$, an error has occurred. Unlike the 9-qubit code, the detection of
an error does not give sufficient information to correct the state.
If a single bit and/or phase flip occurs on one of the four qubits, then one of the ancilla qubits
will be measured in the $\ket{1}$ state. For example, let us consider the cases where a single bit flip occurs on one of the four qubits.
The state of the system just prior to the measurement of the ancilla is shown in Table <ref>.
Error Location Final State, $\ket{\text{data}}\ket{\text{ancilla}}$
No Error $\ket{\psi}_L\ket{00}$
Qubit 1 $X_1\ket{\psi}_L\ket{10}$
Qubit 2 $X_2\ket{\psi}_L\ket{10}$
Qubit 3 $X_3\ket{\psi}_L\ket{10}$
Qubit 4 $X_4\ket{\psi}_L\ket{10}$
Qubit and ancilla state, just prior to measurement for the 4-qubit error detection code when a single bit-flip has occurred on at most one
of the four qubits.
Regardless of the location of the bit flip, the ancilla system is measured in the state $\ket{10}$. Similarly if one considers a single
phase error on any of the four qubits the ancilla measurement will return $\ket{01}$. In both cases no information is obtained regarding where
the error has occurred, hence it is not possible to correct the state. Instead the subroutine can
be reset and re-run.
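The failure to locate the error can also be seen directly from the stabilizers of this code. The sketch below (a Python illustration, not from the original text; it assumes the stabilizers $XXXX$ and $ZZZZ$, which follow from the basis states above) checks which stabilizer each single-qubit bit flip anti-commutes with: every $X_i$ produces the same syndrome, so the error is detected but cannot be located.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
kron = lambda ops: reduce(np.kron, ops)

XXXX, ZZZZ = kron([X]*4), kron([Z]*4)

def syndrome(E):
    # 0 if E commutes with the stabilizer, 1 if it anti-commutes
    s_x = int(not np.allclose(XXXX @ E, E @ XXXX))
    s_z = int(not np.allclose(ZZZZ @ E, E @ ZZZZ))
    return (s_x, s_z)

for i in range(4):
    E = kron([X if j == i else I2 for j in range(4)])
    print(f"X on qubit {i}: syndrome {syndrome(E)}")   # always (0, 1)
\end{verbatim}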
§ STABILIZER FORMALISM
So far we have presented error correcting codes from the perspective of their state representations and their preparation and correction circuits.
This is a rather inefficient method for describing the codes as the state representations and circuits clearly differ from code to code. The majority
of error correcting codes that are used within the literature are members of a class known as stabilizer codes. Stabilizer codes are
very useful to work with. The general formalism applies broadly, and there exist general rules to construct preparation circuits, correction circuits,
and fault-tolerant logical gate operations once the stabilizer structure of the code is specified.
The stabilizer formalism, first introduced by Daniel Gottesman [50], essentially uses the Heisenberg
representation of quantum mechanics, which describes quantum states in terms of operators rather than the
basis states themselves. An arbitrary state $\ket{\psi}$ is defined to be stabilized by some operator, $K$, if it is a
$+1$ eigenstate of $K$, i.e.
\begin{equation}
K\ket{\psi} = \ket{\psi}.
\end{equation}
For example, the single qubit state $\ket{0}$ is stabilized by the operator $K = \sigma_z$, i.e.
\begin{equation}
\sigma_z\ket{0} = \ket{0}
\end{equation}
Defining multi-qubit states with respect to this formalism relies on the group structure of multi-qubit operators.
Within the group of all possible single qubit operators there exists a subgroup, denoted the Pauli group,
$\mathcal{P}$, which contains the following elements,
\begin{equation}
\mathcal{P} = \{\pm \sigma_I, \pm i \sigma_I, \pm \sigma_x,
\pm i \sigma_x,\pm \sigma_y, \pm i \sigma_y,\pm \sigma_z, \pm i \sigma_z\}.
\end{equation}
It is easy to check that these matrices form a group under multiplication
through the commutation and anti-commutation
rules for the Pauli set, $\{\sigma_i \} = \{ \sigma_x,\sigma_y,\sigma_z\}$,
\begin{equation}
[\sigma_i,\sigma_j] = 2i\epsilon_{ijk}\sigma_k, \quad \quad \{\sigma_i,\sigma_j\} = 2\delta_{ij}\sigma_I,
\end{equation}
\begin{equation}
\epsilon_{ijk} = \Bigg \{
\begin{array}{l}
+1\text{ for } (i,j,k) \in \{(1,2,3), (2,3,1), (3,1,2)\}\\
-1 \text{ for } (i,j,k) \in \{(1,3,2), (3,2,1), (2,1,3)\}\\
0 \text{ for } i=j, j=k, \text{ or } k=i
\end{array}
\end{equation}
\begin{equation}
\delta_{ij} = \Bigg \{
\begin{array}{cr}
1\text{ for } i = j\\
0 \text{ for } i \neq j.
\end{array}
\end{equation}
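These identities are simple to verify numerically. The following sketch (Python, illustrative only) checks both relations for all pairs of Pauli matrices:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sig = {1: np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
       2: np.array([[0, -1j], [1j, 0]]),                  # sigma_y
       3: np.array([[1, 0], [0, -1]], dtype=complex)}     # sigma_z

def eps(i, j, k):
    # Levi-Civita symbol
    return {(1,2,3): 1, (2,3,1): 1, (3,1,2): 1,
            (1,3,2): -1, (3,2,1): -1, (2,1,3): -1}.get((i, j, k), 0)

ok = True
for i in sig:
    for j in sig:
        comm = sig[i] @ sig[j] - sig[j] @ sig[i]
        anti = sig[i] @ sig[j] + sig[j] @ sig[i]
        rhs_comm = sum(2j * eps(i, j, k) * sig[k] for k in sig)
        rhs_anti = 2 * (1 if i == j else 0) * I2
        ok &= np.allclose(comm, rhs_comm) and np.allclose(anti, rhs_anti)
print(ok)   # True
\end{verbatim}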
The Pauli group extends over $N$ qubits by simply taking the $N$-fold tensor product of $\mathcal{P}$, i.e.
\begin{equation}
\begin{aligned}
\mathcal{P}_N &= \mathcal{P}^{\otimes N} \\
&= \{\pm \sigma_I, \pm i \sigma_I, \pm \sigma_x,
\pm i \sigma_x,\pm \sigma_y, \pm i \sigma_y,\pm \sigma_z, \pm i \sigma_z\}^{\otimes N}.
\end{aligned}
\end{equation}
An $N$-qubit stabilizer state, $\ket{\psi}_N$, is then defined via an Abelian
subgroup, $\mathcal{G}$, of the $N$-qubit Pauli group, specified by $N$ independent generators, in which $\ket{\psi}_N$ is a $+1$
eigenstate of each element,
\begin{equation}
\begin{aligned}
\mathcal{G} &= \\
&\{\; G_i \;|\; G_i\ket{\psi} = \ket{\psi}, \; [G_i,G_j] = 0 \; \forall \; (i,j) \} \subset \mathcal{P}_N.
\label{eq:stabdef}
\end{aligned}
\end{equation}
Given this definition, the state $\ket{\psi}_N$ can be equivalently defined either through the
state vector representation or by specifying the stabilizer set, $\mathcal{G}$.
Many extremely useful multi-qubit states are stabilizer states, including two-qubit Bell states,
Greenberger-Horne-Zeilinger (GHZ) states [48, 47], Cluster states [18, 82]
and codeword states for QEC. As an example, consider a three qubit GHZ state, defined as,
\begin{equation}
\ket{\text{GHZ}}_3 = \frac{\ket{000} + \ket{111}}{\sqrt{2}}.
\end{equation}
This state can be expressed via any three independent generators
of the $\ket{\text{GHZ}}_3$ stabilizer group, for example,
\begin{equation}
\begin{aligned}
G_1 &= \sigma_x\otimes \sigma_x \otimes \sigma_x \equiv XXX, \\
G_2 &= \sigma_z\otimes \sigma_z \otimes \sigma_I \equiv ZZI, \\
G_3 &= \sigma_I \otimes \sigma_z \otimes \sigma_z \equiv IZZ.
\end{aligned}
\end{equation}
where the right-hand side of each equation is the short-hand representation of stabilizers.
Note that these three operators form an Abelian group [Eq. <ref>] as,
\begin{equation}
\begin{aligned}
[G_i,G_j]\ket{\psi} &= G_iG_j\ket{\psi} - G_jG_i\ket{\psi} \\
&= \ket{\psi}-\ket{\psi} = 0, \quad \forall \quad [i,j,\ket{\psi}].
\end{aligned}
\end{equation}
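A short sketch (Python, illustrative only) confirms that $\ket{\text{GHZ}}_3$ is indeed a simultaneous $+1$ eigenstate of the three generators listed above:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
kron = lambda ops: reduce(np.kron, ops)

ghz = (kron([np.array([1., 0.])]*3)
       + kron([np.array([0., 1.])]*3)) / np.sqrt(2)

G1 = kron([X, X, X])    # XXX
G2 = kron([Z, Z, I2])   # ZZI
G3 = kron([I2, Z, Z])   # IZZ

print(all(np.allclose(G @ ghz, ghz) for G in (G1, G2, G3)))  # True
\end{verbatim}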
Similarly, the four orthogonal Bell states,
\begin{equation}
\begin{aligned}
\ket{\Phi^{\pm}} &= \frac{\ket{00} \pm \ket{11}}{\sqrt{2}}, \\
\ket{\Psi^{\pm}} &= \frac{\ket{01} \pm \ket{10}}{\sqrt{2}},
\end{aligned}
\end{equation}
are stabilized by the operators $G_1 = (-1)^aXX$ and $G_2 = (-1)^b ZZ$, where $a,b \in \{0,1\}$ and each of the
four Bell states corresponds to one of the four unique pairs,
$\{\Phi^+,\Psi^+,\Phi^-,\Psi^-\} \leftrightarrow \{ [0,0],[0,1],[1,0],[1,1]\}$.
§ QEC WITH STABILIZER CODES
The use of the stabilizer formalism to describe quantum error correction codes is extremely useful since it
allows for easy synthesis of correction circuits and also clearly shows how logical operations can be performed
directly on encoded data. As an introduction we will focus on arguably the most well known quantum code, the
7-qubit Steane code, first proposed in 1996 [96].
The 7-qubit code represents a full quantum code that encodes one logical qubit into seven physical qubits, with
the ability to correct for a single $X$ and/or $Z$ error.
The $\ket{0}_L$ and $\ket{1}_L$ basis states are
defined as,
\begin{equation}
\begin{aligned}
|0\rangle_L = \frac{1}{\sqrt{8}}(&|0000000\rangle + |1010101\rangle + |0110011\rangle + |1100110\rangle +
|0001111\rangle + |1011010\rangle + |0111100\rangle + |1101001\rangle),\\
|1\rangle_L = \frac{1}{\sqrt{8}}(&|1111111\rangle + |0101010\rangle + |1001100\rangle + |0011001\rangle +
|1110000\rangle + |0100101\rangle + |1000011\rangle + |0010110\rangle).
\label{eq:log}
\end{aligned}
\end{equation}
The stabilizer set for the 7-qubit code is fully specified by the six operators,
\begin{equation}
\begin{aligned}
&K^1 = IIIXXXX, \quad \quad K^2 = XIXIXIX,\\
&K^3 = IXXIIXX, \quad \quad K^4 = IIIZZZZ \\
&K^5 = ZIZIZIZ, \quad \quad K^6 = IZZIIZZ.
\end{aligned}
\label{eq:stab7}
\end{equation}
As the 7-qubit codeword states are specified by only six stabilizers over seven qubits, the code space is two dimensional,
and its two basis states are the logical states. A final operator, $K^7 = ZZZZZZZ=Z^{\otimes 7}$,
fixes the state to one of the codewords, with
$K^7\ket{0}_L = \ket{0}_L$ and $K^7\ket{1}_L = -\ket{1}_L$. The 7-qubit code is
defined as an $[[n,k,d]] = [[7,1,3]]$
quantum code, where $n=7$ physical qubits encode $k=1$ logical qubit with a distance between basis states of $d=3$,
correcting $t = (d-1)/2 = 1$ error. Notice that the stabilizer set separates into $X$ and $Z$ sectors, which defines the
code as a Calderbank-Shor-Steane (CSS) code. CSS codes are extremely useful since they allow for straightforward
logical gate operations to be applied directly to the encoded data [Section <ref>]
and are reasonably easy to derive from classical codes.
Although the 7-qubit code is the most well known stabilizer code, there are other stabilizer codes
which encode multiple logical qubits and correct for more errors [50].
The downside to these larger codes is
that they require more physical qubits and more complicated error correction circuits.
Tables <ref>
and <ref> show the stabilizer structure of two other well known codes: the 9-qubit
code [90], which we have already examined, and the 5-qubit code [68], which
represents the smallest possible quantum code that corrects for a single error.
$K^1$ $Z$ $Z$ $I$ $I$ $I$ $I$ $I$ $I$ $I$
$K^2$ $Z$ $I$ $Z$ $I$ $I$ $I$ $I$ $I$ $I$
$K^3$ $I$ $I$ $I$ $Z$ $Z$ $I$ $I$ $I$ $I$
$K^4$ $I$ $I$ $I$ $Z$ $I$ $Z$ $I$ $I$ $I$
$K^5$ $I$ $I$ $I$ $I$ $I$ $I$ $Z$ $Z$ $I$
$K^6$ $I$ $I$ $I$ $I$ $I$ $I$ $Z$ $I$ $Z$
$K^7$ $X$ $X$ $X$ $X$ $X$ $X$ $I$ $I$ $I$
$K^8$ $X$ $X$ $X$ $I$ $I$ $I$ $X$ $X$ $X$
The eight stabilizers of the 9-qubit Shor code, encoding one logical qubit into nine physical
qubits to correct for a single $X$ and/or $Z$ error.
$K^1$ $X$ $Z$ $Z$ $X$ $I$
$K^2$ $I$ $X$ $Z$ $Z$ $X$
$K^3$ $X$ $I$ $X$ $Z$ $Z$
$K^4$ $Z$ $X$ $I$ $X$ $Z$
The four stabilizers of the [[5,1,3]] quantum code, encoding one logical qubit into five physical qubits
to correct for a single $X$ and/or $Z$ error. Unlike the 7- and 9-qubit codes, the [[5,1,3]] code is a non-CSS
code, since the stabilizer set does not separate into $X$ and $Z$ sectors.
§.§ State Preparation
Using the stabilizer structure of QEC codes, logical state preparation and error correction follow a generic procedure.
Recall that the codeword states are defined as $+1$ eigenstates of the stabilizer set. In order to prepare
a logical state from some arbitrary input, we need to forcibly project qubits into eigenstates of these operators.
Consider the circuit shown in Fig. <ref>.
Quantum Circuit required to project an arbitrary state, $\ket{\psi}_I$ into a $\pm 1$ eigenstate
of the Hermitian operator, $U = U^{\dagger}$.
The measurement result of the ancilla determines which eigenstate $\ket{\psi}_I$ is projected to.
For some arbitrary input state, $\ket{\psi}_I$, an ancilla which is initialized in the $\ket{0}$ state is used as a control qubit
for a Hermitian operation ($U^{\dagger} = U$) on $\ket{\psi}_I$.
After the second Hadamard gate is performed, the state of the system is,
\begin{equation}
\ket{\psi}_F = \frac{1}{2} ( \ket{\psi}_I + U\ket{\psi}_I)\ket{0} + \frac{1}{2}(\ket{\psi}_I - U\ket{\psi}_I)\ket{1}.
\end{equation}
The ancilla qubit is then measured in the computational basis. If the result is $\ket{0}$, the input state is projected to
(neglecting normalization),
\begin{equation}
\ket{\psi}_F = \ket{\psi}_I+U\ket{\psi}_I.
\end{equation}
Since $U$ is both Hermitian and unitary, $U^2 = I$, so $U\ket{\psi}_F = U\ket{\psi}_I + U^2\ket{\psi}_I = \ket{\psi}_F$; hence $\ket{\psi}_F$ is a $+1$ eigenstate of $U$.
If the ancilla is measured to be $\ket{1}$, then the input is projected to the state,
\begin{equation}
\ket{\psi}_F = \ket{\psi}_I-U\ket{\psi}_I,
\end{equation}
which is the $-1$ eigenstate of $U$. Therefore, provided $U$ is Hermitian, the general circuit of
Fig. <ref>
will project an arbitrary input state to a $\pm 1$ eigenstate of $U$. This procedure is well known and is
referred to as either a “parity" or “operator" measurement [76].
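The projection performed by this circuit is easy to simulate directly. The sketch below (Python; the input state and the choice $U = X$ are illustrative assumptions) applies the sequence Hadamard, controlled-$U$, Hadamard to the ancilla and confirms that, conditioned on the ancilla outcome, the data collapses onto $\ket{\psi}_I \pm U\ket{\psi}_I$ up to normalization.
\begin{verbatim}
import numpy as np

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
I2 = np.eye(2)
U = np.array([[0., 1.], [1., 0.]])        # a Hermitian example: U = X

psi = np.array([0.6, 0.8])                # arbitrary single-qubit input state

# Register ordered as (ancilla, data); controlled-U with ancilla control:
CU = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), U]])

state = np.kron(np.array([1., 0.]), psi)  # ancilla |0> times |psi>_I
state = np.kron(H, I2) @ state            # first Hadamard on the ancilla
state = CU @ state                        # controlled-U
state = np.kron(H, I2) @ state            # second Hadamard on the ancilla

data_if_0 = state[:2]    # un-normalized data state if the ancilla reads |0>
data_if_1 = state[2:]    # un-normalized data state if the ancilla reads |1>

print(np.allclose(data_if_0, (psi + U @ psi) / 2))   # True
print(np.allclose(data_if_1, (psi - U @ psi) / 2))   # True
\end{verbatim}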
From this construction it should be clear how QEC state preparation proceeds. Taking the $[[7,1,3]]$
code as an example, seven qubits are first initialized in the
state $\ket{0}^{\otimes 7}$, after which the circuit shown in Fig. <ref> is applied three times with
$U = (K^1,K^2,K^3)$, projecting the input state into a simultaneous $\pm 1$ eigenstate of each $X$ stabilizer
describing the $[[7,1,3]]$ code. The result of each operator measurement is then used to classically control
a single qubit $Z$ gate which is applied to one of the seven qubits at the end of the preparation. This single $Z$
gate converts any $-1$ projected eigenstates into $+1$ eigenstates.
Notice that the final three stabilizers
do not need to be measured due to the input state, $\ket{0}^{\otimes 7}$, already being a $+1$ eigenstate
of $(K^4,K^5,K^6)$. Fig. <ref> illustrates the final circuit, where instead
of one ancilla, three are utilized to speed up the state preparation by
performing each operator measurement in parallel.
Quantum circuit to prepare the $[[7,1,3]]$ logical $\ket{0}$ state. The input state $\ket{0}^{\otimes 7}$ is
projected into an eigenstate of each of the $X$ stabilizers shown in Eq. <ref>. After each ancilla measurement
the classical results are used to apply a single qubit $Z$ gate to qubit $i = 1^{M_2}+2^{M_3}+4^{M_1}$, which
converts any $-1$ eigenstates of $(K^1,K^2,K^3)$ into $+1$ eigenstates.
As a quick aside, let us detail exactly how the relevant logical basis states can be derived from the
stabilizer structure of the code by utilizing the preparation circuit illustrated above.
Instead of the 7-qubit code, we will use the stabilizer set shown in Table <ref>
to calculate the $\ket{0}_L$ state for the 5-qubit code. The four code stabilizers are given by,
\begin{equation}
\begin{aligned}
&K^1 = XZZXI, \quad \quad K^2 = IXZZX,\\
&K^3 = XIXZZ, \quad \quad K^4 = ZXIXZ.
\end{aligned}
\end{equation}
As with the 7-qubit code, projecting an arbitrary state into a $+1$ eigenstate of these operators defines
the two logical basis states $\ket{0}_L$ and $\ket{1}_L$, with the operator $\bar{Z} = ZZZZZ$ fixing the
state to either $\ket{0}_L$ or $\ket{1}_L$. Therefore, calculating $\ket{0}_L$ from some initial un-encoded state
requires us to project the initial state into a $+1$ eigenstate of these operators. If we take the initial, un-encoded
state as $\ket{00000}$, then it already is a $+1$ eigenstate of $\bar{Z}$. Therefore, to find $\ket{0}_L$ we
simply calculate,
\begin{equation}
\begin{aligned}
\ket{0}_L&=\prod_{i=1}^4 (I^{\otimes 5} + K^i)\ket{00000},
\end{aligned}
\end{equation}
up to normalization. Expanding out this product, we find,
\begin{equation}
\begin{aligned}
\ket{0}_L = \frac{1}{4}( &\ket{00000}+\ket{01010}+\ket{10100}-\ket{11110}\\
+&\ket{01001}-\ket{00011}-\ket{11101}-\ket{10111}\\
+&\ket{10010}-\ket{11000}-\ket{00110}-\ket{01100}\\
-&\ket{11011}-\ket{10001}-\ket{01111}+\ket{00101}).
\end{aligned}
\end{equation}
Note that the above state vector does not match the one given in [68]. However, the two are
equivalent up to local rotations on each qubit; recovering the original state
simply requires conjugating the stabilizer set by the corresponding local rotations.
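The product of projectors above is also straightforward to evaluate numerically. The following sketch (Python, illustrative only; it builds the stabilizers as matrices from their Pauli strings) applies $\prod_i(I+K^i)$ to $\ket{00000}$ and prints the non-zero amplitudes, reproducing the sixteen-term expansion quoted above.
\begin{verbatim}
import numpy as np
from functools import reduce

P = {"I": np.eye(2), "X": np.array([[0., 1.], [1., 0.]]),
     "Z": np.diag([1., -1.])}
kron = lambda s: reduce(np.kron, [P[c] for c in s])

stabilizers = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]   # K1..K4

state = np.zeros(32); state[0] = 1.0                 # |00000>
for K in stabilizers:
    state = state + kron(K) @ state                  # apply (I + K)
state /= np.linalg.norm(state)

for idx in np.nonzero(np.abs(state) > 1e-9)[0]:
    print(f"{state[idx]:+.2f} |{idx:05b}>")          # sixteen terms, +/- 1/4
\end{verbatim}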
§.§ Error Correction
Error correction using stabilizer codes is a straightforward extension of state preparation. Consider an arbitrary single
qubit state that has been encoded,
\begin{equation}
\alpha\ket{0} + \beta\ket{1} \rightarrow \alpha\ket{0}_L + \beta\ket{1}_L = \ket{\psi}_L.
\end{equation}
Now assume that an error occurs on one (or multiple) qubits which is described via the operator $E$, where $E$ is
a combination of $X$ and/or $Z$ errors over the $N$ physical qubits of the logical state.
By definition of stabilizer
codes, $K^i\ket{\psi}_L = \ket{\psi}_L$, $i \in [1,..,N-k]$, for a code encoding $k$ logical qubits.
Hence the corrupted state,
$E\ket{\psi}_L$, satisfies,
\begin{equation}
K^iE\ket{\psi}_L = (-1)^m EK^i\ket{\psi}_L = (-1)^m E\ket{\psi}_L,
\end{equation}
where $m=0$ if $[E,K^i]=0$ and $m=1$ if $\{E,K^i\} = 0$. Therefore, if the error operator commutes
with the stabilizer, the state remains a $+1$ eigenstate of $K^i$; if the error operator anti-commutes with the
stabilizer, the state flips to a $-1$ eigenstate of $K^i$.
Hence the general procedure for error correction is identical to state preparation. Each of the code
stabilizers is sequentially measured. Since an error-free state is already a $+1$ eigenstate of all the
stabilizers, any error which anti-commutes with a stabilizer will flip the eigenvalue, and consequently the
parity measurement will return a result of $\ket{1}$.
Taking the $[[7,1,3]]$ code as an example, you can see that if the error operator is
$E = X_i$, where $i = (1,...,7)$,
representing a bit-flip on any one of the 7 physical qubits, then regardless of the location, $E$ will
anti-commute with a unique combination of $(K^4,K^5,K^6)$. Hence the classical results of measuring these
three operators will indicate if and where a single $X$ error has occurred. Similarly, if $E=Z_i$, then the
error operator will anti-commute with a unique combination of $(K^1,K^2,K^3)$. Consequently, the first three
stabilizers for the $[[7,1,3]]$ code correspond to $Z$ sector correction
while the second three stabilizers correspond to $X$ sector correction. Note that correction of Pauli $Y$ errors
is also taken care of by correcting in both the $X$ and $Z$ sectors, since a $Y$ error on a single qubit is
equivalent to an $X$ and a $Z$ error on the same qubit, i.e. $Y = iXZ$.
Fig. <ref> illustrates
the circuit for full error correction with the $[[7,1,3]]$ code. As you can see it is simply an extension of the
preparation circuit [Fig. <ref>] where all six stabilizers are measured across the data block.
Quantum circuit to correct for a single $X$ and/or $Z$ error using the $[[7,1,3]]$ code. Each of the
6 stabilizers are measured, with the first three detecting and correcting for $Z$ errors, while the last three
detect and correct for $X$ errors.
Even though we have specifically used the $[[7,1,3]]$ code as an example, the procedure for error correction
and state preparation is identical for all stabilizer codes allowing for full correction for both bit and
phase errors without obtaining any information regarding the state of the logical qubit.
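The syndrome logic described above can be captured in a few lines of code. The sketch below (Python, illustrative only, not part of the original text) represents errors and stabilizers as Pauli strings and computes, for every single-qubit $X$ and $Z$ error on the $[[7,1,3]]$ code, which stabilizers it anti-commutes with; each error location yields a distinct syndrome.
\begin{verbatim}
# K1-K3 are the X-type stabilizers, K4-K6 the Z-type stabilizers given above
stabilizers = ["IIIXXXX", "XIXIXIX", "IXXIIXX",
               "IIIZZZZ", "ZIZIZIZ", "IZZIIZZ"]

def anticommute(p, q):
    # two single-qubit Paulis anti-commute iff both are non-identity and differ
    return p != "I" and q != "I" and p != q

def syndrome(error):
    # error: 7-character Pauli string, e.g. "IIXIIII"
    return tuple(sum(anticommute(e, s) for e, s in zip(error, stab)) % 2
                 for stab in stabilizers)

for pauli in ("X", "Z"):
    for i in range(7):
        err = "I" * i + pauli + "I" * (6 - i)
        print(err, "->", syndrome(err))
\end{verbatim}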
§ DIGITIZATION OF QUANTUM ERRORS
Up until now we have remained fairly abstract regarding the analysis of quantum errors. Specifically,
we have examined QEC from the standpoint of a discrete set of Pauli errors occurring at certain locations
within a larger quantum circuit. In this section we examine how this analysis of errors relates to
more realistic processes such as environmental decoherence and systematic gate errors.
Digitization of quantum noise is often assumed when people examine the stability of quantum circuit
design or attempt to calculate thresholds for concatenated error correction. However, the
equivalence of discrete Pauli errors to more general, continuous, noise only makes sense when
we consider the stabilizer nature of the correction procedure. Recall from section <ref> that
correction is performed by re-projecting a potentially corrupt data block into $+1$ eigenstates of the
stabilizer set. It is this process that acts to digitize quantum noise, since a general continuous
mapping from a “clean" codeword state to a corrupt one will not satisfy the stabilizer conditions.
We will first show how a coherent systematic error, caused by imperfect implementation of quantum
gates, is digitized during correction, after which we will briefly discuss environmental decoherence
from the standpoint of the Markovian decoherence model.
§.§ Systematic gate errors
We have already shown an example of how systematic gate errors are digitized into a discrete set of
Pauli operators in Sec. <ref>. However, in that case we only considered a very restrictive type of
error, namely the coherent operator $U=\exp(i\epsilon X)$. We can easily extend this analysis to
cover all forms of systematic gate errors.
Consider an $N$ qubit unitary operation, $U_N$, which is valid on encoded data. Assume that $U_N$ is
applied inaccurately such that the resultant operation is actually $U_N'$. Given a general
encoded state $\ket{\psi}_L$, the final state can be expressed as,
\begin{equation}
U_N' \ket{\psi}_L = U_E U_N \ket{\psi}_L = \sum_j \alpha_j E_j \ket{\psi'}_L,
\end{equation}
where $\ket{\psi'}_L = U_N\ket{\psi}_L$ is the result of the perfectly applied $N$ qubit gate (i.e. the stabilizer set for $\ket{\psi'}_L$
remains invariant under the operation $U_N$ [see Sec. <ref>]), and
$U_E$ is a coherent error operator which is expanded in terms of the $N$ qubit Pauli group,
$E_j \in \mathcal{P}_N$. Now append two ancilla blocks, $\ket{A_0}^X$ and $\ket{A_0}^Z$, which
are all initialized and are used for $X$ and $Z$ sector correction, then run a full error
correction cycle, which we represent by the unitary operator, $U_{\text{QEC}}$.
It will be assumed that $\ket{\psi}_L$ is encoded with a hypothetical
QEC code which can correct for $N$ errors (both $X$ and/or $Z$), hence there is a one-to-one
mapping between the error operators, $E_j$, and the orthogonal basis states of the ancilla blocks,
\begin{equation}
\begin{aligned}
&U_{\text{QEC}}U_N'\ket{\psi}_L\ket{A_0}^X\ket{A_0}^Z \\
&= U_{\text{QEC}}
\sum_j \alpha_jE_j\ket{\psi'}_L \ket{A_0}^X\ket{A_0}^Z \\
&= \sum_j \alpha_j E_j \ket{\psi'}_L\ket{A_j}^X\ket{A_j}^Z.
\end{aligned}
\end{equation}
The ancilla blocks are then measured, projecting the data blocks into the state $E_j\ket{\psi'}_L$ with
probability $|\alpha_j|^2$, after which the correction $E_j^{\dagger}$ is applied based on the syndrome
result. As the error operation $E_j$ is simply an element of $\mathcal{P}_N$, correcting for $X$ and $Z$
independently is sufficient to correct for all error operators (as $Y$ errors are corrected when a bit and phase
error is detected and corrected on the same qubit).
For very small systematic inaccuracies, the expansion coefficient $\alpha_0$, which corresponds to
$E_0 = I^{\otimes N}$, will be very close to one, with all other coefficients small. Hence during correction
there will be a very high probability that no error is detected. This is the digitization effect of quantum error
correction. Since codeword states are specific eigenstates of the stabilizers, then the re-projection of
the state when each stabilizer is measured forces any continuous noise operator to collapse to
the discrete Pauli set, with the magnitude of the error dictating the probability that the data block
is projected onto a discrete perturbation of a “clean" state.
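As a concrete illustration of this collapse (a Python sketch, not from the original text; the rotation angle is an arbitrary choice), consider the single-qubit coherent error $U_E = \exp(i\epsilon X) = \cos\epsilon\, I + i\sin\epsilon\, X$. Under syndrome extraction the state is projected onto “no error" with probability $\cos^2\epsilon$ and onto a discrete $X$ error with probability $\sin^2\epsilon$:
\begin{verbatim}
import numpy as np

eps = 0.05                                    # small systematic over-rotation
X = np.array([[0., 1.], [1., 0.]], dtype=complex)
U_E = np.cos(eps) * np.eye(2) + 1j * np.sin(eps) * X    # exp(i eps X)

psi = np.array([1., 0.], dtype=complex)       # stand-in for the protected state
out = U_E @ psi

p_no_error = abs(np.vdot(psi, out))**2        # overlap with the clean state
p_X_error  = abs(np.vdot(X @ psi, out))**2    # overlap with the X-errored state

print(p_no_error, p_X_error)                  # ~0.9975 and ~0.0025
print(np.isclose(p_no_error, np.cos(eps)**2),
      np.isclose(p_X_error,  np.sin(eps)**2))
\end{verbatim}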
§.§ Environmental decoherence
A complete analysis of environmental decoherence in relation to quantum information is a lengthy topic. Rather
than give a detailed review, we simply present a specific example to highlight how QEC
relates to environmental effects.
The Lindblad formalism [44, 76, 36]
provides an elegant method for analyzing the effect of decoherence on open
quantum systems. This model does have several assumptions, most notably that the environmental
bath couples weakly to the system (Born approximation) and that the noise is un-correlated in time,
i.e. the bath has no memory (Markovian approximation). While these assumptions are utilized for a variety of
systems [12, 17, 14], it is known that they may not
hold in some cases [55, 72, 7, 6], particularly in superconducting systems
where decoherence can be caused by small numbers of fluctuating charges; in such cases
more specific decoherence models need to be considered.
Using this
formalism, the evolution of the density matrix can be written as,
\begin{equation}
\partial_t \rho = -\frac{i}{\hbar} [H,\rho] + \sum_k \Gamma_k \mathcal{L}_k[\rho].
\end{equation}
Here $H$ is the Hamiltonian, representing the coherent dynamical evolution of the system, and
$\mathcal{L}_k[\rho]=([L_k,\rho L_k^{\dagger}]+[L_k\rho, L_k^{\dagger}])/2$ represents the
incoherent evolution. The operators $L_k$ are known as the Lindblad quantum jump operators and
are used to model specific decoherence channels, with each operator parameterized by some rate
$\Gamma_k \geq 0$. This differential equation is known as the quantum Liouville equation or, more
generally, the density matrix master equation.
To link Markovian decoherence to QEC, consider a special set of decoherence
channels that help to simplify the
calculation, representing a single qubit undergoing dephasing, spontaneous emission and spontaneous
absorption. Dephasing of a single qubit is modelled by the Lindblad operator $L_1 = Z$ while spontaneous
emission/absorption are modelled by the operators $L_2 = \ket{0}\bra{1}$ and $L_3 = \ket{1}\bra{0}$
respectively. For the sake of simplicity we assume that absorption/emission occur at the same rate, $\Gamma$.
Consequently, the density matrix evolution is given by,
\begin{equation}
\partial_t \rho = -\frac{i}{\hbar}[H,\rho] + \Gamma_z (Z\rho Z - \rho) + \frac{\Gamma}{2}(X\rho X + Y\rho Y -2\rho).
\label{eq:diff}
\end{equation}
If it is assumed that the qubit is not undergoing any coherent evolution ($H = 0$),
i.e. a memory stage within a quantum algorithm,
then Eq. <ref> can be solved by re-expressing the density matrix in the Bloch formalism.
Set $\rho(t) = I/2 + x(t)X + y(t)Y + z(t)Z$, then Eq. <ref>, with $H=0$, reduces to, $\partial_t S(t) = AS(t)$ with
$S(t) = (x(t),y(t),z(t))^T$ and
\begin{equation}
A = \begin{pmatrix} -(\Gamma + 2\Gamma_z) & 0 &0 \\ 0 &-(\Gamma + 2\Gamma_z) &0\\ 0 &0 &-2\Gamma \end{pmatrix}.
\end{equation}
This differential equation is easy to solve, leading to,
\begin{equation}
\begin{aligned}
\rho(t) &= [1-p(t)]\rho(0) + p_x(t) X\rho(0) X \\
&+ p_y(t) Y \rho(0) Y + p_z(t) Z \rho(0) Z,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
p_x(t) = & p_y(t) = \frac{1}{4}(1-e^{-2\Gamma t}), \\
&p_z(t) = \frac{1}{4}(1+e^{-2\Gamma t}-2e^{-(\Gamma +2\Gamma_z)t}), \\
&p(t) = p_x(t) + p_y(t) + p_z(t).
\end{aligned}
\end{equation}
If this single qubit is part of a QEC encoded data block, then each term represents a single
error on the qubit experiencing decoherence. Two blocks of initialized ancilla qubits are added
to the system and the error correction protocol run.
Once the ancilla qubits are measured, the state will collapse to
no error, with probability $1-p(t)$, or a single $X$,$Y$ or $Z$ error, with probabilities $p_x(t),p_y(t)$
and $p_z(t)$.
We can also see how temporal effects are incorporated into the error correction model. The temporal integration window $t$
of the master equation determines, for fixed rates, how likely it is that an error is detected and corrected: the longer
the time between correction cycles, the more likely the qubit is to have experienced an error.
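For a sense of scale, the expressions above can be evaluated directly (a Python sketch with arbitrarily assumed rates and integration windows, purely illustrative):
\begin{verbatim}
import numpy as np

Gamma, Gamma_z = 0.01, 0.02     # assumed emission/absorption and dephasing rates
for t in (0.1, 1.0, 10.0):      # assumed times between correction cycles
    p_x = p_y = 0.25 * (1 - np.exp(-2 * Gamma * t))
    p_z = 0.25 * (1 + np.exp(-2 * Gamma * t)
                  - 2 * np.exp(-(Gamma + 2 * Gamma_z) * t))
    p = p_x + p_y + p_z
    print(f"t = {t:5.1f}:  p_x = {p_x:.4f}, p_z = {p_z:.4f}, "
          f"no-error probability = {1 - p:.4f}")
\end{verbatim}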
§.§ More General mappings
Both the systematic gate errors and the errors induced by environmental decoherence illustrate the digitization
effect of quantum error correction. However, we can quite easily generalize digitization to arbitrary mappings of
the density matrix. In this case consider a more general Kraus map on a multi-qubit density matrix,
\begin{equation}
\rho \rightarrow \sum_k A_k\rho A_k^{\dagger}
\end{equation}
where $\sum A_k^{\dagger}A_k = I$. For the sake of simplicity let us choose a simple mapping where
$A_1 = (Z_1+iZ_2)/\sqrt{2}$ and $A_k = 0$ for $k\neq 1$. This mapping essentially represents dephasing on two qubits.
However, this type of mapping (when considered in the context of error correction)
represents independent $Z$ errors on either qubit one or two.
To illustrate, first expand out the density matrix (neglecting normalization),
\begin{equation}
\rho \rightarrow A_1\rho A_1^{\dagger} = Z_1\rho Z_1 + Z_2\rho Z_2 - iZ_1\rho Z_2 + iZ_2 \rho Z_1
\end{equation}
Note that only the first two terms in this expansion represent, on their own, physical mixtures;
the last two off-diagonal terms are actually irrelevant in the context of QEC and are removed
during correction.
To illustrate we again assume that $\rho$ represents a protected qubit, where $Z_1$ and $Z_2$
are physical errors on qubits comprising the codeblock. As we are only considering phase
errors in this example, we will ignore $X$ correction (but the analysis automatically generalizes if the error
mapping contains $X$ terms). A fresh ancilla block, represented by the density matrix $\rho^z_0$, is coupled
to the corrupted state $\rho' = A_1\rho A_1^{\dagger}$ and the unitary $U_{QEC}$ is run,
\begin{equation}
\begin{aligned}
U_{QEC}(\rho'\otimes \rho^z_0) U_{QEC}^{\dagger} =
&U_{QEC}(Z_1\rho Z_1\otimes \rho^z_0) U_{QEC}^{\dagger} + U_{QEC}(Z_2\rho Z_2\otimes \rho^z_0) U_{QEC}^{\dagger}\\
- &iU_{QEC}(Z_1\rho Z_2\otimes \rho^z_0) U_{QEC}^{\dagger} + iU_{QEC}(Z_2 \rho Z_1\otimes \rho^z_0) U_{QEC}^{\dagger} \\
= &Z_1\rho Z_1 \otimes \ket{Z_1}\bra{Z_1} +Z_2\rho Z_2 \otimes \ket{Z_2}\bra{Z_2} -iZ_1\rho Z_2 \otimes \ket{Z_1}\bra{Z_2} +iZ_2\rho Z_1 \otimes \ket{Z_2}\bra{Z_1},
\end{aligned}
\end{equation}
where $\ket{Z_1}$ and $\ket{Z_2}$ represent the two orthogonal syndrome states of the ancilla that
are used to detect phase errors on qubits one and two respectively. The important part of the above expression is
that when the syndrome qubits are measured, the ancilla is projected onto either $\ket{Z_1}\bra{Z_1}$ or
$\ket{Z_2}\bra{Z_2}$; therefore the two cross terms in the above expression are
never observed. In this mapping the only two possible states that exist after the measurement of the
ancilla system are,
\begin{equation}
\begin{aligned}
Z_1\rho Z_1 \otimes \ket{Z_1}\bra{Z_1} \quad \text{with probability } \frac{1}{2}, \\
Z_2\rho Z_2 \otimes \ket{Z_2}\bra{Z_2} \quad \text{with probability } \frac{1}{2}.
\end{aligned}
\end{equation}
Therefore, not only are the cross terms eliminated via error correction but the final density matrix again
collapses to a single error perturbation of “clean" codeword states with no correlated errors.
Consequently, in standard QEC analysis it is assumed that after each elementary gate operation, measurement,
initialization and memory step, a hypothetical error correction cycle is run. This cycle digitizes all
continuous errors (either systematic or environmental) into either an $X$ and/or $Z$ error on each qubit.
This cycle is assumed to be error free and take zero time. In this way error correction can be analyzed by
assuming perfect gate operations and discrete, probabilistic errors. The probability of each error occurring
can then be independently calculated via a systematic gate analysis or through the evolution of the
master equation.
§ FAULT-TOLERANT QUANTUM ERROR CORRECTION AND THE THRESHOLD THEOREM.
Section <ref> detailed the protocols required to correct for quantum errors; however,
this implementation of QEC assumed the following:
* Errors only occur during “memory" regions, i.e. when quantum operations or
error correction are not being performed and we assume errors do not occur on ancilla qubits.
* The quantum gates themselves do not induce any systematic errors within the
logical data block.
Clearly these are two very unrealistic assumptions, and error correction procedures and
logical gate operations need to be designed so that errors occurring within the circuits themselves can still be corrected.
§.§ Fault-tolerance
The concept of fault-tolerance in computation is not a new idea; it was first developed
in relation to classical computing [77, 43, 9]. However, in recent years the precise manufacturing
of digital circuitry has made large scale error correction and fault-tolerant circuits largely unnecessary.
The basic principle of Fault-tolerance is that the circuits used for gate operations and
error correction procedures should not cause errors to cascade. This can be seen clearly when
we look at a simple CNOT operation between two qubits [Fig. <ref>]. In this circuit
we are performing a sequence of three CNOT gates which act to take the state
$\ket{111}\ket{000} \rightarrow \ket{111}\ket{111}$. In Fig. <ref>a. we consider a single
$X$ error which occurs on the top most qubit prior to the first CNOT. This single error
will cascade through each of the three gates such that the $X$ error has now propagated to
four qubits. Fig. <ref>b. shows a slightly modified design that implements the
same operation, but the single $X$ error now only propagates to two of the six qubits.
Two circuits to implement the transformation
$\ket{111}\ket{000} \rightarrow \ket{111}\ket{111}$. a) shows a version where a single $X$
error can cascade into four errors while b) shows an equivalent circuit where the error only propagates
to a second qubit.
If we consider each block of three as a single logical qubit, then the staggered circuit will only induce
a total of one error in each logical block, given a single $X$ error occurred somewhere during the
gate operations. This is one of the standard definitions of fault-tolerance.
fault-tolerant circuit element: A single error will cause at most one error in the output for each logical qubit block.
It should be stressed that the idea of Fault-tolerance is a discrete definition, either a certain
quantum operation is fault-tolerant or it is not. What is defined to be fault-tolerant can change
depending on the error correction code used. For example, for a single error correcting code,
the above definition is the only one available (since any more than one error in a logical qubit
will result in the error correction procedure failing). However, if the quantum code employed
is able to correct multiple errors, then the definition of fault-tolerance can be relaxed, i.e.
if the code can correct three errors then circuits may be designed such that a single
failure results in at most two errors in the output (which is then correctable). In general, for
a code correcting $t=\lfloor (d-1)/2 \rfloor$ errors, fault-tolerance requires that $\leq t$ errors
during an operation do not result in $> t$ errors in the output for each logical qubit.
§.§ Threshold Theorem
The threshold theorem is truly a remarkable result in quantum information and is a consequence
of fault-tolerant circuit design and the ability to perform dynamical error correction.
Rather than present a detailed derivation of the theorem for a variety of noise models, we will
instead take a very simple case where we utilize a quantum code that can only correct for a
single error, using a model that assumes uncorrelated errors on individual qubits. For
more rigorous derivations of the theorem see [1, 50, 5].
Consider a quantum computer where each physical
qubit experiences either an $X$ and/or $Z$ error independently
with probability $p$, per gate operation.
Furthermore, it is assumed that each logical gate operation and error
correction circuit is designed according to the rules of Fault-tolerance and that a cycle of
error correction is performed after each elementary logical gate operation. If an error occurs
during a logical gate operation, then Fault-tolerance ensures this error will only propagate
to at most one error in each block, after which a cycle of error correction will remove the error.
Hence if the failure probability of un-encoded qubits per time step is $p$, then a single level
of error correction will ensure that the logical step fails only when two (or more) errors occur. Hence
the failure rate of each logical operation, to leading order, is now $p^1_L = cp^2$, where $p^1_L$ is the
failure rate (per logical gate operation) of a 1st level logical qubit and $c$ is the upper bound for the
number of possible 2-error combinations
which can occur at a physical level within the circuit consisting of the
correction cycle $+$ gate operation $+$
correction cycle [5].
We now repeat the process, re-encoding the computer
such that a level-2 logical qubit is formed, using the same $[[n,k,d]]$
quantum code, from $n$ level-1 encoded
qubits. It is assumed that all error correcting procedures and gate operations at the 2nd level are
self-similar to the level-1 operations (i.e. the circuit structures for the level-2 encoding are
identical to the level-1 encoding). Therefore, if the level-1 failure rate per logical time step is $p^1_L$,
then by the same argument, the failure rate of a 2-level operation is given by,
$p^2_L = c(p^1_L)^2 = c^3p^4$. This iterative procedure is then repeated (referred to as concatenation)
up to the $k$th level, such that the logical failure rate, per time step, of a $k$-level encoded qubit is given by,
\begin{equation}
p^k_L = \frac{(cp)^{2^k}}{c}.
\label{eq:threshold}
\end{equation}
Eq. <ref> implies that for a finite physical error rate, $p$, per qubit, per time step,
the failure rate of the $k$th-level encoded qubit can be made arbitrarily small by simply increasing $k$,
provided that $cp < 1$. This inequality defines the threshold: the physical error rate
experienced by each qubit per time step must satisfy $p < p_{th} = 1/c$ to ensure that additional levels of
error correction reduce the failure rate of logical components.
Hence, provided sufficient resources are available, an arbitrarily large quantum circuit can be
successfully implemented, to arbitrary accuracy, once the physical error rate is below threshold. The
calculation of thresholds is therefore an extremely important aspect to quantum architecture design.
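The behaviour of Eq. <ref> on either side of the threshold is easy to see numerically. The sketch below (Python; the value of $c$ and the two physical error rates are arbitrary assumptions for illustration) evaluates $p^k_L = (cp)^{2^k}/c$ for a few levels of concatenation:
\begin{verbatim}
c = 1e4                            # assumed combinatorial factor, so p_th = 1/c
for p in (5e-5, 2e-4):             # one rate below and one above threshold
    print(f"physical error rate p = {p:.0e}  (c*p = {c * p}):")
    for k in range(1, 5):
        p_L = (c * p) ** (2 ** k) / c
        print(f"  level {k}: logical error rate = {p_L:.3e}")
\end{verbatim}
Below threshold the logical error rate falls doubly exponentially with the concatenation level; above threshold it grows instead.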
Initial estimates of the threshold, which gave $p_{th} \approx 10^{-4}$ [59, 1, 50],
did not model physical systems accurately. More recent
thresholds [89, 87, 86, 73, 15] have been estimated for more
realistic quantum processor architectures, showing significant differences when architectural considerations
are taken into account.
§ FAULT-TOLERANT OPERATIONS ON ENCODED DATA
Sections <ref> and <ref> showed how fault-tolerant QEC allows for
any quantum algorithm to be run to arbitrary accuracy. However, the results of the threshold theorem
assume that logical operations can be performed directly on the encoded data without the need for
continual decoding and re-encoding. Using stabilizer codes, a large class of operations can
be performed on logical data in an inherently fault-tolerant way.
If a given logical state, $\ket{\psi}_L$, is stabilized by $K$, and the logical operation $U$ is applied,
the new state, $U\ket{\psi}_L$ is stabilized by $UKU^{\dagger}$, i.e,
\begin{equation}
UKU^{\dagger}U\ket{\psi}_L = UK\ket{\psi}_L = U\ket{\psi}_L.
\end{equation}
In order for the codeword states to remain valid, the stabilizer set for the code, $\mathcal{G}$,
must remain fixed through
every operation. Hence for $U$ to be a valid operation on the data,
$U\mathcal{G}U^{\dagger} = \mathcal{G}$.
§.§ Single Qubit Operations
The logical $\bar{X}$ and $\bar{Z}$ operations on a single encoded qubit are the first examples of
valid codeword operations. Taking the $[[7,1,3]]$ code as an example, $\bar{X}$ and $\bar{Z}$ are
given by,
\begin{equation}
\bar{X} = XXXXXXX \equiv X^{\otimes 7}, \quad \bar{Z} = ZZZZZZZ \equiv Z^{\otimes 7}.
\label{eq:logop}
\end{equation}
Since the single qubit Pauli operators satisfy $XZX = -Z$ and $ZXZ = -X$ then,
$\bar{X}K^{i}\bar{X} = K^{i}$ and $\bar{Z}K^{i}\bar{Z} = K^{i}$ for each of the
$[[7,1,3]]$ stabilizers given in Eq. <ref>. The fact that each stabilizer has a weight of four
guarantees that $UKU^{\dagger}$ picks up an even number of $-1$ factors. Since the
stabilizers remain fixed, the operations are valid. However, what transformations do the operators in Eq. <ref>
actually perform on the encoded data?
For a single qubit, a bit-flip operation $X$ takes $\ket{0} \leftrightarrow \ket{1}$. Recall that for a single
qubit $Z\ket{0} = \ket{0}$ and $Z\ket{1} = -\ket{1}$, hence for $\bar{X}$ to actually induce a logical bit-flip
it must take, $\ket{0}_L \leftrightarrow \ket{1}_L$. For the $[[7,1,3]]$ code, the final operator which
fixes the logical state is $K^7 = Z^{\otimes 7}$,
where $K^7\ket{0}_L = \ket{0}_L$ and $K^7\ket{1}_L = -\ket{1}_L$.
As $\bar{X}K^7\bar{X} = -K^7$, any state stabilized by $K^7$ becomes stabilized by $-K^7$ (and vice-versa)
after the operation of $\bar{X}$. Therefore, $\bar{X}$ represents a logical bit flip.
The same argument can be used for $\bar{Z}$ by considering the stabilizer properties of the states
$\ket{\pm} = (\ket{0} \pm \ket{1})/\sqrt{2}$.
Hence, the logical bit- and phase-flip gates can be applied directly to logical data by simply using seven
single qubit $X$ or $Z$ gates, [Fig. <ref>].
Bit-wise application of single qubit gates in the $[[7,1,3]]$ code. Logical $X$, $Z$
$H$ and $P$ gates can trivially be applied by using seven single qubit gates, fault-tolerantly.
Note that the application of seven $P$ gates results in the logical $\bar{P^{\dagger}}$ being applied
and vice-versa.
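The claim that $\bar{X}$ and $\bar{Z}$ preserve the stabilizer set can be checked with the same Pauli-string representation used earlier (a Python sketch, illustrative only): two Pauli strings commute precisely when they anti-commute on an even number of qubits.
\begin{verbatim}
stabilizers = ["IIIXXXX", "XIXIXIX", "IXXIIXX",
               "IIIZZZZ", "ZIZIZIZ", "IZZIIZZ"]

def commute(a, b):
    # Pauli strings commute iff they anti-commute on an even number of sites
    clashes = sum(p != "I" and q != "I" and p != q for p, q in zip(a, b))
    return clashes % 2 == 0

for logical in ("XXXXXXX", "ZZZZZZZ"):          # logical X-bar and Z-bar
    print(logical, all(commute(logical, K) for K in stabilizers))   # True, True
\end{verbatim}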
Two other useful gates which can be applied in this manner are the Hadamard and phase gates,
\begin{equation}
H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad \quad
P = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}.
\end{equation}
These gates are useful since when combined with the two-qubit CNOT gate, they can
generate a subgroup of all multi-qubit gates known as the Clifford
group (gates which map Pauli group operators back to the Pauli group).
Again, using the stabilizers of the $[[7,1,3]]$ code and the fact that for single qubits,
\begin{equation}
\begin{aligned}
HXH = Z, \quad \quad HZH = X, \\
PXP^{\dagger} = iXZ, \quad \quad PZP^{\dagger} = Z,
\end{aligned}
\end{equation}
a seven qubit bit-wise Hadamard gate will switch $X$ with $Z$ and therefore will simply flip
$\{K^1,K^2,K^3\}$ with $\{K^4,K^5,K^6\}$, and is a valid operation. The bit-wise application of the
$P$ gate will leave any $Z$ stabilizer invariant, but takes $X \rightarrow iXZ$.
This is still valid since each stabilizer contains a multiple of four non-identity $X$ operators,
so the factors of $i$ cancel. Hence the application of seven
bit-wise $P$ gates is valid for the $[[7,1,3]]$ code.
What do $\bar{H}$ and $\bar{P}$ do to the logical state? For a single qubit, the Hadamard gate
flips any $Z$ stabilized state to an $X$ stabilized state, i.e. $\ket{0,1} \leftrightarrow \ket{+,-}$. Looking at
the transformation of $K^7$, $\bar{H}K^7\bar{H} = X^{\otimes 7}$, the bit-wise
Hadamard gate will invoke a logical Hadamard operation. The single qubit $P$ gate leaves a
$Z$ stabilized state invariant, while an $X$ eigenstate becomes stabilized by $iXZ$.
Hence, $\bar{P}(X^{\otimes 7})\bar{P}^{\dagger} = (iXZ)^{\otimes 7} = -i(XZ)^{\otimes 7}$, and the bit-wise gate,
$\bar{P}$, represents a logical $P^{\dagger}$ gate on the data block. Similarly,
bit-wise $\bar{P^{\dagger}}$
gates enact a logical $P$ gate [Fig. <ref>].
Each of these fault-tolerant operations on a logically encoded block are commonly
referred to as transversal operations, as a logical operation is obtained by a
set of individual operations acting transversally on the physical qubits.
§.§ Two-qubit gate.
A two-qubit logical CNOT operation can also be applied in the same
transversal way. For un-encoded qubits,
a CNOT operation performs the following mapping on the two qubit stabilizer set,
\begin{equation}
\begin{aligned}
&X\otimes I \rightarrow X\otimes X, \\
&I\otimes Z \rightarrow Z\otimes Z, \\
&Z\otimes I \rightarrow Z\otimes I, \\
&I\otimes X \rightarrow I\otimes X.
\end{aligned}
\label{eq:CNOTtrans}
\end{equation}
Here the first operator corresponds to the control qubit and the second to the target.
Now consider the bit-wise application of seven CNOT gates between logically encoded blocks of
data [Fig. <ref>]. First the stabilizer set must remain invariant, i.e,
\begin{equation}
\mathcal{G} = \{K^{i}\otimes K^{j}\} \rightarrow \{K^{i}\otimes K^{j}\} \; \forall \; (i,j).
\end{equation}
Table <ref> details the transformation for all the
stabilizers under seven bit-wise CNOT gates,
demonstrating that this operation is valid on the $[[7,1,3]]$ code. The transformations in
Eq. <ref> are trivially extended to the logical space, showing that seven
bit-wise CNOT gates invoke a logical CNOT operation.
\begin{equation}
\begin{aligned}
&\bar{X}\otimes I \rightarrow \bar{X}\otimes \bar{X}, \\
&I\otimes \bar{Z} \rightarrow \bar{Z}\otimes \bar{Z}, \\
&\bar{Z}\otimes I \rightarrow \bar{Z}\otimes I, \\
&I\otimes \bar{X} \rightarrow I\otimes \bar{X}.
\end{aligned}
\label{eq:CNOTtrans2}
\end{equation}
$K^i \otimes K^j$ $K^1$ $K^2$ $K^3$ $K^4$ $K^5$ $K^6$
$K^1$ $K^1\otimes I$ $K^1\otimes K^1K^2$
$K^1\otimes K^1K^3$ $K^1K^4 \otimes K^1K^4$
$K^1K^5\otimes K^1K^5$ $K^1K^6\otimes K^1K^6$
$K^2$ $K^2\otimes K^1K^2$ $K^2\otimes I$ $K^2 \otimes K^2K^3$
$K^2K^4\otimes K^2K^4$ $K^2K^5\otimes K^2K^5$ $K^2K^6\otimes K^2K^6$
$K^3$ $K^3\otimes K^3K^1$ $K^3\otimes K^3K^2$ $K^3\otimes I$
$K^3K^4\otimes K^3K^4$ $K^3K^5\otimes K^3K^5$ $K^3K^6\otimes K^3K^6$
$K^4$ $K^4\otimes K^1$ $K^4\otimes K^2$ $K^4\otimes K^3$
$I\otimes K^4$ $K^4K^5\otimes K^5$ $K^4K^6\otimes K^6$
$K^5$ $K^5\otimes K^1$ $K^5\otimes K^2$ $K^5\otimes K^3$
$K^5K^4\otimes K^4$ $I\otimes K^5$ $K^5K^6\otimes K^6$
$K^6$ $K^6\otimes K^1$ $K^6\otimes K^2$ $K^6\otimes K^3$
$K^6K^4\otimes K^4$ $K^6K^5\otimes K^5$ $I\otimes K^6$
Transformations of the $[[7,1,3]]$ stabilizer set under the gate operation
$U=$CNOT$^{\otimes 7}$,
where $\mathcal{G} \rightarrow U^{\dagger}\mathcal{G}U$.
Note that the transformation does not take any stabilizer outside the group generated by
$K^i \otimes K^j\; (i,j)\in [1,..,6]$,
therefore $U=$CNOT$^{\otimes 7}$ represents a valid operation on the codespace.
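A couple of the table entries can be reproduced with a short symbolic sketch (Python, illustrative only; it handles the CSS case where each string is purely $X$-type or purely $Z$-type): under transversal CNOT, the $X$ part of a control-block stabilizer is copied onto the target block, while the $Z$ part of a target-block stabilizer is copied back onto the control block.
\begin{verbatim}
def mult(a, b):
    # product of two same-type Pauli strings (both X-type or both Z-type)
    return "".join("I" if x == y else (x if y == "I" else y)
                   for x, y in zip(a, b))

def cnot_x_pair(control, target):
    # X-type pair: X on the control block copies onto the target block
    return control, mult(target, control)

def cnot_z_pair(control, target):
    # Z-type pair: Z on the target block copies back onto the control block
    return mult(control, target), target

K1, K2 = "IIIXXXX", "XIXIXIX"
K4, K5 = "IIIZZZZ", "ZIZIZIZ"

print(cnot_x_pair(K1, K2))   # (K1, K1*K2), matching the table entry for K1, K2
print(cnot_z_pair(K4, K5))   # (K4*K5, K5), matching the table entry for K4, K5
\end{verbatim}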
The issue of Fault-tolerance with these logical operations should be clear. The $\bar{X}$,$\bar{Z}$,
$\bar{H}$ and $\bar{P}$ gates are trivially fault-tolerant since the logical operation is performed
through seven bit-wise single qubit gates. The logical CNOT is also fault-tolerant since each two-qubit
gate only operates between counterpart qubits in each logical block. Hence if any gate is inaccurate,
then at most a single error will be introduced in each block.
Bit-wise application of a CNOT gate between two logical qubits. Since each CNOT
only couples corresponding qubits in each block, this operation is inherently fault-tolerant.
In contrast to the [[7,1,3]] code, let us also take a quick look at the [[5,1,3]] code. As mentioned in section <ref> the [[5,1,3]] code is a non-CSS
code, meaning that the Clifford group of gates cannot be fully implemented in a transversal manner.
To see this clearly we can examine how the
stabilizer group for the code transforms under a transversal Hadamard operation,
\begin{equation}
\begin{pmatrix}
X & Z & Z & X & I \\
I & X & Z & Z & X \\
X & I & X & Z & Z \\
Z & X & I & X & Z \end{pmatrix}
\quad \longrightarrow \quad
\begin{pmatrix}
Z & X& X & Z & I \\
I & Z & X & X & Z \\
Z & I & Z & X & X \\
X & Z & I & Z & X \end{pmatrix}
\end{equation}
The stabilizer group is not preserved under this transformation; therefore the transversal Hadamard operation is not valid for the [[5,1,3]] code. One thing to briefly note
is that there is a method for performing logical Hadamard and phase gates on the [[5,1,3]] code [50]; however, it essentially involves performing a valid, transversal,
three-qubit gate and then measuring out two of the logical ancillae.
While these gates are useful for operating on quantum data, they do not represent a universal set
for quantum computation. In fact it has been shown that by using the stabilizer formalism, these
operations can be efficiently simulated on a classical device [51, 3]. In order to achieve universality,
one of the following gates is generally added to the available set,
\begin{equation}
T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix},
\end{equation}
or the Toffoli gate [102]. However, neither of these two gates is a member of the Clifford group, and
applying either in the same transversal way as the other gates will transform the stabilizers
out of the Pauli group and consequently does
not represent a valid operation. Circuits implementing these two gates in a fault-tolerant
manner have been developed [76, 46, 93, 89],
but at this stage the circuits are complicated and resource intensive.
This has practical implications for encoded operations. If universality is achieved by adding the
$T$ gate to the gate set, arbitrary logical single qubit rotations must be approximated by
long gate sequences (utilizing the Solovay-Kitaev theorem [59, 32]),
and these sequences often require many $T$ gates [41].
Finding more efficient methods to achieve universality on encoded data is therefore still an active area
of research.
§ FAULT-TOLERANT CIRCUIT DESIGN FOR LOGICAL STATE PREPARATION
Section <ref>
introduced the basic rules for fault-tolerant circuit design and how these rules lead
to the threshold theorem for concatenated error correction. However, what does a full fault-tolerant
quantum circuit look like? Here, we introduce a full fault-tolerant circuit to prepare the $[[7,1,3]]$ logical $\ket{0}$ state. As the
$[[7,1,3]]$ code is a single error correcting code, we use the one-to-one definition of Fault-tolerance and
therefore only need to consider the propagation of a single error during the
preparation (any more than one error during correction represents a higher order effect and is ignored).
As described in Section <ref>,
logical state preparation can be done by initializing an appropriate number
of physical qubits and measuring each of the $X$ stabilizers that describe the code. Therefore,
a circuit which allows the measurement of a Hermitian operator in a fault-tolerant manner needs to be
constructed. The general structure of the
circuit used was first developed by Shor [91], however it should be noted that several more recent
methods for fault-tolerant state preparation and correction now exist [98, 101, 25]
which are more efficient than Shor's original method.
The circuits shown in Fig. <ref>a and <ref>b,
which measure the stabilizer $K^1 = IIIXXXX$ are
not fault-tolerant, since a single ancilla is used to control each of the four CNOT gates. Instead,
four ancilla qubits are used
which are prepared in the state $\ket{\mathcal{A}} = (\ket{0000}+\ket{1111})/\sqrt{2}$.
This can be done by initializing four qubits in the $\ket{0}$ state and applying a Hadamard then a
sequence of CNOT gates. Each of these four ancilla are used to control a separate CNOT gate, after
which the ancilla state is decoded and measured.
Three circuits which measure the stabilizer $K^1$. Fig a) represents a
generic operator measurement where a multi-qubit controlled gate is available. Fig. b)
decomposes this into single- and two-qubit gates, but in a non-fault-tolerant manner. Fig. c)
introduces four ancilla such that each CNOT is controlled via a separate qubit. This ensures that an $X$ error on any single ancilla qubit propagates to at most one qubit in the data block.
By ensuring that each CNOT is controlled via a separate ancilla, any $X$ error will only propagate to a single
qubit in the data block. However, during the preparation of the ancilla state there is the
possibility that a single $X$ error can propagate to multiple ancilla, which are then fed forward into the
data block. In order to combat this, the ancilla block needs to be verified against possible $X$ errors.
Tracking through all the possible locations where a single $X$ error can occur during
ancilla preparation leads to the following unique states.
\begin{equation}
\begin{aligned}
&\ket{\mathcal{A}}_1 = \frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111}),\\
&\ket{\mathcal{A}}_2 = \frac{1}{\sqrt{2}}(\ket{0001} + \ket{1110}),\\
&\ket{\mathcal{A}}_3 = \frac{1}{\sqrt{2}}(\ket{0011} + \ket{1100}),\\
&\ket{\mathcal{A}}_4 = \frac{1}{\sqrt{2}}(\ket{0111} + \ket{1000}),\\
&\ket{\mathcal{A}}_5 = \frac{1}{\sqrt{2}}(\ket{0100} + \ket{1011}).\\
\end{aligned}
\end{equation}
Of these possibilities, the states $\ket{\mathcal{A}}_2$, $\ket{\mathcal{A}}_3$ and $\ket{\mathcal{A}}_4$ have odd parity between the first and fourth qubits,
while $\ket{\mathcal{A}}_5$ differs from the clean state by only a single flipped qubit and so feeds at most one error into the data. Hence, to
verify the ancilla state, a fifth ancilla is added, initialized and used to perform a parity check between the first and fourth qubits of the ancilla block. This
fifth ancilla is then measured. If the result is $\ket{0}$, the ancilla block can safely be coupled to the data.
If the result is $\ket{1}$, then either a single error has occurred during the ancilla preparation or
on the verification qubit itself. In either case, the entire ancilla block is re-initialized and prepared
again. This is continued until the verification qubit is measured to be $\ket{0}$ [Fig. <ref>].
Circuit required to measure the stabilizer $K^1$, fault-tolerantly. A four qubit GHZ
state is used as ancilla with the state requiring verification against multiple $X$ errors. After the
state has passed verification it is coupled to the data block and a syndrome is extracted.
Circuit required to prepare the $[[7,1,3]]$ logical $\ket{0}$ state fault-tolerantly.
Each of the $X$ stabilizers are sequentially measured using the circuit in Fig. <ref>. To
maintain Fault-tolerance, each stabilizer is measured 2-3 times with a majority vote taken.
The re-preparation of the ancilla block protects against $X$ errors, which can propagate forward through the
CNOT gates. $Z$ errors, on the other hand, propagate in the opposite direction. Any $Z$ error
which occurs in the ancilla block will propagate straight through to the final measurement. This results in the
measurement not corresponding to the eigenstate the data is projected to and can possibly result in
mis-correction once all stabilizers have been measured. To protect against this, each stabilizer is
measured 2-3 times and a majority vote of the measurement results taken. As any additional error represents
a second order process, if the first or second measurement has been corrupted by an induced $Z$ error, then
the third measurement will only contain additional errors if
a higher order error process has occurred. Therefore, we are free to ignore this
possibility and assume that the third measurement is error free.
The full circuit for $[[7,1,3]]$ state preparation is shown in Fig. <ref>, where each stabilizer is
measured 2-3 times. The total circuit requires a minimum of 12 qubits (seven data qubits and a 5-qubit ancilla block).
As you can see, the circuit constructions for full fault-tolerant state preparation (and error correction) are not simple
circuits. However, they are easy to design in generic ways when employing stabilizer coding.
§ LOSS PROTECTION
So far we have focused the discussion on correction techniques which assume that error
processes preserve the qubit structure of the Hilbert space.
As we noted in section <ref>, the loss of physical qubits within the computer violates
this assumption and in general requires additional correction machinery beyond what we
have already discussed.
For the sake of completeness, this section examines some correction techniques for qubit loss.
Specifically, we detail one such scheme which was developed with single photon based
architectures in mind.
Protecting against qubit loss requires a different approach than other general forms of quantum errors such as environmental decoherence
or systematic control imperfections. The cumbersome aspect related to correcting qubit loss is detecting the presence of a qubit at the
physical level. The specific machinery that is required for loss detection is dependent on the underlying physical architecture, but the
basic principle is that the presence or absence of the physical qubit must be determined without measuring its actual quantum state.
Certain systems allow for loss detection in a more convenient way than others. Electronic spin qubits, for example, can employ single
electron transistors (SETs) to detect the presence or absence of the charge without performing a measurement on the spin degree of
freedom [34, 21, 4].
Optics, in contrast, requires more complicated non-demolition measurements [75, 57, 79, 74], because typical photonic measurement
is performed via photo-detectors, which have the disadvantage of physically destroying the photon.
Once the presence or absence of the physical qubit has been determined, a freshly initialized qubit can be injected to
replace the lost qubit, after which the standard error correcting procedure can correct for the error. A freshly initialized
qubit state, $\ket{0}$, can be represented as a projective collapse of a general qubit state, $\ket{\psi}$, as,
\begin{equation}
\ket{0} \propto \ket{\psi}+Z\ket{\psi}.
\end{equation}
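As a quick check of this relation (our addition), write the general state as $\ket{\psi}=\alpha\ket{0}+\beta\ket{1}$; then
\begin{equation}
\ket{\psi}+Z\ket{\psi} = (\alpha\ket{0}+\beta\ket{1}) + (\alpha\ket{0}-\beta\ket{1}) = 2\alpha\ket{0} \propto \ket{0},
\end{equation}
so, up to normalization, injecting a fresh $\ket{0}$ is indeed equivalent to this projective collapse of the lost qubit.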
If we consider this qubit as part of an encoded block, then the above corresponds to a 50% probability
of a phase flip on this qubit. A loss event that is corrected by non-demolition detection
and standard QEC therefore almost guarantees a correction event in the subsequent QEC cycle. Consequently, the probability of loss needs to be
comparable to the rate of standard errors, as the correction cycle run after a loss detection event will, with high probability, detect and correct the induced error.
Additionally, if a loss event is detected and the qubit replaced, the error detection code shown in
section <ref> becomes a single qubit correction code. This
is due to the fact that erasure errors have known locations. Consequently
error detection is sufficient to perform full correction, in contrast to
non-erasure errors where the location is unknown.
A second method for loss correction applies to systems that have high loss rates compared with systematic and environmental
errors, the most prevalent being optical systems. Due to the high mobility of single photons and their relative immunity to
environmental interactions, loss is a major error channel that generally dominates over other error sources. The use of error
detection and correction codes for photon loss is undesirable due to the need for non-demolition detection of the lost qubit. While techniques
for measuring the presence or absence of a photon without direct detection have been developed and implemented [79],
they require multiple ancilla photons and controlled interactions.
Ultimately it is more desirable to redesign the loss correction code such that it can be employed directly with photo-detection rather than
more complicated non-demolition techniques.
One such scheme was developed by Ralph, Hayes and Gilchrist in 2005 [84].
This scheme was a more efficient extension of an original Parity encoding
method developed by Knill, Laflamme and Milburn to protect against photon loss in their controlled-$\sigma_z$ gate [62]. The general
Parity encoding for a logical qubit is an $N$ photon GHZ state in the conjugate basis, i.e,
\begin{equation}
\begin{aligned}
&\ket{0}_L^N = \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N} + \ket{-}^{\otimes N}), \\
&\ket{1}_L^N = \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N} - \ket{-}^{\otimes N}),
\end{aligned}
\end{equation}
where $\ket{\pm} = (\ket{0}\pm \ket{1})/\sqrt{2}$.
The motivation for this type of encoding is that measuring any qubit in the $\ket{0,1}$ basis simply removes it from the state, reducing the
size of the encoded state by one qubit, i.e.,
\begin{equation}
\begin{aligned}
P_{0,N} \ket{0}_L^N &= (I_N + Z_N)\ket{0}_L^N \\
& = \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N-1} + \ket{-}^{\otimes N-1})\ket{0}_N = \ket{0}_L^{N-1}\ket{0}_N \\
P_{1,N} \ket{0}_L^N &= (I_N - Z_N)\ket{0}_L^N \\
&= \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N-1} - \ket{-}^{\otimes N-1})\ket{1}_N = \ket{1}_L^{N-1}\ket{1}_N
\end{aligned}
\label{eq:lossenc}
\end{equation}
where $P_{0,N}$ and $P_{1,N}$ are the projectors (up to normalization) corresponding to measurement in the $\ket{0,1}$ basis on the $N^{th}$ qubit. The
effect for the $\ket{1}_L$ state is similar.
Measuring the $N^{th}$ qubit in the $\ket{0}$ state simply removes it from the encoded state, reducing the logical zero
state by one, while measuring the $N^{th}$ qubit as $\ket{1}$ enacts a logical bit flip at the same time as reducing the size of the logical
state. However, since the measurement result is known, this encoded bit flip can be corrected for.
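The reduction in Eq. <ref> is easy to verify numerically. The following Python sketch (our addition) builds the parity-encoded states for $N=3$, projects the last qubit onto $\ket{0}$ or $\ket{1}$, and confirms that the result is the $N=2$ encoded state, with the logical value flipped exactly when the outcome is $\ket{1}$:
\begin{verbatim}
import numpy as np

plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)

def kron_all(vecs):
    out = np.array([1.0])
    for v in vecs:
        out = np.kron(out, v)
    return out

def logical(bit, n):
    """Parity-encoded logical state (|+>^n + (-1)^bit |->^n) / sqrt(2)."""
    return (kron_all([plus] * n) + (-1) ** bit * kron_all([minus] * n)) / np.sqrt(2)

def measure_last_qubit(state, n, outcome):
    """Project the n-th (last) qubit onto |outcome> and renormalise."""
    psi = state.reshape(2 ** (n - 1), 2)[:, outcome].copy()
    return psi / np.linalg.norm(psi), np.linalg.norm(psi) ** 2

for bit in (0, 1):
    for outcome in (0, 1):
        reduced, prob = measure_last_qubit(logical(bit, 3), 3, outcome)
        # outcome 0 leaves the logical value unchanged; outcome 1 flips it (a known, correctable flip)
        print(bit, outcome, np.allclose(reduced, logical(bit ^ outcome, 2)), round(prob, 2))
\end{verbatim}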
Rather than introducing the full scheme developed in [84], we just give the general idea of how such an encoding allows for
loss detection without non-demolition measurements. Photon loss in this model is assumed equivalent to measuring the
photon in the $\ket{0},\ket{1}$ basis, but not knowing the answer [Sec <ref>].
Our ignorance of the measurement result could lead to a logical
bit flip error on the encoded state, therefore we require the ability to protect against logical bit flip errors on the above states. As already shown,
the 3-qubit code allows us to achieve such correction. Therefore the final step in this scheme is encoding the above states into a redundancy code
(a generalized version of the 3-qubit code), where an arbitrary logical state, $\ket{\psi}_L$ is now given by,
\begin{equation}
\ket{\psi}_L = \alpha\ket{0}_1^N \ket{0}_2^N...\ket{0}_q^N + \beta \ket{1}_1^N \ket{1}_2^N ... \ket{1}_q^N
\end{equation}
where $\ket{0}^N,\ket{1}^N$ are the parity encoded states shown in Eq. <ref>
and the fully encoded state is $q$-blocks of these parity states.
This form of encoding protects against the loss of qubits by first encoding the system into a code structure that
allows for the removal of qubits without destroying the computational state and then protecting against logical errors that
are induced by loss events. In effect it maps errors that are uncorrectable by standard QEC to error channels that
are correctable, in this case qubit loss $\rightarrow$ qubit bit-flips.
This is common with pathological error channels. If a specific type of error violates the standard “qubit" assumption of
QEC, additional correction techniques are always required to map this type of error to a correctable form;
consequently, additional physical resources are usually needed.
§ SOME MODERN DEVELOPMENTS IN ERROR CORRECTION
Up until this stage we have restricted our discussion of error correction to the most basic principles and codes.
The ideas and methodologies we have detailed represent the introductory techniques that were developed when
error correction was first proposed. Readers who are only looking for a basic introduction to the field
can quite easily skip the remainder of this paper.
Providing a fair and encompassing review of the more modern and advanced error correction techniques that have been
developed is far outside our goal for this review. However, we would be remiss if we did not briefly examine some
of the more advanced error correction techniques that have been proposed for large scale quantum information processing.
For the remainder of this discussion we choose two closely related error correction techniques, subsystem coding and
topological coding, which have been receiving
significant attention in the fields of architecture design and large scale quantum information processing. While some readers
may disagree, we review these two modern error correction protocols because they
are currently two of the most useful correction techniques when discussing the physical construction of a quantum computer.
We again attempt to keep the discussion of these techniques light and provide specific examples when possible. However,
it should be stressed that these error correcting protocols are far more complicated than the basic codes shown earlier. Topological
error correction alone has, since its introduction, essentially become its own research topic within the broader error correction field.
Hence we encourage the interested reader to refer to the cited articles for a more rigorous and detailed treatment of these schemes.
§.§ Subsystem Codes
Quantum subsystem codes [10] are one of the newer and more flexible techniques for
implementing quantum error correction. The traditional stabilizer codes that we have reviewed are more formally identified as
subspace codes, where information is encoded in a relevant coding subspace of a larger multi-qubit system. In contrast,
subsystem coding identifies multiple subspaces of the multi-qubit system as equivalent for storing quantum information.
Specifically, multiple states are identified with the logical $\ket{0}_L$ and $\ket{1}_L$ states.
The primary benefit of utilizing subsystem codes is the general nature of their construction. Moving from smaller to larger error
correcting codes is conceptually straightforward, error correction circuits are much simpler to construct when encoding information
in multiple subsystems [2], and the generality of their construction introduces the ability to perform dynamical code switching in a
fault-tolerant manner [88]. This final property gives subsystem coding significant flexibility as the strength of error correction within a
quantum computer can be changed, fault-tolerantly, during operation of the device.
As with the other codes presented in this review, subsystem codes are stabilizer codes but now defined over a square lattice. The lattice dimensions
represent the $X$ and $Z$ error correction properties and the size of the lattice in either of these two dimensions dictates the total
number of errors the code can correct. In general, a $\mathcal{C}$($n_1$,$n_2$) subsystem code is defined over a
$n_1\times n_2$ square lattice which encodes one logical qubit into $n_1n_2$ physical qubits with the ability to correct
at least $\lfloor\frac{n_1-1}{2}\rfloor$ $Z$ errors and at least $\lfloor\frac{n_2-1}{2}\rfloor$ $X$ errors.
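As a small illustration of this convention (our addition, assuming exactly the counting rule just stated), the following snippet tabulates the guaranteed error-correcting power of a few lattice sizes:
\begin{verbatim}
def correctable_errors(n1, n2):
    """(number of Z errors, number of X errors) a C(n1, n2) subsystem code corrects, at least."""
    return (n1 - 1) // 2, (n2 - 1) // 2

for dims in [(3, 3), (5, 3)]:
    print(dims, correctable_errors(*dims))   # (3, 3) -> (1, 1); (5, 3) -> (2, 1)
\end{verbatim}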
Again, keeping with the spirit of this review, we instead focus on a specific example, the $\mathcal{C}$(3,3) subsystem code.
This code, which encodes one logical qubit into nine physical qubits, can correct one $X$ and one $Z$ error. In order to define
the code structure we begin with a $3\times 3$ lattice of qubits, where each qubit is identified with the vertices of the lattice
(note that this 2D structure represents the structure of the code, it does not imply that a physical array of qubits must be
arranged into a 2D lattice).
Stabilizer structure for the $\mathcal{C}$(3,3) code. Fig. a. gives two of the four stabilizers from the group $\mathcal{S}$. Fig. b.
illustrates one of the four encoded Pauli operators from each subsystem defined by the gauge group, $\mathcal{T}$. Fig. c. gives the
two logical operators from the group $\mathcal{L}$ which enact valid operations on the encoded qubit.
Fig. <ref> illustrates three sets of operators which are defined over the lattice. The first group, illustrated in Fig. <ref>a.
is the stabilizer group, $\mathcal{S}$, which is generated by the operators,
\begin{equation}
\begin{aligned}
\mathcal{S} = \langle X_{i,*} X_{i+1,*} ; Z_{*,j}Z_{*,j+1} \ | \ i \in \Z_{2} ; j \in \Z_{2} \rangle,
\end{aligned}
\label{stabilizers:bs}
\end{equation}
where, retaining the notation utilized in [2, 88], $U_{i,*}$ and $U_{*,j}$ represent an operator, $U$, acting on all qubits in a given row, $i$, or column, $j$,
respectively, and $\Z_2=\{1,2\}$. The second relevant subsystem is known as the gauge group [Fig. <ref>b.],
$\mathcal{T}$, and is described via the non-Abelian group generated by the pairwise operators
\begin{equation}
\begin{aligned}
\mathcal{T} = &\langle X_{i,j}X_{i+1,j} \ | \ i \in \Z_{2} ; j \in \Z_{3} \rangle, \\
&\langle Z_{i,j}Z_{i,j+1} \ | \ i \in \Z_{3} ; j \in \Z_{2} \rangle.
\end{aligned}
\end{equation}
The third relevant subsystem is the logical space [Fig. <ref>c], $\mathcal{L}$, which can be defined through the logical Pauli operators
\begin{equation}
\mathcal{L} = \langle Z_{*,1} ; X_{1,*} \rangle,
\end{equation}
which when combined with $\mathcal{S}$ form a non-Abelian group.
The stabilizer group, $\mathcal{S}$, defines all relevant code states, i.e. every valid code state is a $+1$ eigenstate of every operator in this set.
For the $\mathcal{C}$(3,3) code, there are a total of nine physical qubits and a total of four independent stabilizers in $\mathcal{S}$,
hence five degrees of freedom are left in the system, which can house $2^5$ states that are simultaneous eigenstates of
$\mathcal{S}$. This is where the gauge group, $\mathcal{T}$, becomes relevant. As the gauge group is non-Abelian, there
is no valid code state which is a simultaneous eigenstate of all operators in $\mathcal{T}$. However, on closer examination there
are a total of four encoded Pauli operations within $\mathcal{T}$. Fig. <ref>b. illustrates two such operators.
As all elements of $\mathcal{T}$ commute with all elements of $\mathcal{S}$, we can
identify each of these four sets of valid “logical" qubits as equivalent, i.e. we define $\{\ket{0}_L,\ket{1}_L\}$ pairs
which are eigenstates of $\mathcal{S}$ and an Abelian subgroup of $\mathcal{T}$ and then ignore exactly which
gauge state we are in (each of the four possible $\ket{0}_L$ states can be used to store a single logical qubit in
the $\ket{0}$ state, regardless of which particular $\ket{0}_L$ gauge state we are in).
Hence, each of these gauge states represent a subsystem of the code, with each subsystem logically equivalent.
The final group we consider is the logical group $\mathcal{L}$. This is the set of two Pauli operators which
enact a logical $X$ or $Z$ gate on the encoded qubit regardless of the gauge choice and consequently represent true
logical operations to our encoded space.
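To make these commutation relations concrete, the following Python sketch (our addition) encodes each operator of the $\mathcal{C}$(3,3) code as a Pauli string on the $3\times 3$ grid, using 0-based indices rather than the 1-based $\Z_2$, $\Z_3$ of the text, and verifies that $\mathcal{S}$ is Abelian, that $\mathcal{T}$ and $\mathcal{L}$ commute with $\mathcal{S}$, and that the two logical operators anticommute with each other:
\begin{verbatim}
from itertools import product

# Operators here contain only X or Z, so two Pauli strings anticommute on a site exactly
# when one has X and the other has Z there; they commute overall iff such sites are even.

def op(pauli, sites, n=3):
    grid = {(i, j): 'I' for i, j in product(range(n), repeat=2)}
    for s in sites:
        grid[s] = pauli
    return grid

def commute(a, b):
    clashes = sum(1 for s in a if {a[s], b[s]} == {'X', 'Z'})
    return clashes % 2 == 0

rows = cols = range(3)
stabilizers = [op('X', [(i, j) for j in cols] + [(i + 1, j) for j in cols]) for i in (0, 1)] \
            + [op('Z', [(i, j) for i in rows] + [(i, j + 1) for i in rows]) for j in (0, 1)]
gauges = [op('X', [(i, j), (i + 1, j)]) for i in (0, 1) for j in cols] \
       + [op('Z', [(i, j), (i, j + 1)]) for i in rows for j in (0, 1)]
logical_X, logical_Z = op('X', [(0, j) for j in cols]), op('Z', [(i, 0) for i in rows])

print(all(commute(s, t) for s in stabilizers for t in stabilizers))              # S is Abelian
print(all(commute(s, g) for s in stabilizers for g in gauges))                   # T commutes with S
print(all(commute(l, s) for l in (logical_X, logical_Z) for s in stabilizers))   # L commutes with S
print(commute(logical_X, logical_Z))                                             # False: logical X, Z anticommute
\end{verbatim}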
In a more formal sense, the definition of these three group structures allows
us to decompose the Hilbert space of the system.
If we let $\mathcal{H}$ denote the Hilbert space of the physical system, $\mathcal{S}$ forms an Abelian group and hence can act as a stabilizer set denoting subspaces of $\mathcal{H}$. If we describe each of these subspaces by the binary vector, $\vec{e}$, formed from the eigenvalues of the stabilizers, $\mathcal{S}$, then each subspace splits into a tensor product structure
\begin{equation}
\mathcal{H} = \bigoplus_{\vec{e}} \mathcal{H}_{\mathcal{T}} \otimes \mathcal{H}_{\mathcal{L}},
\end{equation}
where elements of $\mathcal{T}$ act only on the subsystem $\mathcal{H}_{\mathcal{T}}$ and the operators $\mathcal{L}$ act only on the subsystem $\mathcal{H}_{\mathcal{L}}$. Therefore, in the context of storing qubit information, a logical qubit is encoded into the two
dimensional subsystem $\mathcal{H}_{\mathcal{L}}$. As the system is already stabilized by operators in $\mathcal{S}$ and the operators
in $\mathcal{T}$ act only on the space $\mathcal{H}_{\mathcal{T}}$, qubit information is only altered when operators in the group
$\mathcal{L}$ act on the system.
This formal definition of how subsystem coding works may be more complicated than the standard stabilizer codes shown earlier, but this
slightly more complicated coding structure has significant benefits when we consider how error correction is performed. In general, to perform
error correction, each of the stabilizers of the codespace must be checked to determine which eigenvalue changes have occurred due to
errors. In the case of subsystem codes this would appear to be problematic. The stabilizer group, $\mathcal{S}$, consists of qubit operators
that scale with the size of the code. In our specific example, each of the $X$ and $Z$ stabilizers is six-dimensional (and in general, for
an $n_1\times n_2$ lattice, the $X$ stabilizers are $2n_1$ dimensional and the $Z$ stabilizers are $2n_2$ dimensional). If
techniques such as Shor's method [Section <ref>] were used, we would need to prepare a large ancilla state to perform
fault-tolerant correction, which would also scale linearly with the size of the code; this is clearly undesirable. However, due to the
gauge structure of subsystem codes we are able to decompose the error correction procedure [2].
Each of the stabilizers in $\mathcal{S}$ are simply the product of certain elements from $\mathcal{T}$, for example,
\begin{equation}
\begin{aligned}
&X_{1,1}X_{1,2}X_{1,3}X_{2,1}X_{2,2}X_{2,3} \in \mathcal{S} \\
= &( X_{1,1}X_{2,1} ) . (X_{1,2}X_{2,2}).(X_{1,3}X_{2,3}) \in \mathcal{T}.
\end{aligned}
\end{equation}
Therefore if we check the eigenvalues of the three 2-dimensional operators from $\mathcal{T}$ we are able to calculate the
eigenvalue of the 6-dimensional stabilizer. This decomposition of the stabilizer set for the code can only occur since the
decomposition is in terms of operators from $\mathcal{T}$ which, when measured, have no effect on the logical information encoded within the system.
In fact, when error correction is performed the gauge state of the system will almost always change based on the order in which
the eigenvalues of the gauge operators are checked.
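A quick classical check of this decomposition (our addition): for any pattern of $Z$ errors on the grid, the parity recorded by the weight-six $X$ stabilizer on rows one and two equals the product of the parities recorded by its three $XX$ gauge factors, simply because the three pairs partition the stabilizer's support:
\begin{verbatim}
import random

stabilizer_support = [(i, j) for i in (0, 1) for j in range(3)]   # X stabilizer on rows 0 and 1
gauge_pairs = [[(0, j), (1, j)] for j in range(3)]                 # its three XX gauge factors

for _ in range(5):
    # Random pattern of Z errors on the 3 x 3 grid (Z errors are what X operators detect).
    z_errors = {(i, j) for i in range(3) for j in range(3) if random.random() < 0.3}
    stab_flip  = sum(s in z_errors for s in stabilizer_support) % 2
    gauge_flip = sum(sum(s in z_errors for s in pair) % 2 for pair in gauge_pairs) % 2
    print(stab_flip == gauge_flip)   # always True: pair parities multiply to the stabilizer parity
\end{verbatim}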
This exploitation of the gauge properties of subsystem coding is extremely beneficial for fault-tolerant designs for correction circuits.
As the stabilizer operators can now be decomposed into multiple 2-dimensional operators, fault-tolerant circuits for error correction
do not require any encoded ancilla states. Furthermore, if we decide to scale the code-space to correct more errors (increasing the
lattice size representing the code) we do not require measuring operators with higher dimensionality. Fig. <ref>, taken from Ref. [2],
illustrates the fault-tolerant circuit constructions for Bacon-Shor subsystem codes.
(From Ref. [2]) Circuits for measuring the gauge operators and hence performing error correction for subsystem
codes. Fig. a. measures, fault-tolerantly, the operator $X_{j,k}X_{j+1,k}$ with only one ancilla. Fig. b. measures $Z_{k,j}Z_{k,j+1}$. The results of these
two qubit parity checks can be used to calculate the parity of the higher dimensional stabilizer operators of the code.
As each ancilla qubit is only coupled to two data qubits, no further circuit constructions are required to ensure fault-tolerance.
The classical results from these 2-dimensional parity checks are then combined to calculate the parity of the higher dimensional
stabilizer of the subsystem code.
A second benefit to utilizing subsystem codes is the ability to construct fault-tolerant circuits to perform dynamical code switching.
When using more traditional error correction codes it is difficult, if not impossible, to fault-tolerantly switch between codes with different error
correcting properties. The Steane [[7,1,3]] code is a single error correcting code for both $X$ and $Z$ channels.
If during the operation of a quantum computer, the user wished to increase the error correcting power of their code to two errors in the
$X$ and $Z$ channel they would first decode the quantum data and then re-encode with the higher distance code. This is clearly a
non fault-tolerant procedure, as any error occurring on the decoded information will cause catastrophic failure. Due to the general
lattice structure of subsystem codes, switching to and from higher order codes can be achieved without decoding and re-encoding
information, allowing the user of the computer to dynamically adjust the error correction during the computation.
Figs. <ref>, <ref> and <ref>, taken from Ref. [88], illustrate circuits to perform fault-tolerant switching between the $\mathcal{C}$(3,3) and $\mathcal{C}$(5,3) subsystem
codes. As noted before, the $\mathcal{C}$(3,3) code corrects a single $X$ and a single $Z$ error, while the $\mathcal{C}$(5,3) code corrects a single $X$ and two $Z$
errors. We will not detail why these circuits successfully implement fault-tolerant code switching; instead we encourage readers to
refer to Ref. [88] for further details.
(From Ref. [88]). Circuit to convert from the $\mathcal{C}$($3$,$3$) subsystem code to the $\mathcal{C}$($5$,$3$)
code for one column, $j$, of the lattice structure of the code.
(From Ref. [88]).
Downconversion from the $\mathcal{C}$($5$,$3$) code to the $\mathcal{C}$($3$,$3$) code.
$\mathcal{P}$ is the gate sequence in Fig. <ref>.
(From Ref. [88]) $X$ parity measurement under $\mathcal{C}$($5$,$3$) for one row, $i$, of the lattice structure.
§.§ Topological Codes
A similar coding technique to the Bacon-Shor subsystem codes is the idea of topological error correction, first introduced with the Toric code of Kitaev in 1997 [59]. Topological coding is similar to subsystem codes in that the code structure is defined on a lattice (which, in general, can
be of dimension $> 2$) and the scaling
of the code to correct more errors is conceptually straightforward. However, in topological coding schemes the protection afforded to
logical information relies on the unlikely application of error chains which define non-trivial topological paths over the code surface.
Topological error correction is a complicated area of quantum error correction and fault-tolerance and any attempt to fairly summarize the
field is not possible within this review. In brief, there are two ways of approaching the problem. The first is simply to treat
topological codes as a class of stabilizer codes over a qubit system. This approach is more amenable to current quantum information technologies
and is being adapted to methods in cluster state computing [85, 39], optics [26, 31], ion-traps [94] and superconducting systems [56].
The second approach is to construct a physical Hamiltonian model based on the structure of the topological code. This leads to the more
complicated field of anyonic quantum computation [59]. By translating a coding structure into a physical Hamiltonian system, excitations from the
ground state of this Hamiltonian exhibit natural robustness against local errors (since the symmetries of the physical Hamiltonian reflect the
coding structure imposed). Specifically, quasi-particles arising from a Hamiltonian approach to quantum codes exhibit fractional quantum statistics
(they acquire fractional phase shifts when their positions are exchanged with other anyons, in contrast to bosons or fermions, which
always acquire $\pm 1$ phase shifts). The unique properties of anyons therefore allow for natural, robust error protection, and anyon/anyon
interactions are performed by braiding anyons around each other. However, the major issue with this model is that it relies on quasi-particle
excitations that do not, in general, arise naturally. Although certain physical systems have been shown to exhibit anyonic excitations, most notably in the
fractional quantum Hall effect [78], the ability to first manufacture a reliable anyonic system and then reliably design and construct a
large scale computing system based on anyons is a daunting task.
As there are several extremely good discussions of both anyonic [78] and non-anyonic topological computing [30, 42, 39] we will not
review any of the anyonic methods for topological computing and simply provide
a brief example of one topological coding scheme, namely the surface code [13, 30, 42].
The surface code for quantum error correction is an extremely good error
correction model for several reasons. As it is defined over a 2-dimensional lattice of qubits it can be implemented on architectures that
only allow for the coupling of nearest neighbor qubits (rather than the arbitrary long distance coupling of qubits in separate regions of the
computer). The surface code also exhibits one of the highest fault-tolerant thresholds of any quantum error correction scheme; recent simulations
estimate a threshold approaching 1% [85].
Finally, the surface code can naturally correct problematic error channels such as qubit loss and leakage.
The surface code, as with subsystem codes, is a stabilizer code defined over a 2-dimensional qubit lattice, as Fig. <ref> illustrates. We
now identify each edge of the 2D lattice with a physical qubit. The stabilizer set consists of two types of operators: the first is the
set of $Z^{\otimes 4}$ operators which circle every lattice face (or plaquette). The second is the set of $X^{\otimes 4}$ operators which
encircle every vertex of the lattice. The stabilizer set is consequently generated by the operators,
\begin{equation}
A_p = \bigotimes_{j \in b(p)} Z_j, \quad B_v = \bigotimes_{j \in s(v)} X_j
\end{equation}
where $b(p)$ is the four qubits surrounding a plaquette and $s(v)$ is the four qubits surrounding each vertex in the lattice and
identity operators on the other qubits are implied. First note that
all of these operators commute as any plaquette and vertex stabilizer will share either zero or two qubits. If the lattice is not periodic in either dimension, this stabilizer
set completely specifies one unique state, i.e. for an $N\times N$ lattice there are $2N^2$ qubits and $2N^2$ stabilizer generators.
Hence this stabilizer set defines a unique multi-qubit entangled state which is generally referred to as a “clean" surface.
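The commutation claim and the counting can be checked with a few lines of Python (our addition; we use periodic wrap-around purely to simplify the edge indexing, whereas the text considers a non-periodic lattice, but the zero-or-two overlap argument in the bulk is the same):
\begin{verbatim}
from itertools import product

N = 4  # N x N lattice of plaquettes; periodic boundaries assumed only to simplify indexing

def plaquette(i, j):
    """Edge qubits on the boundary of plaquette (i, j): the support of A_p."""
    return {('h', i, j), ('h', (i + 1) % N, j), ('v', i, j), ('v', i, (j + 1) % N)}

def star(i, j):
    """Edge qubits touching vertex (i, j): the support of B_v."""
    return {('h', i, j), ('h', i, (j - 1) % N), ('v', i, j), ('v', (i - 1) % N, j)}

overlaps = {len(plaquette(i, j) & star(k, l)) for i, j, k, l in product(range(N), repeat=4)}
print(overlaps)                   # {0, 2}: every A_p commutes with every B_v
print(2 * N * N, N * N + N * N)   # 2N^2 edge qubits and 2N^2 stabilizer generators
\end{verbatim}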
General structure of the surface code. The edges of the lattice correspond to
physical qubits. The four qubits surrounding each face (or plaquette) are +1 eigenstates
of the operators $A_p$ while the four qubits surrounding each vertex are +1 eigenstates of the
operators $B_v$. If all eigenstate conditions are met, a unique multi-qubit state is defined
as a “clean" surface.
The surface code embeds two self-similar lattices that are interlaced, generally
referred to as the primal and dual lattices. Fig. a. illustrates one lattice where plaquettes are
defined by the stabilizers $A_p$. Fig. b. illustrates the dual structure where plaquettes
are now defined by the stabilizer set $B_v$. The two lattice structures are interlaced
and are related by shifting along the diagonal by half a lattice cell. Each of these
equivalent lattices is independently responsible for $X$ and $Z$ error correction, respectively.
Examples of error chains and their effect on the eigenvalues for each plaquette stabilizer. a). A single $X$ error causes the parity of two
adjacent cells to flip. b) and c). Longer chains of errors only cause the end cells to flip eigenvalue as each intermediate cell will have two
$X$ errors and hence the eigenvalue for the stabilizer will flip twice.
Detailing exactly how this surface can be utilized to perform robust quantum computation is far outside the scope of this review and there
are several papers to which such a discussion can be referred [83, 85, 42, 39].
Instead, we can quite adequately show how robust error correction
is possible by simply examining how a “clean" surface can be maintained in the presence of errors.
The $X$ and $Z$ stabilizer sets, $A_p$ and $B_v$, define two equivalent 2D lattices which are interlaced,
as Fig. <ref> illustrates.
If the total 2D lattice is shifted along the diagonal by half a cell then the operators $B_v$ are now arranged around a plaquette and
the operators $A_p$ are arranged around a lattice vertex. Since protection against $X$ errors is achieved by detecting eigenvalue flips
of $Z$ stabilizers and vice versa, these two interlaced lattices correspond to error correction against $X$ and $Z$ errors respectively.
Therefore we can quite happily restrict our discussion to one possible error channel, for example correcting $X$ errors (since the correction for
$Z$ errors proceeds identically when considering the stabilizers $B_v$ instead of $A_p$).
Fig. <ref>a. illustrates the effect that a single $X$ error has on a pair of adjacent plaquettes.
Since $X$ and $Z$ anti-commute, a single
bit-flip error on one qubit in the surface will flip the eigenvalue of the $Z^{\otimes 4}$ stabilizers on the two plaquettes adjacent to the
respective qubit. As single qubit errors act to flip the eigenvalue of adjacent plaquette stabilizers we examine how chains of errors
affect the surface. Figs. <ref>b. and <ref>c. examine two longer chains of errors.
As can be seen, if multiple errors occur, only the
eigenvalues of the stabilizers associated with the ends of the error chains flip. Each plaquette along the chain will always have two
$X$ errors occurring on different boundaries and consequently the eigenvalue of the $Z^{\otimes 4}$ stabilizer around these plaquettes will
flip twice.
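The endpoint behaviour of error chains is easy to reproduce classically. In the sketch below (our addition, using the same edge labelling as the previous sketch), we place $X$ errors along a chain of edges and record which plaquettes see an odd number of errored boundary edges; only the two plaquettes at the ends of the chain flip.
\begin{verbatim}
from itertools import product

N = 6  # periodic N x N lattice, same edge labelling as before

def plaquette(i, j):
    return {('h', i, j), ('h', (i + 1) % N, j), ('v', i, j), ('v', i, (j + 1) % N)}

def flipped_plaquettes(x_errors):
    """Plaquettes whose Z stabilizer eigenvalue flips: odd overlap with the error set."""
    return {(i, j) for i, j in product(range(N), repeat=2)
            if len(plaquette(i, j) & x_errors) % 2 == 1}

chain = {('v', 2, j) for j in range(1, 5)}   # X errors on four adjacent vertical edges
print(sorted(flipped_plaquettes(chain)))     # [(2, 0), (2, 4)]: only the chain endpoints flip
\end{verbatim}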
If we now consider an additional ancilla qubit which sits in the center of each plaquette and can couple to the four surrounding qubits, we can
check the parity by running the simple parity circuit shown in Fig <ref>. If we assume that we initially prepare a
perfect “clean" surface, we then, at some later time, check the parity of every plaquette over the surface.
a). Lattice structure to check the parity of a surface plaquette. An additional ancilla qubit is coupled to the four neighboring qubits that
comprise each plaquette. b). Quantum circuit to check the parity of the $Z^{\otimes 4}$ stabilizer for each surface plaquette.
If $X$ errors have occurred on a certain subset
of qubits, the parity associated with the endpoints of error chains will have flipped. We now take this 2-dimensional classical data
set of eigenvalue flips and pair the flipped plaquettes up into the most likely set of error chains. Since it is assumed that the probability of error on any individual
qubit is low, the most likely set of errors which reflects the eigenvalue changes observed is the minimum weight set (i.e. connect up all
plaquettes where eigenvalues have changed into pairs such that the total length of all connections is minimized). This classical
data processing is quite common in computer science and minimum weight matching algorithms such as the Blossom package [22, 67] have
a running time polynomial in the total number of data points in the classical set.
Once this minimal matching is achieved, we can identify the likely error chains corresponding to the end points and correction can be applied.
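As a toy illustration of this matching step (our addition), the sketch below pairs up flipped plaquettes using Manhattan distances and the general-purpose matching routine in the networkx library; a production decoder would instead use a dedicated minimum-weight matching implementation such as Blossom V [22, 67] and would also allow matching to the lattice boundaries.
\begin{verbatim}
import networkx as nx

def pair_defects(defects):
    """Pair flipped plaquettes so the summed Manhattan distance is minimised
    (minimum-weight perfect matching via max_weight_matching on negated weights)."""
    g = nx.Graph()
    for a in range(len(defects)):
        for b in range(a + 1, len(defects)):
            d = abs(defects[a][0] - defects[b][0]) + abs(defects[a][1] - defects[b][1])
            g.add_edge(a, b, weight=-d)
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return [(defects[a], defects[b]) for a, b in matching]

print(pair_defects([(2, 0), (2, 4), (5, 5), (6, 6)]))
# e.g. [((2, 0), (2, 4)), ((5, 5), (6, 6))]: defects paired into short candidate error chains
\end{verbatim}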
The failure of this code is therefore dictated by error chains that cannot be detected through changes in plaquette eigenvalues.
Fig. <ref> shows an
error chain that connects one edge of the surface lattice to another. In this case every plaquette along the chain has two associated qubits that
have experienced a bit flip and no eigenvalues in the surface have changed. Since we have assumed that we are only wishing to
maintain a “clean" surface, these error chains have no effect, but when one considers the case of storing information in the lattice, these
types of error chains correspond to logical errors on the qubit [42]. Hence undetectable errors are chains which connect boundaries of the
surface to other boundaries (in the case of information processing, qubits are artificial boundaries within the larger lattice surface).
Example of a chain of errors which do not cause any eigenvalue changes in the surface. If errors connect boundaries to other
boundaries, the error correction protocol will not detect them. In the case of a “clean" surface, these error chains are invariants of the
surface code. When computation is considered, logical qubit information is associated with artificial boundaries within the surface. Hence if error chains connect
these information qubits to other boundaries, logical errors occur.
It should be stressed that this is a simplified description of the full protocol, but it does encapsulate the basic idea. The important
thing to realize is that the failure rate of the error correction procedure is suppressed exponentially with the size of the lattice. In order for a series of
single qubit errors to be undetectable, they must form a chain connecting one boundary of the surface with another. If we consider an error
model where each qubit experiences a bit flip, independently, with probability $p$, then an error chain of weight one occurs with probability $p$,
error chains of weight two occur with probability $O(p^2)$, chains of weight three with $O(p^3)$, and so on. If we have an $N \times N$ lattice and we extend
the surface by one plaquette in each dimension, then the probability of having an error chain connecting two boundaries will drop
by a factor of $p^2$ (two extra qubits have to experience an error, one on each boundary). Extending an $N\times N$ lattice by one
plaquette in each dimension requires $O(N)$ extra qubits; hence this type of error correcting code suppresses the probability of undetectable
errors exponentially with a qubit resource cost which grows linearly.
As we showed in Section <ref>,
standard concatenated coding techniques allow for an error rate suppression which scales with the
concatenation level as a double exponential while the resource increase scales exponentially. For the surface code, the error rate suppression
scales exponentially while the resource increase scales linearly. While these scaling relations might be mathematically equivalent, the
surface code offers much more flexibility at the architectural level. Being able to increase the error protection in the computer with only a
linear change in the number of physical qubits is far more beneficial than using an exponential increase in resources when utilizing
concatenated correction. Specifically, consider the case where an error protected computer is operating at a logical error rate which is
just above what is required for an algorithm. If concatenated error correction is employed, then adding another layer of correction will not
only increase the number of qubits by an exponential amount, but it will also drop the effective logical error rate far below what is actually required.
In contrast, if surface codes are employed, we increase the qubit resources by a linear factor and drop the logical error rate sufficiently for
successful application of the algorithm.
We now leave the discussion regarding topological correction models. We emphasize again that this was a very broad overview
of the general concept of topological codes. There are many details and subtleties that we have deliberately left out of this discussion, and
we urge the interested reader to refer to the referenced articles for a much more thorough treatment of this topic.
§ CONCLUSIONS AND FUTURE OUTLOOK
This review has hopefully provided a basic introduction to some of
the most important theoretical aspects of quantum error correction and
fault-tolerant quantum computation. The ultimate goal of this discussion was not
to provide a rigorous theoretical framework for QEC and fault-tolerance, but
instead to illustrate most of the important rules, results and techniques that have evolved
out of this field.
We not only covered the basic aspects of QEC through specific examples,
but also briefly discussed how physical errors influence quantum
computation and how these processes are interpreted within the context of
QEC. One of the more important aspects of this review is the discussion related to
the stabilizer formalism, circuit synthesis and fault-tolerant circuit construction. Stabilizers are
arguably the most useful theoretical formalism in QEC,
as once the formalism is sufficiently understood, most of the
important properties of error correcting codes can be investigated and understood
largely by inspection.
The study of quantum error correction and fault-tolerance is still an active area of
QIP research.
Although the library of quantum codes and error correction techniques is vast, there is
still a significant disconnect between the abstract framework of quantum coding and
the more physically realistic implementation of error correction for large scale quantum computation.
There are several future possibilities for the direction of quantum information processing.
Even with the development of many of these advanced techniques, the physical
construction and accuracy of current qubit fabrication is still insufficient to obtain any
benefit from QEC. Many in the field now acknowledge that the future development of
quantum computation will most likely split into two broad categories. The first is arguably the
more physically realistic, namely few qubit applications in quantum simulation.
Quantum simulation, i.e. using quantum systems to efficiently simulate other quantum
systems, was proposed by Richard Feynman in the early 1980's [38] and was one
of the primary motivations for the development of the field. In the ideal case, it is
argued that having access to a quantum computer with on the order of 10-100 physical
qubits could allow for simulating physical systems large enough to be impractical
for current classical computers. If we limit our quantum array to the 100 qubit level, then
even implementing active error correction techniques would not be desirable. Instead,
higher quality fabrication and control, as well as
techniques in error avoidance (which require far fewer resources than error correction),
would be used in order to lower effective error rates below what is required to
run few qubit applications.
Beyond few qubit quantum simulation we move to truly large scale quantum computation, i.e.
implementing large algorithms such as Shor's factoring algorithm on qubit arrays well beyond
1000 physical qubits. This would undoubtedly require active techniques in
error correction. Future work needs to focus on adapting the many codes and
fault-tolerant techniques to the architectural level. As we noted in section <ref>,
the implementation of QEC at the design level largely influences the fault-tolerant
threshold exhibited by the code itself. Being able to efficiently incorporate
both the actual quantum code and the error correction procedures at the physical level
is extremely important when developing an experimentally viable, large scale quantum computer.
There are many differing opinions within the quantum computing community as to the
future prospects for quantum information processing. Many remain pessimistic regarding
the development of a million qubit device and instead look towards quantum simulation in
the absence of active error correction as the realistic goal of quantum information. However,
in the past few years, the theoretical advances in error correction and
the fantastic speed of experimental development of few qubit devices continue to
offer hope for the near-term construction of a large scale device, incorporating many of the ideas
presented within this review. While we could never
foresee the possible successes or failures in quantum information science, we remain hopeful
that a large scale quantum computer is still a goal worth pursuing.
§ ACKNOWLEDGMENTS
The authors wish to thank
A. M. Stephens, R. Van Meter, A.G. Fowler, L.C.L. Hollenberg, and A. D. Greentree for helpful comments and
acknowledge the support of MEXT, JST, and the
EU project QAP.
[1]
D. Aharonov and M. Ben-Or.
Fault-tolerant Quantum Computation with constant error.
Proceedings of 29th Annual ACM Symposium on Theory of
Computing, page 46, 1997.
[2]
P. Aliferis and A.W. Cross.
Subsystem fault tolerance with the Bacon-Shor code.
Phys. Rev. Lett., 98:220502, 2007.
[3]
S. Aaronson and D. Gottesman.
Improved Simulation of Stabilizer Circuits.
Phys. Rev. A., 70:052328, 2004.
[4]
A. Aassime, G. Johansson, G. Wendin, R.J. Schoelkopf, and P. Delsing.
Radio-Frequency Single-Electron Transistor as Readout Device for
Qubits: Charge Sensitivity and Backaction.
Phys. Rev. Lett., 86:3376, 2001.
[5]
P. Aliferis.
PhD Thesis (Caltech), 2007.
[6]
D. Ahn, J. Lee, M.S. Kim, and S.W. Hwang.
Self-Consistent non-Markovian theory of a quantum-state evolution
for quantum-information-processing.
Phys. Rev. A., 66:012302, 2002.
[7]
O. Astafiev, Yu. A. Pashkin, Y. Nakamura, T. Yamamoto, and J.S. Tsai.
Quantum Noise in the Josephson Charge Qubit.
Phys. Rev. Lett., 93:267007, 2005.
[8]
T. Aoki, G. Takahashi, T. Kajiya, J. Yoshikawa, S.L. Braunstein, P. van Loock,
and A. Furusawa.
Quantum Error Correction Beyond Qubits.
arxiv:0811.3734, 2008.
[9]
A. Avizienis.
The Evolution of Fault-Tolerant Computing.
Springer -Verlag, New York, 1987.
[10]
D. Bacon.
Operator Quantum Error-Correcting Subsystems for self-correcting
quantum memories.
Phys. Rev. A., 73:012340, 2006.
[11]
A. Bririd, S.C. Benjamin, and A. Kay.
Quantum error correction in globally controlled arrays.
quant-ph/0308113, 2003.
[12]
N. Boulant, T.F. Havel, M.A. Pravia, and D.G. Cory.
Robust Method for Estimating the Lindblad operators of a dissipative
quantum process from measurements of the density operator at multiple time points.
Phys. Rev. A., 67:042322, 2003.
[13]
S.B. Bravyi and A.Y. Kitaev.
Quantum codes on a lattice with boundary.
Quant. Computers and Computing, 2:43, 2001.
[14]
G. Burkard, R.H. Koch, and D.P. DiVincenzo.
Multilevel quantum description of decoherence in superconducting qubits.
Phys. Rev. B., 69:064503, 2004.
[15]
S. Balensiefer, L. Kregor-Stickles, and M. Oskin.
An Evaluation Framework and Instruction Set Architecture for
Ion-Trap based Quantum Micro-architectures.
SIGARCH Comput. Archit. News, 33(2):186, 2005.
[16]
M.S. Byrd, D.A. Lidar, L.-A. Wu, and P. Zanardi.
Universal leakage elimination.
Phys. Rev. A., 71:052301, 2005.
[17]
S.D. Barrett and G.J. Milburn.
Measuring the Decoherence rate in a semiconductor charge qubit.
Phys. Rev. B., 68:155307, 2003.
[18]
H.-J. Briegel and R. Raussendorf.
Persistent Entanglement in Arrays of Interacting Particles.
Phys. Rev. Lett., 86:910, 2001.
[19]
S.L. Braunstein.
Error Correction for continuous quantum variables.
Phys. Rev. Lett., 80:4084, 1998.
[20]
R. Cleve and D. Gottesman.
Efficient Computations of Encodings for Quantum Error Correction.
Phys. Rev. A., 56:76, 1997.
[21]
V.I. Conrad, A.D. Greentree, D.N. Jamieson, and L.C.L. Hollenberg.
Analysis and Geometric Optimization of Single Electron Transistors
for Read-Out in Solid-State Quantum Computing.
Journal of Computational and Theoretical Nanoscience, 2:214,
[22]
W. Cook and A. Rohe.
Computing minimum-weight perfect matchings.
INFORMS Journal on Computing,, 11:138, 1999.
[23]
A.R. Calderbank, E.M. Rains, P.W. Shor, and N.J.A. Sloane.
Quantum Error Correction via Codes Over GF(4).
IEEE Trans. Inform. Theory, 44:1369, 1998.
[24]
A.R. Calderbank and P.W. Shor.
Good Quantum Error-Correcting codes exist.
Phys. Rev. A., 54:1098, 1996.
[25]
D.P. DiVincenzo and P. Aliferis.
Effective Fault-Tolerant Quantum Computation with slow measurement.
Phys. Rev. Lett., 98:020501, 2007.
[26]
S.J. Devitt, A.G. Fowler, A.M. Stephens, A.D. Greentree, L.C.L. Hollenberg,
W.J. Munro, and K. Nemoto.
Topological Cluster State Computation with Photons.
arxiv:0808.1782, 2008.
[27]
L.-M. Duan and G.-C. Guo.
Preserving Coherence in Quantum Computation by Pairing Quantum Bits.
Phys. Rev. Lett., 79:1953, 1997.
[28]
L.-M. Duan and G.-C. Guo.
Prevention of dissipation with two particles.
Phys. Rev. A., 57:2399, 1998.
[29]
L.-M. Duan and G.-C. Guo.
Reducing decoherence in quantum computer-memory with all quantum
bits coupling to the same environment.
Phys. Rev. A., 57:737, 1998.
[30]
E. Dennis, A. Kitaev, A. Landahl, and J. Preskill.
Topological Quantum Memory.
J. Math. Phys., 43:4452, 2002.
[31]
S.J. Devitt, W.J. Munro, and K. Nemoto.
High Performance Quantum Computing.
arxiv:0810.2444, 2008.
[32]
C.M. Dawson and M.A. Nielsen.
The Solovay-Kitaev Algorithm.
Quant. Inf. Comp., 6(1):81, 2006.
[33]
D.P. DiVincenzo and P.W. Shor.
Fault-Tolerant error correction with efficient quantum codes.
Phys. Rev. Lett., 77:3260, 1996.
[34]
M.H. Devoret and R.J. Schoelkopf.
Amplifying Quantum Signals with the Single-Electron Transistor.
Nature (London), 406:1039, 2000.
[35]
S.J. Devitt, S.G. Schirmer, D.K.L. Oi, J.H. Cole, and L.C.L. Hollenberg.
Subspace Confinement: How good is your Qubit?
New. J. Phys., 9:384, 2007.
[36]
S. Daffer, K. Wódkiewicz, and J.K. McIver.
Quantum Markov Channels for Qubits.
Phys. Rev. A., 67:062312, 2003.
[37]
A. Ekert and R. Jozsa.
Quantum Computation and Shor's Factoring Algorithm.
Rev. Mod. Phys., 68:733, 1996.
[38]
R.P. Feynman.
Simulating Physics with Computers.
Int. J. Theor. Phys., 21:467, 1982.
[39]
A.G. Fowler and K. Goyal.
Topological cluster state quantum computing.
arxiv:0805.3202, 2008.
[40]
P. Facchi, D.A. Lidar, and S. Pascazio.
Unification of dynamical decoupling and the quantum Zeno effect.
Phys. Rev. A., 69:032314, 2004.
[41]
A.G. Fowler.
PhD Thesis (Melbourne).
quant-ph/0506126, 2005.
[42]
A.G. Fowler, A.M. Stephens, and P. Groszkowski.
High threshold universal quantum computation on the surface code.
arxiv:0803.0272, 2008.
[43]
P. Gács.
Reliable computation with cellular automata.
Proc. ACM Symp. Th. Comput., 15:32, 1983.
[44]
C.W. Gardiner.
Quantum Noise, volume 56 of Springer Series in Synergetics.
Springer -Verlag, Berlin ; New York, 1991.
[45]
M. Grassl, Th. Beth, and T. Pellizzari.
Codes for the quantum erasure channel.
Phys. Rev. A., 56:33, 1997.
[46]
D. Gottesman and I.L. Chuang.
Demonstrating the viability of Universal Quantum Computation using
teleportation and single qubit operations.
Nature (London), 402:390, 1999.
[47]
D.M. Greenberger, M.A. Horne, A. Shimony, and A. Zeilinger.
Bell's theorem without inequalities.
Am. J. Physics, 58:1131, 1990.
[48]
D.M. Greenberger, M.A. Horne, and A. Zeilinger.
Bell's theorem, Quantum, and Conceptions of the Universe.
Kluwer Academic, Dordrecht, 1989.
[49]
D. Gottesman.
A Class of Quantum Error-Correcting Codes Saturating the Quantum
Hamming Bound.
Phys. Rev. A., 54:1862, 1996.
[50]
D. Gottesman.
PhD Thesis (Caltech).
quant-ph/9705052, 1997.
[51]
D. Gottesman.
A theory of Fault-Tolerant quantum computation.
Phys. Rev. A., 57:127, 1998.
[52]
D. Gottesman.
An Introduction to Quantum Error Correction.
Quantum Computation: A Grand Mathematical Challenge for the
Twenty-First Century and the Millennium, ed. S. J. Lomonaco, Jr., pp. 221-235
(American Mathematical Society, Providence, Rhode Island, 2002),
quant-ph/0004072, 2002.
[53]
D. Gottesman.
An Introduction to Quantum Error Correction and Fault-Tolerant
Quantum Computation.
arXiv:0904.2557, 2009.
[54]
L.K. Grover.
Quantum Mechanics Helps in Searching for a Needle in a Haystack.
Phys. Rev. Lett., 79:325, 1997.
[55]
J.J. Hope, G.M. Moy, M.J. Collett, and C.M. Savage.
Steady-State quantum statistics of a non-Markovian atom laser.
Phys. Rev. A., 61:023603, 2000.
[56]
L.B. Ioffe, M.V. Feigel'man, A. Ioselevich, D. Ivanov, M. Troyer, and
G. Blatter.
Topologically protected quantum bits from Josephson junction arrays.
Nature (London), 415:503, 2002.
[57]
N. Imoto, H.A. Haus, and Y. Yamamoto.
Quantum nondemolition measurement of the photon number via the
optical Kerr effect.
Phys. Rev. A., 32:2287, 1985.
[58]
S.P. Jordan, E. Farhi, and P.W. Shor.
Error correcting codes for adiabatic quantum computation.
Phys. Rev. A., 74:052322, 2006.
[59]
A.Y. Kitaev.
Quantum Computations: algorithms and error correction.
Russ. Math. Surv., 52(6):1191, 1997.
[60]
E. Knill and R. Laflamme.
A Theory of Quantum Error-Correcting Codes.
Phys. Rev. A., 55:900, 1997.
[61]
E. Knill, R. Laflamme, A. Ashikhmin, H. Barnum, L. Viola, and W.H. Zurek.
Introduction To Quantum Error Correction.
Los Alamos Science, 27:188, 2002.
[62]
E. Knill, R. Laflamme, and G.J. Milburn.
A Scheme for Efficient Quantum Computation with linear optics.
Nature (London), 409:46, 2001.
[63]
D. Kribs, R. Laflamme, and D. Poulin.
Unified and Generalized Approach to Quantum Error Correction.
Phys. Rev. Lett., 94:180501, 2005.
[64]
E. Knill, R. Laflamme, and L. Viola.
Theory of Quantum Error Correction for General Noise.
Phys. Rev. Lett., 84:2525, 2000.
[65]
E. Knill, R. Laflamme, and W.H. Zurek.
Accuracy threshold for Quantum Computation.
quant-ph/9610011, 1996.
[66]
E. Knill.
Quantum computing with realistically noisy devices.
Nature (London), 434:39, 2005.
[67]
V. Kolmogorov.
Blossom V: A new implementation of a minimum cost perfect matching algorithm.
Technical Report:
http://www.adastral.ucl.ac.uk/ vladkolm/papers/BLOSSOM5.html, 2008.
[68]
R. Laflamme, C. Miquel, J.P. Paz, and W.H. Zurek.
Perfect Quantum Error Correcting Code.
Phys. Rev. Lett., 77:198, 1996.
[69]
S. Lloyd and J. E. Slotine.
Analog Quantum Error Correction.
Phys. Rev. Lett., 80:4088, 1998.
[70]
D.A. Lidar and K.B. Whaley.
Decoherence-Free Subspaces and Subsystems.
quant-ph/0301032, 2003.
[71]
D.A. Lidar and L.-A. Wu.
Encoded Recoupling and Decoupling: An Alternative to Quantum Error
Correcting Codes, Applied to Trapped Ion Quantum Computation.
Phys. Rev. A., 67:032313, 2003.
[72]
J.M. Martinis, K.B. Cooper, R. McDermott, M. Steffen, M. Ansmann, K.D. Osborn,
K. Cicak, S. Oh, D.P. Pappas, R.W. Simmonds, and C.C. Yu.
Decoherence in Josephson Qubits from Dielectric Loss.
Phys. Rev. Lett., 95:210503, 2005.
[73]
T. Metodiev, A. Cross, D. Thaker, K. Brown, D. Copsey, F.T. Chong, and I.L. Chuang.
Preliminary Results on Simulating a Scalable Fault-Tolerant Ion Trap
system for quantum computation.
In 3rd Workshop on Non-Silicon Computing (NSC-3),
online:www.csif.cs.ucdavis.edu/ metodiev/papers/NSC3-setso.pdf, 2004.
[74]
W.J. Munro, K. Nemoto, R.G. Beausoleil, and T.P. Spiller.
High-efficiency quantum-nondemolition single-photon-number-resolving detector.
Phys. Rev. A., 71:033819, 2005.
[75]
G.J. Milburn and D.F. Walls.
State reduction in quantum-counting quantum nondemolition measurements.
Phys. Rev. A., 30:56, 1984.
[76]
M.A. Nielsen and I.L. Chuang.
Quantum Computation and Quantum Information.
Cambridge University Press, second edition, 2000.
[77]
J. Von Neumann.
Probabilistic logics and the synthesis of reliable organisms from
unreliable components.
Automata Studies, 43, 1955.
[78]
C. Nayak, S.H. Simon, A. Stern, M. Freedman, and S. Das Sarma.
Non-Abelian anyons and topological quantum computation.
Rev. Mod. Phys., 80:1083, 2008.
[79]
G.J. Pryde, J.L. O'Brien, A.G. White, S.D. Bartlett, and T.C. Ralph.
Measuring a photonic qubit without destroying it.
Phys. Rev. Lett., 92:190402, 2004.
[80]
J. Preskill.
Introduction To Quantum Computation.
World Scientific, Singapore, 1998.
[81]
M.B. Plenio, V. Vedral, and P.L. Knight.
Quantum Error Correction in the Presence of Spontaneous Emission.
Phys. Rev. A., 55:67, 1997.
[82]
R. Raussendorf and H.-J. Briegel.
A One way Quantum Computer.
Phys. Rev. Lett., 86:5188, 2001.
[83]
R. Raussendorf and J. Harrington.
Fault-tolerant quantum computation with high threshold in two dimensions.
Phys. Rev. Lett., 98:190504, 2007.
[84]
T.C. Ralph, A.J.F. Hayes, and A. Gilchrist.
Loss Tolerant Optical Qubits.
Phys. Rev. Lett., 95:100501, 2005.
[85]
R. Raussendorf, J. Harrington, and K. Goyal.
Topological fault-tolerance in cluster state quantum computation.
New J. Phys., 9:199, 2007.
[86]
T. Szkopek, P.O. Boykin, H. Fan, V.P. Roychowdhury, E. Yablonovitch, G. Simms,
M. Gyure, and B. Fong.
Threshold Error Penalty for Fault-Tolerant Computation with Nearest
Neighbour Communication.
IEEE Trans. Nano., 5(1):42, 2006.
[87]
K.M. Svore, D.P. DiVincenzo, and B.M. Terhal.
Noise Threshold for a Fault-Tolerant Two-Dimensional Lattice Architecture.
Quant. Inf. Comp., 7:297, 2007.
[88]
A. Stephens, Z. W. E. Evans, S. J. Devitt, and L.C.L. Hollenberg.
Subsystem Code Conversion.
arxiv:0708.3969, 2007.
[89]
A. Stephens, A.G. Fowler, and L.C.L. Hollenberg.
Universal Fault-Tolerant Computation on bilinear nearest neighbor arrays.
Quant. Inf. Comp., 8:330, 2008.
[90]
P.W. Shor.
Scheme for reducing decoherence in quantum computer memory.
Phys. Rev. A., 52:R2493, 1995.
[91]
P.W. Shor.
Fault-Tolerant quantum computation.
Proc. 37th IEEE Symp. on Foundations of Computer Science.,
pages 56–65, 1996.
[92]
P.W. Shor.
Polynomial-Time algorithms for Prime Factorization and Discrete
Logarithms on a Quantum Computer.
SIAM Journal on Computing, 26:1484, 1997.
[93]
A.M. Steane and B. Ibinson.
Fault-Tolerant Logical Gate Networks for Calderbank-Shor-Steane codes.
Phys. Rev. A., 72:052335, 2005.
[94]
R. Stock and D.F.V. James.
A Scalable, high-speed measurement based quantum computer using
trapped ions.
arxiv:0808.1591, 2008.
[95]
J. Steinbach and J. Twamley.
Motional Quantum Error Correction.
quant-ph/9811011, 1998.
[96]
A.M. Steane.
Error Correcting Codes in Quantum Theory.
Phys. Rev. Lett., 77:793, 1996.
[97]
A.M. Steane.
Simple quantum error-correcting codes.
Phys. Rev. A., 54:4741, 1996.
[98]
A.M. Steane.
Active Stabilization, Quantum Computation and Quantum State Synthesis.
Phys. Rev. Lett., 78:2252, 1997.
[99]
A.M. Steane.
The Ion Trap Quantum Information Processor.
Appl. Phys. B, 64:623, 1997.
[100]
A.M. Steane.
Quantum Computing and Error Correction.
Decoherence and its implications in quantum computation and
information transfer, Gonis and Turchi, eds, pp.284-298 (IOS Press,
Amsterdam, 2001), quant-ph/0304016, 2001.
[101]
A.M. Steane.
Fast Fault-Tolerant filtering of quantum codewords.
quant-ph/0202036, 2002.
[102]
T. Toffoli.
Bicontinuous extension of reversible combinatorial functions.
Math. Syst. Theory, 14:13–23, 1981.
[103]
W.G. Unruh.
Maintaining Coherence in Quantum Computers.
Phys. Rev. A., 51:992, 1995.
[104]
L. Viola and E. Knill.
Robust Dynamical Decoupling of Quantum Systems with Bounded Controls.
Phys. Rev. Lett., 90:037901, 2003.
[105]
L. Viola and E. Knill.
Random Decoupling Schemes for Quantum Dynamical Control and Error Suppression.
Phys. Rev. Lett., 94:060502, 2005.
[106]
L. Viola, E. Knill, and S. Lloyd.
Dynamical Decoupling of Open Quantum Systems.
Phys. Rev. Lett., 82:2417, 1999.
[107]
L. Viola and S. Lloyd.
Dynamical suppression of decoherence in two-state quantum systems.
Phys. Rev. A., 58:2733, 1998.
[108]
P. van Loock.
A Note on Quantum Error Correction with Continuous Variables.
arxiv:0811.3616, 2008.
[109]
D. Vitali and P. Tombesi.
Using parity kicks for decoherence control.
Phys. Rev. A., 59:4178, 1999.
[110]
J. Vala, K.B. Whaley, and D.S. Weiss.
Quantum error correction of a qubit loss in an addressable atomic system.
Phys. Rev. A., 72:052318, 2005.
[111]
L.-A. Wu, M.S. Byrd, and D.A. Lidar.
Efficient Universal Leakage Elimination for Physical and Encoded
Phys. Rev. Lett., 89:127901, 2002.
[112]
W.K. Wootters and W.H. Zurek.
A Single Quantum Cannot be Cloned.
Nature (London), 299:802, 1982.
[113]
P. Zanardi.
Symmetrizing evolutions.
Phys. Lett. A, 258:77, 1999.
[114]
P. Zanardi and M. Rasetti.
Error Avoiding Quantum Codes.
Mod. Phys. Lett. B., 11:1085, 1997.
[115]
P. Zanardi and M. Rasetti.
Noiseless Quantum Codes.
Phys. Rev. Lett., 79:3306, 1997.
|
arxiv-papers
| 2009-05-18T03:26:04 |
2024-09-04T02:49:02.676852
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Simon J. Devitt, Kae Nemoto and William J. Munro",
"submitter": "Simon Devitt Dr",
"url": "https://arxiv.org/abs/0905.2794"
}
|
0905.2893
|
# Quasineutral limit of the electro-diffusion model arising in
Electrohydrodynamics
Fucai Li Department of Mathematics, Nanjing University, Nanjing 210093, P.R.
China fli@nju.edu.cn
###### Abstract.
The electro-diffusion model, which arises in electrohydrodynamics, is a
coupling between the Nernst-Planck-Poisson system and the incompressible
Navier-Stokes equations. For a general smooth doping profile, the
quasineutral limit (zero-Debye-length limit) is justified rigorously in the
Sobolev norm, uniformly in time. The proof is based on an elaborate energy
analysis, and the key point is to establish uniform estimates with respect
to the scaled Debye length.
###### Key words and phrases:
electro-diffusion model, Nernst-Planck-Poisson system, incompressible
Navier-Stokes equations, quasineutral limit, weighted energy functional
###### 2000 Mathematics Subject Classification:
35B25, 35B40, 35Q30, 76W05
## 1\. Introduction and Main Results
In this paper we consider a model describing ionic concentrations, electric
potential, and velocity field in an electrolytic solution. This model is a
coupling between the Nernst-Planck-Poisson system and the Navier-Stokes
equations [18, 17, 2, 14]. The (rescaled) system takes the form
$\displaystyle n^{\lambda}_{t}=\text{div}(\nabla
n^{\lambda}-n^{\lambda}\nabla\Phi^{\lambda}-n^{\lambda}v^{\lambda}),$ (1.1)
$\displaystyle p^{\lambda}_{t}=\text{div}(\nabla
p^{\lambda}+p^{\lambda}\nabla\Phi^{\lambda}-p^{\lambda}v^{\lambda}),$ (1.2)
$\displaystyle\lambda^{2}\Delta\Phi^{\lambda}=n^{\lambda}-p^{\lambda}-D(x),$
(1.3) $\displaystyle v^{\lambda}_{t}+v^{\lambda}\cdot\nabla
v^{\lambda}+\nabla\pi^{\lambda}-\mu\Delta
v^{\lambda}=(n^{\lambda}-p^{\lambda})\nabla\Phi^{\lambda},$ (1.4)
$\displaystyle\text{div}v^{\lambda}=0$ (1.5)
with initial data
$n^{\lambda}(x,0)=n^{\lambda}_{0}(x),\quad
p^{\lambda}(x,0)=p^{\lambda}_{0}(x),\quad
v^{\lambda}(x,0)=v^{\lambda}_{0}(x),\ \ \ x\in\mathbb{T}^{3},$ (1.6)
where $\mathbb{T}^{3}$ is the periodic domain (torus) in $\mathbb{R}^{3}$,
${n}^{\lambda}$ and $p^{\lambda}$ denote the negative and positive charge
densities, respectively, $\Phi^{\lambda}$ the electric potential, $v^{\lambda}$
the velocity of the electrolyte, and $\pi^{\lambda}$ the fluid pressure. The
parameter $\lambda>0$ denotes the scaled Debye length and $\mu>0$ the dynamic
viscosity. $D(x)$ is a given function that models the doping profile.
Usually in electrolytes the Debye length is much smaller than the other
characteristic quantities, and the electrolyte is almost electrically neutral.
Under the assumption of space charge neutrality, i.e. $\lambda=0$, we formally
arrive at the following quasineutral Nernst-Planck-Navier-Stokes system
$\displaystyle n_{t}=\text{div}(\nabla n+n\mathcal{E}-nv),$ (1.7)
$\displaystyle p_{t}=\text{div}(\nabla p-p\,\mathcal{E}-pv),$ (1.8)
$\displaystyle n-p-D(x)=0,$ (1.9) $\displaystyle v_{t}+v\cdot\nabla
v+\nabla\pi-\mu\Delta v=-(n-p)\mathcal{E},$ (1.10)
$\displaystyle\text{div}v=0,$ (1.11)
where we assume that the limits $n^{\lambda}\rightarrow n$,
$p^{\lambda}\rightarrow p$, $v^{\lambda}\rightarrow v$,
$-\nabla\Phi^{\lambda}\equiv E^{\lambda}\rightarrow\mathcal{E}$ exist as
$\lambda\rightarrow 0^{+}$.
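To see how the limiting field $\mathcal{E}$ is determined although the Poisson equation degenerates into the algebraic constraint (1.9), the following elementary observation may help (a sketch, consistent with the reduced system written in Section 2): subtracting (1.8) from (1.7) and using $n-p=D(x)$ with the time-independent doping profile gives
$0=(n-p)_{t}=\text{div}\big(\nabla(n-p)+(n+p)\mathcal{E}-(n-p)v\big)=\text{div}\big(\nabla D+(n+p)\mathcal{E}-Dv\big),$
so that, as long as $n+p$ stays bounded away from zero (as assumed in Theorem 1.1 below), $\mathcal{E}=-\nabla\Phi$ is recovered from $(n,p,v)$ by solving this divergence-form elliptic equation.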
The purpose of this paper is to justify the above limit rigorously for
sufficiently smooth solutions to the system (1.1)-(1.5).
Since the incompressible Navier-Stokes equations (1.4)-(1.5) are part of
the system (1.1)-(1.5), it is well known that the existence of a global
classical solution for general initial data is open in the three-dimensional
case, and only local classical solutions are available. For example, in
[14], Jerome studied the Cauchy problem for the system (1.1)-(1.5) and
established the local existence of a unique smooth solution for smooth initial
data. The local existence of a unique smooth solution to the incompressible
Navier-Stokes equations can be obtained by standard methods, see [12, 19].
The local existence of a unique smooth solution to the limiting system
(1.7)-(1.11) with smooth initial data
$n(x,t=0)=n_{0}(x),\ \ p(x,t=0)=p_{0}(x),\ \ v(x,t=0)=v_{0}(x)$ (1.12)
can be obtained by arguments similar to those in [14]. Since we are
interested in the quasineutral limit of the system (1.1)-(1.5), we omit the
details here.
In this paper we assume that the doping profile is a smooth (sign-changing)
function and the initial data $n^{\lambda}_{0}(x),p^{\lambda}_{0}(x)$ and
$v^{\lambda}_{0}(x)$ are smooth functions satisfying
$\int(n^{\lambda}_{0}(x)-p^{\lambda}_{0}(x)-D(x))dx=0,\quad\int
v^{\lambda}_{0}(x)dx=0.$ (1.13)
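The first condition in (1.13) is the natural compatibility condition for the Poisson equation on the torus; a brief justification (not spelled out explicitly above): integrating (1.3) over $\mathbb{T}^{3}$ and using periodicity gives
$0=\lambda^{2}\int\Delta\Phi^{\lambda}dx=\int\big(n^{\lambda}-p^{\lambda}-D(x)\big)dx,$
and since (1.1)-(1.2) are in divergence form, $\int(n^{\lambda}-p^{\lambda})dx$ is conserved in time, so it suffices to impose this condition at $t=0$.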
The main result of this paper can be stated as follows:
###### Theorem 1.1.
Let $(n^{\lambda},p^{\lambda},E^{\lambda},v^{\lambda})$,
$E^{\lambda}=-\nabla\Phi^{\lambda}$ be the unique local smooth solution to the
system (1.1)-(1.5) with initial data (1.6) on $\mathbb{T}^{3}\times[0,T_{*})$
for some $0<T_{*}\leq\infty$. Let
$(n,p,\mathcal{E},v),\mathcal{E}=-\nabla\Phi$ be the unique smooth solution to
the limiting system (1.7)-(1.11) with initial data (1.12) on
$\mathbb{T}^{3}\times[0,T_{0})$ for some $0<T_{0}\leq+\infty$ satisfying
$n+p\geq\kappa_{0}>0$, where $\kappa_{0}$ is a positive constant. Suppose that
the initial data satisfy (1.13) and
$n^{\lambda}_{0}(x)=n_{0}(x),\quad p^{\lambda}_{0}(x)=p_{0}(x)+\lambda^{2}{\rm
div}\mathcal{E}(t=0),\quad v^{\lambda}_{0}(x)=v_{0}(x).$ (1.14)
Then, for any $T\in(0,\min\\{T_{0},T_{*}\\})$, there exist positive constants
$K$ and $\lambda_{0},\lambda_{0}\ll 1$, such that, for any
$\lambda\in(0,\lambda_{0})$,
$\displaystyle\sup\limits_{0\leq t\leq
T}\Big{\\{}||(\tilde{n}^{\lambda},\tilde{p}^{\lambda},\tilde{E}^{\lambda},\tilde{v}^{\lambda})(t)||_{H^{1}}+||(\tilde{n}^{\lambda}_{t},\tilde{p}^{\lambda}_{t},\tilde{v}^{\lambda}_{t})(t)||_{L^{2}}$
$\displaystyle\qquad\qquad+\lambda||\tilde{E}^{\lambda}(t)||_{H^{2}}+\lambda||\tilde{E}^{\lambda}_{t}(t)||_{H^{1}}\Big{\\}}\leq
K\lambda^{1-\sigma/2}$ (1.15)
for any $\sigma\in(0,2)$, independent of $\lambda$. Here
$\tilde{n}^{\lambda}=n^{\lambda}-n,\tilde{p}^{\lambda}=p^{\lambda}-p,\tilde{E}^{\lambda}=E^{\lambda}-\mathcal{E},$
and $\tilde{v}^{\lambda}=v^{\lambda}-v$.
###### Remark 1.1.
In this paper we deal with the three space dimensional case. If the problem
(1.1)-(1.5) is considered in two space dimensions, both the problem
(1.1)-(1.5) and the limiting problem (1.7)-(1.11) enjoy global smooth
solutions, and thus we can obtain a result similar to that stated in
Theorem 1.1 (in fact, much more easily).
###### Remark 1.2.
If the assumption (1.14) does not hold, initial layers need to be considered.
On the other hand, if the system (1.1)-(1.5) is considered on a smooth
bounded domain in $\mathbb{R}^{3}$, boundary layers may appear. These
issues will be studied in the future.
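A short computation indicates why the choice (1.14) rules out an initial layer in the electric field (a sketch based on the error Poisson equation (2.3) derived in Section 2): with (1.14) one has $\tilde{n}^{\lambda}(0)=0$ and $\tilde{p}^{\lambda}(0)=\lambda^{2}{\rm div}\mathcal{E}(0)$, hence
$-\lambda^{2}{\rm div}\tilde{E}^{\lambda}(0)=\tilde{n}^{\lambda}(0)-\tilde{p}^{\lambda}(0)+\lambda^{2}{\rm div}\mathcal{E}(0)=0,$
so $\Delta\tilde{\Phi}^{\lambda}(0)=0$ on the torus, $\tilde{\Phi}^{\lambda}(0)$ is constant, and therefore $E^{\lambda}(0)=\mathcal{E}(0)$: the data are well prepared.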
The main difficulty in dealing with the quasineutral limit is the oscillatory
behavior of the electric field (the Poisson equation becomes an algebraic
equation in the limit). Usually it is difficult to obtain estimates on the
electric field that are uniform with respect to the Debye length $\lambda$,
due to a possible vacuum set of the density. To overcome this difficulty, we
introduce the following $\lambda$-weighted Lyapunov-type functionals
$\displaystyle\Gamma^{\lambda}(t)\equiv$
$\displaystyle||(\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda},\Delta\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\nabla\tilde{z}^{\lambda}_{t})||^{2}+||(\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda},\Delta\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\nabla\tilde{v}^{\lambda}_{t})||^{2}$
$\displaystyle+\lambda^{2}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda},\nabla{\rm
div}\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}$ (1.16)
and
$\displaystyle
G^{\lambda}(t)\equiv||(\Delta\tilde{z}^{\lambda}_{t},\Delta\tilde{v}^{\lambda}_{t},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}_{L^{2}}+\lambda^{2}||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2},$ (1.17)
where $\tilde{z}^{\lambda}=\tilde{n}^{\lambda}+\tilde{p}^{\lambda}$,
$\tilde{n}^{\lambda}=n^{\lambda}-n,\tilde{p}^{\lambda}=p^{\lambda}-p,\tilde{v}^{\lambda}=v^{\lambda}-v,\tilde{E}^{\lambda}=E^{\lambda}-\mathcal{E}$
and
$(\tilde{n}^{\lambda},\tilde{p}^{\lambda},\tilde{E}^{\lambda},\tilde{v}^{\lambda})$
denotes the difference between the solution to the system (1.1)-(1.5) and the
solution to the limiting system (1.7)-(1.11), see Section 2 below for details.
By a careful energy method, we can prove the following entropy production
integration inequality
$\displaystyle\Gamma^{\lambda}(t)+\int^{t}_{0}G^{\lambda}(s)ds\leq$
$\displaystyle
K\,\Gamma^{\lambda}(t=0)+K\lambda^{q}+K(\Gamma^{\lambda}(t))^{r}+K\int^{t}_{0}\Gamma^{\lambda}(s)G^{\lambda}(s)ds$
$\displaystyle+K\int^{t}_{0}\big{[}\Gamma^{\lambda}(s)+(\Gamma^{\lambda}(s))^{l}\big{]}ds,\quad
t\geq 0$ (1.18)
for some positive constants $q,r,K$ and $l$, independent of $\lambda$, which
implies our desired convergence result by the assumption of small initial data
$\Gamma^{\lambda}(0)$.
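For the reader's convenience, here is a heuristic sketch (under the natural assumption, implicit in the derivation, that the powers $r$ and $l$ exceed one) of how (1.18) yields the convergence: as long as $\Gamma^{\lambda}(t)$ stays small, the terms $K(\Gamma^{\lambda})^{r}$, $K\int\Gamma^{\lambda}G^{\lambda}$ and $K\int(\Gamma^{\lambda})^{l}$ can be absorbed into the left-hand side or dominated by $K\int\Gamma^{\lambda}$, leaving
$\Gamma^{\lambda}(t)+\frac{1}{2}\int^{t}_{0}G^{\lambda}(s)ds\leq K\Gamma^{\lambda}(0)+K\lambda^{q}+K\int^{t}_{0}\Gamma^{\lambda}(s)ds,$
so Gronwall's inequality gives $\Gamma^{\lambda}(t)\leq K(\Gamma^{\lambda}(0)+\lambda^{q})e^{Kt}$ on $[0,T]$, and a standard continuity argument shows that the smallness of $\Gamma^{\lambda}$ indeed propagates for $\lambda$ sufficiently small. The details are carried out in Section 3.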
###### Remark 1.3.
The inequality (1.18) is of generalized Gronwall type with an extra integral
term whose integrand is the product of the entropy and the entropy
dissipation. Hence (1.18) is called the entropy production integration
inequality.
###### Remark 1.4.
The $\lambda$-weighted Lyapunov-type functionals (1.16) and (1.17) are motivated by
[10, 21], where the quasineutral limit of the drift-diffusion-Poisson model
for semiconductors was studied. However, in our case the incompressible
Navier-Stokes equations are involved and a more refined energy analysis is
needed. We believe that these $\lambda$-weighted Lyapunov-type energy
functionals can also be used to deal with the quasineutral limit of other
mathematical models involving the Navier-Stokes equations, for example, the
mathematical model for the deformation of electrolyte droplets:
$\displaystyle\rho(u_{t}+u\cdot\nabla u)+\nabla\pi$ $\displaystyle=\nu\Delta
u+(n-p)\nabla V-\nabla\cdot(\nabla\phi\otimes\nabla\phi),$
$\displaystyle\nabla\cdot u$ $\displaystyle=0,$ $\displaystyle
n_{t}+u\cdot\nabla n$ $\displaystyle=\nabla\cdot(D_{n}\nabla n-\mu_{n}n\nabla
V+Mn\nabla\phi),$ $\displaystyle p_{t}+u\cdot\nabla p$
$\displaystyle=\nabla\cdot(D_{p}\nabla p-\mu_{p}p\nabla V+Mp\nabla\phi),$
$\displaystyle\nabla\cdot(\lambda\nabla V)$ $\displaystyle=n-p,$
$\displaystyle\phi_{t}+u\cdot\nabla\phi$
$\displaystyle=\gamma(\Delta\phi-\eta^{-2}W^{\prime}(\phi)),$
where $\gamma,\nu,\eta,D_{n},D_{p},\mu_{n},\mu_{p}$ and $M$ are positive
constants; see [16] for a detailed description of this model.
We point out that the quasineutral limit is a well-known, challenging, and
physically complex modeling problem for fluid dynamic models as well as for
kinetic models of semiconductors, plasmas, and other fields. In both cases
only partial results exist. For time-dependent transport models, the limit
$\lambda\rightarrow 0$ has been performed for the Vlasov-Poisson system by
Brenier [1] and Masmoudi [13], and for the Vlasov-Poisson-Fokker-Planck system
by Hsiao et al. [8, 9]. For fluid dynamic models, the drift-diffusion-Poisson
system was investigated by Gasser et al. [6, 7] and Jüngel and Peng [15], and
the Euler-Poisson system by Cordier and Grenier [3] and Wang [20]. Recently,
Wang et al. [21, 10, 23] extended some of the results cited above to general
doping profiles; the main idea is to control the strong nonlinear oscillations
caused by the small Debye length through the interaction of a physically
motivated entropy and the entropy dissipation. For the Navier-Stokes-Poisson
system, Wang [20, 22] obtained the convergence of the Navier-Stokes-Poisson
system to the incompressible Euler equations, and Ju et al. [11] obtained the
convergence of weak solutions of the Navier-Stokes-Poisson system to strong
solutions of the incompressible Navier-Stokes equations. Donatelli and Marcati
[4] studied a quasineutral-type limit for the Navier-Stokes-Poisson system
with large initial data in the whole space $\mathbb{R}^{3}$ through a coupling
of the zero-Debye-length limit and the low Mach number limit.
We mention that there are a few other mathematical results on the system
(1.1)-(1.5). Jerome [14] obtained the inviscid limit ($\mu\rightarrow 0$) of
the system (1.1)-(1.5). Cimatti and Fragalà [2] obtained the unique weak
solution to the system (1.1)-(1.5) with Neumann boundary conditions, together
with the asymptotic behavior of the solution when it is a small perturbation
of the trivial solution of the stationary problem. Feireisl [5] studied the
system (1.1)-(1.5) in the periodic case without the diffusion terms in the
first two equations and obtained the existence of weak solutions.
Before ending this introduction, we fix some notation. We denote by $||\cdot||$
the standard $L^{2}$ norm with respect to $x$, by $H^{k}$ the standard Sobolev
space $W^{k,2}$, and by $||\cdot||_{H^{k}}$ the corresponding norm. The notation
$||(A_{1},A_{2},\cdots,A_{n})||^{2}$ means the sum of
$||A_{i}||^{2},i=1,\cdots,n$, and the same convention applies to other norms.
We use $c_{i}$, $\delta_{i}$, $\epsilon$, $K_{\epsilon}$, $K_{i}$, and $K$ to
denote constants which are independent of $\lambda$ and may change from line
to line. We also omit the spatial integration domain $\mathbb{T}^{3}$ in the
integrals for convenience. In Section 2 we give some basic energy estimates
for the error system, and the proof of Theorem 1.1 is given in Section 3.
## 2\. The energy estimates
In this section we obtain some energy estimates needed to prove our result. To
this end, we first derive the error system from the original system
(1.1)-(1.5) and the limiting system (1.7)-(1.11) as follows. Setting
$\tilde{n}^{\lambda}=n^{\lambda}-n,\tilde{p}^{\lambda}=p^{\lambda}-p,\tilde{v}^{\lambda}=v^{\lambda}-v,\tilde{\pi}^{\lambda}=\pi^{\lambda}-\pi,\tilde{E}^{\lambda}=E^{\lambda}-\mathcal{E}$
with
$\tilde{E}^{\lambda}=-\nabla\tilde{\Phi}^{\lambda},{E}^{\lambda}=-\nabla{\Phi}^{\lambda},\mathcal{E}=-\nabla\Phi$
and $\tilde{\Phi}^{\lambda}={\Phi}^{\lambda}-{\Phi}$, using the system
(1.1)-(1.5) and the system (1.7)-(1.11), we obtain
$\displaystyle\tilde{n}^{\lambda}_{t}=\text{div}(\nabla\tilde{n}^{\lambda}+n\tilde{E}^{\lambda}+\tilde{n}^{\lambda}(\tilde{E}^{\lambda}+\mathcal{E})-\tilde{n}^{\lambda}(\tilde{v}^{\lambda}+v)-n\tilde{v}^{\lambda}),$
(2.1)
$\displaystyle\tilde{p}^{\lambda}_{t}=\text{div}(\nabla\tilde{p}^{\lambda}-p\tilde{E}^{\lambda}-\tilde{p}^{\lambda}(\tilde{E}^{\lambda}+\mathcal{E})-\tilde{p}^{\lambda}(\tilde{v}^{\lambda}+v)-p\tilde{v}^{\lambda}),$
(2.2)
$\displaystyle-\lambda^{2}\text{div}\tilde{E}^{\lambda}=\tilde{n}^{\lambda}-\tilde{p}^{\lambda}+\lambda^{2}\text{div}\mathcal{E},$
(2.3)
$\displaystyle\tilde{v}^{\lambda}_{t}+\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda}+{v}\cdot\nabla\tilde{v}^{\lambda}+\tilde{v}^{\lambda}\cdot\nabla{v}+\nabla\tilde{\pi}^{\lambda}-\mu\Delta\tilde{v}^{\lambda}$
$\displaystyle\qquad=-(\tilde{n}^{\lambda}-\tilde{p}^{\lambda})(\tilde{E}^{\lambda}+\mathcal{E})-(n-p)\tilde{E}^{\lambda},$
(2.4) $\displaystyle\text{div}\tilde{v}^{\lambda}=0.$ (2.5)
Set ${Z}=n+p$; then (1.7)-(1.11) reduces to
$\displaystyle Z_{t}$ $\displaystyle=\text{div}(\nabla Z+D\mathcal{E}-Zv),$
$\displaystyle 0$ $\displaystyle=\text{div}(\nabla D+Z\mathcal{E}-Dv),$
$\displaystyle v_{t}+v\cdot\nabla v$ $\displaystyle=-\nabla\pi+\mu\Delta
v-D\mathcal{E},$ $\displaystyle\text{div}v$ $\displaystyle=0$
with initial data $Z(x,0)=n_{0}(x)+p_{0}(x)$ and $v(x,0)=v_{0}(x).$
To obtain the desired energy estimates, we introduce the new error variable
$\tilde{z}^{\lambda}=\tilde{n}^{\lambda}+\tilde{p}^{\lambda}$. By the Poisson
equation (2.3), we have
$\displaystyle\tilde{n}^{\lambda}=\frac{\tilde{z}^{\lambda}-\lambda^{2}\text{div}\tilde{E}^{\lambda}-\lambda^{2}\text{div}\mathcal{E}}{2},\qquad\tilde{p}^{\lambda}=\frac{\tilde{z}^{\lambda}+\lambda^{2}\text{div}\tilde{E}^{\lambda}+\lambda^{2}\text{div}\mathcal{E}}{2}.$
(2.6)
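For clarity, (2.6) follows by solving the linear system
$\tilde{n}^{\lambda}+\tilde{p}^{\lambda}=\tilde{z}^{\lambda},\qquad\tilde{n}^{\lambda}-\tilde{p}^{\lambda}=-\lambda^{2}\text{div}\tilde{E}^{\lambda}-\lambda^{2}\text{div}\mathcal{E},$
where the second relation is the error Poisson equation (2.3); adding and subtracting the two relations and dividing by two gives the two expressions above.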
Thus the error system can be reduced to the following equivalent system
$\displaystyle\tilde{z}_{t}^{\lambda}=\text{div}(\nabla\tilde{z}^{\lambda}+D\tilde{E}^{\lambda})-\lambda^{2}\text{div}(\mathcal{E}\text{div}\tilde{E}^{\lambda}+\tilde{E}^{\lambda}\text{div}\mathcal{E})-\lambda^{2}\text{div}(\mathcal{E}\text{div}\mathcal{E})$
$\displaystyle\qquad\quad-\text{div}(\tilde{z}^{\lambda}\tilde{v}^{\lambda}+\tilde{z}^{\lambda}v)-\text{div}(Z\tilde{v}^{\lambda})-\lambda^{2}\text{div}(\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda}),$
(2.7)
$\displaystyle\lambda^{2}[\partial_{t}\text{div}\tilde{E}^{\lambda}-\text{div}(\nabla\text{div}\tilde{E}^{\lambda})]+\text{div}(Z\tilde{E}^{\lambda})$
$\displaystyle\qquad=-\lambda^{2}(\partial_{t}\text{div}\mathcal{E}-\Delta\text{div}\mathcal{E})-\text{div}(\tilde{z}^{\lambda}\mathcal{E})-\text{div}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})+\text{div}(D\tilde{v}^{\lambda})$
$\displaystyle\qquad\quad-\lambda^{2}\text{div}(\tilde{v}^{\lambda}\text{div}\mathcal{E}+v\text{div}\mathcal{E})-\lambda^{2}\text{div}(\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda}+v\text{div}\tilde{E}^{\lambda}),$ (2.8)
$\displaystyle\tilde{v}^{\lambda}_{t}+\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda}+{v}\cdot\nabla\tilde{v}^{\lambda}+\tilde{v}^{\lambda}\cdot\nabla{v}+\nabla\tilde{\pi}^{\lambda}-\mu\Delta\tilde{v}^{\lambda}$
$\displaystyle\qquad=\lambda^{2}\tilde{E}^{\lambda}\text{div}\mathcal{E}+\lambda^{2}\mathcal{E}\text{div}\tilde{E}^{\lambda}+\lambda^{2}\mathcal{E}\text{div}\mathcal{E}-D\tilde{E}^{\lambda}+\lambda^{2}\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda},$
(2.9) $\displaystyle\text{div}\tilde{v}^{\lambda}=0.$ (2.10)
For the sake of notational simplicity, we set
$\tilde{\mathbf{w}}^{\lambda}=(\tilde{z}^{\lambda},\tilde{E}^{\lambda},\tilde{v}^{\lambda})$
and define the following $\lambda$-weighted Sobolev norm
$|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\equiv||(\tilde{z}^{\lambda},\lambda\tilde{E}^{\lambda},\tilde{v}^{\lambda})||^{2}_{H^{2}}+||(\tilde{z}^{\lambda}_{t},\lambda\tilde{E}^{\lambda}_{t},\tilde{v}^{\lambda}_{t})||^{2}_{H^{1}}+||\tilde{E}^{\lambda}||^{2}_{H^{1}}.$
(2.11)
The following basic inequality can be derived from Sobolev’s embedding theorem
and will be used frequently in this paper.
###### Lemma 2.1.
For $f,g\in H^{1}(\mathbb{T}^{3})$, we have
$||fg||_{L^{2}}\leq||f||_{L^{4}}\cdot||g||_{L^{4}}\leq
K||f||_{H^{1}}\cdot||g||_{H^{1}}.$ (2.12)
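The inequality (2.12) is the combination of Hölder's inequality $||fg||_{L^{2}}\leq||f||_{L^{4}}||g||_{L^{4}}$ with the Sobolev embedding $H^{1}(\mathbb{T}^{3})\hookrightarrow L^{6}(\mathbb{T}^{3})\subset L^{4}(\mathbb{T}^{3})$. In the estimates below it is typically applied to products of error terms; for instance,
$||\tilde{z}^{\lambda}\tilde{v}^{\lambda}||^{2}\leq K||\tilde{z}^{\lambda}||^{2}_{H^{1}}||\tilde{v}^{\lambda}||^{2}_{H^{1}}\leq K|||\tilde{\mathbf{w}}^{\lambda}|||^{4},$
which is how the quartic terms $|||\tilde{\mathbf{w}}^{\lambda}|||^{4}$ arise on the right-hand sides of the energy inequalities.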
### 2.1. Low order estimates
In this subsection, we derive the low order energy estimates from the error
system (2.7)-(2.10). The first estimate is the $L^{\infty}_{t}(L^{2}_{x})$
norm of $(\tilde{z}^{\lambda},\tilde{v}^{\lambda},\tilde{E}^{\lambda})$.
###### Lemma 2.2.
Under the assumptions of Theorem 1.1, we have
$\displaystyle||\tilde{z}^{\lambda}||^{2}+||\tilde{v}^{\lambda}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}||^{2}$
$\displaystyle\quad+\int^{t}_{0}\big{(}||\nabla\tilde{z}^{\lambda}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}||^{2}+||\tilde{E}^{\lambda}||^{2}+||\nabla\tilde{v}^{\lambda}||^{2}\big{)}(s)ds$
$\displaystyle\leq
K(||\tilde{z}^{\lambda}||^{2}+||\tilde{v}^{\lambda}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}||^{2})(t=0)$
$\displaystyle\quad+K\int^{t}_{0}\big{(}||\tilde{z}^{\lambda}||^{2}+||\tilde{v}^{\lambda}||^{2}+|||\tilde{\mathbf{w}}^{\lambda}|||^{4}\big{)}(s)ds+K\lambda^{4}.$
(2.13)
###### Proof.
Multiplying (2.7) by $\tilde{z}^{\lambda}$ and integrating the resulting
equation over $\mathbb{T}^{3}$ with respect to $x$, we get
$\displaystyle\frac{1}{2}\frac{d}{dt}||\tilde{z}^{\lambda}||^{2}+||\nabla\tilde{z}^{\lambda}||^{2}$
$\displaystyle=-\int
D\tilde{E}^{\lambda}\nabla\tilde{z}^{\lambda}dx+\lambda^{2}\int\mathcal{E}{\rm
div}\mathcal{E}\nabla\tilde{z}^{\lambda}dx+\int
v\tilde{z}^{\lambda}\nabla\tilde{z}^{\lambda}dx$
$\displaystyle\quad+\lambda^{2}\int(\tilde{E}^{\lambda}{\rm
div}\mathcal{E}+\mathcal{E}{\rm
div}\tilde{E}^{\lambda})\nabla\tilde{z}^{\lambda}dx+\int
Z\tilde{v}^{\lambda}\nabla\tilde{z}^{\lambda}dx$
$\displaystyle\quad+\int\tilde{z}^{\lambda}\tilde{v}^{\lambda}\nabla\tilde{z}^{\lambda}dx+\lambda^{2}\int{\rm
div}\tilde{E}^{\lambda}\tilde{E}^{\lambda}\nabla\tilde{z}^{\lambda}dx.$ (2.14)
We estimate the terms on the right-hand side of (2.14). The first five
terms, by Cauchy-Schwartz’s inequality and the regularity of
$D,\mathcal{E},v$ and $Z$, can be bounded by
$\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||(\tilde{E}^{\lambda},\tilde{v}^{\lambda},\tilde{z}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.15)
For the sixth nonlinear term, by Cauchy-Schwartz’s inequality and Sobolev’s
embedding $H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$,
we get
$\displaystyle\int\tilde{z}^{\lambda}\tilde{v}^{\lambda}\nabla\tilde{z}^{\lambda}dx$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||\tilde{v}^{\lambda}\tilde{z}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||\tilde{v}^{\lambda}||_{L^{\infty}}^{2}||\tilde{z}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||\tilde{v}^{\lambda}||_{H^{2}}^{2}||\tilde{z}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.16)
Similarly, for the last nonlinear term, we have
$\displaystyle\lambda^{2}\int{\rm
div}\tilde{E}^{\lambda}\tilde{E}^{\lambda}\nabla\tilde{z}^{\lambda}dx$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{E}^{\lambda}||_{L^{\infty}}^{2}||{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{E}^{\lambda}||_{H^{2}}^{2}||{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\nabla\tilde{z}^{\lambda}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.17)
Thus, putting (2.14)-(2.17) together and taking $\epsilon$ small enough, we
obtain
$\displaystyle\frac{d}{dt}||\tilde{z}^{\lambda}||^{2}+c_{1}||\nabla\tilde{z}^{\lambda}||^{2}\leq$
$\displaystyle
K||(\tilde{z}^{\lambda},\tilde{E}^{\lambda},\tilde{v}^{\lambda})||^{2}+K\lambda^{4}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}$
$\displaystyle+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$ (2.18)
Multiplying (2.8) by $-\tilde{\Phi}^{\lambda}$ and integrating the resulting
equation over $\mathbb{T}^{3}$ with respect to $x$, we get
$\displaystyle\frac{\lambda^{2}}{2}\frac{d}{dt}||\tilde{E}^{\lambda}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}||^{2}+\int Z|\tilde{E}^{\lambda}|^{2}dx$
$\displaystyle\quad=-\lambda^{2}\int(\partial_{t}\mathcal{E}-\Delta\mathcal{E})\tilde{E}^{\lambda}dx-\int\mathcal{E}\tilde{z}^{\lambda}\tilde{E}^{\lambda}dx-\lambda^{2}\int{\rm
div}\mathcal{E}\tilde{v}^{\lambda}\tilde{E}^{\lambda}dx$ $\displaystyle\quad\
\ -\lambda^{2}\int{\rm div}\mathcal{E}v\tilde{E}^{\lambda}dx+\int
D\tilde{v}^{\lambda}\tilde{E}^{\lambda}dx-\lambda^{2}\int
v\tilde{E}^{\lambda}{\rm div}\tilde{E}^{\lambda}dx$ $\displaystyle\quad\ \
-\lambda^{2}\int\tilde{v}^{\lambda}\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}dx-\int\tilde{z}^{\lambda}\tilde{E}^{\lambda}\tilde{E}^{\lambda}dx.$
(2.19)
The first six terms on the right-hand side of (2.19), by Cauchy-Schwartz’s
inequality and the regularity of $\mathcal{E},v$ and $D$, can be
bounded by
$\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}||(\tilde{z}^{\lambda},\tilde{v}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}||(\tilde{v}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.20)
For the seventh nonlinear term, by Cauchy-Schwartz’s inequality and Sobolev’s
embedding $H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$,
we get
$\displaystyle-\lambda^{2}\int\tilde{v}^{\lambda}\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}dx$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{v}^{\lambda}||^{2}_{L^{\infty}}||{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{v}^{\lambda}||^{2}_{H^{2}}||{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}\lambda^{2}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.21)
Similarly, for the last nonlinear term, we have
$\displaystyle-\int\tilde{z}^{\lambda}\tilde{E}^{\lambda}\tilde{E}^{\lambda}dx$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}||\tilde{z}^{\lambda}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}||\tilde{z}^{\lambda}||^{2}_{L^{\infty}}||\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}||\tilde{z}^{\lambda}||^{2}_{H^{2}}||\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\epsilon||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.22)
Putting (2.19)-(2.22) together, choosing $\epsilon$ small enough, and
restricting $\lambda$ to be small enough, we get, by the positivity of ${Z}$, that
$\displaystyle\lambda^{2}\frac{d}{dt}||\tilde{E}^{\lambda}||^{2}+2\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}||^{2}+c_{2}||\tilde{E}^{\lambda}||^{2}$
$\displaystyle\qquad\leq
K||(\tilde{z}^{\lambda},\tilde{v}^{\lambda})||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.23)
Multiplying (2.9) by $\tilde{v}^{\lambda}$ and integrating the resulting
equation over $\mathbb{T}^{3}$ with respect to $x$, by (2.10) and integration
by parts, we obtain
$\displaystyle\frac{1}{2}\frac{d}{dt}||\tilde{v}^{\lambda}||^{2}+\mu||\nabla\tilde{v}^{\lambda}||^{2}$
$\displaystyle\quad=-\int
D\tilde{E}^{\lambda}\tilde{v}^{\lambda}dx+\lambda^{2}\int\tilde{v}^{\lambda}\tilde{E}^{\lambda}{\rm
div}\mathcal{E}dx+\lambda^{2}\int\mathcal{E}\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda}dx$
$\displaystyle\quad\quad+\lambda^{2}\int\tilde{v}^{\lambda}\mathcal{E}{\rm
div}\mathcal{E}dx-\int(\tilde{v}^{\lambda}\cdot\nabla
v)\tilde{v}^{\lambda}dx+\lambda^{2}\int\tilde{v}^{\lambda}\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}dx,$ (2.24)
where we have used the identities
$\int(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}dx=0,\qquad\int({v}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}dx=0.$
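These identities are a consequence of the divergence-free conditions and integration by parts on the torus: for any divergence-free field $u$ and periodic $w$,
$\int(u\cdot\nabla w)\cdot w\,dx=\frac{1}{2}\int u\cdot\nabla|w|^{2}dx=-\frac{1}{2}\int({\rm div}\,u)|w|^{2}dx=0,$
applied here with $u=\tilde{v}^{\lambda}$ and $u=v$ (both divergence-free by (2.10) and (1.11)) and $w=\tilde{v}^{\lambda}$.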
We estimate the terms on the right-hand side of (2.24). The first four
terms, by Cauchy-Schwartz’s inequality and the regularity of $D$ and
$\mathcal{E}$, can be bounded by
$K||(\tilde{v}^{\lambda},\tilde{E}^{\lambda})||^{2}+K\lambda^{2}||(\tilde{v}^{\lambda},\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+K\lambda^{4}.$ (2.25)
The fifth nonlinear term can be treated as follows
$\int(\tilde{v}^{\lambda}\cdot\nabla v)\tilde{v}^{\lambda}dx\leq K||\nabla
v||_{L^{\infty}}||\tilde{v}^{\lambda}||^{2}\leq K||\tilde{v}^{\lambda}||^{2}.$
(2.26)
For the last nonlinear term, by Cauchy-Schwartz’s inequality and Sobolev’s
embedding $H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$,
we get
$\displaystyle\lambda^{2}\int\tilde{v}^{\lambda}\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}dx$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}||^{2}+\frac{1}{2}\lambda^{4}||\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}||^{2}+\frac{1}{2}\lambda^{4}||\tilde{E}^{\lambda}||^{2}_{L^{\infty}}||{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}||^{2}+\frac{1}{2}\lambda^{4}||\tilde{E}^{\lambda}||^{2}_{H^{2}}||{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}||^{2}+\frac{1}{2}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.27)
Thus, putting (2.24)-(2.27) together, we get
$\displaystyle\frac{d}{dt}||\tilde{v}^{\lambda}||^{2}+2\mu||\nabla\tilde{v}^{\lambda}||^{2}\leq$
$\displaystyle K||(\tilde{v}^{\lambda},\tilde{E}^{\lambda})||^{2}$
$\displaystyle+K\lambda^{2}||(\tilde{v}^{\lambda},\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.28)
Combining (2.18) and (2.28) with (2.23), and restricting $\lambda$ to be small enough,
we get
$\displaystyle\frac{d}{dt}\Big{(}\delta_{1}||\tilde{z}^{\lambda}||^{2}+\delta_{2}||\tilde{v}^{\lambda}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}||^{2}\Big{)}+c_{1}\delta_{1}||\nabla\tilde{z}^{\lambda}||^{2}$
$\displaystyle\quad\ \
+2\mu\delta_{2}||\nabla\tilde{v}^{\lambda}||^{2}+\big{(}2\lambda^{2}-K(\lambda^{2}\delta_{2}+\lambda^{4}\delta_{1})\big{)}||{\rm
div}\tilde{E}^{\lambda}||^{2}$ $\displaystyle\quad\ \
+\big{(}c_{2}-K(\delta_{1}+\delta_{2})-K(\lambda^{2}\delta_{2}+\lambda^{4}\delta_{1})\big{)}||\tilde{E}^{\lambda}||^{2}$
$\displaystyle\quad\leq
K_{1}||(\tilde{z}^{\lambda},\tilde{v}^{\lambda})||^{2}+K_{1}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K_{1}\lambda^{4}$
(2.29)
for some sufficiently small $\delta_{1}$ and $\delta_{2}$, which gives the
inequality (2.13). ∎
Next, we estimate the $L^{\infty}_{t}(L^{2}_{x})$ norm of
$(\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda}_{t},\tilde{E}^{\lambda}_{t})$ by
using the system (2.7)-(2.10).
###### Lemma 2.3.
Under the assumptions of Theorem 1.1, we have
$\displaystyle||\tilde{z}^{\lambda}_{t}||^{2}+||\tilde{v}^{\lambda}_{t}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}_{t}||^{2}$
$\displaystyle\quad+\int^{t}_{0}\big{(}||\nabla\tilde{z}^{\lambda}_{t}||^{2}+||\nabla\tilde{v}^{\lambda}_{t}||^{2}+||\tilde{E}^{\lambda}_{t}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}(s)ds$ $\displaystyle\leq
K(||\tilde{z}^{\lambda}_{t}||^{2}+||\tilde{v}^{\lambda}_{t}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}_{t}||^{2})(t=0)$
$\displaystyle\quad+K\int^{t}_{0}\big{(}||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\nabla\tilde{v}^{\lambda},\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}\big{)}(s)ds$
$\displaystyle\quad+K\int^{t}_{0}\big{\\{}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+|||\tilde{\mathbf{w}}^{\lambda}|||^{2}||\tilde{E}^{\lambda}_{t}||^{2}\big{\\}}(s)ds+K\lambda^{4}.$
(2.30)
###### Proof.
Differentiating (2.7) with respect to $t$, multiplying the resulting equation
by $\tilde{z}^{\lambda}_{t}$ and integrating it over $\mathbb{T}^{3}$ with
respect to $x$, we get
$\displaystyle\frac{1}{2}\frac{d}{dt}||\tilde{z}^{\lambda}_{t}||^{2}+||\nabla\tilde{z}^{\lambda}_{t}||^{2}$
$\displaystyle\quad=-\int
D\tilde{E}^{\lambda}_{t}\nabla\tilde{z}^{\lambda}_{t}dx+\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\mathcal{E})\nabla\tilde{z}^{\lambda}_{t}dx+\int\partial_{t}(\tilde{z}^{\lambda}v)\nabla\tilde{z}^{\lambda}_{t}dx$
$\displaystyle\quad\ \ +\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\tilde{E}^{\lambda}+\tilde{E}^{\lambda}{\rm
div}\mathcal{E})\nabla\tilde{z}^{\lambda}_{t}dx+\int\partial_{t}(Z\tilde{v}^{\lambda})\nabla\tilde{z}^{\lambda}_{t}dx$
$\displaystyle\quad\ \ +\lambda^{2}\int\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\nabla\tilde{z}^{\lambda}_{t}dx+\int\partial_{t}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})\nabla\tilde{z}^{\lambda}_{t}dx.$
(2.31)
We estimate the terms on the right-hand side of (2.31). The first five
terms, by Cauchy-Schwartz’s inequality and the regularity of
$D,\mathcal{E},v$ and $Z$, can be bounded by
$\displaystyle\epsilon||\nabla\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}||\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K_{\epsilon}||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.32)
For the last two nonlinear terms, by Cauchy-Schwartz’s inequality, Sobolev’s
embedding $H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$,
and the inequality (2.12), we get
$\displaystyle\lambda^{2}\int\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\nabla\tilde{z}^{\lambda}_{t}dx+\int\partial_{t}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})\nabla\tilde{z}^{\lambda}_{t}dx$
$\displaystyle\leq$
$\displaystyle\epsilon||\nabla\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}\lambda^{4}||\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}||\partial_{t}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})||^{2}$
$\displaystyle\leq$
$\displaystyle\epsilon||\nabla\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}_{t}{\rm
div}\tilde{E}^{\lambda}||^{2}+||\tilde{E}^{\lambda}||^{2}_{L^{\infty}}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle+K_{\epsilon}(||\tilde{z}^{\lambda}_{t}\tilde{v}^{\lambda}||^{2}+||\tilde{z}^{\lambda}\tilde{v}^{\lambda}_{t}||^{2})$
$\displaystyle\leq$
$\displaystyle\epsilon||\nabla\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}||^{2}_{H^{2}}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle+K_{\epsilon}(||\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}||\tilde{v}^{\lambda}||^{2}_{H^{1}}+||\tilde{z}^{\lambda}||_{H^{1}}^{2}||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}})$
$\displaystyle\leq$
$\displaystyle\epsilon||\nabla\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.33)
Putting (2.31)-(2.33) together and taking $\epsilon$ small enough, we get
$\displaystyle\frac{1}{2}\frac{d}{dt}||\tilde{z}^{\lambda}_{t}||^{2}+c_{3}||\nabla\tilde{z}^{\lambda}_{t}||^{2}$
$\displaystyle\quad\leq
K||\tilde{E}^{\lambda}_{t}||^{2}+K||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t})||^{2}$
$\displaystyle\quad\ \ \
+K\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.34)
Differentiating (2.8) with respect to $t$, multiplying the resulting equation
by $-\tilde{\Phi}^{\lambda}_{t}$ and integrating it over $\mathbb{T}^{3}$ with
respect to $x$, we get
$\displaystyle\quad\frac{\lambda^{2}}{2}\frac{d}{dt}||\tilde{E}^{\lambda}_{t}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+\int Z|\tilde{E}^{\lambda}_{t}|^{2}dx$
$\displaystyle\quad=-\int
Z_{t}\tilde{E}^{\lambda}\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}(\partial_{t}\mathcal{E}-\Delta\mathcal{E})\tilde{E}^{\lambda}_{t}dx-\int\partial_{t}(\mathcal{E}\tilde{z}^{\lambda})\tilde{E}^{\lambda}_{t}dx$
$\displaystyle\quad\ \ -\lambda^{2}\int\partial_{t}({\rm
div}\mathcal{E}\tilde{v}^{\lambda})\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}(v{\rm
div}\mathcal{E})\tilde{E}^{\lambda}_{t}dx+\int\partial_{t}(D\tilde{v}^{\lambda})\tilde{E}^{\lambda}_{t}dx$
$\displaystyle\quad\ \ -\lambda^{2}\int\partial_{t}(v{\rm
div}\tilde{E}^{\lambda})\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}(\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda}){\rm
div}\tilde{E}^{\lambda}_{t}dx-\int\partial_{t}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})\tilde{E}^{\lambda}_{t}dx.$
(2.35)
The first seven terms on the right-hand side of (2.35), by Cauchy-
Schwartz’s inequality and the regularity of $\mathcal{E},v,D$ and $Z$,
can be bounded by
$\displaystyle\epsilon||\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}||\tilde{E}^{\lambda}||^{2}+K_{\epsilon}||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t})||^{2}+K_{\epsilon}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}$
$\displaystyle\qquad+K_{\epsilon}\lambda^{4}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}||({\rm
div}\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.36)
The last two nonlinear terms on the right-hand side of (2.35), by Cauchy-
Schwartz’s inequality, Sobolev’s embedding
$H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$, and the
inequality (2.12), can be estimated as follows
$\displaystyle-\lambda^{2}\int\partial_{t}(\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\tilde{E}^{\lambda}_{t}dx-\int\partial_{t}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})\tilde{E}^{\lambda}_{t}dx$
$\displaystyle\quad\leq\epsilon||\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}\lambda^{4}||\partial_{t}(\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}||\partial_{t}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})||^{2}$
$\displaystyle\quad\leq\epsilon||\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}\lambda^{4}(||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}}||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{v}^{\lambda}||^{2}_{L^{\infty}}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$ $\displaystyle\quad\ \ \
+K_{\epsilon}(||\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}||\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{z}^{\lambda}||^{2}_{L^{\infty}}||\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle\quad\leq\epsilon||\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}\lambda^{4}(||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}}||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{v}^{\lambda}||^{2}_{H^{2}}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$ $\displaystyle\quad\ \ \
+K_{\epsilon}(||\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}||\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{z}^{\lambda}||^{2}_{H^{2}}||\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle\quad\leq\epsilon||\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}\big{(}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+|||\tilde{\mathbf{w}}^{\lambda}|||^{2}||\tilde{E}^{\lambda}_{t}||^{2}\big{)}.$
(2.37)
Putting (2.35)-(2.37) together, using the positivity of $Z$, and taking
$\epsilon$ small enough, we get
$\displaystyle{\lambda^{2}}\frac{d}{dt}||\tilde{E}^{\lambda}_{t}||^{2}+2\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+c_{4}||\tilde{E}^{\lambda}_{t}||^{2}$
$\displaystyle\quad\leq
K||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\tilde{E}^{\lambda})||^{2}+K\lambda^{4}||({\rm
div}\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K\lambda^{4}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}$
$\displaystyle\quad\ \
+K\big{(}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+|||\tilde{\mathbf{w}}^{\lambda}|||^{2}||\tilde{E}^{\lambda}_{t}||^{2}\big{)}+K\lambda^{4}.$
(2.38)
Differentiating (2.9) with respect to $t$, multiplying the resulting equation
by $\tilde{v}^{\lambda}_{t}$, integrating it over $\mathbb{T}^{3}$ with
respect to $x$ and using $\text{div}\tilde{v}^{\lambda}_{t}=0$, we get
$\displaystyle\frac{1}{2}\frac{d}{dt}||\tilde{v}^{\lambda}_{t}||^{2}+\mu||\nabla\tilde{v}^{\lambda}_{t}||^{2}$
$\displaystyle\quad=-\int\partial_{t}(D\tilde{E}^{\lambda})\tilde{v}^{\lambda}_{t}dx+\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\mathcal{E})\tilde{v}^{\lambda}_{t}dx+\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\tilde{E}^{\lambda})\tilde{v}^{\lambda}_{t}dx$ $\displaystyle\quad\ \ \
+\lambda^{2}\int\partial_{t}(\tilde{E}{\rm
div}\mathcal{E})\tilde{v}^{\lambda}_{t}dx-\int\partial_{t}({v}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx-\int\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla{v})\tilde{v}^{\lambda}_{t}dx$
$\displaystyle\quad\ \ \
-\int\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx+\lambda^{2}\int\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\tilde{v}^{\lambda}_{t}dx.$ (2.39)
We estimate the terms on the right-hand side of (2.39). By Cauchy-Schwartz’s
inequality and using the regularity of $D$ and $\mathcal{E}$, we get
$\displaystyle-\int\partial_{t}(D\tilde{E}^{\lambda})\tilde{v}^{\lambda}_{t}dx+\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\mathcal{E})\tilde{v}^{\lambda}_{t}dx$
$\displaystyle+\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\tilde{E}^{\lambda})\tilde{v}^{\lambda}_{t}dx+\lambda^{2}\int\partial_{t}(\tilde{E}{\rm
div}\mathcal{E})\tilde{v}^{\lambda}_{t}dx$ $\displaystyle\quad\leq
K(||\tilde{v}^{\lambda}_{t}||^{2}+||\tilde{E}^{\lambda}||^{2}+||\tilde{E}^{\lambda}_{t}||^{2})+K\lambda^{4}||({\rm
div}\tilde{E}^{\lambda},{\rm div}\tilde{E}^{\lambda}_{t})||^{2}$
$\displaystyle\quad\ \
+K\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t})||^{2}+K\lambda^{4}.$
(2.40)
Now we deal with the trilinear terms involving
$\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t}$, ${v}$, and ${v}_{t}$. Using the
identities
$\int(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda}_{t})\tilde{v}^{\lambda}_{t}dx=0,\quad\int({v}\cdot\nabla\tilde{v}^{\lambda}_{t})\tilde{v}^{\lambda}_{t}dx=0,$
we have
$\displaystyle\quad-\int\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx-\int\partial_{t}({v}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx-\int\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla{v})\tilde{v}^{\lambda}_{t}dx$
$\displaystyle=-\int(\tilde{v}^{\lambda}_{t}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx-\int({v}_{t}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx-\int(\tilde{v}^{\lambda}_{t}\cdot\nabla{v})\tilde{v}^{\lambda}_{t}dx-\int(\tilde{v}^{\lambda}\cdot\nabla{v}_{t})\tilde{v}^{\lambda}_{t}dx.$
(2.41)
By Cauchy-Schwartz’s inequality, using the regularity of ${v}$ and the
inequality (2.12), we get
$\displaystyle-\int(\tilde{v}^{\lambda}_{t}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+\frac{1}{2}||\tilde{v}^{\lambda}_{t}\cdot\nabla\tilde{v}^{\lambda}||^{2}$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+\frac{K}{2}||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}}||\nabla\tilde{v}^{\lambda}||^{2}_{H^{1}}$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4},$
(2.42)
$\displaystyle-\int({v}_{t}\cdot\nabla\tilde{v}^{\lambda})\tilde{v}^{\lambda}_{t}dx$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+\frac{1}{2}||{v}_{t}\cdot\nabla\tilde{v}^{\lambda}||^{2}$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+K||\nabla\tilde{v}^{\lambda}||^{2},$
(2.43)
$\displaystyle-\int(\tilde{v}^{\lambda}_{t}\cdot\nabla{v})\tilde{v}^{\lambda}_{t}dx$
$\displaystyle\leq K||\nabla
v||_{L^{\infty}}||\tilde{v}^{\lambda}_{t}||^{2}\leq
K||\tilde{v}^{\lambda}_{t}||^{2},$ (2.44)
$\displaystyle-\int(\tilde{v}^{\lambda}\cdot\nabla{v}_{t})\tilde{v}^{\lambda}_{t}dx$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+\frac{1}{2}||\tilde{v}^{\lambda}\cdot\nabla{v}_{t}||^{2}$
$\displaystyle\leq\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+K||\tilde{v}^{\lambda}||^{2}.$
(2.45)
The last nonlinear term can be treated similarly to (2.33):
$\displaystyle\lambda^{2}\int\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\tilde{v}^{\lambda}_{t}dx\leq$
$\displaystyle\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+\frac{1}{2}\lambda^{4}||\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})||^{2}$ $\displaystyle\leq$
$\displaystyle\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+\frac{1}{2}\lambda^{4}(||\tilde{E}^{\lambda}_{t}{\rm
div}\tilde{E}^{\lambda}||^{2}+||\tilde{E}^{\lambda}||^{2}_{L^{\infty}}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$ $\displaystyle\leq$
$\displaystyle\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+\frac{1}{2}\lambda^{4}(||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}||^{2}_{H^{2}}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$ $\displaystyle\leq$
$\displaystyle\frac{1}{2}||\tilde{v}^{\lambda}_{t}||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.46)
Putting (2.39)-(2.46) together, we have
$\displaystyle\frac{d}{dt}||\tilde{v}^{\lambda}_{t}||^{2}+\mu||\nabla\tilde{v}^{\lambda}_{t}||^{2}$
$\displaystyle\quad\leq
K||(\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t})||^{2}+K\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda},{\rm div}\tilde{E}^{\lambda}_{t})||^{2}$
$\displaystyle\quad\ \ +K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.47)
Combining (2.34), (2.38) and (2.47), and restricting $\lambda$ to be small enough, we
get
$\displaystyle\frac{d}{dt}(\delta_{3}||\tilde{z}^{\lambda}_{t}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}_{t}||^{2}+\delta_{4}||\tilde{v}^{\lambda}_{t}||^{2})$
$\displaystyle\ \
+2\delta_{3}c_{3}||\nabla\tilde{z}^{\lambda}_{t}||^{2}+\mu\delta_{4}||\nabla\tilde{v}^{\lambda}_{t}||^{2}+c_{5}||\tilde{E}^{\lambda}_{t}||^{2}+c_{6}\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}$ $\displaystyle\quad\leq
K_{2}||(\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}$ $\displaystyle\quad\ \
+K_{2}\big{(}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+|||\tilde{\mathbf{w}}^{\lambda}|||^{2}||\tilde{E}^{\lambda}_{t}||^{2}\big{)}+K_{2}\lambda^{4}.$
(2.48)
for some sufficiently small $\delta_{3}$ and $\delta_{4}$, which gives the
inequality (2.30). ∎
Using Lemma 2.3, we can obtain the $L^{\infty}_{t}(L^{2}_{x})$ norm of
$(\nabla\tilde{z}^{\lambda},\nabla\tilde{v}^{\lambda},\tilde{E}^{\lambda},\lambda{\rm
div}\tilde{E}^{\lambda})$.
###### Lemma 2.4.
Under the assumptions of Theorem 1.1, we have
$\displaystyle||(\nabla\tilde{z}^{\lambda},\nabla\tilde{v}^{\lambda},\tilde{E}^{\lambda})||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}||^{2}$ $\displaystyle\quad\leq
K||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K\lambda^{2}||\tilde{E}^{\lambda}_{t}||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.49)
###### Proof.
It follows from (2.29) and Cauchy-Schwartz’s inequality that
$\displaystyle\
c_{1}\delta_{1}||\nabla\tilde{z}^{\lambda}||^{2}+\mu\delta_{2}||\nabla\tilde{v}^{\lambda}||^{2}+\big{(}2\lambda^{2}-K(\lambda^{2}\delta_{2}+\lambda^{4}\delta_{1})\big{)}||{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle+\big{(}c_{2}-K(\delta_{1}+\delta_{2})-K(\lambda^{2}\delta_{2}+\lambda^{4}\delta_{1})\big{)}||\tilde{E}^{\lambda}||^{2}$
$\displaystyle\quad\leq-\frac{d}{dt}\Big{(}\delta_{1}||\tilde{z}^{\lambda}||^{2}+\delta_{2}||\tilde{v}^{\lambda}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}||^{2}\Big{)}$
$\displaystyle\quad\quad+K_{1}||(\tilde{z}^{\lambda},\tilde{v}^{\lambda})||^{2}+K_{1}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K_{1}\lambda^{4}$
$\displaystyle\quad\leq
K||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K\lambda^{2}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t})||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4},$
which gives (2.49) by using Lemma 2.3. ∎
### 2.2. High order estimates
In this subsection we will establish the $L^{\infty}_{t}(L^{2}_{x})$ norm of the
higher order spatial derivatives
$(\Delta\tilde{z}^{\lambda},\Delta\tilde{v}^{\lambda},\text{div}\tilde{E}^{\lambda},\lambda\nabla\text{div}\tilde{E}^{\lambda})$.
###### Lemma 2.5.
Under the assumptions of Theorem 1.1, we have
$\displaystyle||\Delta\tilde{z}^{\lambda}||^{2}+||\Delta\tilde{v}^{\lambda}||^{2}+||{\rm
div}\tilde{E}^{\lambda}||^{2}+\lambda^{2}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}$ $\displaystyle\leq$ $\displaystyle
K||(\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda}_{t},\tilde{E}^{\lambda})||^{2}$
$\displaystyle+K\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.50)
###### Proof.
Multiplying (2.7) by $-\Delta\tilde{z}^{\lambda}$, integrating the resulting
equation over $\mathbb{T}^{3}$ with respect to $x$, we get
$\displaystyle\frac{1}{2}\frac{d}{dt}||\nabla\tilde{z}^{\lambda}||^{2}+||\Delta\tilde{z}^{\lambda}||^{2}$
$\displaystyle\ \ =-\int{\rm
div}(D\tilde{E}^{\lambda})\Delta\tilde{z}^{\lambda}dx+\lambda^{2}\int{\rm
div}(\tilde{E}^{\lambda}{\rm div}\mathcal{E}+\mathcal{E}{\rm
div}\tilde{E}^{\lambda})\Delta\tilde{z}^{\lambda}dx$ $\displaystyle\quad\ \ \
+\lambda^{2}\int{\rm div}(\mathcal{E}{\rm
div}\mathcal{E})\Delta\tilde{z}^{\lambda}dx+\int{\rm
div}(Z\tilde{v}^{\lambda})\Delta\tilde{z}^{\lambda}dx+\int{\rm
div}(v\tilde{z}^{\lambda})\Delta\tilde{z}^{\lambda}dx$ $\displaystyle\quad\ \
\ +\int{\rm
div}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})\Delta\tilde{z}^{\lambda}dx+\lambda^{2}\int{\rm
div}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\Delta\tilde{z}^{\lambda}dx.$ (2.51)
We estimate the terms on the right-hand side of (2.51). By Cauchy-Schwartz’s
inequality and using the regularity of $D,\mathcal{E},Z$ and $v$, the first
two terms can be bounded by
$\epsilon||\Delta\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda},\nabla{\rm div}\tilde{E}^{\lambda})||^{2}$ (2.52)
and the third, fourth and fifth terms can be bounded by
$\epsilon||\Delta\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||(\nabla\tilde{z}^{\lambda},\tilde{v}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4},$
(2.53)
where we use the facts that $\text{div}\tilde{v}^{\lambda}=0$ and
$\text{div}{v}=0$. For the last two nonlinear terms, using the facts that
${\rm
div}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})=\nabla\tilde{z}^{\lambda}\cdot\tilde{v}^{\lambda},\quad{\rm
div}(\tilde{E}^{\lambda}{\rm div}\tilde{E}^{\lambda})=\tilde{E}^{\lambda}\cdot\nabla{\rm
div}\tilde{E}^{\lambda}+({\rm div}\tilde{E}^{\lambda})^{2},$
Cauchy-Schwartz’s inequality, Sobolev’s embedding
$H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$ and the
inequality (2.12), we have
$\displaystyle\int{\rm
div}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})\Delta\tilde{z}^{\lambda}dx+\lambda^{2}\int{\rm
div}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\Delta\tilde{z}^{\lambda}dx$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||\nabla\tilde{z}^{\lambda}\tilde{v}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||{\rm
div}(\tilde{E}^{\lambda}{\rm div}\tilde{E}^{\lambda})||^{2}$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||\nabla\tilde{z}^{\lambda}||^{2}||\tilde{v}^{\lambda}||^{2}_{L^{\infty}}$
$\displaystyle\quad\ \ \
+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}||^{2}_{L^{\infty}}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}}||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}})$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{z}^{\lambda}||^{2}+K_{\epsilon}||\nabla\tilde{z}^{\lambda}||^{2}||\tilde{v}^{\lambda}||^{2}_{H^{2}}$
$\displaystyle\quad\ \ \
+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}||^{2}_{H^{2}}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}}||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{1}})$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{z}^{\lambda}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.54)
Putting (2.51)-(2.54) together and choosing $\epsilon$ small enough, we have
$\displaystyle\frac{d}{dt}||\nabla\tilde{z}^{\lambda}||^{2}+c_{7}||\Delta\tilde{z}^{\lambda}||^{2}$
$\displaystyle\quad\leq K||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+K\lambda^{4}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda},\nabla{\rm div}\tilde{E}^{\lambda})||^{2}$
$\displaystyle\quad\ \ \
+K||(\nabla\tilde{z}^{\lambda},\tilde{v}^{\lambda})||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.55)
Multiplying (2.8) by ${\rm div}\tilde{E}^{\lambda}$ and integrating the
resulting equation over $\mathbb{T}^{3}$ with respect to $x$, we get
$\displaystyle\frac{\lambda^{2}}{2}\frac{d}{dt}||{\rm
div}\tilde{E}^{\lambda}||^{2}+\lambda^{2}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+\int Z|{\rm div}\tilde{E}^{\lambda}|^{2}dx$
$\displaystyle=-\int\nabla Z\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}dx-\lambda^{2}\int{\rm
div}(\partial_{t}\mathcal{E}-\Delta\mathcal{E}){\rm
div}\tilde{E}^{\lambda}dx-\int{\rm div}(\mathcal{E}\tilde{z}^{\lambda}){\rm
div}\tilde{E}^{\lambda}dx$ $\displaystyle\ \ \
-\lambda^{2}\int\tilde{v}^{\lambda}\nabla{\rm div}\mathcal{E}{\rm
div}\tilde{E}^{\lambda}dx-\lambda^{2}\int v\nabla{\rm div}\mathcal{E}{\rm
div}\tilde{E}^{\lambda}dx+\int\tilde{v}^{\lambda}\nabla D{\rm
div}\tilde{E}^{\lambda}dx$ $\displaystyle\ \ \ -\lambda^{2}\int v\nabla{\rm
div}\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda}dx-\lambda^{2}\int\tilde{v}^{\lambda}\nabla{\rm
div}\tilde{E}^{\lambda}{\rm div}\tilde{E}^{\lambda}dx-\int{\rm
div}(\tilde{z}^{\lambda}\tilde{E}^{\lambda}){\rm div}\tilde{E}^{\lambda}dx,$
(2.56)
where we have used $\text{div}\tilde{v}^{\lambda}=0$ and $\text{div}{v}=0$. By
Cauchy-Schwartz’s inequality and using the regularity of $\mathcal{E},v,D$ and
$Z$, the first seven terms on the right-hand side of (2.56) can be bounded by
$\epsilon||{\rm
div}\tilde{E}^{\lambda}||^{2}+K_{\epsilon}||(\tilde{E}^{\lambda},\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda},\tilde{v}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}||(\tilde{v}^{\lambda},\nabla{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.57)
By Cauchy-Schwartz’s inequality, Sobolev’s embedding
$H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$, and using
the inequality (2.12), the last two nonlinear terms on the right hand side of
(2.56) can be treated as follows
$\displaystyle-\lambda^{2}\int\tilde{v}^{\lambda}\nabla{\rm
div}\tilde{E}^{\lambda}{\rm div}\tilde{E}^{\lambda}dx-\int{\rm
div}(\tilde{z}^{\lambda}\tilde{E}^{\lambda}){\rm div}\tilde{E}^{\lambda}dx$
$\displaystyle\leq$ $\displaystyle\epsilon||{\rm
div}\tilde{E}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{v}^{\lambda}\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+K_{\epsilon}||{\rm
div}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})||^{2}$ $\displaystyle\leq$
$\displaystyle\epsilon||{\rm
div}\tilde{E}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{v}^{\lambda}||^{2}_{L^{\infty}}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle+K_{\epsilon}(||\tilde{z}^{\lambda}||^{2}_{L^{\infty}}||{\rm
div}\tilde{E}^{\lambda}||^{2}+||{\nabla}\tilde{z}^{\lambda}||^{2}_{H^{1}}||\tilde{E}^{\lambda}||^{2}_{H^{1}})$
$\displaystyle\leq$ $\displaystyle\epsilon||{\rm
div}\tilde{E}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{v}^{\lambda}||^{2}_{H^{2}}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle+K_{\epsilon}(||\tilde{z}^{\lambda}||^{2}_{H^{2}}||{\rm
div}\tilde{E}^{\lambda}||^{2}+||{\nabla}\tilde{z}^{\lambda}||^{2}_{H^{1}}||\tilde{E}^{\lambda}||^{2}_{H^{1}})$
$\displaystyle\leq$ $\displaystyle\epsilon||{\rm
div}\tilde{E}^{\lambda}||^{2}+K_{\epsilon}(1+\lambda^{2})|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.58)
Putting (2.56)-(2.58) together and using the positivity of ${Z}$, we get
$\displaystyle\lambda^{2}\frac{d}{dt}||{\rm
div}\tilde{E}^{\lambda}||^{2}+2\lambda^{2}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+c_{8}||{\rm div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\quad\leq
K||(\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda},\tilde{E}^{\lambda},\tilde{v}^{\lambda})||^{2}+K\lambda^{4}||(\tilde{v}^{\lambda},\nabla{\rm
div}\tilde{E}^{\lambda})||^{2}$ $\displaystyle\quad\ \ \
+K(1+\lambda^{2})|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$ (2.59)
Multiplying (2.9) by $-\Delta\tilde{v}^{\lambda}$ and integrating the
resulting equation over $\mathbb{T}^{3}$ with respect to $x$, by (2.10) and
integration by parts, we have
$\displaystyle\frac{1}{2}\frac{d}{dt}||\nabla\tilde{v}^{\lambda}||^{2}+\mu||\Delta\tilde{v}^{\lambda}||^{2}$
$\displaystyle\quad=\int(v\cdot\nabla\tilde{v}^{\lambda})\Delta\tilde{v}^{\lambda}dx+\int(\tilde{v}^{\lambda}\cdot\nabla{v})\Delta\tilde{v}^{\lambda}dx-\lambda^{2}\int\tilde{E}^{\lambda}\text{div}\mathcal{E}\Delta\tilde{v}^{\lambda}dx$
$\displaystyle\quad\ \ \
-\lambda^{2}\int\mathcal{E}\text{div}\tilde{E}^{\lambda}\Delta\tilde{v}^{\lambda}dx-\lambda^{2}\int\mathcal{E}\text{div}\mathcal{E}\Delta\tilde{v}^{\lambda}dx+\int
D\tilde{E}^{\lambda}\Delta\tilde{v}^{\lambda}dx$ $\displaystyle\quad\ \ \
-\lambda^{2}\int\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda}\Delta\tilde{v}^{\lambda}dx+\int(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})\Delta\tilde{v}^{\lambda}dx$
(2.60)
We estimate the terms on the right-hand side of (2.60). By Cauchy-Schwartz’s
inequality and using the regularity of $v,\mathcal{E}$ and $D$, the first six
terms can be bounded by
$\epsilon||\Delta\tilde{v}^{\lambda}||^{2}+K_{\epsilon}||(\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda},\tilde{E}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.61)
By Cauchy-Schwartz’s inequality, Sobolev’s embedding
$H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$, and using
the inequality (2.12), the last two nonlinear terms can be treated as follows
$\displaystyle\qquad-\lambda^{2}\int\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda}\Delta\tilde{v}^{\lambda}dx+\int(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})\Delta\tilde{v}^{\lambda}dx$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{v}^{\lambda}||^{2}+K_{\epsilon}||\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{v}^{\lambda}||^{2}+K_{\epsilon}||\tilde{v}^{\lambda}||^{2}_{L^{\infty}}||\nabla\tilde{v}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{E}^{\lambda}||^{2}_{L^{\infty}}||\text{div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{v}^{\lambda}||^{2}+K_{\epsilon}||\tilde{v}^{\lambda}||^{2}_{H^{2}}||\nabla\tilde{v}^{\lambda}||^{2}+K_{\epsilon}\lambda^{4}||\tilde{E}^{\lambda}||^{2}_{H^{2}}||\text{div}\tilde{E}^{\lambda}||^{2}$
$\displaystyle\quad\leq\epsilon||\Delta\tilde{v}^{\lambda}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.62)
Putting (2.2)-(2.2) together and choosing $\epsilon$ small enough, we have
$\displaystyle\frac{d}{dt}||\nabla\tilde{v}^{\lambda}||^{2}+c_{10}\mu||\Delta\tilde{v}^{\lambda}||^{2}$
$\displaystyle\leq
K||(\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda},\tilde{E}^{\lambda})||^{2}$
$\displaystyle\ \ +K\lambda^{4}||(\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda})||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.63)
Combining (2.2), (2.2) and (2.2), and restricting $\lambda$ to be sufficiently small, we get
$\displaystyle\frac{d}{dt}\Big{(}\delta_{5}||\nabla\tilde{z}^{\lambda}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}||^{2}+\delta_{6}||\nabla\tilde{v}^{\lambda}||^{2}\Big{)}$
$\displaystyle\quad\ \
+\delta_{5}c_{8}||\Delta\tilde{z}^{\lambda}||^{2}+(2\lambda^{2}-K\lambda^{4}\delta_{5}-K\lambda^{4})||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+\delta_{6}\mu
c_{9}||\Delta\tilde{v}^{\lambda}||^{2}$ $\displaystyle\quad\ \
+(c_{8}-K\delta_{5}(\lambda^{4}+1)-K\delta_{6}\lambda^{4})||{\rm
div}\tilde{E}^{\lambda}||^{2}$ $\displaystyle\quad\leq
K_{3}||(\tilde{E}^{\lambda},\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda},\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda})||^{2}+K_{3}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K_{3}\lambda^{4}$
(2.64)
for some sufficiently small $\delta_{5}$ and $\delta_{6}$, which gives the
inequality (2.5). ∎
In order to close the estimates on the right-hand side of (2.5), we need to
obtain uniform bounds on the time derivatives
$(\nabla\tilde{z}^{\lambda}_{t},\nabla\tilde{v}^{\lambda}_{t},\lambda\text{div}\tilde{E}^{\lambda}_{t})$,
which are given by the next lemma.
###### Lemma 2.6.
Under the assumptions of Theorem 1.1, we have
$\displaystyle\quad||\nabla\tilde{z}^{\lambda}_{t}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+||\nabla\tilde{v}^{\lambda}_{t}||^{2}$
$\displaystyle\quad+\int^{t}_{0}\big{(}||\Delta\tilde{z}^{\lambda}_{t}||^{2}+||\Delta\tilde{v}^{\lambda}_{t}||^{2}+||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+\lambda^{2}||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}(s)ds$ $\displaystyle\leq
K\big{(}||\nabla\tilde{z}^{\lambda}_{t}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+||\nabla\tilde{v}^{\lambda}_{t}||^{2}\big{)}(t=0)$
$\displaystyle\quad+K\int^{t}_{0}\big{(}||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\nabla\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda}_{t})||^{2}+||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\nabla\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda}_{t})||^{2})(s)ds$
$\displaystyle\quad+K\int^{t}_{0}\big{(}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda},\nabla{\rm
div}\tilde{E}^{\lambda})||^{2})(s)ds+K\int^{t}_{0}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}(s)ds$
$\displaystyle\quad+K\int^{t}_{0}\Big{\\{}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}+||\nabla\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}\Big{\\}}(s)ds$
$\displaystyle\quad+K\lambda^{2}\int^{t}_{0}\Big{\\{}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{E}^{\lambda}_{t}||^{2}_{H^{2}}+||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}\Big{\\}}(s)ds+K\lambda^{4}.$ (2.65)
###### Proof.
Differentiating (2.7) with respect to $t$, multiplying the resulting equation
by $-\Delta\tilde{z}^{\lambda}_{t}$ and integrating it over $\mathbb{T}^{3}$
with respect to $x$, we get
$\displaystyle\frac{1}{2}\frac{d}{dt}||\nabla\tilde{z}^{\lambda}_{t}||^{2}+||\Delta\tilde{z}^{\lambda}_{t}||^{2}$
$\displaystyle=\int\Big{\\{}-{\rm
div}(D\tilde{E}^{\lambda}_{t})+\lambda^{2}\partial_{t}[\text{div}(\mathcal{E}\text{div}\tilde{E}^{\lambda}+\tilde{E}^{\lambda}\text{div}\mathcal{E})]+\lambda^{2}\partial_{t}[\text{div}(\mathcal{E}\text{div}\mathcal{E})]$
$\displaystyle\quad+\partial_{t}\text{div}(\tilde{z}^{\lambda}v)+\partial_{t}[\text{div}(Z\tilde{v}^{\lambda})]\Big{\\}}\Delta\tilde{z}^{\lambda}_{t}dx$
$\displaystyle\quad+\int\Big{\\{}\partial_{t}[\text{div}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})]+\lambda^{2}\partial_{t}[\text{div}(\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda})]\Big{\\}}\Delta\tilde{z}^{\lambda}_{t}dx.$
(2.66)
We estimate the terms on the right-hand side of (2.66). By the Cauchy-Schwarz
inequality and the regularity of $D,\mathcal{E},v$ and $Z$, the first
integral can be bounded by
$\displaystyle\epsilon||\Delta\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}||(\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}||(\nabla\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}$
$\displaystyle+K_{\epsilon}\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda},{\rm div}\tilde{E}^{\lambda}_{t},\nabla{\rm
div}\tilde{E}^{\lambda},\nabla{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4},$ (2.67)
where we have used the facts ${\rm div}\tilde{v}^{\lambda}=0$ and ${\rm
div}v=0$. For the second integral, by the Cauchy-Schwarz inequality, Sobolev’s
embedding $H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$,
the inequality (2.12), and ${\rm div}\tilde{v}^{\lambda}=0$, we have
$\displaystyle\quad\int\Big{\\{}\partial_{t}[\text{div}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})]+\lambda^{2}\partial_{t}[\text{div}(\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda})]\Big{\\}}\Delta\tilde{z}^{\lambda}_{t}dx$
$\displaystyle\leq\epsilon||\Delta\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}||\partial_{t}[\text{div}(\tilde{z}^{\lambda}\tilde{v}^{\lambda})]||^{2}+K_{\epsilon}\lambda^{4}||\partial_{t}[\text{div}(\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda})]||^{2}$
$\displaystyle\leq\epsilon||\Delta\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}(||\nabla\tilde{z}^{\lambda}||^{2}_{H^{1}}||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla\tilde{z}^{\lambda}_{t}||^{2}||\tilde{v}^{\lambda}||^{2}_{L^{\infty}})$
$\displaystyle\ \
+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}_{t}||^{2}_{L^{\infty}}||\nabla\text{div}\tilde{E}^{\lambda}||^{2}+2||\text{div}\tilde{E}^{\lambda}||^{2}_{H^{1}}||\text{div}\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}||^{2}_{L^{\infty}}||\nabla\text{div}\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle\leq\epsilon||\Delta\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}(||\nabla\tilde{z}^{\lambda}||^{2}_{H^{1}}||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla\tilde{z}^{\lambda}_{t}||^{2}||\tilde{v}^{\lambda}||^{2}_{H^{2}})$
$\displaystyle\ \
+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}_{t}||^{2}_{H^{2}}||\nabla\text{div}\tilde{E}^{\lambda}||^{2}+2||\text{div}\tilde{E}^{\lambda}||^{2}_{H^{1}}||\text{div}\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}||^{2}_{H^{2}}||\nabla\text{div}\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle\leq\epsilon||\Delta\tilde{z}^{\lambda}_{t}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K_{\epsilon}\lambda^{2}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{E}^{\lambda}_{t}||^{2}_{H^{2}}+||\text{div}\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla\text{div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}.$
(2.68)
Thus, by putting (2.2)-(2.2) together and taking $\epsilon$ to be small
enough, we obtain
$\displaystyle\frac{d}{dt}||\nabla\tilde{z}^{\lambda}_{t}||^{2}+c_{10}||\Delta\tilde{z}^{\lambda}_{t}||^{2}$
$\displaystyle\leq K||(\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}||(\nabla\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda},{\rm div}\tilde{E}^{\lambda}_{t},\nabla{\rm
div}\tilde{E}^{\lambda},\nabla{\rm div}\tilde{E}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{2}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{E}^{\lambda}_{t}||^{2}_{H^{2}}+||\text{div}\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla\text{div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}+K\lambda^{4}.$
(2.69)
Differentiating (2.8) with respect to $t$, multiplying the resulting equation
by ${\rm div}\tilde{E}^{\lambda}_{t}$ and integrating it over $\mathbb{T}^{3}$
with respect to $x$, we get
$\displaystyle\frac{\lambda^{2}}{2}\frac{d}{dt}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+\lambda^{2}||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+\int Z|{\rm
div}\tilde{E}^{\lambda}_{t}|^{2}dx$
$\displaystyle\quad=-\int\big{(}\partial_{t}(\nabla
Z\tilde{E}^{\lambda})+Z_{t}{\rm div}\tilde{E}^{\lambda}\big{)}{\rm
div}\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}[\partial_{t}\text{div}\mathcal{E}-\Delta\text{div}\mathcal{E}]{\rm
div}\tilde{E}^{\lambda}_{t}dx$
$\displaystyle\quad\quad-\int\partial_{t}[\text{div}(\tilde{z}^{\lambda}\mathcal{E})]{\rm
div}\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}[\text{div}(\tilde{v}^{\lambda}\text{div}\mathcal{E}+v\text{div}\mathcal{E})]{\rm
div}\tilde{E}^{\lambda}_{t}dx$
$\displaystyle\quad\quad+\int\partial_{t}[\text{div}(D\tilde{v}^{\lambda})]{\rm
div}\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}[\text{div}(v\text{div}\tilde{E}^{\lambda})]{\rm
div}\tilde{E}^{\lambda}_{t}dx$
$\displaystyle\quad\quad-\int\partial_{t}[\text{div}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})]{\rm
div}\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}[\text{div}(\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda})]{\rm div}\tilde{E}^{\lambda}_{t}dx$ (2.70)
We estimate each term on the right-hand side of (2.70). Noting that ${\rm
div}\tilde{v}^{\lambda}=0$ and ${\rm div}v=0$, by the Cauchy-Schwarz
inequality and the regularity of $Z,\mathcal{E},v$ and $D$, the first six
terms can be bounded by
$\displaystyle\epsilon||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\nabla\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda}_{t})||^{2}$
$\displaystyle+K_{\epsilon}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}||(\nabla{\rm
div}\tilde{E}^{\lambda},\nabla{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.71)
For the last two nonlinear terms, by the Cauchy-Schwarz inequality, Sobolev’s
embedding $H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$,
the inequality (2.12) and $\text{div}\tilde{v}^{\lambda}=0$, they can be
estimated as follows
$\displaystyle-\int\partial_{t}[\text{div}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})]{\rm
div}\tilde{E}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}[\text{div}(\tilde{v}^{\lambda}{\rm
div}\tilde{E}^{\lambda})]{\rm div}\tilde{E}^{\lambda}_{t}dx$
$\displaystyle\leq\epsilon||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}||\partial_{t}[\text{div}(\tilde{z}^{\lambda}\tilde{E}^{\lambda})]||^{2}+K_{\epsilon}\lambda^{4}||\partial_{t}[\text{div}(\tilde{v}^{\lambda}\text{div}\tilde{E}^{\lambda})]||^{2}$
$\displaystyle\leq\epsilon||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}\big{(}||\tilde{z}^{\lambda}||^{2}_{L^{\infty}}||\text{div}\tilde{E}^{\lambda}_{t}||^{2}+||\tilde{z}^{\lambda}_{t}||^{2}_{L^{\infty}}||\text{div}\tilde{E}^{\lambda}||^{2}+||\nabla\tilde{z}^{\lambda}||^{2}_{H^{1}}||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}$
$\displaystyle\ \
+||\nabla\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}||\tilde{E}^{\lambda}||^{2}_{H^{1}}\big{)}+K_{\epsilon}\lambda^{4}(||\tilde{v}^{\lambda}_{t}||^{2}_{L^{\infty}}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+||\tilde{v}^{\lambda}||^{2}_{L^{\infty}}||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$ $\displaystyle\leq\epsilon||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}\big{(}||\tilde{z}^{\lambda}||^{2}_{H^{2}}||\text{div}\tilde{E}^{\lambda}_{t}||^{2}+||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}||\text{div}\tilde{E}^{\lambda}||^{2}+||\nabla\tilde{z}^{\lambda}||^{2}_{H^{1}}||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}$
$\displaystyle\quad+||\nabla\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}||\tilde{E}^{\lambda}||^{2}_{H^{1}}\big{)}+K_{\epsilon}\lambda^{4}(||\tilde{v}^{\lambda}_{t}||^{2}_{H^{2}}||\nabla{\rm
div}\tilde{E}^{\lambda}||^{2}+||\tilde{v}^{\lambda}||^{2}_{H^{2}}||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2})$ $\displaystyle\leq\epsilon||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}(||\text{div}\tilde{E}^{\lambda}_{t}||^{2}+||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}+||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}\big{)}$
$\displaystyle\quad+K_{\epsilon}\lambda^{2}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}+||\nabla\text{div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}$
(2.72)
Putting (2.2)-(2.2) together, using the positivity of $Z$, and taking
$\epsilon$ small enough, we get
$\displaystyle{\lambda^{2}}\frac{d}{dt}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+2\lambda^{2}||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+c_{11}\int|{\rm
div}\tilde{E}^{\lambda}_{t}|^{2}dx$ $\displaystyle\leq
K||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{\epsilon}||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\nabla\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K\lambda^{4}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t})||^{2}+K\lambda^{4}||(\nabla{\rm
div}\tilde{E}^{\lambda},\nabla{\rm div}\tilde{E}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K|||\tilde{\mathbf{w}}^{\lambda}|||^{2}(||\text{div}\tilde{E}^{\lambda}_{t}||^{2}+||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}+||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}\big{)}$
$\displaystyle\quad+K\lambda^{2}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}+||\nabla\text{div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}+K\lambda^{4}.$
(2.73)
Differentiating (2.9) with respect to $t$, multiplying the resulting equation
by $-\Delta\tilde{v}^{\lambda}_{t}$, integrating it over $\mathbb{T}^{3}$ with
respect to $x$ and using $\text{div}\tilde{v}^{\lambda}_{t}=0$, we get
$\displaystyle\ \
\frac{1}{2}\frac{d}{dt}||\nabla\tilde{v}^{\lambda}_{t}||^{2}+\mu||\Delta\tilde{v}^{\lambda}_{t}||^{2}$
$\displaystyle\quad=\int\partial_{t}({v}\cdot\nabla\tilde{v}^{\lambda})\Delta\tilde{v}^{\lambda}_{t}dx+\int\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla{v})\Delta\tilde{v}^{\lambda}_{t}dx+\int\partial_{t}(D\tilde{E}^{\lambda})\Delta\tilde{v}^{\lambda}_{t}dx$
$\displaystyle\quad\ \ \ -\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\mathcal{E})\Delta\tilde{v}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}(\mathcal{E}{\rm
div}\tilde{E}^{\lambda})\Delta\tilde{v}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}(\tilde{E}{\rm
div}\mathcal{E})\Delta\tilde{v}^{\lambda}_{t}dx$ $\displaystyle\quad\ \ \
+\int\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})\Delta\tilde{v}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\Delta\tilde{v}^{\lambda}_{t}dx.$ (2.74)
By the Cauchy-Schwarz inequality and the regularity of $v,D$ and
$\mathcal{E}$, the first six terms on the right-hand side of (2.74) can be
bounded by
$\displaystyle\epsilon||\Delta\tilde{v}^{\lambda}_{t}||^{2}+K_{\epsilon}||(\nabla\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K_{\epsilon}\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}||({\rm
div}\tilde{E}^{\lambda},{\rm
div}\tilde{E}^{\lambda}_{t})||^{2}+K_{\epsilon}\lambda^{4}.$ (2.75)
By the Cauchy-Schwarz inequality, Sobolev’s embedding
$H^{2}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$, and using
the inequality (2.12), the last two nonlinear terms on the right-hand side of
(2.74) can be treated as follows
$\displaystyle\int\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})\Delta\tilde{v}^{\lambda}_{t}dx-\lambda^{2}\int\partial_{t}(\tilde{E}^{\lambda}{\rm
div}\tilde{E}^{\lambda})\Delta\tilde{v}^{\lambda}_{t}dx$
$\displaystyle\leq\epsilon||\Delta\tilde{v}^{\lambda}_{t}||^{2}+K_{\epsilon}||\partial_{t}(\tilde{v}^{\lambda}\cdot\nabla\tilde{v}^{\lambda})||^{2}+K_{\epsilon}\lambda^{4}||\partial_{t}(\tilde{E}^{\lambda}\text{div}\tilde{E}^{\lambda})||^{2}$
$\displaystyle\leq\epsilon||\Delta\tilde{v}^{\lambda}_{t}||^{2}+K_{\epsilon}(||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}}||\nabla\tilde{v}^{\lambda}||^{2}_{H^{1}}+||\tilde{v}^{\lambda}||^{2}_{L^{\infty}}||\nabla\tilde{v}^{\lambda}_{t}||^{2})$
$\displaystyle\ \
+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}||\text{div}\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}||^{2}_{L^{\infty}}||\text{div}\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle\leq\epsilon||\Delta\tilde{v}^{\lambda}_{t}||^{2}+K_{\epsilon}(||\tilde{v}^{\lambda}_{t}||^{2}_{H^{1}}||\nabla\tilde{v}^{\lambda}||^{2}_{H^{1}}+||\tilde{v}^{\lambda}||^{2}_{H^{2}}||\nabla\tilde{v}^{\lambda}_{t}||^{2})$
$\displaystyle\ \
+K_{\epsilon}\lambda^{4}(||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}||\text{div}\tilde{E}^{\lambda}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}||^{2}_{H^{2}}||\text{div}\tilde{E}^{\lambda}_{t}||^{2})$
$\displaystyle\leq\epsilon||\Delta\tilde{v}^{\lambda}_{t}||^{2}+K_{\epsilon}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}.$
(2.76)
Thus, by putting (2.2)-(2.2) together and taking $\epsilon$ to be small
enough, we obtain
$\displaystyle\frac{d}{dt}||\nabla\tilde{v}^{\lambda}_{t}||^{2}+\mu
c_{12}||\Delta\tilde{v}^{\lambda}_{t}||^{2}$ $\displaystyle\leq
K||(\nabla\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda}_{t},\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t})||^{2}+K\lambda^{4}||({\rm
div}\tilde{E}^{\lambda},{\rm div}\tilde{E}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K\lambda^{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t})||^{2}+K|||\tilde{\mathbf{w}}^{\lambda}|||^{4}+K\lambda^{4}.$
(2.77)
Combining (2.2), (2.2) and (2.2), and restricting $\lambda$ to be sufficiently small, we get
$\displaystyle\quad\frac{d}{dt}\Big{(}\delta_{7}||\nabla\tilde{z}^{\lambda}_{t}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+\delta_{8}||\nabla\tilde{v}^{\lambda}_{t}||^{2}\Big{)}+\delta_{7}c_{10}||\Delta\tilde{z}^{\lambda}_{t}||^{2}+\delta_{8}\mu
c_{12}||\Delta\tilde{v}^{\lambda}_{t}||^{2}$
$\displaystyle\quad+(2\lambda^{2}-K\lambda^{4}-\delta_{7}K\lambda^{4})||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+(c_{12}-\delta_{7}K-K\lambda^{4}(\delta_{7}+\delta_{8}))||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}$ $\displaystyle\leq
K_{4}||(\tilde{z}^{\lambda},\tilde{z}^{\lambda}_{t},\nabla\tilde{z}^{\lambda},\nabla\tilde{z}^{\lambda}_{t})||^{2}+K_{4}||(\tilde{v}^{\lambda},\tilde{v}^{\lambda}_{t},\nabla\tilde{v}^{\lambda},\nabla\tilde{v}^{\lambda}_{t})||^{2}$
$\displaystyle\quad+K_{4}||(\tilde{E}^{\lambda},\tilde{E}^{\lambda}_{t},{\rm
div}\tilde{E}^{\lambda},\nabla{\rm
div}\tilde{E}^{\lambda})||^{2}+K_{4}|||\tilde{\mathbf{w}}^{\lambda}|||^{4}$
$\displaystyle\quad+K_{4}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}+||\nabla\tilde{z}^{\lambda}_{t}||^{2}_{H^{1}}+||\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}$
$\displaystyle\quad+K_{4}\lambda^{2}|||\tilde{\mathbf{w}}^{\lambda}|||^{2}\big{(}||\tilde{E}^{\lambda}_{t}||^{2}_{H^{2}}+||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}_{H^{1}}+||\nabla{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}\big{)}+K_{4}\lambda^{4},$ (2.78)
for some sufficiently small $\delta_{7}$ and $\delta_{8}$, which gives the
inequality (2.6). ∎
## 3\. Proof of Theorem 1.1
In this section, we will use the energy estimates obtained in Section 2 to
establish the entropy production integration inequality and complete the proof
of our main result. First, under the assumption of Theorem 1.1, by the
standard elliptic regularity estimates, we have
$\displaystyle||\tilde{z}^{\lambda}||^{2}_{H^{2}}$ $\displaystyle\leq
K(||\tilde{z}^{\lambda}||^{2}+||\Delta\tilde{z}^{\lambda}||^{2}),$ (3.1)
$\displaystyle||\tilde{z}^{\lambda}_{t}||^{2}_{H^{2}}$ $\displaystyle\leq
K(||\tilde{z}^{\lambda}_{t}||^{2}+||\Delta\tilde{z}^{\lambda}_{t}||^{2}),$
(3.2) $\displaystyle||\tilde{v}^{\lambda}||^{2}_{H^{2}}$ $\displaystyle\leq
K(||\tilde{v}^{\lambda}||^{2}+||\Delta\tilde{v}^{\lambda}||^{2}),$ (3.3)
$\displaystyle||\tilde{v}^{\lambda}_{t}||^{2}_{H^{2}}$ $\displaystyle\leq
K(||\tilde{v}^{\lambda}_{t}||^{2}+||\Delta\tilde{v}^{\lambda}_{t}||^{2}),$
(3.4) $\displaystyle||\tilde{E}^{\lambda}||^{2}_{H^{s}}$ $\displaystyle\leq
K(||\tilde{E}^{\lambda}||^{2}+||{\rm
div}\tilde{E}^{\lambda}||^{2}_{H^{s-1}}),s=1,2,$ (3.5)
$\displaystyle||\tilde{E}^{\lambda}_{t}||^{2}_{H^{s}}$ $\displaystyle\leq
K(||\tilde{E}^{\lambda}_{t}||^{2}+||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}_{H^{s-1}}),s=1,2.$ (3.6)
By the definitions of $\Gamma^{\lambda}(t)$ and
$|||\tilde{\mathbf{w}}^{\lambda}(t)|||$ (see the definitions (1) and (2.11)
above) and using the inequalities (3.1)-(3.6), it is easy to verify that there
exist two constants $K_{1}$ and $K_{2}$, independent of $\lambda$, such that
$K_{1}|||\tilde{\mathbf{w}}^{\lambda}(t)|||^{2}\leq\Gamma^{\lambda}(t)\leq
K_{2}|||\tilde{\mathbf{w}}^{\lambda}(t)|||^{2}.$ (3.7)
Using the inequalities (2.2), (2.3), (2.4), (2.5), and (2.6), we can obtain a
new inequality by taking the linear combination
$[(2.2)+\delta\,(2.3)+\delta^{2}\,(2.4)]+\delta^{3}[\delta\,(2.5)+(2.6)]$.
By taking $\delta$ small enough, restricting $\lambda$ to be sufficiently
small, and performing a tedious but straightforward computation, we obtain the
following relative entropy production integration inequality
$\displaystyle\quad\Gamma^{\lambda}(t)+K\int^{t}_{0}G^{\lambda}(s)ds$
$\displaystyle\leq
K\bar{\Gamma}^{\lambda}(t=0)+K(\Gamma^{\lambda}(t))^{2}+K\lambda^{4}+K\int^{t}_{0}\Gamma^{\lambda}(s)G^{\lambda}(s)ds$
$\displaystyle+K\int^{t}_{0}\big{\\{}\Gamma^{\lambda}(s)+(\Gamma^{\lambda}(s))^{2}\big{\\}}(s)ds,$
(3.8)
where $G^{\lambda}(t)$ is defined by (1.17) and
$\displaystyle\bar{\Gamma}^{\lambda}(t=0)=$
$\displaystyle\big{[}||\tilde{z}^{\lambda}||^{2}+||\tilde{v}^{\lambda}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}||^{2}+||\tilde{z}^{\lambda}_{t}||^{2}+||\tilde{v}^{\lambda}_{t}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}_{t}||^{2}\big{]}(t=0)$
$\displaystyle+(||\nabla\tilde{z}^{\lambda}_{t}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+||\nabla\tilde{v}^{\lambda}_{t}||^{2})(t=0).$
(3.9)
The inequality (3.8) is of a generalized Gronwall type with an extra integral
term, for which we have the following result.
###### Lemma 3.1.
Suppose that
$\bar{\Gamma}^{\lambda}(t=0)\leq\bar{K}\lambda^{2},$ (3.10)
where $\bar{K}$ is a positive constant, independent of $\lambda$. Then for any
$T\in(0,T_{\max})$, $T_{\max}\leq+\infty$, there exists a positive constant
$\lambda_{0}\ll 1$ such that for any $\lambda\leq\lambda_{0}$ the inequality
$\displaystyle\Gamma^{\lambda}(t)\leq\bar{K}\lambda^{2-\sigma}$ (3.11)
holds for any $\sigma\in(0,2)$ and $0\leq t\leq T$.
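Heuristically, Lemma 3.1 follows from a standard continuity (bootstrap) argument; the following is only a rough sketch, with the constants taken as in (3.8) and (3.10), and the full details as in [10]. Suppose that $\Gamma^{\lambda}(s)\leq\bar{K}\lambda^{2-\sigma}$ for all $s\in[0,t]$ with some $t\leq T$. For $\lambda$ small enough we have $K(\Gamma^{\lambda}(t))^{2}\leq K\bar{K}\lambda^{2-\sigma}\Gamma^{\lambda}(t)\leq\frac{1}{2}\Gamma^{\lambda}(t)$ and $K\int^{t}_{0}\Gamma^{\lambda}G^{\lambda}ds\leq\frac{K}{2}\int^{t}_{0}G^{\lambda}ds$, so these two terms can be absorbed into the left-hand side of (3.8). Using (3.10), $\lambda^{4}\leq\lambda^{2}$, and $(\Gamma^{\lambda})^{2}\leq\Gamma^{\lambda}$ for small $\lambda$, the inequality (3.8) then reduces to $\Gamma^{\lambda}(t)\leq C(\bar{K}+1)\lambda^{2}+C\int^{t}_{0}\Gamma^{\lambda}(s)ds$, and Gronwall's inequality gives $\Gamma^{\lambda}(t)\leq C(\bar{K}+1)e^{CT}\lambda^{2}$, which is strictly smaller than $\bar{K}\lambda^{2-\sigma}$ once $\lambda\leq\lambda_{0}(T,\sigma)$ is sufficiently small; a continuity argument then propagates the a priori bound to the whole interval $[0,T]$.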
Since the proof of Lemma 3.1 is similar to that of Lemma 10 in [10], we omit
it here and continue our proof of Theorem 1.1. In order to apply Lemma 3.1, we
need to verify (3.10). In fact, by the assumptions (1.14) on the initial data
$(n^{\lambda}_{0},p^{\lambda}_{0},v^{\lambda}_{0})$, we get
$\tilde{E}^{\lambda}(t=0)=0$ since the solution involved here is smooth; in
particular, the solution and its derivatives are continuous with respect to
$x$ and $t$. Then, by using the assumption (1.14),
$\tilde{E}^{\lambda}(t=0)=0$, the continuity of the solution and its
derivatives, and the equations (2.7)-(2.10), we get
$\displaystyle\big{[}||\tilde{z}^{\lambda}||^{2}+||\tilde{v}^{\lambda}||^{2}+||\tilde{z}^{\lambda}_{t}||^{2}+||\tilde{v}^{\lambda}_{t}||^{2}+\lambda^{2}||\tilde{E}^{\lambda}_{t}||^{2}\big{]}(t=0)$
$\displaystyle+(||\nabla\tilde{z}^{\lambda}_{t}||^{2}+\lambda^{2}||{\rm
div}\tilde{E}^{\lambda}_{t}||^{2}+||\nabla\tilde{v}^{\lambda}_{t}||^{2})(t=0)\leq\bar{K}\lambda^{2},$
which gives the inequality (3.10). Thus, by Lemma 3.1, the inequality (3.11)
holds. We easily get the estimate (1.1) by the definition of
$\Gamma^{\lambda}(t)$, the inequality (3.11), and the transform (2.6), which
completes the proof of Theorem 1.1.
## Acknowledgements
The author would like to express his gratitude to Dr. Jishan
Fan for his valuable suggestions and careful reading of the first draft of
this paper. This work is supported by the National Natural Science Foundation
of China (Grant 10501047).
## References
* [1] Y. Brenier, Convergence of the Vlasov-Poisson system to the incompressible Euler equations, Comm. Partial Differential Equations 25 (2000), 737-754.
* [2] G. Cimatti, I. Fragalà, Invariant regions for the Nernst-Planck equations. Ann. Mat. Pura Appl. (4) 175(1998), 93-118.
* [3] S. Cordier, E. Grenier, Quasineutral limit of an Euler-Poisson system arising from plasma physics, Comm. Partial Differential Equations 23 (2000), 1099-1113.
* [4] D. Donatelli, P. Marcati, A quasineutral type limit for the Navier-Stokes-Poisson system with large data. Nonlinearity 21(2008), no. 1, 135–148.
* [5] E. Feireisl, Weak solutions to a non-linear hyperbolic system arising in the theory of dielectric liquids. Math. Methods Appl. Sci. 18(1995), 1041-1052.
* [6] I. Gasser, L. Hsiao, P. Markowich, S. Wang, Quasineutral limit of a nonlinear drift-diffusion model for semiconductor models, J. Math. Anal. Appl. 268 (2002), 184-199.
* [7] I. Gasser, C.D. Levermore, P. Markowich, C. Schmeiser, The initial time layer problem and the quasineutral limit in the semiconductor drift-diffusion model, European J. Appl. Math. 12 (2001), 497-512.
* [8] L. Hsiao, F. C. Li, S. Wang, Convergence of the Vlasov-Poisson-Fokker-Planck system to the incompressible Euler equations, Sci. China Ser. A 49 (2006), 255-266.
* [9] L. Hsiao, F. C. Li, S. Wang, Coupled quasineutral and inviscid limit of the Vlasov-Poisson-Fokker-Planck system, Communications on Pure and Applied Analysis, 7(2008), no.3, 579-589.
* [10] L. Hsiao, S. Wang, Quasineutral limit of a time-dependent drift-diffusion-Poisson model for $pn$ junction semiconductor devices, J. Differential Equations, 225(2006), 411-439.
* [11] Q. C. Ju, F. C. Li, S. Wang, Convergence of Navier-Stokes-Poisson system to the incompressible Navier-Stokes equations, J. Math. Phys. 49(2008) 073515.
* [12] O. A. Ladyzhenskaya, The mathematical theory of viscous incompressible flow. Second English edition, revised and enlarged. Translated from the Russian by Richard A. Silverman and John Chu. Mathematics and its Applications, Vol. 2 Gordon and Breach, Science Publishers, New York-London-Paris 1969
* [13] N. Masmoudi, From Vlasov-Poisson system to the incompressible Euler system, Comm. Partial Differential Equations 26 (2001), 1913-1928.
* [14] J. W. Jerome, Analytical approaches to charge transport in a moving medium. Transport Theory Statist. Phys. 31 (2002), 333-366.
* [15] A. Jüngel, Y.-J. Peng, A hierarchy of hydrodynamic models for plasmas: quasi-neutral limits in the drift-diffusion equations, Asymptot. Anal. 28 (2001), 49-73.
* [16] R. Ryham, C. Liu, L. Zikatanov, Mathematical models for the deformation of electrolyte droplets, Discrete and Continuous Dynamical Systems-Series B, 8(2007), 649-661.
* [17] T. Roubíček, Nonlinear Partial Differential Equations with Applications, Birkhäuser Verlag, Basel, 2005.
* [18] I. Rubinstein, Electro-Diffusion of Ions, SIAM, Philadelphia, PA, 1990.
* [19] R. Temam, Navier-Stokes equations. Theory and numerical analysis. Reprint of the 1984 edition. AMS Chelsea Publishing, Providence, RI, 2001.
* [20] S. Wang, Quasineutral limit of Euler-Poisson system with and without viscosity, Comm. Partial Differential Equations 29 (2004), 419-456.
* [21] S. Wang, Quasineutral limit of the multi-dimensional drift-diffusion-Poisson model for semiconductor with $pn$-junctions, Math. Models Meth. Appl. Sci., 16(2006), 737-57.
* [22] S. Wang, S. Jiang. The convergence of the Navier-Stokes-Poisson system to the incompressible Euler equations. Comm. Partial Differential Equations, 31(2006), 571–591.
* [23] S. Wang, Z. P. Xin, P. A. Markowich, Quasineutral limit of the drift diffusion models for semiconductors: The case of general sign-changing doping profile, SIAM J. Math. Anal., 37 (2006), 1854-1889.
|
arxiv-papers
| 2009-05-18T14:24:00 |
2024-09-04T02:49:02.699934
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Fucai Li",
"submitter": "Fucai Li",
"url": "https://arxiv.org/abs/0905.2893"
}
|
0905.2938
|
1–7
# Dust properties of nearby disks: M 31 case
Petko L. Nedialkov1 Antoniya T. Valcheva2
Valentin D. Ivanov3 & Leonardo Vanzi4 1Astronomy Department, University of
Sofia,
5 J. Bourchier blvd., Sofia 1164, Bulgaria
email: japet@phys.uni-sofia.bg
2Institute of Astronomy,
72 Tsarigradsko Chaussee blvd., Sofia 1784, Bulgaria
email: avalcheva@astro.bas.bg
3European Southern Observatory,
3107 Alonso de Cordova Ave., Casilla 19, Santiago 19001, Chile
email: vivanov@eso.org
4Department of Astronomy and Astrophysics, Pontificia Universidad
Católica de Chile, Casilla 306, Santiago 22, Chile
email: lvanzi@eso.org
(2008)
###### Abstract
Several properties of the M 31 disk, namely: opacity, extinction law and gas-
to-dust ratio are studied by means of optical and near-infrared photometry of
ten globular clusters and galaxies seen through the disk. The individual
extinctions of these objects were estimated with respect to several sets of
theoretical spectral energy distributions for globulars and galaxies. Seven
targets are consistent with reddened globulars, two - with starburst galaxies
and one - with an elliptical. The extinction estimates agree with a semi-
transparent disk ($\tau_{V}\lesssim 1$) in the inter-arm regions. The total-
to-selective extinction ratio in those regions, 2.75$\pm$0.1, is lower on
average than the typical Galactic value of $R_{V}$=3.1. We also obtained a
gas-to-dust ratio similar to that in the Milky Way. It shows no
correlation with the distance from the M31 center.
###### keywords:
galaxies: ISM, photometry, dust, extinction; globular clusters
††volume: 254††journal: The Galaxy Disk in Cosmological Context††editors: J.
Andersen, J. Bland-Hawthorn & B. Nordström, eds.
## 1 Introduction
The dust in galaxies attenuates the light of background extragalactic sources.
Recent studies ([Holwerda et al. (2005), Holwerda et al. 2005]) of
morphologically representative samples of spirals show that the dust opacity of a
disk arises from two distinct components: (i) optically thicker
($\tau_{V}=0.8\div 8^{m}$) and radially dependent component, associated with
the spiral arms, and (ii) relatively more constant optically thinner disk
($\tau_{V}\sim 1^{m}$), dominating the inter-arm regions and the outskirts of
the disk.
The nearby giant spiral galaxy M 31 is well suited for comprehensive studies
of the interplay between the stars and the ISM. The radial distribution of the
opacity in M 31 spiral arms, based on individual estimates towards OB stars,
shows that the opacity exponentially decreases away from the bulge ([Nedilakov
& Ivanov (1998), Nedilakov & Ivanov 1998]). However, a study of 41 globular
clusters in M 31 indicates the absence of radial extinction gradient with the
galactocentric distance ([SavchevaTassev02, Savcheva & Tassev 2002]).
Measuring the color excesses of objects behind the disks is an alternative
method to constrain the disk opacity. It was applied to M 31 by [Cuillandre et
al. (2001), Cuillandre et al. (2001)] who used background galaxies. They
concluded that the M31 disk is semi-transparent for distances larger than
$R_{25}$.
Here we complement this work, presenting opacity estimates for galactocentric
distances smaller than $R_{25}$, derived from the comparison of apparent colors
of background globulars and ellipticals with models.
## 2 Observations and data reduction
Our sample includes 21 background galaxy candidates located well within the
standard M 31 radius $R_{25}$. They were selected from a number of
heterogeneous sources: visual inspection of DSS and the NOAO archive
photographic plates, dropouts from M 31 globular cluster searches ([Battistini
et al. (1980), Battistini et al. 1980]). Our original intention was to base
the study on background ellipticals only, but five of our targets were recently
identified as globulars, and their [Fe/H] and vr became readily available from
[Galleti et al. (2006), Galleti et al. (2006)].
We obtained $HK$ imaging in Dec 1996 and Jan 1997 with ARNICA ([Lisi et al.
(1993), Lisi et al. 1993]) at the 1.8m Vatican Advanced Technology Telescope on
Mt. Graham. The instrument is equipped with a NICMOS 3 (256 $\times$ 256
pixels) detector array, with a scale of 0.505 arcsec/pixel. The data reduction
includes the typical steps for infrared imaging: “sky” removal, flat-fielding,
alignment and combination of individual images, separately for every filter
and field. Ten of the targets (Table 1, Fig. 1) were identified on the $UBVRI$
images from the Local Group Survey ([Massey et al. (2006), Massey et al.
2006]), obtained with the KPNO Mosaic Camera at the 4m Mayall telescope.
Table 1: Sub-set of our original target list, with $UBVRIHK$ photometry.
No. | 2MASX/2MASS | Other | Object1 | vr | [Fe/H] | rgc | N(HI+2H2)2
---|---|---|---|---|---|---|---
| name | name | Type | [km s-1] | | [arcmin] | [$\times$1020 at. cm-2]
1. | 2MASX J00451437+4157405 | Bol 370 | 1 | $-$347 | $-$1.80 | 52.2 | 1.895
2. | 2MASX J00444399+4207298 | Bol 250D | 1 | $-$442 | - | 84.2 | 9.028
3. | 2MASS J00420658+4118062 | Bol 43D | 1 | $-$344 | $-$1.35 | 30.8 | 12.836
4. | 2MASS J00413428+4101059 | Bol 25D | 1 | $-$479 | - | 20.8 | 0.668
5. | 2MASS J00413436+4100497 | Bol 26D | 1 | $-$465 | $-$1.15 | 20.8 | 2.170
6. | 2MASS J00413660+4100182 | Bol 251 | 2 | - | - | 20.6 | 1.350
7. | 2MASS J00430737+4127329 | Bol 269 | 2 | - | - | 19.6 | 12.384
8. | 2MASS J00421236+4119008 | Bol 80 | 2 | - | - | 29.3 | 15.248
9. | 2MASX J00410351+4029529 | Bol 199 | 2 | - | $-$1.59 | 79.1 | 16.120
10. | 2MASS J00425875+4108527 | Bol 140 | 3 | $-$413 | $-$0.88 | 31.7 | 6.654
Notes:
1 Following Galleti et al. (2006): 1 - confirmed globular clusters (GC), 2 -
GC candidates, 3 - uncertain objects
2 Total hydrogen column density based on pencil-beam estimates of
CO(1$\rightarrow$0) intensity ([Nieten et al. (2006), Nieten et al. 2006]),
converted to molecular hydrogen column density using a constant $X_{\rm CO}$
conversion factor ([Strong & Mattox (1996), Strong & Mattox 1996]) and
$\lambda$21 cm emission from the Westerbork map ([Brinks & Shane (1984),
Brinks & Shane 1984]).
Figure 1: $V$-band images of our targets from the Local Group Survey ([Massey
et al. (2006), Massey et al. 2006]). The field of view is
$30^{\prime\prime}\times\,30^{\prime\prime}$. North is up, and East is to the
left. The white circles show the photometric extraction apertures. The
numbering corresponds to Table 1.
Clouds were present during most of the observations, forcing us to use the
2MASS Point Source Catalog ([Cutri et al. 2003, Cutri et al. 2003]) stars for
the photometric calibration (typically using 4–10 common stars per field). No
color dependence was found, and the r.m.s. of the derived zero-points was
$\sim$0.05 mag for both bands. The typical seeing of the optical images
($\sim$$1.0^{\prime\prime}$) matches well that of the near-infrared data set
($\sim$$1.5^{\prime\prime}$), allowing us to perform simple aperture
photometry with a $4^{\prime\prime}$ radius. We used the standard IRAF tasks
(IRAF is the Image Reduction and Analysis Facility made available to the
astronomical community by the National Optical Astronomy Observatories, which
are operated by AURA, Inc., under contract with the U.S. National Science
Foundation). The zero points of the optical data are based on stars in common with
the catalog of [Massey et al. (2006), Massey et al. (2006)]. The $V$-band
magnitudes and the observed colors together with their errors are listed in
Table 2. The majority of the infrared colors shows excellent agreement with
the available 2MASS colors (see Fig. 2).
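A minimal sketch of this kind of zero-point determination is given below; the magnitude lists are invented for illustration only (the actual calibration used the 4–10 common 2MASS stars per field quoted above, separately for each filter).

```python
import numpy as np

# Hypothetical instrumental H-band magnitudes of stars in one field and the
# corresponding 2MASS catalog magnitudes (made-up numbers for illustration;
# typically 4-10 common stars per field were available).
m_instr = np.array([13.42, 14.10, 12.88, 14.55, 13.97])
m_2mass = np.array([15.71, 16.36, 15.20, 16.83, 16.25])

offsets = m_2mass - m_instr          # per-star zero-point estimates
zp = offsets.mean()                  # adopted zero point for this field/filter
rms = offsets.std(ddof=1)            # scatter; ~0.05 mag in the actual data
print("zero point = %.3f +/- %.3f mag" % (zp, rms))
```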
Table 2: $UBVRIHK$ photometry of the objects listed in Table 1. Photometric systems: $UBV$ – Johnson, $RI$ – Cousins, $HK$ – [Bessell & Brett (1988), Bessell & Brett (1988)]. The uncertainties include both the zero-point errors and the statistical errors of the individual measurements.
No. | $V$ | $\sigma_{V}$ | $U-K$ | $\sigma_{U-K}$ | $U-I$ | $\sigma_{U-I}$ | $U-R$ | $\sigma_{U-R}$ | $B-K$ | $\sigma_{B-K}$ | $V-K$ | $\sigma_{V-K}$ | $U-H$ | $\sigma_{U-H}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1. | 16.18 | 0.08 | 3.85 | 0.30 | 2.23 | 0.31 | 1.84 | 0.32 | 3.02 | 0.23 | 2.53 | 0.10 | 3.67 | 0.30
2. | 17.60 | 0.02 | 6.19 | 0.09 | 3.51 | 0.06 | 2.69 | 0.06 | 5.11 | 0.09 | 4.25 | 0.08 | 5.65 | 0.06
3. | 17.15 | 0.02 | 6.46 | 0.05 | 3.90 | 0.06 | 3.08 | 0.05 | 5.16 | 0.07 | 4.21 | 0.05 | 6.13 | 0.06
4. | 18.23 | 0.03 | 6.25 | 0.07 | 3.49 | 0.10 | 2.80 | 0.05 | 5.34 | 0.10 | 4.19 | 0.06 | 5.74 | 0.07
5. | 18.23 | 0.03 | 5.72 | 0.07 | 3.43 | 0.10 | 2.70 | 0.05 | 4.85 | 0.10 | 3.81 | 0.07 | 5.39 | 0.07
6. | 17.77 | 0.03 | 5.82 | 0.07 | 3.46 | 0.10 | 2.79 | 0.04 | 4.65 | 0.09 | 3.69 | 0.06 | 5.35 | 0.06
7. | 18.49 | 0.03 | 5.83 | 0.09 | 3.51 | 0.07 | 2.96 | 0.06 | 4.68 | 0.11 | 3.72 | 0.09 | 5.31 | 0.09
8. | 17.14 | 0.02 | 5.10 | 0.06 | 3.28 | 0.06 | 2.59 | 0.04 | 3.98 | 0.08 | 3.25 | 0.05 | 5.01 | 0.06
9. | 18.26 | 0.02 | 6.36 | 0.07 | 3.76 | 0.07 | 2.95 | 0.06 | 5.20 | 0.08 | 4.27 | 0.04 | 5.90 | 0.08
10. | 17.37 | 0.02 | 4.64 | 0.07 | 2.38 | 0.12 | 2.19 | 0.04 | 3.59 | 0.09 | 3.08 | 0.07 | 4.43 | 0.07
Figure 2: Comparison between the colors derived in this work and the available
2MASS colors. Note that the photometry of Bol 269 (target No. 7 in our list)
was flagged as suspect in the 2MASS Point Source Catalog.
## 3 Dust properties of M 31 disk from $\chi^{2}$-test minimization
The intrinsic near-infrared colors of ellipticals are nearly identical:
$(H-K)_{0}\sim 0.22$ mag, as demonstrated by [Persson et. al (1979), Persson
et al. (1979)]. Assuming all the targets belong to that Hubble type, the
opacity of the disk that lies between them and the observer can easily be
calculated, taking into account the internal Milky Way extinction, i.e. from
the work of [Schlegel et. al (1998), Schlegel et. al (1998)]. However, the
evident contamination of the sample by globular clusters also requires us to
consider for each object the possibility that it is an M 31 globular, with the
typical globular cluster colors. Furthermore, a cluster may be located in
front of the M 31 disk, adding an extra degree of complication to the analysis.
To address these issues we developed a multicolor $\chi^{2}$ minimization
technique to derive simultaneously the disk opacity and a number of other
parameters: the gas-to-dust ratio, the extinction law and, last but not least, the
nature of the object (elliptical galaxy or globular cluster). It also allows
us to fix some of these parameters, while still varying the rest of them.
intrinsic colors of globulars were adopted from the model of [Kurth et al.
(1999), Kurth et al. (1999)] and for the ellipticals – from [Bicker et al.
(2004), Bicker et al. (2004)].
The results from the test are presented in Table 3. The free parameters in the
case of globular clusters (left side of Table 3) are: age, abundance $Z$, and
$R_{V}$; in the case of the elliptical galaxy models (right side of
Table 3) they are: redshift $z$, $R_{V}$, and $A_{V}$. We created a multi-
dimensional grid, with steps of 0.01 along all axes and calculated the
$\chi^{2}$ for every grid node.
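For concreteness, a minimal sketch of such a brute-force grid minimization is given below. This is not the authors' code: the intrinsic-color function and the selective-extinction coefficients are illustrative placeholders, only age, $Z$ and $A_{V}$ are varied here (with $R_{V}$ fixed at 3.1, whereas the real fit also varies $R_{V}$ through an $R_{V}$-dependent extinction law), and the grid step is coarser than the 0.01 quoted above.

```python
import numpy as np

# Toy stand-in for the intrinsic (U-K, B-K, V-K) colors of a simple stellar
# population of a given age [Gyr] and metallicity Z.  In the actual analysis
# these come from the Kurth et al. (1999) model grid; the numbers below are
# purely illustrative placeholders.
def intrinsic_colors(age, z):
    base = np.array([5.0, 4.0, 3.0])
    return base + 0.4 * np.log10(age) + 6.0 * (z - 0.02)

# Illustrative selective-extinction coefficients E(X-K)/A_V for X = U, B, V,
# for a fixed R_V = 3.1 law (placeholder values).
REDDENING = np.array([1.45, 1.21, 0.89])

def chi2(obs, err, age, z, a_v):
    model = intrinsic_colors(age, z) + a_v * REDDENING
    return np.sum(((obs - model) / err) ** 2)

# Observed colors of target No. 3 (Bol 43D) from Table 2: (U-K, B-K, V-K).
obs = np.array([6.46, 5.16, 4.21])
err = np.array([0.05, 0.07, 0.05])

# Brute-force search over a regular parameter grid.
ages = np.arange(0.1, 14.1, 0.1)          # Gyr
zs = np.arange(0.0001, 0.0501, 0.0025)
a_vs = np.arange(0.0, 3.01, 0.05)         # mag

best = (np.inf, None)
for age in ages:
    for z in zs:
        for a_v in a_vs:
            c2 = chi2(obs, err, age, z, a_v)
            if c2 < best[0]:
                best = (c2, (age, z, a_v))

best_chi2, (age_best, z_best, av_best) = best
print("chi2_min = %.2f at age = %.1f Gyr, Z = %.4f, A_V = %.2f"
      % (best_chi2, age_best, z_best, av_best))
```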
Table 3: Summary of the $\chi^{2}$ minimization. The matches of the apparent colors to the globular cluster models of [Kurth et al. (1999), Kurth et al. (1999)] are given on the left and those to the intrinsic colors of ellipticals predicted by the GALEV models of [Bicker et al. (2004), Bicker et al. (2004)] are given on the right. The numbers of the targets are the same as in Table 1. The table also reports whether the $A_{V}$ corresponding to the minimum $\chi^{2}$ agrees (within the errors) with the total gas density derived from the combined $HI$ map of [Brinks & Shane (1984), Brinks & Shane (1984)] and the CO(1$\rightarrow$0) map of [Nieten et al. (2006), Nieten et al. (2006)]. The asterisk indicates a fixed parameter.
No. | Globular Cluster Model Fit | | Elliptical Galaxy Model Fit | | Derived
---|---|---|---|---|---
| Parameters | | Parameters | | Type
| Age | Abun- | $R_{V}$ | $A_{V}$ | $\chi_{min}^{2}$ | dust | Red- | $R_{V}$ | $A_{V}$ | $\chi_{min}^{2}$ | dust |
| [yr] | dance | | [mag] | | vs. | shift | | [mag] | | vs. |
| | $Z$ | | | | gas? | $z$ | | | | gas? |
1. | 0.9$\times$109 | 0.0001 | 3.10* | 1.81 | 1.288 | no | 0.000 | 3.10* | 0.00 | 39.950 | no | Globular
| 0.9$\times$109 | 0.0003* | 3.10* | 1.46 | 2.510 | no | 0.000 | 6.00 | 0.00 | 39.950 | no |
| 0.6$\times$109 | 0.0003* | 2.43 | 1.45 | 1.534 | no | | | | | |
2. | 0.4$\times$109 | 0.0500 | 3.10* | 2.41 | 2.400 | no | 0.075 | 3.10* | 0.98 | 7.562 | yes | Globular
| | | | | | | 0.069 | 3.42 | 1.05 | 6.900 | yes |
3. | 0.8$\times$109 | 0.0500 | 3.10* | 0.50 | 3.406 | yes | 0.026 | 3.10* | 1.41 | 45.111 | yes | Globular
| 3.0$\times$109 | 0.0009* | 3.10* | 2.60 | 19.558 | no | 0.021 | 2.15 | 1.18 | 10.404 | yes |
| 2.0$\times$109 | 0.0009* | 2.77 | 2.54 | 8.552 | no | | | | | |
4. | 3.0$\times$109 | 0.0500 | 3.10* | 0.86 | 2.152 | no | 0.072 | 3.10* | 1.06 | 7.180 | no | Globular
| | | | | | | 0.080 | 2.75 | 0.97 | 5.692 | no |
5. | 8.0$\times$109 | 0.0400 | 3.10* | 0.21 | 0.870 | yes | 0.046 | 3.10* | 0.86 | 24.117 | no | Globular
| 1.1$\times$1010 | 0.0014* | 3.10* | 1.73 | 7.522 | no | 0.045 | 1.77 | 0.62 | 3.649 | no |
| 2.0$\times$109 | 0.0014* | 2.75 | 2.00 | 2.786 | no | | | | | |
6. | 1.4$\times$1010 | 0.0400 | 3.10* | 0.01 | 1.707 | no | 0.061 | 3.10* | 0.83 | 69.838 | no | Globular
| | | | | | | 0.072 | 1.08 | 0.40 | 8.572 | no |
7. | 1.4$\times$1010 | 0.0450 | 3.10* | 0.00 | 5.913 | no | 0.064 | 3.10* | 0.84 | 61.119 | yes | Globular
| | | | | | | 0.068 | 1.00 | 0.40 | 13.223 | yes |
8. | 1.3$\times$1010 | 0.0200 | 3.10* | 0.00 | 34.012 | no | 0.024 | 3.10* | 0.56 | 137.883 | yes | Uncertain
| | | | | | | 0.000 | 1.00 | 0.33 | 27.456 | yes |
9. | 3.0$\times$109 | 0.0500 | 3.10* | 0.97 | 4.321 | yes | 0.042 | 3.10* | 1.26 | 12.802 | yes | Uncertain
| 2.0$\times$109 | 0.0005* | 3.10* | 2.67 | 7.154 | no | 0.061 | 2.63 | 1.11 | 9.039 | yes |
| 2.0$\times$109 | 0.0005* | 3.10 | 2.67 | 7.154 | no | | | | | |
10. | 1.0$\times$109 | 0.0500 | 3.10* | 0.50 | 13.716 | yes | 0.042 | 3.10* | 0.15 | 54.800 | yes | Uncertain
| 0.9$\times$109 | 0.0025* | 3.10* | 2.25 | 26.747 | no | 0.000 | 1.00 | 0.15 | 30.674 | yes |
| 0.9$\times$109 | 0.0025* | 2.69 | 2.06 | 17.238 | no | | | | | |
The preliminary tests reveal that in all cases the $B$ and $I$ bands dominate the values
of $\chi_{min}^{2}$. These bands have the largest systematics with respect to
external photometry ([Massey et al. (2006), Massey et al. 2006]). To account
for that and to minimize their impact we tentatively added 0.20 mag to the
$B$-band and 0.12 mag to the $I$-band magnitude errors. The errors listed in
Table 2 do not reflect this modification. As a result, the different colors
contribute more equally to the $\chi_{min}^{2}$.
The globular cluster model fits the colors of most targets much better than
the elliptical model does. The typical opacity $\tau_{V}$ across the M 31 disk is
$\sim$1 mag. There are two exceptions (objects No. 8 and No. 10) for which
neither model yields a reasonable match.
Interestingly, $R_{V}$ in M 31 is lower than the typical Galactic value of 3.1,
and it is similar to the one obtained by [Savcheva & Tassev (2002), Savcheva & Tassev
(2002)]. This may indicate a smaller mean size of the dust grains in the
diffuse component of the M 31 ISM, in comparison with the Milky Way. Although the
targets are located well within the standard radius $R_{25}$, where active
star formation still takes place, all of them are projected onto the inter-arm
regions where the opacity of the disk stays relatively low, as seen from the
column density values in Table 1. Here we assumed the Galactic gas-to-dust
ratio ([Bohlin et al. (1978), Bohlin et al., 1978]).
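As a rough illustration of this assumption (not part of the original analysis), the mean Galactic ratio $N({\rm H})/E_{B-V}\approx 5.8\times 10^{21}$ cm$^{-2}$ mag$^{-1}$ of Bohlin et al. (1978), together with $R_{V}$=3.1, converts the Table 1 column densities into the extinction expected if all of the gas lay in front of the target:

```python
# Expected extinction from the total hydrogen column density, assuming the
# mean Galactic gas-to-dust ratio N(H)/E(B-V) ~ 5.8e21 cm^-2 mag^-1
# (Bohlin et al. 1978) and R_V = 3.1; a rough illustration only, since part
# of the gas may lie behind the background object.
N_H = {1: 1.895, 3: 12.836, 8: 15.248}    # selected targets from Table 1, in 1e20 cm^-2
for target, n20 in N_H.items():
    e_bv = n20 * 1e20 / 5.8e21            # color excess E(B-V) in mag
    a_v = 3.1 * e_bv                      # visual extinction in mag
    print("target %d: E(B-V) = %.2f mag, A_V = %.2f mag" % (target, e_bv, a_v))
```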
The relation between total gas column densities and the derived extinctions,
corresponding to $R_{V}$=3.1 and $\chi_{min}^{2}$, is plotted in Fig. 3.
Surprisingly, the derived extinctions of the candidate globulars (targets No.
6 and 9) correlate better with the gas density if we use intrinsic colors
derived from the elliptical models. We attribute this to spatial
variations of the reddening law.
Figure 3: The agreement between the total hydrogen column density N(H) and the
color excess $E_{B-V}$ with respect to [Kurth et al. (1999), Kurth et al. (1999)]
model colors of globulars (left) and with respect to colors of ellipticals
(right) as predicted by the GALEV models ([Bicker et al. (2004), Bicker et al.
2004]). The extinction values are listed in Table 3 and correspond to
$R_{V}$=3.1 and $\chi_{min}^{2}$. Dashed lines represent the expected range
and the thick line is the mean Galactic gas-to-dust ratio ([Bohlin et al.
(1978), Bohlin et al. 1978]).
Our analysis also considers the K-correction. We used the HyperZ code of
[Bolzonella et al. (2000), Bolzonella et al. (2000)], which includes a variety
of spectral energy distributions for different morphological types of
galaxies. The results are presented in Table 4. Both cases – fixed
elliptical type and free morphological type – yield reasonable $\chi_{min}^{2}$
values. The extinction estimates are lower and the redshifts are higher than
those derived earlier, indicating a degeneracy between these two quantities.
The metallicity-opacity degeneracy is apparent in Table 3 as well: the higher
the abundance $Z$, the lower the derived extinction, and vice versa.
HyperZ tends to classify our targets as starburst galaxies, explaining the
larger extinction values in comparison with the case of fixed elliptical
morphological type. Note that a large fraction of the extinction may be
internal to a starburst galaxy and not related to the M 31 disk. This might be
the case for targets No. 8 and 10, which, together with No. 9, are our best
candidates for galaxies lying behind the M 31 disk.
Table 4: Summary of the $\chi^{2}$ minimization considering the K-corrections, for two different redshifts. The “intrinsic” redshifted colors of the galaxies were determined with HyperZ ([Bolzonella et al. (2000), Bolzonella et al. 2000]). The numbers of the targets are the same as in Table 1. The grid resolution is 0.05 along all axes and the Galactic extinction law of [Allen (1976), Allen (1976)] is assumed. The rest of the columns are identical to those in Table 3.
No. | Red- | $A_{V}$ | $\chi_{min}^{2}$ | dust | Galaxy | Red- | $A_{V}$ | $\chi_{min}^{2}$ | dust | Galaxy
---|---|---|---|---|---|---|---|---|---|---
| shift | [mag] | | vs. | Type | shift | [mag] | | vs. | Type
| $z$ | | | gas? | | $z$ | | | gas? |
1. | 0.130 | 0.05 | 4.562 | yes | elliptical* | 0.135 | 0.05 | 0.673 | yes | starburst
2. | 0.200 | 0.55 | 1.743 | yes | elliptical* | 0.200 | 0.55 | 1.743 | yes | elliptical
3. | 0.145 | 0.70 | 1.109 | yes | elliptical* | 0.140 | 1.00 | 1.025 | yes | starburst
4. | 0.200 | 0.35 | 1.482 | no: | elliptical* | 0.150 | 1.45 | 1.365 | no | starburst
5. | 0.150 | 0.25 | 0.546 | yes | elliptical* | 0.150 | 0.75 | 0.464 | no | starburst
6. | 0.145 | 0.15 | 2.700 | yes | elliptical* | 0.155 | 0.05 | 1.807 | yes | starburst
7. | 0.145 | 0.15 | 3.926 | no | elliptical* | 0.350 | 0.10 | 1.051 | no | starburst
8. | 0.110 | 0.10 | 2.417 | no | elliptical* | 0.105 | 0.95 | 2.230 | yes | starburst
9. | 0.155 | 0.75 | 1.246 | yes | elliptical* | 0.155 | 0.75 | 1.246 | yes | elliptical
10. | 0.050 | 0.00 | 4.659 | no | elliptical* | 0.150 | 0.50 | 2.571 | yes | starburst
## 4 Conclusions
We measure the opacity of the M 31 disk from the color excesses of 21 objects
– a mixture of galaxies behind the disk and globular clusters. Seven of them
are consistent with globulars, two - with starburst galaxies and one - with an
elliptical galaxy. Their extinction estimates are consistent with a semi-
transparent disk ($\tau_{V}\lesssim 1$) in the inter-arm regions. We confirm
the conclusion of [Savcheva & Tassev (2002), Savcheva & Tassev (2002)] that
the total-to-selective extinction value $R_{V}$ in the diffuse ISM of M 31 is
on average lower than the typical Galactic value of $R_{V}$=3.1. The gas-to-
dust ratio appears similar to that in the Milky Way and is independent of
the galactocentric distance, which might indicate Solar abundances along the
lines of sight studied here (within 20′$\div$85′ from the M 31 center).
## Acknowledgments
This work was partially supported by the following grants: VU-NZ-01/06,
VU-F-201/06 & VU-F-205/06 of the Bulgarian Science Foundation. One of the
authors (P.N.) thanks the organizing committee of IAU Symposium No. 254 for the
grant which allowed him to participate.
## References
* [Allen (1976)] Allen C. W. (1976) Astrophysical Quantities, (University of London: The Athlone Press), p. 264
* [Battistini et al. (1980)] Battistini P., Bonoli F., Braccesi A., Fusi-Pecci F., Malagnini M. L. & Marano B. (1980) A&AS, 42, 357
* [Bessell & Brett (1988)] Bessell M.S. & Brett J.M. (1988) PASP, 100, 1134
* [Bicker et al. (2004)] Bicker J., Fritze-von Alvensleben U. & Fricke K.J. (2004) Ap&SS, 284, 463
* [Bohlin et al. (1978)] Bohlin R.C., Savage B.D. & Drake J.F. (1978) ApJ, 224, 132
* [Bolzonella et al. (2000)] Bolzonella M., Miralles J.-M. & Pelló R. (2000) A&A, 363, 476
* [Brinks & Shane (1984)] Brinks E. & Shane W. (1984) A&AS, 55, 179
* [Cuillandre et al. (2001)] Cuillandre, J., Lequeux, J., Allen, R.J., Mellier, Y. & Bertin, E. (2001) ApJ 554, 190
* [Cutri et al. 2003] Cutri, R.M., Skrutskie, M.F. van Dyk, S. et al., The IRSA 2MASS All-Sky Point Source Catalog, NASA/IPAC Infrared Science Archive
* [Galleti et al. (2006)] Galleti S., Federici L., Bellazzini M., Buzzoni A. & Fusi Pecci F. (2006) A&A, 456, 985
* [Holwerda et al. (2005)] Holwerda B.W., González R.A., Allen R.J. & van der Kruit P. C. (2005) AJ, 129, 1396
* [Kurth et al. (1999)] Kurth O.M., Fritze-von Alvensleben U. & Fricke K.J. (1999) A&AS, 138, 19
* [Lisi et al. (1993)] Lisi F., Baffa C. & Hunt, L.K. (1993) SPIE, 1946, 594
* [Massey et al. (2006)] Massey P., Olsen K.A.G., Hodge P.W. et al. (2006) AJ, 131, 2478
* [Nedialkov & Ivanov (1998)] Nedialkov P.L. & Ivanov V.D. (1998) A&AT, 17, 367
* [Nieten et al. (2006)] Nieten Ch., Neininger, N. Guélin, M. et al. (2006) A&A, 453, 459
* [Persson et. al (1979)] Persson S. E., Frogel J.A. & Aaronson M. (1979) ApJS, 39, 61
* [Savcheva & Tassev (2002)] Savcheva A. & Tassev S. (2002) PAOB 73, 219
* [Schlegel et. al (1998)] Schlegel D.J., Finkbeiner D.P. & Davis M. (1998) ApJ, 500, 525
* [Strong & Mattox (1996)] Strong, A.W. & Mattox, J.R. (1996) A&A, 308, L21
|
arxiv-papers
| 2009-05-18T17:24:07 |
2024-09-04T02:49:02.711417
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "P. L. Nedialkov (1), A. T. Valcheva (1), V. D. Ivanov (2) and L. Vanzi\n (3) ((1) Astronomy Department, University of Sofia, Bulgaria, (2) European\n Southern Observatory, Chile, (3) Department of Astronomy and Astrophysics,\n Pontificia Universidad Cat\\'olica de Chile, Chile)",
"submitter": "Antoniya Valcheva",
"url": "https://arxiv.org/abs/0905.2938"
}
|
0905.3035
|
# Energy bands and Landau levels of ultracold fermions in the bilayer
honeycomb optical lattice
Jing-Min Hou
Department of Physics, Southeast University, Nanjing, 211189, China E-mail:
jmhou@seu.edu.cn
###### Abstract
We investigate the spectrum and eigenstates of ultracold fermionic atoms in
the bilayer honeycomb optical lattice. In the low energy approximation, the
dispersion relation has a parabolic form and the quasiparticles are chiral. In
the presence of an effective magnetic field, which can be created in the system
by optical means, the energy spectrum shows an unconventional Landau level
structure. Furthermore, experimental detection of the spectrum with Bragg
scattering techniques is proposed.
Keywords: optical lattices; ultracold atoms; energy bands; Landau levels.
## 1 Introduction
Recently, studies of cold atoms in optical lattices have developed
extensively. Optical lattices are crystals made of periodic light potentials
that confine ultracold atoms[1, 2]. Because of their precise control over the
system parameters and defect-free properties, ultracold atoms in optical
lattices provide an ideal platform to study much interesting physics in
condensed matter[3] and even high energy physics[4].
Very recently, a strong interest has been raised in the two-dimensional
honeycomb lattice [5, 6, 7], since its physics is closely related to that of the
graphene material[8, 9, 10, 11, 12, 13], which has surprisingly rich
collective behaviors. Within the tight-binding approximation, graphene has a
linear dispersion relation resembling the Dirac spectrum for massless
fermions. In the presence of a magnetic field, it has Landau energy levels
with a square-root dependence on the quantum number $n$, instead of the usual
linear dependence. In particular, the zero-energy Landau level exists at
$n=0$, which is a direct result of chirality. Recently, McCann et al. have
studied the electronic states and unconventional Landau levels of the bilayer
graphene arranged according to Bernal stacking[14, 15].
In this paper, we investigate the eigenstates and spectrum of ultracold
fermions in the bilayer honeycomb optical lattice with a different stacking
order from that in References [14, 15]. In the absence of an effective magnetic
field, the dispersion relation has a parabolic form and the quasiparticles are
still chiral, as in the monolayer system. In the presence of an
effective magnetic field, which can be built by coupling the internal
states (spin) of atoms to spatially varying laser beams [16, 17, 18, 19, 20],
the spectrum shows an unconventional Landau level structure. Experimental
detection of the spectrum with Bragg scattering techniques is proposed.
## 2 The model
We consider a system of ultracold fermions confined in the bilayer honeycomb
lattice. The honeycomb lattice consists of two sublattices denoted by $A$ and
$B$. Then, the bilayer honeycomb lattice considered in our work is formed by
coupling the $B$ sublattices of the two layers with tunneling and leaving the
$A$ sublattices of the two layers uncoupled. One can create the bilayer
honeycomb lattice in the following steps. First, one builds the monolayer
honeycomb lattice as shown in Fig.1 (a) with three laser beams in the $x-y$
plane and two laser beams along the $z$ direction[21]. When the potential
barrier of the optical lattice along the $z$ direction is high enough, the
vertical tunneling between different planes is strongly suppressed, so that
every layer is an independent two-dimensional honeycomb lattice. Secondly, one
makes the triangular lattice as shown in Fig.1 (b) with red-detuned laser
fields[22]. Finally, to realize the bilayer honeycomb lattice, one can put the
triangular lattice and the honeycomb lattice together as shown in Fig.1 (c).
There exists an additional micro-trap between sublattices $B$ of every two
layers in the honeycomb lattice. This additional micro-trap lowers the barrier
between the $B$ sublattices of these two layers, or links them by acting as an
intermediate point, so that the $B$ sublattices of these two layers
are coupled.
honeycomb lattices can be achieved (see Fig.1 (c)), so we only need to
investigated one of them. For convenience, we assume that the whole system is
trapped in a two-dimensional box, which can be achieved by adding four blue
detuning endcap beams at the edges of the optical lattice in $x-y$ plane[23].
With this box trap, the system can be considered to have the hard wall
boundary condition approximately, so we can neglect the boundary effect in our
discussion.
In this scheme, the ultracold atoms have a $\Lambda$-type three-level configuration: the states $|1\rangle$ and $|2\rangle$ are degenerate ground states, assumed to be different Zeeman states of the same hyperfine level, and $|3\rangle$ is an excited state. The ground states $|j\rangle$ with $j=1,2$ are coupled to the excited state $|3\rangle$ through two laser fields with the corresponding Rabi frequencies $\Omega_{j}e^{i\varphi_{j}}$, respectively [18]. A schematic representation of this scheme is shown in Fig. 2. The total Hamiltonian reads $\hat{H}=\hat{H}_{0}+\hat{H}_{1}$. The unperturbed Hamiltonian $\hat{H}_{0}$ is given by
$\displaystyle\hat{H}_{0}=\sum_{\alpha}\int d{\bf
r}\hat{\Psi}^{\dagger}_{\alpha}({\bf
r})\left[-{\hbar^{2}}\nabla^{2}/{2m}+V({\bf r})\right]\hat{\Psi}_{\alpha}({\bf
r}),$ (1)
and the light-atom interaction Hamiltonian $\hat{H}_{1}$ is given by,
$\displaystyle\hat{H}_{1}=\int d{\bf r}\left[\Omega_{1}e^{i\varphi({\bf
r})}\hat{\Psi}_{3}^{\dagger}({\bf r})\hat{\Psi}_{1}({\bf
r})+\Omega_{2}\hat{\Psi}_{3}^{\dagger}({\bf r})\hat{\Psi}_{2}({\bf r})+{\rm
H.c.}\right].$ (2)
Diagonalizing the interaction Hamiltonian with the unitary transformation $S$,
$\displaystyle S=\left(\begin{array}[]{ccc}\cos\theta&-\sin\theta
e^{-i\varphi}&0\cr\frac{\sqrt{2}}{2}\sin\theta
e^{i\varphi}&\frac{\sqrt{2}}{2}\cos\theta&-\frac{\sqrt{2}}{2}\cr\frac{\sqrt{2}}{2}\sin\theta
e^{i\varphi}&\frac{\sqrt{2}}{2}\cos\theta&\frac{\sqrt{2}}{2}\end{array}\right),$
(6)
yields three eigenstates $|\Phi_{1}\rangle$, $|\Phi_{2}\rangle$ and
$|\Phi_{3}\rangle$, where $\tan\theta=|\Omega_{1}|/|\Omega_{2}|$ and
$\varphi=\varphi_{1}-\varphi_{2}$ are both position-dependent variables. The
corresponding eigenvalues are
$E_{i}=(0,-\sqrt{|\Omega_{1}|^{2}+|\Omega_{2}|^{2}},\sqrt{|\Omega_{1}|^{2}+|\Omega_{2}|^{2}})$.
The new field operators corresponding to these eigenstates are related to the old field operators by
$\displaystyle\left(\begin{array}[]{c}\hat{\Phi}_{1}\\\ \hat{\Phi}_{2}\\\
\hat{\Phi}_{3}\end{array}\right)=S\left(\begin{array}[]{c}\hat{\Psi}_{1}\\\
\hat{\Psi}_{2}\\\ \hat{\Psi}_{3}\end{array}\right).$ (13)
In the new basis, under the adiabatic condition $\langle\Phi_{1}|\hat{H}_{0}|\Phi_{j}\rangle\ll|E_{1}-E_{j}|$ for $j=2,3$, we can neglect the populations of the states $|\Phi_{2}\rangle$ and $|\Phi_{3}\rangle$. The effective Hamiltonian can therefore be written in the dark-state basis $|\Phi_{1}\rangle$ [16, 17, 18, 19, 20],
$\displaystyle\hat{H}=\int d{\bf r}\hat{\Phi}_{1}^{\dagger}({\bf
r})\left[\frac{1}{2m}(-i\hbar\nabla-{\bf{A}})^{2}+{V}_{eff}({\bf
r})\right]\hat{\Phi}_{1}({\bf r}),$ (14)
where ${\bf A}=-\hbar\sin^{2}\theta\nabla\varphi$ and $\hat{\cal H}\equiv\frac{1}{2m}(-i\hbar\nabla-{\bf{A}})^{2}+{V}_{eff}({\bf r})$ is the single-particle Hamiltonian, with $V_{eff}({\bf r})$ the effective trap potential. Here $\bf A$ is the effective gauge potential associated with the artificial magnetic field ${\bf B}=\nabla\times{\bf A}$. In practice, we choose two counter-propagating Gaussian laser beams, $\Omega_{j}e^{i\varphi_{j}}=\Omega_{0}\exp[-(x-x_{j})^{2}/\sigma_{0}^{2}]\exp(-ik_{j}y)$ $(j=1,2)$, with propagation wave vectors $k_{1}=-k_{2}=k_{0}/2$ and center positions $x_{1}=-x_{2}=\Delta x/2$ [18]. The effective trap potential is then [18]
$\displaystyle V_{eff}({\bf r})=V({\bf
r})+\frac{\hbar^{2}k_{0}^{2}}{2m}\frac{(1+1/4d^{2}k_{0}^{2})}{4\cosh^{2}(x/2d)},$
(15)
and the effective vector gauge potential is
$\displaystyle{\bf A}=\frac{\hbar k_{0}}{1+e^{-x/d}}{\bf e}_{y},$ (16)
with $d=\sigma_{0}^{2}/(4\Delta x)$. One then straightforwardly obtains the effective magnetic field [18]
$\displaystyle{\bf B}=\frac{\hbar k_{0}}{4d\cosh^{2}(x/2d)}{\bf e}_{z}.$ (17)
In practice, one may set $d\sim 1\,{\rm mm}$ and $-0.01\,{\rm mm}<x<0.01\,{\rm mm}$, so that the condition $|x/d|\ll 1$ is satisfied. Under this condition, the effective trap potential can be written approximately as
$\displaystyle V_{eff}({\bf r})\approx V({\bf
r})+\frac{\hbar^{2}k_{0}^{2}}{2m}\frac{(1+1/4d^{2}k_{0}^{2})}{4},$ (18)
which has an additional constant term compared with the original external trap potential. This constant does not change the geometrical structure of the original trap potential, so we can drop it, absorbing it into a constant chemical-potential term. The effective magnetic field can be written approximately as
$\displaystyle{\bf B}\approx{\bf B}^{(0)}+{\bf B}^{(2)}=\frac{\hbar
k_{0}}{4d}{\bf e}_{z}-\frac{\hbar k_{0}}{4d}\frac{x^{2}}{8d^{2}}{\bf e}_{z}.$
(19)
where the quadratic term can be neglected since $|B^{(2)}/B|<1.25\times 10^{-5}$. Thus, the effective magnetic field can be regarded as homogeneous in the regime considered in our scheme. For the typical parameter values $k_{0}\sim 2\times 10^{6}\,{\rm m}^{-1}$ and $d\sim 10^{-3}\,{\rm m}$, we obtain the magnitude of the effective magnetic field, $B\sim 3.3\times 10^{-25}\,{\rm J\cdot s\cdot m^{-2}}$.
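As a quick numerical cross-check of the homogeneity argument above (a minimal sketch; the parameter values are those assumed in the text), the bound on the quadratic correction in Eq. (19) follows directly:

```python
# Minimal check of the homogeneity of the effective field, Eq. (19):
# with d ~ 1 mm and |x| < 0.01 mm, the quadratic correction is bounded by
# |B^(2)/B| = x^2 / (8 d^2), reproducing the 1.25e-5 quoted above.
d = 1.0e-3       # m  (d ~ 1 mm, as assumed in the text)
x_max = 1.0e-5   # m  (|x| < 0.01 mm)
ratio = x_max**2 / (8.0 * d**2)
print(f"|B^(2)/B| <= {ratio:.3e}")   # -> 1.250e-05
```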
## 3 The effective low energy Hamiltonian
Taking the tight-binding limit, we can superpose the Bloch states to get two
sets of Wannier functions $w_{\alpha}^{A}({\bf r}-{\bf r}_{i})$ and
$w_{\alpha}^{B}({\bf r}-{\bf r}_{j})$ with $\alpha=1,2$, which correspond to
sublattices $A$ and $B$ of layer $\alpha$, respectively. In the presence of
the effective gauge field we can expand the field operator in the lowest band
Wannier functions as,
$\displaystyle\hat{\Phi}_{1}({\bf r})$ $\displaystyle=$
$\displaystyle\sum_{\alpha=1,2}\left[\sum_{i\in A}\hat{a}_{\alpha}({\bf
r}_{i})e^{\frac{i}{\hbar}\int_{0}^{{\bf r}_{i}}{\bf A}\cdot d{\bf
r}}w_{A}({\bf r}-{\bf r}_{i})\right.$ (20) $\displaystyle\left.+\sum_{j\in
B}\hat{b}_{\alpha}({\bf r}_{j})e^{\frac{i}{\hbar}\int_{0}^{{\bf r}_{j}}{\bf
A}\cdot d{\bf r}}w_{B}({\bf r}-{\bf r}_{j})\right].$
Substituting the above expression into Eq.(14), we can rewrite the Hamiltonian
as $\hat{H}=\hat{H}_{0}+\hat{H}_{1}$ with [12],
$\displaystyle\hat{H}_{0}$ $\displaystyle=$
$\displaystyle-t\sum_{\alpha}\sum_{{\bf r}_{i}\in
A}\sum_{j=1,2,3}[\hat{a}_{\alpha}^{\dagger}({\bf r}_{i})\hat{b}_{\alpha}({\bf
r}_{i}+{\bf s}_{j})e^{\frac{i}{\hbar}\int_{0}^{{\bf s}_{j}}{\bf A}\cdot d{\bf
r}}+{\rm H.c.}],$ (21)
and
$\displaystyle\hat{H}_{1}$ $\displaystyle=$ $\displaystyle-t_{\perp}\sum_{{\bf
r}_{i}\in A}[\hat{b}_{1}^{\dagger}({\bf r}_{i})\hat{b}_{2}({\bf r}_{i})+{\rm
H.c.}],$ (22)
where $t$ is the tunneling parameter, $t=-\int d{\bf r}w_{A}^{*}({\bf r}-{\bf r}_{i})\hat{\cal H}_{0}w_{B}({\bf r}-{\bf r}_{j})=-\int d{\bf r}w_{B}^{*}({\bf r}-{\bf r}_{j})\hat{\cal H}_{0}w_{A}({\bf r}-{\bf r}_{i})$; the interlayer tunneling is $t_{\perp}=-\int d{\bf r}w_{B_{2}}^{*}({\bf r}-{\bf r}_{i})\hat{\cal H}_{0}w_{B_{1}}({\bf r}-{\bf r}_{j})$; and the energy shifts for sublattices $A$ and $B$ are $\epsilon_{A}=\int d{\bf r}w_{A}^{*}({\bf r}-{\bf r}_{i})\hat{\cal H}_{0}w_{A}({\bf r}-{\bf r}_{i})$ and $\epsilon_{B}=\int d{\bf r}w_{B}^{*}({\bf r}-{\bf r}_{j})\hat{\cal H}_{0}w_{B}({\bf r}-{\bf r}_{j})$, respectively, with $\hat{\cal H}_{0}\equiv-\frac{\hbar^{2}}{2m}\nabla^{2}+{V}_{eff}({\bf r})$. Here, for convenience, we have dropped a constant term in Hamiltonian (21). The three vectors ${\bf s}_{j}$ in Eq.(21) are ${\bf s}_{1}=(0,-1)a,\ {\bf s}_{2}=\left({\sqrt{3}}/{2},{1}/{2}\right)a,\ $ and ${\bf s}_{3}=\left(-{\sqrt{3}}/{2},{1}/{2}\right)a$, where $a$ is the lattice spacing. These vectors connect any site of sublattice $A$ to its nearest-neighbor sites on sublattice $B$ within each layer.
Here, we assume that the condition $t_{\perp}\ll t$ is satisfied, so that Eq. (22) can be treated as a perturbation. We Fourier transform $\hat{a}({\bf r})$ and $\hat{b}({\bf r})$ as,
$\displaystyle\hat{a}_{\alpha}({\bf k})=\sum_{{\bf r}_{i}\in A}e^{-i{\bf k}\cdot{\bf r}_{i}}\hat{a}_{\alpha}({\bf r}_{i}),$ (23)
$\displaystyle\hat{b}_{\alpha}({\bf k})=\sum_{{\bf r}_{i}\in B}e^{-i{\bf k}\cdot{\bf r}_{i}}\hat{b}_{\alpha}({\bf r}_{i}),$ (24)
where ${\bf r}_{i}$ denotes the lattice coordinates in the $x$-$y$ plane. Substituting the above expressions into Eqs.(21) and (22), we obtain the following Hamiltonian,
$\displaystyle\hat{H}_{0}=\sum_{\alpha}\sum_{k}[\xi({\bf
k})\hat{a}_{\alpha}^{\dagger}({\bf k})\hat{b}_{\alpha}({\bf k})+\xi^{*}({\bf
k})\hat{b}_{\alpha}^{\dagger}({\bf k})\hat{a}_{\alpha}({\bf k})],$ (25)
$\displaystyle\hat{H}_{1}=-t_{\perp}\sum_{{\bf k}}[\hat{b}_{1}^{\dagger}({\bf k})\hat{b}_{2}({\bf k})+\hat{b}_{2}^{\dagger}({\bf k})\hat{b}_{1}({\bf k})],$ (26)
where $\xi({\bf k})$ is the single-particle energy spectrum without interlayer tunneling, defined via $\xi({\bf k})=-t\sum_{j=1,2,3}e^{-i({\bf k}\cdot{\bf s}_{j}-\frac{1}{\hbar}\int_{0}^{{\bf s}_{j}}{\bf A}\cdot d{\bf r})}$. The energy spectrum contains two zero-energy points at ${\bf K}_{\pm}=\pm({4\pi}/{3\sqrt{3}a},0)$, around which it can be linearized. Neglecting the coupling between the Fermi points ${\bf K}_{\pm}$, the total Hamiltonian $\hat{H}$ can be expanded around the contact point ${\bf K}_{+}$ (${\bf K}_{-}$) in coordinate space. Without loss of generality, we expand the total Hamiltonian around the contact point ${\bf K}_{+}$ as [12, 13],
$\displaystyle\hat{H}=\int d^{2}r\hat{\psi}^{\dagger}({\bf r})\hat{\cal
H}\hat{\psi}({\bf r}),$ (27)
where the spinor $\hat{\psi}=({\hat{\psi}_{1}^{a}\ \ \hat{\psi}_{1}^{b}\ \
\hat{\psi}_{2}^{a}\ \ \hat{\psi}_{2}^{b}})^{T}$ for the Dirac point ${\bf
K}_{+}$. Here, $\hat{\cal H}$ takes the $4\times 4$ matrix form,
$\displaystyle\hat{\cal
H}=\hbar\left(\begin{array}[]{cccc}0&v_{F}\hat{\pi}^{\dagger}&0&0\cr
v_{F}\hat{\pi}&0&0&-t_{\perp}\cr 0&0&0&v_{F}\hat{\pi}^{\dagger}\cr
0&-t_{\perp}&v_{F}\hat{\pi}&0\end{array}\right),$ (32)
where $\hat{\pi}=\hat{\pi}_{x}+i\hat{\pi}_{y}$ and
$\hat{\pi}^{\dagger}=\hat{\pi}_{x}-i\hat{\pi}_{y}$, with
$\hat{{\pi}}_{x}=\hat{p}_{x}-{A}_{x}/\hbar$ and
$\hat{{\pi}}_{y}=\hat{p}_{y}-{A}_{y}/\hbar$, and $v_{F}=3at/2\hbar$ is the
Fermi velocity. Here, $t_{\perp}\ll t$ is assumed. Eliminating the dimer-state components $\hat{\psi}_{1}^{b}$ and $\hat{\psi}_{2}^{b}$, we arrive at a two-component Hamiltonian describing effective hopping between the $A_{1}$ and $A_{2}$ sites
$\displaystyle\hat{\cal H}_{\rm
eff}=\frac{\hbar^{2}}{2m}\left(\begin{array}[]{cc}0&\hat{\pi}^{\dagger}\hat{\pi}\cr\hat{\pi}^{\dagger}\hat{\pi}&0\end{array}\right),$
(35)
where $m=t_{\perp}/2v_{F}^{2}$.
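As an illustrative cross-check of the spectrum discussed above (a minimal numerical sketch under the stated tight-binding conventions, with illustrative values $t=a=1$), one can verify that $\xi({\bf k})$ without the gauge field indeed vanishes at the quoted points ${\bf K}_{\pm}$:

```python
import numpy as np

# Sketch: xi(k) = -t * sum_j exp(-i k . s_j) vanishes at K_± = ±(4π/(3√3 a), 0),
# using the nearest-neighbor vectors s_1, s_2, s_3 defined in the text.
t, a = 1.0, 1.0   # illustrative units
s = np.array([[0.0, -1.0],
              [np.sqrt(3) / 2, 0.5],
              [-np.sqrt(3) / 2, 0.5]]) * a

def xi(k):
    return -t * np.sum(np.exp(-1j * s @ k))

K_plus = np.array([4 * np.pi / (3 * np.sqrt(3) * a), 0.0])
print(abs(xi(K_plus)), abs(xi(-K_plus)))   # both zero to machine precision
```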
## 4 Energy bands
First, we consider the case without gauge fields, i.e.
$\hat{{\pi}}_{x}=-i\partial_{x}$ and $\hat{{\pi}}_{y}=-i\partial_{y}$. The
eigenfunctions of the Hamiltonian (35) are given by
$\displaystyle f_{s{\bf k}}({\bf r})=\frac{1}{\sqrt{2}L}\exp(i{\bf k}\cdot{\bf
r})\left(\begin{array}[]{c}s\cr 1\end{array}\right),$ (38)
where $L^{2}$ is the area of the system, and $s$ denotes the conduction band
with $s=+1$ and the valence band with $s=-1$. The corresponding eigenenergies
are
$\displaystyle E=\frac{s\hbar^{2}k^{2}}{2m},$ (39)
where $k=\sqrt{k_{x}^{2}+k_{y}^{2}}$. This dispersion relation has a parabolic form, as shown in Fig.3. The quasiparticles remain chiral, as in the monolayer system. In our case the pseudospin vector is the constant ${\bf n}=(1,0)$ for any wave vector ${\bf k}$, whereas ${\bf n}=(\cos(2\phi),\sin(2\phi))$ for ${\bf k}=(k\cos\phi,k\sin\phi)$ in bilayer graphene with Bernal stacking order, as in References [14, 15]. Thus, the Berry phase of $2\pi$ present in Bernal-stacked bilayer graphene is absent in our bilayer honeycomb lattice configuration.
## 5 Unconventional Landau levels
For the case with an effective magnetic field in the Landau gauge $(0,Bx,0)$,
the eigenfunctions of the Hamiltonian (35) can be obtained as,
$\displaystyle F_{nk_{y}}({\bf r})$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2L}}\exp(ik_{y}y)\left(\begin{array}[]{c}\phi_{|n|}\cr{\rm
sgn}(n)\phi_{|n|}\end{array}\right),$ (42)
with ${\rm sgn}(n)=+1$ for $n>0$ and $-1$ for $n<0$; this form holds for $n\neq 0$, while
$\displaystyle F_{0k_{y}}({\bf r})$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2L}}\exp(ik_{y}y)\left(\begin{array}[]{c}\phi_{0}\cr\pm\phi_{0}\end{array}\right),$
(45)
for $n=0$. Here, the $\phi_{|n|}$ are harmonic-oscillator eigenstates,
$\displaystyle\phi_{|n|}=\frac{1}{\sqrt{2^{|n|}|n|!\sqrt{\pi}u}}\exp\left[-\frac{1}{2}\left(\frac{x-u^{2}k}{u}\right)^{2}\right]H_{|n|}\left(\frac{x-u^{2}k}{u}\right),$
(46)
where the quantum number $n$ is an integer and $u=\sqrt{\hbar/B}$. The corresponding Landau energy levels are $E_{n}=n\hbar\omega_{c}$ with $n=\cdots,-2,-1,0,1,2,\cdots$ and $\omega_{c}=2v_{F}^{2}B/t_{\perp}$, as shown in Fig.4. This spectrum is linear in $n$, like the conventional Landau-level spectrum; however, the quasiparticles in our scheme are chiral. Zero modes exist, and the zero-energy level is twofold degenerate relative to the non-zero levels.
To give a numerical evaluation, the typical values of the parameters can be
taken as $t\sim 10^{-30}{\rm J}$, $t_{\perp}\sim 10^{-33}{\rm J}$, $a\sim
200{\rm nm}$, $B\sim 3.3\times 10^{-25}{\rm J\cdot s\cdot m^{-2}}$. We can then estimate the cyclotron frequency, $\omega_{c}\approx 5.7\times 10^{3}\,{\rm s^{-1}}$, and the first Landau-level gap for the monolayer honeycomb lattice, $\Delta\sim 6\times 10^{-31}\,{\rm J}$. The temperature required to keep the atoms in the zeroth Landau level is about $43\ {\rm nK}$.
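These order-of-magnitude estimates can be reproduced with a minimal numerical sketch (using the parameter values assumed above; small differences from the quoted figures are due to rounding):

```python
# Order-of-magnitude estimates for the Landau spectrum of the bilayer lattice,
# using the typical parameter values assumed in the text.
hbar = 1.0546e-34          # J s
kB = 1.381e-23             # J / K
t, t_perp = 1e-30, 1e-33   # J
a = 200e-9                 # m
B = 3.3e-25                # J s / m^2 (effective field estimated in Sec. 2)

v_F = 3 * a * t / (2 * hbar)          # Fermi velocity
omega_c = 2 * v_F**2 * B / t_perp     # cyclotron frequency of Sec. 5
gap = hbar * omega_c                  # spacing of the linear Landau ladder
print(f"omega_c ~ {omega_c:.1e} 1/s, gap ~ {gap:.1e} J, T ~ {gap / kB * 1e9:.0f} nK")
```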
## 6 Bragg spectroscopy
It is not easy to measure the Hall conductivity of cold fermionic atoms in the bilayer honeycomb lattice. However, an available method to detect the unconventional Landau levels of ultracold fermions in the bilayer honeycomb lattice is Bragg spectroscopy [24], which is widely used to probe excitation spectra in condensed-matter physics. In Bragg scattering, the atomic gas is exposed to two laser beams with wave vectors ${\bf k}_{1}$ and ${\bf k}_{2}$ and frequency difference $\omega$. The light-atom interaction Hamiltonian can be written as,
$\displaystyle\hat{H}_{B}=\sum_{s_{1},s_{2},{\bf k},{\bf q}}\Omega_{B}e^{i{\bf
q}\cdot{\bf r}}|f_{s_{2},{\bf k}+{\bf q}}\rangle\langle f_{s_{1},{\bf
k}}|+{\rm H.c.},$ (47)
where $s_{1}$ and $s_{2}$ are band indices. We consider the case of half filling, i.e., the bands with $n\leq 0$ are fully occupied and the bands with $n>0$ are empty. The half-filled state can be prepared with the coherent filtering scheme proposed in Reference [25]. From Fermi's golden rule, we obtain the dynamic structure factor,
$\displaystyle S({\bf q},\omega)$ $\displaystyle=$
$\displaystyle\frac{1}{N\hbar^{2}\Omega^{2}}\sum_{\alpha,\beta}|\langle\phi_{\beta}^{(f)}|H_{B}|\phi_{\alpha}^{(i)}\rangle|^{2}$
(48) $\displaystyle\times\delta(\hbar\omega-E_{\beta}+E_{\alpha}),$
where $N$ is the total number of atoms in the system, $\phi_{\alpha}^{(i)}$ denotes the initial state, $\phi_{\beta}^{(f)}$ denotes the final state, and $\alpha$ ($\beta$) collectively labels the quantum numbers of the corresponding state.
For simplicity, we assume that ${\bf q}$ points along the $y$ axis in wave-vector space, i.e. ${\bf q}=q{\bf e}_{y}$. Following the above formulae, we can straightforwardly evaluate the dynamic structure factor $S({\bf q},\omega)$. Fig.5(a) shows $S({\bf q},\omega)$ as a function of $\omega$ for the case without an effective magnetic field. We find that $S({\bf q},\omega)$ vanishes when $\hbar\omega$ is below $\hbar^{2}q^{2}/2m$ and is a finite constant for $\hbar\omega$ above $\hbar^{2}q^{2}/2m$. Fig.5 (b) shows $S({\bf q},\omega)$ as a function of $\omega$ with $q=1.0\sqrt{B/\hbar}$ for the bilayer honeycomb lattice. In this case, when an atom in the zeroth Landau level is excited to higher levels, peaks appear at $\omega=n\Lambda^{2}t^{2}/\hbar t_{\perp}$ with $n=1,2,\cdots$ and $\Lambda=3a\sqrt{B/2\hbar}$; these are marked with red stars in Fig.5 (b). The spacings between neighboring peaks marked with red stars in Fig.5 (b) are identical.
## 7 Conclusion
In summary, we have proposed a scheme to investigate ultracold fermionic atoms
in the bilayer honeycomb lattice for the cases without and with an effective
magnetic field. The effective magnetic field can be built with optical
techniques. Without an effective magnetic field, the dispersion relation has a parabolic form and the quasiparticles are chiral. With an effective magnetic field, there exist unconventional Landau levels that include a zero-mode level. We have calculated the dynamic structure factors for the two cases and proposed to detect them with Bragg spectroscopy.
## Acknowledgements
This work was supported by the Teaching and Research Foundation for the
Outstanding Young Faculty of Southeast University.
## References
* [1] Jaksch, D.; Bruder, C.; Cirac, J. I.; Gardiner, C. W.; Zoller, P. Phys. Rev. Lett. 1998, 81, 3108–3111.
* [2] Greiner, M.; Esslinger, T.; Mandel, O.; Hänsch, T. W.; Bloch, I. Nature 2002, 415, 39–44.
* [3] Lewenstein, M.; Sanpera, A.; Ahufinger, V.; Damski, B.; Sen, A.; Sen, U. Adv. Phys. 2007, 56, 243-379.
* [4] Rapp, Á.; Zaránd, G.; Honerkamp, C.; Hofstetter, W. Phys. Rev. Lett. 2007, 98, 160405-1–4.
* [5] Zhao, E.; Paramekanti, A. Phys. Rev. Lett. 2006, 97, 230404-1–4.
* [6] Zhu, S. L.; Wang, B.; Duan, L. M. Phys. Rev. Lett. 2007, 98, 260402-1–4.
* [7] Wu, C.; Bergman, D.; Balents, L.; Das Sarma, S. Phys. Rev. Lett. 2007, 99, 070401-1–4.
* [8] Zheng, Y.; Ando, T. Phys. Rev. B 2002, 65, 245420-1–11.
* [9] Novoselov, K. S.; Geim, A. K.; Morozov, S. V.; Jiang, D.; Katsnelson, M. I.; Grigorieva, I. V.; Dubonos S. V.; Firsov A. A. Nature 2005, 438, 197–200.
* [10] Li, G.; Andrei, E. Y. Nature Phys. 2007, 3, 623-627.
* [11] Zhang, Y.; Tan, Y. W.; Stormer, H. L.; Kim, P. Nature 2005, 438, 201–204.
* [12] Jackiw, R.; Pi, S. Y. Phys. Rev. Lett. 2007 ,98, 266402-1–4.
* [13] Hou, C. Y.; Chamon, C.; Mudry, C. Phys. Rev. Lett. 2007, 98, 186809-1–4.
* [14] McCann, E.; Fal’ko, V. I. Phys. Rev. Lett. 2006, 96, 086805-1–4.
* [15] McCann, E.; Abergel, D. S. L.; Fal’ko, V. I. Solid State Commun. 2007, 143, 110-115.
* [16] Juzeliūnas, G.; Öhberg, P.Phys. Rev. Lett. 2004, 93, 033602-1–4.
* [17] Juzeliūnas, G.; Öhberg, P.; Ruseckas, J.; Klein, A. Phys. Rev. A 2005, 71, 053614-1–9.
* [18] Juzeliūnas, G.; Ruseckas, J.; Öhberg, P.; Fleischhauer, M. Phys. Rev. A 2006, 73, 025602-1–4.
* [19] Liu, X. J.; Liu, X.; Kwek, L. C.; Oh, C. H. Phys. Rev. Lett. 2007, 98, 026602-1–4.
* [20] Zhu, S. L.; Fu, H.; Wu, C. J.; Zhang, S. C.; Duan, L. M. Phys. Rev. Lett. 2007, 97, 240401-1–4.
* [21] Duan, L. M.; Demler, E.; Lukin, M. D. Phys. Rev. Lett. 2003,91, 090402-1–4.
* [22] Grynberg, G.; Robilliard, C. Phys. Rep. 2001, 355, 335-451.
* [23] Meyrath, T. P.; Schreck, F; Hanssen, J. L.; Chuu, C. -S.; Raizen, M. G. Phys. Rev. A 2005, 71, 041604-1–4.
* [24] Stamper-Kurn, D. M.; Chikkatur, A. P.; Görlitz, A.; Inouye, S.; Gupta, S.; Pritchard, D. E.; Ketterle, W. Phys. Rev. Lett. 1999, 83, 2876-2879.
* [25] Rabl, P.; Daley, A. J.; Fedichev, P. O.; Cirac, J. I.; Zoller, P. Phys. Rev. Lett. 2003, 91, 110403-1–4.
Figure 1: (a) The independent monolayer honeycomb lattice. (b) The added triangular lattice. (c) The bilayer honeycomb lattice built by putting (a) and (b) together. Figure 2: The light-atom interactions between fermionic
atoms and two laser beams. Figure 3: Energy bands of cold fermionic atoms in
the bilayer honeycomb lattice without an effective magnetic field. Figure 4:
Landau levels of cold fermionic atoms in the bilayer honeycomb lattice with an
effective magnetic field.
Figure 5: The dynamic structure factors $S(q,\omega)$ (not scaled) for cold
atoms in the bilayer honeycomb lattice in the cases (a) without an effective
magnetic field and (b) with an effective field. Here, $\omega$ is in units of
$9at^{2}q^{2}/2\hbar t_{\perp}$ with $q=1.0\sqrt{B/\hbar}$.
|
arxiv-papers
| 2009-05-19T07:23:15 |
2024-09-04T02:49:02.719059
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jing-Min Hou",
"submitter": "Jing-Min Hou",
"url": "https://arxiv.org/abs/0905.3035"
}
|
0905.3057
|
# A simple thermodynamical witness showing universality of macroscopic
entanglement
Vlatko Vedral v.vedral@quantuminfo.org The School of Physics and Astronomy,
University of Leeds, Leeds LS2 9JT, England and
Centre for Quantum Technologies, National University of Singapore, 3 Science
Drive 2, Singapore 117543
Department of Physics, National University of Singapore, 2 Science Drive 3,
Singapore 117542
###### Abstract
We show that if the ground state entanglement exceeds the total entropy of a
given system, then this system is in an entangled state. This is a universal
entanglement witness that applies to any physical system and yields a
temperature below which we are certain to find some entanglement. Our witness
is then applied to generic bosonic and fermionic many-body systems to derive the corresponding “critical” temperatures, which have very broad validity.
Entanglement has recently been extensively investigated and even detected in
various many-body systems Amico . Here we would like to make a claim that all
these different results in fact conform to a certain universal behaviour that
can be uncovered using very simple thermodynamical arguments. In order to do
so, we will have to choose one particular way of thinking about entanglement,
that has clear connections with thermodynamics. We will look at the trade-off
between the amount of entanglement in the ground state of a given system, as
quantified by the relative entropy of entanglement, and the mixedness of the
system at a certain temperature, as quantified by its total entropy.
Suppose that we are given a thermal state
$\rho_{T}=p|\Psi_{0}\rangle\langle\Psi_{0}|+(1-p)\rho_{rest}$, where
$|\Psi_{0}\rangle$ is the ground state, $p=\exp(-E_{0}/kT)/Z$ is the usual
Boltzmann weight and $\rho_{rest}$ involves all higher levels. A very simple
entanglement witness can now be derived by noting that if
$S(|\Psi_{0}\rangle||\rho_{T})<S(|\Psi_{0}\rangle||\rho_{sep})=E(|\Psi_{0}\rangle)$
(1)
where $S(\sigma||\rho)$ is the quantum relative entropy Umegaki , then the
state $\rho_{T}$ must be entangled (as it is closer to $|\Psi_{0}\rangle$ than
the closest separable state, which we denoted as $\rho_{sep}$). $E(\rho)$ is
the relative entropy of entanglement of $\rho$ Vedral1 ; Vedral2 . After a few
simple steps, the above inequality leads to another inequality, satisfied by
entangled thermal states $\rho_{T}$,
$-\ln p<E(|\Psi_{0}\rangle)$ (2)
which was used in Markham to investigate entanglement of some many-body
systems. Exploiting the fact that (see for example Landau )
$p=\frac{e^{-E_{0}/kT}}{Z}=e^{-(E_{0}+kT\ln Z)/kT}\geq
e^{-(U-F)/kT}=e^{-S/k}\;,$ (3)
where $F=-kT\ln Z$ is the free energy and $S=(U-F)/T$ is the entropy, we
finally obtain the inequality
$S(\rho_{T})<kE(|\Psi_{0}\rangle)$ (4)
implying that $\rho_{T}$ is entangled. We now have a very simple criterion
which can be expressed as follows: if the entropy of a thermal state is lower than the relative entropy of entanglement of its ground state (multiplied by the Boltzmann constant $k$), then this thermal state contains some form of entanglement. We
can also adopt the interpretation of relative entropy due to Donald Donald .
According to this, the relative entropy $S(\sigma||\rho_{T})$ is equal to the
free energy gain when we move from the equilibrium state $\rho_{T}$ to another
state $\sigma$. All our inequality then says is this: if moving from the
closest separable state to a pure entangled state requires more free energy
than moving from a thermal state to the same pure state, then this thermal
state must be entangled.
Here we are not really concerned with the type of entanglement present (e.g. bipartite or multipartite, distillable or bound), but only want to confirm that the state is not fully separable. It is also very clear that if the
ground state is not entangled, this witness will never detect any entanglement
(since entropy is always a non-negative quantity), even though the state may
in reality be entangled for some range of temperatures.
The entanglement witness based on entropy, though at first sight very simple,
is nevertheless rather powerful as it allows us to talk very generally about
temperatures below which we should start to detect entanglement in a very
generic solid state system. The behaviour of any system can be derived from
its Hamiltonian that specifies all interactions between subsystems. No matter
how complicated this Hamiltonian may be, we can always diagonalise it to the
simple form $H=\sum_{i=1}^{M}\omega_{i}d^{\dagger}_{i}d_{i}$, where
$\omega_{i}$s are its $M$ eigen-energies and $d_{i},d^{\dagger}_{i}$ are the
annihilation and creation operators for the $i$-th eigen-mode. We will keep
the discussion completely general by considering both fermionic and bosonic
commutation relations on $d_{i}$s, as well as completely distinguishable
particles (see for example Fetter ). The free energy is now easily computed to
be: $F=\pm kT\prod_{i}\ln(1\mp e^{\beta(\mu-\omega_{i})})$, where the
convention will always be that the upper (lower) sign corresponds to bosons
(fermions) and $\mu$ is the chemical potential. Entropy then simply follows
via the formula: $S=-\partial F/\partial T$, and is equal to
$S=-\sum_{i}\left[n_{i}\ln n_{i}\mp(1\pm n_{i})\ln(1\pm n_{i})\right]$ (5)
where $n_{i}=1/(\exp\beta(\omega_{i}-\mu)\mp 1)$. What matters now is the
scaling of entropy with $M$ (the number of modes) and $N$ (the average number
of particles). This scaling, in turn, depends on the spectrum of the system
$\omega_{i}$ as well as the temperature and the particle statistics. Since
entropy is lower at low temperatures, this is the regime where we expect the
witness to show entanglement. Let us look at the typical examples of ideal bosonic and fermionic gases. Non-ideal systems behave very similarly, with corrections that are unimportant for our purposes. At low $T$, the entropy scales as (see e.g. Rubia )
$S\sim N(\frac{kT}{\tilde{\omega}_{F,B}})^{p_{F,B}}$ (6)
where $F,B$ refer to fermions and bosons respectively, $N$ is the (average)
number of particles, $\tilde{\omega}$ is some characteristic frequency which
is a function of the spectrum (its form depends on the details of the system
and one particular example will be presented below) and $p\geq 1$. The fact
that this form is the same for more general systems is due to what is known as
the third law of thermodynamics (see Landsberg for example) stating that the
entropy has to go to zero with temperature. We now consider how entanglement
scales in the ground state for fermions and bosons Hayashi . If the number of
particles is comparable to the number of modes, this typically means that
$E\sim N$. The entropy witness then yields a very simple temperature below
which entanglement exists for both fermions and bosons,
$kT<\tilde{\omega}_{F,B}$ (7)
This kind of temperature has been obtained in a multitude of different systems, ranging from spin chains, through harmonic chains, to (continuous) quantum fields. Its universality is now justified by the very simple behaviour of entropy at low temperatures.
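As a rough numerical illustration of this criterion (a minimal sketch with an assumed toy spectrum, the assumption $E(|\Psi_{0}\rangle)\sim N$ discussed above, and units in which $k=1$), one can evaluate the entropy (5) for bosonic modes and locate the temperature below which inequality (4) is satisfied:

```python
import numpy as np

def mode_entropy(omegas, T, mu=0.0, bosons=True):
    """Entropy of non-interacting bosonic/fermionic modes, Eq. (5), in units of k."""
    beta = 1.0 / T
    sgn = -1.0 if bosons else +1.0
    n = 1.0 / (np.exp(beta * (omegas - mu)) + sgn)
    # bosons:   S = sum_i [ (1+n) ln(1+n) - n ln n ]
    # fermions: S = -sum_i [ n ln n + (1-n) ln(1-n) ]
    return float(np.sum(-n * np.log(n) - sgn * (1 - sgn * n) * np.log(1 - sgn * n)))

if __name__ == "__main__":
    N = 50
    omegas = np.linspace(0.1, 2.0, N)   # assumed toy spectrum, omega_tilde ~ 1
    E0 = float(N)                       # assume E(|Psi_0>) ~ N, as argued above
    for T in (0.1, 0.5, 1.0, 2.0):
        S = mode_entropy(omegas, T)
        print(f"T = {T}: S = {S:6.1f}   witness (4) detects entanglement: {S < E0}")
```

The crossover indeed occurs at a temperature of order $\tilde{\omega}\sim 1$ for this toy spectrum, consistent with Eq. (7).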
In the case of higher temperatures, $kT\geq\tilde{\omega}$, both bosonic and fermionic systems approximately obey the Maxwell-Boltzmann statistics of distinguishable particles (in other words, the cube of the thermal de Broglie wavelength of each particle is much smaller than the volume it occupies on average). We then obtain the following inequality: $N\ln T+N(1-\ln\tilde{\omega})<E\sim N$, where $\ln\tilde{\omega}=\sum\ln\omega_{i}/N$ (this gives us an intuition of how typical temperatures compare to the spectral frequencies). It is clear that this inequality will never be satisfied for the range of temperatures we are considering. Entanglement is thus not expected in an ideal classical gas!
We close with two remarks. The first one is that there is nothing special
about using the relative entropy in equation (1). We could have used any other
distance measure. The second remark is that similar methods can be used to
probe entanglement in quantum phase transitions Sachdev . These occur at low
temperatures (strictly at $T=0$) and are driven, not by temperature, but by
other external parameters, such as a uniform magnetic field. It is impossible
now to use the entropy of the state as a witness, since this quantity is
always zero at zero temperature. We can instead use the fact that if the energy of a given state $|\Psi\rangle$ exceeds, in absolute value, the highest expectation value attainable by any separable state, then the state $|\Psi\rangle$ must itself be entangled. This method has been exploited in many papers (see, for instance, Brukner ; Toth ; Anders ). Usually, however, the appearance of entanglement in the ground state of interacting systems is not surprising and is more common than not. Entanglement is also much easier to detect and quantify there, since we deal with pure states only. Consequently, rigorous results on the scaling of entanglement at $T=0$, combined with our entropy witness, could be applied to tell us at which temperatures we expect the relationship between entropy and area Cramer to fail.
The main motivation of this work was to show that there is a universal finite
temperature for many-body systems below which entanglement is guaranteed to
exist. We should no longer be surprised by the ubiquity of entanglement in the
macroscopic domain.
Acknowledgments: The author acknowledges financial support from the
Engineering and Physical Sciences Research Council, the Royal Society and the
Wolfson Trust in UK as well as the National Research Foundation and Ministry
of Education, in Singapore.
## References
* (1) L. Amico, R. Fazio, A. Osterloh and V. Vedral, Rev. Mod. Phys. 80, 1 (2008).
* (2) H. Umegaki, Kodai Math. Sem. Rep. 14, 59 (1962).
* (3) V. Vedral, M. B. Plenio, M. Rippin and P. L. Knight, Phys. Rev. Lett. 78, 2275 (1997).
* (4) V. Vedral and M. B. Plenio, Phys. Rev. A 57, 1619 (1998).
* (5) D. Markham et al., Eu. Phys. Lett. 81, 400006 (2008).
* (6) M. J. Donald, J. Stat. Phys. 49, 81 (1987).
* (7) L. Landau and E. Lifschitz, Statistical Physics Parts I and II, (Pergamon Press, Oxford).
* (8) M. Hayashi et al. Phys. Rev. A 77, 012104 (2008).
* (9) A. L. Fetter and J. D. Walecka, Quantum Theory of Many-Particle Systems, (Dover, New York, 2003).
* (10) S. Mafe, J. A. Manzanares and J. de la Rubia, Am. J. Phys. 68, 932 (2000).
* (11) P. T. Landsberg, Thermodynamics and Statistical mechanics (Dover, New York, 1990).
* (12) S. Sachdev, Quantum Phase Transitions, (Cambridge University Press, Cambridge, 2000).
* (13) C. Brukner and V. Vedral, Macroscopic thermodynamical witnesses of quantum entanglement, arXiv:quant-ph/0406040
* (14) G. Toth, Phys. Rev. A 71, 010301(R) (2005).
* (15) J. Anders et al. N. J. Phys. 8, 140 (2006).
* (16) M. Cramer, J. Eisert, M. B. Plenio, and J. Dreissig, Phys. Rev. A 73, 012309 (2006).
|
arxiv-papers
| 2009-05-19T10:27:32 |
2024-09-04T02:49:02.724864
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Vlatko Vedral",
"submitter": "W. Son",
"url": "https://arxiv.org/abs/0905.3057"
}
|
0905.3065
|
# On the quantum criticality in the ground and the thermal states of XX model
W. Son Centre for Quantum Technologies, National University of Singapore, 3
Science Drive 2, Singapore 117543 V. Vedral Centre for Quantum Technologies,
National University of Singapore, 3 Science Drive 2, Singapore 117543 The
School of Physics and Astronomy, University of Leeds, Leeds, LS2 9JT, United
Kingdom Department of Physics, National University of Singapore, 2 Science
Drive 3, Singapore 117542
###### Abstract
We compare the critical behavior of the ground state and the thermal state of
the XX model. We analyze the full energy spectrum and the eigenstates to
reconstruct the ground state and the thermally excited state. With these solutions, we discuss several physical properties of the states related to the quantum phase transition, in various limits, at zero temperature as well as in thermal equilibrium.
The conventional definition of a quantum phase transition (QPT) is based on an energy-level crossing of the ground state as an external parameter is varied Sachdev99 . The definition takes the ground state itself as the quantum phase, so that a QPT is a transition occurring at the point where the zero-temperature state switches from one state to another. Because of the singular structure of this energy transition, the QPT point can be detected through a non-analyticity of the free energy.
Within this definition, a difficulty arises if one tries to consider a finite-temperature QPT, because the QPT is only defined at $T=0$. The typical treatment of a finite-temperature phase transition is based on the non-analyticity of thermodynamic properties or of the characteristic wavelength of the system Chakravarty89 ; Sachdev97 . However, it is not straightforward to combine the singularity in the thermal state consistently with the notion of energy-level crossing at zero temperature.
In this article, we study the full solutions of the XX spin chain model, not only the energy eigenvalues but also the eigenstates in the original spin basis. With these solutions, we demonstrate how the sequential energy-level crossings at zero temperature lead to a continuous QPT in the infinite-spin limit, $N\rightarrow\infty$. In the XX spin model, the number of ground-state level crossings grows with the size of the chain. When $N\rightarrow\infty$, the crossing points become dense within a sharply defined critical region, giving rise to a continuous QPT. This is a typical example in which the ground state is no longer a pure state but a mixed state. At the same time, the ground-state energy becomes fully analytic at each crossing point.
The situation is comparable to a state at finite temperature, which is a mixture of all the eigenstates weighted by the Boltzmann distribution. We investigate the structure and the purity of the state of a thermally excited XX spin chain in order to compare it with the ground state of the infinite chain. In this case, the temperature acts as a control parameter that determines the degree of mixedness of the system. The thermodynamic properties of the thermal state remain analytic in this region. As the temperature is increased, the state moves into a thermal mixture which destroys the quantum correlations, i.e., the entanglement, in the system. Extensive discussions of the relation between QPT and entanglement in the XX model can also be found in Son08 ; Amico07 ; Hide07 .
## I XX Model and full solutions
We start our discussion by considering $N$ spin-$1/2$ particles on a line, coupled by nearest-neighbor XX interactions, with Hamiltonian
$H=-\Big{[}\sum_{i=1}^{N}\frac{J}{2}(\sigma_{i}^{x}\sigma_{i+1}^{x}+\sigma_{i}^{y}\sigma_{i+1}^{y})+B\sigma_{i}^{z}\Big{]}$
(1)
where we assume that the exchange interaction $J$ fixes the energy unit. The model with periodic boundary conditions was first solved by Katsura Katsura62 and has recently been discussed at finite size in Ref. dep . Here, we assume open boundaries, with $\sigma_{N+1}=0$. After Jordan-Wigner (JW) and Fourier
transformations,
$d_{k}=\sqrt{\frac{2}{N+1}}\sum_{l=1}^{N}\sin\left(\frac{\pi
kl}{N+1}\right)\,\bigotimes_{m=1}^{l-1}\sigma_{m}^{z}\sigma_{l}^{-},$ (2)
the Hamiltonian takes a diagonal form,
$H=\sum_{k=1}^{N}\Lambda_{k}d_{k}^{\dagger}d_{k}+NB$ where
$\Lambda_{k}=-2B+2\cos\left[(\pi k)/(N+1)\right]$. The $2^{N}$ eigenenergies
are
$\epsilon_{l}\equiv\sum_{k=1}^{N}\Lambda_{k}\langle\psi_{l}|d_{k}^{\dagger}d_{k}|\psi_{l}\rangle+NB$
and the corresponding eigenstates are
$|\psi_{l}\rangle=\prod_{k=1}^{N}(d^{\dagger}_{k})^{\alpha_{k}^{(l)}}\,|\Omega\rangle,$
(3)
where the binary numbers $\alpha_{k}^{(l)}\in\\{0,1\\}$ identify the eigenenergies as $\epsilon_{l}\equiv\sum_{k=1}^{N}\Lambda_{k}\alpha_{k}^{(l)}+NB$. The state
$|\Omega\rangle$ is the vacuum: $d_{k}|\Omega\rangle=0\,\forall k$. It is also
useful to know that $\langle\psi_{i}|\psi_{j}\rangle=\delta_{i,j}$. The energy
diagram of $\epsilon_{l}$ for $N=4$ as a function of $B$ is illustrated in
Fig.1(a).
(a) (b)
Figure 1: (a) The spectrum of $2^{4}$ energy levels for the $N=4$ XX spin chain. The energies vary as a function of the magnetic field $B$. Blue lines show the ground-state energy. Energy-level crossings occur at the critical points in the ground state as well as in the excited states. The ground-state entanglement jumps as the state switches from one entangled state to another. (b) The ground-state energy per spin against the magnetic field $B$ in the thermodynamic limit, plotted together with the large-chain case, $N=50$. The continuous energy-level crossings occur in the limit in the region $0<B<1$.
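As a cross-check of the level diagram in Fig. 1(a), the $2^{N}$ energies can also be obtained by brute-force diagonalization of Hamiltonian (1) in the spin basis (a minimal sketch, with $J=1$ and open boundaries):

```python
import numpy as np

def xx_chain_levels(N=4, J=1.0, B=0.5):
    """All 2^N energy levels of the open XX chain, Eq. (1), by direct
    diagonalization in the spin basis (cf. Fig. 1(a))."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def site_op(op, site):
        mats = [op if s == site else I2 for s in range(N)]
        full = mats[0]
        for m in mats[1:]:
            full = np.kron(full, m)
        return full

    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N - 1):        # open boundary: sigma_{N+1} = 0
        H -= 0.5 * J * (site_op(sx, i) @ site_op(sx, i + 1)
                        + site_op(sy, i) @ site_op(sy, i + 1))
    for i in range(N):
        H -= B * site_op(sz, i)   # transverse-field term
    return np.linalg.eigvalsh(H)

if __name__ == "__main__":
    for B in (0.2, 0.6, 1.2):
        E = xx_chain_levels(B=B)
        print(f"B = {B}: ground energy {E[0]: .4f}, {E.size} levels")
```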
## II Ground state of XX spin chain
In this section, we discuss the ground state of the XX model. The ground state is the lowest-energy state and is a function of the transverse magnetic field $B$. Letting $B_{k}=\cos[k\pi/(N+1)]$ and defining the $k$-th region, $B_{k+1}<B<B_{k}$, we find the ground state $|\psi_{g}^{k}\rangle=d_{k}d_{k-1}\cdot\cdot\cdot d_{1}|\psi_{g}^{0}\rangle=\prod_{l=k+1}^{N}d_{l}^{\dagger}|\Omega\rangle$ and its energy $\epsilon_{g}^{k}=-(N-2k)B-2\sum_{l=1}^{k}\cos\left(\frac{\pi l}{N+1}\right)$ (see Son08 for details). The index $k$ is an integer with $1\leq k<N$, and the $B_{k}$ are the points where the energy-level crossings occur. When $B<B_{N}$ or $B>B_{1}$, no further crossings occur and the ground state is simply the fully polarized spin state. The ground-state energy for $N=4$ is shown by the blue lines in Fig. 1 (a).
The ground state in the spin basis after $k$ level crossings can be obtained recursively as
$|\varphi_{g}^{k}\rangle=\prod_{k^{\prime}=1}^{k}\left[\sum_{l=1}^{N}S_{l}^{k^{\prime}}~{}\left(\prod_{m=1}^{l-1}\sigma^{z}_{m}\sigma_{l}^{-}\right)\right]|\uparrow\rangle^{\otimes
N}$ where $S_{l}^{k}\equiv\sqrt{2/(N+1)}\sin\left[(\pi kl)/(N+1)\right]$ and
the explicit form of the state is given as
$|\varphi_{g}^{k}\rangle=\left(\frac{2}{N+1}\right)^{\frac{k}{2}}\sum_{l_{1}<l_{2}<\cdot\cdot\cdot<l_{k}}C_{l_{1}l_{2}\cdot\cdot\cdot
l_{k}}|l_{1},l_{2},\cdots,l_{k}\rangle$ (4)
where $|l_{1},l_{2},\cdots,l_{k}\rangle$ is the state whose $l_{1}$-th, $l_{2}$-th, $\cdots$, $l_{k}$-th spins are flipped. The
amplitudes in the state are $C_{l_{1}l_{2}\cdot\cdot\cdot
l_{k}}=\sum_{P}(-1)^{P}\sin\left(\frac{\pi
P(1)l_{1}}{N+1}\right)\sin\left(\frac{\pi
P(2)l_{2}}{N+1}\right)\cdots\sin\left(\frac{\pi P(k)l_{k}}{N+1}\right)$. Here,
the sum extends over all permutations $P$ of the numbers from 1 to $k$. We point out that, at each crossing point (caused by the variation of $B$), the ground state jumps discontinuously from one symmetric subspace to another, orthogonal to the previous one. To see the states explicitly, we list the ground states for $N=4$ in the spin basis:
$\displaystyle|\varphi_{g}^{0}\rangle$ $\displaystyle=$
$\displaystyle|\uparrow,\uparrow,\uparrow,\uparrow\rangle,~{}~{}~{}~{}|\varphi_{g}^{1}\rangle=a_{1}^{-}|\downarrow,\uparrow,\uparrow,\uparrow\rangle+a_{1}^{+}|\uparrow,\downarrow,\uparrow,\uparrow\rangle+a_{1}^{+}|\uparrow,\uparrow,\downarrow,\uparrow\rangle+a_{1}^{-}|\uparrow,\uparrow,\uparrow,\downarrow\rangle$
$\displaystyle|\varphi_{g}^{2}\rangle$ $\displaystyle=$ $\displaystyle
a_{2}\left(|\downarrow,\downarrow,\uparrow,\uparrow\rangle+2|\uparrow,\downarrow,\downarrow,\uparrow\rangle+2|\downarrow,\uparrow,\uparrow,\downarrow\rangle+|\uparrow,\uparrow,\downarrow,\downarrow\rangle+\sqrt{5}|\downarrow,\uparrow,\downarrow,\uparrow\rangle+\sqrt{5}|\uparrow,\downarrow,\uparrow,\downarrow\rangle\right)$
(5) $\displaystyle|\varphi_{g}^{3}\rangle$ $\displaystyle=$ $\displaystyle
a_{3}^{-}|\uparrow,\downarrow,\downarrow,\downarrow\rangle+a_{3}^{+}|\downarrow,\uparrow,\downarrow,\downarrow\rangle+a_{3}^{+}|\downarrow,\downarrow,\uparrow,\downarrow\rangle+a_{3}^{-}|\downarrow,\downarrow,\downarrow,\uparrow\rangle,~{}~{}~{}~{}|\varphi_{g}^{4}\rangle=|\downarrow,\downarrow,\downarrow,\downarrow\rangle$
where $a_{1}^{\pm}=-a_{3}^{\pm}=\frac{1}{2}\sqrt{1\pm\frac{1}{\sqrt{5}}}$ and $a_{2}=-\frac{1}{2\sqrt{5}}$. From these states, it is clear that the superscript of $\varphi$ indicates the number of down spins. The ground state in the $k$-th region is composed of ${}_{N}C_{k}=N!/[k!(N-k)!]$ orthogonal spin-basis vectors, corresponding to the possible choices of $k$ flipped spins out of the total $N$ spins. The ground states of different regions lie in mutually orthogonal subspaces, $\langle\varphi_{g}^{k}|\varphi_{g}^{k^{\prime}}\rangle=\delta_{k,k^{\prime}}$.
It is found that the ground-state energy and the ground state itself become continuous in the thermodynamic limit, $N\rightarrow\infty$. In this limit, the sum in $\epsilon_{g}^{k}$ turns into a definite integral and the ground-state energy reads
$\displaystyle\lim_{N\rightarrow\infty}\frac{\epsilon_{g}(B)}{N}=\frac{2}{\pi}\left[B\left(\arccos
B-\frac{\pi}{2}\right)-\sqrt{1-B^{2}}\right].$ (6)
Within the critical region the ground-state energy is analytic everywhere except at $B=\pm 1$. From the finite-size analysis we observe that this critical line becomes a dense set of level crossings (see Fig. 1 (b)) and can therefore be regarded as a line of continuous QPT. At the crossing points $B_{k}$, the state is a mixed state, an equal mixture of the ground states of the neighboring regions, $|\varphi_{g}^{k}\rangle$ and $|\varphi_{g}^{k+1}\rangle$, i.e.
$\rho(B_{k})=\frac{1}{2}\left(|\varphi_{g}^{k}\rangle\langle\varphi_{g}^{k}|+|\varphi_{g}^{k+1}\rangle\langle\varphi_{g}^{k+1}|\right).$
(7)
Therefore, the state in the thermodynamic limit also varies continuously from one mixed state to another as a function of $B$. This is because the critical points $B_{k}$ become a continuous function, $B_{\omega}=\lim_{N\rightarrow\infty}\cos(k\pi/(N+1))=\cos(\pi\omega)$ with $0<\omega<1$. Moreover, the mixedness of the state in this limit remains constant for any $B$ in $-1<B<1$, with $\mbox{Tr}\rho(B)^{2}=1/2$. This value $1/2$ of the purity of the ground state in the limit can also be obtained from the limiting case of a thermal state, which will be discussed in the following section.
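As a numerical illustration (a minimal sketch using the finite-size expression for $\epsilon_{g}^{k}$ given above), the finite-$N$ ground-state energy per spin converges rapidly to the closed form (6):

```python
import numpy as np

def eps_g_per_spin(B, N):
    """Finite-N ground-state energy per spin: minimize eps_g^k of Sec. II over k."""
    l = np.arange(1, N + 1)
    csum = np.concatenate(([0.0], np.cumsum(np.cos(np.pi * l / (N + 1)))))
    k = np.arange(0, N + 1)
    return np.min(-(N - 2 * k) * B - 2 * csum) / N

def eps_g_limit(B):
    """Thermodynamic limit, Eq. (6), valid for |B| < 1."""
    return (2 / np.pi) * (B * (np.arccos(B) - np.pi / 2) - np.sqrt(1 - B**2))

if __name__ == "__main__":
    for B in (0.0, 0.3, 0.7):
        print(f"B = {B}: N=2000 gives {eps_g_per_spin(B, 2000):.6f}, "
              f"Eq.(6) gives {eps_g_limit(B):.6f}")
```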
## III XX Spin chain in thermal equilibrium
In this section, we investigate the state in thermal equilibrium and discuss the properties related to the quantum phase transition. At finite temperature, the state is a mixture of all the energy eigenstates weighted by Boltzmann statistics. In the diagonal basis $|\psi_{l}\rangle=\Pi_{k=1}^{N}(d^{\dagger}_{k})^{\alpha_{k}^{(l)}}\,|\Omega\rangle$, $\alpha_{k}^{(l)}\in\\{0,1\\}$, the state is given as
$\rho_{T}=\sum_{l=1}^{2^{N}}p_{l}|\psi_{l}\rangle\langle\psi_{l}|$ (8)
where $p_{l}=\exp(-\beta\epsilon_{l})/Z$ is the Boltzmann distribution of the
thermal state with the partition function $Z=e^{-\beta
NB}\prod_{k=1}^{N}\left(1+e^{-\beta\Lambda_{k}}\right)$. In terms of the spin basis, the thermal state can further be decomposed into states belonging to symmetric subspaces as,
$\rho_{T}=\sum_{m=0}^{N}\sum_{r=1}^{{}_{N}C_{m}}q_{r}^{m}|\varphi_{r}^{m}\rangle\langle\varphi_{r}^{m}|.$
(9)
The superscript $m$ labels a symmetric subspace in which the vector components of the state contain $m$ flipped spins, and the subscript $r$ identifies the ${{}_{N}C_{m}}=N!/[m!(N-m)!]$ orthogonal vectors within that subspace. Thus, the states satisfy the orthonormality
condition,
$\langle\varphi_{r}^{m}|\varphi_{r^{\prime}}^{m^{\prime}}\rangle=\delta_{r^{\prime},r}\delta_{m^{\prime},m}$.
The state $|\varphi_{r}^{m}\rangle$ can be obtained from the $l$-th state
$|\psi_{l}\rangle$ using the transformation in (2),
$|\varphi_{r}^{m}\rangle=\prod_{k=1}^{N}\left[\sum_{j=1}^{N}S_{j}^{k}~{}\left(\prod_{i=1}^{j-1}\sigma^{z}_{i}\sigma_{j}^{-}\right)\right]^{\alpha_{k}^{(l)}}|\uparrow\rangle^{\otimes
N}$, where $S_{j}^{k}\equiv\sqrt{2/(N+1)}\sin\left[(\pi jk)/(N+1)\right]$ and
$\alpha_{k}^{(l)}\in\\{0,1\\}$. The values of the label $l$ for the binary
vectors $\vec{\alpha}_{l}=\sum_{k}\alpha_{k}^{(l)}\hat{v}_{k}$ are determined
by $r$ and $m$ through an index match $l=r+\sum_{s=0}^{m-1}{{}_{N}C_{s}}$
under the constraints $\sum_{k=1}^{N}\alpha_{k}^{(r)}=m$. (It starts from
$l=1$ when $(r,m)=(1,0)$.) Similarly, the probabilities of the Boltzmann
distribution are given as $p_{l}\equiv
q_{r}^{m}=\exp(-\beta\epsilon_{r}^{m})/Z$. The lowest-energy state for a fixed $m$ becomes the ground state in a given region of $B$, i.e. $|\varphi_{1}^{m}\rangle\equiv|\varphi_{g}^{m}\rangle$. Furthermore, the corresponding energy eigenvalues, labelled by the subspace indices $m$ and $r$, are
$\epsilon_{l}\equiv\epsilon_{r}^{m}=-(N-2m)B-2\sum_{k=1}^{N}\cos[\pi
k/(N+1)]\alpha_{k}^{(r)}.$ (10)
This shows that the slope of the energy versus $B$ is the same for all states in a given symmetric subspace (fixed $m$), which proves that energy-level crossings of excited states occur only between states in different symmetric subspaces. The level crossings of the excited states for $N=4$ spins can also be seen in Fig. 1 (a).
(a) (b)
Figure 2: Distributions of the thermal-state populations for the $N=2$ XX spin chain at (a) $kT=0.1$ and (b) $kT=1$. The states $|\psi^{-}\rangle$ and $|\psi^{+}\rangle$ are the singlet and the triplet. The critical behavior disappears as the system goes into thermal equilibrium.
As a special case, we inspect the density matrix of the thermally excited state for $N=2$, which takes the form
$\rho_{T}^{N=2}=p_{1}|\uparrow,\uparrow\rangle\langle\uparrow,\uparrow|+p_{2}|\psi^{-}\rangle\langle\psi^{-}|+p_{3}|\psi^{+}\rangle\langle\psi^{+}|+p_{4}|\downarrow,\downarrow\rangle\langle\downarrow,\downarrow|$
(11)
where
$|\psi^{\pm}\rangle=\frac{1}{\sqrt{2}}\left(|\uparrow,\downarrow\rangle\pm|\downarrow,\uparrow\rangle\right)$.
The state is expanded in three different symmetric subspaces, with no spin flip ($m=0$), one spin flip ($m=1$), and two spin flips ($m=2$). The subspaces contain ${}_{2}C_{0}$, ${}_{2}C_{1}$ and ${}_{2}C_{2}$ orthogonal states, $\\{|\uparrow,\uparrow\rangle\\}$, $\\{|\psi^{-}\rangle,|\psi^{+}\rangle\\}$ and $\\{|\downarrow,\downarrow\rangle\\}$, respectively. We plot the probability distribution of the states $p_{l}$ in Fig.2. One finds that the distribution and the partition function become analytic for any $B$ and $T$ as soon as $T$ is away from zero. Investigating entanglement in the system, one finds that the state becomes separable when $4p_{1}p_{4}>(p_{2}-p_{3})^{2}$ (from the negativity Peres , the negative eigenvalue of the partially transposed state), leading to a critical temperature condition $kT>1.13459$. Interestingly, the separability condition for the two-qubit thermal state is independent of the strength of the magnetic field $B$ except when $T=0$. Thus, the singular behavior of entanglement with respect to $B$ is washed out as the temperature is increased. This is also seen in the purity of the two-qubit system in Fig. 3.
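The critical temperature quoted above can be verified numerically (a minimal sketch; here $J=1$ sets the energy unit and the energies of $|\uparrow,\uparrow\rangle$, $|\psi^{-}\rangle$, $|\psi^{+}\rangle$, $|\downarrow,\downarrow\rangle$ are taken as $(-2B,+1,-1,+2B)$; the separability condition is symmetric in $p_{2}$ and $p_{3}$, so the threshold does not depend on this assignment):

```python
import numpy as np

def separable(T, B):
    """Separability test 4 p1 p4 > (p2 - p3)^2 for the N=2 thermal state (11)."""
    # assumed energies of |uu>, |psi->, |psi+>, |dd> in units of J
    p = np.exp(-np.array([-2 * B, 1.0, -1.0, 2 * B]) / T)
    p /= p.sum()
    return 4 * p[0] * p[3] > (p[1] - p[2]) ** 2

if __name__ == "__main__":
    print("kT_c =", 1.0 / np.arcsinh(1.0))       # ~ 1.13459, independent of B
    for T in (1.0, 1.1, 1.2, 2.0):
        print(T, [separable(T, B) for B in (0.0, 0.5, 2.0)])
```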
(a) (b)
Figure 3: (a) Purity of the two-qubit system as a function of magnetic field $B$ and temperature $T$. (b) Derivative of the purity as a function of $B$ at different temperatures, $T=0.01$ (blue), $T=0.3$ (pink), $T=0.8$ (yellow) and $T=2$ (green). In the low-temperature region, the purity shows peaks near the critical points $B=\pm 1/2$; these are the points where the derivative of the purity with respect to $B$ becomes singular at $T=0$. In the high-temperature region, the purity becomes a monotonic convex function of $B$ and its derivative approaches a linear function.
With the density matrix of the thermal state (8), we investigate a physical
property which may be directly related to the critical behavior of the system.
The mixedness of the thermal state for any $N$ is
$\mbox{Tr}\rho_{T}^{2}=\sum_{l}p_{l}^{2}=\prod_{k=1}^{N}\Big{[}1-(1+\cosh\beta\Lambda_{k})^{-1}\Big{]}.$
(12)
This characterizes how far the state is from a pure state. The purity is plotted in Fig.3 for a small number of qubits ($N=2$) and in Fig.4 for a large number of qubits ($N=10$). In the $T\rightarrow\infty$ limit ($\beta\rightarrow 0$), the purity becomes $1/2^{N}$. Thus, in the high-temperature region and the thermodynamic limit, $N\rightarrow\infty$, the purity approaches zero and the state is a complete mixture of infinitely many orthogonal states. In the opposite extreme, $T\rightarrow 0$, the purity becomes either $1$ or $1/2$ depending on the value of $B$: it is $1/2$ when $B$ equals one of the $\cos[k\pi/(N+1)]$ and 1 otherwise. When $N\rightarrow\infty$, the purity becomes $1/2$, i.e. $\lim_{N\rightarrow\infty}\lim_{T\rightarrow 0}\mbox{Tr}\rho_{T}^{2}=1/2$ in $-1<B<1$, because in this limit, for every $B$ in the interval, there exists at least one $k$ with $\Lambda_{k}=0$. In the region $|B|>1$, the purity is 1. This proves that the ground state of the infinite XX chain in the region $|B|<1$ is the equal mixture of neighboring states as in (7). Furthermore, it is straightforward to see that entanglement disappears in the region $|B|>1$ and at high temperature. The separability of the infinite XX chain, analyzed with an entanglement witness, has been fully treated in Hide07 .
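A minimal numerical sketch of Eq. (12) (with $J=1$ as the energy unit) reproduces these limiting values:

```python
import numpy as np

def purity(B, T, N=10):
    """Purity Tr(rho_T^2) of the thermal XX chain, Eq. (12)."""
    k = np.arange(1, N + 1)
    Lam = -2 * B + 2 * np.cos(np.pi * k / (N + 1))
    return float(np.prod(1.0 - 1.0 / (1.0 + np.cosh(Lam / T))))

if __name__ == "__main__":
    # Low T: purity -> 1 away from the crossing points B_k and -> 1/2 at a
    # crossing point; high T: purity -> 1/2^N (complete mixture).
    N = 10
    B_c = np.cos(4 * np.pi / (N + 1))     # one of the crossing points B_k
    for B, T in [(1.5, 0.02), (B_c, 0.02), (0.5, 0.2), (0.5, 50.0)]:
        print(f"B = {B:.3f}, T = {T}: Tr rho^2 = {purity(B, T, N):.4f}")
```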
Our analysis of the XX spin chain has shown that the fundamental difference between the thermal state and the ground state of the infinite chain is only the degree of mixedness. In both cases, the changes of state cannot be detected through a singularity, because the state changes continuously. In that sense, the energy-level crossing is a particular mode of ground-state change that occurs when the state is pure. In general, the change of state can be detected by the change of entanglement, up to local unitaries, which is destroyed by the mixedness of the states. In fact, entanglement identifies the true quantum phase (or state) of a system, since it is a property that can exist only in quantum systems, whether pure or mixed Werner89 . Our approach is general enough to be used in discussions of the QPT and related properties of other many-particle systems.
Figure 4: Plot of the purity as a function of $B$ and $T$ when $N=10$. The
state is in a pure state when $\mbox{Tr}\rho^{2}=1$ and is in a completely
mixed state when $\mbox{Tr}\rho^{2}=1/2^{10}$. The purity is singular at the critical points of $B$, where the state is a mixture of two orthogonal symmetric states, and it decreases as $T$ is increased.
Acknowledgements. This work is supported by the National Research Foundation & Ministry of Education, Singapore.
## References
* (1) S. Sachdev, Quantum Phase Transitions, Cambridge University Press (1999).
* (2) S. Chakravarty, B. I. Halperin and D. R. Nelson, Phys. Rev. B39, 2344(1989).
* (3) S. Sachdev and A. P. Young, Phys. Rev. Lett. 78, 2220 (1997).
* (4) L. Amico, R. Fazio, A. Osterloh and V. Vedral, Rev. Mod. Phys. 80, 517 (2008).
* (5) W. Son, L. Amico, F. Plastina and V. Vedral, arXiv:0807.1602 (2008).
* (6) S. Katsura, Phys. Rev. 127, 1508 (1962).
* (7) A. De Pasquale et al., Eur. Phys. J. Special Topics 160, 127 (2008), see also arxiv: 0801.1394 (2008).
* (8) C. Brukner, V. Vedral, quant-ph/0406040 (2004); J. Hide, W. Son, I. Lawrie and V. Vedral, Phys. Rev. A76, 022319 (2007).
* (9) A. Peres, Phys. Rev. Lett. 77, 1413 (1996); M. Horodecki, P. Horodecki and R. Horodecki, Phys. Rev. A223, 1 (1996).
* (10) R. F. Werner, Phys. Rev. A40, 4277 (1989).
|
arxiv-papers
| 2009-05-19T11:20:11 |
2024-09-04T02:49:02.729454
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Wonmin Son and Vlatko Vedral",
"submitter": "W. Son",
"url": "https://arxiv.org/abs/0905.3065"
}
|
0905.3118
|
JLAB-TH-09-989
# What can we learn from the breaking of the Wandzura–Wilczek relation?
(Talk given by Alberto Accardi at “Spin structure at long distances”, Jefferson Lab, March 12-13, 2009.)
Alberto Accardi (a,b), Alessandro Bacchetta (b), Marc Schlegel (b)
(a) Hampton University, Hampton, VA 23668, USA
(b) Jefferson Lab, Newport News, VA 23606, USA
###### Abstract
We review the study of the Wandzura–Wilczek relation for the structure function $g_{2}$, with particular attention to the connection with the framework of Transverse Momentum Dependent factorization. We emphasize that the relation is broken by two distinct twist-3 terms. In the light of these findings, we clarify what can be deduced from the available experimental data on $g_{2}$, which indicate a breaking of order 20–40%, and how the twist-3 terms can be measured individually.
###### Keywords:
polarized DIS, higher twist, transverse polarization
###### PACS:
12.38.Bx, 13.60.Hb
At large virtuality $Q^{2}$, the lepton-nucleon Deep Inelastic Scattering
(DIS) cross section scales with $x=Q^{2}/(2M\nu)$ modulo logarithmic
corrections, where $M$ is the target’s mass and $\nu$ the virtual photon
energy. At low $Q^{2}$, power-suppressed contributions become important, e.g., target mass corrections of $O(M^{2}/Q^{2})$ and jet mass corrections of $O(m_{j}^{2}/Q^{2})$, with $m_{j}$ the invariant mass of the current jet Accardi and Qiu (2008); Accardi and Melnitchouk (2008), and higher-twist (HT) corrections of $O(\Lambda_{QCD}^{2}/Q^{2})$ related to quark-gluon correlations inside the nucleon Ellis et al. (1983). There are many reasons why we need to identify and measure higher-twist terms in experimental data, for example, (i) to verify quark-hadron duality Melnitchouk et al. (2005), (ii) to measure twist-2 Parton Distribution Functions (PDF) at large fractional momentum $x$ and low $Q^{2}$, e.g., the $d/u$ and $\Delta d/d$ quark ratios, sensitive to the nonperturbative structure of the nucleon Farrar and Jackson (1975); Brodsky et al. (1995); Isgur (1999), (iii) to measure multiparton correlations, important to understand the nucleon structure beyond the PDFs Burkardt:2009 , (iv) to determine the perturbative QCD evolution of the $g_{2}$ polarized structure function, among others Belitsky (1997), and (v) to calculate the high-$k_{T}$ tails of transverse momentum dependent (TMD) parton distribution functions Ji et al. (2006).
The inclusive DIS cross section is determined by the hadronic tensor
$W^{\mu\nu}$, defined as the imaginary part of the forward virtual photon
Compton scattering amplitude. $W^{\mu\nu}$ can be decomposed into two unpolarized structure functions, $F_{1,2}$, which we do not discuss here, and two polarized structure functions, $g_{1,2}$. Its antisymmetric part reads
$\displaystyle W^{\mu\nu}(P,q)=\frac{1}{P\cdot
q}\varepsilon^{\mu\nu\rho\sigma}q_{\rho}\Big{[}S_{\sigma}g_{1}(x,Q^{2})+\Big{(}S_{\sigma}-\frac{S\cdot
q}{P\cdot q}p_{\sigma}\Big{)}g_{2}(x,Q^{2})\Big{]}\ ,$ (1)
where $P,S$ are the target momentum and spin. Among the structure functions,
$g_{2}$ is unique because it is the only one with twist-3 contributions that
can be measured in inclusive DIS. Furthermore, its higher-twist contribution
can be isolated thanks to the Wandzura-Wilczek (WW) relation, first obtained
in the Operator Product Expansion (OPE) formalism Wandzura and Wilczek (1977):
$\displaystyle g_{2}(x,Q^{2})=g_{2}^{WW}(x,Q^{2})+\Delta(x,Q^{2})\ .$ (2)
Here $g_{2}^{WW}$ is determined by the leading twist (LT) part of $g_{1}$,
which is rather well known experimentally:
$\displaystyle
g_{2}^{WW}(x,Q^{2})=-g_{1}^{LT}(x,Q^{2})+\int_{x}^{1}\frac{dy}{y}g_{1}^{LT}(y,Q^{2})\
.$ (3)
Strictly speaking, the WW relation is the LT part of (2). The breaking term
$\Delta$ is a pure HT term, meaning that its moments are matrix elements of
local operators of twist-3 or higher, with “twist” defined as dimension minus
spin of the local operator Jaffe (1990); Jaffe and Ji (1991). In this talk we
will limit our analysis to twist-3 operators, and will drop the dependence on
$Q^{2}$ for ease of notation. Details can be found in Ref. Accardi et al.
(2009).
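As an illustration of how Eq. (3) is used in practice (a minimal numerical sketch; the parametrization of $g_{1}^{LT}$ below is an arbitrary toy form, not a fit to data):

```python
import numpy as np
from scipy.integrate import quad

def g1_LT(x, a=0.7, b=3.0):
    """Toy leading-twist g1(x) ~ x^a (1-x)^b (illustrative only)."""
    return x**a * (1.0 - x)**b

def g2_WW(x):
    """Wandzura-Wilczek term, Eq. (3): -g1(x) + int_x^1 dy/y g1(y)."""
    integral, _ = quad(lambda y: g1_LT(y) / y, x, 1.0)
    return -g1_LT(x) + integral

if __name__ == "__main__":
    for x in (0.1, 0.3, 0.5, 0.7):
        print(f"x = {x}: g1 = {g1_LT(x):.4f}   g2_WW = {g2_WW(x):.4f}")
```

Any measured deviation of $g_{2}$ from this $g_{2}^{WW}$ then quantifies the breaking term $\Delta$ of Eq. (2).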
## 1 Parton distributions in perturbative QCD
In perturbative QCD the structure functions can be expressed as a convolution of a perturbatively calculable coefficient with a number of nonperturbative LT parton distributions and HT parton correlations. In particular, the WW relation can be obtained in the framework of collinear factorization Belitsky (1997) or in transverse momentum dependent factorization, as we will shortly describe.
Let us define the quark-quark correlator
$\displaystyle\Phi^{a}_{ij}(x,\vec{k}_{T})=\int\frac{d\xi^{-}d^{2}\xi_{T}}{(2\pi)^{3}}\,e^{ik\cdot\xi}\langle
P,S\,|\,\bar{\psi}_{j}^{a}(0)\,W(0,\xi|n_{-})\,\psi_{i}^{a}(\xi)\,|\,P,S\rangle\Big{|}_{\xi^{+}=0}$
(4)
where $i,j$ are Dirac indices, $a$ is the quark flavor index, $k$ its
4-momentum, $x=k\cdot n_{-}/P\cdot n_{-}$ its fractional momentum and
$\vec{k}_{T}$ its transverse momentum relative to the parent nucleon. The
“plus” and “minus” components of a 4-vector are defined as $a^{\pm}=a\cdot
n_{\mp}$ in terms of two orthogonal light-cone vectors $n_{+}^{2}=n_{-}^{2}=0$
such that $n_{-}\cdot n_{+}=1$, and $n_{+}^{\mu}$ is proportional to $P^{\mu}$
up to mass corrections. $W$ is a Wilson line (gauge link) whose precise form
depends on the process. The direction of the Wilson line is determined by an
additional 4-vector beside $P,S$, which in tree-level analyses such as we
pursue here is identified with the light-cone vector $n_{-}$. In the light-
cone gauge $n_{-}\cdot A=0$ the Wilson line is identically equal to 1 and
$\displaystyle\
\Phi^{a}_{ij}(x,\vec{k}_{T})\stackrel{{\scriptstyle\text{LC}}}{{=}}\int\frac{d\xi^{-}d^{2}\xi_{T}}{(2\pi)^{3}}\,e^{ik\cdot\xi}\langle
P,S\,|\,\bar{\psi}_{j}^{a}(0)\psi_{i}^{a}(\xi)\,|\,P,S\rangle\Big{|}_{\xi^{+}=0}\
.$ (5)
Nonetheless, the dependence on $n_{-}$ appears explicitly in the gauge field
propagators and cannot be in general neglected.
For any Dirac matrix $\Gamma$ we define the projection
$\Phi^{a[\Gamma]}=\text{Tr}[\Gamma\Phi^{a}]/2$. The relevant TMDs are defined
as follows:
$\displaystyle\Phi^{a[\gamma^{+}\gamma_{5}]}(x,\vec{k}_{T})$
$\displaystyle=S_{L}\,g_{1}^{a}(x,\vec{k}_{T}^{2})+\frac{\vec{k}_{T}\cdot\vec{S}_{T}}{M}\,g_{1T}^{a}(x,\vec{k}_{T}^{2})$
$\displaystyle\Phi^{a[\gamma^{i}\gamma_{5}]}(x,\vec{k}_{T})$
$\displaystyle=\frac{M}{P^{+}}S_{T}^{i}\,g_{T}^{a}(x,\vec{k}_{T}^{2})+\ldots$
Inclusive DIS is determined by collinear parton distribution functions
(PDFs), which are obtained by transverse-momentum integration of the TMDs:
$g_{\sharp}(x)=\int d^{2}k_{T}\,g_{\sharp}(x,\vec{k}_{T})$ and
$g_{\sharp}^{(1)}(x)=\int
d^{2}k_{T}\,\frac{\vec{k}_{T}^{2}}{2M^{2}}\,g_{\sharp}(x,\vec{k}_{T})$, with
$\sharp$ indicating any of the TMDs defined above.
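As a minimal numerical sketch of these definitions, one can assume a factorized TMD with a Gaussian transverse-momentum dependence (a common model assumption introduced here only for illustration; the shape and width below are hypothetical) and carry out the $k_{T}$ integrations explicitly:

```python
import numpy as np

M = 0.939          # nucleon mass in GeV
avg_kT2 = 0.25     # assumed Gaussian width <kT^2> in GeV^2

def tmd(x, kT2):
    """Toy TMD g(x, kT^2) = g(x) * exp(-kT^2/<kT^2>) / (pi <kT^2>)."""
    return 0.3 * x * (1 - x)**4 * np.exp(-kT2 / avg_kT2) / (np.pi * avg_kT2)

kT2 = np.linspace(0.0, 25.0, 5000)   # kT^2 grid in GeV^2
x = 0.3

# g(x)     = int d^2kT g(x,kT)             = pi * int dkT^2 g(x,kT^2)
# g^(1)(x) = int d^2kT kT^2/(2M^2) g(x,kT) = pi * int dkT^2 kT^2/(2M^2) g(x,kT^2)
g_coll = np.trapz(np.pi * tmd(x, kT2), kT2)
g_mom1 = np.trapz(np.pi * kT2 / (2 * M**2) * tmd(x, kT2), kT2)
print(g_coll, g_mom1)   # for this Gaussian model, g_mom1 = <kT^2>/(2 M^2) * g_coll
```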
## 2 Equations of motion and Lorentz invariance
The Dirac equation of motion for the quarks and the Lorentz invariance of
the theory imply the following two relations between twist-2 and pure twist-3
functions:
(EOM) $\displaystyle g_{1T}^{a(1)}(x)=xg_{T}^{a}(x)-x\tilde{g}^{a}_{T}(x)+O(m/M)$ (6)
(LIR) $\displaystyle g_{T}^{a}(x)=g_{1}^{a}(x)+\frac{d}{dx}\,g_{1T}^{a(1)}(x)+\hat{g}_{T}^{a}(x)$ (7)
where for light quarks we can neglect the term proportional to the quark mass
$m$ compared to a typical hadronic scale $M$. $\tilde{g}$ and $\hat{g}$ are
pure twist-3 parton correlation functions (PCF) defined in terms of the quark-
gluon-quark correlator, which in the light-cone gauge reads
$\begin{split}&i\Phi_{Fij}^{\alpha}(x,x^{\prime})\stackrel{{\scriptstyle\text{LC}}}{{=}}\int\frac{d\xi^{-}d\eta^{-}}{(2\pi)^{2}}\,e^{ik\cdot\xi}\,e^{i(k^{\prime}-k)\cdot\eta}\langle
P|\bar{\psi}_{j}(0)\,ig\,\partial^{+}_{\eta}A_{T}^{\alpha}(\eta)\,\psi_{i}(\xi)|P\rangle\Big{|}_{\begin{subarray}{c}\xi^{+}=\xi_{T}=0\\\
\eta^{+}=\eta_{T}=0\end{subarray}}\ ,\end{split}$ (8)
where $\alpha$ is a transverse index, $x^{\prime}=\tfrac{k^{\prime}\cdot
n_{-}}{P\cdot n_{-}}$ and $F$ is the QCD field strength tensor. The Lorentz
decomposition of $\Phi_{F}$ defines the relevant PCFs Boer et al. (1998);
Kanazawa and Koike (2000),
$\begin{split}i\Phi_{F}^{\rho}(x,x^{\prime})&=\frac{M}{4}\biggl{[}G_{F}(x,x^{\prime})i\epsilon_{T}^{\rho\alpha}S_{T\alpha}+\tilde{G}_{F}(x,x^{\prime})S_{T}^{\rho}\gamma_{5}+\ldots\biggr{]}\slashed{n}_{+}\ ,\end{split}$ (9)
where hermiticity and parity constrain
$G_{F}(x,x^{\prime})=G_{F}(x^{\prime},x)$ and
$\tilde{G}_{F}(x,x^{\prime})=-\tilde{G}_{F}(x^{\prime},x)$. The pure twist-3
functions in Eqs. (6)-(7) are particular projections over $x^{\prime}$ of
$G_{F}(x,x^{\prime})$ and $\tilde{G}_{F}(x,x^{\prime})$ (PV denotes the
principal value):
$\displaystyle x\tilde{g}_{T}^{a}(x)$ $\displaystyle=\text{PV}\int
dx^{\prime}\,\frac{G_{F}(x,x^{\prime})+\tilde{G}_{F}(x,x^{\prime})}{2(x^{\prime}-x)}.$
(10) $\displaystyle\hat{g}_{T}^{a}(x)$ $\displaystyle=\text{PV}\int
dx^{\prime}\,\frac{\tilde{G}_{F}(x,x^{\prime})}{(x-x^{\prime})^{2}}\ ,$
(11)
and as such are sensitive to different parts of the quark-gluon-quark
correlator. It is very important to find several such quantities, because
physically it is only possible to measure $x$ but the full dependence on
$(x,x^{\prime})$ is needed, e.g., to determine the QCD evolution of $g_{2}$ or
to compute the high-$k_{T}$ tails of TMDs. Note also that since the integrand
in Eq. (11) is antisymmetric under $x\leftrightarrow x^{\prime}$, we obtain the
nontrivial property
$\int_{0}^{1}dx\,\hat{g}_{T}^{a}(x)=0\ .$ (12)
## 3 The WW relation
Eliminating $g_{1T}^{a(1)}$ from (6)-(7) one can derive the Wandzura-Wilczek
relation (2) for the structure function
$g_{2}=-g_{1}+\frac{1}{2}\sum_{a}e_{a}^{2}\,g_{T}^{a}$, and explicitly write down its
breaking term $\Delta=g_{2}-g_{2}^{\rm{WW}}$:
$\begin{split}g_{2}(x)-g_{2}^{\rm{WW}}(x)=\frac{1}{2}\,\sum_{a}e_{a}^{2}\biggl{(}\tilde{g}_{T}^{a}(x)-\int_{x}^{1}\frac{dy}{y}\tilde{g}_{T}^{a}(y)+\int_{x}^{1}\frac{dy}{y}\hat{g}_{T}^{a}(y)\biggr{)}\ .\end{split}$ (13)
Note that $g_{2}$ written in this form explicitly satisfies the Burkhardt–Cottingham
sum rule $\int_{0}^{1}dx\,g_{2}(x)=0$, which is not in general guaranteed in the OPE Jaffe
(1990); Jaffe and Ji (1991).
A natural question is: how much is the WW relation broken? Model calculations
have repeatedly been used to argue that the pure twist-3 terms are not
necessarily small Jaffe and Ji (1991); Harindranath and Zhang (1997). However,
in the recent past, since the LIR-breaking $\hat{g}_{T}$ term was not
considered in Eq. (13) and the quark-mass term with $h_{1}$ was neglected, the
breaking of the WW relation was interpreted as a direct measurement of the
pure twist-3 term $\tilde{g}_{T}$. Therefore, the presumed experimental
validity of the WW relation, which we are presently going to challenge, was
taken as evidence that $\tilde{g}_{T}$ is small. This conclusion was then
typically generalized to the assumption that all pure twist-3 terms are small.
Our present analysis shows instead that, precisely due to the presence of
$\hat{g}_{T}$, the measurement of the breaking of the WW relation no longer
offers the possibility of measuring a single pure twist-3 matrix
element, nor of generically inferring its size. On the theory side, the quark-
target model of Refs. Harindranath and Zhang (1997); Kundu and Metz (2002) can
be used to determine both $\tilde{g}_{T}$ and $\hat{g}_{T}$, which are
comparable in size to the twist-2 functions. On the experimental
side, we used data on polarized DIS on proton and neutron targets to fit the
WW breaking term $\Delta(x)$ defined as the difference of the experimental
data and $g_{2}^{\rm{WW}}$:
$\displaystyle\Delta(x)=g_{2}^{\rm{ex}}(x,Q^{2})-g_{2}^{\rm{WW}}(x,Q^{2})\ .$
(14)
$g_{2}^{\text{WW}}$ was determined using the LSS06 leading twist $g_{1}$
parametrization Leader et al. (2007), and $\Delta$ fitted to a functional form
allowing for a change in sign and satisfying the Burkhardt–Cottingham sum
rule. The result is presented in Fig. 1, and Table 1, where the deviation from
the WW relation is quantified for a given $[x^{\rm{min}},x^{\rm{max}}]$
interval by
$\displaystyle
r^{2}=\frac{\int_{y^{\rm{min}}}^{y^{\rm{max}}}dy\,x\Delta_{\rm{th}}^{2}(x)}{\int_{y^{\rm{min}}}^{y^{\rm{max}}}dy\,xg_{2}^{2}(x)}\
,$ (15)
with $y=\log(x)$. The value of $r$ is a good approximation to the relative
magnitude of $\Delta$ and $g_{2}$, which are sign-changing functions. For the
proton, we considered three intervals: the whole measured $x$ range, [0.02,1];
the low-$x$ region, [0.02,0.15]; the large-$x$ region, [0.15,1]. For the
neutron, due to the limited statistical significance of the low-$x$ data, we
limit ourselves to quoting the value of $r$ for the large-$x$ region,
[0.15,1].
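As a consistency check, the functional form used for $\Delta_{\rm{th}}$ in model (II) (cf. Table 1) integrates to zero for any $\beta>-1$, so the Burkhardt–Cottingham sum rule is built in:
$\displaystyle\int_{0}^{1}dx\,(1-x)^{\beta}\bigl[(\beta+2)x-1\bigr]=(\beta+2)\,B(2,\beta+1)-B(1,\beta+1)=\frac{\beta+2}{(\beta+1)(\beta+2)}-\frac{1}{\beta+1}=0\ ,$
where $B$ denotes Euler's Beta function.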
In summary, we have found that the experimental data are compatible with a
substantial breaking of the WW relation in the 15-40% range.
Figure 1: Top panels: the experimental proton and neutron $g_{2}$ structure functions compared to $g_{2}^{\rm{WW}}$. The crosses are $g_{2}^{\rm{WW}}$ computed at the experimental kinematics. The lines are $g_{2}^{\rm{WW}}$ computed at the average $Q^{2}$ of the E155x experiment, using the LSS2006 fits of $g_{1}$ with (solid) and without (dashed) the HT contribution obtained in the fit. Data points for the proton target Abe et al. (1998); Anthony et al. (2003) have been slightly shifted in $x$ for clarity. For the neutron, only the high-precision data from Anthony et al. (2003); Zheng et al. (2004); Kramer et al. (2005) have been included. Bottom panels: the WW-breaking term $\Delta_{\text{th}}$ for models (I) and (II) compared to the higher-twist contribution to $g_{1}$. See text for further details.

 | proton | | $\chi^{2}$/d.o.f. | $r_{\text{tot}}$ | $r_{\text{low}}$ | $r_{\text{hi}}$
---|---|---|---|---|---|---
(I) | $\Delta_{\rm{th}}$ | $=0$ | 1.22 | | |
(II) | $\Delta_{\rm{th}}$ | $=\alpha(1-x)^{\beta}\bigl{(}(\beta+2)x-1\bigr{)}$ | | | |
 | $\alpha$ | $=0.13\pm 0.05$ | | | |
 | $\beta$ | $=4.4\pm 1.0$ | 1.05 | 15-32% | 18-36% | 14-31%
 | neutron | | | | |
(I) | $\Delta_{\rm{th}}$ | $=0$ | 1.66 | | |
(II) | $\Delta_{\rm{th}}$ | $=\alpha(1-x)^{\beta}\bigl{(}(\beta+2)x-1\bigr{)}$ | | | |
 | $\alpha$ | $=0.64\pm 0.92$ | | | |
 | $\beta$ | $=24\pm 10$ | 1.11 | | | 18-40%

Table 1: Results of the one-parameter fits of the WW-breaking term
$\Delta_{\rm{th}}$ for different choices of its functional form. The value $r$ of
the relative size of the breaking term is computed for the whole measured $x$
range, [0.02,1]; the low-$x$ region, [0.02,0.15]; and the large-$x$ region,
[0.15,1]. See text for further details.
## 4 A proposal for an experimental campaign
Figure 1 clearly shows the need for better precision in $g_{2}$ measurements
with both proton and neutron targets. In particular, for the neutron high
precision is needed away from $x\approx 0.15-0.20$ where JLab E01-012 data
almost completely determine the presented fits. But even if in the future the
WW approximation is found to be more precise than in our analysis, we would
only be able to conclude that
$\sum_{a}e_{a}^{2}\biggl{(}\tilde{g}_{T}^{a}(x)-\int_{x}^{1}\frac{dy}{y}\tilde{g}_{T}^{a}(y)+\int_{x}^{1}\frac{dy}{y}\hat{g}_{T}^{a}(y)\biggr{)}\approx
0\ .$ (16)
This can clearly happen either because $\hat{g}_{T}$ and $\tilde{g}_{T}$ are
both small, or because they accidentally cancel each other. Therefore no
information can be obtained on the size of the twist-3 quark-gluon-quark term
$\tilde{g}_{T}$ from the experimental data on $g_{2}$ alone. (Note that these
results were essentially already obtained in Ref. Metz et al. (2008). In that
work, however, the authors assumed $\tilde{g}_{T}$ to be small and the breaking
of the WW relation to be small, obtaining a small $\hat{g}_{T}$, which is
unjustified as we have just discussed.)
However, individually determining the size of $\hat{g}_{T}$ and
$\tilde{g}_{T}$ is very important to gather information on the $x$,
$x^{\prime}$ dependence of the quark-gluon-quark correlator. This can be
experimentally accomplished by using the EOM (6) and LIR (7) and measuring the
$g_{1T}^{(1)}$ function, accessible in semi-inclusive deep inelastic
scattering with transversely polarized targets and longitudinally polarized
lepton beams (see, e.g., Ref. Bacchetta et al. (2007)):
$\displaystyle\begin{split}\hat{g}_{T}^{a}(x)&=g_{T}^{a}(x)-g_{1}^{a}(x)-\frac{d}{dx}\,g_{1T}^{a(1)}(x)\\\
\tilde{g}_{T}^{a}(x)&=g_{T}^{a}(x)-\frac{1}{x}g_{1T}^{a(1)}(x)\ .\end{split}$
(17)
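A schematic numerical implementation of Eq. (17), assuming the three input functions are already available on a common $x$ grid (the parametrizations below are placeholders, not fits or model predictions), could look as follows; the $x$ derivative entering the LIR is taken by finite differences.

```python
import numpy as np

# Common x grid and toy input functions (hypothetical shapes, illustration only).
x    = np.linspace(0.05, 0.95, 181)
g1   = 0.6 * x**0.7 * (1 - x)**3        # twist-2 g1(x)
gT   = 0.5 * x**0.6 * (1 - x)**3        # gT(x) = g1(x) + g2(x)
g1T1 = 0.3 * x**1.2 * (1 - x)**4        # first transverse moment g_{1T}^{(1)}(x)

# Eq. (17): point-by-point extraction of the two pure twist-3 functions.
dg1T1_dx = np.gradient(g1T1, x)         # d/dx g_{1T}^{(1)}, finite differences
gT_hat   = gT - g1 - dg1T1_dx           # \hat{g}_T(x), from the LIR
gT_tilde = gT - g1T1 / x                # \tilde{g}_T(x), from the EOM
```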
In TMD factorization, $g_{1T}^{(1)}$ is the first transverse moment of a
twist-2 TMD. Its experimental determination is challenging because it requires
measuring a double spin asymmetry in semi-inclusive DIS up to rather large
hadron transverse momentum. Furthermore, in the LIR it appears differentiated
in $x$, which requires a rather fine $x$ binning. Preliminary data from the
E06-014 and SANE (E-07-003) experiments at Jefferson Lab will soon be
available, and will demonstrate the feasibility of the proposed measurement of
$\hat{g}_{T}$ and $\tilde{g}_{T}$.
This measurement is also very important because the EOM (6), the LIR (7) and the
WW-relation breaking (13) provide three independent measurements of two independent
quantities. Verifying their consistency will constitute a rather stringent test of TMD
factorization and its connection to collinear factorization.
This work was supported by the DOE contract No. DE-AC05-06OR23177, under which
Jefferson Science Associates, LLC operates Jefferson Lab, and NSF award No.
0653508.
## References
* Accardi and Qiu (2008) A. Accardi, and J.-W. Qiu, _JHEP_ 07, 090 (2008).
* Accardi and Melnitchouk (2008) A. Accardi, and W. Melnitchouk, _Phys. Lett._ B670, 114–118 (2008).
* Ellis et al. (1983) R. K. Ellis, W. Furmanski, and R. Petronzio, _Nucl. Phys._ B212, 29 (1983).
* Melnitchouk et al. (2005) W. Melnitchouk, R. Ent, and C. Keppel, _Phys. Rept._ 406, 127–301 (2005).
* Farrar and Jackson (1975) G. R. Farrar, and D. R. Jackson, _Phys. Rev. Lett._ 35, 1416 (1975).
* Brodsky et al. (1995) S. J. Brodsky, M. Burkardt, and I. Schmidt, _Nucl. Phys._ B441, 197–214 (1995).
* Isgur (1999) N. Isgur, _Phys. Rev._ D59, 034013 (1999).
* Burkardt (2009) M. Burkardt, contribution to these proceedings.
* Belitsky (1997) A. V. Belitsky (1997), hep-ph/9703432.
* Ji et al. (2006) X. Ji, J.-W. Qiu, W. Vogelsang, and F. Yuan, _Phys. Rev. Lett._ 97, 082002 (2006).
* Wandzura and Wilczek (1977) S. Wandzura, and F. Wilczek, _Phys. Lett._ B72, 195 (1977).
* Jaffe (1990) R. L. Jaffe, _Comments Nucl. Part. Phys._ 19, 239 (1990).
* Jaffe and Ji (1991) R. L. Jaffe, and X. Ji, _Phys. Rev._ D43, 724–732 (1991).
* Accardi et al. (2009) A. Accardi, A. Bacchetta, W. Melnitchouk, and M. Schlegel, in preparation.
* Boer et al. (1998) D. Boer, P. J. Mulders, and O. V. Teryaev, _Phys. Rev._ D57, 3057–3064 (1998).
* Kanazawa and Koike (2000) Y. Kanazawa, and Y. Koike, _Phys. Lett._ B478, 121–126 (2000).
* Harindranath and Zhang (1997) A. Harindranath, and W.-M. Zhang, _Phys. Lett._ B408, 347–356 (1997).
* Kundu and Metz (2002) R. Kundu, and A. Metz, _Phys. Rev._ D65, 014009 (2002).
* Leader et al. (2007) E. Leader, A. V. Sidorov, and D. B. Stamenov, _Phys. Rev._ D75, 074027 (2007).
* Abe et al. (1998) K. Abe, et al., _Phys. Rev._ D58, 112003 (1998).
* Anthony et al. (2003) P. L. Anthony, et al., _Phys. Lett._ B553, 18–24 (2003).
* Zheng et al. (2004) X. Zheng, et al., _Phys. Rev._ C70, 065207 (2004).
* Kramer et al. (2005) K. Kramer, et al., _Phys. Rev. Lett._ 95, 142002 (2005).
* Metz et al. (2008) A. Metz, P. Schweitzer, and T. Teckentrup, arXiv:0810.5212 [hep-ph].
* Bacchetta et al. (2007) A. Bacchetta, M. Diehl, K. Goeke, A. Metz, P. J. Mulders, and M. Schlegel, _JHEP_ 02, 093 (2007).
|
arxiv-papers
| 2009-05-19T14:36:35 |
2024-09-04T02:49:02.735549
|
{
"license": "Public Domain",
"authors": "Alberto Accardi, Alessandro Bacchetta, Marc Schlegel (Jefferson Lab)",
"submitter": "Alessandro Bacchetta",
"url": "https://arxiv.org/abs/0905.3118"
}
|
0905.3205
|
This paper has been withdrawn by the author(s), due to need for more
improvement.
|
arxiv-papers
| 2009-05-20T00:46:22 |
2024-09-04T02:49:02.742565
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Weikai Qi, Yong Chen, and Yilong Han",
"submitter": "Yong Chen",
"url": "https://arxiv.org/abs/0905.3205"
}
|
0905.3253
|
# Arbitrary rotation invariant random matrix ensembles and supersymmetry:
orthogonal and unitary–symplectic case
Mario Kieburga)†, Johan Grönqvistb) and Thomas Guhra) a)Universität Duisburg-
Essen, Lotharstraße 1, 47048 Duisburg, Germany
b)Matematisk Fysik, LTH, Lunds Universitet, Box 118, 22100 Lund, Sweden
mario.kieburg@uni-due.de
###### Abstract
Recently, the supersymmetry method was extended from Gaussian ensembles to
arbitrary unitarily invariant matrix ensembles by generalizing the
Hubbard–Stratonovich transformation. Here, we complete this extension by
including arbitrary orthogonally and unitary–symplectically invariant matrix
ensembles. The results are equivalent to, but the approach is different from
the superbosonization formula. We express our results in a unifying way. We
also give explicit expressions for all one–point functions and discuss
features of the higher order correlations.
###### pacs:
02.30.Px, 05.30.Ch, 05.30.-d, 05.45.Mt
††: J. Phys. A: Math. Gen.
## 1 Introduction
In random matrix theory, supersymmetry is an indispensable tool [1, 2, 3, 4].
Recently, this method was extended from Gaussian probability densities to
arbitrary rotation invariant ones. Presently, there are two approaches
referred to as superbosonization. The first approach is a generalization of the
Hubbard–Stratonovich transformation for rotation invariant random matrix
ensembles [5]. The basic idea is the introduction of a proper
Dirac–distribution in superspace, extending earlier work in the context of
scattering theory [6], universality considerations [7], field theory [8, 9]
and quantum chromodynamics [10]. The second approach is the superbosonization
formula developed in Refs. [11, 12]. It is an identity for integrals over
superfunctions on rectangular supermatrices which are rotation invariant under
an ordinary group.
Here, we further extend the generalized Hubbard–Stratonovich transformation to
the orthogonal and the unitary symplectic symmetry class in a unifying way. To
this end, we use an analog of the Sekiguchi differential operator for ordinary
matrix Bessel–functions. We also aim at a presentation which is mathematically
more sound than the one in Ref. [5].
The article is organized as follows. The problem is posed in Sec. 2. We give
an outline of the calculation in Sec. 3. In Sec. 4, we present the generalized
Hubbard–Stratonovich transformation. In Sec. 5, we carry out the calculation
for arbitrary ensembles as far as possible. Then, we restrict the computation
to the three classical symmetry classes. We thereby extend the
supersymmetric Ingham–Siegel integral [5]. In Sec. 6, we give a more compact
expression of the generating function in terms of supermatrix
Bessel–functions. We show that the generating function is independent of the
chosen representation for the characteristic function. The one–point and
higher correlation functions are expressed as eigenvalue integrals in Sec. 7.
In the appendices, we present details of the calculations.
## 2 Posing the problem
We consider a sub-vector space $\mathfrak{M}_{N}$ of the hermitian $N\times
N$–matrices ${\rm Herm\,}(2,N)$. ${\rm Herm\,}(\beta,N)$ is the set of real
orthogonal ($\beta=1$), hermitian ($\beta=2$) and quaternionic self-adjoint
($\beta=4$) matrices and $\beta$ is the Dyson-index. We use the complex
$2\times 2$ dimensional matrix representation for quaternionic numbers
$\mathbb{H}$. The results can easily be extended to other representations of
the quaternionic field. For the relation between the single representations,
we refer to a work by Jiang [13].
The object of interest is an arbitrary sufficiently integrable probability
density $P$ on $\mathfrak{M}_{N}$. Later, we assume that $P$ is an invariant
function under the action of the group
${\rm U\,}^{(\beta)}(N)=\left\\{\begin{array}[]{ll}{\rm O}(N)&,\ \beta=1\\\
{\rm U\,}(N)&,\ \beta=2\\\ {\rm USp}(2N)&,\ \beta=4\end{array}\right.$ (2.1)
and $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$. Here, we introduce
$\gamma_{2}=1$ for $\beta\in\\{1,2\\}$ and $\gamma_{2}=2$ for $\beta=4$ and,
furthermore, $\gamma_{1}=2\gamma_{2}/\beta$ and
$\tilde{\gamma}=\gamma_{1}\gamma_{2}$. These constants will play an important
role in the sequel.
We are interested in the $k$–point correlation functions
$R_{k}(x)=\mathbf{d}^{k}\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\tr\delta(x_{p}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N}-H)d[H]$ (2.2)
with the $k$ energies $x={\rm diag\,}(x_{1},\ldots,x_{k})$. Here, $\mathbf{d}$
is the inverse averaged eigenvalue degeneracy of an arbitrary matrix
$H\in\mathfrak{M}_{N}$. The measure $d[H]$ is defined as in Ref. [14]; it is
the product of the differentials of all real and imaginary parts of the matrix entries. For
example, we have $\mathbf{d}=1/2$ for $\mathfrak{M}_{2N}={\rm Herm\,}(4,N)$
and $\mathbf{d}=1$ for no eigenvalue degeneracy as for $\mathfrak{M}_{N}={\rm
Herm\,}(\beta,N)$ with $\beta\in\\{1,2\\}$. We use in Eq. (2.2) the
$\delta$–distribution which is defined by the matrix Green’s function. The
definition of the $k$–point correlation function (2.2) differs from Mehta’s
[15]. The two definitions can always be mapped onto each other as explained
for example in Ref. [4].
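For orientation, Eq. (2.2) can be sampled by plain Monte Carlo in the simplest setting, $k=1$ and a Gaussian unitary ensemble on ${\rm Herm\,}(2,N)$ (so $\mathbf{d}=1$); the matrix $\delta$–distribution is then realized as an eigenvalue histogram. This sketch is only meant to fix the normalization conventions and is not part of the supersymmetric construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 50, 2000

eigenvalues = []
for _ in range(samples):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2                 # GUE matrix, beta = 2, d = 1
    eigenvalues.append(np.linalg.eigvalsh(H))
eigenvalues = np.concatenate(eigenvalues)

# R_1(x) = <tr delta(x 1_N - H)>, estimated as a normalized histogram times N,
# so that int dx R_1(x) = N.
density, edges = np.histogram(eigenvalues, bins=120, density=True)
R1 = N * density
```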
We recall that it is convenient to consider the more general function
$\widehat{R}_{k}\left(x^{(L)}\right)=\mathbf{d}^{k}\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\tr[(x_{p}+L_{p}\imath\varepsilon)\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N}-H]^{-1}d[H]$ (2.3)
where we have suppressed the normalization constant. The quantities $L_{j}$ in
$x^{(L)}={\rm
diag\,}(x_{1}+L_{1}\imath\varepsilon,\ldots,x_{k}+L_{k}\imath\varepsilon)$ are
elements in $\\{\pm 1\\}$. We define $x^{\pm}={\rm
diag\,}(x_{1}\pm\imath\varepsilon,\ldots,x_{k}\pm\imath\varepsilon)$.
Considering the Fourier transformation of (2.2) we have
$\displaystyle r_{k}(t)$ $\displaystyle=$
$\displaystyle(2\pi)^{-k/2}\int\limits_{\mathbb{R}^{k}}R_{k}(x)\prod\limits_{p=1}^{k}\exp\left(\imath
x_{p}t_{p}\right)d[x]=$ (2.4) $\displaystyle=$
$\displaystyle\left(\frac{\mathbf{d}}{\sqrt{2\pi}}\right)^{k}\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\tr\exp\left(\imath
Ht_{p}\right)d[H]\ .$
The Fourier transformation of (2.3) yields
$\displaystyle\widehat{r}_{k}(t)$ $\displaystyle=$
$\displaystyle(2\pi)^{-k/2}\int\limits_{\mathbb{R}^{k}}\widehat{R}_{k}\left(x^{(L)}\right)\prod\limits_{p=1}^{k}\exp\left(\imath
x_{p}t_{p}\right)d[x]=$ (2.5) $\displaystyle=$
$\displaystyle\prod\limits_{p=1}^{k}\left[-L_{p}\
2\pi\imath\Theta(-L_{p}t_{p})\exp\left(\varepsilon
L_{p}t_{p}\right)\right]r_{k}(t)$
where $\Theta$ is the Heaviside distribution.
As in Ref. [5], the $k$–point correlation function is completely determined by
Eq. (2.3) with $L_{p}=-1$ for all $p$ if the Fourier transform (2.4) is entire
in all entries, i.e. analytic in all entries with infinite radius of
convergence. We obtain such a Fourier transform if the $k$–point correlation
function $R_{k}$ is a Schwartz–function on $\mathbb{R}^{k}$ with the property
$\int\limits_{\mathbb{R}^{k}}|R_{k}(x)|\prod\limits_{p=1}^{k}\exp\left(\tilde{\delta}x_{p}\right)d[x]<\infty\quad,\quad\forall\tilde{\delta}\in\mathbb{R}\
.$ (2.6)
This set of functions is dense in the set of Schwartz–functions on
$\mathbb{R}^{k}$ without this property. The notion dense refers to uniform
convergence. This is true since every Schwartz–function times a Gaussian
distribution $\exp\left(-\epsilon\sum\limits_{p=1}^{k}x_{p}^{2}\right)$,
$\epsilon>0$, is a Schwartz–function and fulfils Eq. (2.6). We prove that
$r_{k}$, see Eq. (2.4), is indeed entire in all entries for such $k$–point
correlation functions. To this end, we consider the function
$r_{k\delta}(t)=\int\limits_{\mathfrak{B}_{\delta}}R_{k}(x)\prod\limits_{p=1}^{k}\exp\left(\imath
x_{p}t_{p}\right)d[x],$ (2.7)
where $\mathfrak{B}_{\delta}$ is the closed $k$-dimensional real ball with
radius $\delta\in\mathbb{R}^{+}$. Due to the Paley–Wiener theorem [16],
$r_{k\delta}$ is entire analytic for all $\delta\in\mathbb{R}^{+}$. Let
$\mathfrak{B}_{\tilde{\delta}}^{\mathbb{C}}$ be another $k$-dimensional
complex ball with radius $\tilde{\delta}\in\mathbb{R}^{+}$. Then, we have
$\underset{\delta\to\infty}{\lim}\underset{t\in\mathfrak{B}_{\tilde{\delta}}^{\mathbb{C}}}{\sup}|r_{k\delta}(t)-r_{k}(t)|\leq\underset{\delta\to\infty}{\lim}\int\limits_{\mathbb{R}^{k}\setminus\mathfrak{B}_{\delta}}|R_{k}(x)|\prod\limits_{p=1}^{k}\exp\left(\tilde{\delta}x_{p}\right)d[x]=0\
.$ (2.8)
The limit of $r_{k\delta}$ to $r_{k}$ is uniform on every compact subset of
$\mathbb{C}^{k}$. Thus, $r_{k}$ is entire analytic.
The modified correlation function $\widehat{R}_{k}$ for all choices of the
$L_{p}$ can be reconstructed by Eq. (2.5). In Sec. 7, we extend the results by
a limiting procedure, in a locally convex sense, to non-analytic functions.
We derive $\widehat{R}_{k}\left(x^{-}\right)$ from the generating function
$Z_{k}\left(x^{-}+J\right)=\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\frac{\det[H-(x_{p}^{-}+J_{p})\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{N}]}{\det[H-(x_{p}^{-}-J_{p})\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N}]}d[H]$ (2.9)
by differentiation with respect to the source variables [17]
$\widehat{R}_{k}\left(x^{-}\right)=\left(\frac{\mathbf{d}}{2}\right)^{k}\left.\frac{\partial^{k}}{\prod_{p=1}^{k}\partial
J_{p}}Z_{k}\left(x^{-}+J\right)\right|_{J=0}$ (2.10)
where $x^{-}+J=x^{-}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{4}+{\rm diag\,}(J_{1},\ldots,J_{k})\otimes{\rm
diag\,}(-\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{2},\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2})$. By
definition, $Z_{k}$ is normalized to unity at $J=0$.
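The relation between Eqs. (2.9), (2.10) and (2.3) can be illustrated by brute force for a small Gaussian unitary ensemble ($\beta=2$, $k=1$, $\mathbf{d}=1$): the generating function is estimated as an ensemble average of the determinant ratio, the source derivative is taken by finite differences, and the result is compared with the average resolvent trace. The ensemble size, $\varepsilon$ and step size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples, eps, x0, dJ = 6, 2000, 0.2, 0.4, 1e-4

def gue(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

Hs = [gue(N) for _ in range(samples)]
I = np.eye(N)
z = x0 - 1j * eps                         # x^- = x - i*eps

def Z1(J):
    """Monte Carlo estimate of the generating function (2.9) for k = 1."""
    return np.mean([np.linalg.det(H - (z + J) * I) /
                    np.linalg.det(H - (z - J) * I) for H in Hs])

# Eq. (2.10): R^hat_1(x^-) = (d/2) dZ_1/dJ at J = 0, here with d = 1.
lhs = 0.5 * (Z1(dJ) - Z1(-dJ)) / (2 * dJ)
# Eq. (2.3): direct average of tr[(x^- 1_N - H)^{-1}] over the same ensemble.
rhs = np.mean([np.trace(np.linalg.inv(z * I - H)) for H in Hs])
print(lhs, rhs)   # agree up to the finite-difference error
```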
## 3 Sketch of our approach
To provide a guideline through the detailed presentation to follow in the
ensuing Sections, we briefly sketch the main ideas as in Ref. [5] and as
further extended in the present contribution.
To express the generating function (2.9) as an integral in superspace, we
write the determinants as Gaussian integrals over vectors of ordinary and
Grassmann variables. We then perform the ensemble average which is equivalent
to calculating the characteristic function
$\Phi(K)=\int P(H)\exp(\imath\tr HK)d[H]$ (3.1)
of the probability density. The rotation invariance of $P(H)$ carries over to
$\Phi(K)$. The ordinary matrix $K$ contains the abovementioned vectors of
ordinary and Grassmann variables as dyadic products. It has a dual matrix $B$
in superspace whose entries are the scalar products of these vectors. The
reduction in the degrees of freedom is fully encoded in this duality, as the
dimensions of $K$ and $B$ scale with $N$ and $k$, respectively. The crucial
identity
$\tr K^{m}={\rm Str\,}B^{m},\quad\forall m\in\mathbb{N},$ (3.2)
yields the supersymmetric extension of the rotation invariant characteristic
function,
$\Phi(K)=\Phi(\tr K,\tr K^{2},...)=\Phi({\rm Str\,}B,{\rm
Str\,}B^{2},...)=\Phi(B)\ ,$ (3.3)
which is now viewed as a function in ordinary and superspace. We rewrite it by
inserting a proper Dirac–distribution in superspace,
$\displaystyle\Phi(B)$ $\displaystyle=$
$\displaystyle\int\Phi(\rho)\delta(\rho-B)d[\rho]$ (3.4) $\displaystyle\sim$
$\displaystyle\int\int\Phi(\rho)\exp[\imath{\rm
Str\,}(\rho-B)\sigma]d[\rho]d[\sigma]\ ,$ (3.5)
where the supermatrix $\rho$ and $\sigma$ are introduced as integration
variables. The vectors of ordinary and Grassmann variables now appear as in
the conventional Hubbard–Stratonovich transformation and can hence be
integrated out in the same way. We are left with the integrals over $\rho$ and
$\sigma$. If we do the integral over $\rho$ we arrive at the result
$Z_{k}\left(x^{-}+J\right)\sim\int Q(\sigma){\rm
Sdet\,}^{-N/\gamma_{1}}(\sigma-x^{-}-J)d[\sigma].$ (3.6)
for the generating function. The superfunction $Q$ is the superspace Fourier
transform of $\Phi$ and plays the role of a probability density in superspace,
$Q(\sigma)=\int\Phi(\rho)\exp(\imath{\rm Str\,}\rho\sigma)d[\rho]\ .$ (3.7)
If we choose to integrate over $\sigma$ instead, we obtain another
representation of the generating function
$Z_{k}\left(x^{-}+J\right)\sim\int\Phi(\rho)I(\rho)\exp[-\imath{\rm
Str\,}\rho(x^{-}+J)]d[\rho]\ ,$ (3.8)
which still contains the characteristic function. The distribution $I(\rho)$
appears. It is the supersymmetric version of the Ingham–Siegel integral. It is
a rotation invariant function resulting from the Fourier transformation of the
superdeterminant in Eq. (3.6).
One way to proceed further is to diagonalize the supermatrix $\rho$ and to
integrate over the angles. We may omit Efetov–Wegner terms and have
$Z_{k}\left(x^{-}+J\right)\sim\int\Phi(r)I(r)\varphi(-\imath r,x^{-}+J)d[r],$
(3.9)
where $\varphi$ is a supermatrix Bessel–function. The differentiation with
respect to $J$ gives $\widehat{R}_{k}$. We can introduce other signatures of
$L$ by Fourier transformation of Eq. (3.8) and identification with Eq. (2.5).
Eventually, we find the correlation functions $R_{k}$.
## 4 Generalized Hubbard–Stratonovich transformation
In Sec. 4.1, we express the determinants in Eq. (2.9) as Gaussian integrals
and introduce the characteristic function of the matrix ensemble. In Sec. 4.2,
we qualitatively present the duality between ordinary and superspace which is
quantitatively discussed in Sec. 4.3. Then, we restrict the matrix ensembles
to the classical symmetry classes. In Sec. 4.4, we investigate the
diagonalization of the dyadic matrix $K$ appearing from the Gaussian
integrals. The ambiguity of the supersymmetric extension of the characteristic
function is discussed in Sec. 4.5. In Sec. 4.6, we present the symmetries of
the appearing supermatrices. In Sec. 4.7, we replace the dyadic supermatrix in
the supersymmetric extended characteristic function with a symmetric
supermatrix discussed in the section before.
### 4.1 Average over the ensemble and the characteristic function
To formulate the generating function as a supersymmetric integral, we consider
a complex Grassmann algebra $\Lambda=\bigoplus\limits_{j=0}^{2Nk}\Lambda_{j}$
with $Nk$-pairs $\\{\zeta_{jp},\zeta_{jp}^{*}\\}_{j,p}$ of Grassmann variables
[18]. We define the $k$ anticommuting vectors and their adjoint
$\zeta_{p}=(\zeta_{1p},\ldots,\zeta_{Np})^{T}\ \ \ {\rm and}\ \ \
\zeta_{p}^{\dagger}=(\zeta_{1p}^{*},\ldots,\zeta_{Np}^{*})\ ,$ (4.1)
respectively. For integrations over Grassmann variables, we use the
conventions of Ref. [14]. We also consider $k$ $N$–dimensional complex vectors
$\\{z_{p},z_{p}^{\dagger}\\}_{1\leq p\leq k}$. In the usual way, we write the
determinants as Gaussian integrals and find for Eq. (2.9)
$\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$
$\displaystyle(-\imath)^{Nk}\int\limits_{\mathfrak{M}_{N}}\int\limits_{\mathfrak{C}_{kN}}d[\zeta]d[z]d[H]P(H)\times$
(4.2) $\displaystyle\times$ $\displaystyle{\rm
exp}\left(\imath\sum\limits_{p=1}^{k}\left\\{\zeta_{p}^{\dagger}[H-(x_{p}^{-}+J_{p})\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{N}]\zeta_{p}+z_{p}^{\dagger}[H-(x_{p}^{-}-J_{p})\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N}]z_{p}\right\\}\right)$
where
$d[\zeta]=\prod\limits_{p=1}^{k}\prod\limits_{j=1}^{N}d\zeta_{jp}d\zeta_{jp}^{*}$,
$d[z]=\prod\limits_{p=1}^{k}\prod\limits_{j=1}^{N}dz_{jp}dz_{jp}^{*}$ and
$\mathfrak{C}_{kN}=\mathbb{C}^{kN}\times\Lambda_{2Nk}$. Using
$\sum\limits_{p=1}^{k}\left(\zeta_{p}^{\dagger}H\zeta_{p}+z_{p}^{\dagger}Hz_{p}\right)=\tr
H\widetilde{K}$ (4.3)
with
$\widetilde{K}=\sum\limits_{p=1}^{k}\left(z_{p}z_{p}^{\dagger}-\zeta_{p}\zeta_{p}^{\dagger}\right)$
(4.4)
leads to
$\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$
$\displaystyle(-\imath)^{Nk}\int\limits_{\mathfrak{C}_{kN}}\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)\times$
(4.5) $\displaystyle\times$ $\displaystyle{\rm
exp}\left(-\imath\sum\limits_{p=1}^{k}\left[(x_{p}^{-}+J_{p})\zeta_{p}^{\dagger}\zeta_{p}+(x_{p}^{-}-J_{p})z_{p}^{\dagger}z_{p}\right]\right)d[\zeta]d[z]\
.$
where the integration over $H$ is the Fourier transformation of the
probability density $P$,
$\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)=\int\limits_{\mathfrak{M}_{N}}P(H)\exp\left(\imath\tr
H\widetilde{K}\right)d[H]\ .$ (4.6)
This Fourier transform is called characteristic function and is denoted by
$\Phi$ in Ref. [5] and in Eq. (3.1). The projection operator
$\hat{\pi}(\mathfrak{M}_{N})$ onto the space $\mathfrak{M}_{N}$ is crucial.
For $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$ the projection operator
is
$\hat{\pi}\left({\rm
Herm\,}(\beta,N);\widetilde{K}\right)=\frac{1}{2}\left[\widetilde{K}+\widehat{Y}(\widetilde{K})\right]$
(4.7)
with
$\widehat{Y}(\widetilde{K})=\left\\{\begin{array}[]{ll}\widetilde{K}^{T}&,\
\beta=1\\\ \widetilde{K}&,\ \beta=2\\\ \left(Y_{{\rm
s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{N}\right)\widetilde{K}^{T}\left(Y_{{\rm
s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}\right)&,\
\beta=4\end{array}\right.$ (4.8)
and the symplectic unit
$Y_{{\rm s}}=\left[\begin{array}[]{cc}0&1\\\ -1&0\end{array}\right]\ ,$ (4.9)
where $\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}$ is the $N\times
N$–unit matrix. The transposition in Eq. (4.8) can also be replaced by the
complex conjugation due to $\widetilde{K}^{\dagger}=\widetilde{K}$. The
projection onto the set of diagonal matrices
$\bigoplus\limits_{j=1}^{N}\mathbb{R}$ is
$\hat{\pi}\left(\bigoplus_{j=1}^{N}\mathbb{R};\widetilde{K}\right)={\rm
diag\,}\left(\widetilde{K}_{11},\widetilde{K}_{22},\ldots,\widetilde{K}_{NN}\right)\
.$ (4.10)
### 4.2 Duality between ordinary and superspace
Is it always possible to find a supermatrix representation for the
characteristic function $\mathcal{F}P$ such that Eq. (4.5) has an integral
representation over supermatrices as it is known [5, 12] for rotation
invariant $P$ on $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$? The
integral (4.5) is an integral over the supervectors
$v_{j}=(z_{j1}^{*},\ldots,z_{jk}^{*},-\zeta_{j1}^{*},\ldots,-\zeta_{jk}^{*})^{T}$
and their adjoint
$v_{j}^{\dagger}=(z_{j1},\ldots,z_{jk},\zeta_{j1},\ldots,\zeta_{jk})$. The
adjoint “$\dagger$” is the complex conjugation with the supersymmetric
transposition and “$T$” is the ordinary transposition. The entries of the
matrix $\widetilde{K}$ are $v_{n}^{\dagger}v_{m}$. If we do not use any
symmetry of the matrix ensemble, we can write these scalar products of
supervectors as supertraces
$v_{n}^{\dagger}v_{m}={\rm Str\,}v_{m}v_{n}^{\dagger}\ .$ (4.11)
Then, we can transform each of these supertraces with a Dirac–distribution to
an integral over a $(k+k)\times(k+k)$–supermatrix. We define the
Dirac–distribution in superspace as in Refs. [19, 10]. The ambiguity pointed out
in Ref. [20], which occurs in such a transformation, is discussed in
subsections 4.5 and 6.3.
The procedure above is tedious. Using the symmetries of the ensemble
($\mathcal{F}P,\mathfrak{M}_{N}$), we can reduce the number of integrals in
superspace. We will see that the number of commuting real integrals and of
Grassmannian integrals is $2k^{2}+2k^{2}$ ($\beta=2$) or $4k^{2}+4k^{2}$
($\beta\in\\{1,4\\}$) for a rotation invariant matrix ensemble on ${\rm
Herm\,}(\beta,N)$. If there is no symmetry, the number of integrals is not
reduced: if the transformation above is used, one has to integrate over
$N(N+1)$ ordinary hermitian $k\times k$–matrices and their corresponding
anticommuting parameters.
### 4.3 Analysis of the duality between ordinary and superspace
We consider an orthonormal basis $\\{A_{n}\\}_{1\leq n\leq d}$ of
$\mathfrak{M}_{N}$ where $d$ is the dimension of $\mathfrak{M}_{N}$. We use
the trace $\tr{A_{n}A_{m}}=\delta_{nm}$ as the scalar product and recall that
$\mathfrak{M}_{N}$ is a real vector space. Every element of this basis is
represented as
$A_{n}=\sum\limits_{j=1}^{N}\lambda_{jn}e_{jn}e_{jn}^{\dagger}\ \ \ {\rm
with}\ \ \ \sum\limits_{j=1}^{N}\lambda_{jn}^{2}=1\ .$ (4.12)
Here, $e_{jn}$ are the normalized eigenvectors of $A_{n}$ to the eigenvalues
$\lambda_{jn}$. Then, we construct every matrix $H\in\mathfrak{M}_{N}$ in this
basis
$H=\sum\limits_{n=1}^{d}h_{n}A_{n}\ .$ (4.13)
We find for the characteristic function
$\displaystyle\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)$
$\displaystyle=$
$\displaystyle\int\limits_{\mathfrak{M}_{N}}P\left(\sum\limits_{n=1}^{d}h_{n}A_{n}\right){\rm
exp}\left(\imath\sum\limits_{n=1}^{d}h_{n}\tr A_{n}\widetilde{K}\right)d[H]=$
(4.14) $\displaystyle=$
$\displaystyle\mathcal{F}P\left(\sum\limits_{n=1}^{d}\tr\left(\widetilde{K}A_{n}\right)A_{n}\right)\
.$
With help of Eq. (4.12) and an equation analogous to (4.11), the
characteristic function is
$\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)=\mathcal{F}P\left(\sum\limits_{n=1}^{d}{\rm
Str\,}\left(\sum\limits_{j=1}^{N}\lambda_{jn}Ve_{jn}e_{jn}^{\dagger}V^{\dagger}\right)A_{n}\right)$
(4.15)
with $V=(v_{1},\ldots,v_{N})$. We see that the matrix $\widetilde{K}$ is
projected onto
$K=\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})$ (4.16)
where the projection is the argument of the characteristic function in Eq.
(4.14). The matrices in the supertraces of (4.15) can be replaced by
$(k+k)\times(k+k)$–supermatrices with the Dirac–distributions described above.
If the ensemble has no symmetry, this only reduces the number of
supermatrices to the dimension of $\mathfrak{M}_{N}$. Nevertheless, we can
find a more compact supersymmetric expression of the matrix $K$ such that the
number of the resulting integrals only depends on $k$ but not on $N$. This is
possible if $K$ is a dyadic matrix of vectors where the number of vectors is
independent of $N$ and the probability distribution only depends on invariants
of $H$. The ensembles with $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$
and a probability density $P$ invariant under the action of ${\rm
U\,}^{(\beta)}(N)$ fulfil these properties. It is known [5, 12] that these
cases have a very compact supersymmetric expression. Furthermore, these
ensembles are well analyzed for Gaussian–distributions with help of the
Hubbard–Stratonovitch transformation [1, 3, 2].
In the present context, the cases of interest are
$\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$ with a probability density
$P$ invariant under the action ${\rm U\,}^{(\beta)}(N)$. We need this symmetry
to simplify Eq. (4.15). Let $N\geq\gamma_{1}k$. This restriction also appears
in the superbosonization formula [12]. If $N<\gamma_{1}k$, one has to
modify the calculations below. For the superbosonization formula, Bunder,
Efetov, Kravtsov, Yevtushenko, and Zirnbauer [20] presented such a
modification.
The symmetries of a function $f$ carry over to its Fourier transform
$\mathcal{F}f$. Thus, the characteristic function $\mathcal{F}P$ is invariant
under the action of ${\rm U\,}^{(\beta)}(N)$. Let $\widetilde{K}_{0}$ be an
arbitrary ordinary hermitian matrix in the Fourier transformation (4.6) of the
probability density. We assume that the characteristic function is analytic in
the eigenvalues of $\widetilde{K}_{0}$. Then, we expand $\mathcal{F}P$ as a
power series in these eigenvalues. Since the characteristic function is
rotation invariant every single polynomial in this power series of a
homogeneous degree is permutation invariant. With help of the fundamental
theorem of symmetric functions [21] we rewrite these polynomials in the basis
of elementary polynomials. This is equivalent to writing these polynomials in
the basis of the traces $\tr\left[\hat{\pi}\left({\rm
Herm\,}(\beta,N),\widetilde{K}_{0}\right)\right]^{m}$, $m\in\mathbb{N}$. The
analytic continuation of $\mathcal{F}P$ from $\widetilde{K}_{0}$ to
$\widetilde{K}$ yields that the characteristic function in (4.6) only depends
on $\tr\left[\hat{\pi}\left({\rm
Herm\,}(\beta,N),\widetilde{K}\right)\right]^{m}$, $m\in\mathbb{N}$.
Defining the matrix
$V^{\dagger}=(z_{1},\ldots,z_{k},Yz_{1}^{*},\ldots,Yz_{k}^{*},\zeta_{1},\ldots,\zeta_{k},Y\zeta_{1}^{*},\ldots,Y\zeta_{k}^{*})$
(4.17)
and its adjoint
$V=(z_{1}^{*},\ldots,z_{k}^{*},Yz_{1},\ldots,Yz_{k},-\zeta_{1}^{*},\ldots,-\zeta_{k}^{*},Y\zeta_{1},\ldots,Y\zeta_{k})^{T}$
(4.18)
with
$Y=\left\\{\begin{array}[]{ll}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{N}&,\ \beta=1\\\ 0&,\ \beta=2\\\ Y_{{\rm
s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}&,\
\beta=4\end{array}\right.,$ (4.19)
we find
$K=\hat{\pi}\left({\rm
Herm\,}(\beta,N);\widetilde{K}\right)=\frac{1}{\tilde{\gamma}}V^{\dagger}V\ .$
(4.20)
The crucial identity
$\tr(V^{\dagger}V)^{m}={\rm Str\,}(VV^{\dagger})^{m}$ (4.21)
holds for all $\beta$. It connects ordinary and superspace. For $\beta=2$, a
proof can be found in Ref. [5]. In A, we show that the equation
${\rm Str\,}V_{1}V_{2}={\rm Str\,}V_{2}V_{1}$ (4.22)
holds for all rectangular matrices of the form
$V_{1}=\left[\begin{array}[]{cc}\overbrace{A_{1}}^{a}&\overbrace{B_{1}}^{b}\hskip
0.85358pt\\}c\\\ C_{1}&D_{1}\hskip 5.97508pt\\}d\end{array}\right]\ \ \ {\rm
and}\ \ \
V_{2}=\left[\begin{array}[]{cc}\overbrace{A_{2}}^{c}&\overbrace{B_{2}}^{d}\hskip
0.85358pt\\}a\\\ C_{2}&D_{2}\hskip 5.69054pt\\}b\end{array}\right]$ (4.23)
where $A_{j}$ and $D_{j}$ have commuting entries and $B_{j}$ and $C_{j}$
anticommuting ones. This implies in particular that Eq. (4.21) holds for all
$\beta$. Hence, we reduce the number of supermatrices corresponding to
$\widetilde{K}$ in Eq. (4.15) to a single $(2k+2k)\times(2k+2k)$–supermatrix. In
Ref. [5], the characteristic function $\Phi$ was, with the help of Eq. (4.21),
extended to superspace. We follow this idea and then proceed with the
Dirac–distribution mentioned above.
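The commuting part of the duality relation (4.21) can be verified directly in floating-point arithmetic: setting all Grassmann entries to zero reduces $V$ to an ordinary rectangular complex matrix and the supertrace to an ordinary trace, so this sanity check probes only the Boson–Boson sector and not the Grassmann contributions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 8, 3

# Grassmann entries set to zero: V is an ordinary (2k) x N complex matrix, and
# tr (V^dagger V)^m (an N x N trace) must equal tr (V V^dagger)^m (a 2k x 2k trace).
V = rng.normal(size=(2 * k, N)) + 1j * rng.normal(size=(2 * k, N))
for m in range(1, 6):
    lhs = np.trace(np.linalg.matrix_power(V.conj().T @ V, m))
    rhs = np.trace(np.linalg.matrix_power(V @ V.conj().T, m))
    assert np.isclose(lhs, rhs)
```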
### 4.4 Problems when diagonalizing $K$
In Ref. [5], two approaches to the duality relation between ordinary and
superspace were presented. The first approach is the duality equation (4.21)
for $\beta=2$. In our article, we follow this idea. In the second approach,
the matrix $K$ was diagonalized. With the eigenvalues of $K$, a projection
operator was constructed for the definition of a reduced probability density
according to the probability density $P$.
The latter approach fails because $K$ is only diagonalizable if it has no
degeneracy larger than $\gamma_{2}$. Moreover, for diagonalizable $K$, one
cannot find an eigenvalue $\lambda=0$. This is included in the following
statement, which we derive in E.
###### Statement 4.1
Let $N,\widetilde{N}\in\mathbb{N}$, $H^{(0)}\in{\rm Herm\,}(\beta,N)$,
$l\in\mathbb{R}^{\widetilde{N}}$ and $\\{\tau_{q}\\}_{1\leq
q\leq\widetilde{N}}$ $\gamma_{2}N$–dimensional vectors consisting of Grassmann
variables $\tau_{q}=(\tau_{q}^{(1)},\ldots,\tau_{q}^{(\gamma_{2}N)})^{T}$.
Then, the matrix
$H=H^{(0)}+\sum\limits_{q=1}^{\widetilde{N}}l_{q}\left[\tau_{q}\tau_{q}^{\dagger}+\widehat{Y}\left(\tau_{q}^{*}\tau_{q}^{T}\right)\right]$
(4.24)
cannot be diagonalized as $H=U{\rm
diag\,}(\lambda_{1},\ldots,\lambda_{N})U^{\dagger}$ by a matrix $U$ with the
properties
$U^{\dagger}U=UU^{\dagger}=\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{N}\ ,\ \ U^{*}=\widehat{Y}(U)$ (4.25)
and the body of $U$ lies in ${\rm U\,}^{(\beta)}(N)$ iff $H^{(0)}$ has
degeneracy larger than $\gamma_{2}$. Moreover, $H$ has no eigenvalue
$\lambda\in\mathbb{R}$.
In our particular case, $K$ cannot be diagonalized for $k<N-1$. Hence, we do
not follow the second approach of Ref. [5]. We emphasize that none of the
other results in Ref. [5] is affected, as they are proven by the correct first
approach, which we pursue here.
### 4.5 Ambiguity of the characteristic function in the supersymmetric
extension
In this section, we discuss the problem that the extension of the
characteristic function $\mathcal{F}P$ from ordinary matrices to supermatrices
is not unique. This results from the fact that symmetric supermatrices
comprise two kinds of eigenvalues, bosonic and fermionic ones, whereas
ordinary symmetric matrices have only one kind. In the supertraces, the two
kinds are weighted with opposite signs. To illustrate this problem, we also
give a simple example.
The rotation invariance of $\mathcal{F}P$ enables us to choose a
representation $\mathcal{F}P_{0}$ of $\mathcal{F}P$ acting on an arbitrary
number of matrix invariants
$\mathcal{F}P_{0}\left(\tr K^{m}|m\in\mathbb{N}\right)=\mathcal{F}P(K)\ .$
(4.26)
For this representation, a unique superfunction exists defined by
$\Phi_{0}(\sigma)=\mathcal{F}P_{0}\left({\rm
Str\,}\sigma^{m}|m\in\mathbb{N}\right)$ (4.27)
where
$\mathcal{F}P_{0}\left({\rm
Str\,}B^{m}|m\in\mathbb{N}\right)=\mathcal{F}P_{0}\left(\tr
K^{m}|m\in\mathbb{N}\right)$ (4.28)
with $B=\tilde{\gamma}^{-1}VV^{\dagger}$. However, the choice of the
representation $\mathcal{F}P_{0}$ is not unique. The question arises whether
it is a well defined object. It is clear that two representations
$\mathcal{F}P_{0}$ and $\mathcal{F}P_{1}$ are equal on ${\rm Herm\,}(\beta,N)$
due to the Cayley–Hamilton theorem,
$\mathcal{F}P_{0}(H)=\mathcal{F}P_{1}(H)\ ,\ H\in{\rm Herm\,}(\beta,N).$
(4.29)
The Cayley–Hamilton theorem states that there is a polynomial which is zero
for $H$. Thus, $H^{M}$ with $M>N$ is a polynomial in $\\{H^{n}\\}_{1\leq n\leq
N}$. Plugging an arbitrary symmetric supermatrix $\sigma$ into the
corresponding superfunctions $\Phi_{0}$ and $\Phi_{1}$ we realize that the
choices are not independent such that
$\Phi_{0}(\sigma)\neq\Phi_{1}(\sigma)$ (4.30)
holds for some $\sigma$.
For example with $N=2$, $k=1$ and $\beta=2$, let the characteristic function
$\mathcal{F}P(H)=\mathcal{F}P_{0}\left(\tr H^{3}\right)$. We get with help of
the Cayley–Hamilton theorem
$\mathcal{F}P_{1}\left(\tr H^{2},\tr H\right)=\mathcal{F}P_{0}\left(\tfrac{1}{2}\left(3\tr H\tr
H^{2}-\tr^{3}H\right)\right)=\mathcal{F}P_{0}\left(\tr H^{3}\right)=\mathcal{F}P(H)\
.$ (4.31)
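The trace identity entering Eq. (4.31) is a direct consequence of the Cayley–Hamilton theorem (equivalently, of Newton's identities) for $2\times 2$ matrices; a short symbolic check in terms of the two eigenvalues reads:

```python
import sympy as sp

# Check tr H^3 = (3 tr H tr H^2 - tr^3 H) / 2 for a 2x2 matrix with
# eigenvalues lambda1, lambda2 (Newton's identity with e3 = 0).
l1, l2 = sp.symbols('lambda1 lambda2')
p1 = l1 + l2          # tr H
p2 = l1**2 + l2**2    # tr H^2
p3 = l1**3 + l2**3    # tr H^3
assert sp.simplify(p3 - (3 * p1 * p2 - p1**3) / 2) == 0
```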
Let the set of ${\rm U\,}^{(\beta)}(p/q)$–symmetric supermatrices be
$\displaystyle\left\\{\sigma\in{\rm
Mat}(\tilde{\gamma}p/\tilde{\gamma}q)\left|\sigma^{\dagger}=\sigma,\
\sigma^{*}=\widehat{Y}_{{\rm S}}(\sigma)\right.\right\\}{\rm\ and}$ (4.32)
$\displaystyle\widehat{Y}_{{\rm
S}}(\sigma)=\left\\{\begin{array}[]{ll}\left[\begin{array}[]{cc}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{2p}&0\\\ 0&Y_{{\rm
s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{q}\end{array}\right]\sigma\left[\begin{array}[]{cc}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{2p}&0\\\ 0&Y_{{\rm
s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{q}\end{array}\right]&,\ \beta=1,\\\ \sigma^{*}&,\ \beta=2,\\\
\left[\begin{array}[]{cc}Y_{{\rm s}}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{p}&0\\\ 0&\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{2q}\end{array}\right]\sigma\left[\begin{array}[]{cc}Y_{{\rm
s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{p}&0\\\
0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2q}\end{array}\right]&,\
\beta=4,\end{array}\right.$ (4.44)
with respect to the supergroups
${\rm U\,}^{(\beta)}(p/q)=\left\\{\begin{array}[]{ll}{\rm
UOSp\,}^{(+)}(p/2q)&,\ \beta=1\\\ {\rm U\,}(p/q)&,\ \beta=2\\\ {\rm
UOSp\,}^{(-)}(2p/q)&,\ \beta=4\end{array}\right.\ .$ (4.45)
${\rm Mat}(\tilde{\gamma}p/\tilde{\gamma}q)$ is the set of
$(\tilde{\gamma}p+\tilde{\gamma}q)\times(\tilde{\gamma}p+\tilde{\gamma}q)$–supermatrices
with the complex Grassmann algebra
$\bigoplus\limits_{j=0}^{8k^{2}}\Lambda_{j}$. The definition of the two
representations ${\rm UOSp\,}^{(\pm)}$ of the supergroup ${\rm UOSp\,}$ can be
found in Refs. [22, 14]. We refer to the classification of Riemannian
symmetric superspaces by Zirnbauer [23].
We consider a ${\rm U\,}(1/1)$–symmetric supermatrix $\sigma$. This yields for
the supersymmetric extension of Eq. (4.31)
$\mathcal{F}P_{0}\left(\tfrac{1}{2}\left(3\,{\rm Str\,}\sigma\,{\rm Str\,}\sigma^{2}-{\rm
Str\,}^{3}\sigma\right)\right)\neq\mathcal{F}P_{0}\left({\rm
Str\,}\sigma^{3}\right)=\mathcal{F}P_{0}\left(\frac{1}{4}\left(3\frac{{\rm
Str\,}^{2}\sigma^{2}}{{\rm Str\,}\sigma}+{\rm Str\,}^{3}\sigma\right)\right)\
.$ (4.46)
One obtains the last equation with a theorem similar to the Cayley–Hamilton
theorem. More specifically, there exists a unique polynomial equation of order
two
$\sigma^{2}-\frac{{\rm Str\,}\sigma^{2}}{{\rm
Str\,}\sigma}\sigma-\frac{1}{4}\left({\rm Str\,}^{2}\sigma-\frac{{\rm
Str\,}^{2}\sigma^{2}}{{\rm Str\,}^{2}\sigma}\right)=0\ ,$ (4.47)
for a ${\rm U\,}(1/1)$–symmetric supermatrix $\sigma$.
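On the commuting diagonal part, $\sigma={\rm diag\,}(\lambda_{1};\lambda_{2})$ with ${\rm Str\,}\sigma^{m}=\lambda_{1}^{m}-\lambda_{2}^{m}$, both the supertrace identity in Eq. (4.46) and the quadratic relation (4.47) can be checked symbolically; the nilpotent contributions are not probed by such a check.

```python
import sympy as sp

l1, l2 = sp.symbols('lambda1 lambda2')
Str = lambda m: l1**m - l2**m    # Str sigma^m for sigma = diag(lambda1; lambda2)

# Right-hand identity of Eq. (4.46):
# Str sigma^3 = ( 3 Str^2(sigma^2)/Str(sigma) + Str^3(sigma) ) / 4
assert sp.simplify(Str(3) - (3 * Str(2)**2 / Str(1) + Str(1)**3) / 4) == 0

# Quadratic relation (4.47), evaluated on both eigenvalues of sigma.
c1 = Str(2) / Str(1)
c0 = (Str(1)**2 - Str(2)**2 / Str(1)**2) / 4
for lam in (l1, l2):
    assert sp.simplify(lam**2 - c1 * lam - c0) == 0
```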
The resulting integral in Sec. 5 for the generating function
$Z_{k}|_{\mathfrak{M}_{N}={\rm Herm\,}(\beta,N)}$ is invariant under the
choice of $\Phi_{0}$. This is proven in Sec. 6.3. Such an ambiguity of the
supersymmetric extension of the characteristic function was also investigated
by the authors of Ref. [20]. By working with the superbosonization formula,
they avoided the question of defining a Dirac–distribution in superspace. For
the supersymmetric extension from Eq. (4.28) to Eq. (4.27), we introduce a
Dirac–distribution that depends on the chosen representation of the superfunction.
### 4.6 Symmetries of the supermatrices
We find for a chosen representation $\mathcal{F}P_{0}$
$Z_{k}(x^{-}+J)=(-\imath)^{k_{2}N}\int\limits_{\mathfrak{C}^{k_{2}N}}\Phi_{0}(B)\exp\left[-\imath{\rm
Str\,}(x^{-}+J)B\right]d[\zeta]d[z]\ .$ (4.48)
Here, we introduce $k_{2}=\gamma_{2}k$, $k_{1}=\gamma_{1}k$ and
$\tilde{k}=\tilde{\gamma}k$. We will simplify the integral (4.48) to integrals
over $k_{1}$ eigenvalues in the Boson–Boson block and over $k_{2}$ eigenvalues
in the Fermion–Fermion block.
For every $\beta$, we have
$B^{\dagger}=B\ ,$ (4.49)
i.e. $B$ is self-adjoint. The complex conjugation yields
$B^{*}=\left\\{\begin{array}[]{ll}\widetilde{Y}B\widetilde{Y}^{T}\qquad,\
\beta\in\\{1,4\\}\\\ \widetilde{Y}B^{*}\widetilde{Y}^{T}\qquad,\
\beta=2\end{array}\right.$ (4.50)
with the $(2k+2k)\times(2k+2k)$–supermatrices
$\left.\widetilde{Y}\right|_{\beta=1}=\left[\begin{array}[]{ccc}0&\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k}&0\\\ \leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k}&0&0\\\ 0&0&Y_{{\rm
s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k}\end{array}\right]\qquad,\qquad\left.\widetilde{Y}\right|_{\beta=4}=\left[\begin{array}[]{ccc}Y_{{\rm
s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0&0\\\
0&0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}\\\
0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0\end{array}\right]$
(4.51)
and $\left.\widetilde{Y}\right|_{\beta=2}={\rm
diag\,}(1,0,1,0)\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k}$. We notice that for the unitary case $B$ is effectively a
$(k+k)\times(k+k)$–supermatrix, i.e. half the dimension. With help of the
properties (4.49) and (4.50) we construct the supermatrix sets
$\widetilde{\Sigma}_{0}(\beta,k)=\left\\{\sigma\in{\rm
Mat}(2k/2k)\left|\sigma^{\dagger}=\sigma,\
\sigma^{*}=\left\\{\begin{array}[]{ll}\widetilde{Y}\sigma\widetilde{Y}^{T}&,\
\beta\in\\{1,4\\}\\\ \widetilde{Y}\sigma^{*}\widetilde{Y}^{T}&,\
\beta=2\end{array}\right\\}\right.\right\\}\ .$ (4.52)
A matrix in $\widetilde{\Sigma}_{0}(\beta,k)$ fulfils the odd symmetry (4.50).
We transform this symmetry with the unitary transformations
$U|_{\beta=1}=\frac{1}{\sqrt{2}}\left[\begin{array}[]{ccc}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k}&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k}&0\\\ -\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k}&\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0\\\
0&0&\sqrt{2}\ \leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{2k}\end{array}\right]\ \ ,\ \
U|_{\beta=4}=\frac{1}{\sqrt{2}}\left[\begin{array}[]{ccc}\sqrt{2}\
\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2k}&0&0\\\
0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k}&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}\\\
0&-\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k}&\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k}\end{array}\right],$ (4.53)
$U|_{\beta=2}=\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{4k}$,
according to the Dyson–index, arriving at the well–known symmetries of
symmetric supermatrices [23], see also Eq. (4.32). Defining the sets
$\Sigma_{0}(\beta,k)=U\widetilde{\Sigma}_{0}(\beta,k)U^{\dagger}$, we remark
that the body of the Boson–Boson block of any element in these sets is a
matrix in ${\rm Herm\,}(\beta,k_{1})$. The body of the Fermion–Fermion block
of any matrix in $\Sigma_{0}(\beta,k)$ lies in ${\rm Herm\,}(4/\beta,k_{2})$.
We introduce a generalized Wick–rotation $e^{\imath\psi}$ to guarantee the
convergence of the supermatrix integrals. The usual choice of a Wick–rotation
is $e^{\imath\psi}=\imath$ for investigations of Gaussian probability
densities [5, 1, 2]. Here, general Wick–rotations [14] are also of interest.
Probability densities which lead to superfunction as $\exp\left(-{\rm
Str\,}\sigma^{4}\right)$ do not converge with the choice $\imath$. Thus, we
consider the modified sets
$\Sigma_{\psi}(\beta,k)=\widehat{\Psi}_{\psi}\Sigma_{0}(\beta,k)\widehat{\Psi}_{\psi}\
.$ (4.54)
with $\widehat{\Psi}_{\psi}={\rm diag\,}(\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{2k},e^{\imath\psi/2}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{2k})$. Let $\Sigma_{\psi}^{0}(\beta,k)$ be the set
of supermatrices which contains only zero and first order terms in the
Grassmann variables.
In the sequel, we restrict our calculations to superfunctions which possess a
Wick–rotation such that the integrals below are convergent. We have not
further explored the set of superfunctions with this property, but we know
that this set has to be very large and sufficient for our purposes. For
example, superfunctions of the form
$\Phi_{0}(\sigma)=\widetilde{\Phi}(\sigma)\exp\left(-{\rm
Str\,}\sigma^{2n}\right),\quad n\in\mathbb{N},$ (4.55)
fulfil this property if ${\rm ln}\widetilde{\Phi}(\sigma)$ does not increase
as fast as ${\rm Str\,}\sigma^{2n}$ at infinity.
### 4.7 Transformation to supermatrices by a Dirac–distribution
Following Refs. [6, 5, 10], $\Phi_{0}(B)$ can be written as a convolution in
the space of supermatrices $\Sigma_{\psi}^{0}(\beta,k)$ with a
Dirac–distribution. We have
$\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$
$\displaystyle(-\imath)^{k_{2}N}\int\limits_{\mathfrak{C}_{k_{2}N}}\int\limits_{\Sigma_{\psi}^{0}(\beta,k)}\Phi_{0}(\rho)\delta\left(\rho-
UBU^{\dagger}\right)d[\rho]\times$ (4.56) $\displaystyle\times$
$\displaystyle\exp\left[-\imath{\rm Str\,}(x^{-}+J)B\right]d[\zeta]d[z]$
where the measure is defined as
$d[\rho]=d[\rho_{1}]d[\rho_{2}]\underset{1\leq n\leq
k_{1}}{\prod\limits_{1\leq m\leq k_{2}}}d\eta_{nm}d\eta_{nm}^{*}\ .$ (4.57)
Here, $\\{\eta_{nm},\eta_{nm}^{*}\\}$ are pairs of generators of a Grassmann
algebra, while $\rho_{1}$ is the Boson–Boson and $\rho_{2}$ is the
Fermion–Fermion block without the phase of the Wick–rotation. Since $\rho_{1}$
and $\rho_{2}$ are in ${\rm Herm\,}(\beta,k_{1})$ and ${\rm
Herm\,}(4/\beta,k_{2})$, respectively, we use the real measures for
$d[\rho_{1}]$ and $d[\rho_{2}]$ which are defined in Ref. [14]. We exchange
the Dirac–distribution by two Fourier transformations as in Refs. [5, 10].
Then, Eq. (4.56) becomes
$\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$
$\displaystyle(-\imath)^{k_{2}N}2^{2k(k-\tilde{\gamma})}\int\limits_{\mathfrak{C}_{k_{2}N}}\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\mathcal{F}\Phi_{0}(\sigma)\times$
(4.58) $\displaystyle\times$ $\displaystyle\exp\left[\imath{\rm
Str\,}B\left(U^{\dagger}\sigma U-x^{-}-J\right)\right]d[\sigma]d[\zeta]d[z]$
where the Fourier transform of $\Phi_{0}$ is
$\mathcal{F}\Phi_{0}(\sigma)=\int\limits_{\Sigma_{\psi}^{0}(\beta,k)}\Phi_{0}(\rho)\exp\left(-\imath{\rm
Str\,}\rho\sigma\right)d[\rho]\ .$ (4.59)
We write the supertrace in the exponent in Eq. (4.58) as a sum over
expectation values
${\rm Str\,}B\left(U^{\dagger}\sigma
U-x^{-}-J\right)=\frac{1}{\tilde{\gamma}}\sum\limits_{j=1}^{N}\tr\Psi_{j}^{\dagger}\left(U^{\dagger}\sigma
U-x^{-}-J\right)\Psi_{j}$ (4.60)
with respect to the real, complex or quaternionic supervectors
$\Psi_{j}^{\dagger}=\left\\{\begin{array}[]{ll}\left\\{z_{jn},z_{jn}^{*},\zeta_{jn},\zeta^{*}_{jn}\right\\}_{1\leq
n\leq k}&,\ \beta=1\\\ \left\\{z_{jn},0,\zeta_{jn},0\right\\}_{1\leq n\leq
k}&,\ \beta=2\\\ \left\\{\left[\begin{array}[]{c}z_{jn}\\\
z_{j+N,n}\end{array}\right],\left[\begin{array}[]{c}-z_{j+N,n}^{*}\\\
z_{jn}^{*}\end{array}\right],\left[\begin{array}[]{c}\zeta_{jn}\\\
\zeta_{j+N,n}\end{array}\right],\left[\begin{array}[]{c}-\zeta_{j+N,n}^{*}\\\
\zeta_{jn}^{*}\end{array}\right]\right\\}_{1\leq n\leq k}&,\
\beta=4\end{array}\right.$ (4.61)
The integration over one of these supervectors yields
$\int\limits_{\mathfrak{C}_{k_{2}}}{\rm
exp}\left[\frac{\imath}{\tilde{\gamma}}\tr\Psi_{j}^{\dagger}\left(U^{\dagger}\sigma
U-x^{-}-J\right)\Psi_{j}\right]d[\Psi_{j}]=\imath^{k_{2}}{\rm
Sdet\,}^{-1/\gamma_{1}}\mathfrak{p}\left(\sigma-x^{-}-J\right)\ .$ (4.62)
$\mathfrak{p}$ projects onto the non-zero matrix blocks of
$\Sigma_{-\psi}(\beta,k)$ which are only $(k+k)\times(k+k)$–supermatrices for
$\beta=2$. $\mathfrak{p}$ is the identity for $\beta\in\\{1,4\\}$. Eq.
(4.62) holds because $U$ commutes with $x^{-}+J$. Then, Eq. (4.58) reads
$Z_{k}(x^{-}+J)=2^{2k(k-\tilde{\gamma})}\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\mathcal{F}\Phi_{0}(\sigma){\rm
Sdet\,}^{-N/\gamma_{1}}\mathfrak{p}\left(\sigma-x^{-}-J\right)d[\sigma]\ .$
(4.63)
Indeed, this result coincides with Ref. [5] for $\beta=2$, where the Fourier
transform $\mathcal{F}\Phi_{0}(\sigma)$ was denoted by $Q(\sigma)$. For Gaussian
ensembles with arbitrary $\beta$, Eq. (4.63) reduces to expressions as in
Refs. [3] and [2]. The integral is well defined because $\varepsilon$ is
greater than zero and the body of the eigenvalues of the Boson–Boson block is
real. The representation (4.63) of the generating function can also be
viewed as defining a random matrix ensemble in superspace.
Eq. (4.63) is one reason why we refer to this integral transformation from the
space of ordinary matrices to the space of supermatrices as the generalized
Hubbard–Stratonovich transformation. If the probability density $P$ is
Gaussian, then we can choose $\Phi_{0}$ to be Gaussian as well. In that case the
transformation above reduces to the ordinary Hubbard–Stratonovich
transformation, and Eq. (4.63) becomes the well-known result.
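As a schematic illustration (all normalizations suppressed and not taken from the text): for a Gaussian probability density one may take a Gaussian supersymmetric extension, whose Fourier transform is again Gaussian up to constants and a rescaling of the width, so that Eq. (4.63) assumes the familiar sigma–model form
$P(H)\propto\exp\left(-\frac{1}{2}\tr H^{2}\right)\ \Rightarrow\ \Phi_{0}(\sigma)\propto\exp\left(-\frac{1}{2}{\rm Str\,}\sigma^{2}\right),\qquad Z_{k}(x^{-}+J)\propto\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\exp\left(-\frac{1}{2}{\rm Str\,}\sigma^{2}\right){\rm Sdet\,}^{-N/\gamma_{1}}\mathfrak{p}\left(\sigma-x^{-}-J\right)d[\sigma]\ .$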
## 5 The supersymmetric Ingham–Siegel integral
We perform a Fourier transformation in superspace for the convolution integral
(4.63) and find
$Z_{k}(x^{-}+J)=2^{2k(k-\tilde{\gamma})}\int\limits_{\Sigma_{\psi}^{0}(\beta,k)}\Phi_{0}(\rho)I_{k}^{(\beta,N)}(\rho)\exp\left[-\imath{\rm
Str\,}\rho\left(x^{-}+J\right)\right]d[\rho]\ .$ (5.1)
Here, we have to calculate the supersymmetric Ingham–Siegel integral
$I_{k}^{(\beta,N)}(\rho)=\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\exp\left(-\imath{\rm
Str\,}\rho\sigma^{+}\right){\rm
Sdet\,}^{-N/\gamma_{1}}\mathfrak{p}\sigma^{+}d[\sigma]$ (5.2)
with $\sigma^{+}=\sigma+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{4k}$.
Ingham [24] and Siegel [25] independently calculated a version of (5.2) for
ordinary real symmetric matrices. The case of hermitian matrices was discussed
in Ref. [26]. Since we were unable to find the ordinary Ingham–Siegel integral
for the quaternionic case in the literature, we give the result here. It is related to
Selberg's integral [27]. Let $R\in{\rm Herm\,}(\beta,m)$, $\varepsilon>0$ and
$n\geq m-1+2/\beta$ a real number; then we have
$\displaystyle\int\limits_{{\rm Herm\,}(\beta,m)}\exp\left(-\imath\tr
RS^{+}\right){\det}^{-n/\gamma_{1}}S^{+}d[S]=\imath^{-\beta
mn/2}G_{n-m,m}^{(\beta)}\displaystyle{\det}^{\lambda}R\ \Theta(R)$ (5.3)
where $S^{+}=S+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}m}$, the exponent is
$\lambda=\frac{n-m}{\gamma_{1}}-\frac{\gamma_{1}-\gamma_{2}}{2}$ (5.4)
and the constant is
$G_{n-m,m}^{(\beta)}=\left(\frac{\gamma_{2}}{\pi}\right)^{\beta
m(n-m+1)/2-m}\prod\limits_{j=n-m+1}^{n}\frac{2\pi^{\beta
j/2}}{\Gamma\left(\beta j/2\right)}\ .$ (5.5)
$\Gamma(.)$ is the Euler gamma–function and $\Theta(.)$ is the Heaviside
function for matrices which is defined as
$\Theta(R)=\left\\{\begin{array}[]{ll}1&,\ R{\rm\ is\ positive\ definite}\\\
0&,\ {\rm else}\end{array}\right.\ .$ (5.6)
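The simplest instance of Eq. (5.3) can be checked numerically: for $\beta=2$ and $m=1$ (reading $\gamma_{1}=\gamma_{2}=1$, an assumption about the conventions for these constants), a residue calculation of the left hand side gives $\imath^{-n}\,2\pi R^{n-1}e^{-\varepsilon R}\Theta(R)/(n-1)!$, in agreement with Eqs. (5.4) and (5.5). A minimal Python sketch of this spot check:

```python
import numpy as np
from math import factorial

# Spot check of the ordinary Ingham-Siegel integral (5.3) for beta = 2, m = 1:
#   int_R exp(-i R S) (S + i eps)^(-n) dS = i^(-n) * 2*pi/(n-1)! * R^(n-1) * exp(-eps*R) * Theta(R),
# where the factor exp(-eps*R) disappears in the limit eps -> 0.
n, eps = 3, 0.2
S = np.linspace(-200.0, 200.0, 2_000_001)   # truncated integration range, tail ~ |S|^(-n)

for R in (-1.0, 0.7, 2.3):
    lhs = np.trapz(np.exp(-1j * R * S) / (S + 1j * eps) ** n, S)
    theta = 1.0 if R > 0 else 0.0
    rhs = 1j ** (-n) * 2 * np.pi / factorial(n - 1) * R ** (n - 1) * np.exp(-eps * R) * theta
    print(f"R = {R:+.1f}:  lhs = {np.round(lhs, 4)}   rhs = {np.round(rhs, 4)}")
```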
The ordinary Ingham–Siegel integral was recently used in the context of
supersymmetry by Fyodorov [26]. The integral was extended to the superspace
$\Sigma_{\pi/2}^{0}(2,k)$ in Ref. [5]. In this article, we need a
generalization to all $\Sigma_{-\psi}^{0}(\beta,k)$, in particular
$\beta=1,4$.
The integral (5.2) is invariant under the action of ${\rm
U\,}^{(\beta)}(k_{1}/k_{2})$. Thus, it is convenient to consider
$I(r,\varepsilon)$, where $r={\rm
diag\,}(r_{11},\ldots,r_{\tilde{k}1},r_{12},\ldots,r_{\tilde{k}2})$ is the
diagonal matrix of eigenvalues of $\rho$ and contains nilpotent terms. The
authors of Ref. [10] claimed in their proof of Theorem 1 in Chapter 6 that the
diagonalization at this point of the calculation yields Efetov–Wegner terms.
These terms do not appear in the $\rho_{2}$ integration because we do not
change the integration variables, i.e. the integration measure $d[\rho]$
remains the same. For the unitary case, see Ref. [5]. We consider the
eigenvalues of $\rho$ as functions of the Cartesian variables. We may
certainly differentiate a function with respect to the eigenvalues if we keep
track of how these differential operators are defined in the Cartesian
representation.
As worked out in C.1, the supersymmetric Ingham–Siegel integral (5.2) reads
$I_{k}^{(\beta,N)}(\rho)=\displaystyle
C{\det}^{\kappa}r_{1}\Theta(r_{1}){\det}^{k}r_{2}\exp\left(-e^{\imath\psi}\varepsilon\tr
r_{2}\right)\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath
e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N}\frac{\delta(r_{2})}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}\
.$ (5.7)
The constant is
$C=\displaystyle\left(-\frac{e^{-\imath\psi}}{\gamma_{1}}\right)^{k_{2}N}\left(-\frac{\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{G_{Nk_{1}}^{(\beta)}}{g_{k_{2}}^{(4/\beta)}}$
(5.8)
with
$\displaystyle
g_{k_{2}}^{(4/\beta)}=\frac{1}{k_{2}!}\prod\limits_{j=1}^{k_{2}}\frac{\pi^{2(j-1)/\beta}\Gamma\left(2/\beta\right)}{\Gamma\left(2j/\beta\right)}\
.$ (5.9)
while the exponent is given by
$\kappa=\frac{N}{\gamma_{1}}+\frac{\gamma_{2}-\gamma_{1}}{2}$ (5.10)
and the differential operator
$D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath
e^{\imath\psi}\gamma_{1}\varepsilon\right)=\frac{1}{\Delta_{k_{2}}(r_{2})}\det\left[r_{a2}^{N-b}\left(\frac{\partial}{\partial
r_{a2}}+(k_{2}-b)\frac{2}{\beta}\frac{1}{r_{a2}}-e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]_{1\leq
a,b\leq k_{2}}$ (5.11)
is the analogue of the Sekiguchi differential operator [28]. We derive it in B.
The complexity of $D_{k_{2}r_{2}}^{(4/\beta)}(\imath
e^{\imath\psi}\varepsilon)$ makes Eq. (5.7) cumbersome; a better
representation is desirable. To simplify Eq. (5.7), we need the following
statement which is shown in C.2.
###### Statement 5.1
We consider two functions $F,f:{\rm
Herm\,}(4/\beta,k_{2})\rightarrow\mathbb{C}$ invariant under the action of
${\rm U\,}^{(4/\beta)}(k_{2})$ and Schwartz–functions of the matrix
eigenvalues. Let $F$ and $f$ have the relation
$F(\rho_{2})=f(\rho_{2})\det\rho_{2}^{N/\gamma_{1}-k}{\rm\ \ for\ all\
}\rho_{2}\in{\rm Herm\,}(4/\beta,k_{2})\ .$ (5.12)
Then, we have
$\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{{\rm
Herm\,}(4/\beta,k_{2})}F(r_{2}){\det}^{k}r_{2}|\Delta_{k_{2}}(r_{2})|^{4/\beta}\exp\left(\imath\tr
r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]d[r_{2}]=$
$\displaystyle=w_{1}f(0)=\int\limits_{\mathbb{R}^{k_{2}}}F(r_{2})|\Delta_{k_{2}}(r_{2})|^{4/\beta}\left[\frac{w_{2}\exp\left(\varepsilon
e^{\imath\psi}\tr
r_{2}\right)}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}\prod\limits_{j=1}^{k_{2}}\left(\frac{\partial}{\partial
r_{j2}}\right)^{N-k_{1}}\delta(r_{j2})\right]d[r_{2}]$ (5.13)
where the constants are
$\displaystyle
w_{1}=\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{\left(\imath^{N}e^{-\imath\psi
N}\right)^{k_{2}}}{g_{k_{2}}^{(4/\beta)}}\prod_{b=1}^{k_{2}}\prod\limits_{a=1}^{N}\left(\frac{a}{\gamma_{1}}+\frac{b-1}{\gamma_{2}}\right)$
(5.14) $\displaystyle
w_{2}=\frac{(-1)^{k_{1}k_{2}}}{g_{k_{2}}^{(4/\beta)}}\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\left[\frac{(-\imath)^{N}e^{-\imath\psi
N}}{\left(N-k_{1}\right)!\gamma_{1}^{N}}\right]^{k_{2}}\prod_{j=0}^{k_{2}-1}\frac{\Gamma\left(N+1+2j/\beta\right)}{\Gamma\left(1+2j/\beta\right)}\
.$ (5.15)
This statement yields for the supersymmetric Ingham–Siegel integral
$I_{k}^{(\beta,N)}(\rho)=\displaystyle
W\Theta(r_{1})\frac{{\det}^{\kappa}r_{1}}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}\prod\limits_{j=1}^{k_{2}}\left(\frac{\partial}{\partial
r_{j2}}\right)^{N-k_{1}}\delta(r_{j2})$ (5.16)
where the constant reads
$\displaystyle W$ $\displaystyle=$
$\displaystyle\left(\frac{\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\left[\frac{\left(-e^{-\imath\psi}\right)^{N}}{\left(N-k_{1}\right)!\gamma_{1}^{N}}\right]^{k_{2}}\times$
(5.17) $\displaystyle\times$
$\displaystyle\frac{G_{Nk_{1}}^{(\beta)}}{g_{k_{2}}^{(4/\beta)}}\prod_{j=0}^{k_{2}-1}\frac{\Gamma\left(N+1+2j/\beta\right)}{\Gamma\left(1+2j/\beta\right)}\
.$
We further simplify this formula for $\beta=1$ and $\beta=2$. The power of
the Vandermonde–determinant, $\Delta_{k_{2}}^{4/\beta}(r_{2})$, is a polynomial
of degree $k_{2}\times 2(k_{2}-1)/\beta$. If we absorb these terms in Eq.
(5.16) into partial derivatives with respect to the eigenvalues, each single
eigenvalue derivative must carry the power $2(k_{2}-1)/\beta$; for details see C.2.
This power is a half-integer for $\beta=4$. Moreover, $\Delta_{k_{2}}(r_{2})$ has
no symmetric term in which all eigenvalues carry the same power. Therefore, we
cannot simplify the quaternionic case in the same manner.
We use the identities
$\displaystyle\prod\limits_{j=1}^{n}\frac{\partial^{n-1}}{\partial
x_{j}^{n-1}}\Delta_{n}^{2}(x)$ $\displaystyle=$
$\displaystyle(-1)^{n(n-1)/2}n!\left[(n-1)!\right]^{n}\ ,$ (5.18)
$\displaystyle\prod\limits_{j=1}^{n}\frac{\partial^{2(n-1)}}{\partial
x_{j}^{2(n-1)}}\Delta_{n}^{4}(x)$ $\displaystyle=$ $\displaystyle
n!\left[(2n-2)!\right]^{n}\prod\limits_{j=0}^{n-1}(2j+1)$ (5.19)
and find
$\displaystyle I_{k}^{(1,N)}(\rho)$ $\displaystyle=$ $\displaystyle
2^{-k(k-2)}\left[\frac{2\pi e^{-\imath\psi N}}{(N-2)!}\right]^{k}\times$
(5.20) $\displaystyle\times$ $\displaystyle\Theta(r_{1})\det
r_{1}^{(N-1)/2}\prod\limits_{j=1}^{k}\left(-\frac{\partial}{\partial
r_{j2}}\right)^{N-2}\delta(r_{j2})$
and
$\displaystyle I_{k}^{(2,N)}(\rho)$ $\displaystyle=$
$\displaystyle(-1)^{k(k+1)/2}2^{-k(k-1)}\left[\frac{2\pi e^{-\imath\psi
N}}{(N-1)!}\right]^{k}\times$ (5.21) $\displaystyle\times$
$\displaystyle\Theta(r_{1})\det
r_{1}^{N}\prod\limits_{j=1}^{k}\left(-\frac{\partial}{\partial
r_{j2}}\right)^{N-1}\delta(r_{j2})\ .$
For $\beta=4$, we summarize the constants and have
$\displaystyle I_{k}^{(4,N)}(\rho)$ $\displaystyle=$ $\displaystyle
2^{-k(k-2)}\left[\frac{2\pi e^{-\imath\psi N}}{(N-k)!}\right]^{2k}\times$
(5.22) $\displaystyle\times$ $\displaystyle\Theta(r_{1})\det
r_{1}^{N+1/2}\frac{4^{k}k!}{\pi^{k}|\Delta_{2k}(r_{2})|}\prod\limits_{j=1}^{2k}\left(-\frac{\partial}{\partial
r_{j2}}\right)^{N-k}\delta(r_{j2})\ .$
These distributions hold for superfunctions whose Fermion–Fermion block
dependence is as in Eq. (5.12). Eqs. (5.20) and (5.21) can be extended to
distributions on arbitrary Schwartz–functions, which is not the case for Eq.
(5.22). The constants in Eqs. (5.20) and (5.21) must then be the same because
they do not depend on the test–function.
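The identities (5.18) and (5.19) are purely algebraic and can be verified symbolically for small $n$; the following sympy sketch is only a spot check and not part of the derivation.

```python
import sympy as sp
from math import factorial, prod

def vandermonde(xs):
    # Delta_n(x) = prod_{a<b} (x_a - x_b)
    expr = sp.Integer(1)
    for a in range(len(xs)):
        for b in range(a + 1, len(xs)):
            expr *= xs[a] - xs[b]
    return expr

def derivative_constant(n, power, order):
    # prod_j d^order/dx_j^order applied to Delta_n(x)^power; the result is a constant
    xs = sp.symbols(f"x1:{n + 1}")
    expr = vandermonde(xs) ** power
    for x in xs:
        expr = sp.diff(expr, x, order)
    return sp.simplify(expr)

for n in (2, 3):
    rhs_518 = (-1) ** (n * (n - 1) // 2) * factorial(n) * factorial(n - 1) ** n
    rhs_519 = factorial(n) * factorial(2 * n - 2) ** n * prod(2 * j + 1 for j in range(n))
    print(n, derivative_constant(n, 2, n - 1) == rhs_518,
             derivative_constant(n, 4, 2 * (n - 1)) == rhs_519)   # expected: True True
```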
###### Statement 5.2
Equations (5.20) and (5.21) are true for rotation invariant superfunctions
$\Phi_{0}$ which are Schwartz–functions in the Fermion–Fermion block entries
along the Wick–rotated real axis.
We derive this statement in C.3.
Indeed, Eq. (5.21) is the same as the formula for the supersymmetric
Ingham–Siegel integral for $\beta=2$ in Ref. [5]. When comparing both results, the
different definitions of the measures have to be taken into account. We also
see the similarity to the superbosonization formula [9, 8, 12, 11, 20, 10] for
$\beta\in\\{1,2\\}$. One can replace the partial derivatives in Eqs. (5.20) and
(5.21) by contour integrals if the characteristic function $\Phi_{0}$ is
analytic. However, for $\beta=4$ more effort is needed. For our purposes, Eqs.
(5.7) and (5.22) are sufficient for the quaternionic case. In the unitary
case, the equivalence of Eq. (5.21) with the superbosonization formula was
confirmed with the help of Cauchy integrals by Basile and Akemann [10].
## 6 Final representation of the generating function and its independence of
the choice for $\Phi_{0}$
In Sec. 6.1, we present the generating function as a supersymmetric integral
over eigenvalues and introduce the supersymmetric Bessel–functions. In Sec.
6.2, we revisit the unitary case and point out certain properties of the
generating function. Some of these properties, namely the independence of the
Wick–rotation and of the choice of $\Phi_{0}$, are also proven for the orthogonal
and unitary–symplectic cases in Sec. 6.3.
### 6.1 Eigenvalue integral representation
The next step of the calculation of the generating function $Z_{k}(x^{-}+J)$
is the integration over the supergroup. The function
$\Phi_{0}(\rho)I_{k}^{(\beta,N)}(\rho)$ is invariant under the action of ${\rm
U\,}^{(\beta)}(k_{1}/k_{2})$.
We define the supermatrix Bessel–function
$\varphi_{k_{1}k_{2}}^{(\beta)}(s,r)=\int\limits_{{\rm
U\,}^{(\beta)}(k_{1}/k_{2})}\exp\left({\rm Str\,}sUrU^{\dagger}\right)d\mu(U)$
(6.1)
as in Refs. [29, 14]. We choose the normalization
$\displaystyle\int\limits_{\Sigma^{0}_{\psi}(\beta,k)}f(\sigma)\exp\left({\rm
Str\,}\sigma
x\right)d[e^{-\imath\psi/2}\eta]d[e^{\imath\psi}\sigma_{2}]d[\sigma_{1}]=$
(6.2) $\displaystyle=$
$\displaystyle\int\limits_{\mathbb{R}^{k_{1}}}\int\limits_{\mathbb{R}^{k_{2}}}f(s)\varphi_{k_{1}k_{2}}^{(\beta)}(s,x)\left|B_{k}^{(\beta)}(s_{1},e^{\imath\psi}s_{2})\right|d[e^{\imath\psi}s_{2}]d[s_{1}]+{\rm
b.t.}$
which holds for every rotation invariant function $f$. This normalization
agrees with Refs. [30, 31, 29, 5, 14]. The boundary terms (${\rm b.t.}$),
referred to as Efetov–Wegner terms [32, 33, 10], appear upon changing the
integration variables [34] or, equivalently, upon partial integration [14].
The Berezinian is
$B_{k}^{(\beta)}(s_{1},e^{\imath\psi}s_{2})=\displaystyle\frac{\Delta_{k_{1}}^{\beta}(s_{1})\Delta_{k_{2}}^{4/\beta}(e^{\imath\psi}s_{2})}{V_{k}^{2}(s_{1},e^{\imath\psi}s_{2})}$
(6.3)
where
$V_{k}(s_{1},e^{\imath\psi}s_{2})=\prod\limits_{n=1}^{k_{1}}\prod\limits_{m=1}^{k_{2}}\left(s_{n1}-e^{\imath\psi}s_{m2}\right)$
mixes bosonic and fermionic eigenvalues. These Berezinians have a
determinantal structure
$B_{k}^{(\beta)}(s_{1},e^{\imath\psi}s_{2})=\left\\{\begin{array}[]{ll}\displaystyle\det\left[\frac{1}{s_{a1}-e^{\imath\psi}s_{b2}}\
,\ \frac{1}{(s_{a1}-e^{\imath\psi}s_{b2})^{2}}\right]\underset{1\leq b\leq
k}{\underset{1\leq a\leq 2k}{}}&,\ \beta=1\\\
\displaystyle{\det}^{2}\left[\frac{1}{s_{a1}-e^{\imath\psi}s_{b2}}\right]_{1\leq
a,b\leq k}&,\ \beta=2\\\ \displaystyle
B_{k}^{(1)}(e^{\imath\psi}s_{2},s_{1})&,\ \beta=4\end{array}\right.\ .$ (6.4)
For $\beta=2$ this formula was derived in Ref. [32]. The other cases are
derived in D. We notice that this determinantal structure is similar to that
of the ordinary Vandermonde–determinant raised to the
powers $2$ and $4$. This structure was explicitly used [15] to calculate the
$k$–point correlation function of the GUE and the GSE.
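For $\beta=2$ the determinantal structure in Eq. (6.4) is just a Cauchy determinant, cf. Eq. (6.8) below. A quick numerical spot check with random eigenvalues (setting $\psi=0$ for simplicity and assuming the convention $\Delta_{k}(s)=\prod_{a<b}(s_{a}-s_{b})$):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
s1 = rng.normal(size=k)            # bosonic eigenvalues
s2 = rng.normal(size=k) + 3.0      # fermionic eigenvalues, kept away from s1

def vandermonde(v):
    # Delta_k(v) = prod_{a<b} (v_a - v_b)
    return np.prod([v[a] - v[b] for a in range(len(v)) for b in range(a + 1, len(v))])

diff = s1[:, None] - s2[None, :]               # matrix of differences s_{a1} - s_{b2}
lhs = np.linalg.det(1.0 / diff)                # det[ 1/(s_{a1} - s_{b2}) ]
V = np.prod(diff)                              # V_k(s1, s2)
rhs = (-1) ** (k * (k - 1) // 2) * vandermonde(s1) * vandermonde(s2) / V
print(np.isclose(lhs, rhs))                    # Cauchy determinant, cf. Eq. (6.8): True
print(np.isclose(lhs**2, (vandermonde(s1) * vandermonde(s2) / V) ** 2))  # beta = 2 Berezinian (6.4)
```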
We find for the generating function
$\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle
2^{2k(k-\tilde{\gamma})}e^{\imath\psi
k_{1}}\int\limits_{\mathbb{R}^{k_{1}}}\int\limits_{\mathbb{R}^{k_{2}}}\Phi_{0}(r)I_{k}^{(\beta,N)}(r)\times$
(6.5) $\displaystyle\times$
$\displaystyle\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath
r,x^{-}+J)\left|B_{k}^{(\beta)}(r_{1},e^{\imath\psi}r_{2})\right|d[r_{2}]d[r_{1}]+{\rm
b.t.}\ .$
The normalization of $Z_{k}$ is guaranteed by the Efetov–Wegner terms. When
the source variables $J_{l},\ldots,J_{k}$ with $l\leq k$ are set to zero, we
have
$\left.Z_{k}(x^{-}+J)\right|_{J_{l}=\ldots=J_{k}=0}=Z_{l-1}(\tilde{x}^{-}+\widetilde{J})\
,$ (6.6)
$\tilde{x}={\rm diag\,}(x_{1},\ldots,x_{l-1}),\ \widetilde{J}={\rm
diag\,}(J_{1},\ldots,J_{l-1})$, by the integration theorems in Ref. [1, 35,
36, 37, 3, 14]. This agrees with the definition (2.9).
### 6.2 The unitary case revisited
To make contact with the discussion in Ref. [5], we revisit the unitary case
using the insight developed here.
For the further calculation we need the explicit structure of the supersymmetric
matrix Bessel–functions. However, the knowledge of these functions is limited;
only for certain $\beta$ and $k$ do we know the exact structure. In particular,
for $\beta=2$ the supermatrix Bessel–function was first calculated in Refs.
[32, 30] with the help of the heat equation. Recently, this function was
rederived by integrating the Grassmann variables in Cartesian coordinates [14],
$\displaystyle\varphi_{kk}^{(2)}(-\imath
r,x^{-}+J)=\displaystyle\frac{\imath^{k}\exp\left(-\varepsilon{\rm
Str\,}r\right)}{2^{k^{2}}\pi^{k}}\times$
$\displaystyle\times\frac{\det\left[\exp\left(-\imath
r_{m1}(x_{n}-J_{n})\right)\right]_{1\leq m,n\leq k}\det\left[\exp\left(\imath
e^{\imath\psi}r_{m2}(x_{n}+J_{n})\right)\right]_{1\leq m,n\leq
k}}{\sqrt{B_{k}^{(2)}(r_{1},e^{\imath\psi}r_{2})B_{k}^{(2)}\left(x-J,x+J\right)}}$
(6.7)
with $x\pm J={\rm diag\,}(x_{1}\pm J_{1},\ldots,x_{k}\pm J_{k})$ and the
positive square root of the Berezinian
$\displaystyle\sqrt{B_{k}^{(2)}(r_{1},e^{\imath\psi}r_{2})}=\displaystyle\det\left[\frac{1}{r_{a1}-e^{\imath\psi}r_{b2}}\right]_{1\leq
a,b\leq
k}=(-1)^{k(k-1)/2}\frac{\Delta_{k}(r_{1})\Delta_{k}(e^{\imath\psi}r_{2})}{V_{k}(r_{1},e^{\imath\psi}r_{2})}\
.$ (6.8)
Due to the structure of $\varphi_{kk}^{(2)}$ and $B_{k}^{(2)}$, we write the
generating function for $\beta=2$ as an integral over $\Phi_{0}$ times a
determinant [5]
$\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$
$\displaystyle(-1)^{k(k+1)/2}\displaystyle{\det}^{-1}\left[\frac{1}{x_{a}-x_{b}-J_{a}-J_{b}}\right]_{1\leq
a,b\leq
k}\int\limits_{\mathbb{R}^{k}}\int\limits_{\mathbb{R}^{k}}\Phi_{0}(r)\times$
(6.9) $\displaystyle\times$
$\displaystyle\det\left[\mathfrak{F}_{N}(\tilde{r}_{mn},\tilde{x}_{mn})\Theta(r_{m1})\exp\left(-\varepsilon{\rm
Str\,}\tilde{r}_{mn}\right)\right]_{1\leq m,n\leq k}d[r_{2}]d[r_{1}]+{\rm
b.t.}$
where $\tilde{r}_{mn}={\rm diag\,}\left(r_{m1},e^{\imath\psi}r_{n2}\right)$,
$\tilde{x}_{mn}={\rm diag\,}\left(x_{m}-J_{m},x_{n}+J_{n}\right)$ and
$\mathfrak{F}_{N}(\tilde{r}_{mn},\tilde{x}_{mn})=\frac{\imath
r_{m1}^{N}\exp\left(-\imath{\rm
Str\,}\tilde{r}_{mn}\tilde{x}_{mn}\right)}{(N-1)!(r_{m1}-e^{\imath\psi}r_{n2})}\left(-e^{-\imath\psi}\frac{\partial}{\partial
r_{n2}}\right)^{N-1}\delta(r_{n2})\ .$ (6.10)
Then, the modified $k$–point correlation function is
$\displaystyle\qquad\quad\widehat{R}_{k}(x^{-})$ $\displaystyle=$
$\displaystyle\int\limits_{\mathbb{R}^{k}}\int\limits_{\mathbb{R}^{k}}\Phi_{0}(r)\times$
(6.11) $\displaystyle\times$
$\displaystyle\det\left[\mathfrak{F}_{N}(\tilde{r}_{mn},x_{mn})\Theta(r_{m1})\exp\left(-\varepsilon{\rm
Str\,}\tilde{r}_{mn}\right)\right]_{1\leq m,n\leq k}d[r_{2}]d[r_{1}]+{\rm
b.t.}$
and the $k$–point correlation function is
$R_{k}(x)=\int\limits_{\mathbb{R}^{k}}\int\limits_{\mathbb{R}^{k}}\Phi_{0}(r)\det\left[\frac{\mathfrak{F}_{N}(\tilde{r}_{mn},x_{mn})}{2\pi\imath}\right]_{1\leq
m,n\leq k}d[r_{2}]d[r_{1}]+{\rm b.t.}\ .$ (6.12)
We defined $x_{mn}={\rm diag\,}(x_{m},x_{n})$. The boundary terms comprise the
lower correlation functions. The $k$–point correlation function for $\beta=2$
is a determinant of the fundamental function
$R^{({\rm
fund})}(x_{m},x_{n})=\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\frac{\mathfrak{F}_{N}(r,x_{mn})}{2\pi\imath}dr_{2}dr_{1}$
(6.13)
if there is one characteristic function $\mathcal{F}P_{0}$ with a
supersymmetric extension $\Phi_{0}$ factorizing for diagonal supermatrices,
$\Phi_{0}(r)={\rm Sdet\,}{\rm
diag\,}\left[\widehat{\Phi}_{0}(r_{11}),\ldots,\widehat{\Phi}_{0}(r_{k1}),\widehat{\Phi}_{0}\left(e^{\imath\psi}r_{12}\right),\ldots,\widehat{\Phi}_{0}\left(e^{\imath\psi}r_{k2}\right)\right]\
,$ (6.14)
with $\widehat{\Phi}_{0}:\mathbb{C}\rightarrow\mathbb{C}$. For example, the
shifted Gaussian ensemble in App. F of Ref. [5] is of such a type.
In Eq. (6.13) we notice that this expression is independent of the generalized
Wick–rotation. Every derivative with respect to the fermionic eigenvalue $r_{2}$ contains
the inverse Wick–rotation as a prefactor. Moreover, the Wick–rotation enters the
functions only as a prefactor of $r_{2}$. Thus, an integration over the
fermionic eigenvalues $r_{2}$ in Eq. (6.11) cancels the Wick–rotation by
using the Dirac–distribution. Also, this integration shows that every
representation of the characteristic function gives the same result, see
Statement 6.1 in the next subsection. However, the determinantal structure with
the fundamental function in Eq. (6.13) depends on the special choice (6.14) of
$\Phi_{0}$.
### 6.3 Independence statement
For $\beta=1$ and $\beta=4$ we do not know the ordinary matrix Bessel–function
explicitly. Hence, we cannot give such a compact expression as in the case
$\beta=2$. On the other hand, we can show that the generating function is
independent of the Wick–rotation and of the choice of $\Phi_{0}$.
###### Statement 6.1
The generating function $Z_{k}$ is independent of the Wick–rotation and of the
choice of the supersymmetric extension $\Phi_{0}$ of the characteristic function
corresponding to a certain matrix ensemble $(P,{\rm Herm\,}(\beta,N))$.
Derivation:
We split the derivation into two parts. The first part concerns the Wick–rotation
and the second part yields the independence of the choice of $\Phi_{0}$.
Due to the normalization of the supermatrix Bessel–function (6.2),
$\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath r,x^{-}+J)$ only depends on
$e^{\imath\psi}r_{2}$. The same is true for $\Phi_{0}$. Due to the property
$D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath
e^{\imath\psi}\gamma_{1}\varepsilon\right)=e^{\imath
k_{2}\psi}D_{k_{2},e^{\imath\psi}r_{2}}^{(4/\beta)}\left(\imath\gamma_{1}\varepsilon\right)\
,$ (6.15)
the Ingham–Siegel integral in the form (5.7) times the phase
$e^{\imath(k_{1}-k_{2})\psi}$ only depends on $e^{\imath\psi}r_{2}$ and
$e^{-\imath\psi}\partial/\partial r_{2}$. The additional phase comes from the
$\rho$–integration. Thus, we see the independence of the Wick–rotation for
the same reason as in the $\beta=2$ case.
Let $\Phi_{0}$ and $\Phi_{1}$ be two different supersymmetric extensions of
the characteristic function $\mathcal{F}P$. Then these two superfunctions only
depend on the invariants $\\{{\rm Str\,}\sigma^{m_{j}}\\}_{1\leq j\leq l_{0}}$
and $\\{{\rm Str\,}\sigma^{n_{j}}\\}_{1\leq j\leq l_{1}}$,
$m_{j},n_{j},l_{0},l_{1}\in\mathbb{N}$. We consider $\Phi_{0}$ and $\Phi_{1}$
as functions of $\mathbb{C}^{l_{0}}\rightarrow\mathbb{C}$ and
$\mathbb{C}^{l_{1}}\rightarrow\mathbb{C}$, respectively. Defining the function
$\Delta\Phi(x_{1},\ldots,x_{M})=\Phi_{0}(x_{m_{1}},\ldots,x_{m_{l_{0}}})-\Phi_{1}(x_{n_{1}},\ldots,x_{n_{l_{1}}}),$
(6.16)
where $M={\rm max}\\{m_{a},n_{b}\\}$, we notice with the discussion in Sec.
4.5 that
$\Delta\Phi(x_{1},\ldots,x_{M})|_{x_{j}=\tr H^{j}}=0$ (6.17)
for every hermitian matrix $H$. However, there could be a symmetric
supermatrix $\sigma$ with
$\Delta\Phi(x_{1},\ldots,x_{M})|_{x_{j}={\rm Str\,}\sigma^{j}}\neq 0.$ (6.18)
With the differential operator
$\mathfrak{D}_{r}=\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath
e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N-k_{1}}\frac{\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath
r,x^{-}+J)}{V_{k}(r_{1},e^{\imath\psi}r_{2})},$ (6.19)
we consider the difference of the generating functions
$\displaystyle\Delta Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle
Z_{k}(x^{-}+J)|_{\Phi_{0}}-Z_{k}(x^{-}+J)|_{\Phi_{1}}=$ (6.20)
$\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{k_{1}}}|\Delta_{k_{1}}(r_{1})|^{\beta}{\det}^{\kappa}r_{1}\Theta(r_{1})\left.\mathfrak{D}_{r}\Delta\Phi(x)|_{x_{j}={\rm
Str\,}r^{j}}\right|_{r_{2}=0}d[r_{1}]$
Here, we omit the Efetov–Wegner terms. The differential operator is invariant
under the action of the permutation group $S(k_{2})$ on the fermionic block
${\rm Herm\,}(4/\beta,k_{2})$. Hence, we find
$\displaystyle\left.\mathfrak{D}_{r}\Delta\Phi(x)|_{x_{j}={\rm
Str\,}r^{j}}\right|_{r_{2}=0}$ $\displaystyle=$
$\displaystyle\left.\underset{|a|\leq
k_{2}(N-k_{1})}{\sum\limits_{a\in\\{0,\ldots,N-k_{1}\\}^{M}}}d_{a}(r)\prod\limits_{j=1}^{M}\frac{\partial^{a_{j}}}{\partial
x_{j}^{a_{j}}}\Delta\Phi(x)|_{x_{j}={\rm Str\,}r^{j}}\right|_{r_{2}=0}=$
(6.21) $\displaystyle=$ $\displaystyle\underset{|a|\leq
k_{2}(N-k_{1})}{\sum\limits_{a\in\\{0,\ldots,N-k_{1}\\}^{M}}}d_{a}(r_{1})\prod\limits_{j=1}^{M}\frac{\partial^{a_{j}}}{\partial
x_{j}^{a_{j}}}\Delta\Phi(x)|_{x_{j}=\tr r^{j}}=$ $\displaystyle=$
$\displaystyle 0,$
where $d_{a}$ are certain symmetric functions depending on the eigenvalues
$r$. At $r_{2}=0$ these functions are well-defined since the supermatrix
Bessel–functions and the term $V_{k}^{-1}(r_{1},e^{\imath\psi}r_{2})$ are
$C^{\infty}$ at this point. Thus, we find that
$\Delta Z_{k}(x^{-}+J)=0.$ (6.22)
This means that the generating function is independent of the supersymmetric
extension of the characteristic function. $\square$
## 7 One–point and higher order correlation functions
We need an explicit expression or some properties of the supermatrix
Bessel–function to simplify the integral for the generating function. For
$k=1$ we know the supermatrix Bessel–functions for all $\beta$. The simplest
case is $\beta=2$ where we take the formula (6.12) with $k=1$ and obtain
$R_{1}(x)=R^{({\rm
fund})}(x,x)=\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\frac{\mathfrak{F}_{N}\left(r,x\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{2}\right)}{2\pi\imath}dr_{2}dr_{1}\ .$ (7.1)
Since the Efetov–Wegner term in the generating function is just unity, there
are no boundary terms in the level density. For $\beta\in\\{1,4\\}$ we use the
supermatrix Bessel–function [29, 38, 14]
$\displaystyle\varphi_{21}^{(1)}(-\imath r,x^{-}+J)$ $\displaystyle=$
$\displaystyle\frac{-2J}{\pi}\exp\left[-\imath{\rm
Str\,}r(x^{-}+J)\right]\times$ (7.2) $\displaystyle\times$
$\displaystyle\left[\imath{\rm
Str\,}r+J\left(r_{11}-e^{\imath\psi}r_{2}\right)\left(r_{21}-e^{\imath\psi}r_{2}\right)\right]\
.$
We find
$\displaystyle\widehat{R}_{1}(x^{-})=\displaystyle-\imath\int\limits_{\mathbb{R}^{2}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\det
r_{1}^{(N-1)/2}{\rm
Str\,}r\frac{|r_{11}-r_{21}|}{(r_{11}-e^{\imath\psi}r_{2})^{2}(r_{21}-e^{\imath\psi}r_{2})^{2}}\times$
$\displaystyle\times\displaystyle\exp\left(-\imath x^{-}{\rm
Str\,}r\right)\Theta(r_{1})\frac{1}{(N-2)!}\left(-e^{-\imath\psi}\frac{\partial}{\partial
r_{2}}\right)^{N-2}\delta(r_{2})d[r_{1}]dr_{2}$ (7.3)
for $\beta=1$ and
$\displaystyle\widehat{R}_{1}(x^{-})=\displaystyle-4\imath\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{2}}\Phi_{0}(r)r_{1}^{2N+1}{\rm
Str\,}r\frac{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}{(r_{1}-e^{\imath\psi}r_{12})^{2}(r_{1}-e^{\imath\psi}r_{22})^{2}}\times$
$\displaystyle\times\exp\left(-\imath x^{-}{\rm
Str\,}r\right)\Theta(r_{1})\frac{\det
e^{\imath\psi}r_{2}}{(2N+1)!}\left(4e^{-2\imath\psi}D_{2,r_{2}}^{(1)}\right)^{N}\frac{\delta(r_{12})\delta(r_{22})}{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}d[r_{2}]dr_{1}$
(7.4)
for $\beta=4$. The differential operator has the explicit form
$D_{2,r_{2}}^{(1)}=\frac{\partial^{2}}{\partial r_{12}\partial
r_{22}}-\frac{1}{2}\frac{1}{r_{12}-r_{22}}\left(\frac{\partial}{\partial
r_{12}}-\frac{\partial}{\partial r_{22}}\right)\ .$ (7.5)
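The explicit form (7.5) agrees with the general operator (2.4) of B evaluated for $N=2$, $\beta=1$ and vanishing boost. A minimal sympy spot check on an exponential test function (only an illustration of this consistency):

```python
import sympy as sp

r1, r2, a, b = sp.symbols("r1 r2 a b")
f = sp.exp(a * r1 + b * r2)        # exponential test function

def D_75(g):
    # explicit operator of Eq. (7.5)
    return sp.diff(g, r1, r2) - sp.Rational(1, 2) / (r1 - r2) * (sp.diff(g, r1) - sp.diff(g, r2))

def D_24(g):
    # Eq. (2.4) of B for N = 2, beta = 1, boost B = 0:
    # Delta^{-1} [ M_{11} M_{22} - M_{12} M_{21} ] with
    # M_{a1} = r_a (d/dr_a + 1/(2 r_a)),  M_{a2} = d/dr_a
    t1 = r1 * (sp.diff(sp.diff(g, r2), r1) + sp.diff(g, r2) / (2 * r1))
    t2 = r2 * (sp.diff(sp.diff(g, r1), r2) + sp.diff(g, r1) / (2 * r2))
    return (t1 - t2) / (r1 - r2)

print(sp.simplify(D_75(f) - D_24(f)))   # expected: 0
```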
For the level density we have
$\displaystyle
R_{1}(x)=\displaystyle-\frac{1}{2\pi}\int\limits_{\mathbb{R}^{2}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\det
r_{1}^{(N-1)/2}\exp\left(-\imath x{\rm Str\,}r\right){\rm
Str\,}r\frac{|r_{11}-r_{21}|}{(r_{11}-e^{\imath\psi}r_{2})^{2}(r_{21}-e^{\imath\psi}r_{2})^{2}}\times$
$\displaystyle\times\displaystyle\left(\Theta(r_{1})+\Theta(-r_{1})\right)\frac{1}{(N-2)!}\left(-e^{-\imath\psi}\frac{\partial}{\partial
r_{2}}\right)^{N-2}\delta(r_{2})d[r_{1}]dr_{2}$ (7.6)
for $\beta=1$ and
$\displaystyle
R_{1}(x)=\displaystyle-\frac{2}{\pi}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{2}}\Phi_{0}(r)r_{1}^{2N+1}\exp\left(-\imath
x{\rm Str\,}r\right){\rm
Str\,}r\frac{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}{(r_{1}-e^{\imath\psi}r_{12})^{2}(r_{1}-e^{\imath\psi}r_{22})^{2}}\times$
$\displaystyle\times\frac{\det
e^{\imath\psi}r_{2}}{(2N+1)!}\left(4e^{-2\imath\psi}D_{2,r_{2}}^{(1)}\right)^{N}\frac{\delta(r_{12})\delta(r_{22})}{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}d[r_{2}]dr_{1}$
(7.7)
for $\beta=4$. The equations (7.4) to (7.7) comprise all level–densities for
arbitrary matrix ensembles invariant under orthogonal and unitary–symplectic
rotations. Since probability densities which do not factorize are included, these
results considerably extend those obtained with orthogonal polynomials.
For higher order correlation functions we use the definition (2.3) and the
definition of the matrix Green’s function. With the help of the quantities $L={\rm
diag\,}(L_{1},\ldots,L_{k})\in\\{\pm 1\\}^{k}$ and
$\widehat{L}=L\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{2\tilde{\gamma}}$, this yields
$\displaystyle R_{k}(x)=\displaystyle
2^{2k(k-\tilde{\gamma})}\int\limits_{\mathbb{R}^{k_{1}}}\int\limits_{\mathbb{R}^{k_{2}}}\Phi_{0}(r)\underset{\epsilon\searrow
0}{\lim}\sum\limits_{L\in\\{\pm 1\\}^{k}}\prod\limits_{j=1}^{k}L_{j}\
\frac{I_{k}^{(\beta,N)}\left(\widehat{L}r\right)\exp\left(-\varepsilon{\rm
Str\,}\widehat{L}r\right)}{\left(2\pi\imath
e^{-\imath\psi\gamma_{1}}\right)^{k}}\times$
$\displaystyle\times\displaystyle\left.\left(\prod\limits_{j=1}^{k}-\frac{1}{2}\frac{\partial}{\partial
J_{j}}\right)\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath
r,x^{(0)}+J)\right|_{J=0}\left|B_{k}^{(\beta)}(r_{1},e^{\imath\psi}r_{2})\right|d[r_{2}]d[r_{1}]+{\rm
b.t.}$ (7.8)
for analytic correlation functions. We extend this formula to all rotation
invariant ensembles by the universality of the integral kernel. First, every
Schwartz–function describing a matrix ensemble can be approximated by a
uniformly convergent series of Schwartz–functions which are analytic in the
real components of their entries. The Schwartz–functions are dense in a weak sense in the
sets of Lebesgue–integrable functions $L^{p}$ and of tempered distributions.
Thus, we integrate Eq. (7.8) against an arbitrary Schwartz–function on
$\mathbb{R}^{k}$ and take the limit of a series of Schwartz–functions
describing the ensembles towards a tempered distribution, which completes the
extension.
## 8 Remarks and conclusions
We extended the method of the generalized Hubbard–Stratonovich transformation
to arbitrary orthogonally and unitary–symplectically invariant random matrix
ensembles. Due to a duality between ordinary and supersymmetric matrix spaces,
the integral for the $k$–point correlation function is over a superspace. This
integral was reduced to an eigenvalue integral for all probability densities,
including those which do not factorize. The results are in terms of the
characteristic function. Thus, the characteristic function has to be
calculated for the ensemble in question. Since the matrix Bessel–functions of
the ordinary orthogonal and unitary–symplectic group [39, 29, 40] and, thus,
the supermatrix Bessel–functions of ${\rm UOSp\,}(2k/2k)$ are not known
explicitly beyond $k=1$, we cannot simplify our results further. However, we
found the previously unknown determinantal structure of the Berezinian of
${\rm UOSp\,}(2k/2k)$.
Up to the restriction $N\geq k_{1}$, formula (7.8) is exact for every $k$, $N$
and rotation invariant ensemble. Thus, it can serve as a starting point not only
for universality considerations [7] but also for other studies.
The expressions for the supersymmetric Ingham–Siegel integrals (5.20), (5.21)
and (5.22) confirm the equivalence of the superbosonization formula [20, 11,
12] with our derivation. A proof of this equivalence for all
$\beta$ is in progress. The comparison of the superbosonization formula [12,
11] with Eq. (5.1) shows that the crucial difference lies in the integration
domain. However, the Dirac–distribution and the partial derivatives in the
fermionic part imply a representation as a contour integral which is
equivalent to the compact space used in the superbosonization formula.
## Acknowledgements
We thank H. Kohler for clarifying remarks on the relation between the ordinary
matrix Bessel–functions and the Jack–polynomials as well as on the Sekiguchi
differential operators. We are also grateful to S. Mandt, H.-J. Sommers and
M.R. Zirnbauer for fruitful discussions. A big thank you goes to P. Heinzner
and E. Vishnyakova for helpful advice on the Paley–Wiener theorem. We thank
the referee for helpful remarks. We acknowledge financial support from the
Deutsche Forschungsgemeinschaft within Sonderforschungsbereich Transregio 12
“Symmetries and Universality in Mesoscopic Systems” (M.K. and T.G.) and from
Det Svenska Vetenskapsrådet (J.G.).
## Appendix A Circularity of the supertrace for rectangular supermatrices
The circularity for rectangular matrices of pure commuting entries or
anticommuting entries was derived by Berezin [18]. Since we have not found the
general theorem for arbitrary rectangular supermatrices in the literature, we give
the elementary statement here.
###### Statement A.1
Let the matrices $V_{1}$ and $V_{2}$ be the same as in Eq. (4.23). Then, we
have
${\rm Str\,}V_{1}V_{2}={\rm Str\,}V_{2}V_{1}$ (1.1)
Derivation:
We recall the circularity of the trace for rectangular matrices of commuting
elements $\tr A_{1}A_{2}=\tr A_{2}A_{1}$ and its anticommuting analogue $\tr
B_{1}B_{2}=-\tr B_{2}B_{1}$ which has been proven by Berezin [18]. We make the
simple calculation
$\displaystyle{\rm Str\,}V_{1}V_{2}$ $\displaystyle=$ $\displaystyle\tr
A_{1}A_{2}+\tr B_{1}C_{2}-\tr C_{1}B_{2}-\tr D_{1}D_{2}$ (1.2)
$\displaystyle=$ $\displaystyle\tr A_{2}A_{1}-\tr C_{2}B_{1}+\tr
B_{2}C_{1}-\tr D_{2}D_{1}$ $\displaystyle=$ $\displaystyle{\rm
Str\,}V_{2}V_{1}$
$\square$
For our purposes we must prove
$\tr(V^{\dagger}V)^{m}={\rm Str\,}(VV^{\dagger})^{m}\ .$ (1.3)
We define $V_{1}=V^{\dagger}$ and $V_{2}=(VV^{\dagger})^{m-1}V$ and get
$a=2k$, $b=2k$, $c=\gamma_{2}N$ and $d=0$. Applying Statement A.1 and
recalling that $\tr A={\rm Str\,}A$ for a matrix of commuting elements, which is
identified with the Boson–Boson block, we obtain the desired result (1.3).
## Appendix B A matrix–Bessel version of the Sekiguchi differential operator
We derive a version of the Sekiguchi differential operator for the ordinary
matrix Bessel–functions $\varphi_{N}^{(\beta)}(y,x)$, based on the connection between
the Jack–polynomials and the ordinary matrix Bessel–functions.
The Sekiguchi differential operator is defined as [28]
$\displaystyle
D_{Nz}(u,\beta)=\Delta_{N}^{-1}(z)\det\left[z_{a}^{N-b}\left(z_{a}\frac{\partial}{\partial
z_{a}}+(N-b)\frac{\beta}{2}+u\right)\right]_{1\leq a,b\leq N}=$
$\displaystyle=\Delta_{N}^{-1}(z)\det\left[\frac{\beta}{2}\left(z_{a}\frac{\partial}{\partial
z_{a}}+u\right)z_{a}^{N-b}+\left(1-\frac{\beta}{2}\right)z_{a}^{N-b}\left(z_{a}\frac{\partial}{\partial
z_{a}}+u\right)\right]_{1\leq a,b\leq N}\ .$ (2.1)
Here, $u$ is a boost and the expansion parameter which generates the elementary
polynomials in the Cherednik operators; for more explicit information see Ref.
[41]. Let $J_{N}^{(\beta)}(n,z)$ be the Jack–polynomial with the partition
$n_{1}\geq\ldots\geq n_{N}$ and the standard parameter
$\alpha=\frac{2}{\beta}$ in Macdonald’s [42] notation. The Jack–polynomials
are eigenfunctions with respect to $D_{Nz}(u,\beta)$
$D_{Nz}(u,\beta)J_{N}^{(\beta)}(n,z)=\prod\limits_{a=1}^{N}\left[n_{a}+(N-a)\frac{\beta}{2}+u\right]J_{N}^{(\beta)}(n,z)\
.$ (2.2)
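For $\beta=2$ the Jack–polynomials are, up to normalization, the Schur polynomials, and Eq. (2.2) can be checked directly for small $N$. A minimal sympy sketch for $N=2$ (normalizations drop out of the eigenvalue equation):

```python
import sympy as sp

z1, z2, u = sp.symbols("z1 z2 u")

def sekiguchi_N2(g):
    # Eq. (2.1) for N = 2, beta = 2: entry M_{a,b} = z_a^(2-b) (z_a d/dz_a + (2-b) + u);
    # rows involve different variables, so the operator determinant is unambiguous
    def M(z, b, h):
        return z ** (2 - b) * (z * sp.diff(h, z) + (2 - b + u) * h)
    det_g = M(z1, 1, M(z2, 2, g)) - M(z1, 2, M(z2, 1, g))
    return sp.simplify(det_g / (z1 - z2))

# Schur polynomials s_lambda(z1, z2), i.e. Jack polynomials at alpha = 2/beta = 1 up to normalization
tests = {
    (1, 0): z1 + z2,
    (2, 1): z1 * z2 * (z1 + z2),
    (3, 1): z1 * z2 * (z1**2 + z1 * z2 + z2**2),
}
for (n1, n2), schur in tests.items():
    eigenvalue = (n1 + 1 + u) * (n2 + u)    # prod_a [ n_a + (N - a) beta/2 + u ]
    print((n1, n2), sp.simplify(sekiguchi_N2(schur) - eigenvalue * schur) == 0)   # expected: True
```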
The aim is to find a similar differential operator for the ordinary matrix
Bessel–function $\varphi_{N}^{(\beta)}(y,x)$ such that
$\displaystyle
D_{Nx}^{(\beta)}(B)\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right)$
$\displaystyle=$
$\displaystyle\prod\limits_{a=1}^{N}\imath\left(y_{a}+B\right)\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right)=$
(2.3) $\displaystyle=$
$\displaystyle{\det}^{1/\gamma_{2}}\imath(y+B\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\gamma_{2}N})\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right).$
###### Statement B.1
The differential operator which fulfils Eq. (2.3) is
$D_{Nx}^{(\beta)}(B)=\Delta_{N}^{-1}(x)\det\left[x_{a}^{N-b}\left(\frac{\partial}{\partial
x_{a}}+(N-b)\frac{\beta}{2}\frac{1}{x_{a}}+\imath B\right)\right]_{1\leq
a,b\leq N}\ .$ (2.4)
Derivation:
Kohler [43] has presented a connection between the Jack–polynomials and the
matrix Bessel–functions. Let
$z_{a}=e^{\imath\frac{2\pi}{L}x_{a}}\ \ \ {\rm and}\ \ \
n_{a}=\frac{L}{2\pi}y_{a}-\left(\frac{N+1}{2}-a\right)\frac{\beta}{2}$ (2.5)
then it holds that
$\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right)=\underset{L\to\infty}{\rm
lim}\left(\frac{\Delta_{N}(z)}{\Delta_{N}(x)\Delta_{N}(y)}\right)^{\beta/2}\prod\limits_{a=1}^{N}z_{a}^{-\beta(N-1)/4}J_{N}^{(\beta)}(n,z)\
.$ (2.6)
We expand the determinant in Eq. (2.1) and have
$\displaystyle D_{Nz}(u,\beta)=$
$\displaystyle=\Delta_{N}^{-1}(z)\sum\limits_{m\in\\{0,1\\}^{N}}\prod\limits_{a=1}^{N}\left[\frac{\beta}{2}\left(z_{a}\frac{\partial}{\partial
z_{a}}+u\right)\right]^{m_{a}}\Delta_{N}(z)\prod\limits_{a=1}^{N}\left[\left(1-\frac{\beta}{2}\right)\left(z_{a}\frac{\partial}{\partial
z_{a}}+u\right)\right]^{1-m_{a}}.$ (2.7)
Using the substitution (2.5) and
$\widetilde{\Delta}(x)=\prod\limits_{1\leq a<b\leq
N}2\imath\sin\left(\frac{\pi}{L}(x_{a}-x_{b})\right)\exp\left(\imath\pi\frac{x_{a}+x_{b}}{L}\right)\
,$ (2.8)
we consider the limit
$\displaystyle\underset{L\to\infty}{\lim}\left(\frac{2\pi\imath}{L}\right)^{N}D_{Nz}(u,\beta)=$
$\displaystyle=\underset{L\to\infty}{\lim}\frac{1}{\widetilde{\Delta}(x)}\sum\limits_{m\in\\{0,1\\}^{N}}\prod\limits_{a=1}^{N}\left[\frac{\beta}{2}\left(\frac{\partial}{\partial
x_{a}}+\imath\frac{2\pi
u}{L}\right)\right]^{m_{a}}\widetilde{\Delta}(x)\times$
$\displaystyle\times\prod\limits_{j=1}^{N}\left[\left(1-\frac{\beta}{2}\right)\left(\frac{\partial}{\partial
x_{a}}+\imath\frac{2\pi u}{L}\right)\right]^{1-m_{a}}=$
$\displaystyle=\Delta_{N}^{-1}(x)\sum\limits_{m\in\\{0,1\\}^{N}}\prod\limits_{a=1}^{N}\left[\frac{\beta}{2}\left(\frac{\partial}{\partial
x_{a}}+\imath
B\right)\right]^{m_{a}}\Delta_{N}(x)\left[\left(1-\frac{\beta}{2}\right)\left(\frac{\partial}{\partial
x_{a}}+\imath B\right)\right]^{1-m_{a}}=$
$\displaystyle=\Delta_{N}^{-1}(x)\det\left[\frac{\beta}{2}\left(\frac{\partial}{\partial
x_{a}}+\imath
B\right)x_{a}^{N-b}+\left(1-\frac{\beta}{2}\right)x_{a}^{N-b}\left(\frac{\partial}{\partial
x_{a}}+\imath B\right)\right]_{1\leq a,b\leq N}=$
$\displaystyle=\Delta_{N}^{-1}(x)\det\left[x_{a}^{N-b}\left(\frac{\partial}{\partial
x_{a}}+(N-b)\frac{\beta}{2}\frac{1}{x_{a}}+\imath B\right)\right]_{1\leq
a,b\leq N}\ .$ (2.9)
Here, we defined the boost $B=\underset{L\to\infty}{\lim}2\pi u/L$. In the limit,
the eigenvalue in Eq. (2.2) becomes
$\underset{L\to\infty}{\lim}\left(\frac{2\pi\imath}{L}\right)^{N}\prod\limits_{a=1}^{N}\left[n_{a}+(N-a)\frac{\beta}{2}+u\right]=\prod\limits_{a=1}^{N}\imath\left(y_{a}+B\right)={\det}^{1/\gamma_{2}}\imath(y+B\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}N})\ .$ (2.10)
We assume that Eq. (2.6) is a uniformly convergent limit. Thus, we combine
(2.6), (2.9) and (2.10) with Eq. (2.2) and find Eq. (2.4). $\square$
Indeed, for the unitary case $\beta=2$, $D_{Nx}^{(\beta)}(B)$ reduces to
$D_{Nx}^{(2)}(B)=\Delta_{N}^{-1}(x)\prod\limits_{a=1}^{N}\left(\frac{\partial}{\partial
x_{a}}+\imath B\right)\Delta_{N}(x)\ .$ (2.11)
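As a consistency illustration of Statement B.1, for $\beta=2$ and $N=2$ the matrix Bessel–function has the Harish–Chandra–Itzykson–Zuber form $\varphi_{2}^{(2)}(y,x)\propto\det[\exp(\imath y_{a}x_{b})]/(\Delta_{2}(y)\Delta_{2}(x))$; the overall constant drops out of the eigenvalue equation (2.3). A sympy spot check with the operator (2.11):

```python
import sympy as sp

x1, x2, y1, y2, B = sp.symbols("x1 x2 y1 y2 B")

num = sp.exp(sp.I * (x1 * y1 + x2 * y2)) - sp.exp(sp.I * (x1 * y2 + x2 * y1))
phi = num / ((x1 - x2) * (y1 - y2))     # HCIZ form of phi_2^(2)(y, x), overall constant dropped

g = num / (y1 - y2)                     # Delta_2(x) * phi
g = sp.diff(g, x2) + sp.I * B * g       # (d/dx2 + iB)
g = sp.diff(g, x1) + sp.I * B * g       # (d/dx1 + iB)
lhs = g / (x1 - x2)                     # operator (2.11) applied to phi

rhs = sp.I * (y1 + B) * sp.I * (y2 + B) * phi   # right hand side of Eq. (2.3) for N = 2
print(sp.simplify(lhs - rhs))                   # expected: 0
```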
## Appendix C Calculation of the supersymmetric Ingham–Siegel integral
In C.1, we compute the Ingham–Siegel integral. We derive the statements 5.1
and 5.2 in C.2 and C.3, respectively.
### C.1 Decomposition of the Boson–Boson and Fermion–Fermion block
integration
We split $\sigma$ in its Boson–Fermion block structure
$\mathfrak{p}\sigma=\left[\begin{array}[]{cc}\sigma_{1}&e^{-\imath\psi/2}\sigma_{\eta}^{\dagger}\\\
e^{-\imath\psi/2}\sigma_{\eta}&e^{-\imath\psi}\sigma_{2}\end{array}\right]\ .$
(3.1)
The following calculation must be understood in a weak sense. We first
integrate against a conveniently integrable function and then perform the
integral transformations. Hence, we understand $I_{k}^{(\beta,N)}$ as a
distribution for which we must fix the underlying set of test–functions. For our
purposes, we need Schwartz–functions analytic in the real independent
variables.
Since the superdeterminant of
$\mathfrak{p}\left(\sigma+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{4k}\right)$ is
${\rm
Sdet\,}\mathfrak{p}\sigma^{+}=\frac{\det\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\tilde{k}}\right)}{\det\left[e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\tilde{k}}-e^{-\imath\psi}\sigma_{\eta}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\tilde{k}}\right)^{-1}\sigma_{\eta}^{\dagger}\right]}$ (3.2)
we shift $\sigma_{2}$ by analytic continuation to
$\sigma_{2}+\sigma_{\eta}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)^{-1}\sigma_{\eta}^{\dagger}$ and
obtain
$\displaystyle I_{k}^{(\beta,N)}(\rho)$ $\displaystyle=$
$\displaystyle\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\displaystyle{\rm
exp}\left(-\imath\tr r_{1}\sigma_{1}+\imath\tr
r_{2}\sigma_{2}+\imath\tr\left[r_{2}\sigma_{\eta}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\tilde{k}}\right)^{-1}\sigma_{\eta}^{\dagger}\right]\right)\times$ (3.3)
$\displaystyle\times$ $\displaystyle\exp\left(\varepsilon{\rm
Str\,}r\right)\left[\frac{{\det}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\tilde{k}}\right)}{{\det}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)}\right]^{N/\gamma_{1}}d[\sigma]\
.$
An integration over the Grassmann variables yields
$\displaystyle I_{k}^{(\beta,N)}(\rho)$ $\displaystyle=$
$\displaystyle\left(\frac{-\imath\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\exp\left(\varepsilon{\rm
Str\,}r\right){\det}^{k}r_{2}\times$ (3.4) $\displaystyle\times$
$\displaystyle\int\limits_{{\rm Herm\,}(\beta,k_{1})}\exp\left(-\imath\tr
r_{1}\sigma_{1}\right){\det}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\tilde{k}}\right)^{-N/\gamma_{1}-k}d[\sigma_{1}]\times$
$\displaystyle\times$ $\displaystyle\int\limits_{{\rm
Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr
r_{2}\sigma_{2}\right){\det}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)^{N/\gamma_{1}}d[\sigma_{2}]\ .$
With help of Eq. (5.3) we have
$\displaystyle I_{k}^{(\beta,N)}(\rho)$ $\displaystyle=$
$\displaystyle\imath^{-k_{2}N}G_{Nk_{1}}^{(\beta)}\left(-\frac{\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\displaystyle{\det}^{\kappa}r_{1}\Theta(r_{1})\exp\left(-e^{\imath\psi}\varepsilon\tr
r_{2}\right)\times$ (3.5) $\displaystyle\times$
$\displaystyle{\det}^{k}r_{2}\int\limits_{{\rm
Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr
r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]\ .$
The remaining integral over the Fermion–Fermion block $\sigma_{2}$,
$\displaystyle\mathfrak{I}(r_{2})=\exp\left(-e^{\imath\psi}\varepsilon\tr
r_{2}\right)\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr
r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(\sigma_{2}+\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\tilde{k}}\right)d[\sigma_{2}]\ ,$ (3.6)
is, up to a constant, a differential operator with respect to $r_{2}$ times the
Dirac–distribution of $r_{2}$, because the determinant term is for
$\beta\in\\{1,2\\}$ a polynomial in $\sigma_{2}$ and for $\beta=4$ we use
Kramers’ degeneracy. We give several representations of this distribution.
We start with an eigenvalue–angle decomposition of
$\sigma_{2}=Us_{2}U^{\dagger}$ where $s_{2}$ is diagonal and $U\in{\rm
U\,}^{(4/\beta)}(k_{2})$. Integrating over the group ${\rm
U\,}^{(4/\beta)}(k_{2})$, Eq. (3.6) becomes
$\displaystyle\mathfrak{I}(r_{2})$ $\displaystyle=$
$\displaystyle\displaystyle\exp\left(-e^{\imath\psi}\varepsilon\tr
r_{2}\right)g_{k_{2}}^{(4/\beta)}\times$ (3.7) $\displaystyle\times$
$\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\varphi_{k_{2}}^{(4/\beta)}(r_{2},s_{2}){\det}^{N/\gamma_{1}}\left(s_{2}+\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\tilde{k}}\right)|\Delta_{k_{2}}(s_{2})|^{4/\beta}d[s_{2}].$
For more information about the ordinary matrix Bessel–function
$\varphi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})=\int\limits_{{\rm
U\,}^{(4/\beta)}(k_{2})}\exp\left(\imath\tr
r_{2}Us_{2}U^{\dagger}\right)d\mu(U)$ (3.8)
with normalized Haar–measure $d\mu(U)$, see Refs. [39, 40]. The constant
$g_{n}^{(\beta)}$ is defined by
$\int\limits_{{\rm
Herm\,}(\beta,n)}f(H)d[H]=g_{n}^{(\beta)}\int\limits_{\mathbb{R}^{n}}f(E)|\Delta_{n}(E)|^{\beta}d[E]$
(3.9)
independently of the choice of a sufficiently integrable function $f$ which is invariant under
the action of ${\rm U\,}^{(\beta)}(n)$. The Gaussian distribution is such a
function. For the left hand side we obtain
$\int\limits_{{\rm Herm\,}(\beta,n)}\exp\left(-\tr
H^{2}\right)d[H]=\gamma_{2}^{-n(2n-1)/2}2^{-\beta n(n-1)/4}\pi^{n/2+\beta
n(n-1)/4}\ .$ (3.10)
The integral on the right hand side is equal to
$\int\limits_{\mathbb{R}^{n}}\exp\left(-\gamma_{2}\sum\limits_{j=1}^{n}E_{j}^{2}\right)|\Delta_{n}(E)|^{\beta}d[E]=\left\\{\begin{array}[]{ll}2^{-n(n-5)/4}\prod\limits_{j=1}^{n}\Gamma\left(\frac{j}{2}+1\right)&,\
\beta=1,\\\
2^{-n(n-1)/2}\pi^{n/2}\prod\limits_{j=1}^{n}\Gamma\left(j+1\right)&,\
\beta=2,\\\
2^{-n(2n-1/2)}\pi^{n/2}\prod\limits_{j=1}^{n}\Gamma\left(2j+1\right)&,\
\beta=4,\end{array}\right.$ (3.11)
see Mehta’s book [15]. Thus, we have
$g_{n}^{(\beta)}=\frac{1}{n!}\prod\limits_{j=1}^{n}\frac{\pi^{\beta(j-1)/2}\Gamma\left(\beta/2\right)}{\Gamma\left(\beta
j/2\right)}\ .$ (3.12)
This constant is the quotient of the volumes of the permutation group $S(n)$
and of the flag manifold ${\rm U\,}^{(\beta)}(n)/[{\rm U\,}^{(\beta)}(1)]^{n}$
with the volume element defined as in Ref. [44] and denoted by ${\rm Vol}_{B}$.
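As an illustration, for $\beta=2$ and $n=2$ Eqs. (3.10)–(3.12) give $g_{2}^{(2)}=\pi/2$; reading $\gamma_{2}=1$ for $\beta=2$ (an assumption about the conventions for these constants), the eigenvalue integral (3.11) can be evaluated numerically and compared with this value. A minimal Python sketch:

```python
import numpy as np
from scipy.integrate import dblquad

# Consistency of Eqs. (3.10)-(3.12) for beta = 2, n = 2 (reading gamma_2 = 1):
# flat matrix integral (3.10): pi^2/2;  eigenvalue integral (3.11): pi;  ratio (3.12): g_2^(2) = pi/2
eig, _ = dblquad(lambda e1, e2: np.exp(-e1**2 - e2**2) * (e1 - e2)**2,
                 -8.0, 8.0, lambda _: -8.0, lambda _: 8.0)
flat = np.pi**2 / 2
print(np.isclose(eig, np.pi), np.isclose(flat / eig, np.pi / 2))   # expected: True True
```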
We insert the differential operator of B, cf. Eq. (2.3), into Eq. (3.7) and obtain
$\displaystyle\mathfrak{I}(r_{2})$ $\displaystyle=$ $\displaystyle
g_{k_{2}}^{(4/\beta)}\exp\left(-e^{\imath\psi}\varepsilon\tr
r_{2}\right)(\imath\gamma_{1})^{-k_{2}N}\times$ (3.13) $\displaystyle\times$
$\displaystyle\displaystyle\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath
e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N}\int\limits_{\mathbb{R}^{k_{2}}}\phi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})|\Delta_{k_{2}}(s_{2})|^{4/\beta}d[s_{2}]\
.$
The integration over the eigenvalues leads to the Dirac–distribution
$\displaystyle\mathfrak{I}(r_{2})$ $\displaystyle=$
$\displaystyle\displaystyle\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{\exp\left(-e^{\imath\psi}\varepsilon\tr
r_{2}\right)}{g_{k_{2}}^{(4/\beta)}}(\imath\gamma_{1})^{-k_{2}}\times$ (3.14)
$\displaystyle\times$
$\displaystyle\displaystyle\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath
e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N}\frac{\delta(r_{2})}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}$
and we find the representation for the supersymmetric Ingham–Siegel integral
(5.7).
### C.2 Derivation of statement 5.1
The boost $\imath e^{\imath\psi}\varepsilon$ in the determinant can simply be
shifted away because of
$D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath
e^{\imath\psi}\gamma_{1}\varepsilon\right)\exp\left(\varepsilon
e^{\imath\psi}\tr r_{2}\right)=\exp\left(\varepsilon e^{\imath\psi}\tr
r_{2}\right)D_{k_{2}r_{2}}^{(4/\beta)}(0)=\exp\left(\varepsilon
e^{\imath\psi}\tr r_{2}\right)D_{k_{2}r_{2}}^{(4/\beta)}$ (3.15)
and Eq. (3.14). Let $\mathfrak{S}$ be the set of ${\rm
U\,}^{(4/\beta)}(k_{2})$–invariant Schwartz–functions ${\rm
Herm\,}(4/\beta,k_{2})\rightarrow\mathbb{C}$. The ordinary matrix
Bessel–functions are complete and orthogonal in $\mathfrak{S}$ with the
sesquilinear scalar product
$\langle
f|f^{\prime}\rangle=\int\limits_{\mathbb{R}^{k_{2}}}f^{*}(x)f^{\prime}(x)|\Delta_{k_{2}}(x)|^{4/\beta}d[x]\
.$ (3.16)
The completeness and the orthogonality are
$\displaystyle\langle\phi_{k_{2}}^{(4/\beta)}(x)|\phi_{k_{2}}^{(4/\beta)}(x^{\prime})\rangle$
$\displaystyle=$
$\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}|\phi_{k_{2}}^{(4/\beta)}(y)\rangle\langle\phi_{k_{2}}^{(4/\beta)}(y)|\
|\Delta_{k_{2}}(y)|^{4/\beta}d[y]=$ (3.17) $\displaystyle=$
$\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\phi_{k_{2}}^{(4/\beta)}(y,x)\phi_{k_{2}}^{(4/\beta)*}(y,x^{\prime})|\Delta_{k_{2}}(y)|^{4/\beta}d[y]=$
$\displaystyle=$ $\displaystyle C_{k}^{(\beta)}\frac{1}{k_{2}!}\sum_{p\in
S(k_{2})}\frac{\prod\limits_{j=1}^{k_{2}}\delta(x_{j}-x_{p(j)}^{\prime})}{|\Delta_{k_{2}}(x)|^{2/\beta}|\Delta_{k_{2}}(x^{\prime})|^{2/\beta}}$
where $S(n)$ is the permutation group of $n$ elements. We defined the constant
$C_{k}^{(\beta)}=\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\left(g_{k_{2}}^{(4/\beta)}\right)^{-2}\
.$ (3.18)
Thus, we write $D_{k_{2}r_{2}}^{(4/\beta)}$ in the Bessel–function basis
$\displaystyle\qquad\qquad D_{k_{2}}^{(4/\beta)}$ $\displaystyle=$
$\displaystyle{C_{k}^{(\beta)}\
}^{-2}\int\limits_{\mathbb{R}^{k_{2}}}|\phi_{k_{2}}^{(4/\beta)}(y)\rangle\langle\phi_{k_{2}}^{(4/\beta)}(y)|\
|\Delta_{k_{2}}(y)|^{4/\beta}d[y]\times$ (3.19) $\displaystyle\times$
$\displaystyle
D_{k_{2}x}^{(4/\beta)}\int\limits_{\mathbb{R}^{k_{2}}}|\phi_{k_{2}}^{(4/\beta)}(y^{\prime})\rangle\langle\phi_{k_{2}}^{(4/\beta)}(y^{\prime})|\
|\Delta_{k_{2}}(y^{\prime})|^{4/\beta}d[y^{\prime}]=$ $\displaystyle=$
$\displaystyle{C_{k}^{(\beta)}\
}^{-1}\int\limits_{\mathbb{R}^{k_{2}}}{\det}(i\gamma_{1}y)^{1/\gamma_{1}}\phi_{k_{2}}^{(4/\beta)}(y,x)\phi_{k_{2}}^{(4/\beta)*}(y,x^{\prime})|\Delta_{k_{2}}(y)|^{4/\beta}d[y]$
with the action on a function $f\in\mathfrak{S}$
$\displaystyle D_{k_{2}}^{(4/\beta)}|f\rangle$ $\displaystyle=$
$\displaystyle{C_{k}^{(\beta)}\
}^{-1}\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{\mathbb{R}^{k_{2}}}{\det}(i\gamma_{1}y)^{1/\gamma_{1}}\phi_{k_{2}}^{(4/\beta)}(y,x)\phi_{k_{2}}^{(4/\beta)*}(y,x^{\prime})f(x^{\prime})\times$
(3.20) $\displaystyle\times$
$\displaystyle|\Delta_{k_{2}}(x^{\prime})|^{4/\beta}|\Delta_{k_{2}}(y)|^{4/\beta}d[x^{\prime}]d[y]\
.$
Due to this representation of the Sekiguchi differential operator analog,
$\imath^{k_{2}}D_{k_{2}}^{(4/\beta)}$ is symmetric with respect to the scalar
product (3.16)
$\langle
f|\imath^{k_{2}}D_{k_{2}}^{(4/\beta)}|f^{\prime}\rangle=\langle\imath^{k_{2}}D_{k_{2}}^{(4/\beta)}f|f^{\prime}\rangle\
.$ (3.21)
Let $L$ be a real number. Then, we easily see with the help of Eq. (2.4)
$D_{k_{2}x}^{(4/\beta)}\det
x^{L/\gamma_{1}}=\prod\limits_{b=1}^{k_{2}}\left(L+\frac{2}{\beta}b-\frac{2}{\beta}\right)\det
x^{(L-1)/\gamma_{1}}\ .$ (3.22)
Due to the property (3.21), we obtain for a function $f\in\mathfrak{S}$
$\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\det
x^{L/\gamma_{1}}|\Delta_{k_{2}}(x)|^{4/\beta}D_{k_{2}x}^{(4/\beta)}f(x)d[x]=$
(3.23) $\displaystyle=$
$\displaystyle(-1)^{k_{2}}\int\limits_{\mathbb{R}^{k_{2}}}f(x)|\Delta_{k_{2}}(x)|^{4/\beta}D_{k_{2}x}^{(4/\beta)}\det
x^{L/\gamma_{1}}d[x]=$ $\displaystyle=$
$\displaystyle(-1)^{k_{2}}\prod\limits_{b=1}^{k_{2}}\left(L+\frac{2}{\beta}b-\frac{2}{\beta}\right)\int\limits_{\mathbb{R}^{k_{2}}}f(x)|\Delta_{k_{2}}(x)|^{4/\beta}\det
x^{(L-1)/\gamma_{1}}d[x]\ .$
The boundary terms of the partial integration do not appear because $f$ is a
Schwartz–function and $D_{k_{2}x}^{(4/\beta)}$ has the representation (3.19).
Let $F$ and $f$ be the functions of statement 5.1. Then, we calculate
$\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{{\rm
Herm\,}(4/\beta,k_{2})}F(r_{2}){\det}^{k}r_{2}|\Delta_{k_{2}}(r_{2})|^{4/\beta}\exp\left(\imath\tr
r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]d[r_{2}]=$
$\displaystyle=\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{{\rm
Herm\,}(4/\beta,k_{2})}f(r_{2}){\det}^{N/\gamma_{1}}r_{2}|\Delta_{k_{2}}(r_{2})|^{4/\beta}\exp\left(\imath\tr
r_{2}\sigma_{2}\right)\times$
$\displaystyle\times{\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]d[r_{2}]=$
$\displaystyle=\left(\frac{-\imath
e^{-\imath\psi}}{\gamma_{1}}\right)^{k_{2}N}g_{k_{2}}^{(4/\beta)}\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{\mathbb{R}^{k_{2}}}f(r_{2})\exp\left(\varepsilon
e^{\imath\psi}\tr r_{2}\right)|\Delta_{k_{2}}(r_{2})|^{4/\beta}\times$
$\displaystyle\times{\det}^{N/\gamma_{1}}s_{2}|\Delta_{k_{2}}(s_{2})|^{4/\beta}\left(D_{k_{2}s_{2}}^{(4/\beta)}\right)^{N}\phi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})d[s_{2}]d[r_{2}]=$
$\displaystyle=(\imath
e^{-\imath\psi})^{k_{2}N}g_{k_{2}}^{(4/\beta)}\prod\limits_{a=1}^{N}\prod\limits_{b=1}^{k_{2}}\left(\frac{a}{\gamma_{1}}+\frac{b-1}{\gamma_{2}}\right)\times$
$\displaystyle\times\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{\mathbb{R}^{k_{2}}}f(r_{2})\exp\left(\varepsilon
e^{\imath\psi}\tr
r_{2}\right)|\Delta_{k_{2}}(r_{2})|^{4/\beta}|\Delta_{k_{2}}(s_{2})|^{4/\beta}\phi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})d[s_{2}]d[r_{2}]=$
$\displaystyle=\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{\left(\imath
e^{-\imath\psi}\right)^{k_{2}N}}{g_{k_{2}}^{(4/\beta)}\gamma_{1}^{k_{2}N}}\prod_{j=0}^{k_{2}-1}\frac{\Gamma\left(N+1+2j/\beta\right)}{\Gamma\left(1+2j/\beta\right)}f(0)\
.$ (3.24)
The second equality in Eq. (5.13) is true because of
$f(0)=\left.\prod\limits_{j=1}^{k_{2}}\frac{1}{\left(N-k_{1}\right)!}\left(\frac{\partial}{\partial
r_{j2}}\right)^{N-k_{1}}\left[f(r_{2})\exp\left(\varepsilon e^{\imath\psi}\tr
r_{2}\right)\det r_{2}^{N/\gamma_{1}-k}\right]\right|_{r_{2}=0}.$ (3.25)
The function in the bracket is $F$ times the exponential term ${\rm
exp}\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)$.
### C.3 Derivation of statement 5.2
We have to show
$\displaystyle\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\int\limits_{{\rm
Herm\,}(4/\beta,k_{2})}F(\rho_{2}){\det}^{k}\rho_{2}\exp\left(\imath\tr\rho_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\sigma_{2}d[\sigma_{2}]d[\rho_{2}]\sim$
(3.26) $\displaystyle\sim$
$\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}F(r_{2})\prod\limits_{j=1}^{k}\left(-\frac{\partial}{\partial
r_{j2}}\right)^{N-2/\beta}\delta(r_{j2})d[r_{2}]$
for every rotation invariant Schwartz–function $F:{\rm
Herm\,}(4/\beta,k_{2})\rightarrow\mathbb{C}$ and $\beta\in\\{1,2\\}$. Due to
$\displaystyle\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr
r_{2}\sigma_{2}\right){\det}\sigma_{2}^{N/\gamma_{1}}d[\sigma_{2}]$
$\displaystyle\sim$
$\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}y^{N}{\rm
exp}\left[\imath r_{k_{2}2}\tr(y\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{\gamma}}+v^{\dagger}v)\right]d[v]dy\times$
(3.27) $\displaystyle\times$ $\displaystyle\int\limits_{{\rm
Herm\,}(4/\beta,k_{2}-1)}\exp\left(\imath\tr\tilde{r}_{2}\tilde{\sigma}_{2}\right){\det}\tilde{\sigma}_{2}^{(N+2/\beta)/\gamma_{1}}d[\tilde{\sigma}_{2}]$
with the decompositions $r_{2}={\rm
diag\,}\left(\tilde{r}_{2},r_{k_{2}2}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{\gamma}}\right)$ and
$\sigma_{2}=\left[\begin{array}[]{cc}\tilde{\sigma}_{2}&v\\\
v^{\dagger}&y\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\tilde{\gamma}}\end{array}\right]\ ,$ (3.28)
we proceed by complete induction. Thus, we reduce the derivation to
$\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}f(x)x^{k_{1}}y^{N}{\rm
exp}\left[\imath
x\tr(y+v^{\dagger}v)\right]d[v]dydx\sim\int\limits_{\mathbb{R}}f(x)\frac{\partial^{N-2/\beta}}{\partial
x^{N-2/\beta}}\delta(x)d[x]$ (3.29)
where $f:\mathbb{R}\rightarrow\mathbb{C}$ is a Schwartz–function. The function
$\tilde{f}(y)=\int\limits_{\mathbb{R}}f(x)x^{k_{1}}\exp\left(\imath
xy\right)dx$ (3.30)
is also a Schwartz–function. Hence, we compute
$\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}f(x)x^{k_{1}}y^{N}{\rm
exp}\left[\imath x\tr(y+v^{\dagger}v)\right]d[v]dydx=$ (3.31) $\displaystyle=$
$\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}\tilde{f}\left[\tr(y+v^{\dagger}v)\right]y^{N}d[v]dy=$
$\displaystyle=$
$\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}y^{N-2(k_{2}-1)/\beta}\left(-\frac{\partial}{\partial
y}\right)^{2(k_{2}-1)/\beta}\tilde{f}\left(\tr(y+v^{\dagger}v)\right)d[v]dy\sim$
$\displaystyle\sim$
$\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{+}}\tilde{v}^{2(k_{2}-1)/\beta-1}\left(-\frac{\partial}{\partial\tilde{v}}\right)^{2(k_{2}-1)/\beta}\tilde{f}\Bigl{(}\tr(y+\tilde{v})\Bigr{)}y^{N-2(k_{2}-1)/\beta}d\tilde{v}dy\sim$
$\displaystyle\sim$ $\displaystyle\int\limits_{\mathbb{R}}\tilde{f}\left(\tr
y\right)y^{N-2(k_{2}-1)/\beta}dy\sim$ $\displaystyle\sim$
$\displaystyle\int\limits_{\mathbb{R}}f(x)x^{k_{1}}\left(-\frac{\partial}{\partial
x}\right)^{N-2(k_{2}-1)/\beta}\delta(x)dx\sim$ $\displaystyle\sim$
$\displaystyle\int\limits_{\mathbb{R}}f(x)\frac{\partial^{N-2/\beta}}{\partial
x^{N-2/\beta}}\delta(x)d[x]\ ,$
which is well–defined for $\beta\in\\{1,2\\}$.
## Appendix D Determinantal structure of the ${\rm UOSp\,}(2k/2k)$–Berezinian
###### Statement D.1
Let $k\in\mathbb{N}$, $x_{1}\in\mathbb{C}^{2k}$ and $x_{2}\in\mathbb{C}^{k}$.
$x_{1}$ and $x_{2}$ satisfy the condition
$x_{a1}-x_{b2}\neq 0\ \ ,\ \forall a\in\\{1,\ldots,2k\\}\ \wedge\
b\in\\{1,\ldots,k\\}\ .$ (4.1)
Then, we have
$\frac{\Delta_{2k}(x_{1})\Delta_{k}^{4}(x_{2})}{V_{k}^{2}(x_{1},x_{2})}=(-1)^{k(k-1)/2}\det\left[\left\\{\frac{1}{x_{a1}-x_{b2}}\right\\}\underset{{1\leq
b\leq k}}{\underset{1\leq a\leq
2k}{}},\left\\{\frac{1}{(x_{a1}-x_{b2})^{2}}\right\\}\underset{{1\leq b\leq
k}}{\underset{1\leq a\leq 2k}{}}\right].$ (4.2)
We prove this theorem by complete induction.
Derivation:
We rearrange the determinant by exchanging the columns
$\displaystyle\det\left[\left\\{\frac{1}{x_{a1}-x_{b2}}\right\\}\underset{{1\leq
b\leq k}}{\underset{1\leq a\leq 2k}{}}\ ,\
\left\\{\frac{1}{(x_{a1}-x_{b2})^{2}}\right\\}\underset{{1\leq b\leq
k}}{\underset{1\leq a\leq 2k}{}}\right]=$
$\displaystyle=(-1)^{k(k-1)/2}\det\left[\frac{1}{x_{a1}-x_{b2}}\ ,\
\frac{1}{(x_{a1}-x_{b2})^{2}}\right]\underset{{1\leq b\leq k}}{\underset{1\leq
a\leq 2k}{}}\ .$ (4.3)
Thus, the sign factor $(-1)^{k(k-1)/2}$ in Eq. (4.2) cancels out.
We find for $k=1$
$\det\left[\begin{array}[]{cc}\displaystyle\frac{1}{x_{11}-x_{2}}&\displaystyle\frac{1}{(x_{11}-x_{2})^{2}}\\\
\displaystyle\frac{1}{x_{21}-x_{2}}&\displaystyle\frac{1}{(x_{21}-x_{2})^{2}}\end{array}\right]=\frac{(x_{11}-x_{21})}{(x_{11}-x_{2})^{2}(x_{21}-x_{2})^{2}}\
.$ (4.4)
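As a quick symbolic sanity check of the base case (4.4) (not part of the original derivation; the symbols below are illustrative), one may verify the $2\times 2$ determinant directly, for example with sympy:

```python
import sympy as sp

# Check of the k = 1 base case, Eq. (4.4): the 2x2 determinant of the mixed
# Cauchy-type kernel equals (x11 - x21) / [(x11 - x2)^2 (x21 - x2)^2].
x11, x21, x2 = sp.symbols('x11 x21 x2')

M = sp.Matrix([[1/(x11 - x2), 1/(x11 - x2)**2],
               [1/(x21 - x2), 1/(x21 - x2)**2]])
difference = sp.simplify(M.det() - (x11 - x21)/((x11 - x2)**2*(x21 - x2)**2))
print(difference)   # 0
```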
We assume that the statement holds for $k-1$. Let
$\displaystyle s$ $\displaystyle=$
$\displaystyle\left[\frac{1}{x_{a1}-x_{b2}}\ ,\
\frac{1}{(x_{a1}-x_{b2})^{2}}\right]\underset{{1\leq b\leq k}}{\underset{1\leq
a\leq 2k}{}}=\left[\begin{array}[]{cc}s_{1}&w\\\ v&s_{2}\end{array}\right]\ ,$
(4.7) $\displaystyle s_{1}$ $\displaystyle=$
$\displaystyle\left[\begin{array}[]{cc}\displaystyle\frac{1}{x_{11}-x_{12}}&\displaystyle\frac{1}{(x_{11}-x_{12})^{2}}\\\
\displaystyle\frac{1}{x_{21}-x_{12}}&\displaystyle\frac{1}{(x_{21}-x_{12})^{2}}\end{array}\right]\
,$ (4.10) $\displaystyle s_{2}$ $\displaystyle=$
$\displaystyle\left[\frac{1}{x_{a1}-x_{b2}}\ ,\
\frac{1}{(x_{a1}-x_{b2})^{2}}\right]\underset{{2\leq b\leq k}}{\underset{3\leq
a\leq 2k}{}}\ ,$ (4.11) $\displaystyle v$ $\displaystyle=$
$\displaystyle\left[\frac{1}{x_{a1}-x_{12}}\ ,\
\frac{1}{(x_{a1}-x_{12})^{2}}\right]_{3\leq a\leq 2k}\ {\rm and}$ (4.12)
$\displaystyle w$ $\displaystyle=$
$\displaystyle\left[\begin{array}[]{cc}\displaystyle\frac{1}{x_{11}-x_{b2}}&\displaystyle\frac{1}{(x_{11}-x_{b2})^{2}}\\\
\displaystyle\frac{1}{x_{21}-x_{b2}}&\displaystyle\frac{1}{(x_{21}-x_{b2})^{2}}\end{array}\right]_{2\leq
b\leq k}\ .$ (4.15)
Then, we have
$\det s=\det
s_{1}\det(s_{2}-vs_{1}^{-1}w)\overset{(D.4)}{=}\frac{(x_{11}-x_{21})}{(x_{11}-x_{12})^{2}(x_{21}-x_{12})^{2}}\det(s_{2}-vs_{1}^{-1}w)\
.$ (4.16)
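The first equality in Eq. (4.16) is the standard block-determinant (Schur-complement) factorization $\det s=\det s_{1}\,\det(s_{2}-vs_{1}^{-1}w)$. A minimal numerical sketch with random complex blocks (sizes and names are illustrative only) confirms this factorization:

```python
import numpy as np

# det [[s1, w], [v, s2]] = det(s1) * det(s2 - v s1^{-1} w) for invertible s1.
rng = np.random.default_rng(0)
n1, n2 = 2, 4          # in the proof: n1 = 2 and n2 = 2k - 2
s1 = rng.normal(size=(n1, n1)) + 1j*rng.normal(size=(n1, n1))
s2 = rng.normal(size=(n2, n2)) + 1j*rng.normal(size=(n2, n2))
w  = rng.normal(size=(n1, n2)) + 1j*rng.normal(size=(n1, n2))
v  = rng.normal(size=(n2, n1)) + 1j*rng.normal(size=(n2, n1))

s = np.block([[s1, w], [v, s2]])
lhs = np.linalg.det(s)
rhs = np.linalg.det(s1) * np.linalg.det(s2 - v @ np.linalg.inv(s1) @ w)
print(np.isclose(lhs, rhs))   # True
```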
The matrix in the determinant is equal to
$(s_{2}-vs_{1}^{-1}w)^{T}=\left[\begin{array}[]{c}\displaystyle\frac{(x_{11}-x_{a1})(x_{21}-x_{a1})(x_{12}-x_{b2})^{2}}{(x_{a1}-x_{12})^{2}(x_{11}-x_{b2})(x_{21}-x_{b2})}\frac{1}{x_{a1}-x_{b2}}\\\
\\\
\displaystyle\frac{(x_{11}-x_{a1})(x_{21}-x_{a1})(x_{12}-x_{b2})}{(x_{a1}-x_{12})^{2}(x_{11}-x_{b2})^{2}(x_{21}-x_{b2})^{2}}\frac{P_{ab}}{(x_{a1}-x_{b2})^{2}}\end{array}\right]\underset{{2\leq
b\leq k}}{\underset{3\leq a\leq 2k}{\underset{}{\underset{}{\underset{}{}}}}}$
(4.17)
where $P_{ab}$ is a polynomial
$\displaystyle
P_{ab}=(x_{a1}-x_{b2})(x_{11}-x_{b2})(x_{12}-x_{b2})-(x_{a1}-x_{12})(x_{11}-x_{b2})(x_{21}-x_{b2})-$
$\displaystyle-(x_{21}-x_{b2})(x_{a1}-x_{b2})(x_{11}-x_{12})=$
$\displaystyle=(x_{11}-x_{b2})(x_{21}-x_{b2})(x_{12}-x_{b2})+$
$\displaystyle+(x_{a1}-x_{b2})\left[(x_{11}+x_{21})(x_{12}+x_{b2})-2x_{11}x_{21}-2x_{12}x_{b2}\right]=$
$\displaystyle=A_{b}^{(1)}+(x_{a1}-x_{b2})A_{b}^{(2)}\ .$ (4.18)
The polynomials $A_{b}^{(1)}$ and $A_{b}^{(2)}$ are independent of the index
$a$. Due to the multilinearity and the skew symmetry of the determinant, the
result is
$\det
s=\frac{(x_{11}-x_{21})}{(x_{11}-x_{12})^{2}(x_{21}-x_{12})^{2}}\frac{\prod\limits_{a=3}^{2k}(x_{11}-x_{a1})(x_{21}-x_{a1})\prod\limits_{b=2}^{k}(x_{12}-x_{b2})^{4}}{\prod\limits_{a=3}^{2k}(x_{a1}-x_{12})^{2}\prod\limits_{b=2}^{k}(x_{11}-x_{b2})^{2}(x_{21}-x_{b2})^{2}}\det
s_{2}$ (4.19)
which completes the induction. $\square$
## Appendix E Derivation of statement 4.1
Let $\lambda$ be the sought eigenvalue; it is a commuting variable of the Grassmann algebra constructed from the $\\{\tau_{q}^{(p)},\tau_{q}^{(p)*}\\}_{p,q}$. We split this eigenvalue into its body $\lambda^{(0)}$ and its soul $\lambda^{(1)}$, i.e. $\lambda=\lambda^{(0)}+\lambda^{(1)}$. Let $v$ be the $\gamma_{2}N$–dimensional eigenvector of $H$ such that
$Hv=\lambda v{\rm\ \ and\ \ }v^{\dagger}v=1\ .$ (5.1)
In this equation, we recognize in the lowest order of Grassmann variables that
$\lambda^{(0)}$ is an eigenvalue of $H^{(0)}$. Then, let $\lambda^{(0)}$ be an
eigenvalue of the highest degeneracy $\delta$ of $H^{(0)}$, i.e. $\delta={\rm
dim\ ker}(H^{(0)}-\lambda^{(0)}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{N})$. Without loss of generality, we assume that $H^{(0)}$ is diagonal and
the eigenvalue $\lambda^{(0)}$ only appears in the upper left
$\delta\times\delta$–matrix block,
$H^{(0)}=\left[\begin{array}[]{cc}\lambda^{(0)}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\delta}&0\\\
0&\widetilde{H}^{(0)}\end{array}\right]\ .$ (5.2)
We also split the vectors in $\delta$ and $N-\delta$ dimensional vectors
$v^{(0)}=\left[\begin{array}[]{c}v_{1}\\\ v_{2}\end{array}\right]{\rm\ \ and\
\ }\tau_{q}=\left[\begin{array}[]{c}\tau_{q1}\\\ \tau_{q2}\end{array}\right]\
.$ (5.3)
Thus, we find the two equations from (5.1)
$\displaystyle T_{11}v_{1}-\lambda^{(1)}v_{1}+T_{12}v_{2}$ $\displaystyle=$
$\displaystyle 0\ ,$ (5.4) $\displaystyle
T_{21}v_{1}+\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]v_{2}$ $\displaystyle=$
$\displaystyle 0$ (5.5)
where
$T_{nm}=\sum\limits_{q=1}^{\widetilde{N}}l_{q}\left[\tau_{qn}\tau_{qm}^{\dagger}+\widetilde{Y}\left(\tau_{qn}^{*}\tau_{qm}^{T}\right)\right]$.
Eq. (5.5) yields
$v_{2}=-\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}v_{1}\ .$ (5.6)
Hence, the body of $v_{2}$ is zero and we have for Eq. (5.4)
$T_{11}v_{1}-\lambda^{(1)}v_{1}-T_{12}\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}v_{1}=0\ .$
(5.7)
If the degeneracy is $\delta>\gamma_{2}$, we consider a $\delta$–dimensional
real vector $w\neq 0$ such that $w^{\dagger}v_{1}=0$. Then, we get for the
lowest order in the Grassmann variables of Eq. (5.7) times $w^{\dagger}$
$w^{\dagger}T_{11}v_{1}^{(0)}=0$ (5.8)
where $v_{1}^{(0)}$ is the body of $v_{1}$. The entries of $w^{\dagger}T_{11}$
are linearly independent. Thus, the body of $v_{1}$ is also zero. This
violates the second property of (5.1).
Now let the degeneracy be $\delta=\gamma_{2}$. Then $v_{1}$ is $\gamma_{2}$-dimensional and normalizable. For $\beta=4$, we have the quaternionic case and the matrix acting on $v_{1}$ in Eq. (5.7) is a diagonal quaternion. Hence, it must hold that
$\lambda^{(1)}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{2}}=T_{11}-T_{12}\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}\ .$ (5.9)
Considering the second-order term in the Grassmann variables of Eq. (5.9), the second-order term of $\lambda$ is $T_{11}$ for $\beta\in\\{1,2\\}$ and $\tr T_{11}/2$ for $\beta=4$. Eq. (5.9) is uniquely solvable by a recursive calculation: we substitute the right hand side of Eq. (5.9) for the $\lambda^{(1)}$ appearing on that same side and repeat this procedure. Hence, we define the operator
$\displaystyle O(\mu)$ $\displaystyle=$
$\displaystyle\frac{1}{\gamma_{2}}\tr\left\\{T_{11}-T_{12}\left[\widetilde{H}^{(0)}-(\lambda^{(0)}+\mu)\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}\right\\}{\rm\
and}$ (5.10) $\displaystyle O^{n+1}(\mu)$ $\displaystyle=$ $\displaystyle
O\left[O^{n}(\mu)\right]\ .$ (5.11)
Then, $\lambda^{(1)}=O^{n}(\lambda^{(1)})$ holds for arbitrary $n\in\mathbb{N}$. The recursion is finished at some $n_{0}\in\mathbb{N}$ if $\lambda^{(1)}=O^{n_{0}}(\lambda^{(1)})=O^{n_{0}}(0)$. Due to the nilpotence of the Grassmann variables, this recursion terminates after at most $\gamma_{2}N\widetilde{N}/2$ steps. Thus, the eigenvalue $\lambda$ depends on the Grassmann variables and is not a real number.
## References
* [1] K.B. Efetov. Adv. Phys., 32:53, 1983.
* [2] J.J.M. Verbaarschot, H.A. Weidenmüller, and M.R. Zirnbauer. Phys. Rep., 129:367, 1985.
* [3] K.B. Efetov. Supersymmetry in Disorder and Chaos. Cambridge University Press, Cambridge, 1st edition, 1997.
* [4] T. Guhr, A. Müller-Groeling, and H.A. Weidenmüller. Phys. Rep., 299:189, 1998.
* [5] T. Guhr. J. Phys., A 39:13191, 2006.
* [6] N. Lehmann, D. Saher, V.V. Sokolov, and H.-J. Sommers. Nucl. Phys., A 582:223, 1995.
* [7] G. Hackenbroich and H.A. Weidenmüller. Phys. Rev. Lett., 74:4118, 1995.
* [8] K.B. Efetov, G. Schwiete, and K. Takahashi. Phys. Rev. Lett., 92:026807, 2004.
* [9] K.B. Efetov and V.R. Kogan. Phys. Rev., B 70:195326, 2004.
* [10] F. Basile and G. Akemann. JHEP, page 0712:043, 2007.
* [11] H.-J. Sommers. Acta Phys. Pol., B 38:1001, 2007.
* [12] P. Littelmann, H.-J. Sommers, and M.R. Zirnbauer. Commun. Math. Phys., 283:343, 2008.
* [13] T. Jiang. J. Math. Phys., 46:052106, 2005.
* [14] M. Kieburg, H. Kohler, and T. Guhr. J. Math. Phys., 50:013528, 2009.
* [15] M.L. Mehta. Random Matrices and the Statistical Theory of Energy Levels. Academic Press Inc., New York, 1st edition, 1967.
* [16] L. Hörmander. The Analysis of Linear Partial Differential Operators. Springer, Berlin, Heidelberg, New York, 1976.
* [17] M.R. Zirnbauer. The Supersymmetry Method of Random Matrix Theory, Encyclopedia of Mathematical Physics, eds. J.-P. Françoise, G.L. Naber and Tsou S.T., Elsevier, Oxford, 5:151, 2006.
* [18] F.A. Berezin. Introduction to Superanalysis. D. Reidel Publishing Company, Dordrecht, 1st edition, 1987.
* [19] B. DeWitt. Supermanifolds. Cambridge University Press, Cambridge, 1st edition, 1984.
* [20] J.E. Bunder, K.B. Efetov, V.E. Kravtsov, O.M. Yevtushenko, and M.R. Zirnbauer. J. Stat. Phys., 129:809, 2007.
* [21] B.L. van der Waerden. Algebra I. Springer, Berlin, Heidelberg, New York, 8th edition, 1971.
* [22] H. Kohler and T. Guhr. J. Phys., A 38:9891, 2005.
* [23] M.R. Zirnbauer. J. Math. Phys., 37:4986, 1996.
* [24] A.E. Ingham. Proc. Camb. Phil. Soc., 29:271, 1933.
* [25] C.L. Siegel. Ann. Math., 36:527, 1935.
* [26] Y.V. Fyodorov. Nucl. Phys., B 621:643, 2002.
* [27] M.L. Mehta. Random Matrices. Academic Press Inc., New York, 2nd edition, 1991.
* [28] A. Okounkov and G. Olshanski. Math. Res. Letters, 4:69, 1997.
* [29] T. Guhr and H. Kohler. J. Math. Phys., 43:2741, 2002.
* [30] T. Guhr. Commun. Math. Phys., 176:555, 1996.
* [31] T. Guhr. Ann. Phys. (N.Y.), 250:145, 1996.
* [32] T. Guhr. J. Math. Phys., 32:336, 1991.
* [33] T. Guhr. J. Math. Phys., 34:2523, 1993.
* [34] M.J. Rothstein. Trans. Am. Math. Soc., 299:387, 1987.
* [35] F. Wegner, 1983.
* [36] F. Constantinescu. J. Stat. Phys., 50:1167, 1988.
* [37] F. Constantinescu and H.F. de Groote. J. Math. Phys., 30:981, 1989.
* [38] E. Brezin and S. Hikami. J. Phys. A: Math. Gen., 36:711, 2003.
* [39] T. Guhr and H. Kohler. J. Math. Phys., 43:2707, 2002.
* [40] M. Bergère and B. Eynard, 2008. arxiv:0805.4482v1 [math-ph].
* [41] B. Feigin, M. Jimbo, T. Miwa, and E. Mukhin. Internat. Math. Res. Notices, 23:1223, 2002.
* [42] I.G. Macdonald. Symmetric Functions and Hall polynomials. Oxford University Press, Oxford, 2nd edition, 1995.
* [43] H. Kohler, 2007. arxiv:0801.0132v1 [math-ph].
* [44] K. Życzkowski and H.-J. Sommers. J. Phys., A 36:10115, 2003.
|
arxiv-papers
| 2009-05-20T09:15:51 |
2024-09-04T02:49:02.750203
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mario Kieburg, Johan Gr\\\"onqvist, Thomas Guhr",
"submitter": "Mario Kieburg",
"url": "https://arxiv.org/abs/0905.3253"
}
|
0905.3256
|
# Comparison of the superbosonization formula and the generalized
Hubbard–Stratonovich transformation
Mario Kieburg†, Hans-Jürgen Sommers and Thomas Guhr Universität Duisburg-
Essen, Lotharstraße 1, 47048 Duisburg, Germany mario.kieburg@uni-due.de
###### Abstract
Recently, two different approaches were put forward to extend the
supersymmetry method in random matrix theory from Gaussian ensembles to
general rotation invariant ensembles. These approaches are the generalized
Hubbard–Stratonovich transformation and the superbosonization formula. Here,
we prove the equivalence of both approaches. To this end, we reduce integrals
over functions of supersymmetric Wishart–matrices to integrals over quadratic
supermatrices of certain symmetries.
###### pacs:
02.30.Px, 05.30.Ch, 05.30.-d, 05.45.Mt
††: J. Phys. A: Math. Gen.
## 1 Introduction
The supersymmetry technique is a powerful method in random matrix theory and
disordered systems. For a long time it was thought to be applicable to Gaussian probability densities only [1, 2, 3, 4]. Due to universality on the local scale of the mean level spacing [5, 6, 7, 8], this restriction was not a limitation for calculations in quantum chaos and disordered systems. Indeed, on this scale and for large matrix dimension, the results for Gaussian ensembles agree with those for other invariant matrix ensembles. In the Wigner–Dyson theory [9] and its corrections for systems with diffusive dynamics [10], Gaussian ensembles are sufficient. Furthermore, universality was also found on large scales [11]. This is of paramount importance when investigating matrix models in high–energy physics.
There are, however, situations in which one cannot simply resort to Gaussian random matrix ensembles. In high–energy physics [12] and in finance [13], the level densities of non-Gaussian ensembles are needed, and these one–point functions strongly depend on the matrix ensemble. Other examples are bound–trace and fixed–trace ensembles [14], which are both norm–dependent ensembles [15], as well as ensembles derived from a non-extensive entropy principle [16, 17, 18]. In all these cases one is interested in the non-universal behavior on specific scales.
Recently, the supersymmetry method was extended to general rotation invariant
probability densities [15, 19, 20, 21]. There are two approaches. The first
one is the generalized Hubbard–Stratonovich transformation [15, 21]. With the help of a proper Dirac–distribution in superspace, an integral over rectangular supermatrices is mapped to a supermatrix integral with a non-compact domain in the Fermion–Fermion block. The second approach is the superbosonization formula [19, 20], which maps the same integral over rectangular matrices to a supermatrix integral with a compact domain in the Fermion–Fermion block.
In this work, we prove the equivalence of the generalized Hubbard–Stratonovich
transformation with the superbosonization formula. The proof is based on
integral identities between supersymmetric Wishart–matrices and quadratic
supermatrices. The orthogonal, unitary and unitary-symplectic classes are
dealt with in a unifying way.
The article is organized as follows. In Sec. 2, we give a motivation and
introduce our notation. In Sec. 3, we define rectangular supermatrices and the
supersymmetric version of Wishart-matrices built up by supervectors. We also
give a helpful corollary for the case of arbitrary matrix dimension discussed
in Sec. 7. In Secs. 4 and 5, we present and further generalize the
superbosonization formula and the generalized Hubbard–Stratonovich
transformation, respectively. The theorem stating the equivalence of both
approaches is given in Sec. 6 including a clarification of their mutual
connection. In Sec. 7, we extend both theorems given in Secs. 4 and 5 to
arbitrary matrix dimension. Details of the proofs are given in the appendices.
## 2 Ratios of characteristic polynomials
We employ the notation defined in Refs. [22, 21]. ${\rm Herm\,}(\beta,N)$ is
either the set of $N\times N$ real symmetric ($\beta=1$), $N\times N$
hermitian ($\beta=2$) or $2N\times 2N$ self-dual ($\beta=4$) matrices,
according to the Dyson–index $\beta$. We use the complex representation of the
quaternionic numbers $\mathbb{H}$. Also, we define
$\gamma_{1}=\left\\{\begin{array}[]{ll}1&,\ \beta\in\\{2,4\\}\\\ 2&,\
\beta=1\end{array}\right.\quad,\quad\gamma_{2}=\left\\{\begin{array}[]{ll}1&,\
\beta\in\\{1,2\\}\\\ 2&,\ \beta=4\end{array}\right.$ (2.1)
and $\tilde{\gamma}=\gamma_{1}\gamma_{2}$.
The central objects in many applications of supersymmetry are averages over
ratios of characteristic polynomials [23, 24, 25]
$\displaystyle Z_{k_{1}k_{2}}(E^{-})$ $\displaystyle=$
$\displaystyle\int\limits_{{\rm
Herm\,}(\beta,N)}P(H)\frac{\prod\limits_{n=1}^{k_{2}}\det\left(H-(E_{n2}-\imath\varepsilon)\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\gamma_{2}N}\right)}{\prod\limits_{n=1}^{k_{1}}\det\left(H-(E_{n1}-\imath\varepsilon)\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}N}\right)}d[H]$ (2.2) $\displaystyle=$
$\displaystyle\int\limits_{{\rm Herm\,}(\beta,N)}P(H){\rm
Sdet\,}^{-1/\tilde{\gamma}}\left(H\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\tilde{\gamma}(k_{1}+k_{2})}-\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}N}\otimes E^{-}\right)d[H]$
where $P$ is a sufficiently integrable probability density on the matrix set
${\rm Herm\,}(\beta,N)$ invariant under the group
${\rm U\,}^{(\beta)}(N)=\left\\{\begin{array}[]{ll}\Or(N)&,\ \beta=1\\\ {\rm
U\,}(N)&,\ \beta=2\\\ {\rm USp\,}(2N)&,\ \beta=4\end{array}\right..$ (2.3)
Here, we assume that $P$ is analytic in its real independent variables. We use
the same measure for $d[H]$ as in Ref. [22] which is the product over all real
independent differentials, see also Eq. (4.26). Also, we define $E={\rm
diag\,}(E_{11},\ldots,E_{k_{1}1},E_{12},\ldots,E_{k_{2}2})\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\tilde{\gamma}}$ and
$E^{-}=E-\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\tilde{\gamma}(k_{1}+k_{2})}$.
The generating function of the $k$–point correlation function [26, 27, 15, 21]
$R_{k}(x)=\gamma_{2}^{-k}\int\limits_{{\rm
Herm\,}(\beta,N)}P(H)\prod\limits_{p=1}^{k}\tr\delta(x_{p}-H)d[H]$ (2.4)
is one application and can be computed starting from the matrix Green function
and Eq. (2.2) with $k_{1}=k_{2}=k$. Another example is the $n$–th moment of
the characteristic polynomial [28, 29, 27]
$\widehat{Z}_{n}(x,\mu)=\int\limits_{{\rm
Herm\,}(\beta,N)}P(H)\Theta(H){\det}^{n}\left(H-E\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}k}\right)d[H],$ (2.5)
where the matrix Heaviside–function $\Theta(H)$ is unity if $H$ is positive definite and zero otherwise [21].
With help of Gaussian integrals, we get an integral expression for the
determinants in Eq. (2.2). Let $\Lambda_{j}$ be the Grassmann space of
$j$–forms. We consider a complex Grassmann algebra [30]
$\Lambda=\bigoplus\limits_{j=0}^{2\gamma_{2}Nk_{2}}\Lambda_{j}$ with
$\gamma_{2}Nk_{2}$ pairs $\\{\zeta_{jn},\zeta_{jn}^{*}\\}$, $1\leq n\leq
k_{2},\ 1\leq j\leq\gamma_{2}N$, of Grassmann variables and use the
conventions of Ref. [22] for integrations over Grassmann variables. Due to the
$\mathbb{Z}_{2}$–grading, $\Lambda$ is a direct sum of the set of commuting
variables $\Lambda^{0}$ and of anticommuting variables $\Lambda^{1}$. The body
of an element in $\Lambda$ lies in $\Lambda_{0}$ while the Grassmann
generators are elements in $\Lambda_{1}$.
Let $\imath$ be the imaginary unit. We take $\gamma_{2}Nk_{1}$ pairs
$\\{z_{jn},z_{jn}^{*}\\}$, $1\leq n\leq k_{1},\ 1\leq j\leq\gamma_{2}N$, of
complex numbers and find for Eq. (2.2)
$Z_{k_{1}k_{2}}(E^{-})=(2\pi)^{\gamma_{2}N(k_{2}-k_{1})}\imath^{\gamma_{2}Nk_{2}}\int\limits_{\mathfrak{C}}\mathcal{F}P(K)\exp\left(-\imath{\rm
Str\,}BE^{-}\right)d[\zeta]d[z]$ (2.6)
where
$d[z]=\prod\limits_{p=1}^{k_{1}}\prod\limits_{j=1}^{\gamma_{2}N}dz_{jp}dz_{jp}^{*}$
,
$d[\zeta]=\prod\limits_{p=1}^{k_{2}}\prod\limits_{j=1}^{\gamma_{2}N}(d\zeta_{jp}d\zeta_{jp}^{*})$
and
$\mathfrak{C}=\mathbb{C}^{\gamma_{2}k_{1}N}\times\Lambda_{2\gamma_{2}Nk_{2}}$.
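As a minimal illustration of the Gaussian trick behind Eq. (2.6), consider the scalar case $N=k_{1}=1$, $k_{2}=0$, $\beta=2$ (illustrative numerical values, without the full prefactor bookkeeping of Eq. (2.6)): a convergent Gaussian integral over one complex variable reproduces the inverse determinant $1/(h-E^{-})$ up to a factor $\imath\pi$.

```python
import numpy as np

# Scalar check: int d(Re z) d(Im z) exp(+i (h - E^-) |z|^2) = i*pi / (h - E^-),
# convergent because Im(h - E^-) = +epsilon > 0.
h, E, eps = 0.7, 0.3, 0.05        # illustrative values
Em = E - 1j*eps                    # E^- = E - i*epsilon

r = np.linspace(0.0, 40.0, 400_001)                    # radial coordinate
integrand = 2.0*np.pi*r*np.exp(1j*(h - Em)*r**2)       # angular integral already done
numeric = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(r))   # trapezoidal rule

exact = 1j*np.pi/(h - Em)
print(numeric, exact)              # the two values agree closely
```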
The characteristic function appearing in (2.6) is defined as
$\mathcal{F}P(K)=\int\limits_{{\rm Herm\,}(\beta,N)}P(H)\exp\left(\imath\tr
HK\right)d[H].$ (2.7)
The two matrices
$K=\frac{1}{\tilde{\gamma}}V^{\dagger}V\qquad{\rm and}\qquad
B=\frac{1}{\tilde{\gamma}}VV^{\dagger}$ (2.8)
are crucial for the duality between ordinary and superspace. While $K$ is a
$\gamma_{2}N\times\gamma_{2}N$ ordinary matrix whose entries have nilpotent
parts, $B$ is a $\tilde{\gamma}(k_{1}+k_{2})\times\tilde{\gamma}(k_{1}+k_{2})$
supermatrix. They are composed of the rectangular
$\gamma_{2}N\times\tilde{\gamma}(k_{1}+k_{2})$ supermatrix
$\displaystyle V^{\dagger}|_{\beta\neq 2}$ $\displaystyle=$
$\displaystyle(z_{1},\ldots,z_{k_{1}},Yz_{1}^{*},\ldots,Yz_{k_{1}}^{*},\zeta_{1},\ldots,\zeta_{k_{2}},Y\zeta_{1}^{*},\ldots,Y\zeta_{k_{2}}^{*}),$
$\displaystyle V|_{\beta\neq 2}$ $\displaystyle=$
$\displaystyle(z_{1}^{*},\ldots,z_{k_{1}}^{*},Yz_{1},\ldots,Yz_{k_{1}},-\zeta_{1}^{*},\ldots,-\zeta_{k_{2}}^{*},Y\zeta_{1},\ldots,Y\zeta_{k_{2}})^{T},$
$\displaystyle V^{\dagger}|_{\beta=2}$ $\displaystyle=$
$\displaystyle(z_{1},\ldots,z_{k_{1}},\zeta_{1},\ldots,\zeta_{k_{2}}),$
$\displaystyle V|_{\beta=2}$ $\displaystyle=$
$\displaystyle(z_{1}^{*},\ldots,z_{k_{1}}^{*},-\zeta_{1}^{*},\ldots,-\zeta_{k_{2}}^{*})^{T}.$
(2.9)
The transposition “$T$” is the ordinary transposition and is not the
supersymmetric one. However, the adjoint “$\dagger$” is the complex
conjugation with the supersymmetric transposition “$T_{\rm S}$”
$\sigma^{T_{\rm S}}=\left[\begin{array}[]{cc}\sigma_{11}&\sigma_{12}\\\
\sigma_{21}&\sigma_{22}\end{array}\right]^{T_{\rm
S}}=\left[\begin{array}[]{cc}\sigma_{11}^{T}&\sigma_{21}^{T}\\\
-\sigma_{12}^{T}&\sigma_{22}^{T}\end{array}\right],$ (2.10)
where $\sigma$ is an arbitrary rectangular supermatrix. We introduce the
constant $\gamma_{2}N\times\gamma_{2}N$ matrix
$Y=\left\\{\begin{array}[]{ll}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{N}&,\ \beta=1\\\ Y_{s}^{T}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{N}&,\ \beta=4\end{array}\right.\ ,\ \
Y_{s}=\left[\begin{array}[]{cc}0&1\\\ -1&0\end{array}\right].$ (2.11)
The crucial duality relation [15, 21]
$\tr K^{m}={\rm Str\,}B^{m}\ ,\ \ m\in\mathbb{N},$ (2.12)
holds, connecting invariants in ordinary and superspace. As $\mathcal{F}P$
inherits the rotation invariance of $P$, the duality relation (2.12) yields
$Z_{k_{1}k_{2}}(E^{-})=(2\pi)^{\gamma_{2}N(k_{2}-k_{1})}\imath^{\gamma_{2}Nk_{2}}\int\limits_{\mathfrak{C}}\Phi(B)\exp\left(-\imath{\rm
Str\,}BE^{-}\right)d[\zeta]d[z].$ (2.13)
Here, $\Phi$ is a supersymmetric extension of a representation
$\mathcal{F}P_{0}$ of the characteristic function,
$\Phi(B)=\mathcal{F}P_{0}({\rm
Str\,}B^{m}|m\in\mathbb{N})=\mathcal{F}P_{0}(\tr
K^{m}|m\in\mathbb{N})=\mathcal{F}P(K).$ (2.14)
The representation $\mathcal{F}P_{0}$ is not unique [31]. However, the
integral (2.13) is independent of a particular choice [21].
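Restricted to the purely bosonic sector ($\beta=2$, $k_{2}=0$, hence no Grassmann variables and $\tilde{\gamma}=1$), the duality relation (2.12) reduces to the cyclic identity $\tr(V^{\dagger}V)^{m}=\tr(VV^{\dagger})^{m}$. The following sketch (illustrative dimensions and names) only checks this bosonic shadow, not the full superspace statement:

```python
import numpy as np

# K = V^dagger V is N x N, B = V V^dagger is k1 x k1; Str reduces to the trace.
rng = np.random.default_rng(1)
N, k1 = 6, 3
Vdag = rng.normal(size=(N, k1)) + 1j*rng.normal(size=(N, k1))   # columns z_1, ..., z_{k1}
V = Vdag.conj().T
K = Vdag @ V
B = V @ Vdag

for m in range(1, 6):
    tK = np.trace(np.linalg.matrix_power(K, m))
    tB = np.trace(np.linalg.matrix_power(B, m))
    print(m, np.allclose(tK, tB))   # True for every m
```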
The supermatrix $B$ fulfills the symmetry
$B^{*}=\left\\{\begin{array}[]{ll}\widetilde{Y}B\widetilde{Y}^{T}&,\
\beta\in\\{1,4\\},\\\ \widetilde{Y}B^{*}\widetilde{Y}^{T}&,\
\beta=2\end{array}\right.$ (2.15)
with the supermatrices
$\widetilde{Y}|_{\beta=1}=\left[\begin{array}[]{ccc}0&\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{1}}&0\\\ \leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{1}}&0&0\\\
0&0&Y_{s}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k_{2}}\end{array}\right]\quad,\quad\widetilde{Y}|_{\beta=4}=\left[\begin{array}[]{ccc}Y_{s}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{1}}&0&0\\\ 0&0&\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{2}}\\\ 0&\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{2}}&0\end{array}\right]$ (2.16)
and $\widetilde{Y}|_{\beta=2}=\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k_{1}+k_{2}}$ and is self-adjoint for every $\beta$. Using the
$\pi/4$–rotations
$U|_{\beta=1}=\frac{1}{\sqrt{2}}\left[\begin{array}[]{ccc}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{1}}&\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{1}}&0\\\ -\imath\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{1}}&\imath\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{k_{1}}&0\\\ 0&0&\sqrt{2}\ \leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{2k_{2}}\end{array}\right],\
U|_{\beta=4}=\frac{1}{\sqrt{2}}\left[\begin{array}[]{ccc}\sqrt{2}\
\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2k_{1}}&0&0\\\
0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k_{2}}&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k_{2}}\\\
0&-\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k_{2}}&\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k_{2}}\end{array}\right]$ (2.17)
and $U|_{\beta=2}=\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{k_{1}+k_{2}}$, $\widehat{B}=UBU^{\dagger}$ lies in the well-known
symmetric superspaces [32],
$\displaystyle\widetilde{\Sigma}_{\beta,\gamma_{1}k_{1},\gamma_{2}k_{2}}^{(\dagger)}$
$\displaystyle=$ $\displaystyle\Biggl{\\{}\sigma\in{\rm
Mat}(\tilde{\gamma}k_{1}/\tilde{\gamma}k_{2})\Biggl{|}\sigma^{\dagger}=\sigma,$
(2.20)
$\displaystyle\left.\sigma^{*}=\left\\{\begin{array}[c]{ll}\widehat{Y}_{\gamma_{1}k_{1},\gamma_{2}k_{2}}\sigma\widehat{Y}_{\gamma_{1}k_{1},\gamma_{2}k_{2}}^{T}&,\
\beta\in\\{1,4\\}\\\
\widehat{Y}_{k_{1}k_{2}}\sigma^{*}\widehat{Y}_{k_{1}k_{2}}^{T}&,\
\beta=2\end{array}\right\\}\right\\}$
where
$\left.\widehat{Y}_{pq}\right|_{\beta=1}=\left[\begin{array}[]{cc}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{p}&0\\\ 0&Y_{s}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{q}\end{array}\right]\ ,\ \
\left.\widehat{Y}_{pq}\right|_{\beta=2}=\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{p+q}\ \ \ {\rm and}\ \ \
\left.\widehat{Y}_{pq}\right|_{\beta=4}=\left[\begin{array}[]{cc}Y_{s}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{p}&0\\\ 0&\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{q}\end{array}\right].$ (2.21)
The set ${\rm Mat}(p/q)$ is the set of $(p+q)\times(p+q)$ supermatrices on the
complex Grassmann algebra $\bigoplus\limits_{j=0}^{2pq}\Lambda_{j}$. The
entries of the diagonal blocks of an element in ${\rm Mat}(p/q)$ lie in
$\Lambda^{0}$ whereas the entries of the off-diagonal block are elements in
$\Lambda^{1}$.
The rectangular supermatrix $\widehat{V}^{\dagger}=V^{\dagger}U^{\dagger}$ is
composed of real, complex or quaternionic supervectors whose adjoints form the
rows. They are given by
$\Psi_{j}^{\dagger}=\left\\{\begin{array}[]{ll}\left(\left\\{\sqrt{2}{\rm
Re\,}z_{jn},\sqrt{2}{\rm Im\,}z_{jn}\right\\}_{1\leq n\leq
k_{1}},\left\\{\zeta_{jn},\zeta^{*}_{jn}\right\\}_{1\leq n\leq
k_{2}}\right)&,\ \beta=1,\\\ \left(\left\\{z_{jn}\right\\}_{1\leq n\leq
k_{1}},\left\\{\zeta_{jn}\right\\}_{1\leq n\leq k_{2}}\right)&,\ \beta=2,\\\
\left(\left\\{\begin{array}[]{cc}z_{jn}&-z_{j+N,n}^{*}\\\
z_{j+N,n}&z_{jn}^{*}\end{array}\right\\}_{1\leq n\leq
k_{1}},\displaystyle\left\\{\begin{array}[]{cc}\zeta_{jn}^{(-)}&\zeta_{jn}^{(+)}\\\
\zeta_{jn}^{(-)*}&\zeta_{jn}^{(+)*}\end{array}\right\\}_{1\leq n\leq
k_{2}}\right)&,\ \beta=4,\end{array}\right.$ (2.22)
respectively, where $\zeta_{jn}^{(\pm)}=\imath^{(1\pm
1)/2}(\zeta_{jn}\pm\zeta_{j+N,n}^{*})/\sqrt{2}$. Then, the supermatrix
$\widehat{B}$ acquires the form
$\widehat{B}=\frac{1}{\tilde{\gamma}}\sum\limits_{j=1}^{N}\Psi_{j}\Psi_{j}^{\dagger}.$
(2.23)
The integrand in Eq. (2.13)
$F\left(\widehat{B}\right)=\Phi\left(\widehat{B}\right)\exp\left(-\imath{\rm
Str\,}E\widehat{B}\right)$ (2.24)
comprises a symmetry breaking term,
$\exists\ U\in{\rm U\,}^{(\beta)}(\gamma_{1}k_{1}/\gamma_{2}k_{2})\ {\rm\
that\ \ }F\left(\widehat{B}\right)\neq F\left(U\widehat{B}U^{\dagger}\right),$
(2.25)
according to the supergroup
${\rm
U\,}^{(\beta)}(\gamma_{1}k_{1}/\gamma_{2}k_{2})=\left\\{\begin{array}[]{ll}{\rm
UOSp\,}^{(+)}(2k_{1}/2k_{2})&,\ \beta=1\\\ {\rm U\,}(k_{1}/k_{2})&,\
\beta=2\\\ {\rm UOSp\,}^{(-)}(2k_{1}/2k_{2})&,\ \beta=4\end{array}\right..$
(2.26)
We use the notation of Refs. [33, 22] for the representations ${\rm
UOSp\,}^{(\pm)}$ of the supergroup ${\rm UOSp\,}$. These representations are
related to the classification of Riemannian symmetric superspaces by Zirnbauer
[32]. The index “$+$” in Eq. (2.26) refers to real entries in the Boson–Boson
block and to quaternionic entries in the Fermion–Fermion block and “$-$”
indicates the other way around.
## 3 Supersymmetric Wishart–matrices and some of their properties
We generalize the integrand (2.24) to arbitrary sufficiently integrable
superfunctions on rectangular
$(\gamma_{2}c+\gamma_{1}d)\times(\gamma_{2}a+\gamma_{1}b)$ supermatrices
$\widehat{V}$ on the complex Grassmann–algebra
$\Lambda=\bigoplus\limits_{j=0}^{2(ad+bc)}\Lambda_{j}$. Such a supermatrix
$\widehat{V}=\left(\Psi_{11}^{({\rm C})},\ldots,\Psi_{a1}^{({\rm
C})},\Psi_{12}^{({\rm C})},\ldots\Psi_{b2}^{({\rm
C})}\right)=\left(\Psi_{11}^{({\rm R})*}\ldots,\Psi_{c1}^{({\rm
R})*},\Psi_{12}^{({\rm R})*},\ldots\Psi_{d2}^{({\rm R})*}\right)^{T_{\rm S}}$
(3.1)
is defined by its columns
$\displaystyle\Psi_{j1}^{({\rm C})\dagger}$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{ll}\left(\left\\{x_{jn}\right\\}_{1\leq
n\leq c},\left\\{\chi_{jn},\chi_{jn}^{*}\right\\}_{1\leq n\leq d}\right)&,\
\beta=1,\\\ \left(\left\\{z_{jn}\right\\}_{1\leq n\leq
c},\left\\{\chi_{jn}\right\\}_{1\leq n\leq d}\right)&,\ \beta=2,\\\
\left(\left\\{\begin{array}[]{cc}z_{jn1}&-z_{jn2}^{*}\\\
z_{jn2}&z_{jn1}^{*}\end{array}\right\\}_{1\leq n\leq
c},\left\\{\begin{array}[]{c}\chi_{jn}\\\
\chi_{jn}^{*}\end{array}\right\\}_{1\leq n\leq d}\right)&,\
\beta=4,\end{array}\right.$ (3.9) $\displaystyle\Psi_{j2}^{({\rm C})\dagger}$
$\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{ll}\left(\left\\{\begin{array}[]{c}\zeta_{jn}\\\
\zeta_{jn}^{*}\end{array}\right\\}_{1\leq n\leq
c},\left\\{\begin{array}[]{cc}\tilde{z}_{jn1}&-\tilde{z}_{jn2}^{*}\\\
\tilde{z}_{jn2}&\tilde{z}_{jn1}^{*}\end{array}\right\\}_{1\leq n\leq
d}\right)&,\ \beta=1,\\\ \left(\left\\{\zeta_{jn}\right\\}_{1\leq n\leq
c},\left\\{\tilde{z}_{jn}\right\\}_{1\leq n\leq d}\right)&,\ \beta=2,\\\
\left(\left\\{\zeta_{jn},\zeta^{*}_{jn}\right\\}_{1\leq n\leq
c},\left\\{y_{jn}\right\\}_{1\leq n\leq d}\right)&,\
\beta=4,\end{array}\right.$ (3.17)
or by its rows
$\displaystyle\Psi_{j1}^{({\rm R})\dagger}$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{ll}\left(\left\\{x_{nj}\right\\}_{1\leq
n\leq a},\left\\{\zeta_{nj}^{*},-\zeta_{nj}\right\\}_{1\leq n\leq b}\right)&,\
\beta=1,\\\ \left(\left\\{z_{nj}^{*}\right\\}_{1\leq n\leq
a},\left\\{\zeta_{nj}^{*}\right\\}_{1\leq n\leq b}\right)&,\ \beta=2,\\\
\left(\left\\{\begin{array}[]{cc}z_{nj1}^{*}&z_{nj2}^{*}\\\
-z_{nj2}&z_{nj1}\end{array}\right\\}_{1\leq n\leq
a},\left\\{\begin{array}[]{c}\zeta_{nj}^{*}\\\
-\zeta_{nj}\end{array}\right\\}_{1\leq n\leq b}\right)&,\
\beta=4,\end{array}\right.$ (3.25) $\displaystyle\Psi_{j2}^{({\rm R})\dagger}$
$\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{ll}\left(\left\\{\begin{array}[]{c}-\chi_{nj}^{*}\\\
\chi_{nj}\end{array}\right\\}_{1\leq n\leq
a},\left\\{\begin{array}[]{cc}\tilde{z}_{nj1}^{*}&\tilde{z}_{nj2}^{*}\\\
-\tilde{z}_{nj2}&\tilde{z}_{nj1}\end{array}\right\\}_{1\leq n\leq b}\right)&,\
\beta=1,\\\ \left(\left\\{-\chi_{nj}^{*}\right\\}_{1\leq n\leq
a},\left\\{\tilde{z}_{nj}^{*}\right\\}_{1\leq n\leq b}\right)&,\ \beta=2,\\\
\left(\left\\{-\chi_{nj}^{*},\chi_{nj}\right\\}_{1\leq n\leq
a},\left\\{y_{nj}\right\\}_{1\leq n\leq b}\right)&,\
\beta=4\end{array}\right.$ (3.33)
which are real, complex and quaternionic supervectors. We use the complex
Grassmann variables $\chi_{mn}$ and $\zeta_{mn}$ and the real numbers $x_{mn}$
and $y_{mn}$. Also, we introduce the complex numbers $z_{mn}$,
$\tilde{z}_{mn}$, $z_{mnl}$ and $\tilde{z}_{mnl}$. The
$(\gamma_{2}c+\gamma_{1}d)\times(\gamma_{2}c+\gamma_{1}d)$ supermatrix
$\widehat{B}=\tilde{\gamma}^{-1}\widehat{V}\widehat{V}^{\dagger}$ can be
written in the columns of $\widehat{V}$ as in Eq. (2.23). As this supermatrix
has a form similar to the ordinary Wishart–matrices, we refer to it as a supersymmetric Wishart–matrix. The rectangular supermatrix above fulfills the
property
$\widehat{V}^{*}=\widehat{Y}_{cd}\widehat{V}\widehat{Y}_{ab}^{T}.$ (3.34)
The corresponding generating function (2.2) is an integral over a rotation
invariant superfunction $P$ on a superspace, which is sufficiently convergent
and analytic in its real independent variables,
$Z_{cd}^{ab}(E^{-})=\int\limits_{\widetilde{\Sigma}_{\beta,ab}^{(-\psi)}}P(\sigma){\rm
Sdet\,}^{-1/\tilde{\gamma}}\left(\sigma\otimes\widehat{\Pi}_{2\psi}^{({\rm
C})}-\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{2}a+\gamma_{1}b}\otimes E^{-}\right)d[\sigma],$ (3.35)
where
$E^{-}={\rm diag\,}\left(E_{11}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\gamma_{2}},\ldots,E_{c1}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}},E_{12}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\gamma_{1}},\ldots,E_{d2}\otimes\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\gamma_{1}}\right)-\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}c+\gamma_{1}d}.$ (3.36)
Let $\widetilde{\Sigma}_{\beta,ab}^{0(\dagger)}$ be a subset of
$\widetilde{\Sigma}_{\beta,ab}^{(\dagger)}$. The entries of elements in
$\widetilde{\Sigma}_{\beta,ab}^{0(\dagger)}$ lie in $\Lambda_{0}$ and
$\Lambda_{1}$. The set
$\widetilde{\Sigma}_{\beta,ab}^{(-\psi)}=\widehat{\Pi}_{-\psi}^{({\rm
R})}\widetilde{\Sigma}_{\beta,ab}^{0(\dagger)}\widehat{\Pi}_{-\psi}^{({\rm
R})}$ is the Wick–rotated set of $\widetilde{\Sigma}_{\beta,ab}^{0(\dagger)}$
by the generalized Wick–rotation $\widehat{\Pi}_{-\psi}^{({\rm R})}={\rm
diag\,}(\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{2}a},e^{-\imath\psi/2}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{1}b})$. As in Ref. [21], we introduce such
a rotation for the convergence of the integral (3.35). The matrix
$\widehat{\Pi}_{2\psi}^{({\rm C})}={\rm diag\,}(\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}c},e^{\imath\psi}\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{1}d})$ is also a Wick–rotation.
In the rest of our work, we restrict the calculations to a class of superfunctions which admit a Wick–rotation such that the integrals are convergent. We have not explicitly analysed this class of functions; however, it is very large and sufficient for physical purposes. We consider the probability distribution
$P(\sigma)=f(\sigma)\exp(-{\rm Str\,}\sigma^{2m}),$ (3.37)
where $m\in\mathbb{N}$ and $f$ is a superfunction which does not increase as fast as $\exp({\rm Str\,}\sigma^{2m})$ at infinity, in particular
$\underset{\epsilon\to\infty}{\lim}P\left(\epsilon
e^{\imath\alpha}\sigma\right)=0\Leftrightarrow\underset{\epsilon\to\infty}{\lim}\exp\left(-\epsilon
e^{\imath\alpha}{\rm Str\,}\sigma^{2m}\right)=0$ (3.38)
for every angle $\alpha\in[0,2\pi]$. Then, a Wick–rotation exists for $P$.
To guarantee the convergence of the integrals below, let
$\widehat{V}_{\psi}=\widehat{\Pi}_{\psi}^{({\rm C})}\widehat{V}$,
$\widehat{V}_{-\psi}^{\dagger}=\widehat{V}^{\dagger}\widehat{\Pi}_{\psi}^{({\rm
C})}$ and $\widehat{B}_{\psi}=\widehat{\Pi}_{\psi}^{({\rm
C})}\widehat{B}\widehat{\Pi}_{\psi}^{({\rm C})}$. Considering a function $f$
on the set of supersymmetric Wishart–matrices, we give a lemma and a corollary
which are of equal importance for the superbosonization formula and the
generalized Hubbard–Stratonovich transformation. For $b=0$, the lemma presents
the duality relation between the ordinary and superspace (2.12) which is
crucial for the calculation of (2.2). This lemma was proven in Ref. [20] by
representation theory. Here, we only state it.
###### Lemma 3.1
Let $f$ be a superfunction on rectangular supermatrices of the form (3.1) and
invariant under
$f(\widehat{V}_{\psi},\widehat{V}_{-\psi}^{\dagger})=f\left(\widehat{V}_{\psi}U^{\dagger},U\widehat{V}_{-\psi}^{\dagger}\right)\
,$ (3.39)
for all $\widehat{V}$ and $U\in{\rm U\,}^{(\beta)}(a/b)$. Then, there is a
superfunction $F$ on the ${\rm U\,}^{(\beta)}(c/d)$–symmetric supermatrices
with
$F(\widehat{B}_{\psi})=f(\widehat{V}_{\psi},\widehat{V}_{-\psi}^{\dagger})\ .$
(3.40)
The ${\rm U\,}^{(\beta)}(c/d)$–symmetric supermatrices are elements of
$\widetilde{\Sigma}_{\beta,ab}^{(\dagger)}$. The invariance condition (3.39)
implies that $f$ only depends on the rows of $\widehat{V}_{\psi}$ by
$\Psi_{nr}^{({\rm R})\dagger}\Psi_{ms}^{({\rm R})}$ for arbitrary $n,m,r$ and
$s$. These scalar products are the entries of the supermatrix
$\widehat{V}_{\psi}\widehat{V}_{-\psi}^{\dagger}$ which leads to the
statement.
The corollary below is an application of integral theorems by Wegner [34]
worked out in Refs. [35, 36] and of the Theorems III.1, III.2 and III.3 in
Ref. [22]. It states that an integration over supersymmetric Wishart–matrices
can be reduced to integrations over supersymmetric Wishart–matrices comprising
a lower dimensional rectangular supermatrix. In particular for the generating
function, it reflects the equivalence of the integral (3.35) with an
integration over smaller supermatrices [22]. We assume that
$\tilde{a}=a-2(b-\tilde{b})/\beta\geq 0$ with
$\tilde{b}=\left\\{\begin{array}[]{ll}1&,\ \beta=4\ {\rm and}\ b\in
2\mathbb{N}_{0}+1\\\ 0&,\ {\rm else}\end{array}\right..$ (3.41)
###### Corollary 3.2
Let $F$ be the superfunction of Lemma 3.1, real analytic in its real
independent entries and a Schwartz–function. Then, we find
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B}_{\psi})d[\widehat{V}]=C\int\limits_{\widetilde{\mathfrak{R}}}F(\widetilde{B}_{\psi})d[\widetilde{V}]$
(3.42)
where $\widetilde{B}=\tilde{\gamma}^{-1}\widetilde{V}\widetilde{V}^{\dagger}$. The sets
are $\mathfrak{R}=\mathbb{R}^{\beta ac+4bd/\beta}\times\Lambda_{2(ad+bc)}$ and
$\widetilde{\mathfrak{R}}=\mathbb{R}^{\beta\tilde{a}c+4\tilde{b}d/\beta}\times\Lambda_{2(\tilde{a}d+\tilde{b}c)}$,
the constant is
$C=\left[-\frac{\gamma_{1}}{2}\right]^{(b-\tilde{b})c}\left[\frac{\gamma_{2}}{2}\right]^{(a-\tilde{a})d}$
(3.43)
and the measure
$d[\widehat{V}]=\underset{1\leq l\leq\beta}{\underset{1\leq n\leq
c}{\underset{1\leq m\leq a}{\prod}}}dx_{mnl}\underset{1\leq l\leq
4/\beta}{\underset{1\leq n\leq d}{\underset{1\leq m\leq
b}{\prod}}}dy_{mnl}\underset{1\leq n\leq c}{\underset{1\leq m\leq
b}{\prod}}d\zeta_{mn}d\zeta_{mn}^{*}\underset{1\leq n\leq d}{\underset{1\leq
m\leq a}{\prod}}d\chi_{mn}d\chi_{mn}^{*}\ .$ (3.44)
The $(\gamma_{2}c+\gamma_{1}d)\times(\gamma_{2}\tilde{a}+\gamma_{1}\tilde{b})$
supermatrix $\widetilde{V}$ and its measure $d[\widetilde{V}]$ are defined analogously to $\widehat{V}$ and $d[\widehat{V}]$, respectively. Here, $x_{mnl}$ and $y_{mnl}$ are the independent real components of the real, complex and
quaternionic numbers of the supervectors $\Psi_{j1}^{({\rm R})}$ and
$\Psi_{j2}^{({\rm R})}$, respectively.
Proof:
We integrate $F$ over all supervectors $\Psi_{j1}^{({\rm R})}$ and
$\Psi_{j2}^{({\rm R})}$ except $\Psi_{11}^{({\rm R})}$. Then,
$\displaystyle\int\limits_{\mathfrak{R}^{\prime}}F(V_{\psi}V_{-\psi}^{\dagger})d[\widehat{V}_{\neq
11}]$ (3.45)
only depends on $\Psi_{11}^{({\rm R})\dagger}\Psi_{11}^{({\rm R})}$. The
integration set is $\mathfrak{R}^{\prime}=\mathbb{R}^{\beta
a(c-1)+4bd/\beta}\times\Lambda_{2(ad+b(c-1))}$ and the measure
$d[\widehat{V}_{\neq 11}]$ is $d[\widehat{V}]$ without the measure for the
supervector $\Psi_{11}^{({\rm R})}$. With help of the theorems in Refs. [34, 35, 36, 22], the integration over $\Psi_{11}^{({\rm R})}$ is up to a constant equivalent to an integration over a supervector $\widetilde{\Psi}_{11}^{({\rm R})}$. This supervector is equal to $\Psi_{11}^{({\rm R})}$ in the first $\tilde{a}$ entries and zero elsewhere. We repeat this procedure for all other supervectors, keeping in mind that we only need the invariance of $f$ under the supergroup action of ${\rm U\,}^{(\beta)}\left(b-\tilde{b}/b-\tilde{b}\right)$ as in Eq. (3.39), embedded in ${\rm U\,}^{(\beta)}(a/b)$. This invariance is preserved in each step due to the zero entries in the new supervectors.
$\square$
This corollary allows us to restrict our calculation to supermatrices with $b=0$ for all $\beta$ and with $b=1$ for $\beta=4$ only. Only the case $b=0$ is of physical interest. Thus, we give the computation for $b=0$ in the following sections and consider the case $b=1$ in Sec. 7. For $b=0$ we omit the Wick–rotation for $\widehat{B}$, as is done in Refs. [15, 21], because the integral (3.35) converges.
## 4 The superbosonization formula
We need for the following theorem the definition of the sets
$\displaystyle\Sigma_{1,pq}=\left\\{\left.\sigma=\left[\begin{array}[]{ccc}\sigma_{1}&\eta&\eta^{*}\\\
-\eta^{\dagger}&\sigma_{21}&\sigma_{22}^{(1)}\\\
\eta^{T}&\sigma_{22}^{(2)}&\sigma_{21}^{T}\end{array}\right]\in{\rm
Mat}(p/2q)\right|\sigma_{1}^{\dagger}=\sigma_{1}^{*}=\sigma_{1}\ {\rm with\
positive}\right.$ (4.4) $\displaystyle\left.{\rm definite\ body},\
\sigma_{22}^{(1)T}=-\sigma_{22}^{(1)},\
\sigma_{22}^{(2)T}=-\sigma_{22}^{(2)}\right\\},$ (4.5)
$\displaystyle\Sigma_{2,pq}=\left\\{\left.\sigma=\left[\begin{array}[]{cc}\sigma_{1}&\eta\\\
-\eta^{\dagger}&\sigma_{2}\end{array}\right]\in{\rm
Mat}(p/q)\right|\sigma_{1}^{\dagger}=\sigma_{1}\ {\rm with\ positive\
definite\ body}\right\\},$ (4.8)
$\displaystyle\Sigma_{4,pq}=\left\\{\sigma=\left[\begin{array}[]{ccc}\sigma_{11}&\sigma_{12}&\eta\\\
-\sigma_{12}^{*}&\sigma_{11}^{*}&\eta^{*}\\\
-\eta^{\dagger}&\eta^{T}&\sigma_{2}\end{array}\right]\in{\rm
Mat}(2p/q)\right|\sigma_{1}^{\dagger}=\sigma_{1}=\left[\begin{array}[]{cc}\sigma_{11}&\sigma_{12}\\\
-\sigma_{12}^{*}&\sigma_{11}^{*}\end{array}\right]$ (4.14) $\displaystyle{\rm
with\ positive\ definite\ body},\ \sigma_{2}=\sigma_{2}^{T}\Biggl{\\}}.$
(4.15)
Also, we will use the sets
$\Sigma_{\beta,pq}^{(\dagger)}=\left\\{\left.\sigma\in\Sigma_{\beta,pq}\right|\sigma_{2}^{\dagger}=\sigma_{2}\right\\}=\widetilde{\Sigma}_{\beta,pq}^{(\dagger)}\cap\Sigma_{\beta,pq}$
(4.16)
and
$\Sigma_{\beta,pq}^{({\rm
c})}=\left\\{\left.\sigma\in\Sigma_{\beta,pq}\right|\sigma_{2}\in{\rm
CU\,}^{(4/\beta)}\left(q\right)\right\\}$ (4.17)
where ${\rm CU\,}^{(\beta)}\left(q\right)$ is the set of the circular
orthogonal (COE, $\beta=1$), unitary (CUE, $\beta=2$) or unitary-symplectic
(CSE, $\beta=4$) ensembles,
${\rm CU\,}^{(\beta)}\left(q\right)=\left\\{A\in{\rm
Gl}(\gamma_{2}q,\mathbb{C})\left|\begin{array}[]{ll}A=A^{T}\in{\rm
U\,}^{(2)}(q)&,\ \beta=1\\\ A\in{\rm U\,}^{(2)}(q)&,\ \beta=2\\\
A=(Y_{s}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{q})A^{T}(Y_{s}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{q})\in{\rm U\,}^{(2)}(2q)&,\ \beta=4\end{array}\right.\right\\}$ (4.18)
The index “$\dagger$” in Eq. (4.16) refers to the self-adjointness of the
supermatrices and the index “${\rm c}$” indicates the relation to the circular
ensembles. We notice that the set classes presented above differ in the
Fermion–Fermion block. In Sec. 6, we show that this is the crucial difference
between both methods. Due to the nilpotence of $B$’s Fermion–Fermion block, we
can change the set in this block for the Fourier–transformation. The sets of
matrices in the sets above with entries in $\Lambda_{0}$ and $\Lambda_{1}$ are
denoted by $\Sigma_{\beta,pq}^{0}$, $\Sigma_{\beta,pq}^{0(\dagger)}$ and
$\Sigma_{\beta,pq}^{0({\rm c})}$, respectively.
The proof of the superbosonization formula [19, 20] given below is based on
the proofs of the superbosonization formula for arbitrary superfunctions on
real supersymmetric Wishart–matrices in Ref. [19] and for Gaussian functions
on real, complex and quaternionic Wishart–matrices in Ref. [37]. This theorem
extends the superbosonization formula of Ref. [20] to averages of square roots
of determinants over unitary-symplectically invariant ensembles, i.e.
$\beta=4$, $b=c=0$ and $d$ odd in Eq. (3.35). The proof of this theorem is
given in Appendix A.
###### Theorem 4.1 (Superbosonization formula)
Let $F$ be a conveniently integrable and analytic superfunction on the set of
$\left(\gamma_{2}c+\gamma_{1}d\right)\times\left(\gamma_{2}c+\gamma_{1}d\right)$
supermatrices and
$\kappa=\frac{a-c+1}{\gamma_{1}}+\frac{d-1}{\gamma_{2}}.$ (4.19)
With
$a\geq c\ ,$ (4.20)
we find
$\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=C_{acd}^{(\beta)}\int\limits_{\Sigma_{\beta,cd}^{0({\rm
c})}}F(\rho)\exp\left(-\varepsilon{\rm Str\,}\rho\right){\rm
Sdet\,}\rho^{\kappa}d[\rho],$ (4.21)
where the constant is
$\displaystyle
C_{acd}^{(\beta)}=\left(-2\pi\gamma_{1}\right)^{-ad}\left(-\frac{2\pi}{\gamma_{2}}\right)^{cd}2^{-c}\tilde{\gamma}^{\beta
ac/2}\frac{{\rm Vol}\left({\rm U\,}^{(\beta)}(a)\right)}{{\rm Vol}\left({\rm
U\,}^{(\beta)}(a-c)\right)}\times$
$\displaystyle\times\prod\limits_{n=1}^{d}\frac{\Gamma\left(\gamma_{1}\kappa+2(n-d)/\beta\right)}{\imath^{4(n-1)/\beta}\pi^{2(n-1)/\beta}}.$
(4.22)
We define the measure $d[\widehat{V}]$ as in Corollary 3.2 and the measure on
the right hand side is $d[\rho]=d[\rho_{1}]d[\rho_{2}]d[\eta]$ where
$\displaystyle d[\rho_{1}]$ $\displaystyle=$
$\displaystyle\prod\limits_{n=1}^{c}d\rho_{nn1}\times\left\\{\begin{array}[]{ll}\prod\limits_{1\leq
n<m\leq c}d\rho_{nm1}&,\ \beta=1,\\\ \prod\limits_{1\leq n<m\leq c}d{\rm
Re\,}\rho_{nm1}d{\rm Im\,}\rho_{nm1}&,\ \beta=2,\\\ \prod\limits_{1\leq
n<m\leq c}d{\rm Re\,}\rho_{nm11}d{\rm Im\,}\rho_{nm11}d{\rm
Re\,}\rho_{nm12}d{\rm Im\,}\rho_{nm12}&,\ \beta=4,\end{array}\right.$ (4.26)
$\displaystyle d[\rho_{2}]$ $\displaystyle=$ $\displaystyle{\rm
FU}_{d}^{(4/\beta)}|\Delta_{d}(e^{\imath\varphi_{j}})|^{4/\beta}\prod\limits_{n=1}^{d}\frac{de^{\imath\varphi_{n}}}{2\pi\imath}d\mu(U),$
(4.27) $\displaystyle d[\eta]$ $\displaystyle=$
$\displaystyle\prod\limits_{n=1}^{c}\prod\limits_{m=1}^{d}(d\eta_{nm}d\eta_{nm}^{*}).$
(4.28)
Here, $\rho_{2}=U{\rm
diag\,}\left(e^{\imath\varphi_{1}},\ldots,e^{\imath\varphi_{d}}\right)U^{\dagger}$,
$U\in{\rm U\,}^{(4/\beta)}\left(d\right)$ and $d\mu(U)$ is the normalized
Haar-measure of ${\rm U\,}^{(4/\beta)}\left(d\right)$. We introduce the
volumes of the rotation groups
${\rm Vol}\left({\rm
U\,}^{(\beta)}(n)\right)=\prod\limits_{j=1}^{n}\frac{2\pi^{\beta
j/2}}{\Gamma\left(\beta j/2\right)}$ (4.29)
and the ratio of volumes of the group flag manifold and the permutation group
${\rm
FU}_{d}^{(4/\beta)}=\frac{1}{d!}\prod\limits_{j=1}^{d}\frac{\pi^{2(j-1)/\beta}\Gamma(2/\beta)}{\Gamma(2j/\beta)}\
.$ (4.30)
The absolute value of the Vandermonde determinant $\Delta_{d}(e^{\imath\varphi_{j}})=\prod\limits_{1\leq n<m\leq d}\left(e^{\imath\varphi_{n}}-e^{\imath\varphi_{m}}\right)$ indicates that every single difference $\left(e^{\imath\varphi_{n}}-e^{\imath\varphi_{m}}\right)$ enters with sign “$+$” if $\varphi_{m}<\varphi_{n}$ and with sign “$-$” otherwise. Thus, it is not an absolute value in the complex plane.
The exponential term can also be absorbed into the superfunction $F$. We need this additional term to regularize an intermediate step in the proof.
The inequality (4.20) is crucial. For example, let $\beta=2$ and $F(\rho)=1$.
Then, the left hand side of Eq. (4.21) is not equal to zero. On the right hand
side of Eq. (4.21), the dependence on the Grassmann variables only stems from
the superdeterminant and we find
$\int\limits_{\Lambda_{2cd}}{\rm
Sdet\,}\rho^{\kappa}d[\eta]=\int\limits_{\Lambda_{2cd}}\frac{\det\left(\rho_{1}+\eta\rho_{2}^{-1}\eta^{\dagger}\right)^{\kappa}}{\det\rho_{2}^{\kappa}}d[\eta]=0$
(4.31)
for $\kappa<d$. The superdeterminant ${\rm Sdet\,}\rho$ is a polynomial of
order $2c$ in the Grassmann variables $\\{\eta_{nm},\eta_{nm}^{*}\\}$ and the
integral over the remaining variables is finite for $\kappa\geq 0$. Hence, it
is easy to see that the right hand side of Eq. (4.21) is zero for $\kappa<d$.
This inequality is equivalent to $a<c$.
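A toy version of this counting for $\beta=2$ and $c=d=1$ can be made explicit. Writing $t$ for the nilpotent product $\eta\eta^{*}$, the Berezin integral of ${\rm Sdet\,}^{\kappa}\rho$ picks out the coefficient of $t$, which is proportional to $\kappa=a-c+d=a$ and therefore vanishes exactly for $a<c$. A sketch with sympy (symbols are illustrative):

```python
import sympy as sp

# Sdet^kappa rho = (rho1 + t/rho2)**kappa / rho2**kappa with t = eta*eta^*, t**2 = 0.
# The Berezin integral over eta, eta^* returns the t-coefficient (up to a sign convention).
rho1, rho2, t, kappa = sp.symbols('rho1 rho2 t kappa')

f = (rho1 + t/rho2)**kappa / rho2**kappa
berezin = sp.simplify(sp.diff(f, t).subs(t, 0))
print(berezin)   # proportional to kappa, hence zero iff kappa = 0, i.e. a < c for c = d = 1
```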
This problem was also discussed in Ref. [31], whose authors gave a solution for the case in which (4.20) is violated. Their solution differs from our approach in Sec. 7.
## 5 The generalized Hubbard–Stratonovich transformation
The following theorem is proven in a way similar to Refs. [15, 21]. The proof
is given in Appendix B. We need the Wick–rotated set
$\Sigma_{\beta,cd}^{(\psi)}=\widehat{\Pi}_{\psi}^{({\rm
C})}\Sigma_{\beta,cd}^{0(\dagger)}\widehat{\Pi}_{\psi}^{({\rm C})}$,
particularly $\Sigma_{\beta,cd}^{(0)}=\Sigma_{\beta,cd}^{0(\dagger)}$. The
original extension of the Hubbard–Stratonovich transformation [15, 21] was
only given for $\gamma_{2}c=\gamma_{1}d=\tilde{\gamma}k$. Here, we generalize
it to arbitrary $c$ and $d$.
###### Theorem 5.1 (Generalized Hubbard–Stratonovich transformation)
Let $F$ and $\kappa$ be the same as in Theorem 4.1. If the inequality (4.20)
holds, we have
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\Psi]=$
$\displaystyle=\widetilde{C}_{acd}^{(\beta)}\int\limits_{\Sigma_{\beta,cd}^{(\psi)}}F\left(\hat{\rho}\right)\exp\left(-\varepsilon{\rm
Str\,}\hat{\rho}\right)\det\rho_{1}^{\kappa}\left(e^{-\imath\psi
d}D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\frac{\delta(r_{2})}{|\Delta_{d}(e^{\imath\psi}r_{2})|^{4/\beta}}e^{-\imath\psi
cd}d[\rho]=$
$\displaystyle=\widetilde{C}_{acd}^{(\beta)}\int\limits_{\Sigma_{\beta,cd}^{(0)}}\det\rho_{1}^{\kappa}\frac{\delta(r_{2})}{|\Delta_{d}(r_{2})|^{4/\beta}}\left((-1)^{d}D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\left.F\left(\hat{\rho}\right)\exp\left(-\varepsilon{\rm
Str\,}\hat{\rho}\right)\right|_{\psi=0}d[\rho]$ (5.1)
with
$\hat{\rho}=\left[\begin{array}[]{c|c}\rho_{1}&e^{\imath\psi/2}\rho_{\eta}\\\
\hline\cr-e^{\imath\psi/2}\rho_{\eta}^{\dagger}&e^{\imath\psi}\left(\rho_{2}-\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta}\right)\end{array}\right].$
(5.2)
The variables $r_{2}$ are the eigenvalues of the supermatrix $\rho_{2}$. The
measure $d[\rho]=d[\rho_{1}]d[\rho_{2}]d[\eta]$ is defined by Eqs. (4.26) and
(4.28). For the measure $d[\rho_{2}]$ we take the definition (4.26) for
$4/\beta$. The differential operator in Eq. (5.1) is an analog of the
Sekiguchi–differential operator [38] and has the form [21]
$D_{dr_{2}}^{(4/\beta)}=\frac{1}{\Delta_{d}(r_{2})}\det\left[r_{n2}^{d-m}\left(\frac{\partial}{\partial
r_{n2}}+(d-m)\frac{2}{\beta}\frac{1}{r_{n2}}\right)\right]_{1\leq n,m\leq d}.$
(5.3)
The constant is
$\widetilde{C}_{acd}^{(\beta)}=2^{-c}\left(2\pi\gamma_{1}\right)^{-ad}\left(\frac{2\pi}{\gamma_{2}}\right)^{cd}\tilde{\gamma}^{\beta
ac/2}\frac{{\rm Vol}\left({\rm U\,}^{(\beta)}(a)\right)}{{\rm Vol}\left({\rm
U\,}^{(\beta)}(a-c)\right){\rm FU}_{d}^{(4/\beta)}}.$ (5.4)
Since the diagonalization of $\rho_{2}$ yields an $|\Delta_{d}(r_{2})|^{4/\beta}$ in the measure, the ratio of the Dirac–distribution and the Vandermonde–determinant is well–defined for Schwartz–functions on ${\rm Herm\,}(4/\beta,d)$. Also, the action of $D_{dr_{2}}^{(4/\beta)}$ on such a Schwartz–function integrated over the corresponding rotation group is finite at zero.
The distribution in the Fermion–Fermion block in Eq. (5.1) takes for
$\beta\in\\{1,2\\}$ the simpler form [15, 21]
$\displaystyle\left(D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\frac{\delta(r_{2})}{|\Delta_{d}(r_{2})|^{4/\beta}}=$
$\displaystyle={\rm
FU}_{d}^{(4/\beta)}\prod\limits_{n=1}^{d}\frac{\Gamma\left(a-c+1+2(n-1)/\beta\right)}{(-\pi)^{2(n-1)/\beta}\Gamma\left(\gamma_{1}\kappa\right)}\prod\limits_{n=1}^{d}\frac{\partial^{\gamma_{1}\kappa-1}}{\partial
r_{n2}^{\gamma_{1}\kappa-1}}\delta(r_{2n})\ .$ (5.5)
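For instance, in the simplest case $d=1$ and $\beta=2$ one has ${\rm FU}_{1}^{(2)}=1$, $\gamma_{1}\kappa=a-c+1$ and $D_{1r_{2}}^{(2)}=\partial/\partial r_{12}$, so that both sides of Eq. (5.5) reduce to $\partial^{a-c}\delta(r_{12})/\partial r_{12}^{a-c}$.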
This expression written as a contour integral is the superbosonization formula
[39]. For $\beta=4$, we do not find such a simplification due to the term
$|\Delta(r_{2})|$ as the Jacobian in the eigenvalue–angle coordinates.
## 6 Equivalence of and connections between the two approaches
Above, we have argued that both expressions in Theorems 4.1 and 5.1 are
equivalent for $\beta\in\\{1,2\\}$. Now we address all $\beta\in\\{1,2,4\\}$.
The theorem below is proven in Appendix C. The proof treats all three cases in a unifying way and uses properties of the ordinary matrix Bessel–functions.
###### Theorem 6.1 (Equivalence of Theorems 4.1 and 5.1)
The superbosonization formula, 4.1, and the generalized Hubbard–Stratonovich
transformation, 5.1, are equivalent for superfunctions which are
Schwartz–functions and analytic in the fermionic eigenvalues.
The compact integral in the Fermion–Fermion block of the superbosonization
formula can be considered as a contour integral. In the proof of Theorem 6.1,
we find the integral identity
$\displaystyle\int\limits_{[0,2\pi]^{d}}\widetilde{F}\left(e^{\imath\varphi_{j}}\right)\left|\Delta_{d}\left(e^{\imath\varphi_{j}}\right)\right|^{4/\beta}\prod\limits_{n=1}^{d}\frac{e^{\imath(1-\gamma_{1}\kappa)\varphi_{n}}d\varphi_{n}}{2\pi}=$
(6.1) $\displaystyle=$
$\displaystyle\prod\limits_{n=1}^{d}\frac{\imath^{4(n-1)/\beta}\Gamma(1+2n/\beta)}{\Gamma(2/\beta+1)\Gamma(\gamma_{1}\kappa-2(n-1)/\beta)}\left.\left(D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\widetilde{F}(r_{2})\right|_{r_{2}=0}$
for an analytic function $\widetilde{F}$ on $\mathbb{C}^{d}$ with permutation
invariance. Hence, we can relate both constants (4.22) and (5.4),
$\frac{\widetilde{C}_{acd}^{(\beta)}}{C_{acd}^{(\beta)}}=(-1)^{d(a-c)}\prod\limits_{n=1}^{d}\frac{\imath^{4(n-1)/\beta}\Gamma(1+2n/\beta)}{\Gamma(2/\beta+1)\Gamma(\gamma_{1}\kappa-2(n-1)/\beta)}.$
(6.2)
The integral identity (6.1) is reminiscent of the residue theorem. It is the analog of the connection between the contour integral and the differential operator in the cases $\beta\in\\{1,2\\}$, see Fig. 1. Thus, the differential operator with the Dirac–distribution in the generalized Hubbard–Stratonovich transformation restricts the non-compact integral in the Fermion–Fermion block to the point zero and its neighborhood. Therefore it is equivalent to a compact Fermion–Fermion block integral as it appears in the superbosonization formula.
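A numerical sketch of the identity (6.1) in its simplest instance $d=1$, $\beta=2$ (where $\gamma_{1}\kappa=a-c+1$, $|\Delta_{1}|=1$ and $D_{1r_{2}}^{(2)}=\partial/\partial r_{12}$): both sides reduce to the $(a-c)$-th Taylor coefficient of $\widetilde{F}$ at the origin. The test function and values below are illustrative only.

```python
import numpy as np
from math import factorial

a, c = 5, 2                       # illustrative values with a >= c
m = a - c
kap = m + 1                       # gamma_1 * kappa for d = 1, beta = 2

# left hand side of (6.1): integral over the unit circle with Ftilde(z) = exp(z)
phi = np.linspace(0.0, 2.0*np.pi, 200_001)
vals = np.exp(np.exp(1j*phi)) * np.exp(1j*(1 - kap)*phi)
lhs = np.sum(0.5*(vals[1:] + vals[:-1])*np.diff(phi)) / (2.0*np.pi)

# right hand side of (6.1): (d/dr)^m exp(r) at r = 0, divided by Gamma(kappa) = m!
rhs = 1.0 / factorial(m)
print(lhs, rhs)                   # both are 1/6 for m = 3
```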
Figure 1: In the superbosonization formula, the integration of the fermionic
eigenvalues is along the unit circle in the complex plane (dotted circle). The
eigenvalue integrals in the generalized Hubbard–Stratonovich transformation
are integrations over the real axis (bold line) or on the Wick–rotated real
axis (thin line at angle $\psi$) if the differential operator acts on the
superfunction or on the Dirac–distribution at zero (bold dot, $0$),
respectively.
## 7 The general case for arbitrary positive integers $a$, $b$, $c$, $d$ and
arbitrary Dyson–index $\beta\in\\{1,2,4\\}$
We consider an application of our results. The inequality (4.20) reads
$N\geq\gamma_{1}k$ (7.1)
for the calculation of the $k$–point correlation function (2.4) with help of
the matrix Green function. For $\beta=1$, an $N\times N$ real symmetric matrix has in the absence of degeneracies $N$ different eigenvalues. However, we can
only calculate $k$–point correlation functions with $k<N/2$. For $N\to\infty$,
this restriction does not matter. But for exact finite $N$ calculations, we
have to modify the line of reasoning.
We construct the symmetry operator
$\mathfrak{S}\left(\sigma\right)=\mathfrak{S}\left(\left[\begin{array}[]{cc}\sigma_{11}&\sigma_{12}\\\
\sigma_{21}&\sigma_{22}\end{array}\right]\right)=\left[\begin{array}[]{cc}-\sigma_{22}&-\sigma_{21}\\\
\sigma_{12}&\sigma_{11}\end{array}\right]$ (7.2)
mapping a $(m_{1}+m_{2})\times(n_{1}+n_{2})$ supermatrix to a $(m_{2}+m_{1})\times(n_{2}+n_{1})$ supermatrix. This operator has the
properties
$\displaystyle\mathfrak{S}(\sigma^{\dagger})$ $\displaystyle=$
$\displaystyle\mathfrak{S}(\sigma)^{\dagger}\quad,$ (7.3)
$\displaystyle\mathfrak{S}(\sigma^{*})$ $\displaystyle=$
$\displaystyle\mathfrak{S}(\sigma)^{*},$ (7.4)
$\displaystyle\mathfrak{S}^{2}(\sigma)$ $\displaystyle=$
$\displaystyle-\sigma,$ (7.5)
$\displaystyle\mathfrak{S}\left(\left[\begin{array}[]{cc}\sigma_{11}&\sigma_{12}\\\
\sigma_{21}&\sigma_{22}\end{array}\right]\left[\begin{array}[]{cc}\rho_{11}&\rho_{12}\\\
0&0\end{array}\right]\right)$ $\displaystyle=$
$\displaystyle\mathfrak{S}\left(\left[\begin{array}[]{cc}\sigma_{11}&\sigma_{12}\\\
\sigma_{21}&\sigma_{22}\end{array}\right]\right)\mathfrak{S}\left(\left[\begin{array}[]{cc}\rho_{11}&\rho_{12}\\\
0&0\end{array}\right]\right).$ (7.14)
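As a purely structural check of the multiplicativity (7.14), the following Python sketch verifies the corresponding identity for ordinary block matrices with commuting entries and a vanishing second block row; the block dimensions are arbitrary test values, and the Grassmann character of the off-diagonal blocks plays no role in this bookkeeping.
```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, n1, n2, p1, p2 = 2, 3, 3, 2, 2, 2      # arbitrary block dimensions (test values)

def S(M, r1, r2, c1, c2):
    """Symmetry operator of Eq. (7.2) acting on an (r1+r2) x (c1+c2) block matrix."""
    A, B = M[:r1, :c1], M[:r1, c1:]
    C, D = M[r1:, :c1], M[r1:, c1:]
    return np.block([[-D, -C], [B, A]])

sigma = rng.normal(size=(m1 + m2, n1 + n2))
rho = np.zeros((n1 + n2, p1 + p2))
rho[:n1, :] = rng.normal(size=(n1, p1 + p2))   # second block row vanishes, as in Eq. (7.14)

lhs = S(sigma @ rho, m1, m2, p1, p2)
rhs = S(sigma, m1, m2, n1, n2) @ S(rho, n1, n2, p1, p2)
print(np.allclose(lhs, rhs))                   # True
```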
Let $a,\ b,\ c$, $d$ be arbitrary positive integers and $\beta\in\\{1,2,4\\}$.
Then, for the product of a $(\gamma_{2}c+\gamma_{1}d)\times(0+\gamma_{1}b)$
supermatrix with a $(0+\gamma_{1}b)\times(\gamma_{2}c+\gamma_{1}d)$
supermatrix, Eq. (7.14) reads
$\left[\begin{array}[]{c}\zeta^{\dagger}\\\
\tilde{z}^{\dagger}\end{array}\right]\left[\begin{array}[]{cc}\zeta&\tilde{z}\end{array}\right]=\mathfrak{S}\left(\left[\begin{array}[]{c}\tilde{z}^{\dagger}\\\
-\zeta^{\dagger}\end{array}\right]\right)\mathfrak{S}\left(\left[\begin{array}[]{cc}\tilde{z}&\zeta\end{array}\right]\right)=\mathfrak{S}\left(\left[\begin{array}[]{c}\tilde{z}^{\dagger}\\\
-\zeta^{\dagger}\end{array}\right]\left[\begin{array}[]{cc}\tilde{z}&\zeta\end{array}\right]\right).$
(7.15)
With help of the operator $\mathfrak{S}$, we split the supersymmetric
Wishart–matrix $\widehat{B}$ into two parts,
$\widehat{B}=\widehat{B}_{1}+\mathfrak{S}(\widehat{B}_{2})$ (7.16)
such that
$\widehat{B}_{1}=\tilde{\gamma}^{-1}\sum\limits_{j=1}^{a}\Psi_{j1}^{({\rm
C})}\Psi_{j1}^{({\rm C})\dagger}\qquad{\rm
and}\qquad\widehat{B}_{2}=\tilde{\gamma}^{-1}\sum\limits_{j=1}^{b}\mathfrak{S}\left(\Psi_{j2}^{({\rm
C})}\right)\mathfrak{S}\left(\Psi_{j2}^{({\rm C})}\right)^{\dagger}.$ (7.17)
The supervectors $\mathfrak{S}\left(\Psi_{j2}^{({\rm C})}\right)$ are of the
same form as $\Psi_{j1}^{({\rm C})}$. Let $\sigma$ be a square supermatrix,
i.e. $m_{1}=n_{1}$ and $m_{2}=n_{2}$. Then, we find the additional property
${\rm Sdet\,}\mathfrak{S}(\sigma)=(-1)^{m_{2}}{\rm Sdet\,}^{-1}\sigma.$ (7.18)
Let
$\widehat{\Sigma}_{\beta,pq}^{(0)}=\mathfrak{S}\left(\Sigma_{\beta,pq}^{(0)}\right)$,
$\widehat{\Sigma}_{\beta,pq}^{0({\rm
c})}=\mathfrak{S}\left(\Sigma_{\beta,pq}^{0({\rm c})}\right)$ and the
Wick–rotated set
$\widehat{\Sigma}_{\beta,pq}^{(\psi)}=\widehat{\Pi}_{\psi}^{({\rm
C})}\widehat{\Sigma}_{\beta,pq}^{(0)}\widehat{\Pi}_{\psi}^{({\rm C})}$. Then,
we construct the analog of the superbosonization formula and the generalized
Hubbard–Stratonovich transformation.
###### Theorem 7.1
Let $F$ be the superfunction as in Theorem 4.1 and
$\kappa=\frac{a-c+1}{\gamma_{1}}-\frac{b-d+1}{\gamma_{2}}.$ (7.19)
Also, let $e\in\mathbb{N}_{0}$ and
$\tilde{a}=a+\gamma_{1}e\qquad{\rm and}\qquad\tilde{b}=b+\gamma_{2}e$ (7.20)
with
$\tilde{a}\geq c\qquad\tilde{b}\geq d.$ (7.21)
We choose the Wick–rotation $e^{\imath\psi}$ such that all integrals are
convergent. Then, we have
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B}_{\psi})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}_{\psi}\right)d[\widehat{V}]=$
$\displaystyle=\left(-\frac{2}{\gamma_{1}}\right)^{\gamma_{2}ec}\left(\frac{2}{\gamma_{2}}\right)^{\gamma_{1}ed}\int\limits_{\widetilde{\mathfrak{R}}}F(\widetilde{B}_{\psi})\exp\left(-\varepsilon{\rm
Str\,}\widetilde{B}_{\psi}\right)d[\widetilde{V}]=$ $\displaystyle=C_{{\rm
SF}}\int\limits_{\Sigma_{\beta,cd}^{0({\rm
c})}}\int\limits_{\widehat{\Sigma}_{4/\beta,dc}^{0({\rm
c})}}d[\rho^{(2)}]d[\rho^{(1)}]F(\rho^{(1)}+e^{\imath\psi}\rho^{(2)})\exp\left[-\varepsilon{\rm
Str\,}(\rho^{(1)}+e^{\imath\psi}\rho^{(2)})\right]\times$
$\displaystyle\times{\rm Sdet\,}^{\kappa+\tilde{b}/\gamma_{2}}\rho^{(1)}{\rm
Sdet\,}^{\kappa-\tilde{a}/\gamma_{1}}\rho^{(2)}=$ (7.22)
$\displaystyle=C_{{\rm
HS}}\int\limits_{\Sigma_{\beta,cd}^{(0)}}\int\limits_{\widehat{\Sigma}_{4/\beta,cd}^{(0)}}d[\rho^{(2)}]d[\rho^{(1)}]\frac{\delta\left(r_{2}^{(1)}\right)}{\left|\Delta_{d}\left(r_{2}^{(1)}\right)\right|^{4/\beta}}\frac{\delta\left(r_{1}^{(2)}\right)}{\left|\Delta_{c}\left(r_{1}^{(2)}\right)\right|^{\beta}}{\det}^{\kappa+b/\gamma_{2}}\rho_{1}^{(1)}{\det}^{a/\gamma_{1}-\kappa}\rho_{2}^{(2)}\times$
$\displaystyle\times\left(D_{dr_{2}^{(1)}}^{(4/\beta)}\right)^{\tilde{a}-c}\left(D_{cr_{1}^{(2)}}^{(\beta)}\right)^{\tilde{b}-d}F(\hat{\rho}^{(1)}+e^{\imath\psi}\hat{\rho}^{(2)})\exp\left[-\varepsilon{\rm
Str\,}(\hat{\rho}^{(1)}+e^{\imath\psi}\hat{\rho}^{(2)})\right],$ (7.23)
where the constants are
$\displaystyle C_{{\rm SF}}$ $\displaystyle=$
$\displaystyle(-1)^{c(b-d)}e^{\imath\psi(\tilde{a}d-\tilde{b}c)}\left(\frac{2}{\gamma_{1}}\right)^{\gamma_{2}ec}\left(\frac{2}{\gamma_{2}}\right)^{\gamma_{1}ed}C_{\tilde{a}cd}^{(\beta)}C_{\tilde{b}dc}^{(4/\beta)},$
(7.24) $\displaystyle C_{{\rm HS}}$ $\displaystyle=$
$\displaystyle(-1)^{d(a-c)}e^{\imath\psi(\tilde{a}d-\tilde{b}c)}\left(-\frac{2}{\gamma_{1}}\right)^{\gamma_{2}ec}\left(-\frac{2}{\gamma_{2}}\right)^{\gamma_{1}ed}\widetilde{C}_{\tilde{a}cd}^{(\beta)}\widetilde{C}_{\tilde{b}dc}^{(4/\beta)}.$
(7.25)
Here, we define the supermatrix
$\hat{\rho}^{(1)}+e^{\imath\psi}\hat{\rho}^{(2)}=\left[\begin{array}[]{c|c}\rho_{1}^{(1)}+e^{\imath\psi}\left(\rho_{1}^{(2)}-\rho_{\tilde{\eta}}^{(2)}\rho_{2}^{(2)-1}\rho_{\tilde{\eta}}^{(2)\dagger}\right)&\rho_{\eta}^{(1)}+e^{\imath\psi}\rho_{\tilde{\eta}}^{(2)}\\\
\hline\cr-\rho_{\eta}^{(1)\dagger}-e^{\imath\psi}\rho_{\tilde{\eta}}^{(2)\dagger}&\rho_{2}^{(1)}-\rho_{\eta}^{(1)\dagger}\rho_{1}^{(1)-1}\rho_{\eta}^{(1)}+e^{\imath\psi}\rho_{2}^{(2)}\end{array}\right]$
(7.26)
The set $\widetilde{\mathfrak{R}}$ is given as in corollary 3.2. The measures
$d[\rho^{(1)}]=d[\rho_{1}^{(1)}]d[\rho_{2}^{(1)}]d[\eta]$ and
$d[\rho^{(2)}]=d[\rho_{1}^{(2)}]d[\rho_{2}^{(2)}]d[\tilde{\eta}]$ are given by
Theorem 4.1. The measures (4.26) for $\beta$ and $4/\beta$ assign
$d[\rho_{1}^{(1)}]$ and $d[\rho_{2}^{(2)}]$ in Eqs. (7.22) and (7.23),
respectively. In Eq. (7.22), $d[\rho_{2}^{(1)}]$ and $d[\rho_{1}^{(2)}]$ are
defined by the measure (4.27) for the cases $\beta$ and $4/\beta$,
respectively, and, in Eq. (7.23), they are defined by the measure (4.26) for
the cases $4/\beta$ and $\beta$, respectively. The measures $d[\eta]$ and
$d[\tilde{\eta}]$ are the product of all complex Grassmann pairs as in Eq.
(4.28).
Since this Theorem is a consequence of corollary 3.2 and Theorems 4.1 and 5.1,
the proof is quite simple.
Proof:
Let $e\in\mathbb{N}_{0}$ be as in Eq. (7.20). Then, we use corollary 3.2 to
extend the integral over $\widehat{V}$ to an integral over $\widetilde{V}$. We
split the supersymmetric Wishart–matrix $\widehat{B}$ as in Eq. (7.16). Both
Wishart–matrices $\widehat{B}_{1}$ and $\widehat{B}_{2}$ fulfill the
requirement (4.20) according to their dimensions. Thus, we apply Theorems 4.1
and 5.1 separately to $\widehat{B}_{1}$ and $\widehat{B}_{2}$. $\square$
Our approach to a violation of inequality (4.20) is quite different from the
solution given in Ref. [31]. These authors introduce a matrix which projects
the Boson–Boson block and the bosonic part of the off-diagonal blocks onto a
space of the smaller dimension $a$. Then, they integrate over all such
orthogonal projectors. This integral becomes more difficult due to an
additional measure on a curved, compact space. We instead use a second
symmetric supermatrix. Hence, up to the dimensions of the supermatrices, there
is a symmetry between the two supermatrices produced by $\mathfrak{S}$. There
is no additional complication for the integration, since the measures of both
supermatrices are of the same kind. Moreover, our approach extends the results
to the case of $\beta=4$ and odd $b$, which is not considered in Ref. [31].
## 8 Remarks and conclusions
We proved the equivalence of the generalized Hubbard–Stratonovich
transformation [15, 21] and the superbosonization formula [19, 20]. Thereby,
we generalized both approaches. The superbosonization formula was proven in a
new way and is now extended to odd dimensional supersymmetric Wishart–matrices
in the Fermion–Fermion block for the quaternionic case. The generalized
Hubbard–Stratonovich transformation was extended here to supersymmetric
Wishart–matrices of arbitrary dimension, which do not only stem from averages
over matrix Green functions [8, 27, 15, 22]. Furthermore, we obtained an
integral identity that goes beyond the restriction (4.20) on the matrix
dimensions. This approach differs from the method presented in Ref. [31] in
the integration over an additional matrix. It is also applicable to the
artificial example of $\beta=4$ and odd $b$, which has not been considered in
Ref. [31].
In the absence of Grassmann variables, the generalized Hubbard–Stratonovich
transformation and the superbosonization formula reduce to the known integral
identity for ordinary Wishart–matrices [29, 20]. In the general case with the
restriction (4.20), the two approaches differ in the Fermion–Fermion block
integration. Due to the Dirac–distribution and the differential operator, the
integration over the non-compact domain in the generalized
Hubbard–Stratonovich transformation is, by the residue theorem, equal to a
contour integral. This contour integral is equivalent to the integration over
the compact domain in the superbosonization formula. Hence, we found an
integral identity between a compact integral and a differentiated
Dirac–distribution.
## Acknowledgements
We thank Heiner Kohler for fruitful discussions. This work was supported by
Deutsche Forschungsgemeinschaft within Sonderforschungsbereich Transregio 12
“Symmetries and Universality in Mesoscopic Systems”.
## Appendix A Proof of Theorem 4.1 (Superbosonization formula)
First, we consider two particular cases. Let $d=0$ and let $a$ be an
arbitrary positive integer with $a\geq c$. Then, we find
$\widehat{B}\in\Sigma_{\beta,c0}^{0}=\Sigma_{\beta,c0}^{0(\dagger)}=\Sigma_{\beta,c0}^{0({\rm
c})}\subset{\rm Herm\,}(\beta,c).$ (1.1)
We introduce a Fourier–transformation
$\displaystyle\int\limits_{\mathbb{R}^{\beta
ac}}F(\widehat{B})\exp\left(-\varepsilon\tr\widehat{B}\right)d[\widehat{V}]=$
$\displaystyle=\displaystyle\left(\frac{\gamma_{2}}{2\pi}\right)^{c}\left(\frac{\gamma_{2}}{\pi}\right)^{\beta
c(c-1)/2}\int\limits_{{\rm Herm\,}(\beta,c)}\int\limits_{\mathbb{R}^{\beta
ac}}\mathcal{F}F(\sigma_{1})\exp\left(\imath\tr\widehat{B}\sigma_{1}^{+}\right)d[\widehat{V}]d[\sigma_{1}]$
(1.2)
where the measure $d[\sigma_{1}]$ is defined as in Eq. (4.22) and
$\sigma_{1}^{+}=\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}c}$. The Fourier–transform is
$\displaystyle\mathcal{F}F(\sigma_{1})=\int\limits_{{\rm
Herm\,}(\beta,c)}F(\rho_{1})\exp\left(-\imath\tr\rho_{1}\sigma_{1}\right)d[\rho_{1}].$
(1.3)
The integration over the supervectors, which are in this particular case
ordinary vectors, yields
$\displaystyle\int\limits_{\mathbb{R}^{\beta
ac}}\exp\left(\imath\tr\widehat{B}\sigma_{1}^{+}\right)d[\widehat{V}]=\det\left(\frac{\sigma_{1}^{+}}{\imath\gamma_{1}\pi}\right)^{-a/\gamma_{1}}.$
(1.4)
The Fourier–transform of this determinant is an Ingham–Siegel integral [40,
41]
$\displaystyle\int\limits_{{\rm
Herm\,}(\beta,c)}\exp\left(-\imath\tr\rho_{1}\sigma_{1}\right){\det}\left(-\imath\sigma_{1}^{+}\right)^{-a/\gamma_{1}}d[\sigma_{1}]=G_{a-c,c}^{(\beta)}\displaystyle\det\rho_{1}^{\kappa}\exp\left(-\varepsilon\tr\rho_{1}\right)\Theta(\rho_{1}),$
(1.5)
where the constant is
$G_{a-c,c}^{(\beta)}=\left(\frac{\gamma_{2}}{\pi}\right)^{\gamma_{2}c\kappa}\prod\limits_{j=a-c+1}^{a}\frac{2\pi^{\beta
j/2}}{\Gamma(\beta j/2)}$ (1.6)
and the exponent is
$\kappa=\frac{a-c+1}{\gamma_{1}}-\frac{1}{\gamma_{2}}.$ (1.7)
$\Gamma(.)$ is Euler’s gamma–function. This integral was recently used in
random matrix theory [29] and is normalized in our notation as in Ref. [21].
Thus, we find for Eq. (1.2)
$\int\limits_{\mathbb{R}^{\beta
ac}}F(\widehat{B})\exp\left(-\varepsilon\tr\widehat{B}\right)d[\widehat{V}]=C_{ac0}^{(\beta)}\int\limits_{\Sigma_{\beta,c0}^{0({\rm
c})}}F(\rho)\exp\left(-\varepsilon\tr\rho_{1}\right)\det\rho_{1}^{\kappa}d[\rho_{1}],$
(1.8)
which verifies the theorem in this case. The product in the constant
$C_{ac0}^{(\beta)}=2^{-c}\tilde{\gamma}^{\beta ac/2}\frac{{\rm Vol}\left({\rm
U\,}^{(\beta)}(a)\right)}{{\rm Vol}\left({\rm U\,}^{(\beta)}(a-c)\right)}$
(1.9)
is a ratio of group volumes.
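For orientation, in the simplest instance $\beta=2$, $c=1$ the Ingham–Siegel identity (1.5) reduces to the one-dimensional Fourier transform $\int_{\mathbb{R}}e^{-\imath\rho\sigma}\left(-\imath(\sigma+\imath\varepsilon)\right)^{-a}d\sigma=\frac{2\pi}{\Gamma(a)}\,\rho^{a-1}e^{-\varepsilon\rho}\,\Theta(\rho)$. The short Python check below evaluates both sides for one (hypothetical) choice of parameters; the only source of discrepancy is the truncation of the integration range.
```python
import numpy as np
from math import gamma
from scipy.integrate import quad

a, eps, rho = 3, 0.4, 1.2                      # hypothetical test values, rho > 0
f = lambda s: np.exp(-1j*rho*s) * (-1j*(s + 1j*eps))**(-a)

re, _ = quad(lambda s: f(s).real, -200.0, 200.0, limit=1000)
im, _ = quad(lambda s: f(s).imag, -200.0, 200.0, limit=1000)

lhs = re + 1j*im                               # truncated Fourier integral
rhs = 2*np.pi/gamma(a) * rho**(a - 1) * np.exp(-eps*rho)
print(lhs, rhs)                                # real parts agree, imaginary part ~ 0
```
For $\rho<0$ the same numerical integral is compatible with zero, reflecting the Heaviside function $\Theta(\rho_{1})$ in Eq. (1.5).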
In the next case, we consider $c=0$ and arbitrary $d$. We see that
$\widehat{B}\in\Sigma_{\beta,0d}^{(\dagger)}$ (1.10)
is true. We consider the integral
$\int\limits_{\Lambda_{2ad}}F(\widehat{B})\exp\left(\varepsilon\tr\widehat{B}\right)d[\widehat{V}],$
(1.11)
where the function $F$ is analytic. As in Ref. [19], we expand
$F(\widehat{B})\exp\left(\varepsilon\tr\widehat{B}\right)$ in the entries of
$\widehat{B}$ and then integrate over every single term of this expansion.
Every term is a product of entries of $\widehat{B}$ and can be generated by
differentiating $\left(\tr A\widehat{B}\right)^{n}$ with respect to
$A\in\Sigma_{\beta,0d}^{0(\dagger)}$ for a certain $n\in\mathbb{N}$. Thus, it is
sufficient to prove the integral identity
$\int\limits_{\Lambda_{2ad}}\left(\tr
A\widehat{B}\right)^{n}d[\widehat{V}]=C_{a0d}^{(\beta)}\int\limits_{\Sigma_{\beta,0d}^{0({\rm
c})}}(\tr A\rho_{2})^{n}\det\rho_{2}^{-\kappa}d[\rho_{2}].$ (1.12)
Since $\Sigma_{\beta,0d}^{0(\dagger)}$ is generated from
$\Sigma_{\beta,0d}^{0({\rm c})}$ by analytic continuation in the eigenvalues,
it is convenient to take $A\in\Sigma_{\beta,0d}^{0({\rm c})}$. Then, $A^{-1/2}$
is well-defined and $A^{-1/2}\rho_{2}A^{-1/2}\in\Sigma_{\beta,0d}^{0({\rm
c})}$. In Eq. (1.12), we perform the transformation
$\widehat{V}\rightarrow A^{-1/2}\widehat{V}\ ,\ \
\widehat{V}^{\dagger}\rightarrow\widehat{V}^{\dagger}A^{-1/2}\ \ {\rm and}\ \
\rho_{2}\rightarrow A^{-1/2}\rho_{2}A^{-1/2}.$ (1.13)
Under this change, the measures transform as
$\displaystyle d[\widehat{V}]$ $\displaystyle\rightarrow$ $\displaystyle\det
A^{a/\gamma_{1}}d[\widehat{V}]\ \ {\rm and}$ (1.14) $\displaystyle
d[\rho_{2}]$ $\displaystyle\rightarrow$ $\displaystyle\det
A^{-\kappa+a/\gamma_{1}}d[\rho_{2}],$ (1.15)
where the exponent is
$\kappa=\frac{a+1}{\gamma_{1}}+\frac{d-1}{\gamma_{2}}.$ (1.16)
Hence, we have to calculate the remaining constant defined by
$\int\limits_{\Lambda_{2ad}}\left(\tr\widehat{B}\right)^{n}d[\widehat{V}]=C_{a0d}^{(\beta)}\int\limits_{\Sigma_{\beta,0d}^{0({\rm
c})}}(\tr\rho_{2})^{n}\det\rho_{2}^{-\kappa}d[\rho_{2}].$ (1.17)
This equation holds for arbitrary $n$. Hence, it must also be valid for
$F(\widehat{B})=1$ and $\varepsilon=1$ in Eq. (1.11), i.e. for the exponential
function. The supervector integral then yields
$\int\limits_{\Lambda_{2ad}}\exp\left(\tr\widehat{B}\right)d[\widehat{V}]=\left(-2\pi\right)^{-ad}.$
(1.18)
For the corresponding integral over $\Sigma_{\beta,0d}^{0({\rm c})}$ in Eq.
(1.17), we first integrate over the group ${\rm U\,}^{(4/\beta)}\left(d\right)$
and get
$\displaystyle\displaystyle\int\limits_{\Sigma_{\beta,0d}^{0({\rm
c})}}\exp\left(\tr\rho_{2}\right)\det\rho_{2}^{-\kappa}d[\rho_{2}]=$ (1.19)
$\displaystyle={\rm
FU}_{d}^{(4/\beta)}\int\limits_{[0,2\pi]^{d}}|\Delta_{d}(e^{\imath\varphi_{j}})|^{4/\beta}\prod_{n=1}^{d}{\rm
exp}\left(\gamma_{1}e^{\imath\varphi_{n}}\right)e^{-\imath\varphi_{n}(\gamma_{1}\kappa-1)}\frac{d\varphi_{n}}{2\pi}.$
(1.20)
We evaluate this integral with the help of Selberg’s integral formula [14]. We
first assume that $\tilde{\beta}=4/\beta$ and $\gamma_{1}\kappa$ are arbitrary
positive integers and that $\tilde{\beta}$ is even. Then, we can omit the
absolute value, and Eq. (1.20) becomes
$\displaystyle\int\limits_{\Sigma_{\beta,0d}^{0({\rm
c})}}\exp\left(\tr\rho_{2}\right)\det\rho_{2}^{-\kappa}d[\rho_{2}]=\left.{\rm
FU}_{d}^{(\beta)}\Delta_{d}^{\tilde{\beta}}\left(\frac{1}{\gamma_{1}}\frac{\partial}{\partial\lambda_{j}}\right)\prod_{n=1}^{d}\frac{\left(\gamma_{1}\lambda_{n}\right)^{\gamma_{1}\kappa-1}}{\Gamma\left(\gamma_{1}\kappa\right)}\right|_{\lambda=1}.$
(1.21)
We consider another integral which is the Laguerre version of Selberg’s
integral [14]
$\displaystyle\displaystyle\int\limits_{\mathbb{R}_{+}^{d}}\Delta_{d}^{\tilde{\beta}}(x)\prod\limits_{n=1}^{d}\exp\left(-\gamma_{1}x_{n}\right)x_{n}^{\xi}dx_{n}$
$\displaystyle=$
$\displaystyle\left.\Delta_{d}^{\tilde{\beta}}\left(-\frac{1}{\gamma_{1}}\frac{\partial}{\partial\lambda_{j}}\right)\prod_{n=1}^{d}\Gamma(\xi+1)\left(\gamma_{1}\lambda_{n}\right)^{-\xi-1}\right|_{\lambda=1}=$
(1.22) $\displaystyle=$
$\displaystyle\prod\limits_{n=1}^{d}\frac{\Gamma\left(1+n\tilde{\beta}/2\right)\Gamma\left(\xi+1+(n-1)\tilde{\beta}/2\right)}{\gamma_{1}^{\xi+1+\tilde{\beta}(d-1)/2}\Gamma\left(1+\tilde{\beta}/2\right)},$
where $\xi$ is an arbitrary positive integer. Since $\tilde{\beta}$ is even,
the minus sign in the Vandermonde determinant drops out. Up to the
Gamma–functions, Eqs. (1.21) and (1.22) are polynomials in $\kappa$ and $\xi$,
respectively. We recall that (1.22) is true for every complex $\xi$. For ${\rm
Re\,}\xi>0$, we have
$\displaystyle\left|\left.\Delta_{d}^{\tilde{\beta}}\left(-\frac{1}{\gamma_{1}}\frac{\partial}{\partial\lambda_{j}}\right)\prod_{n=1}^{d}\frac{\left(\gamma_{1}\lambda_{n}\right)^{-\xi-1}}{\Gamma\left(\xi+1+(n-1)\tilde{\beta}/2\right)}\right|_{\lambda=1}\right|$
$\displaystyle\leq$ $\displaystyle{\rm const.}<\infty\ \ {\rm and}$ (1.23)
$\displaystyle\left|\gamma_{1}^{-d(\xi+1+\tilde{\beta}(d-1)/2)}\prod\limits_{n=1}^{d}\frac{\Gamma\left(1+n\tilde{\beta}/2\right)}{\Gamma(\xi+1)\Gamma\left(1+\tilde{\beta}/2\right)}\right|$
$\displaystyle\leq$ $\displaystyle{\rm const.}<\infty.$ (1.24)
The functions are bounded and regular for ${\rm Re\,}\xi>0$, so we can apply
Carlson’s theorem [14]. Identifying $\xi=-\gamma_{1}\kappa$, we find
$\displaystyle\displaystyle\int\limits_{\Sigma_{\beta,0d}^{0({\rm
c})}}\exp\left(\tr\rho_{2}\right)\det\rho_{2}^{-\kappa}d[\rho_{2}]=$
$\displaystyle=\gamma_{1}^{ad}{\rm
FU}_{d}^{(4/\beta)}\prod\limits_{n=1}^{d}\frac{\Gamma\left(1+n\tilde{\beta}/2\right)\Gamma\left(1-\gamma_{1}\kappa+(n-1)\tilde{\beta}/2\right)}{\Gamma\left(1+\tilde{\beta}/2\right)\Gamma\left(\gamma_{1}\kappa\right)\Gamma\left(1-\gamma_{1}\kappa\right)}.$
(1.25)
Due to Euler’s reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$, this
equation simplifies to
$\displaystyle\int\limits_{\Sigma_{\beta,0d}^{0({\rm
c})}}\exp\left(\tr\rho_{2}\right)\det\rho_{2}^{-\kappa}d[\rho_{2}]=\gamma_{1}^{ad}{\rm
FU}_{d}^{(4/\beta)}\prod\limits_{n=1}^{d}\frac{\imath^{4(n-1)/\beta}\Gamma\left(1+2n/\beta\right)}{\Gamma\left(1+2/\beta\right)\Gamma\left(\gamma_{1}\kappa-2(n-1)/\beta\right)}$
(1.26)
or, equivalently,
$\displaystyle\displaystyle
2^{\tilde{\beta}d(d-1)/2}\int\limits_{[0,2\pi]^{d}}\prod\limits_{1\leq n<m\leq
d}\left|\sin\left(\frac{\varphi_{n}-\varphi_{m}}{2}\right)\right|^{\tilde{\beta}}\prod_{n=1}^{d}{\rm
exp}\left(\gamma_{1}e^{\imath\varphi_{n}}\right)e^{-\imath\varphi_{n}a}\frac{d\varphi_{n}}{2\pi}=$
$\displaystyle=\displaystyle\gamma_{1}^{ad}\prod\limits_{n=1}^{d}\frac{\Gamma\left(1+n\tilde{\beta}/2\right)}{\Gamma\left(1+\tilde{\beta}/2\right)\Gamma\left(a+1+(n-1)\tilde{\beta}/2\right)}.$
(1.27)
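Identity (1.27) is easy to test numerically for small parameters. The sketch below checks the case $d=2$, $\tilde{\beta}=2$, $a=1$, $\gamma_{1}=1$ (an arbitrary test choice) by a trapezoidal rule on the torus; for these values both sides equal $1$.
```python
import numpy as np
from math import gamma

d, tb, a, g1 = 2, 2, 1, 1.0                    # d, tilde_beta, a, gamma_1 (test values)
n = 400
phi = np.linspace(0.0, 2*np.pi, n, endpoint=False)
p1, p2 = np.meshgrid(phi, phi, indexing='ij')

integrand = (np.abs(np.sin((p1 - p2)/2.0))**tb
             * np.exp(g1*np.exp(1j*p1) + g1*np.exp(1j*p2))
             * np.exp(-1j*a*(p1 + p2)))
lhs = 2**(tb*d*(d - 1)//2) * integrand.mean()  # grid mean = integral over the torus with measure dphi/(2 pi)

rhs = g1**(a*d) * np.prod([gamma(1 + m*tb/2)/(gamma(1 + tb/2)*gamma(a + 1 + (m - 1)*tb/2))
                           for m in range(1, d + 1)])
print(lhs.real, rhs)                           # both close to 1.0
```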
Since $a$ is a positive integer, the equations above are true for all
positive, even $\tilde{\beta}$. For fixed natural numbers $a$, $d$,
$\gamma_{1}$ and complex $\tilde{\beta}$ with ${\rm Re\,}\tilde{\beta}>0$, the
inequalities
$\displaystyle\displaystyle\left|\int\limits_{[0,2\pi]^{d}}\prod\limits_{1\leq
n<m\leq
d}\left|\sin\left(\frac{\varphi_{n}-\varphi_{m}}{2}\right)\right|^{\tilde{\beta}}\prod_{n=1}^{d}{\rm
exp}\left(\gamma_{1}e^{\imath\varphi_{n}}\right)e^{-\imath\varphi_{n}a}\frac{d\varphi_{n}}{2\pi}\right|\leq$
$\displaystyle\leq\displaystyle\int\limits_{[0,2\pi]^{d}}\prod\limits_{1\leq
n<m\leq
d}\left|\sin\left(\frac{\varphi_{n}-\varphi_{m}}{2}\right)\right|^{{\rm
Re\,}\tilde{\beta}}\prod_{n=1}^{d}{\rm
exp}\left(\gamma_{1}\cos\varphi_{n}\right)\frac{d\varphi_{n}}{2\pi}<\infty\ \
{\rm and}$ (1.28)
$\displaystyle\displaystyle\left|2^{-\tilde{\beta}d(d-1)/2}\gamma_{1}^{ad}\prod\limits_{n=1}^{d}\frac{\Gamma\left(1+n\tilde{\beta}/2\right)}{\Gamma\left(1+\tilde{\beta}/2\right)\Gamma\left(a+1+(n-1)\tilde{\beta}/2\right)}\right|\leq$
$\displaystyle\leq\displaystyle{\rm const.}\ 2^{-{\rm
Re\,}\tilde{\beta}d(d-1)/2}<\infty$ (1.29)
are valid and, with Carlson’s theorem, allow us to extend Eq. (1.27) to every
complex $\tilde{\beta}$, in particular to $\tilde{\beta}=1$. Thus, we find for
the constant in Eq. (1.17)
$C_{a0d}=\left(-2\pi\gamma_{1}\right)^{-ad}\left[\prod\limits_{n=1}^{d}\frac{\imath^{4(n-1)/\beta}\pi^{2(n-1)/\beta}}{\Gamma(a+1+2(n-1)/\beta)}\right]^{-1}.$
(1.30)
Now, we consider arbitrary $d$ and $a\geq c$ and split
$\Psi_{j1}^{({\rm C})}=\left[\begin{array}[]{c}\mathbf{x}_{j}\\\
\chi_{j}\end{array}\right]$ (1.31)
and
$\widehat{B}=\frac{1}{\tilde{\gamma}}\sum\limits_{j=1}^{a}\Psi_{j1}^{({\rm
C})}\Psi_{j1}^{({\rm
C})\dagger}=\left[\begin{array}[]{c|c}\displaystyle\sum\limits_{j=1}^{a}\frac{\mathbf{x}_{j}\mathbf{x}_{j}^{\dagger}}{\tilde{\gamma}}&\displaystyle\sum\limits_{j=1}^{a}\frac{\mathbf{x}_{j}\chi_{j}^{\dagger}}{\tilde{\gamma}}\\\
\hline\cr\displaystyle\sum\limits_{j=1}^{a}\frac{\chi_{j}\mathbf{x}_{j}^{\dagger}}{\tilde{\gamma}}&\displaystyle\sum\limits_{j=1}^{a}\frac{\chi_{j}\chi_{j}^{\dagger}}{\tilde{\gamma}}\end{array}\right]=\left[\begin{array}[]{cc}B_{11}&B_{12}\\\
B_{21}&B_{22}\end{array}\right]$ (1.32)
such that $\mathbf{x}_{j}$ contains all commuting variables of
$\Psi_{j1}^{({\rm C})}$ and $\chi_{j}$ contains all Grassmann variables.
Then, we replace the sub-matrices $B_{12}$, $B_{21}$ and $B_{22}$ by
Dirac–distributions,
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=$ $\displaystyle=C_{1}\int\limits_{{\rm
Herm\,}(4/\beta,d)^{2}}\int\limits_{\mathfrak{R}}\int\limits_{\left(\Lambda_{2cd}\right)^{2}}d[\eta]d[\tilde{\eta}]d[\widehat{V}]d[\tilde{\rho}_{2}]d[\sigma_{2}]F\left(\left[\begin{array}[]{cc}B_{11}&\rho_{\eta}\\\
-\rho_{\eta}^{\dagger}&\tilde{\rho_{2}}\end{array}\right]\right)\times$ (1.35)
$\displaystyle\times{\rm exp}\left[-\varepsilon{\rm
Str\,}B-\imath\left(\tr(\rho_{\eta}^{\dagger}+B_{21})\sigma_{\tilde{\eta}}+\tr\sigma_{\tilde{\eta}}^{\dagger}(\rho_{\eta}-B_{12})-\tr(\tilde{\rho_{2}}-B_{22})\sigma_{2}\right)\right],$
(1.36)
where
$C_{1}=\left(\frac{2\pi}{\tilde{\gamma}}\right)^{2cd}\left(\frac{\gamma_{1}}{\pi}\right)^{2d(d-1)/\beta}\left(\frac{\gamma_{1}}{2\pi}\right)^{d}.$
(1.37)
The matrices $\rho_{\eta}$ and $\sigma_{\tilde{\eta}}$ are rectangular
matrices depending on Grassmann variables as in the Boson–Fermion and
Fermion–Boson block in the sets (4.5-4.15). Shifting
$\chi_{j}\rightarrow\chi_{j}+\left(\sigma_{2}^{+}\right)^{-1}\sigma_{\tilde{\eta}}^{\dagger}\mathbf{x}_{j}$
and
$\chi_{j}^{\dagger}\rightarrow\chi_{j}^{\dagger}-\mathbf{x}_{j}^{\dagger}\sigma_{\tilde{\eta}}\left(\sigma_{2}^{+}\right)^{-1}$,
we get
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=$ $\displaystyle=C_{1}\int\limits_{{\rm
Herm\,}(4/\beta,d)^{2}}\int\limits_{\mathfrak{R}}\int\limits_{\left(\Lambda_{2cd}\right)^{2}}d[\eta]d[\tilde{\eta}]d[\widehat{V}]d[\tilde{\rho}_{2}]d[\sigma_{2}]F\left(\left[\begin{array}[]{cc}B_{11}&\rho_{\eta}\\\
-\rho_{\eta}^{\dagger}&\tilde{\rho_{2}}\end{array}\right]\right)\times$ (1.40)
$\displaystyle\times{\rm exp}\left[-\varepsilon{\rm
Str\,}B-\imath\left(\tr\sigma_{\tilde{\eta}}^{\dagger}B_{11}\sigma_{\tilde{\eta}}\left(\sigma_{2}^{+}\right)^{-1}+\tr\rho_{\eta}^{\dagger}\sigma_{\tilde{\eta}}+\tr\sigma_{\tilde{\eta}}^{\dagger}\rho_{\eta}-\tr(\tilde{\rho_{2}}-B_{22})\sigma_{2}\right)\right].$
(1.41)
This integral only depends on $B_{11}$ and $B_{22}$. Thus, we apply the first
case of this proof and replace $B_{11}$. We find
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=$
$\displaystyle=C_{ac0}^{(\beta)}C_{1}\int\limits_{{\rm
Herm\,}(4/\beta,d)^{2}}\int\limits_{\mathfrak{R}}\int\limits_{\left(\Lambda_{2cd}\right)^{2}}d[\chi]d[\eta]d[\tilde{\eta}]d[\rho_{1}]d[\tilde{\rho}_{2}]d[\sigma_{2}]F\left(\left[\begin{array}[]{cc}B_{11}&\rho_{\eta}\\\
-\rho_{\eta}^{\dagger}&\tilde{\rho_{2}}\end{array}\right]\right)\det\rho_{1}^{\tilde{\kappa}}\times$
(1.44) $\displaystyle\times\displaystyle{\rm exp}\left[\varepsilon(\tr
B_{22}-\tr\rho_{1})+\imath\left(\tr\sigma_{\tilde{\eta}}^{\dagger}\rho_{1}\sigma_{\tilde{\eta}}\left(\sigma_{2}^{+}\right)^{-1}-\tr\rho_{\eta}^{\dagger}\sigma_{\tilde{\eta}}-\tr\sigma_{\tilde{\eta}}^{\dagger}\rho_{\eta}+\tr(\tilde{\rho_{2}}-B_{22})\sigma_{2}\right)\right]$
(1.45)
with the exponent
$\tilde{\kappa}=\frac{a-c+1}{\gamma_{1}}-\frac{1}{\gamma_{2}}.$ (1.46)
After the further shifts
$\sigma_{\tilde{\eta}}\rightarrow\sigma_{\tilde{\eta}}-\rho_{1}^{-1}\rho_{\eta}\sigma_{2}^{+}$
and
$\sigma_{\tilde{\eta}}^{\dagger}\rightarrow\sigma_{\tilde{\eta}}^{\dagger}-\sigma_{2}^{+}\rho_{\eta}^{\dagger}\rho_{1}^{-1}$,
we integrate over $d[\tilde{\eta}]$ and $B_{22}$ and have
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=$
$\displaystyle=C_{ac0}^{(\beta)}C_{2}\int\limits_{\Sigma_{\beta,c0}^{0({\rm
c})}}\int\limits_{{\rm
Herm\,}(4/\beta,d)^{2}}\int\limits_{\Lambda_{2cd}}d[\eta]d[\rho_{1}]d[\tilde{\rho}_{2}]d[\sigma_{2}]F\left(\left[\begin{array}[]{cc}\rho_{1}&\rho_{\eta}\\\
-\rho_{\eta}^{\dagger}&\tilde{\rho_{2}}\end{array}\right]\right)\times$ (1.49)
$\displaystyle\times\displaystyle\det\rho_{1}^{\kappa}\det\left(\sigma_{2}^{+}\right)^{(a-c)/\gamma_{1}}{\rm
exp}\left[-\varepsilon\tr\rho_{1}+\imath\left(\tr\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta}\sigma_{2}^{+}+\tr\tilde{\rho_{2}}\sigma_{2}\right)\right],$
(1.50)
where the exponent is
$\kappa=\frac{a-c+1}{\gamma_{1}}+\frac{d-1}{\gamma_{2}}$ (1.51)
and the new constant is
$C_{2}=\left(\frac{\imath}{2\pi}\right)^{ad}\left(\frac{2\pi}{\tilde{\gamma}\imath}\right)^{cd}\left(\frac{\gamma_{1}}{\pi}\right)^{2d(d-1)/\beta}\left(\frac{\gamma_{1}}{2\pi}\right)^{d}.$
(1.52)
We express the determinant of $\sigma_{2}^{+}$ as Gaussian integrals, as in
Sec. 2, and define a new $(\gamma_{2}(a-c)+0)\times(0+\gamma_{1}d)$
rectangular supermatrix $\widehat{V}_{{\rm new}}$ and its corresponding
$(0+\gamma_{1}d)\times(0+\gamma_{1}d)$ supermatrix $\widehat{B}_{{\rm
new}}=\tilde{\gamma}^{-1}\widehat{V}_{{\rm new}}\widehat{V}_{{\rm
new}}^{\dagger}$. Integrating over $\sigma_{2}$ and $\tilde{\rho}_{2}$, Eq. (1.50) becomes
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=\tilde{\gamma}^{-cd}C_{ac0}^{(\beta)}\int\limits_{\Sigma_{\beta,c0}^{0({\rm
c})}}\int\limits_{\Lambda_{2(a-c)d}}F\left(\left[\begin{array}[]{c|c}\rho_{1}&\rho_{\eta}\\\
\hline\cr-\rho_{\eta}^{\dagger}&\widehat{B}_{{\rm
new}}-\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta}\end{array}\right]\right)\times$
(1.55)
$\displaystyle\times\displaystyle\exp\left(-\varepsilon\tr\rho_{1}+\varepsilon\tr(\widehat{B}_{{\rm
new}}-\eta^{\dagger}\rho_{1}^{-1}\eta)\right)\det\rho_{1}^{\kappa}d[\widehat{V}_{{\rm
new}}]d[\eta]d[\rho_{1}].$ (1.56)
Now, we apply the second case in this proof and shift
$\rho_{2}\rightarrow\rho_{2}+\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta}$ by
analytic continuation. We get the final result
$\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=C_{acd}^{(\beta)}\int\limits_{\Sigma_{\beta,cd}^{0({\rm
c})}}F\left(\rho\right)\displaystyle\exp\left(-\varepsilon{\rm
Str\,}\rho\right){\rm Sdet\,}\rho^{\kappa}d[\rho]$ (1.57)
with
$\displaystyle
C_{acd}^{(\beta)}=\tilde{\gamma}^{-cd}C_{ac0}^{(\beta)}C_{a-c,0d}^{(\beta)}=$
$\displaystyle=\left(-2\pi\gamma_{1}\right)^{-ad}\left(-\frac{2\pi}{\gamma_{2}}\right)^{cd}2^{-c}\tilde{\gamma}^{\beta
ac/2}\frac{{\rm Vol}\left({\rm U\,}^{(\beta)}(a)\right)}{{\rm Vol}\left({\rm
U\,}^{(\beta)}(a-c)\right)}\prod\limits_{n=1}^{d}\frac{\Gamma\left(\gamma_{1}\kappa+2(n-d)/\beta\right)}{\imath^{4(n-1)/\beta}\pi^{2(n-1)/\beta}}=$
$\displaystyle=\imath^{-2d(d-1)/\beta}\frac{(2\pi)^{d}\tilde{\gamma}^{\beta
ac/2-cd}}{(-2)^{(c-a)d}2^{c}}\left\\{\begin{array}[]{ll}\displaystyle\frac{2^{d^{2}}{\rm
Vol}\left({\rm U\,}^{(1)}(a)\right)}{{\rm Vol}\left({\rm
U\,}^{(1)}(a-c+2d)\right)}&,\ \beta=1\\\ \displaystyle\frac{{\rm Vol}\left({\rm
U\,}^{(2)}(a)\right)}{{\rm Vol}\left({\rm U\,}^{(2)}(a-c+d)\right)}&,\ \beta=2\\\
\displaystyle\frac{2^{-(2a+1-c)c}{\rm Vol}\left({\rm
U\,}^{(1)}(2a+1)\right)}{{\rm Vol}\left({\rm U\,}^{(1)}(2(a-c)+d+1)\right)}&,\
\beta=4\end{array}\right.\ .$ (1.61)
## Appendix B Proof of Theorem 5.1 (Generalized Hubbard–Stratonovich
transformation)
We choose a Wick–rotation $e^{\imath\psi}$ such that all calculations below are
well defined. Then, we perform a Fourier transformation
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=\widetilde{C}_{1}\int\limits_{\widetilde{\Sigma}_{\beta,cd}^{(-\psi)}}\int\limits_{\mathfrak{R}}\mathcal{F}F(\sigma)\exp\left(\imath{\rm
Str\,}\widehat{B}\sigma^{+}\right)d[\widehat{V}]d[\sigma],$ (2.1)
where $\sigma^{+}=\sigma+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}c+\gamma_{1}d}$,
$\mathcal{F}F(\sigma)=\int\limits_{\widetilde{\Sigma}_{\beta,cd}^{(\psi)}}F(\rho)\exp\left(-\imath{\rm
Str\,}\rho\sigma\right)d[\rho],$ (2.2)
and the constant is
$\displaystyle\widetilde{C}_{1}=\left(\frac{2\pi}{\tilde{\gamma}}\right)^{2cd}\left(\frac{\gamma_{2}}{2\pi}\right)^{c}\left(\frac{\gamma_{2}}{\pi}\right)^{\beta
c(c-1)/2}\left(\frac{\gamma_{1}}{2\pi}\right)^{d}\left(\frac{\gamma_{1}}{\pi}\right)^{2d(d-1)/\beta}.$
(2.3)
The integration over $\widehat{V}$ yields
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=\widetilde{C}_{2}\int\limits_{\widetilde{\Sigma}_{\beta,cd}^{(-\psi)}}\mathcal{F}F(\sigma){\rm
Sdet\,}^{-a/\gamma_{1}}\sigma^{+}d[\sigma]$ (2.4)
with
$\displaystyle\widetilde{C}_{2}=\left(\frac{2\pi}{\tilde{\gamma}}\right)^{2cd}\left(\frac{\gamma_{2}}{2\pi}\right)^{c}\left(\frac{\gamma_{2}}{\pi}\right)^{\beta
c(c-1)/2}\left(\frac{\gamma_{1}}{2\pi}\right)^{d}\left(\frac{\gamma_{1}}{\pi}\right)^{2d(d-1)/\beta}\left(\frac{\imath}{2\pi}\right)^{ad}\left(\gamma_{1}\pi\imath\right)^{\beta
ac/2}.$ (2.5)
We transform this result back by a Fourier–transformation
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=\widetilde{C}_{2}\int\limits_{\widetilde{\Sigma}_{\beta,cd}^{(\psi)}}F(\rho)I_{cd}^{(\beta,a)}(\rho)\exp\left(-\varepsilon{\rm
Str\,}\rho\right)d[\rho],$ (2.6)
where we have to calculate the supersymmetric Ingham–Siegel integral
$I_{cd}^{(\beta,a)}(\rho)=\int\limits_{\widetilde{\Sigma}_{\beta,cd}^{(-\psi)}}\exp\left(-\imath{\rm
Str\,}\rho\sigma^{+}\right){\rm Sdet\,}^{-a/\gamma_{1}}\sigma^{+}d[\sigma].$
(2.7)
This distribution is rotation invariant under ${\rm U\,}^{(\beta)}(c/d)$. The
ordinary version, $d=0$, of the Ingham–Siegel integral (2.7) is Eq. (1.5).
After performing four shifts
$\displaystyle\sigma_{1}$ $\displaystyle\rightarrow$
$\displaystyle\sigma_{1}-\sigma_{\tilde{\eta}}\left(\sigma_{2}+\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{1}d}\right)^{-1}\sigma_{\tilde{\eta}}^{\dagger},$ (2.8)
$\displaystyle\sigma_{\tilde{\eta}}$ $\displaystyle\rightarrow$
$\displaystyle\sigma_{\tilde{\eta}}-\rho_{1}^{-1}\rho_{\eta}\left(\sigma_{2}+\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{1}d}\right),$ (2.9) $\displaystyle\sigma_{\tilde{\eta}}^{\dagger}$
$\displaystyle\rightarrow$
$\displaystyle\sigma_{\tilde{\eta}}^{\dagger}-\left(\sigma_{2}+\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{1}d}\right)\rho_{\eta}^{\dagger}\rho_{1}^{-1},$ (2.10)
$\displaystyle\rho_{2}$ $\displaystyle\rightarrow$
$\displaystyle\rho_{2}-\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta},$ (2.11)
and defining
$\hat{\rho}=\left[\begin{array}[]{c|c}\rho_{1}&e^{\imath\psi/2}\rho_{\eta}\\\
\hline\cr-e^{\imath\psi/2}\rho_{\eta}^{\dagger}&e^{\imath\psi}\left(\rho_{2}-\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta}\right)\end{array}\right],$
(2.12)
we find
$\displaystyle\int\limits_{\mathfrak{R}}F(\widehat{B})\exp\left(-\varepsilon{\rm
Str\,}\widehat{B}\right)d[\widehat{V}]=\widetilde{C}_{2}\int\limits_{\widetilde{\Sigma}_{\beta,cd}^{(\psi)}}F\left(\hat{\rho}\right)\widetilde{I}(\rho)\exp\left(-\varepsilon{\rm
Str\,}\hat{\rho}\right)d[\rho],$ (2.13)
where
$\displaystyle\displaystyle\widetilde{I}(\rho)=\displaystyle\int\limits_{\widetilde{\Sigma}_{\beta,cd}^{(-\psi)}}{\rm
exp}\left[{\varepsilon{\rm
Str\,}\rho-\imath\left(\tr\rho_{1}\sigma_{1}-\tr\rho_{2}\sigma_{2}+\tr\sigma_{\tilde{\eta}}^{\dagger}\rho_{1}\sigma_{\tilde{\eta}}(\sigma_{2}+\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{1}d})^{-1}\right)}\right]\times$
$\displaystyle\times\displaystyle\left(\frac{\det(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize
1}_{\gamma_{1}d})}{\det(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small
1\kern-3.8pt\normalsize 1}_{\gamma_{2}c})}\right)^{a/\gamma_{1}}d[\sigma].$
(2.14)
We integrate over $d[\tilde{\eta}]$ and apply Eq. (1.5) for the
$d[\sigma_{1}]$–integration. Then, Eq. (2.14) reads
$\displaystyle\displaystyle\widetilde{I}(\rho)=\displaystyle\widetilde{C}_{3}e^{-\imath\psi
cd}\det\rho_{1}^{\kappa}\Theta(\rho_{1})\times$
$\displaystyle\times\int\limits_{{\rm
Herm\,}(4/\beta,d)}\exp\left(-\imath\tr\rho_{2}(\sigma_{2}+\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{1}d})\right)\det(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon)^{(a-c)/\gamma_{1}}d[\sigma_{2}]$
(2.15)
with the constant
$\displaystyle\widetilde{C}_{3}=\imath^{-\beta
ac/2}\left(\frac{\tilde{\gamma}}{2\pi\imath}\right)^{cd}G_{a-c,c}^{(\beta)},$
(2.16)
see Eq. (1.6). The exponent $\kappa$ is the same as in Eq. (4.19). As in Ref.
[21], we decompose $\sigma_{2}$ into angles and eigenvalues and integrate over
the angles. Thus, we get the ordinary matrix Bessel–function
$\varphi_{d}^{(4/\beta)}(r_{2},s_{2})=\int\limits_{{\rm
U\,}^{(4/\beta)}(d)}\exp\left(\imath\tr r_{2}Us_{2}U^{\dagger}\right)d\mu(U)$
(2.17)
in Eq. (2.15), which is explicitly known only for certain $\beta$ and $d$.
However, the analog of the Sekiguchi differential operator for the ordinary
matrix Bessel–functions $D_{dr_{2}}^{(4/\beta)}$, see Eq. (5.3), fulfills the
eigenvalue equation
$D_{dr_{2}}^{(4/\beta)}\varphi_{d}^{(4/\beta)}(r_{2},s_{2})=(\imath\gamma_{1})^{d}\det
s_{2}^{1/\gamma_{1}}\varphi_{d}^{(4/\beta)}(r_{2},s_{2}).$ (2.18)
Since the determinant of $\sigma_{2}$ stands in the numerator, we shift
$\sigma_{2}\rightarrow\sigma_{2}-\imath
e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize
1}_{\gamma_{1}d}$ and replace the determinants in Eq. (2.15) by
$D_{dr_{2}}^{(4/\beta)}$. After an integration over $\sigma_{2}$, we have
$\displaystyle\widetilde{I}(\rho)=\displaystyle\widetilde{C}_{4}e^{-\imath\psi
cd}\det\rho_{1}^{\kappa}\Theta(\rho_{1})\left(e^{-\imath\psi
d}D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\frac{\delta(r_{2})}{|\Delta_{d}(e^{\imath\psi}r_{2})|^{4/\beta}}.$
(2.19)
The constant is
$\widetilde{C}_{4}=\imath^{-\beta
ac/2}\left(\frac{\tilde{\gamma}}{2\pi\imath}\right)^{cd}G_{a-c,c}^{(\beta)}(\imath\gamma_{1})^{(c-a)d}\left(\frac{\pi}{\gamma_{1}}\right)^{2d(d-1)/\beta}\left(\frac{2\pi}{\gamma_{1}}\right)^{d}\frac{1}{{\rm
FU}_{d}^{(4/\beta)}}.$ (2.20)
Summarizing the constants (2.5) and (2.20), we get
$\widetilde{C}_{acd}^{(\beta)}=\widetilde{C}_{2}\widetilde{C}_{4}=2^{-c}\left(2\pi\gamma_{1}\right)^{-ad}\left(\frac{2\pi}{\gamma_{2}}\right)^{cd}\tilde{\gamma}^{\beta
ac/2}\frac{{\rm Vol}\left({\rm U\,}^{(\beta)}(a)\right)}{{\rm Vol}\left({\rm
U\,}^{(\beta)}(a-c)\right){\rm FU}_{d}^{(4/\beta)}}.$ (2.21)
Due to the Dirac–distribution, we can shift the operator
$D_{dr_{2}}^{(4/\beta)}$ from the Dirac–distribution onto the superfunction and
remove the Wick–rotation. This yields the statement of the Theorem.
## Appendix C Proof of Theorem 6.1 (Equivalence of both approaches)
We define the function
$\displaystyle\widetilde{F}(r_{2})$ $\displaystyle=$
$\displaystyle\int\limits_{{\rm U\,}^{4/\beta}(d)}\int\limits_{{\rm
Herm\,}(\beta,c)}\int\limits_{\Lambda_{2cd}}F\left(\left[\begin{array}[]{c|c}\rho_{1}&\rho_{\eta}\\\
\hline\cr-\rho_{\eta}^{\dagger}&Ur_{2}U^{\dagger}-\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta}\end{array}\right]\right)\times$
(3.3) $\displaystyle\times$
$\displaystyle\exp\left[-\varepsilon\left(\tr\rho_{1}-\tr(r_{2}-\rho_{\eta}^{\dagger}\rho_{1}^{-1}\rho_{\eta})\right)\right]{\det}^{\kappa}\rho_{1}d[\eta]d[\rho_{1}]d\mu(U).$
(3.4)
Then, we have to prove
$C_{acd}^{(\beta)}\int\limits_{[0,2\pi]^{d}}\widetilde{F}\left(e^{\imath\varphi_{j}}\right)\left|\Delta_{d}\left(e^{\imath\varphi_{j}}\right)\right|^{4/\beta}\prod\limits_{n=1}^{d}\frac{e^{\imath(1-\gamma_{1}\kappa)\varphi_{n}}d\varphi_{n}}{2\pi}=\widetilde{C}_{acd}^{(\beta)}\left.\left((-1)^{d}D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\widetilde{F}(r_{2})\right|_{r_{2}=0}.$
(3.5)
Since $\widetilde{F}$ is permutation invariant and a Schwartz–function, we
express $\widetilde{F}$ as an integral over ordinary matrix Bessel–functions,
$\widetilde{F}(r_{2})=\int\limits_{\mathbb{R}^{d}}g(q)\varphi_{d}^{(4/\beta)}(r_{2},q)|\Delta_{d}(q)|^{4/\beta}dq,$
(3.6)
where $g$ is a Schwartz–function. The integral and the differential operator
in Eq. (3.5) commute with the integral in Eq. (3.6). Thus, we only need to
prove
$\displaystyle
C_{acd}^{(\beta)}\int\limits_{[0,2\pi]^{d}}\varphi_{d}^{(4/\beta)}\left(e^{\imath\varphi_{j}},q\right)\left|\Delta_{d}\left(e^{\imath\varphi_{j}}\right)\right|^{4/\beta}\prod\limits_{n=1}^{d}\frac{e^{\imath(1-\gamma_{1}\kappa)\varphi_{n}}d\varphi_{n}}{2\pi}=$
(3.7) $\displaystyle=$
$\displaystyle\widetilde{C}_{acd}^{(\beta)}\left.\left((-1)^{d}D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\varphi_{d}^{(4/\beta)}(r_{2},q)\right|_{r_{2}=0}$
for all $q\in\mathcal{S}_{1}^{d}$, where $\mathcal{S}_{1}$ is the unit circle
in the complex plane. With the help of Eq. (2.18), the right hand side of this
equation reads
$\left.\left(D_{dr_{2}}^{(4/\beta)}\right)^{a-c}\varphi_{d}^{(4/\beta)}(r_{2},q)\right|_{r_{2}=0}=(-\imath\gamma_{1})^{d(a-c)}\det
q^{(a-c)/\gamma_{1}}.$ (3.8)
The components of $q$ are complex phase factors. The integral representation
of the ordinary matrix Bessel–functions (2.17) and the $d[\varphi]$–integral
in Eq. (3.7) form the integral over the circular ensembles ${\rm
CU\,}^{(4/\beta)}(d)$. Thus, $q$ can be absorbed by $e^{\imath\varphi_{j}}$
and we find
$\displaystyle\int\limits_{[0,2\pi]^{d}}\varphi_{d}^{(4/\beta)}\left(e^{\imath\varphi_{j}},q\right)\left|\Delta_{d}\left(e^{\imath\varphi_{j}}\right)\right|^{4/\beta}\prod\limits_{n=1}^{d}\frac{e^{\imath(1-\gamma_{1}\kappa)\varphi_{n}}d\varphi_{n}}{2\pi}=$
(3.9) $\displaystyle=$ $\displaystyle\det
q^{(a-c)/\gamma_{1}}\int\limits_{[0,2\pi]^{d}}\varphi_{d}^{(4/\beta)}\left(e^{\imath\varphi_{j}},1\right)\left|\Delta_{d}\left(e^{\imath\varphi_{j}}\right)\right|^{4/\beta}\prod\limits_{n=1}^{d}\frac{e^{\imath(1-\gamma_{1}\kappa)\varphi_{n}}d\varphi_{n}}{2\pi}.$
At $q=1$, the ordinary matrix Bessel–function reduces to the exponential function
$\varphi_{d}^{(4/\beta)}\left(e^{\imath\varphi_{j}},1\right)={\rm
exp}\left(\imath\gamma_{1}\sum\limits_{n=1}^{d}e^{\imath\varphi_{n}}\right).$
(3.10)
With this exponential function and Eq. (1.27), the integral on the left hand
side of Eq. (3.9) yields
$\displaystyle\int\limits_{[0,2\pi]^{d}}\left|\Delta_{d}\left(e^{\imath\varphi_{j}}\right)\right|^{4/\beta}\prod\limits_{n=1}^{d}\frac{e^{\imath(1-\gamma_{1}\kappa)\varphi_{n}}{\rm
exp}\left(\imath\gamma_{1}e^{\imath\varphi_{n}}\right)d\varphi_{n}}{2\pi}=$
(3.11) $\displaystyle=$
$\displaystyle(\imath\gamma_{1})^{(a-c)d}\prod\limits_{n=1}^{d}\frac{\imath^{4(n-1)/\beta}\Gamma\left(1+2n/\beta\right)}{\Gamma\left(1+2/\beta\right)\Gamma\left(a-c+1+2(n-1)/\beta\right)}=$
$\displaystyle=$ $\displaystyle\frac{(\imath\gamma_{1})^{(a-c)d}}{{\rm
FU}_{d}^{(4/\beta)}}\prod\limits_{n=1}^{d}\frac{\imath^{4(n-1)/\beta}\pi^{2(n-1)/\beta}}{\Gamma\left(a-c+1+2(n-1)/\beta\right)}.$
Hence, the normalizations on both sides of Eq. (3.5) agree.
## References
* [1] K.B. Efetov. Adv. Phys., 32:53, 1983.
* [2] J.J.M. Verbaarschot and M.R. Zirnbauer. J. Phys., A 17:1093, 1985.
* [3] J.J.M. Verbaarschot, H.A. Weidenmüller, and M.R. Zirnbauer. Phys. Rep., 129:367, 1985.
* [4] K.B. Efetov. Supersymmetry in Disorder and Chaos. Cambridge University Press, Cambridge, 1st edition, 1997.
* [5] E. Brezin and A. Zee. Nucl. Phys., B 402:613, 1993.
* [6] E. Brezin and A. Zee. C.R. Acad. Sci., 17:735, 1993.
* [7] G. Hackenbroich and H.A. Weidenmüller. Phys. Rev. Lett., 74:4118, 1995.
* [8] T. Guhr, A. Müller-Groeling, and H.A. Weidenmüller. Phys. Rep., 299:189, 1998.
* [9] C.W.J. Beenakker. Rev. Mod. Phys., 69:733, 1997.
* [10] A.D. Mirlin. Phys. Rep., 326:259, 2000.
* [11] J. Ambjørn, J. Jurkiewicz, and Yu. M. Makeenko. Phys. Lett., B 251:517, 1993.
* [12] E. Brezin, C. Itzykson, G. Parisi, and J. Zuber. Commun. Math. Phys., 59:35, 1978.
* [13] L. Laloux, P. Cizeau, J.P. Bouchard, and M. Potters. Phys. Rev. Lett., 83:1467, 1999.
* [14] M.L. Mehta. Random Matrices. Academic Press Inc., New York, 3rd edition, 2004.
* [15] T. Guhr. J. Phys., A 39:13191, 2006.
* [16] F. Toscano, R.O. Vallejos, and C. Tsallis. Phys. Rev., E 69:066131, 2004.
* [17] A.C. Bertuola, O. Bohigas, and M.P. Pato. Phys. Rev., E 70:065102(R), 2004.
* [18] Y.A. Abul-Magd. Phys. Lett., A 333:16, 2004.
* [19] H.-J. Sommers. Acta Phys. Pol., B 38:1001, 2007.
* [20] P. Littelmann, H.-J. Sommers, and M.R. Zirnbauer. Commun. Math. Phys., 283:343, 2008.
* [21] M. Kieburg, J. Grönqvist, and T. Guhr. J. Phys., A 42:275205, 2009.
* [22] M. Kieburg, H. Kohler, and T. Guhr. J. Math. Phys., 50:013528, 2009.
* [23] G. Akemann and Y.V. Fyodorov. Nucl. Phys., B 664:457, 2003.
* [24] G. Akemann and A. Pottier. J. Phys., A 37:L453, 2004.
* [25] A. Borodin and E. Strahov. Commun. Pure Appl. Math., 59:161, 2005.
* [26] E. Brezin and S. Hikami. Commun. Math. Phys., 214:111, 2000.
* [27] M.R. Zirnbauer. The Supersymmetry Method of Random Matrix Theory, Encyclopedia of Mathematical Physics, eds. J.-P. Françoise, G.L. Naber and Tsou S.T., Elsevier, Oxford, 5:151, 2006.
* [28] M.L. Mehta and J.-M. Normand. J. Phys. A: Math. Gen., 34:1, 2001.
* [29] Y.V. Fyodorov. Nucl. Phys., B 621:643, 2002.
* [30] F.A. Berezin. Introduction to Superanalysis. D. Reidel Publishing Company, Dordrecht, 1st edition, 1987.
* [31] J.E. Bunder, K.B. Efetov, K.B. Kravtsov, O.M. Yevtushenko, and M.R. Zirnbauer. J. Stat. Phys., 129:809, 2007.
* [32] M.R. Zirnbauer. J. Math. Phys., 37:4986, 1996.
* [33] H. Kohler and T. Guhr. J. Phys., A 38:9891, 2005.
* [34] F. Wegner, 1983. unpublished notes.
* [35] F. Constantinescu. J. Stat. Phys., 50:1167, 1988.
* [36] F. Constantinescu and H.F. de Groote. J. Math. Phys., 30:981, 1989.
* [37] H.-J. Sommers, 2008. lecture notes: www.sfbtr12.uni-koeln.de.
* [38] A. Okounkov and G. Olshanski. Math. Res. Letters, 4:69, 1997.
* [39] F. Basile and G. Akemann. JHEP, page 0712:043, 2007.
* [40] A.E. Ingham. Proc. Camb. Phil. Soc., 29:271, 1933.
* [41] C.L. Siegel. Ann. Math., 36:527, 1935.
|
arxiv-papers
| 2009-05-20T09:24:03 |
2024-09-04T02:49:02.765169
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mario Kieburg, Hans-J\\\"urgen Sommers, Thomas Guhr",
"submitter": "Mario Kieburg",
"url": "https://arxiv.org/abs/0905.3256"
}
|
0905.3326
|
# Volatility derivatives in market models with jumps
Harry Lo Imperial College London and Swiss Re lo.harry@gmail.com and
Aleksandar Mijatović Department of Mathematics, Imperial College London
a.mijatovic@imperial.ac.uk
###### Abstract.
It is well documented that a model for the underlying asset price process that
seeks to capture the behaviour of the market prices of vanilla options needs
to exhibit both diffusion and jump features. In this paper we assume that the
asset price process $S$ is Markov with càdlàg paths and propose a scheme for
computing the law of the realized variance of the log returns accrued while
the asset was trading in a prespecified corridor. We thus obtain an algorithm
for pricing and hedging volatility derivatives and derivatives on the
corridor-realized variance in such a market. The class of models under
consideration is large, as it encompasses jump-diffusion and Lévy processes.
We prove the weak convergence of the scheme and describe in detail the
implementation of the algorithm in the characteristic cases where $S$ is a CEV
process (continuous trajectories), a variance gamma process (jumps with
independent increments) or an infinite activity jump-diffusion (discontinuous
trajectories with dependent increments).
We would like to thank Martijn Pistorius for many useful discussions.
## 1\. Introduction
Derivative securities on the realized variance of the log returns of an
underlying asset price process trade actively in the financial markets. Such
derivatives play an important role in risk management and are also used for
expressing a view on the future behaviour of volatility in the underlying
market. Since the liquid contracts have both linear (variance swaps) and non-
linear (square-root = volatility swaps, hockey stick = variance options)
payoffs, it is very important to have a robust algorithm for computing the
entire law of the realized variance. Often such contingent claims have an
additional feature, which makes them cheaper and hence more attractive to the
investor, that stipulates that variance of log returns accrues only when the
spot price is trading in a contract-defined corridor (see Subsection 2.1 for
the precise definitions of such derivatives).
It is clear from these definitions that, in order to manage the risks that
arise in the context of volatility derivatives, one needs to apply the same
modelling framework that is being used for pricing and hedging vanilla options
on the underlying asset. It has therefore been argued that the pricing and
hedging of volatility derivatives should be done using models with jumps and
stochastic volatility (see for example [10], Chapter 11). In this paper we
propose a scheme for computing the distribution of the realized variance and
the corridor-realized variance when the underlying process $S=(S_{t})_{t\geq 0}$
is a Markov process with possibly discontinuous trajectories, thus obtaining
an algorithm for pricing and hedging all the payoffs mentioned above. Our main
assumption is that the Markov dimension of $S$ is equal to one (i.e. we assume
that the future and past of the process $S$ are independent given its present
value). We do not make any additional assumptions on the structure of the
increments or the distributional properties of the process $S$. This class of
processes is large as it encompasses one dimensional jump-diffusions and Lévy
processes.
The algorithm consists of two steps. In the first step the original Markov
process $S$ under a risk-neutral measure is approximated by a continuous-time
finite state Markov chain $X=(X_{t})_{t\geq 0}$. This is achieved by
approximating the generator of $S$ by a generator matrix for $X$. The second
step consists of pricing the corresponding volatility derivative in the
approximate model $X$. It should be stressed that the two steps are
independent of each other but both clearly contribute to the accuracy of the
scheme. In other words the second step can be carried out for any approximate
generator matrix of the chain $X$. In specific examples in this paper we
describe a natural way of defining the approximate generator matrix (see
Section 4 for diffusions and Section 5 for processes with jumps) which is by
no means optimal (see monograph [9] for weak convergence of such
approximations and [16] for possible improvements) but already makes the
proposed scheme accurate enough (see the numerical results in Section 6).
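To illustrate the first step, the sketch below builds a tridiagonal generator matrix for a one-dimensional diffusion $dS_{t}=\mu(S_{t})dt+\sigma(S_{t})dW_{t}$ by a standard finite-difference approximation of its generator on a (possibly non-uniform) grid. This is only one natural recipe, written under the assumption that the resulting off-diagonal rates are nonnegative, and it is not necessarily identical to the constructions of Sections 4 and 5; the grid and coefficients in the usage example are purely illustrative.
```python
import numpy as np

def diffusion_generator(grid, mu, sigma):
    """Tridiagonal generator matrix of a CTMC approximating
    dS = mu(S) dt + sigma(S) dW on `grid` (boundary states left absorbing)."""
    x = np.asarray(grid, dtype=float)
    N = len(x)
    L = np.zeros((N, N))
    for i in range(1, N - 1):
        hm, hp = x[i] - x[i - 1], x[i + 1] - x[i]
        var, drift = sigma(x[i])**2, mu(x[i])
        down = (var - drift*hp) / (hm*(hm + hp))   # rate to x[i-1]
        up   = (var + drift*hm) / (hp*(hm + hp))   # rate to x[i+1]
        # assumes down, up >= 0; otherwise an upwind variant should be used
        L[i, i - 1], L[i, i + 1] = down, up
        L[i, i] = -(down + up)                     # rows sum to zero
    return L

# toy usage: geometric-Brownian-motion-type coefficients on a uniform grid
grid = np.linspace(50.0, 150.0, 71)
Lmat = diffusion_generator(grid, mu=lambda s: 0.02*s, sigma=lambda s: 0.3*s)
```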
In the second step of the algorithm we approximate the dynamics of the
corridor-realized variance of the logarithm of the chain $X$ (i.e. the
variance that accrued while $X$ was in the prespecified corridor) by a Poisson
process with an intensity that is a function of the current state of the chain
$X$. This approximation is obtained by matching $k\in\mathbb{N}$ instantaneous
conditional moments of the corridor-realized variance of the chain $X$. This
is a generalisation of the method proposed in [1], which in the framework of
this paper corresponds to $k=1$ and only works in the case of linear payoffs
on the realized variance (i.e. variance swaps) as can be seen in Tables 5, 6
and 7 of Section 6. Using $k$ strictly larger than one improves considerably
the quality of the approximation to the distribution of the corridor-realized
variance for $S$. In fact if $S$ is a diffusion process, then our algorithm
with $k=2$ produces prices for the non-linear volatility payoffs (e.g.
volatility swaps and options on variance) which are within a few basis points
of the true price (see Table 5 and Figure 1c). If the trajectories of $S$ are
discontinuous, then the scheme with $k=3$ appears to suffice (see Tables 6 and
7 and Figures 2b and 3a). Note also that in [14] we provide a straightforward
implementation of our algorithm in Matlab for $k=3$. Furthermore in Section 3
we prove the weak convergence of our scheme as $k$ tends to infinity (see
Theorem 3.1).
The general approach of this paper is to view continuous-time Markov chains as
a numerical tool that is based on probabilistic principles and can therefore
be applied in a very natural way to problems in pricing theory. It is worth
noting that there is no theoretical obstruction for extending our scheme to
the case when $S$ is just one component of a two dimensional Markov process
(e.g. $S$ is the asset price in a stochastic volatility model) by using a
Markov chain to approximate this two dimensional process. The reason why
throughout this paper we assume that $S$ itself is Markov lies in the
feasibility of the associated numerical scheme. If $S$ is Markov the dimension
of the generator of $X$ can be as small as $70$, while in the case of the
stochastic volatility process we would need to find the spectra of matrices of
dimension larger than $2000$. This is by no means impossible but is not the
focus of the present paper.
The literature on the pricing and hedging of derivatives on the realized
variance is vast. It is generally agreed that either the assumption on the
independence of increments or the continuity of trajectories of the underlying
process needs to be relaxed in order to obtain a realistic model for the
realized variance. In the recent paper [3] model independent bounds for
options on variance are obtained in a general continuous semimartingale
market. The continuity assumption is relaxed in [7], where a class of one
dimensional Markov processes with independent increments is considered and the
law of the realized variance is obtained. A perfect replication for a corridor
variance swap (i.e. the mean of the corridor-realized variance) in the case of
a continuous asset price process is given in [6]. For other contributions to
the theory of volatility derivatives see [1] and the references therein. The
main aim of this paper is to provide a stochastic approximation scheme for the
pricing and hedging of derivatives on the realized (and corridor-realized)
variance in models that violate both the above assumptions, thus making it
virtually impossible to find the laws of the relevant random variables in
semi-analytic form.
The paper is organised as follows. Section 2 defines the approximating Markov
chains and gives a general description of the pricing algorithm. In Section 3
we state and prove the weak convergence of the proposed scheme. Section 5
(resp. 4) describes the implementation of the algorithm in the case where the
process $S$ is an infinite activity jump-diffusion (resp. has continuous
trajectories). Section 6 contains numerical results and Section 7 concludes
the paper.
## 2\. The $k$ conditional moments of the realized variance
Let $S=(S_{t})_{t\geq 0}$ be a strictly positive Markov process with càdlàg
paths (i.e. each path is right-continuous as a function of time and has a left
limit at every time $t$) which serves as a model for the evolution of the
risky security under a risk-neutral measure. Note that we are also implicitly
assuming that $S$ is a semimartingale.
### 2.1. The contracts
A volatility derivative in this market is any security that pays
$\phi([\log(S)]_{T})$ at maturity $T$, where
$\phi:\mathbb{R}_{+}\to\mathbb{R}$ is a measurable payoff function and
$[\log(S)]_{T}$ is the is the quadratic variation of the process
$\log(S)=(\log(S_{t}))_{t\geq 0}$ at maturity $T$ defined by
(1) $\displaystyle[\log(S)]_{T}$ $\displaystyle:=$
$\displaystyle\lim_{n\to\infty}\sum_{t_{i}^{n}\in\Pi_{n},i\geq
1}\left(\log\frac{S_{t_{i}^{n}}}{S_{t^{n}_{i-1}}}\right)^{2},$
where $\Pi_{n}=\\{t_{0}^{n},t_{1}^{n},\ldots,t_{n}^{n}\\}$, $n\in\mathbb{N}$,
is a refining sequence of partitions of the interval $[0,T]$. In other words
$t_{0}^{n}=0$, $t_{n}^{n}=T$, $\Pi_{n}\subset\Pi_{n+1}$ for all
$n\in\mathbb{N}$ and
$\lim_{n\to\infty}\max\\{|t_{i}^{n}-t_{i-1}^{n}|:i=1,\ldots,n\\}=0$. It is
well-known that this sequence converges in probability (see [13], Theorem
4.47). Many such derivative products trade actively in financial markets
across asset-classes (see [1] and the references therein).
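In discretely monitored form, the quantity in (1) is simply the sum of squared log returns along a sampled path. A minimal sketch (with a made-up toy path) reads:
```python
import numpy as np

def realized_variance(path):
    """Sum of squared log returns of a sampled price path,
    i.e. the sum in (1) evaluated on a finite partition."""
    log_path = np.log(np.asarray(path, dtype=float))
    return float(np.sum(np.diff(log_path)**2))

print(realized_variance([100.0, 101.3, 99.8, 100.5, 102.1]))   # toy daily closes
```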
A corridor variance swap is a derivative security with a linear payoff
function that depends on the accrued variance of the asset price $S$ while it
is trading in an interval $[L,U]$ that is specified in the contract, where
$0\leq L<U\leq\infty$. More specifically if we define a process
(2)
$\displaystyle\overline{S}_{t}:=\max\\{L,\min\\{S_{t},U\\}\\},\qquad\forall
t\in[0,\infty),$
then for a given partition
$\Pi_{n}=\\{t_{0}^{n},t_{1}^{n},\ldots,t_{n}^{n}\\}$ of the time interval
$[0,T]$ the corridor-realized variance is given by
(3) $\displaystyle\sum_{t_{i}^{n}\in\Pi_{n},i\geq
1}\left[\mathbf{1}_{[L,U]}(S_{t^{n}_{i-1}})+\mathbf{1}_{[L,U]}(S_{t^{n}_{i}})-\mathbf{1}_{[L,U]}(S_{t^{n}_{i-1}})\mathbf{1}_{[L,U]}(S_{t^{n}_{i}})\right]\left(\log\frac{\overline{S}_{t_{i}^{n}}}{\overline{S}_{t^{n}_{i-1}}}\right)^{2},$
where $\mathbf{1}_{[L,U]}$ denotes the indicator function of the interval
$[L,U]$. In practice the increments $t^{n}_{i}-t^{n}_{i-1}$ usually equal one
day. The square bracket in the sum in (3) ensures that the accrued variance is
not increased when the asset price $S$ jumps over the interval $[L,U]$.
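A direct transcription of the discretely monitored corridor-realized variance (3) is given below. The weight in square brackets equals one whenever at least one of two consecutive prices lies in $[L,U]$ and zero when both lie outside, and $\overline{S}$ is obtained by clipping the path to the corridor; the numerical values in the usage line are purely illustrative.
```python
import numpy as np

def corridor_realized_variance(path, L, U):
    """Discretely monitored corridor-realized variance, Eq. (3)."""
    S = np.asarray(path, dtype=float)
    S_bar = np.clip(S, L, U)                   # \bar S_t = max(L, min(S_t, U))
    inside = (S >= L) & (S <= U)
    weight = inside[:-1] | inside[1:]          # 1_[L,U](S_{i-1}) + 1_[L,U](S_i) - product
    return float(np.sum(weight * np.diff(np.log(S_bar))**2))

print(corridor_realized_variance([100.0, 104.0, 111.0, 97.0, 97.5], L=95.0, U=105.0))
```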
The one sided corridor-realized variance was defined in [4]. Definition (1.1)
in [4] (resp. (1.2) in [4]) corresponds to expression (3) above if we choose
$U=\infty$ (resp. $L=0$). Formulae (1.1) and (1.2) in [4] are used to define
the corridor-realized variance in a way which treats the entrance of $S$ into
the corridor differently from its exit from the corridor. This asymmetry is
then exploited to obtain an approximate hedging strategy for linear payoffs on
the corridor-realized variance. In this paper we opt for a symmetric treatment
of the entrance and exit of $S$ into and from the corridor $[L,U]$, because
this is in some sense more natural. It is however important to note that all
the theorems and the algorithm proposed here do NOT depend in any significant
way on this choice of definition. In other words for any reasonable
modification of the definition in (3) (e.g. the one in [4]) the algorithm
described in this section would still work. Note also that our algorithm will
yield an approximate distribution of random variable (3) in the model $S$ and
therefore allows us to price any non-linear payoff that depends on the
corridor-realized variance.
In the case the corridor-realized variance is monitored continuously (see
[6]), we can express it using quadratic variation as follows. Note first that
since the map $s\mapsto\max\\{L,\min\\{s,U\\}\\}$ can be expressed as a
difference of two convex functions, Theorem 66 in [18] implies that the
process $\overline{S}=(\overline{S}_{t})_{t\geq 0}$ is again a semimartingale
and therefore the corridor-realized variance $Q^{L,U}_{T}(S)$, defined as the
limit of the expression in (3) as $n$ tends to infinity, exists and equals
(4) $\displaystyle Q^{L,U}_{T}(S)$ $\displaystyle=$
$\displaystyle[\log(\overline{S})]_{T}-\left(\log\frac{U}{L}\right)^{2}\sum_{0\leq
t\leq
T}\left[\mathbf{1}_{(0,L)}(S_{t-})\mathbf{1}_{(U,\infty)}(S_{t})+\mathbf{1}_{(0,L)}(S_{t})\mathbf{1}_{(U,\infty)}(S_{t-})\right]$
by Theorem 4.47a in [13]. Since we are assuming that the process $S$ is càdlàg
the limit $S_{t-}:=\lim_{s\nearrow t}S_{s}$ exists almost surely for all
$t>0$. The sum in (4), which corresponds to jumps of the asset price $S$ over
the corridor $[L,U]$, is almost surely finite by Theorem 4.47c in [13]. Note
also that if $L=0$ (resp. $U=\infty$) we find that $Q^{0,U}_{T}(S)$ (resp.
$Q^{L,\infty}_{T}(S)$) equals the quadratic variation of the semimartingale
$\log(\overline{S})=(\log(\overline{S}_{t}))_{t\geq 0}$ because the process
$S$ cannot in these cases jump over the corridor. Our main task is to find
approximate law of the random variable $Q^{L,U}_{T}(S)$ which will allow us to
price any derivative on the corridor-realized variance with terminal value
$\phi(Q^{L,U}_{T}(S))$, where $\phi$ is a possibly non-linear function.
### 2.2. Markov chain $X$ and its corridor-realized variance
Let us start by assuming that we are given a generator matrix $\mathcal{L}$ of
a continuous-time Markov chain $X=(X_{t})_{t\geq 0}$ which approximates the
generator of the Markov process $S$. The state-space of the Markov chain $X$
is the set $E:=\\{x_{0},\ldots,x_{N-1}\\}\subset\mathbb{R}_{+}$ with
$N\in\mathbb{N}$ elements, such that $x_{i}<x_{j}$ for any integers $0\leq
i<j\leq N-1.$ In Sections 4 and 5 we discuss briefly how to construct such
approximate generators for Markov processes that are widely used in finance
(i.e. diffusion and jump-diffusion processes). Throughout the paper we will use the
notation $\mathcal{L}(x,y)=e_{x}^{\prime}\mathcal{L}e_{y}$ for the elements of
the matrix $\mathcal{L}$, where $x,y\in E$, vectors $e_{x},e_{y}$ denote the
corresponding standard basis vectors of $\mathbb{R}^{N}$ and ′ is
transposition.
The quantities of interest are the quadratic variation
$[\log(X)]=([\log(X)]_{t})_{t\geq 0}$ and the corridor-realized variance
$Q^{L,U}(X)=(Q^{L,U}_{t}(X))_{t\geq 0}$ processes which are for any maturity
$T$ defined by
(5) $\displaystyle[\log(X)]_{T}:=\lim_{n\to\infty}\sum_{t_{i}^{n}\in\Pi_{n},i\geq 1}\left(\log\frac{X_{t_{i}^{n}}}{X_{t^{n}_{i-1}}}\right)^{2},$
(6) $\displaystyle Q^{L,U}_{T}(X):=[\log(\overline{X})]_{T}-\left(\log\frac{U}{L}\right)^{2}\sum_{0\leq t\leq T}\left[\mathbf{1}_{(0,L)}(X_{t-})\mathbf{1}_{(U,\infty)}(X_{t})+\mathbf{1}_{(0,L)}(X_{t})\mathbf{1}_{(U,\infty)}(X_{t-})\right],$
where partitions $\Pi_{n}$, $n\in\mathbb{N}$, of $[0,T]$ are as in (1), the
process $\overline{X}=(\overline{X}_{t})_{t\geq 0}$ is defined analogously
to (2) by $\overline{X}_{t}:=\max\\{L,\min\\{X_{t},U\\}\\}$ and
$X_{t-}:=\lim_{s\nearrow t}X_{s}$ for any $t>0$. Note that if we choose
$L<\min\\{x\>:\>x\in E\\}$ and $U>\max\\{x\>:\>x\in E\\}$, then the random
variables in (5) and (6) coincide. We can therefore, without loss of generality,
consider only the corridor-realized variance $Q^{L,U}_{T}(X)$.
Since the process $X$ is a finite-state Markov chain, the jumps of $X$ arrive
with bounded intensity and it is therefore clear that the following must hold
(7) $\displaystyle\mathbb{P}\left[Q^{L,U}_{t+\Delta
t}(X)-Q^{L,U}_{t}(X)\neq\left(\log\frac{\overline{X}_{t+\Delta
t}}{\overline{X}_{t}}\right)^{2}\Bigg{\lvert}\>X_{t}=x\right]=o(\Delta
t)\quad\text{for all}\quad x\in[L,U]\cap E.$
An analogous equality holds if $X_{t}$ is outside of the corridor $[L,U]$.
Recall also that by definition a function $f(\Delta t)$ is of the order
$o(\Delta t)$ (usually denoted by $f(\Delta t)=o(\Delta t)$) if and only if
$\lim_{\Delta t\searrow 0}f(\Delta t)/\Delta t=0$. Equality (7) implies that
the $j$-th instantaneous conditional moment of the corridor-realized variance
$Q^{L,U}(X)$ is given by
(8) $\displaystyle M_{j}(x)$ $\displaystyle:=$ $\displaystyle\lim_{\Delta t\to
0}\frac{1}{\Delta t}\mathbb{E}\left[\left(Q^{L,U}_{t+\Delta
t}(X)-Q^{L,U}_{t}(X)\right)^{j}\Big{\lvert}X_{t}=x\right]$ $\displaystyle=$
$\displaystyle\sum_{y\in
E}\mathcal{L}(x,y)\left[\left(\log\frac{\overline{y}}{\overline{x}}\right)^{2j}-\left(\log\frac{U}{L}\right)^{2j}\mathbf{1}_{A_{U,L}}(x,y)\right]$
where the set $A_{U,L}\subset\mathbb{R}^{2}$ is defined as
$A_{U,L}:=\left([0,L)\times(U,\infty)\right)\cup\left((U,\infty)\times[0,L)\right)$
and for any $x\in E$ we have $\overline{x}:=\max\\{L,\min\\{x,U\\}\\}$.
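The moments $M_{j}$ in (8) are straightforward to tabulate once a generator matrix is available. The following Python sketch (with `gen` and `states` standing in for $\mathcal{L}$ and $E$, both assumed to be given) is purely illustrative.

```python
import numpy as np

def instantaneous_moments(gen, states, L, U, j):
    """Return the vector (M_j(x))_{x in E} of equation (8).

    `gen` is the N x N generator matrix of X, `states` the sorted state-space E,
    L < U the corridor barriers and j the order of the moment."""
    states = np.asarray(states, dtype=float)
    clipped = np.clip(states, L, U)                     # \bar{x}
    log_term = np.log(clipped[None, :] / clipped[:, None]) ** (2 * j)
    below, above = states < L, states > U
    # indicator of the set A_{U,L}: a jump across the whole corridor
    across = np.outer(below, above) | np.outer(above, below)
    correction = (np.log(U / L) ** (2 * j)) * across
    return np.sum(gen * (log_term - correction), axis=1)
```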
### 2.3. The extension $(X,I)$
The basic idea of this paper is to extend the Markov chain $X$ to a
continuous-time Markov chain $(X,I)=(X_{t},I_{t})_{t\geq 0}$ where the
dynamics of the process $I$ approximates well the dynamics of the corridor-
realized variance $Q^{L,U}(X)$. Conditional on the path of the chain $X$, the
process $I$ will be a compound Poisson process with jump-intensity that is a
function of the current state of $X$. The generator of $(X,I)$ will be chosen
in such a way that the first $k\in\mathbb{N}$ infinitesimal moments of $I$ and
$Q^{L,U}(X)$ coincide. The approximating chain $I$ will start at 0 (as does
the process $Q^{L,U}(X)$) and gradually jump up through its uniform state-space
$\\{0,\alpha,..,\alpha 2C\\}$, where $\alpha$ is a small positive constant and
$C$ is some fixed integer.
The main computational tool in this paper is the well-known spectral
decomposition for partial-circulant matrices (see Appendices A.2-A.4 in [1]
for the definition and the properties of the spectrum), which will be applied
to the generator of the Markov chain $(X,I)$. The geometry of the state-space
$\\{0,\alpha,..,\alpha 2C\\}$ is therefore of fundamental importance because
it allows the generator of $(X,I)$ to be expressed as a partial-circulant
matrix. As mentioned in the introduction, the main difference between the
approach in the present paper and the algorithm in [1] is that here we take
advantage of the full strength of the partial-circulant form of the generator
of $(X,I)$. This allows us to define the process $I$ as a compound Poisson
process with state-dependent intensity rather than just a Poisson process
(which was the case in [1]), without adding computational complexity. As we
shall see in Section 6 this enables us to approximate the entire distribution
of the corridor-realized variance and hence obtain much more accurate
numerical results.
Assuming that the process $I$ can jump at most $n\in\mathbb{N}$ states up from
its current position in an infinitesimal amount of time, the dynamics of $I$
are uniquely determined by the state-dependent intensities
(9) $\displaystyle\lambda_{i}:E\to\mathbb{R}_{+},\quad\text{ where }\quad
i\in\\{1,\ldots,n\\}$
and $E$ is the state-space of the chain $X$. The generator of $I$, conditional
on the event $X_{t}=x$, can therefore for any $c,d\in\\{0,1,..,2C\\}$ be
expressed as
(13)
$\displaystyle\mathcal{L}^{I}(x:c,d)\>:=\>\left\\{\begin{array}[]{ll}\lambda_{j}(x)&\mathrm{if}\>\>d=c+j\\!\\!\\!\mod(2C+1)\quad\text{for
some}\quad j\in\\{1,...,n\\};\\\
-\sum_{i=1}^{n}\lambda_{i}(x)&\mathrm{if}\>\>d=c;\\\ 0&\text{
otherwise.}\end{array}\right.$
The dimension of the matrix $\mathcal{L}^{I}(x:\cdot,\cdot)$ is $2C+1$ for all
$x\in E$ and the identity $d=c+j\\!\\!\\!\mod(2C+1)$ means that the numbers
$d$ and $c+j$ represent the same element in the additive group
$\mathbb{Z}_{2C+1}$. A key observation here is that the entries
$\mathcal{L}^{I}(x:c,d)$ in the conditional generator depend on $c$ and $d$
solely through the difference $d-c$ and hence the aforementioned group
structure makes the conditional generator into a circulant matrix (see
Appendix A for the definition of circulant matrices).
This algebraic structure of the conditional generator
$\mathcal{L}^{I}(x:\cdot,\cdot)$ translates into a periodic boundary condition
for the process $I$. This is very undesirable because the process $Q^{L,U}(X)$
we are trying to approximate clearly does not exhibit such features. We must
therefore choose $C$ large enough so that even if the chain $I$ is allowed to
jump $n$ steps up at any time, the probability that it oversteps the boundary
is negligible (i.e. below machine precision). We will see in Section 6 that in
practice $C\approx 100$ and $n\approx 30$ are sufficient to avoid the boundary.
Since our aim is to match the first $k$ instantaneous moments, it is necessary
to take $n$ larger than or equal to $k$. In applications this does not pose
additional restrictions because, as we shall see in Section 6, $k=3$ produces
the desired results for jump-diffusions and $k=2$ is already enough for
continuous processes.
The conditional generators given in (13) can be used to specify the generator
of the Markov chain $(X,I)$ on the state-space
$E\times\\{0,\alpha,\ldots,\alpha 2C\\}$ as follows
(14)
${\mathcal{G}}(x,c;y,d):=\mathcal{L}(x,y)\delta_{c,d}+\mathcal{L}^{I}(x:c,d)\delta_{x,y},$
where $x,y\in E$, $c,d\in\\{0,1,\ldots,2C\\}$ and $\delta_{\cdot,\cdot}$
denotes the Kronecker delta function. The matrix ${\mathcal{G}}$ has size
$N(2C+1)$ and partial-circulant form. In other words we can express
${\mathcal{G}}$ in terms of $N^{2}$ blocks where each block is a square matrix
of size $2C+1$ and the blocks that intersect the diagonal of
${\mathcal{G}}$ are equal to a sum of a circulant matrix and a scalar multiple
of the identity matrix. All other blocks are scalar matrices. For the precise
definition of partial-circulant matrices see Appendix A.
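To make the structure just described concrete, the sketch below assembles the matrix ${\mathcal{G}}$ of (14) from a given generator `gen` of $X$ and an array `lam` whose row $i$ holds the intensity $\lambda_{i+1}$ over $E$. The dense-matrix representation and the variable names are our own illustrative choices.

```python
import numpy as np

def conditional_generator(lam_x, C):
    """Circulant conditional generator L^I(x: ., .) of (13) for a single state x.

    `lam_x` is the vector (lambda_1(x), ..., lambda_n(x))."""
    n, M = len(lam_x), 2 * C + 1
    first_row = np.zeros(M)
    first_row[1:n + 1] = lam_x                    # jumps of size 1, ..., n (mod 2C+1)
    first_row[0] = -np.sum(lam_x)                 # diagonal entry
    # row c of a circulant matrix is the first row rotated c places to the right
    idx = (np.arange(M)[None, :] - np.arange(M)[:, None]) % M
    return first_row[idx]

def full_generator(gen, lam, C):
    """Partial-circulant generator G of (14) on E x {0, alpha, ..., alpha*2C}.

    `gen` is the N x N generator of X and `lam[i]` the vector of lambda_{i+1}
    over E.  The dense representation is only suitable for small illustrations."""
    N, M = gen.shape[0], 2 * C + 1
    G = np.kron(gen, np.eye(M))                   # the term L(x,y) * delta_{c,d}
    for x in range(N):
        G[x * M:(x + 1) * M, x * M:(x + 1) * M] += conditional_generator(lam[:, x], C)
    return G
```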
We can now compute, using (13) and (14), the $j$-th instantaneous conditional
moment of the process $I$ as follows
(15) $\displaystyle\lim_{\Delta t\rightarrow 0}\frac{1}{\Delta
t}\mathbb{E}\left[(I_{t+\Delta t}-I_{t})^{j}\Bigl{\lvert}X_{t}=x,I_{t}=\alpha
c\right]$ $\displaystyle=$ $\displaystyle\sum_{d=0}^{2C}(\alpha d-\alpha
c)^{j}\mathcal{L}^{I}(x:c,d)$ $\displaystyle=$
$\displaystyle\alpha^{j}\sum_{d=1}^{n}d^{j}\lambda_{d}(x)$
for any $x\in E$ and all integers $c\in\\{0,1,\ldots,2C\\}$ that satisfy the
inequality $c<2C-n$, where $n$ was introduced in (9). This inequality implies
that the process $I$ cannot jump to or above $\alpha 2C$ (i.e. it cannot
complete a full circle) in a very short time interval $\Delta t$. Note also
that it is through this inequality only that identity (15) depends on the
current level $\alpha c$ of the process $I$.
Our main goal is to approximate the process $(X,Q^{L,U}(X))$, where the corridor-
realized variance $Q^{L,U}(X)$ is defined in (6), by the continuous-time
Markov chain $(X,I)$ with generator given by (14). We now match the first $k$
instantaneous conditional moments of processes $Q^{L,U}(X)$ and $I$ using
identities (8) and (15):
(16) $\alpha^{j}\sum_{d=1}^{n}d^{j}\lambda_{d}(x)=M_{j}(x)\quad\text{for
any}\quad x\in E\quad\text{and}\quad j=1,\ldots,k.$
In other words we must choose the intensity functions $\lambda_{i}$ (see (9))
and the parameter $\alpha$ so that the system (16) is satisfied. The necessary
requirement for the solution is that $\lambda_{i}(x)\geq 0$ for all $x\in E$
and all $i=1,\ldots,n$. These inequalities can place non-trivial restrictions
on the solution space and will be analysed in more detail in Sections 4 and 5.
Another simple yet important observation that follows from (16) is that, in
order to match the first $k$ instantaneous conditional moments of the
corridor-realized variance $Q^{L,U}(X)$, the size of the support of the jump
distribution of the compound Poisson process with state-dependent intensity (i.e.
$n$) must be at least $k$. From now on we assume that $n\geq k$.
The pricing of volatility derivatives is done using the following theorem
which yields a closed-form formula for the semigroup of the Markov chain
$(X,I)$.
###### Theorem 2.1.
Let ${\mathcal{G}}$ be the generator matrix of the Markov process $(X,I)$
given by (14). Then for any $t\geq 0$, $x,y\in E$ and $d\in\\{0,\ldots,2C\\}$
the equality holds
(17) $\displaystyle\mathbb{P}\left(X_{t}=y,I_{t}=\alpha
d\big{\lvert}X_{0}=x\right)$ $\displaystyle=$
$\displaystyle\exp(t{\mathcal{G}})(x,0;y,d)$ $\displaystyle=$
$\displaystyle\frac{1}{2C+1}\sum_{j=0}^{2C}e^{\mathtt{i}p_{j}d}\exp(t\mathcal{L}_{j})(x,y),$
where $\mathtt{i}=\sqrt{-1}$, the scalars $p_{j}$ and the complex matrices
$\mathcal{L}_{j}$, for $j=0,\ldots,2C$, are given by
(18) $\displaystyle\mathcal{L}_{j}(x,y)$ $\displaystyle:=$
$\displaystyle\mathcal{L}(x,y)+\delta_{x,y}\sum_{i=1}^{n}\left(e^{-\mathtt{i}p_{j}i}-1\right)\lambda_{i}(x),$
$\displaystyle p_{j}$ $\displaystyle:=$ $\displaystyle\frac{2\pi}{2C+1}j.$
Theorem 2.1 is the main computational tool used in this paper which allows us
to find in a semi-analytic form the semigroup of the chain $(X,I)$ (if
$C\approx 100$ and $N=70$, the matrix ${\mathcal{G}}$ contains more than
$10^{8}$ elements). For a straightforward implementation of the algorithm in
Matlab see [14]. It is clear that Theorem 2.1 generalizes equation (6) in [1]
and that this generalization involves exactly the same number of matrix
operations as the algorithm in [1]. The only additional computations are the
sums in (18).
The proof of Theorem 2.1 relies on the partial-circulant structure of the
matrix ${\mathcal{G}}$ given in (14). The argument follows precisely the same
lines as the one that proved Theorem 3.1 in [1] and will therefore not be
given here (see Appendix A.5 in [1] for more details).
Since the dynamics of the process $(X,I)$ are assumed to be under a risk-
neutral measure, the current value of any payoff that depends on the corridor-
realized variance at fixed maturity can easily be obtained from the formulae
in Theorem 2.1. Furthermore the same algorithm yields the risk sensitivities
Delta and Gamma of any derivative on the corridor-realized variance, without
adding computational complexity. This is because the output of our scheme is a
vector of values of the derivative in question conditional on the process $X$
starting at each of the elements in its state-space. We should also note that
forward-starting derivatives on the corridor-realized variance can be dealt
with using the same algorithm because conditioning on the state of a Markov
chain at a future time requires only a single additional matrix-vector
multiplication. Explicit calculations are obvious and are omitted (see [1] for
more details).
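A direct, if naive, way to evaluate (17)-(18) numerically is sketched below in Python, using `scipy.linalg.expm` for the matrix exponentials instead of the diagonalisation one would use in practice; the inputs `gen` and `lam` are as in the earlier sketches and the whole example is illustrative rather than a description of the implementation in [14].

```python
import numpy as np
from scipy.linalg import expm

def joint_law(gen, lam, C, t, x_index):
    """P(X_t = y, I_t = alpha*d | X_0 = x) for all y, d via formulae (17)-(18).

    `gen` is the generator of X, `lam[i-1]` the vector of lambda_i over E and
    `x_index` the index of the initial state x.  Returns an (N, 2C+1) array."""
    N, M, n = gen.shape[0], 2 * C + 1, lam.shape[0]
    probs = np.zeros((N, M), dtype=complex)
    for j in range(M):
        p_j = 2.0 * np.pi * j / M
        # diagonal correction of (18): sum over i of (exp(-sqrt(-1)*p_j*i) - 1) * lambda_i(x)
        phase = np.sum((np.exp(-1j * p_j * np.arange(1, n + 1))[:, None] - 1.0) * lam, axis=0)
        L_j = gen + np.diag(phase)
        row = expm(t * L_j)[x_index, :]           # the row exp(t L_j)(x, .)
        probs += np.outer(row, np.exp(1j * p_j * np.arange(M))) / M
    return probs.real

# the value of a payoff phi on the corridor-realized variance is then, up to
# discounting, the sum over (y, d) of phi(alpha*d) * joint_law(...)[y, d]
```

In an actual implementation one would, as discussed in Section 6, diagonalise each $\mathcal{L}_{j}$ once and reuse the spectral data for every maturity.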
## 3\. Convergence
In Section 2 we defined the Markov chain $(X,I^{k})$ via its generator (14)
that in some sense approximates the process $(X,Q^{L,U}(X))$, where
$Q^{L,U}(X)$ is the corridor-realized variance of $X$ defined in (6). Here
$I^{k}$ denotes the process $I$ from Section 2 which satisfies the
instantaneous conditional moment restrictions, given by (16), up to order $k$.
Notice that it follows directly from definition (6) that the process
$(X,Q^{L,U}(X))$ is adapted to the natural filtration generated by the chain
$X$ and that its components $X$ and $Q^{L,U}(X)$ can only jump simultaneously.
On the other hand note that the form of the generator of the chain
$(X,I^{k})$, given by (14), implies that the components $X$ and $I^{k}$ cannot
both jump at the same time. It is also clear that the process $I^{k}$ is not
adapted to the natural filtration of $X$. In this section our goal is to prove
that, in spite of these differences, for any fixed time $T$ the sequence of
random variables $(I^{k}_{T})_{k\in\mathbb{N}}$ converges in distribution to
the random variable $Q^{L,U}_{T}(X)$. In fact we have the following theorem
which states that, for any bounded European payoff, the price of the
corresponding derivative on the corridor-realized variance in the approximate
model $(X,I^{k})$ converges to the price of the same derivative in
$(X,Q^{L,U}(X))$ as the number $k$ of matched instantaneous conditional
moments tends to infinity.
###### Theorem 3.1.
Let $X$ be a continuous-time Markov chain with generator $\mathcal{L}$ as
given in Section 2. For each $k\in\mathbb{N}$ define a real number
(19) $\displaystyle\alpha_{k}$ $\displaystyle:=$
$\displaystyle\frac{1}{k}\max\left\\{\left(\log\frac{y}{x}\right)^{2}\>:\>x,y\in
E\backslash\\{0\\}\right\\},$
assume that $n$ in (9) equals $k$ and that there exist functions
$\lambda_{i}^{k}:E\to\mathbb{R}_{+},$ $i\in\\{1,\ldots,k\\}$, that solve the
system of equations (16). Let the continuous-time Markov chain $(X,I^{k})$ be
given by generator (14) where the integers $C_{k}$ in (13), which determine
the size of the state-space of the process $I^{k}$, are chosen in such a way
that $\lim_{k\to\infty}\alpha_{k}C_{k}=\infty$. Then for any fixed time $T>0$
the sequence of random variables $(I_{T}^{k})_{k\in\mathbb{N}}$ converges
weakly to $Q^{L,U}_{T}(X)$. In other words for any bounded continuous function
$f:\mathbb{R}\to\mathbb{R}$ we have
$\lim_{k\to\infty}\mathbb{E}[f(I_{T}^{k})\lvert
X_{0}]=\mathbb{E}[f(Q^{L,U}_{T}(X))\lvert X_{0}].$
Before proving Theorem 3.1 we note that the assumption on the existence of
non-negative solutions of the system in (16) is not stringent and can be
satisfied for any chain $X$ by allowing $n$ in (9) to take values larger than
$k$. The restriction $n=k$ in Theorem 3.1 is used because it simplifies the
notation.
Proof. Throughout this proof we will use the notation
$\Sigma_{t}:=Q^{L,U}_{t}(X)$ for any $t\in\mathbb{R}_{+}$. By the Lévy
continuity theorem it is enough to prove that the equality holds
$\lim_{k\to\infty}\mathbb{E}[\exp(\mathtt{i}uI_{T}^{k})]=\mathbb{E}[\exp(\mathtt{i}u\Sigma_{T})]\quad\text{for
each}\quad u\in\mathbb{R}.$
Let $\Delta t>0$ be a small positive number and note that, by conditioning on
the $\sigma$-algebra generated by the process $X$ up to and including time
$T-\Delta t$ and using the Markov property, we obtain the following
representation
$\displaystyle\mathbb{E}[\exp(\mathtt{i}u\Sigma_{T})]$ $\displaystyle=$
$\displaystyle\mathbb{E}\left[\exp(\mathtt{i}u\Sigma_{T-\Delta
t})\mathbb{E}\left[\exp(\mathtt{i}u(\Sigma_{T}-\Sigma_{T-\Delta
t}))\big{\lvert}X_{T-\Delta t}\right]\right]$ $\displaystyle=$
$\displaystyle\mathbb{E}\left[e^{\mathtt{i}u\Sigma_{T-\Delta
t}}\left(\sum_{j=0}^{k}\frac{(\mathtt{i}u)^{j}}{j!}\mathbb{E}\left[(\Sigma_{T}-\Sigma_{T-\Delta
t})^{j}\big{\lvert}X_{T-\Delta t}\right]\right.\right.$
$\displaystyle\left.\left.+\sum_{j=k+1}^{\infty}\frac{(\mathtt{i}u)^{j}}{j!}\mathbb{E}\left[(\Sigma_{T}-\Sigma_{T-\Delta
t})^{j}\big{\lvert}X_{T-\Delta t}\right]\right)\right]$ $\displaystyle=$
$\displaystyle\mathbb{E}\left[e^{\mathtt{i}u\Sigma_{T-\Delta t}}\left(1+\Delta
t\sum_{j=1}^{k}\frac{(\mathtt{i}u)^{j}}{j!}M_{j}(X_{T-\Delta
t})\right.\right.$
$\displaystyle\left.\left.+\sum_{j=k+1}^{\infty}\frac{(\mathtt{i}u)^{j}}{j!}\mathbb{E}\left[(\Sigma_{T}-\Sigma_{T-\Delta
t})^{j}\big{\lvert}X_{T-\Delta t}\right]\right)\right]+o(\Delta t),$
where $M_{j}$ is defined in (8). By applying the Markov property of $(X,I^{k})$,
identity (15) and condition (16), which holds by assumption for all
$j\in\\{1,\ldots,k\\}$, we obtain
$\displaystyle\mathbb{E}[\exp(\mathtt{i}uI^{k}_{T})]$ $\displaystyle=$
$\displaystyle\mathbb{E}\left[e^{\mathtt{i}uI^{k}_{T-\Delta t}}\left(1+\Delta
t\sum_{j=1}^{k}\frac{(\mathtt{i}u)^{j}}{j!}M_{j}(X_{T-\Delta
t})\right.\right.$
$\displaystyle\left.\left.+\sum_{j=k+1}^{\infty}\frac{(\mathtt{i}u)^{j}}{j!}\mathbb{E}\left[(I^{k}_{T}-I^{k}_{T-\Delta
t})^{j}\bigg{\lvert}X_{T-\Delta t},I^{k}_{T-\Delta
t}\right]\right)\right]+o(\Delta t).$
It follows from (8) that there exists a positive constant $G$ such that
$\max\\{M_{j}(x)\>:\>x\in E\\}\leq G^{j}$ for all $j\in\mathbb{N}$. Therefore
we find that for a constant $D:=\exp(|u|G)$ the following inequality holds on
the entire probability space
(22)
$\displaystyle\bigg{\lvert}\sum_{j=1}^{k}\frac{(\mathtt{i}u)^{j}}{j!}M_{j}(X_{T-\Delta
t})\bigg{\rvert}\leq D.$
Note also that $D$ is independent of $k$ and $\Delta t$.
Definition (19) implies that $k\alpha_{k}$ is a positive constant, say $A$,
for each $k\in\mathbb{N}$. If we introduce a positive constant
$L:=\max\\{-\mathcal{L}(x,x)\>:\>x\in E\\}$, we obtain the following bound
(23) $\displaystyle\mathbb{E}\left[(\Sigma_{T}-\Sigma_{T-\Delta
t})^{j}\big{\lvert}X_{T-\Delta t}\right]$ $\displaystyle\leq$ $\displaystyle
A^{j}L\Delta t+o(\Delta t)\quad\text{for each}\quad j\in\mathbb{N}$
on the entire probability space. In order to find a similar bound for the
process $I^{k}$ we first note that it follows from the linear equation (16)
(for $j=1$) and definition (19) that the inequalities
$\sum_{d=1}^{k}d\lambda_{d}^{k}(x)\leq kL\quad\text{for all}\quad
k\in\mathbb{N},\>x\in E$
must hold. Therefore (15) implies
(24) $\displaystyle\mathbb{E}\left[(I^{k}_{T}-I^{k}_{T-\Delta
t})^{j}\bigg{\lvert}X_{T-\Delta t},I^{k}_{T-\Delta t}\right]$
$\displaystyle\leq$ $\displaystyle A^{j}kL\Delta t+o(\Delta t)\quad\text{for
any}\quad j\in\mathbb{N}$
and any small time-step $\Delta t$. We can now combine the two expansions of the
characteristic functions above with the estimates in (22), (23) and (24) to
obtain the key bound
$\displaystyle\left\lvert\mathbb{E}[\exp(\mathtt{i}u\Sigma_{T})]-\mathbb{E}[\exp(\mathtt{i}uI^{k}_{T})]\right\rvert$
$\displaystyle\leq$
$\displaystyle\left\lvert\mathbb{E}[\exp(\mathtt{i}u\Sigma_{T-\Delta
t})]-\mathbb{E}[\exp(\mathtt{i}uI^{k}_{T-\Delta t})]\right\rvert(1+\Delta tD)$
$\displaystyle+\>\>L(k+1)\Delta
t\sum_{j=k+1}^{\infty}\frac{(Au)^{j}}{j!}+o(\Delta t).$
The main idea of the proof of Theorem 3.1 is to iterate this bound
$\frac{T}{\Delta t}$ times. This procedure yields the following estimate
$\displaystyle\left\lvert\mathbb{E}[\exp(\mathtt{i}u\Sigma_{T})]-\mathbb{E}[\exp(\mathtt{i}uI^{k}_{T})]\right\rvert$
$\displaystyle\leq$ $\displaystyle D\Delta t(1+\Delta tD)^{(T/\Delta
t)-1}+L(k+1)T\sum_{j=k+1}^{\infty}\frac{(Au)^{j}}{j!}+T\frac{o(\Delta
t)}{\Delta t}.$
Since the left-hand side of this inequality is independent of $\Delta t$, the
inequality must hold in the limit as $\Delta t\searrow 0$. We therefore find
(26)
$\displaystyle\left\lvert\mathbb{E}[\exp(\mathtt{i}u\Sigma_{T})]-\mathbb{E}[\exp(\mathtt{i}uI^{k}_{T})]\right\rvert\leq
L(k+1)T\sum_{j=k+1}^{\infty}\frac{(Au)^{j}}{j!}.$
The right-hand side of inequality (26) clearly converges to zero as $k$ tends
to infinity. This concludes the proof of the theorem.
$\Box$
Theorem 3.1 implies that the prices of the volatility derivatives in the
Markov chain model $X$ can be approximated arbitrarily well using the method
defined in Section 2. Our initial problem of approximating prices in the model
based on a continuous-time Markov process $S$ is by Theorem 3.1 reduced to the
question of the approximation of the law of $S$ by the law of $X$. This can be
achieved by a judicious choice for the generator matrix of the chain $X$.
Since this is not the central topic of this paper we will not investigate the
question further in this generality (see [9] for numerous results on weak
convergence of Markov processes). However in Sections 4 and 5 we are going to
propose specific Markov chain approximations for diffusion and jump-diffusion
processes respectively and study numerically the behaviour of the
approximations for volatility derivatives in Section 6.
## 4\. The realized variance of a diffusion process
Our task now is to apply the method described in Section 2 to approximate the
dynamics of the corridor-realized variance of a diffusion process. The first
step is to approximate the diffusion process $S$ which solves the stochastic
differential equation (SDE)
(27) $\displaystyle\frac{dS_{t}}{S_{t}}=\gamma
dt+\sigma\left(\frac{S_{t}}{S_{0}}\right)dW_{t},$
with measurable volatility function $\sigma:\mathbb{R}_{+}\to\mathbb{R}_{+}$,
using a continuous-time Markov chain $X$. A possible way of achieving this is
to use a generator for the chain $X$ given by the following system of linear
equations
(28)
$\displaystyle\sum_{y\in E}\mathcal{L}(x,y)=0,\qquad\sum_{y\in E}\mathcal{L}(x,y)(y-x)=\gamma x,\qquad\sum_{y\in E}\mathcal{L}(x,y)(y-x)^{2}=\sigma\left(\frac{x}{X_{0}}\right)^{2}x^{2}$
for each $x\in E$. In Appendix B we give an algorithm to define the state-
space $E$ of the chain $X$. In Section 6 we provide a numerical comparison for
vanilla option prices in the CEV model, i.e. in the case
$\sigma(s):=\sigma_{0}s^{\beta-1}$, and in the corresponding Markov chain
model given by the approximation above. Note that a Markov chain approximation
$X$ of the diffusion $S$ is in the spirit of [2] and is by no means the only
viable alternative. One could produce more accurate results by matching higher
instantaneous moments of the two processes (see [16] for rates of convergence
in some special cases).
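One simple tridiagonal choice of generator satisfying (28) on a (possibly non-uniform) grid is sketched below; the boundary rows and the uniform example grid are illustrative simplifications and are not the construction of Appendix B.

```python
import numpy as np

def diffusion_generator(states, drift_fn, var_fn):
    """Tridiagonal generator whose interior rows solve the linear system (28).

    `states` is the sorted state-space E, `drift_fn(x)` returns gamma*x and
    `var_fn(x)` returns sigma(x/X_0)^2 * x^2.  The first and last rows are left
    at zero (absorbing boundaries) purely for simplicity."""
    x = np.asarray(states, dtype=float)
    N = len(x)
    gen = np.zeros((N, N))
    for i in range(1, N - 1):
        h_minus, h_plus = x[i] - x[i - 1], x[i + 1] - x[i]
        mu, v = drift_fn(x[i]), var_fn(x[i])
        down = (v - mu * h_plus) / (h_minus * (h_minus + h_plus))
        up = (v + mu * h_minus) / (h_plus * (h_minus + h_plus))
        if min(down, up) < 0:
            raise ValueError(f"grid too coarse at x = {x[i]:g} for non-negative rates")
        gen[i, i - 1], gen[i, i + 1] = down, up
        gen[i, i] = -(down + up)          # rows sum to zero: first equation in (28)
    return gen

# illustrative CEV example: sigma(s) = sigma_0 * s^(beta-1), spot X_0 = 100,
# with a uniform grid standing in for the non-uniform grid of Appendix B
sigma_0, beta, r, X0 = 0.2, 0.3, 0.02, 100.0
E = np.linspace(1.0, 700.0, 70)
gen = diffusion_generator(E, lambda x: r * x,
                          lambda x: (sigma_0 * (x / X0) ** (beta - 1)) ** 2 * x ** 2)
```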
If the solution of SDE (27) is used as a model for the risky security under a
risk-neutral measure we have to stipulate that $\gamma=r$, where $r$ is the
prevailing risk-free rate in the economy. Therefore by the first two equations
in system (28) the vector in $\mathbb{R}^{N}$ with coordinates equal to the
elements in the set $E$ represents an eigenvector of the matrix $\mathcal{L}$
for the eigenvalue $\gamma$. Hence we find
(29) $\displaystyle\mathbb{E}[X_{t}\lvert
X_{0}=x]=e_{x}^{\prime}\exp(t\mathcal{L})\sum_{y\in E}ye_{y}$ $\displaystyle=$
$\displaystyle e^{t\gamma}x,\qquad\forall x\in E,$
where $e_{x}$ denotes the standard basis vector in $\mathbb{R}^{N}$ that
corresponds to the element $x\in E$ in the natural ordering and the operation
′ denotes transposition. Therefore, under the condition $\gamma=r$, the market
driven by the chain $X$ will also have the correct risk-neutral drift.
Once we define the chain $X$, the next task is to specify the process $I$ that
approximates well the corridor-realized variance $Q^{L,U}(X)$ defined in (6).
As we shall see in Section 6, matching the first two moments (i.e. the case
$k=2$ in Section 2) is sufficient to approximate the corridor-realized
variance dynamics of a diffusion process. It is therefore necessary to take
$n\geq 2$, where $n$ is the number of states the approximate variance process
$I$ can jump up by at any given time (see (9)). To have flexibility we use $n$
much larger than 2, usually around 30. However in order to maintain the
tractability of the solution of system (16) we make an additional assumption
that the intensities $\lambda_{i}$ in (9), for $i=2,\ldots,n$, are all equal
to a single intensity function $\lambda_{n}:E\to\mathbb{R}_{+}$. To simplify
the notation we introduce the symbol
(30) $\displaystyle b_{j}^{n,m}$ $\displaystyle:=$
$\displaystyle\sum_{l=n+1}^{m}l^{j},\qquad\text{where}\quad
j,n,m\in\mathbb{N}\quad\text{and}\quad m>n.$
System (16) can in this case be solved explicitly as follows
(31) $\displaystyle\lambda_{1}(x)$ $\displaystyle=$ $\displaystyle\frac{\alpha
M_{1}(x)b_{2}^{1,n}-M_{2}(x)b_{1}^{1,n}}{\alpha^{2}(b_{2}^{1,n}-b_{1}^{1,n})},\quad\text{for
any}\quad x\in E,$ (32) $\displaystyle\lambda_{n}(x)$ $\displaystyle=$
$\displaystyle\frac{M_{2}(x)-\alpha
M_{1}(x)}{\alpha^{2}(b_{2}^{1,n}-b_{1}^{1,n})},\quad\text{for any}\quad x\in
E,$
where $M_{j}(x)$ is given in (8). Since the functions
$\lambda_{1},\lambda_{n}$ are intensities, all the values they take must be
non-negative. The formulae above imply that this is satisfied if and only if
the following inequalities hold
(33)
$\displaystyle\alpha\frac{b_{2}^{1,n}}{b_{1}^{1,n}}\>\geq\>\frac{M_{2}(x)}{M_{1}(x)}\>\geq\>\alpha\qquad\text{for
all}\quad x\in E.$
It is clear that the function $x\mapsto M_{2}(x)/M_{1}(x)$, $x\in E$, depends
on the definition of the chain $X$ both through the choice of the state-space
$E$ and the choice of the generator $\mathcal{L}$. Figure 1b contains the plot
of this function in the special case of the CEV model. Inequalities (33) are
used to help us choose a feasible value for the parameter $\alpha$ which
determines the geometry of the state-space of the process $I$. Note also that
(33) implies that the larger the value of $n$ is, the less restricted we are
in choosing $\alpha$. In Section 6 we will make these choices explicit for the
CEV model.
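Given the moments $M_{1},M_{2}$ (computed, for instance, as in the sketch following (8)), formulae (31)-(32) and the feasibility condition (33) take only a few lines of code; the following sketch is purely illustrative.

```python
import numpy as np

def two_moment_intensities(M1, M2, alpha, n):
    """Intensities lambda_1 and lambda_n of (31)-(32) for the case k = 2.

    `M1`, `M2` are arrays of the first two instantaneous moments over E,
    `alpha` is the spacing of the state-space of I and `n` the maximal jump."""
    b1 = sum(l for l in range(2, n + 1))          # b_1^{1,n} of (30)
    b2 = sum(l ** 2 for l in range(2, n + 1))     # b_2^{1,n} of (30)
    lam1 = (alpha * M1 * b2 - M2 * b1) / (alpha ** 2 * (b2 - b1))
    lamn = (M2 - alpha * M1) / (alpha ** 2 * (b2 - b1))
    # feasibility condition (33): alpha <= M2/M1 <= alpha * b2/b1 on all of E
    ratio = M2 / M1
    feasible = bool(np.all(ratio >= alpha) and np.all(ratio <= alpha * b2 / b1))
    return lam1, lamn, feasible
```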
The generator of the approximate corridor-realized variance $I$, conditional
on the chain $X$ being at the level $x$, is in general given by Formula (13).
In this particular case the non-zero matrix elements $\mathcal{L}^{I}(x:c,d)$,
$c,d\in\\{0,1,\ldots,2C\\}$, are given by
$\displaystyle\mathcal{L}^{I}(x:c,d)$ $\displaystyle:=$
$\displaystyle\left\\{\begin{array}[]{ll}\lambda_{1}(x)&\mathrm{if}\>\>d=(c+1)\\!\\!\\!\mod(2C+1);\\\
\lambda_{n}(x)&\mathrm{if}\>\>d=(c+i)\\!\\!\\!\mod(2C+1),\>i\in\\{2,...,n\\};\\\
-\lambda_{1}(x)-(n-1)\lambda_{n}(x)&\mathrm{if}\>\>d=c.\end{array}\right.$
This defines explicitly (via equations (31) and (32)) the dynamics of the
chain $(X,I)$ if the original asset price process $S$ is a diffusion. In
Section 6 we will describe an implementation of this method when $S$ follows a
CEV process and study the behaviour of certain volatility derivatives in this
model.
## 5\. The realized variance of a jump-diffusion
In this section the task is to describe the algorithm for the pricing of
volatility derivatives in jump-diffusion models. This will be achieved by an
application of the algorithm from Section 2 with $k=3$. In Section 6 we will
investigate numerically the quality of this approximation. We start by
describing a construction of the Markov chain which is used to approximate a
jump-diffusion.
### 5.1. Markov chain approximations for jump-diffusions
We will consider a class of processes with jumps that is obtained by
subordination of diffusions. The prototype for such processes is the well-
known variance gamma model defined in [15], which can be expressed as a time-
changed Brownian motion with drift.
A general way of building (possibly infinite-activity) jump-diffusion
processes is by subordinating diffusions using a class of independent
stochastic time changes. Such a time change is given by a non-decreasing
stationary process $(T_{t})_{t\geq 0}$ with independent increments, which
starts at zero, and is known as a subordinator. The law of $(T_{t})_{t\geq 0}$
is characterized by the Bernstein function $\phi(\lambda)$, defined by the
following identity
(35) $\displaystyle\mathbb{E}\left[\exp(-\lambda
T_{t})\right]=\exp(-\phi(\lambda)t)\quad\text{for any}\quad t\geq
0\quad\text{and}\quad\lambda\in D,$
where $D$ is an interval in $\mathbb{R}$ that contains the half-axis
$[0,\infty)$. For example in the case of the variance gamma process, the
Bernstein function is of the form
(36)
$\displaystyle\phi(\lambda)=\frac{\mu^{2}}{\nu}\log\left(1+\lambda\frac{\nu}{\mu}\right).$
In this case $(T_{t})_{t\geq 0}$ is a gamma process (the parameter $\mu$ is
the mean rate, usually taken to be equal to one in order to ensure that
$\mathbb{E}[T_{t}]=t$ for all $t\geq 0$, and $\nu$ is the variance rate of
$(T_{t})_{t\geq 0}$) with characteristic function equal to
$\mathbb{E}[\exp(\mathtt{i}uT_{t})]=\exp(-\phi(-\mathtt{i}u)t)$. Note that the
set $D$ in (35) is in this case equal to $(-\mu/\nu,\infty)$ (see [15],
equation (2)). This subordinator is used to construct the jump-diffusions in
Section 6.
Let $S$ be a diffusion defined by the SDE in (27). If we evaluate the process
$S$ at an independent subordinator $(T_{t})_{t\geq 0}$, we obtain a Markov
process with jumps $(S_{T_{t}})_{t\geq 0}$. It was shown in [17] that the
semigroup of $(S_{T_{t}})_{t\geq 0}$ is generated by the unbounded
differential operator ${\mathcal{G}}^{\prime}:=-\phi(-{\mathcal{G}})$, where
${\mathcal{G}}$ denotes the generator of the diffusion $S$. Similarly, if $X$
is a continuous-time Markov chain with generator $\mathcal{L}$ defined in the
first paragraph of Section 4, the subordinated process $(X_{T_{t}})_{t\geq 0}$
is again a continuous-time Markov chain with the generator matrix
$\mathcal{L}^{\prime}:=-\phi(-\mathcal{L})$. We should stress here that it is
possible to define rigorously the operator ${\mathcal{G}}^{\prime}$ using the
spectral decomposition of ${\mathcal{G}}$ and the theory of functional
calculus (see [8], Chapter XIII, Section 5, Theorem 1). The matrix
$\mathcal{L}^{\prime}$ can be defined and calculated easily using the Jordan
decomposition of the generator $\mathcal{L}$. If the matrix $\mathcal{L}$ can
be expressed in the diagonal form $\mathcal{L}=U\Lambda U^{-1}$, which is the
case in any practical application (the set of matrices that cannot be
diagonalised is of codimension one in the space of all matrices and therefore
has Lebesgue measure zero), we can compute $\mathcal{L}^{\prime}$ using the
following formula
(37) $\displaystyle\mathcal{L}^{\prime}=-U\phi(-\Lambda)U^{-1}.$
Here $\phi(-\Lambda)$ denotes a diagonal matrix with diagonal elements of the
form $\phi(-\lambda)$, where $\lambda$ runs over the spectrum of the generator
$\mathcal{L}$.
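Formula (37) with the Bernstein function (36) translates directly into code; the sketch below assumes, as in the discussion above, that the generator `gen` is numerically diagonalisable.

```python
import numpy as np

def subordinated_generator(gen, mu, nu):
    """Generator L' = -phi(-L) of (37) for a gamma subordinator with Bernstein
    function (36), computed from a numerical eigendecomposition of `gen`."""
    def phi(lam):                                   # the Bernstein function (36)
        return (mu ** 2 / nu) * np.log(1.0 + lam * nu / mu)

    eigvals, U = np.linalg.eig(gen)                 # gen = U diag(eigvals) U^{-1}
    gen_prime = -U @ np.diag(phi(-eigvals.astype(complex))) @ np.linalg.inv(U)
    return gen_prime.real                           # imaginary parts are numerical noise
```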
Before using the described procedure to define the jump-diffusion process, we
have to make sure that it has the correct drift under a risk-neutral measure.
Recall that if the process $S$ solves the SDE in (27), then the identity
$\mathbb{E}[S_{t}\lvert S_{0}]=S_{0}\exp(t\gamma)$ holds, where $\gamma$ is
the drift parameter in (27). Since the subordinator $(T_{t})_{t\geq 0}$ is
independent of $S$, by conditioning on the random variable $T_{t}$, we find
that under the pricing measure the following identity must hold
$S_{0}\exp(rt)=\mathbb{E}[S_{T_{t}}\lvert S_{0}]=S_{0}\mathbb{E}[\exp(\gamma
T_{t})]=S_{0}\exp(-\phi(-\gamma)t),$
where $\phi$ is the Bernstein function of the subordinator $(T_{t})_{t\geq 0}$
and $r$ is the prevailing risk-free rate which is assumed to be constant. This
will be satisfied if and only if $r=-\phi(-\gamma)$, which in the case of the
gamma subordinator (i.e. when the function $\phi$ is given by (36)) yields an
explicit formula for the drift in equation (27)
(38) $\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\frac{\mu}{\nu}\left(1-\exp\left(-\frac{r\nu}{\mu^{2}}\right)\right).$
Since formula (29) holds for the chain $X$, the tower property and the identity
$r=-\phi(-\gamma)$ imply
$\mathbb{E}[X_{T_{t}}\lvert X_{0}]=X_{0}\mathbb{E}[\exp(\gamma
T_{t})]=X_{0}\exp(rt).$
Therefore the subordinated Markov chain $(X_{T_{t}})_{t\geq 0}$ can also be
used as a model for a risky asset under the pricing measure.
The construction of jump-diffusions described above is convenient because we
can use the generator $\mathcal{L}$, which was defined in Section 4, and apply
the Bernstein function $\phi$ from (36) to obtain the generator of the Markov
chain that approximates the process $(S_{T_{t}})_{t\geq 0}$. This accomplishes
the first step in the approximation scheme outlined in the introduction. In
Subsection 5.2 we develop an algorithm for computing the law of the realized
variance of the approximating chain generated by $\mathcal{L}^{\prime}$. It
should be stressed that the algorithm in the next subsection does not depend
on the procedure used to obtain the generator of the approximating chain.
### 5.2. The algorithm
To simplify the notation let us assume that $S$ is a jump-diffusion and that
$X$ is a continuous-time Markov chain with generator $\mathcal{L}$ that is used
to approximate the dynamics of the Markov process $S$. Since $S$ has jumps it
is no longer enough to use the algorithm from Section 2 with $k=2$ (this will
become clear from the numerical results in Section 6). In this subsection we
give an account of how to apply our algorithm in the case $k=3$.
Assume we have chosen the spacing $\alpha$ and the constant $C$ that uniquely
determine the geometry of the state-space of the process $I$ (see the
paragraph preceding equation (9) for the definition of the state-space). Set
the maximum jump size of $I$ to be $m\alpha$ for some $m\in\mathbb{N}$. We now
pick an integer $n$, such that $1<n<m$, and set the intensities that
correspond to the jumps of the process $I$ of sizes between $2\alpha$ and
$n\alpha$ to equal $\lambda_{n}$. Similarly we set the intensities for the
jumps of sizes between $(n+1)\alpha$ and $m\alpha$ to be equal to
$\lambda_{m}$. This simplifying assumption makes it possible to describe the
dynamics of $I$ using only three functions
$\lambda_{1},\lambda_{n},\lambda_{m}:E\to\mathbb{R}_{+}$ that give state-
dependent intensities for jumping up by $i\alpha$ where $i=1$,
$i\in\\{2,\ldots,n\\}$, $i\in\\{n+1,\ldots,m\\}$ respectively. In order to
match $k=3$ instantaneous conditional moments of the corridor-realized
variance $Q^{L,U}(X)$, these functions must by (16) satisfy the following
system of equations
$\displaystyle\begin{pmatrix}1&b_{1}^{1,n}&b_{1}^{n,m}\\\
1&b_{2}^{1,n}&b_{2}^{n,m}\\\
1&b_{3}^{1,n}&b_{3}^{n,m}\end{pmatrix}\begin{pmatrix}\lambda_{1}(x)\\\
\lambda_{n}(x)\\\ \lambda_{m}(x)\end{pmatrix}$ $\displaystyle=$
$\displaystyle\begin{pmatrix}\overline{M}_{1}(x)\\\ \overline{M}_{2}(x)\\\
\overline{M}_{3}(x)\end{pmatrix}\quad\forall x\in
E,\quad\text{where}\quad\overline{M}_{j}(x):=\frac{M_{j}(x)}{\alpha^{j}},\quad$
the symbol $b_{j}^{n,m}$ is defined in (30) and functions $M_{j}$, $j=1,2,3$,
are given in (8). Gaussian elimination yields the explicit solution of the
system
$\displaystyle\lambda_{1}$ $\displaystyle=$
$\displaystyle\frac{(\overline{M}_{3}b_{1}^{n,m}-\overline{M}_{1}b_{3}^{n,m})(b_{2}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{2}^{n,m})-(\overline{M}_{2}b_{1}^{n,m}-\overline{M}_{1}b_{2}^{n,m})(b_{3}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{3}^{n,m})}{(b_{1}^{n,m}-b_{3}^{n,m})(b_{2}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{2}^{n,m})-(b_{1}^{n,m}-b_{2}^{n,m})(b_{3}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{3}^{n,m})},$
$\displaystyle\lambda_{n}$ $\displaystyle=$
$\displaystyle\frac{(\overline{M}_{3}-\overline{M}_{1})(b_{2}^{n,m}-b_{1}^{n,m})-(\overline{M}_{2}-\overline{M}_{1})(b_{3}^{n,m}-b_{1}^{n,m})}{(b_{2}^{n,m}-b_{1}^{n,m})(b_{3}^{1,n}-b_{1}^{1,n})-(b_{3}^{n,m}-b_{1}^{n,m})(b_{2}^{1,n}-b_{1}^{1,n})},$
$\displaystyle\lambda_{m}$ $\displaystyle=$
$\displaystyle\frac{(\overline{M}_{3}-\overline{M}_{1})(b_{2}^{1,n}-b_{1}^{1,n})-(\overline{M}_{2}-\overline{M}_{1})(b_{3}^{1,n}-b_{1}^{1,n})}{(b_{3}^{n,m}-b_{1}^{n,m})(b_{2}^{1,n}-b_{1}^{1,n})-(b_{2}^{n,m}-b_{1}^{n,m})(b_{3}^{1,n}-b_{1}^{1,n})},$
where all the identities are interpreted as functional equalities on the set
$E$. It is clear from (30) that the denominators in the above expressions
satisfy the inequalities
$(b_{1}^{n,m}-b_{3}^{n,m})(b_{2}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{2}^{n,m})-(b_{1}^{n,m}-b_{2}^{n,m})(b_{3}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{3}^{n,m})<0,$
$(b_{2}^{n,m}-b_{1}^{n,m})(b_{3}^{1,n}-b_{1}^{1,n})-(b_{3}^{n,m}-b_{1}^{n,m})(b_{2}^{1,n}-b_{1}^{1,n})<0,$
for sufficiently large $m$ (e.g. $m\geq 10$). This is because the term
$b_{3}^{n,m}$ dominates both expressions and has a negative coefficient in
front of it. We therefore find that, if the functions
$\lambda_{1},\lambda_{n},\lambda_{m}$ are to be positive, the following
inequalities must be satisfied
(39) $\displaystyle 0$ $\displaystyle<$
$\displaystyle\alpha^{2}M_{1}(x)+\alpha
M_{2}(x)\frac{b_{3}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{3}^{n,m}}{b_{3}^{n,m}b_{2}^{1,n}-b_{3}^{1,n}b_{2}^{n,m}}-M_{3}(x)\frac{b_{2}^{1,n}b_{1}^{n,m}-b_{1}^{1,n}b_{2}^{n,m}}{b_{3}^{n,m}b_{2}^{1,n}-b_{3}^{1,n}b_{2}^{n,m}},$
(40) $\displaystyle 0$ $\displaystyle>$
$\displaystyle\alpha^{2}M_{1}(x)-\alpha
M_{2}(x)\frac{b_{3}^{n,m}-b_{1}^{n,m}}{b_{3}^{n,m}-b_{2}^{n,m}}+M_{3}(x)\frac{b_{2}^{n,m}-b_{1}^{n,m}}{b_{3}^{n,m}-b_{2}^{n,m}},$
(41) $\displaystyle 0$ $\displaystyle<$
$\displaystyle\alpha^{2}M_{1}(x)-\alpha
M_{2}(x)\frac{b_{3}^{1,n}-b_{1}^{1,n}}{b_{3}^{1,n}-b_{2}^{1,n}}+M_{3}(x)\frac{b_{2}^{1,n}-b_{1}^{1,n}}{b_{3}^{1,n}-b_{2}^{1,n}},$
for every $x\in E$. These inequalities specify quadratic conditions on the
spacing $\alpha$ (of the state-space of the process $I$) which have to be
satisfied on the entire set $E$.
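In practice the $3\times 3$ system above can simply be solved numerically state by state, and a negative component of the solution signals that the chosen $\alpha$, $n$ and $m$ violate one of the inequalities (39)-(41); a minimal sketch follows.

```python
import numpy as np

def b(j, n, m):
    """The symbol b_j^{n,m} of (30): the sum of l^j for l = n+1, ..., m."""
    return float(sum(l ** j for l in range(n + 1, m + 1)))

def three_moment_intensities(M1, M2, M3, alpha, n, m):
    """Solve the 3 x 3 system for (lambda_1(x), lambda_n(x), lambda_m(x)) at one state x.

    `M1`, `M2`, `M3` are the instantaneous moments M_j(x) of (8); negative
    components of the returned vector indicate an infeasible choice of alpha, n, m."""
    A = np.array([[1.0, b(1, 1, n), b(1, n, m)],
                  [1.0, b(2, 1, n), b(2, n, m)],
                  [1.0, b(3, 1, n), b(3, n, m)]])
    rhs = np.array([M1 / alpha, M2 / alpha ** 2, M3 / alpha ** 3])
    return np.linalg.solve(A, rhs)
```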
Note that inequality (39) is always satisfied if the corresponding
discriminant is negative. Alternatively if the discriminant is non-negative,
then the real zeros of the corresponding parabola, denoted by
$\underline{\alpha}(x),\overline{\alpha}(x)$ and without loss of generality
assumed to satisfy $\underline{\alpha}(x)\leq\overline{\alpha}(x)$, exist and
the conditions
$\alpha<\underline{\alpha}(x)\quad\text{or}\quad\alpha>\overline{\alpha}(x)\quad\forall
x\in E$
must hold. Similar analysis can be applied to inequality (41). Inequality (40)
will always be violated if the discriminant is negative. This implies the
following condition
(42)
$\frac{(b_{3}^{n,m}-b_{1}^{n,m})^{2}}{(b_{3}^{n,m}-b_{2}^{n,m})(b_{2}^{n,m}-b_{1}^{n,m})}\geq\frac{4M_{1}(x)M_{3}(x)}{M_{2}(x)^{2}}\quad\forall
x\in E,$
which has to hold regardless of the choice of the spacing $\alpha$. Even if
condition (42) is satisfied we need to enforce the inequalities
$\underline{\alpha}(x)<\alpha<\overline{\alpha}(x)\quad\forall x\in E.$
In Table 1 we summarise the conditions that need to hold for
$\lambda_{1}(x),\lambda_{n}(x),\lambda_{m}(x)$ to be positive for any fixed
element $x\in E$.
Positivity condition | Discriminant $\geq 0$ | Restriction on $\alpha$
---|---|---
$\lambda_{1}(x)>0$ | true | $\alpha<\underline{\alpha}(x)$ or $\alpha>\overline{\alpha}(x)$
| false | none
$\lambda_{n}(x)>0$ | true | $\underline{\alpha}(x)<\alpha<\overline{\alpha}(x)$
| false | $\lambda_{n}(x)$ cannot be positive
$\lambda_{m}(x)>0$ | true | $\alpha<\underline{\alpha}(x)$ or $\alpha>\overline{\alpha}(x)$
| false | none
Table 1. Conditions on the discriminant and the real roots
$\underline{\alpha}(x),\overline{\alpha}(x)$, assumed to satisfy the relation
$\underline{\alpha}(x)\leq\overline{\alpha}(x)$, in this table refer to the
parabolas that arise in inequalities (39), (40) and (41). These inequalities
are equivalent to the conditions $\lambda_{1}(x)>0$, $\lambda_{n}(x)>0$ and
$\lambda_{m}(x)>0$ respectively.
Having chosen the spacing $\alpha$ according to the conditions in Table 1, we
can use the formulae above to compute functions $\lambda_{1},\lambda_{n}$ and
$\lambda_{m}$. The conditional generator of the process $I$, defined in (13),
now takes the form
$\displaystyle\mathcal{L}^{I}(x:c,d):=\left\\{\begin{array}[]{ll}\lambda_{1}(x)&\mathrm{if}\>\>d=(c+1)\\!\\!\\!\mod(2C+1);\\\
\lambda_{n}(x)&\mathrm{if}\>\>d=(c+s)\\!\\!\\!\mod(2C+1),\>s\in\\{2,...,n\\};\\\
\lambda_{m}(x)&\mathrm{if}\>\>d=(c+s)\\!\\!\\!\mod(2C+1),\>s\in\\{n+1,...,m\\};\end{array}\right.$
with diagonal elements given by
$-\lambda_{1}(x)-(n-1)\lambda_{n}(x)-(m-n)\lambda_{m}(x)$ and all other
entries equal to zero. In Section 6 we are going to implement the algorithm
described here for the variance gamma model and the subordinated CEV process.
## 6\. Numerical results
In this section we will perform a numerical study of the approximations given
in Sections 4 and 5. Subsection 6.1 gives an explicit construction of the
approximating Markov chain $X$ and compares the vanilla option prices with the
ones in the original model $S$. Subsections 6.2 and 6.3 compare the algorithm
for volatility derivatives described in this paper with a Monte Carlo
simulation.
### 6.1. Markov chain approximation
Let $S$ be a Markov process that satisfies SDE (27) with the volatility
function $\sigma:\mathbb{R}_{+}\to\mathbb{R}_{+}$ given by
$\sigma(s):=\sigma_{0}s^{\beta-1}$ and the drift $\gamma$ equal to the risk-
free rate $r$ (i.e. $S$ is a CEV process). We generate the state-space $E$ using
the algorithm in Appendix B and find the generator matrix $\mathcal{L}$ by
solving the linear system (28).
If the process $S$ is a jump-diffusion of the form described in Subsection 5.1
(e.g. a variance gamma model or a CEV model subordinated by a gamma process),
we obtain the generator for the chain $X$ by applying formula (37) to the
generator defined in the previous paragraph, where the function $\phi$ (in
(37)) is given by (36). More precisely if $S$ is the subordinated CEV
process, the drift $\gamma$ in (27) is given by the formula in (38). If $S$ is
a variance gamma model we subordinate the geometric Brownian motion which
solves SDE (27) with the constant volatility function $\sigma(s)=\sigma_{0}$
and the drift $\gamma=\theta+\sigma_{0}^{2}/2$ where $\theta$ is the parameter
in the variance gamma model (see [15], equation (1)). The implementation in
Matlab of this construction can be found in [14]. Note that the state-space of
the Markov chain $X$ in the diffusion and the jump-diffusion cases is of the
same form (i.e. given by the algorithm in Appendix B).
The numerical accuracy of these approximations is illustrated in Tables 2, 3
and 4 where the vanilla option prices in the Markov chain model $X$ are
compared with the prices in the original model $S$ for the CEV process, the
variance gamma model and the subordinated CEV model respectively.
| | Markov chain $X$ | | | CEV: closed-form |
---|---|---|---|---|---|---
$K\backslash T$ | $0.5$ | $1$ | $2$ | $0.5$ | $1$ | $2$
80 | 21.44% | 21.42% | 21.30% | 21.54% | 21.47% | 21.34%
90 | 20.55% | 20.57% | 20.46% | 20.68% | 20.62% | 20.49%
100 | 19.93% | 19.90% | 19.71% | 19.94% | 19.88% | 19.75%
110 | 19.37% | 19.19% | 19.11% | 19.28% | 19.22% | 19.10%
120 | 18.76% | 18.66% | 18.53% | 18.69% | 18.63% | 18.52%
Table 2. Implied volatility in the CEV model. The maturity $T$ varies from
half a year to two years and the corresponding strikes are of the form
$Ke^{rT}$, where $K$ takes values between 80 and 120 and the risk-free rate
equals $r=2\%$. The CEV process $S$, with the current spot value $S_{0}=100$,
is given by (27) with the local volatility function $\sigma$ equal to
$\sigma(s):=\sigma_{0}s^{\beta-1}$ and the drift $\gamma=r$, where the
volatility parameters are $\sigma_{0}=0.2$, $\beta=0.3$. The parameters for
the non-uniform state-space of the chain $X$ are $N=70$ and
$l=1,s=100,u=700,g_{l}=50,g_{u}=50$ (see Appendix B for the definition of
these parameters) and the generator of $X$ is specified by system (28). The
pricing in the Markov chain model is done using (44) and in the CEV model
using a closed-form formula in [12], pages 562-563. The total computation time
for all the option prices in the table under the Markov chain model $X$ is less
than one tenth of a second on a standard PC with 1.6GHz Pentium-M processor
and 1GB RAM.
| | Markov chain $X$ | | | VG: FFT |
---|---|---|---|---|---|---
$K\backslash T$ | $0.5$ | $1$ | $2$ | $0.5$ | $1$ | $2$
80 | 20.43% | 20.07% | 19.98% | 20.44% | 20.09% | 20.00%
90 | 19.91% | 19.89% | 19.93% | 19.95% | 19.94% | 19.96%
100 | 19.69% | 19.84% | 19.92% | 19.75% | 19.87% | 19.94%
110 | 19.85% | 19.89% | 19.93% | 19.82% | 19.88% | 19.93%
120 | 20.16% | 19.92% | 19.94% | 20.08% | 19.93% | 19.94%
Table 3. Implied volatility in the variance gamma model. The strikes and
maturities are as in Table 2. The process $S$, with the current spot value
$S_{0}=100$, is obtained by subordinating diffusion (27) with the constant
volatility function $\sigma(s)=\sigma_{0}$ and the drift equal to
$\gamma=\theta+\sigma_{0}^{2}/2$, where $\theta$ is given in [15], equation
(1). The Bernstein function of the gamma subordinator is given in (36). The
risk-free rate is assumed to be $r=2\%$, the diffusion parameters take values
$\sigma_{0}=0.2,\theta=-0.04,$ and the jump parameters in (36) equal
$\mu=1,\nu=0.05$. The parameters for the state-space of the chain $X$ are
$N=70$ and $l=1,s=100,u=700,$ $g_{l}=30,g_{u}=30$ (see Appendix B for the
definition of these parameters). The total computation time for all the option
prices in the table under the Markov chain model is less than one tenth of a
second. The Fourier inversion is performed using the algorithm in [5] and
takes approximately the same amount of time. All computations are performed on
the same hardware as in Table 2.
| | Markov chain $X$ | | | CEV with jumps: MC |
---|---|---|---|---|---|---
$K\backslash T$ | $0.5$ | $1$ | $2$ | $0.5$ | $1$ | $2$
80 | 20.82% | 20.57% | 20.49% | 20.92% | 20.66% | 20.41%
90 | 20.08% | 20.10% | 20.11% | 20.16% | 20.12% | 20.19%
100 | 19.74% | 19.83% | 19.82% | 19.75% | 19.81% | 19.78%
110 | 19.66% | 19.61% | 19.56% | 19.64% | 19.58% | 19.53%
120 | 19.72% | 19.48% | 19.32% | 19.75% | 19.37% | 19.39%
Table 4. Implied volatility in the CEV model subordinated by a gamma process.
The strikes and maturities are as in Table 2. The process $S$, with the
current spot value $S_{0}=100$, is obtained by subordinating diffusion (27)
with the volatility function $\sigma(s)=\sigma_{0}s^{\beta-1}$ (where
$\sigma_{0}=0.2,\beta=0.7$) and the drift given by (38) (where the risk-free
rate is $r=2\%$ and the jump-parameters in (36) equal $\mu=1,\nu=0.05$). The
parameters for the state-space of the chain $X$ are as in Table 3. The total
computation time for all the option prices in the table under the Markov chain
model is less than one tenth of a second. The prices in the model $S$ were
computed using a Monte Carlo algorithm that first generates the paths of the
gamma process $(T_{t})_{t\geq 0}$ (using the algorithm in [11], page 144) and
then, via an Euler scheme, generates paths of the process $S$. For the $T=2$
years maturity, $10^{5}$ paths were generated in $200$ seconds. All
computations are performed on the same hardware as in Table 2.
It is clear from Tables 2, 3 and 4 that the continuous-time Markov chain $X$
approximates reasonably well the Markov process $S$ on the level of European
option prices. The pricing in the Markov chain model is done by matrix
exponentiation. The transition semigroup of the chain $X$ is of the form
(44) $\displaystyle\mathbb{P}(X_{t}=y\lvert
X_{0}=x)=e_{x}^{\prime}\exp(t\mathcal{L})e_{y},\quad\text{where}\quad x,y\in
E,$
$e_{x},e_{y}$ are the corresponding vectors of the standard basis of
$\mathbb{R}^{N}$ and ′ denotes transposition. For more details on this pricing
algorithm see [2]. The implied volatilities in the CEV, the variance gamma and
the subordinated CEV model were obtained by a closed-form formula, a fast
Fourier transform inversion algorithm and a Monte Carlo algorithm
respectively. As mentioned earlier the quality of this approximation can be
improved considerably, without increasing the size of the set $E$, by matching
more than the first two instantaneous moments of the process $S$.
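As an illustration of pricing by matrix exponentiation via (44), the sketch below values European calls in the chain model; the generator, grid and parameter values are placeholders (for instance the output of the diffusion sketch in Section 4) rather than the exact configuration used for Table 2.

```python
import numpy as np
from scipy.linalg import expm

def call_prices(gen, states, spot_index, strikes, T, r):
    """Discounted European call prices in the chain model via the semigroup (44)."""
    states = np.asarray(states, dtype=float)
    probs = expm(T * gen)[spot_index, :]                 # row of exp(T*L) for X_0
    payoffs = np.maximum(states[None, :] - np.asarray(strikes, dtype=float)[:, None], 0.0)
    return np.exp(-r * T) * (payoffs @ probs)            # discounted expectations

# hypothetical usage with the generator `gen` and grid `E` from the earlier sketch:
# call_prices(gen, E, spot_index=int(np.argmin(np.abs(E - 100.0))),
#             strikes=[80, 90, 100, 110, 120], T=1.0, r=0.02)
```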
### 6.2. Volatility derivatives – the continuous case
The next task is to construct the process $I$ defined in Section 2, obtain its
law at a maturity $T$ using Theorem 2.1 and compare it to the law of the
random variable $[\log(S)]_{T}$ defined in (1) by pricing non-linear
contracts.
Let $S$ be the CEV process with the parameter values as in the caption of
Table 2 and let $X$ be the corresponding Markov chain, which is also described
uniquely in the same caption. As described in Section 4, in this case we use
$k=2$ (i.e. the process $I$ matches the first and the second instantaneous
conditional moments of the quadratic variation $[\log(X)]$ defined in (5)) and hence
define the state-dependent intensities in the conditional generator
$\mathcal{L}^{I}$ by (31) and (32). We still need to determine the values of
the spacing $\alpha$, the size $(2C+1)$ of the state-space of $I$ and the
largest possible jump-size $\alpha n$ of the process $I$ at any given time.
The necessary and sufficient condition on parameters $\alpha$ and $n$ is given
by (33). Figure 1b contains the graph of the ratio in question $x\mapsto
M_{2}(x)/M_{1}(x)$, $x\in E$, for the CEV model. The minimum of the ratio is
$0.000563$, which can be used to define the value of $\alpha$. The largest
value of the ratio is approximately $0.019$ and hence $n=50$ satisfies the
first inequality in (33).
An important observation here is that Figure 1b only displays the values of
the ratio $M_{2}(x)/M_{1}(x)$ for $x$ in $E\cap[20,250]$. The choices of
$\alpha$ and $n$ made above therefore satisfy condition (33) only in this
range (recall that in this case we have $x_{0}=1$ and $x_{69}=700$). However
this apparent violation of the condition in (33) plays no role because the
probability for the underlying process $X$ to get below $20$ or above $250$ in
2 years time is less than $10^{-6}$ (see Figure 1a). This intuitive statement
is supported by the quality of the approximation of the empirical distribution
of $[\log(S)]_{T}$ by the distribution of $I_{T}$ (see Figure 1c and Table 5).
We now need to choose the size $(2C+1)$ of the state-space for the process
$I$. The integer $C$ is determined by the longest maturity that we are
interested in, which in our case is 2 years. This is because we are using
Theorem 2.1 to find the joint law of the random variable $(X_{T},I_{T})$ and
must make sure that the process $I$ does not complete the full circle during
the time interval of length $T$ (recall that the pricing algorithm based on
Theorem 2.1 makes the assumption that the process $I$ is on a circle). In
other words we have to choose $C$ so that the chain $X$ accumulates much less
than $2C\alpha$ of realized variance. In the example considered here it is
sufficient to take $C=220$, which makes the state-space
$\\{0,\alpha,\ldots,\alpha 2C\\}$, defined in the paragraph following (8), a
uniform lattice in the interval between $0$ and $440\cdot 0.00056=0.246$.
Since the spacing $\alpha$ does not change with maturity, all that is needed
to obtain the joint probability distribution of $(X_{T},I_{T})$ for all
maturities $T\in\\{0.5,1,2\\}$ is to diagonalize numerically the complex
matrices $\mathcal{L}_{j}$, $j=0,...,2C$, in (17) only once. The distribution
of $I_{T}$, obtained as a marginal of the random vector $(X_{T},I_{T})$, is
plotted in Figure 1c. Note that the computational time required to obtain the
law of $I_{T}$ is therefore independent of maturity $T$.
CEV | $k$ moments | | Spectral: $I_{T}$ | | | MC: $[\log(S)]_{T}$ |
---|---|---|---|---|---|---|---
$\mathrm{derivative}\backslash T$ | | $0.5$ | $1$ | $2$ | $0.5$ | $1$ | $2$
var swap | 1 | 20.07% | 20.19% | 20.43% | 20.09% | 20.20% | 20.42%
$\sqrt{\mathbb{E}[\Sigma_{T}/T]}$ | 2 | 20.07% | 20.19% | 20.42% | (0.051%) | (0.051%) | (0.052%)
vol swap | 1 | 19.97% | 20.08% | 20.25% | 19.92% | 20.06% | 20.22%
$\mathbb{E}\left[\sqrt{\Sigma_{T}/T}\right]$ | 2 | 19.92% | 20.05% | 20.22% | (0.006%) | (0.007%) | (0.009%)
call option | 1 | 1.46% | 1.47% | 1.51% | 1.46% | 1.48% | 1.53%
$\theta=80\%$ | 2 | 1.46% | 1.47% | 1.52% | (0.003%) | (0.003%) | (0.005%)
call option | 1 | 0.33% | 0.33% | 0.43% | 0.39% | 0.38% | 0.45%
$\theta=100\%$ | 2 | 0.38% | 0.38% | 0.45% | (0.002%) | (0.002%) | (0.004%)
call option | 1 | 0.01% | 0.02% | 0.07% | 0.05% | 0.03% | 0.08%
$\theta=120\%$ | 2 | 0.06% | 0.04% | 0.08% | (0.001%) | (0.001%) | (0.003%)
Time | | | 15s | | 50s | 100s | 200s
Table 5. The prices of volatility derivatives in the CEV model $S$. The
parameter values for the process $S$ and the chain $X$ are given in the
caption of Table 2. The parameters for the process $I$ are $\alpha=0.00056$,
$C=220$ for $k\in\\{1,2\\}$ and $n=50$ when $k=2$ (recall from Section 4 that
the parameter $n$ controls the jumps of $I$ strictly larger than $\alpha$,
which are not present if $k=1$). The variable $\Sigma_{T}$ denotes either
$I_{T}$ or $[\log(S)]_{T}$ and the call option price is
$\mathbb{E}\left[(\Sigma_{T}/T-(\theta K_{0})^{2})^{+}\right]$, for
$\theta\in\\{80\%,100\%,120\%\\}$, $K_{0}:=\sqrt{\mathbb{E}[\Sigma_{T}/T]}$.
An Euler scheme with a time-increment of one day is used to generate $10^{5}$
paths of the CEV process $S$ and the sum in (1) is used to obtain the
empirical distribution of $[\log(S)]_{T}$ (see Figure 1c) and to evaluate the
contingent claims in this table. The numbers in brackets are the standard
errors in the Monte Carlo simulation. The computational time for the pricing
of volatility derivatives using our algorithm is independent of the maturity
$T$. All computations are performed on the same hardware as in Table 2.
We now perform a numerical comparison between our method for pricing
volatility derivatives and a pricing algorithm based on a Monte Carlo
simulation of the CEV model $S$. We generate $10^{5}$ paths of the process $S$
using an Euler scheme and compute the empirical probability distribution of
the realized variance $[\log(S)]_{T}$ based on that sample (see Figure 1c). We
also compute the variance swap, the volatility swap and the call option prices
$\mathbb{E}\left[(\Sigma_{T}/T-(\theta K_{0})^{2})^{+}\right]$, for
$\theta\in\\{80\%,100\%,120\%\\}$, where
$K_{0}:=\sqrt{\mathbb{E}[\Sigma_{T}/T]}$ and $\Sigma_{T}$ denotes either
$I_{T}$ or $[\log(S)]_{T}$. The prices and the computation times are
documented in Table 5. A cursory inspection of the prices of non-linear payoff
functions reveals that the method for $k=2$ outperforms the algorithm proposed
in [1], which corresponds to $k=1$, without adding computational complexity
since both algorithms require finding the spectrum of $(2C+1)$ complex
matrices in (18). We will soon see that the discrepancy between the algorithm
in [1] and the one proposed in the current paper is amplified in the presence
of jumps. Note also that all three methods ($k=1,2$ and the Monte Carlo
method) agree in the case of linear payoffs.
### 6.3. Volatility derivatives – the discontinuous case
In this subsection we will study numerically the behaviour of the law of
random variables $[\log(S)]_{T}$ and $Q^{L,U}_{T}(S)$, defined in (1) and (4)
respectively, where $S$ is a Markov process with jumps. Let $S$ be a variance
gamma or a subordinated CEV process with parameter values given in the
captions of Tables 3 and 4 respectively.
Since $S$ has discontinuous trajectories we will have to match $k=3$
instantaneous conditional moments when defining the process $I$ in order to
avoid large pricing errors for non-linear payoffs (see Tables 6 and 7 for the
size of the errors when $k=2$). We first define the Markov chain $X$ as
described in Subsection 5.1 using the parameter values in the captions of
Tables 3 and 4. All computations in this subsection are performed using the
implementation in [14] of our algorithm.
Recall from Subsection 5.2 that in order to define the process $I$ we need to
set values for the integers $1<n<m$ and the spacing $\alpha$ so that the
intensity functions $\lambda_{1},\lambda_{n},\lambda_{m}$ are positive (Table
1 states explicit necessary and sufficient conditions for this to hold). Note
that the inequality in (42) is necessary if $\lambda_{n}$ is to be positive.
Figure 2c contains the graph of the function $x\mapsto
4M_{1}(x)M_{3}(x)/M_{2}(x)^{2}$ for $x\in E$ such that $20\leq x\leq 250$ in
the case of the variance gamma model. If we choose
$n:=5\quad\text{and}\quad m:=30,$
then the left-hand side of the inequality in (42) equals $13.48$, which is an
upper bound for the ratio in Figure 2c. If $S$ equals the subordinated CEV
process, the graph of the function $x\mapsto 4M_{1}(x)M_{3}(x)/M_{2}(x)^{2}$
takes a similar form and the same choice of $n,m$ as above satisfies the
inequality in (42).
The distance $\alpha$ between the consecutive points in the state-space of the
process $I$ has to be chosen so that the inequality
$\underline{\alpha}(x)<\alpha<\overline{\alpha}(x)$ is satisfied for all $x\in
E$ (see Table 1). Figure 2d contains the graphs of the functions
$\underline{\alpha},\overline{\alpha}$ over the state-space of $X$ in the
range $20\leq x\leq 250$ for the variance gamma model. The corresponding
graphs in the case of the subordinated CEV process are very similar and are not
reported. It follows that by choosing
$\alpha:=0.002$
we can ensure that all the conditions in the third row of Table 1 are met,
both in the variance gamma and the subordinated CEV model, for $x\in E$ such
that $20\leq x\leq 250$. It should be noted that it is impossible to find a
single value of $\alpha$ that lies between the zeros $\underline{\alpha}(x)$
and $\overline{\alpha}(x)$ for all $x\in E$ for our specific choice of the
chain $X$ and its state-space. However not matching the instantaneous
conditional moments of $I_{T}$ and $Q^{L,U}_{T}(X)$ outside of the interval
$[20,250]$ is in practice of little consequence because the probability that
the chain $X$ leaves the interval $[20,250]$ (recall that the current spot level is
assumed to be 100) before the maturity $T=2$ is negligible (see Figure 2a for
the distribution of $X$ in the case of variance gamma model).
Once the parameters $n,m$ and $\alpha$ have been determined, we use the
explicit expressions for $\lambda_{1},\lambda_{n},\lambda_{m}$ in Subsection 5.2 to
define the state dependent intensities of the process $I$ for the states $x\in
E$ that satisfy $20\leq x\leq 250$. Outside of this region we choose the
functions $\lambda_{1},\lambda_{n},\lambda_{m}:E\to\mathbb{R}_{+}$ to be
constant. The choice of the parameter $C=65$ is, as in the previous subsection,
determined by the longest maturity we are interested in (in our case this is
$T=2$). The laws of the realized variance $[\log(S)]_{T}$, for
$T\in\\{0.5,1,2\\}$, in the variance gamma and the subordinated CEV model
based on the approximation $I_{T}$ are given in Figures 2b and 3a
respectively. The prices of various payoffs on the realized variance in these
two models are given in Tables 6 and 7.
VG | $k$ moments | | Spectral: $I_{T}$ | | | MC: $[\log(S)]_{T}$ |
---|---|---|---|---|---|---|---
$\mathrm{derivative}\backslash T$ | | $0.5$ | $1$ | $2$ | $0.5$ | $1$ | $2$
var swap | 1 | 20.01% | 20.01% | 20.02% | 20.01% | 20.01% | 20.01%
$\sqrt{\mathbb{E}[\Sigma_{T}/T]}$ | 2 | 20.01% | 20.01% | 20.02% | (0.051%) | (0.051%) | (0.051%)
| 3 | 20.01% | 20.01% | 20.02% | | |
vol swap | 1 | 19.74% | 19.88% | 19.96% | 19.28% | 19.62% | 19.81%
$\mathbb{E}\left[\sqrt{\Sigma_{T}/T}\right]$ | 2 | 19.40% | 19.67% | 19.83% | (0.017%) | (0.012%) | (0.009%)
| 3 | 19.25% | 19.62% | 19.81% | | |
call option | 1 | 1.51% | 1.46% | 1.44% | 1.65% | 1.52% | 1.46%
$\theta=80\%$ | 2 | 1.56% | 1.48% | 1.45% | (0.007%) | (0.005%) | (0.004%)
| 3 | 1.66% | 1.53% | 1.47% | | |
call option | 1 | 0.50% | 0.36% | 0.25% | 0.85% | 0.63% | 0.45%
$\theta=100\%$ | 2 | 0.71% | 0.56% | 0.44% | (0.005%) | (0.004%) | (0.003%)
| 3 | 0.83% | 0.61% | 0.45% | | |
call option | 1 | 0.06% | 0.01% | 0.00% | 0.37% | 0.18% | 0.07%
$\theta=120\%$ | 2 | 0.35% | 0.22% | 0.09% | (0.004%) | (0.002%) | (0.001%)
| 3 | 0.35% | 0.18% | 0.07% | | |
Time | | | 4s | | 62s | 120s | 230s
Table 6. The prices of volatility derivatives in the variance gamma model $S$.
The parameter values for the process $S$ and the chain $X$ are given in the
caption of Table 3. The parameters for the process $I$ are $\alpha=0.002$,
$C=65$ for $k=1,2,3$. We choose $n=30$ when $k=2$ and $n=5,m=30$ when $k=3$.
The variable $\Sigma_{T}$ and the payoffs are as in Table 5. The algorithm in
[11], page 144, is used to generate $10^{5}$ paths of the VG process $S$ and
the sum in (1) is used to obtain the empirical distribution of $[\log(S)]_{T}$
(see Figure 2b) and to evaluate the contingent claims in this table. The
numbers in brackets are the standard errors in the Monte Carlo simulation.
Note that the computational time for the pricing of volatility derivatives
using the process $I$ is independent of the maturity $T$. All computations are
performed on the same hardware as in Table 2 (see [14] for the source code in
Matlab).
CEV + jumps | $k$ moments | | Spectral: $I_{T}$ | | | MC: $[\log(S)]_{T}$ |
---|---|---|---|---|---|---|---
$\mathrm{derivative}\backslash T$ | | $0.5$ | $1$ | $2$ | $0.5$ | $1$ | $2$
var swap | 1 | 20.00% | 20.03% | 20.07% | 20.01% | 20.03% | 20.08%
$\sqrt{\mathbb{E}[\Sigma_{T}/T]}$ | 2 | 20.00% | 20.03% | 20.07% | (0.051%) | (0.051%) | (0.051%)
| 3 | 20.00% | 20.03% | 20.09% | | |
vol swap | 1 | 19.73% | 19.89% | 19.98% | 19.27% | 19.63% | 19.84%
$\mathbb{E}\left[\sqrt{\Sigma_{T}/T}\right]$ | 2 | 19.39% | 19.67% | 19.85% | (0.017%) | (0.018%) | (0.010%)
| 3 | 19.24% | 19.62% | 19.85% | | |
call option | 1 | 1.51% | 1.46% | 1.45% | 1.65% | 1.53% | 1.48%
$\theta=80\%$ | 2 | 1.56% | 1.49% | 1.46% | (0.007%) | (0.005%) | (0.004%)
| 3 | 1.66% | 1.54% | 1.49% | | |
call option | 1 | 0.51% | 0.37% | 0.30% | 0.86% | 0.64% | 0.49%
$\theta=100\%$ | 2 | 0.71% | 0.57% | 0.47% | (0.006%) | (0.004%) | (0.003%)
| 3 | 0.84% | 0.63% | 0.49% | | |
call option | 1 | 0.06% | 0.02% | 0.01% | 0.37% | 0.19% | 0.09%
$\theta=120\%$ | 2 | 0.36% | 0.23% | 0.11% | (0.004%) | (0.002%) | (0.001%)
| 3 | 0.35% | 0.19% | 0.09% | | |
Time | | | 4s | | 100s | 200s | 400s
Table 7. The prices of volatility derivatives in the subordinated CEV model
$S$. The parameter values for the process $S$ and the chain $X$ are given in
the caption of Table 4. The parameters for the process $I$, the random
variable $\Sigma_{T}$ and the payoffs of the volatility derivatives are as in
Table 6. The algorithm described in the caption of Table 4 is used to generate
$10^{5}$ paths of the process $S$ and the sum in (1) is used to obtain the
empirical distribution of $[\log(S)]_{T}$ (see Figure 3a) and to evaluate the
contingent claims in this table. The numbers in brackets are the standard
errors in the Monte Carlo simulation. Note that the computational time for the
pricing of volatility derivatives using the process $I$ is independent of the
maturity $T$. All computations are performed on the same hardware as in Table
2 (the code in [14] can easily be adapted to this model).
Observe that the time required to compute the distribution of $I_{T}$ in the
case of the continuous process $S$ (see Table 5) is larger than the time
required to perform the equivalent task for the process with jumps (see Tables
6 and 7). From the point of view of the algorithm this difference arises
because in the continuous case we have to use more points in the state-space
of the process $I$ since condition (33) forces the choice of the smaller
spacing $\alpha$. In other words the quotient $x\mapsto M_{2}(x)/M_{1}(x)$
takes much smaller values if there are no jumps in the model $S$ than if there
are. It is intuitively clear from definition (8) that this ratio for the
variance gamma (or the subordinated CEV) has a larger lower bound than the
function in Figure 1b, because in the diffusion case the generator matrix
is tridiagonal.
CEV + jumps | $k$ moments | | Spectral: $I_{T}$ | | | MC: $Q^{L,U}_{T}(S)$ |
---|---|---|---|---|---|---|---
$\mathrm{derivative}\backslash T$ | | $0.5$ | $1$ | $2$ | $0.5$ | $1$ | $2$
corr-var swap | 1 | 19.81% | 19.40% | 18.50% | 19.81% | 19.41% | 18.50%
$\sqrt{\mathbb{E}[\Sigma_{T}/T]}$ | 2 | 19.81% | 19.40% | 18.49% | (0.051%) | (0.050%) | (0.048%)
| 3 | 19.81% | 19.40% | 18.50% | | |
corr-vol swap | 1 | 19.59% | 19.22% | 18.25% | 19.12% | 19.03% | 18.19%
$\mathbb{E}\left[\sqrt{\Sigma_{T}/T}\right]$ | 2 | 19.18% | 18.97% | 18.08% | (0.016%) | (0.012%) | (0.005%)
| 3 | 19.06% | 18.93% | 18.08% | | |
Time | | | 4s | | 100s | 200s | 400s
Table 8. Contingent claims on corridor-realized variance in the subordinated
CEV model $S$. The corridor is defined by $L=70$ and $U=130$. All parameter
values are as in Table 7. The empirical distribution of $Q^{L,U}_{T}(S)$ and
the law of $I_{T}$ for $T\in\\{0.5,1,2\\}$ are given in Figure 3b. The Monte
Carlo algorithm is as described in Table 4 and the numbers in brackets are the
standard errors in the simulation.
Finally we apply our algorithm to computing the law of the corridor-realized
variance $Q_{T}^{L,U}(S)$, where $S$ is the subordinated CEV process and the
corridor is given by $L=70$ and $U=130$. It is clear from Figure 3b and the
price of the square root payoff in Table 8 that the process $I$ defined by
matching $k=3$ instantaneous moments of $Q^{L,U}_{T}(S)$ approximates best the
entire distribution of the corridor-realized variance. However, if one is
interested only in the value of the corridor variance swap (i.e. a derivative
with a payoff that is linear in $Q^{L,U}_{T}(S)$), Table 8 shows that it
suffices to take $k=1$.
## 7\. Conclusion
We proposed an algorithm for pricing and hedging volatility derivatives and
derivatives on the corridor-realized variance in markets driven by Markov
processes of dimension one. The scheme is based on an order $k$ approximation
of the corridor-realized variance process by a continuous-time Markov chain.
We proved the weak convergence of our scheme as $k$ tends to infinity and
demonstrated with numerical examples that in practice it is sufficient to use
$k=2$ if the underlying Markov process is continuous and $k=3$ if the market
model has jumps.
There are two natural open questions related to this algorithm. First, it
would be interesting to understand the precise rate of convergence in Theorem
3.1 both from the theoretical point of view and that of applications. The
second question is numerical in nature. As mentioned in the introduction, the
algorithm described in this paper can be adapted to the case when the process
$S$ is a component of a two dimensional Markov process. The implementation of
the algorithm in this case is hampered by the dimension of the generator of
the approximating Markov chain, which would in this case be approximately
$2000$ (as opposed to $70$, as in the examples of Section 6). It would be
interesting to understand the precise structure of this large generator matrix
and perhaps exploit it to obtain an efficient algorithm for pricing volatility
derivatives in the presence of stochastic volatility.
## Appendix A Partial-circulant matrices
A matrix $C\in\mathbb{R}^{n\times n}$ is circulant if there exists a vector
$c\in\mathbb{R}^{n}$ such that $C_{ij}=c_{(i-j)\\!\\!\\!\mod n}$ for all
$i,j\in\\{1,\ldots,n\\}.$ The matrix $C$ can always be diagonalised
analytically, when viewed as a linear operator on the complex vector space
$\mathbb{C}^{n}$, as follows. For any $r\in\\{0,\ldots,n-1\\}$ we have an
eigenvalue $\lambda_{r}$ and a corresponding eigenvector $y^{(r)}$ (i.e. the
equation $Cy^{(r)}=\lambda_{r}y^{(r)}$ holds for all $r$ and the family of
vectors $y^{(r)}$, $r\in\\{0,\ldots,n-1\\}$, spans the whole of
$\mathbb{C}^{n}$) of the form
$\displaystyle\lambda_{r}=\sum_{k=0}^{n-1}c_{k}e^{-i\frac{2\pi}{n}rk}\quad\text{and}\quad
y^{(r)}_{j}=\frac{1}{\sqrt{n}}e^{-i\frac{2\pi}{n}rj}\>\>\>\>\mathrm{for}\>\>\>\>j\in\\{0,\ldots,n-1\\}.$
It is interesting to note that the eigenvectors $y^{(r)}$,
$r\in\\{0,\ldots,n-1\\}$, are independent of the circulant matrix $C$. For the
proof of these statements see Appendix A in [1].
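As a quick numerical illustration of these statements (a sketch for the reader, not taken from [1]): the spectrum of a circulant matrix built from a vector $c$ coincides with the discrete Fourier transform of $c$, and can be checked directly.

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
c = rng.standard_normal(n)

# C_{ij} = c_{(i-j) mod n}; the first column of C is the vector c itself.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# The eigenvalues sum_k c_k * exp(-2*pi*1j*r*k/n), r = 0,...,n-1, are exactly the
# discrete Fourier transform of c; the eigenvectors are Fourier modes independent of c.
assert np.allclose(np.sort_complex(np.linalg.eigvals(C)),
                   np.sort_complex(np.fft.fft(c)))
```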
Let $A$ be a linear operator represented by a matrix in $\mathbb{R}^{m\times
m}$ and let $B^{(k)}$, for $k=0,\ldots,m-1$, be a family of $n$-dimensional
matrices with the following property: there exists an invertible matrix
$U\in\mathbb{C}^{n\times n}$ such that
$U^{-1}B^{(k)}U=\Lambda^{(k)},\>\>\>\>\mathrm{for}\>\>\mathrm{all}\>\>\>\>k\in\\{0,\ldots,m-1\\},$
where $\Lambda^{(k)}$ is a diagonal matrix in $\mathbb{C}^{n\times n}$. In
other words this condition stipulates that the family of matrices $B^{(k)}$
can be simultaneously diagonalized by the transformation $U$. Therefore the
columns of matrix $U$ are eigenvectors of $B^{(k)}$ for all $k$ between 0 and
$m-1$.
Let us now define a large linear operator $\widetilde{A}$, acting on a vector
space of dimension $mn$, in the following way. Clearly the matrix
$\widetilde{A}$ can be decomposed naturally into $m^{2}$ blocks of size
$n\times n$. Let $\widetilde{A}_{i,j}$ denote an $n\times n$ matrix which
represents the block in the $i$-th row and $j$-th column of this
decomposition. We now define the operator $\widetilde{A}$ as
(45) $\displaystyle\widetilde{A}_{ii}$ $\displaystyle:=$ $\displaystyle
B^{(i)}+A_{ii}\mathbb{I}_{\mathbb{R}^{n}}\>\>\>\>\mathrm{and}$ (46)
$\displaystyle\widetilde{A}_{ij}$ $\displaystyle:=$ $\displaystyle
A_{ij}\mathbb{I}_{\mathbb{R}^{n}},\>\>\>\>\mathrm{for}\>\>\mathrm{all}\>\>\>\>i,j\in\\{1,\ldots,m\\}\>\>\>\>\mathrm{such}\>\>\mathrm{that}\>\>\>\>i\neq
j.$
The real numbers $A_{ij}$ are the entries of matrix $A$ and
$\mathbb{I}_{\mathbb{R}^{n}}$ is the identity operator on $\mathbb{R}^{n}$. We
may now state our main definition.
Definition. A matrix is termed partial-circulant if it admits a structural
decomposition as in (45) and (46) for any matrix $A\in\mathbb{R}^{m\times m}$
and a family of $n$-dimensional circulant matrices $B^{(k)}$, for
$k=0,\ldots,m-1$.
For the spectral properties of partial-circulant matrices see Appendix A in
[1].
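The practical consequence of this structure is that the spectrum of the $mn\times mn$ matrix $\widetilde{A}$ splits into the spectra of $n$ matrices of size $m\times m$, one for each common eigenvector of the family $B^{(k)}$. The sketch below, which takes the $B^{(k)}$ circulant as in the definition above (so they are simultaneously diagonalized by the Fourier basis), assembles $\widetilde{A}$ from (45)-(46) and checks this decomposition numerically; it is an illustration only, not the implementation of [14].

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.standard_normal((m, m))

# A family of circulant matrices B^(0), ..., B^(m-1): all are diagonalized by the
# Fourier basis, with eigenvalues given by the FFT of their first columns.
first_cols = [rng.standard_normal(n) for _ in range(m)]
B = [np.array([[col[(i - j) % n] for j in range(n)] for i in range(n)])
     for col in first_cols]

# Assemble the partial-circulant matrix according to (45)-(46).
A_tilde = np.kron(A, np.eye(n))                 # off-diagonal blocks A_ij * I
for i in range(m):
    A_tilde[i*n:(i+1)*n, i*n:(i+1)*n] = B[i] + A[i, i] * np.eye(n)

# Spectrum of A_tilde = union over r of the spectra of the m x m matrices
# M^(r)_{ij} = A_ij + delta_{ij} * lambda^{(i)}_r.
lams = [np.fft.fft(col) for col in first_cols]
small_eigs = []
for r in range(n):
    M_r = A.astype(complex)
    for i in range(m):
        M_r[i, i] += lams[i][r]
    small_eigs.extend(np.linalg.eigvals(M_r))

assert np.allclose(np.sort_complex(np.linalg.eigvals(A_tilde)),
                   np.sort_complex(np.array(small_eigs)))
```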
## Appendix B Non-uniform state-space of the Markov chain $X$
The task here is to construct a non-uniform state-space for the Markov chain
$X$, which was used in Section 6 to approximate the Markov process $S$. Recall
that the state-space is a set of non-negative real numbers
$E=\\{x_{0},x_{1},\ldots,x_{N-1}\\}$ for some even integer $N\in 2\mathbb{N}$.
Recall that the elements of the set $E$, when viewed as a finite sequence, are
strictly increasing. We first fix three real numbers $l,s,u\in\mathbb{R}$,
such that $l<s<u$, that specify the boundaries of the lattice $x_{0}=l$,
$x_{N-1}=u$ and the starting point of the chain $x_{\lceil N/2\rceil}=s=S_{0}$
which coincides with the initial spot value in the model $S$. The function
$\lceil\cdot\rceil:\mathbb{R}\to\mathbb{Z}$ returns the smallest integer which
is larger or equal than the argument. We next choose strictly positive
parameter values $g_{l},g_{u}$ which control the granularity of the spacings
between $l$ and $s$ and between $s$ and $u$ respectively. In other words the
larger $g_{l}$ (resp. $g_{u}$) is, the more uniformly spaced the lattice is in
the interval $[l,s]$ (resp. $[s,u]$). The algorithm that constructs the
lattice points is a slight modification of the algorithm in [19], page 167,
and can be described as follows (a code sketch is given after the steps below).
1. (1)
Compute $c_{1}=\mathrm{arcsinh}\left(\frac{l-s}{g_{l}}\right)$,
$c_{2}=\mathrm{arcsinh}\left(\frac{u-s}{g_{u}}\right)$, $N_{l}=\lceil
N/2\rceil$ and $N_{u}=N-(N_{l}+1)$.
2. (2)
Define the lower part of the grid by the formula
$x_{k}:=s+g_{l}\mathrm{sinh}(c_{1}(1-k/N_{l}))$ for
$k\in\\{0,\ldots,N_{l}\\}$. Note that $x_{0}=l,x_{N_{l}}=s$.
3. (3)
Define the upper part of the grid using the formula
$x_{N_{l}+k}:=s+g_{u}\mathrm{sinh}(c_{2}k/N_{u})$ for
$k\in\\{0,\ldots,N_{u}\\}$. Note that $x_{N-1}=u$.
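A direct transcription of the three steps above into code might look as follows; the boundary values, the number of states, and the granularity parameters are illustrative only, not the ones used in Section 6.

```python
import numpy as np

def sinh_grid(l, s, u, N, g_l, g_u):
    """Non-uniform grid of N points with x_0 = l, x_{N-1} = u and a point at s,
    denser around s and progressively coarser towards the boundaries."""
    c1 = np.arcsinh((l - s) / g_l)
    c2 = np.arcsinh((u - s) / g_u)
    N_l = int(np.ceil(N / 2))
    N_u = N - (N_l + 1)
    k_low = np.arange(N_l + 1)                  # step (2): indices 0, ..., N_l
    lower = s + g_l * np.sinh(c1 * (1.0 - k_low / N_l))
    k_up = np.arange(1, N_u + 1)                # step (3): k = 0 duplicates s, so start at 1
    upper = s + g_u * np.sinh(c2 * k_up / N_u)
    return np.concatenate([lower, upper])

E = sinh_grid(l=1.0, s=100.0, u=400.0, N=70, g_l=5.0, g_u=20.0)   # illustrative values
assert len(E) == 70 and np.all(np.diff(E) > 0)
assert np.isclose(E[0], 1.0) and np.isclose(E[-1], 400.0)
```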
## References
* [1] C. Albanese, H. Lo, and A. Mijatović. Spectral methods for volatility derivatives. To appear in Quantitative Finance.
* [2] C. Albanese and A. Mijatović. A stochastic volatility model for risk-reversals in foreign exchange. To appear in International Journal of Theoretical and Applied Finance.
* [3] P. Carr and R. Lee. Hedging variance options on continuous semimartingales. To appear in Finance and Stochastics.
* [4] P. Carr and K. Lewis. Corridor variance swaps. Risk, 17(2), February 2004.
* [5] P. Carr and D. Madan. Option valuation using the fast Fourier transform. Journal of Computational Finance, pages 61–73, 1998.
* [6] P. Carr and D. Madan. Towards a theory of volatility trading. In R. Jarrow, editor, Volatility: New Estimation Techniques for Pricing Derivatives, Risk publication, pages 417–427. Risk, 1998.
* [7] P. Carr, D. Madan, H. Geman, and M. Yor. Pricing options on realized variance. Finance and Stochastics, 9(4):453–475, 2005.
* [8] N. Dunford and J.T. Schwartz. Linear operators Part II: Spectral theory. John Wiley & Sons, 1963.
* [9] S.N. Ethier and T.G. Kurtz. Markov processes: Characterization and convergence. John Wiley & Sons, 1986.
* [10] J. Gatheral. The volatility surface: a practitioner’s guide. John Wiley & Sons, Inc., 2006.
* [11] P. Glasserman. Monte Carlo Methods in Financial Engineering. Springer, 2004.
* [12] J. Hull. Options, futures and other derivatives. Pearson Education, 6th edition, 2006.
* [13] J. Jacod and A.N. Shiryaev. Limit theorems for stochastic processes, volume 288 of A Series of Comprehensive Studies in Mathematics. Springer-Verlag, 2nd edition, 2003.
* [14] H. Lo and A. Mijatović. An implementation in Matlab of the algorithm given in the paper “Volatility derivatives in market models with jumps”, 2009. see URL http://www.ma.ic.ac.uk/~amijatov/Abstracts/VolDer.html.
* [15] D. Madan, P. Carr, and E.C. Chang. The variance gamma process and option pricing. European Finance Review, 2(1):79–105, 1998.
* [16] A. Mijatović. Spectral properties of trinomial trees. Proc. R. Soc. A, 463:1681–1696, 2007.
* [17] R. S. Phillips. On the generation of semigroups of linear operators. Pacific Journal of Mathematics, 2(3):343–369, 1952.
* [18] P. Protter. Stochastic integration and differential equations. Springer, 2nd edition, 2005.
* [19] D. Tavella and C. Randall. Pricing Financial Instruments: the finite difference method. Wiley, 2000.
(a) The probability distribution function for the spot price $X_{T}$, with the
maturity $T$ equal to 0.5, 1 and 2 years, where $X$ is the Markov chain used
to approximate the CEV process $S$. For a precise description of the process
$X$ see Subsection 6.1. All relevant parameter values are given in the caption
of Table 2.
(b) The function $x\mapsto M_{2}(x)/M_{1}(x)$, where $x\in E$, in the CEV
model. The minimum of this function, which equals $0.000563$, determines the
value of the spacing $\alpha$ by the second inequality in (33). The maximum of
the ratio, which is $0.019$, determines the largest jump-size multiple $n$ by
the first inequality in (33). All relevant parameter values for the CEV model
and the accompanying chain $X$ are given in the caption of Table 2.
(c) The empirical probability distribution of the realized variance
$[\log(S)]_{T}$ of the CEV model $S$, based on the Monte Carlo simulation
described in Subsection 6.2, and the distribution of the random variable
$I_{T}$, obtained from Theorem 2.1, for $T\in\\{0.5,1,2\\}$. For details on
the definition of $I_{T}$ see Sections 2 and 4. Note that the computational
time required to obtain the law of $I_{T}$ is independent of $T$ (see caption
of Table 5).
Figure 1. CEV model
(a) The probability distribution function for the spot price $X_{T}$, with the
maturity $T$ equal to 0.5, 1 and 2 years, where $X$ is the Markov chain used
to approximate the variance gamma process. For a precise description of the
process $X$ see Subsection 6.1. All relevant parameter values are given in the
caption of Table 3.
(b) The empirical probability distribution of the realized variance
$[\log(S)]_{T}$ in the VG model $S$, based on the Monte Carlo simulation
described in Subsection 6.3, and the distribution of the random variable
$I_{T}$ for $T\in\\{0.5,1,2\\}$ matching $k\in\\{2,3\\}$ instantaneous
moments. For details on $I_{T}$ see Sections 2 and 5. Note that the
computational time required to obtain the law of $I_{T}$ is independent of $T$
and that the quality of the approximation is greater for $k=3$ (see also Table
6).
(c) The function $x\mapsto\frac{4M_{1}(x)M_{3}(x)}{M_{2}(x)^{2}}$, for $x\in
E$ such that $20\leq x\leq 250$, in the variance gamma model. This function
appears in condition (42) of Subsection 5.2. The parameters of the chain $X$
are given in the caption of Table 3.
(d) The functions $x\mapsto\underline{\alpha}(x)$ and
$x\mapsto\overline{\alpha}(x)$, for $x\in E$ such that $20\leq x\leq 250$, are
the zeros of the quadratic in condition (40) in the variance gamma model. As
summarised in Table 1, in order to ensure that the intensity $\lambda_{n}(x)$
is positive, we must choose the value of the constant $\alpha$ to lie between
the two curves for all $x$ in the above range (see also Subsection 6.3).
Figure 2. Variance gamma model
(a) The distribution of the realized variance in the subordinated CEV model.
(b) The distribution of the corridor-realized variance in the subordinated CEV
model.
Figure 3. Figure 3a (resp. 3b) contains the empirical probability
distribution of the realized variance $[\log(S)]_{T}$ (resp. corridor-realized
variance $Q^{L,U}_{T}(S)$, where $L=70$ and $U=130$) in the subordinated CEV
model $S$, based on the Monte Carlo simulation described in the caption of
Table 4 (see also Subsection 6.3). The distribution of the random variable
$I_{T}$ for the maturity $T\in\\{0.5,1,2\\}$ matching $k\in\\{2,3\\}$
instantaneous moments is also plotted in both cases. For details on $I_{T}$
see Sections 2 and 5. The computational time required to obtain the law of
$I_{T}$ is independent of $T$ and the quality of the approximation improves
drastically for $k=3$ (see also Tables 7 and 8).
|
arxiv-papers
| 2009-05-20T14:51:18 |
2024-09-04T02:49:02.778717
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "A. Mijatovic, H. Lo",
"submitter": "Aleksandar Mijatovic",
"url": "https://arxiv.org/abs/0905.3326"
}
|
0905.3346
|
# A family of diophantine equations of the form
$x^{4}+2nx^{2}y^{2}+my^{4}=z^{2}$ with no solutions in
$({\mathbb{Z}}^{+})^{3}$
Konstantine Zelator
Department of Mathematics
College of Arts and Sciences
Mail Stop 942
The University of Toledo
Toledo, OH 43606-3390
e-mails: konstantine-zelator@yahoo.com
konstantine.zelator@utoledo.edu
## 1 Introduction
In this work we present a family of diophantine equations of the form
$x^{4}+2nx^{2}y^{2}+my^{4}=z^{2}$ (1)
with no nontrivial solutions.
This is done in Section 3, where the theorem in this paper, Theorem 1, and its
proof are presented. The approach is elementary and uses only congruence
arguments as well as descent. It is a branched proof, with some of the branches
leading to contradictions via congruence arguments. Two of the proof’s
branches lead to contradictions via a descent argument. Also in the proof, we
make use of the well-known parametric formulas that describe all the solutions
in $({\mathbb{Z}}^{+})^{3}$ to the diophantine equation $x^{2}+\ell\cdot
y^{2}=z^{2}$, $\ell$ a positive integer. These formulas are found in Section
2. In Section 4, we present a sampling of numerical examples. That is, a
listing of combinations of integers $n$ and $m$ in (1), which satisfy the
hypothesis of the theorem.
The paper concludes with Section 5, wherein we offer a brief historical
commentary on diophantine equations of the form
$ax^{4}+bx^{2}y^{2}+cy^{4}=dz^{2}$. Investigations of these types of
diophantine equations span a period of nearly 400 years, and we do not go back
any further in time. We mention some of the results found in the literature,
including more recent developments (of the last 70 years) on the subject
involving the usage of local methods as well as the association of such
equations with elliptic curves.
## 2 An auxiliary diophantine equation:
$x^{2}+\ell\cdot y^{2}=z^{2}$
For a given positive integer $\ell$, the solution set (subset of
$({\mathbb{Z}}^{+})^{3}$) of the diophantine equation $x^{2}+\ell
y^{2}=z^{2}$, can be parametrically described by the formulas,
$x=\dfrac{d(\rho_{1}k^{2}-\rho_{2}\lambda^{2})}{2},\ y=dk\lambda,\ \
z=\dfrac{d(\rho_{1}k^{2}+\rho_{2}\lambda^{2})}{2}$
where the parameters $d,k,\lambda$ are positive integers such that
$(k,\lambda)=1$; and the positive integers $\rho_{1},\rho_{2}$ are divisors of
$\ell$ such that $\rho_{1}\rho_{2}=\ell$. Obviously, if we require that
$(x,y)=1$, then all the solutions in $({\mathbb{Z}}^{+})^{3}$ can be
parametrically described as follows:
$\left\\{\begin{array}[]{l}x=\dfrac{d(\rho_{1}k^{2}-\rho_{2}\lambda^{2})}{2},\
y=dk\lambda,\ \ z=\dfrac{d(\rho_{1}k^{2}+\rho_{2}\lambda^{2})}{2},\\\ \\\ {\rm
with}\ d,k,\lambda,\rho_{1},\rho_{2}\in{\mathbb{Z}}^{+}\ {\rm such\ that}\
(k,\lambda)=1,\ \rho_{1}\rho_{2}=\ell\\\ \\\ {\rm and\ with}\ d=1\ {\rm or}\
2.\ {\rm Also},\ \rho_{1}k^{2}-\rho_{2}\lambda^{2}>0.\end{array}\right\\}$ (2)
These parametric formulas are well known in the literature and can be found in
reference [1], (pages 420-421). A derivation of them can also be found in [2].
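As a small sanity check of the formulas in (2) (not a substitute for the derivations in [1] and [2]), one can generate triples from the parametric expressions and verify that they satisfy $x^{2}+\ell y^{2}=z^{2}$; the choice $\ell=15$ and the parameter ranges below are arbitrary.

```python
from math import gcd

def triple(d, k, lam, rho1, rho2):
    """x, y, z from the parametric formulas, with ell = rho1 * rho2."""
    x2 = d * (rho1 * k * k - rho2 * lam * lam)      # twice x
    z2 = d * (rho1 * k * k + rho2 * lam * lam)      # twice z
    return x2 // 2, d * k * lam, z2 // 2

ell = 15                                            # arbitrary positive integer
divisor_pairs = [(r, ell // r) for r in range(1, ell + 1) if ell % r == 0]
for rho1, rho2 in divisor_pairs:
    for d in (1, 2):
        for k in range(1, 8):
            for lam in range(1, 8):
                if gcd(k, lam) != 1:
                    continue
                if rho1 * k * k - rho2 * lam * lam <= 0:
                    continue
                if d == 1 and (rho1 * k * k - rho2 * lam * lam) % 2 != 0:
                    continue                        # the division by 2 must be exact
                x, y, z = triple(d, k, lam, rho1, rho2)
                assert x * x + ell * y * y == z * z
```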
## 3 The theorem and its proof
Theorem 1: Suppose that $n$ is a positive integer, $p$ an odd prime, and such
that either
$\begin{array}[]{lcll}n\equiv 0\ ({\rm mod}\ 4)&{\rm and}&p\equiv 3\ ({\rm
mod}\ 8);&{\rm or\ alternatively},\\\ \\\ n\equiv 2\ ({\rm mod}\ 4)&{\rm
and}&p\equiv 7\ ({\rm mod}\ 8)\end{array}$
In addition to the above, assume that one of the following hypotheses holds:
either
1. (i)
$n^{2}-p>0$ and the positive integer $m=n^{2}-p$ is a prime, or
2. (ii)
$n^{2}-p<0$ and the positive integer $N=-m=-(n^{2}-p)$ is a prime.
Then, the diophantine equation $x^{4}+2nx^{2}y^{2}+my^{4}=z^{2}$ has no
solution in $({\mathbb{Z}}^{+})^{3}$.
Proof: If equation (1) has a solution, then let $(X_{0},Y_{0},Z_{0})$ be a
solution with the product $X_{0}Y_{0}$ being least. Let
$\delta=(X_{0},Y_{0})$, so that $X_{0}=\delta x_{0},\ Y_{0}=\delta y_{0}$, for
positive integers $x_{0},y_{0},\delta$ such that $(x_{0},y_{0})=1$. Then, (1)
implies $\delta^{4}\mid Z^{2}_{0}$ and so $\delta^{2}\mid Z_{0}$; and by
putting $Z_{0}=\delta z_{0}$ for some $z_{0}\in{\mathbb{Z}}^{+}$ we obtain
$x^{4}_{0}+2nx^{2}_{0}y^{2}_{0}+my^{4}_{0}=z^{2}_{0}$ (3)
By (3), the triple $(x_{0},y_{0},z_{0})$ is a solution to equation (1). Thus,
by the minimality of the product $X_{0}Y_{0}$ it follows that $\delta=1,\
X_{0}=x_{0},\ Y_{0}=y_{0},\ Z_{0}=z_{0}$.
Since $x_{0}$ and $y_{0}$ are relatively prime, there are three possibilities:
$x_{0}\equiv y_{0}\equiv 1\ ({\rm mod}\ 2);\ x_{0}\equiv 0\ {\rm and}\
y_{0}\equiv 1\ ({\rm mod}\ 2);\ {\rm or}\ x_{0}\equiv 1\ {\rm and}\
y_{0}\equiv 0\ ({\rm mod}\ 2).$
If $x_{0}$ and $y_{0}$ are both odd, consider equation (3) modulo 4. Since
$x^{2}_{0}\equiv y^{2}_{0}\equiv 1\ ({\rm mod}\ 4)$, in this case, (3) implies
$1+2n+m\equiv z^{2}_{0}\ ({\rm mod}\ 4)$. By the theorem’s hypothesis, $2n\equiv 0\
({\rm mod}\ 4)$ and $m=n^{2}-p$; we obtain $1-p\equiv z^{2}_{0}\ ({\rm mod}\
4)$, which gives $z^{2}_{0}\equiv 2\ ({\rm mod}\ 4)$ in view of $p\equiv 3\
({\rm mod}\ 4)$, an impossibility.
Next consider the second possibility: $x_{0}$ even and
$y_{0}$ odd. By (3), since $m$ is odd, we see that $z_{0}$ must be odd as
well. Consider (3) modulo 8. In view of $y^{2}_{0}\equiv z^{2}_{0}\equiv 1\
({\rm mod}\ 8)$, (3) implies $m\equiv 1\ ({\rm mod}\ 8)$, a contradiction
since by hypothesis:
$m=n^{2}-p\equiv 0-3\equiv 5\ ({\rm mod}\ 8),\ {\rm if}\ n\equiv 0\ ({\rm
mod}\ 4)\ {\rm and}\ p\equiv 3\ ({\rm mod}\ 8),$
while also,
$m=n^{2}-p\equiv 4-7\equiv 5\ ({\rm mod}\ 8),\ {\rm if}\ n\equiv 2\ ({\rm
mod}\ 4)\ {\rm and}\ p\equiv 7\ ({\rm mod}\ 8).$
We conclude that $x_{0}$ must be odd and $y_{0}$ even. Also, it is clear from
(3) that since $(x_{0},y_{0})=1$, $y_{0}$ and $z_{0}$ must be relatively prime
as well; and $z_{0}$ must be odd. Therefore,
$\left\\{\begin{array}[]{l}x^{4}_{0}+2nx^{2}_{0}y^{2}_{0}+my^{4}_{0}=z^{2}_{0}\\\
\\\ x_{0}\equiv z_{0}\equiv 1\ ({\rm mod}\ 2),\ y_{0}\equiv 0\ ({\rm mod}\
2)\\\ \\\ (x_{0},y_{0})=1=(y_{0},z_{0})\end{array}\right\\}$ (4)
Now we use the hypothesis that $m=n^{2}-p$. An algebraic manipulation of the
equation in (4) leads to,
$\begin{array}[]{rcl}(x^{2}_{0}+ny^{2}_{0})^{2}-z^{2}_{0}&=&py^{4}_{0};\\\ \\\
\left[(x^{2}_{0}+ny^{2}_{0})+z_{0}\right]\left[(x^{2}_{0}+ny^{2}_{0})-z_{0}\right]&=&py^{4}_{0}\end{array}$
(5)
According to the conditions in (4) both $(x^{2}_{0}+ny^{2}_{0})$ and $z_{0}$
are odd integers, but they are also coprime. Indeed, if a prime $q\neq p$ were
a common divisor of theirs, then by (5) it would also divide $y_{0}$ and
therefore, $x_{0}$ as well, violating $(x_{0},y_{0})=1$. If $p$ divided both
$(x^{2}_{0}+ny^{2}_{0})$ and $z_{0}$, then $p^{2}$ would divide the left-hand
side of (5), and thus $p$ would divide $y_{0}$. Hence, it would divide
$x_{0}$, contrary once more to $(x_{0},y_{0})=1$. We conclude that
$(x^{2}_{0}+ny^{2}_{0},z_{0})=1$ (6)
Moreover, the sum of any two odd integers is congruent to $0\ ({\rm mod}\ 4)$
and their difference to $2\ ({\rm mod}\ 4)$; or vice-versa. Combining this
observation with (6) leads to,
$\left\\{\begin{array}[]{l}x^{2}_{0}+ny^{2}_{0}+z_{0}=2\delta_{1}\\\ \\\
x^{2}_{0}+ny^{2}_{0}-z_{0}=2\delta_{2}\\\ \\\ {\rm for}\
\delta_{1},\delta_{2}\in{\mathbb{Z}}^{+},\ {\rm with}\
(\delta_{1},\delta_{2})=1\ {\rm and}\ \delta_{1}+\delta_{2}\equiv 1\ ({\rm
mod}\ 2)\end{array}\right\\}.$ (7)
Adding the two equations in (7) yields,
$x^{2}_{0}+ny^{2}_{0}=\delta_{1}+\delta_{2}.$ (8)
According to (7), $\delta_{1}$ must be even and $\delta_{2}$ odd; or vice-
versa. Since the rest of the proof rests on (8), and (8) is symmetric in
$\delta_{1}$ and $\delta_{2}$, there is no need to distinguish between the two
cases: they lead to the same contradictions. Accordingly, assume that
$\delta_{1}$ is even and $\delta_{2}$ is odd.
If we combine (7) with (5), we see that since $p$ is an odd prime, there are
precisely two possibilities expressed in (9) below:
$\begin{array}[]{lllll}{\rm Either}&2\delta_{1}=8py^{4}_{1}&{\rm and}&2\delta_{2}=2y^{4}_{2}&\quad{\rm(9a)}\\\ {\rm or}&2\delta_{1}=8y^{4}_{1}&{\rm and}&2\delta_{2}=2py^{4}_{2}&\quad{\rm(9b)}\end{array}$
for positive integers $y_{1},y_{2},$ such that $(y_{1},y_{2})=1$
and $y_{2}\equiv 1\ ({\rm mod}\ 2)$.
Note that in either case, we have from (5), $2y_{1}y_{2}=y_{0}$. (9)
Case 1: Assume possibility (9b) in (9) to hold. Then combining (9b) with
(8) gives
$x^{2}_{0}+ny^{2}_{0}=4y^{4}_{1}+py^{4}_{2},$
which is impossible modulo 4, since by (4) we have $x^{2}_{0}+ny^{2}_{0}\equiv
1\ ({\rm mod}\ 4)$, while $4y^{4}_{1}+py^{4}_{2}\equiv p\equiv 3\ ({\rm mod}\
4)$, in view of the hypothesis of the theorem.
Case 2: Assume possibility (9a) to be the case in (9).
Subcase 2(i): Assume hypothesis (i) in the theorem, which means that the
integer $n^{2}-p=m$ is positive and a prime. By combining (9a) with (8) and
$2y_{1}y_{2}=y_{0}$ in (9) we obtain
$x^{2}_{0}+(n^{2}-p)\cdot(2y^{2}_{1})^{2}=(y^{2}_{2}-2ny^{2}_{1})^{2}$ (10)
According to (10), the triple
$(x_{0},2y^{2}_{1},\left|y^{2}_{2}-2ny^{2}_{1}\right|)$ is a positive integer
solution to the diophantine equation $x^{2}+\ell y^{2}=z^{2}$, with
$\ell=n^{2}-p$. Also note that $(x_{0},2y^{2}_{1})=1$, by virtue of the fact
that $(x_{0},y_{0})=1$ in (4) and $y_{0}=2y_{1}y_{2}$ in (9). Therefore, by
(2), we must have
$\left\\{\begin{array}[]{l}2y^{2}_{1}=dk\lambda,\
\left|y^{2}_{2}-2ny^{2}_{1}\right|=\dfrac{d(\rho_{1}k^{2}+\rho_{2}\lambda^{2})}{2},\
{\rm for\ positive}\\\ {\rm integers}\ d,k,\lambda,\rho_{1},\rho_{2};\ {\rm
such\ that}\ (k,\lambda)=1,\ \rho_{1}\rho_{2}=n^{2}-p\\\ {\rm and\ with}\ d=1\
{\rm or}\ 2\end{array}\right\\}$
(10a)
The possibility $d=1$ is easily ruled out by the fact that $\rho_{1}$ and
$\rho_{2}$ are both odd (since $m=n^{2}-p$ is odd); and the fact that
$(k,\lambda)=1$. Indeed, the first equation in (10a) implies, if $d=1$, that $k$
and $\lambda$ must have different parities. But, then the integer
$\rho_{1}k^{2}+\rho_{2}\lambda^{2}$ would be odd, instead of even as the
second equation in (10a) requires. Thus, $d=2$ which yields, by (10a)
$\begin{array}[]{l}y^{2}_{1}=k\lambda,\
\left|y^{2}_{2}-2ny^{2}_{1}\right|=\rho_{1}k^{2}+\rho_{2}\lambda^{2};\ {\rm
or\ equivalently}\\\ \\\ \left\\{y^{2}_{1}=k\lambda,\
y^{2}_{2}-2ny^{2}_{1}=\pm(\rho_{1}k^{2}+\rho_{2}\lambda^{2})\right\\}\end{array}$
(10b)
The first equation in (10b) implies, since $(k,\lambda)=1$, that $k=k^{2}_{1}\
{\rm and}\ \lambda=\lambda^{2}_{1};\ {\rm for\ some}\
\lambda_{1},k_{1}\in{\mathbb{Z}}^{+},\ {\rm with}\ (k_{1},\lambda_{1})=1.$
Accordingly (10b) gives,
$y^{2}_{2}-2nk^{2}_{1}\lambda^{2}_{1}=\pm(\rho_{1}k^{4}_{1}+\rho_{2}\lambda^{4}_{1})$
(10c)
since $y_{1}=k_{1}\lambda_{1}$.
If the plus sign holds in (10c), we obtain
$y^{2}_{2}=\rho_{1}k^{4}_{1}+2nk^{2}_{1}\lambda^{2}_{1}+\rho_{2}\lambda^{4}_{1}$
(10d)
By (10a) we know that $\rho_{1}\rho_{2}=m=n^{2}-p$. But $m$ is a prime and so
either $\rho_{1}=m$ and $\rho_{2}=1$ or vice-versa. In either case, (10d)
shows that the triple $(k_{1},\lambda_{1},y_{2})$ is a positive integer
solution to the diophantine equation (1). Compare this solution with the
solution $(x_{0},y_{0},z_{0})$ (see (3)). We have
$x_{0}y_{0}\geq y_{0}=2y_{1}y_{2}>y_{1}=k_{1}\lambda_{1}.$
In short, by (9) $x_{0}y_{0}>k_{1}\lambda_{1}$, contradicting the fact that
$x_{0}y_{0}$ is least.
If the minus sign holds in (10c),
$y^{2}_{2}=-\rho_{1}k^{4}_{1}+2nk^{2}_{1}\lambda^{2}_{1}-\rho_{2}\lambda^{4}_{1}$
(10e)
Again, we use the fact that either $\rho_{1}=n^{2}-p$ and $\rho_{2}=1$ or
vice-versa.
In either case, $\rho_{1}+\rho_{2}=n^{2}-p+1$. Consider (10e) modulo 4. If
both $k_{1}$ and $\lambda_{1}$ are odd, then
$k^{2}_{1}\equiv\lambda^{2}_{1}\equiv 1\ ({\rm mod}\ 4)$ and so (10e) implies,
$\begin{array}[]{l}y^{2}_{2}\equiv-\rho_{1}+2n-\rho_{2}\equiv-(\rho_{1}+\rho_{2})+2n\equiv-(n^{2}-p+1)+2n\
({\rm mod}\ 4);\\\ \\\ y^{2}_{2}\equiv-n^{2}+p-1+2n\equiv 2\ ({\rm mod}\
4),\end{array}$
since by hypothesis $p\equiv 3\ ({\rm mod}\ 4)$ and $n$ is even. Thus, a
contradiction.
If $k_{1}+\lambda_{1}\equiv 1\ ({\rm mod}\ 2)$, again consider (10e) modulo 4.
We have $\rho_{1}=n^{2}-p$ and $\rho_{2}=1$, or vice-versa, and $k_{1}$ is odd
and $\lambda_{1}$ even, or vice-versa. Because of the symmetry of (10e), the
four combinations reduce to two congruence possibilities:
$y^{2}_{2}\equiv-1$ or $y^{2}_{2}\equiv-(n^{2}-p)\ ({\rm mod}\ 4)$; but
$n^{2}-p\equiv 1\ ({\rm mod}\ 4)$ by hypothesis. Therefore, in both cases we
arrive at $y^{2}_{2}\equiv 3\ ({\rm mod}\ 4)$, which is impossible. This
concludes the proof in subcase 2(i).
Subcase 2(ii): Assume hypothesis (ii) of the theorem. Then $n^{2}-p<0$ and
$N=p-n^{2}$ is a prime. Combining (8) with (9a) and $2y_{1}y_{2}=y_{0}$ in (9)
leads to
$x^{2}_{0}=(y^{2}_{2}-2ny^{2}_{1})^{2}+(p-n^{2})(2y^{2}_{1})^{2}$ (11)
By (9) we know that $(y_{1},y_{2})=1$ and $y_{2}$ is odd; which implies that
$(y^{2}_{2}-2ny^{2}_{1},2y^{2}_{1})=1$. By (11), the triple
$\left(\left|y^{2}_{2}-2ny^{2}_{1}\right|,2y^{2}_{1},x_{0}\right)$ is a
positive integer solution to the diophantine equation $x^{2}+\ell
y^{2}=z^{2}$, with $\ell=p-n^{2}$; and with the integers
$\left|y^{2}_{2}-2ny^{2}_{1}\right|$ and $2y^{2}_{1}$ being relatively prime.
Accordingly, by (2) we must have
$\left|y^{2}_{2}-2ny^{2}_{1}\right|=\dfrac{d(\rho_{1}k^{2}-\rho_{2}\lambda^{2})}{2},\
2y^{2}_{1}=dk\lambda;$ equivalently,
$\left\\{\begin{array}[]{l}y^{2}_{2}-2ny^{2}_{1}=\pm\dfrac{d(\rho_{1}k^{2}-\rho_{2}\lambda^{2})}{2},\
2y^{2}_{1}=dk\lambda,\\\ {\rm for\ positive\ integers}\
d,k,\lambda,\rho_{1},\rho_{2}\ {\rm such\ that}\\\ (k,\lambda)=1,\
\rho_{1}\rho_{2}=p-n^{2},\ {\rm and\ with}\ d=1\ {\rm or}\
2\end{array}\right\\}$ (12)
Since we consider (below) all the combinations $\rho_{1},\rho_{2}$ such that
$\rho_{1}\rho_{2}=p-n^{2}$, it follows that the plus or minus possibilities in
the first equation of (12) are really the same. Thus, we may write
$y^{2}_{2}-2ny^{2}_{1}=\dfrac{d(\rho_{1}k^{2}-\rho_{2}\lambda^{2})}{2},\
2y^{2}_{1}=dk\lambda$ (12a)
As we saw in the proof of subcase 2(i), the possibility $d=1$ is easily ruled
out. Indeed, if $d=1$, the first equation in (12a) implies that the integer
$(\rho_{1}k^{2}-\rho_{2}\lambda^{2})$ must be even. On the other hand, the
second equation in (12a) implies, since $(k,\lambda)=1$ that $k$ must be odd
and $\lambda$ even; or vice-versa. But then, by virtue of the fact that
$\rho_{1},\rho_{2}$ are both odd, it follows that
$\rho_{1}k^{2}-\rho_{2}\lambda^{2}\equiv 1\ ({\rm mod}\ 2)$, a contradiction.
Thus, $d=2$ in (12a). We have,
$y^{2}_{2}-2ny^{2}_{1}=\rho_{1}k^{2}-\rho_{2}\lambda^{2},\ y^{2}_{1}=k\lambda$
(12b)
Obviously, the second equation in (12b) implies, since $(k,\lambda)=1$, that
$k=k^{2}_{1}$ and $\lambda^{2}_{1}=\lambda$ for some
$k_{1},\lambda_{1}\in{\mathbb{Z}}^{+}$, with $(k_{1},\lambda_{1})=1$. Using
$y_{1}=k_{1}\lambda_{1}$ as well, we see that (12b) implies
$y^{2}_{2}=\rho_{1}k^{4}_{1}+2nk^{2}_{1}\lambda^{2}_{1}-\rho_{2}\lambda^{4}_{1}$
(12c)
Since $\rho_{1}\rho_{2}=p-n^{2}$ is prime, there are precisely two
possibilities. Either $\rho_{1}=1,\ \rho_{2}=p-n^{2}$ or, alternatively,
$\rho_{1}=p-n^{2}$ and $\rho_{2}=1$. In the first case, $\rho_{1}=1$ and
$-\rho_{2}=n^{2}-p=m$; so that by (12c),
$y^{2}_{2}=k^{4}_{1}+2nk^{2}_{1}\lambda^{2}_{1}+m\lambda^{4}_{1}$, which shows
that the triple $(k_{1},\lambda_{1},y_{2})$ is a positive integer solution to
the initial equation (1). Compare this solution with the solution
$(x_{0},y_{0},z_{0})$. We have, $x_{0}y_{0}\geq
y_{0}=2y_{1}y_{2}>y_{1}=k_{1}\lambda_{1}$, violating the minimality of the
product $x_{0}y_{0}$. Next, assume the second possibility, namely
$\rho_{1}=p-n^{2}$ and $\rho_{2}=1$. Then equation (12c) implies,
$y^{2}_{2}=(p-n^{2})k^{4}_{1}+2nk^{2}_{1}\lambda^{2}_{1}-\lambda^{4}_{1}$
(12d)
Consider (12d) modulo 4:
If $k_{1}\equiv\lambda_{1}\equiv 1\ ({\rm mod}\ 2)$, then (12d) implies
$y^{2}_{2}\equiv p-n^{2}+2n-1\ ({\rm mod}\ 4)$; since $n$ is even and
$p\equiv 3\ ({\rm mod}\ 4)$, this gives $y^{2}_{2}\equiv 2\ ({\rm mod}\ 4)$, an
impossibility.
If $k_{1}\equiv 0$ and $\lambda_{1}\equiv 1\ ({\rm mod}\ 2)$, (12d) implies
$y^{2}_{2}\equiv-1\equiv 3\ ({\rm mod}\ 4)$, again impossible.
Finally, if $k_{1}$ is odd and $\lambda_{1}$ even, (12d) implies
$y^{2}_{2}\equiv p-n^{2}\equiv 3\ ({\rm mod}\ 4)$, again an impossibility.
This concludes the proof of subcase 2(ii) and, with it, the proof of the
theorem.
## 4 Numerical Examples
1. (i)
Below, we provide a list of all combinations of positive integers $n,p,m$;
such that both $p$ and $m$ are primes, $m=n^{2}-p$, and with either $n\equiv
0\ ({\rm mod}\ 4)$ and $p\equiv 3\ ({\rm mod}\ 8)$, or alternatively, $n\equiv
2\ ({\rm mod}\ 4)$ and $p\equiv 7\ ({\rm mod}\ 8)$. Under the constraint
$n\leq 16$, there are 24 such combinations.
$\begin{array}[]{|r|c|c|c|}\hline\cr&n&p&m\\\ \hline\cr 1)&4&3&13\\\ \hline\cr
2)&4&11&5\\\ \hline\cr 3)&6&7&29\\\ \hline\cr 4)&6&23&13\\\ \hline\cr
5)&6&31&5\\\ \hline\cr 6)&8&3&61\\\ \hline\cr 7)&8&11&53\\\ \hline\cr
8)&8&59&5\\\ \hline\cr 9)&10&47&53\\\ \hline\cr 10)&10&71&29\\\ \hline\cr
11)&12&43&101\\\ \hline\cr 12)&12&83&61\\\ \hline\cr\end{array}\hskip
36.135pt\begin{array}[]{|r|c|c|c|}\hline\cr&n&p&m\\\ \hline\cr
13)&12&107&37\\\ \hline\cr 14)&12&131&13\\\ \hline\cr 15)&12&139&5\\\
\hline\cr 16)&14&23&173\\\ \hline\cr 17)&14&47&149\\\ \hline\cr
18)&14&167&29\\\ \hline\cr 19)&14&191&5\\\ \hline\cr 20)&16&59&197\\\
\hline\cr 21)&16&83&173\\\ \hline\cr 22)&16&107&149\\\ \hline\cr
23)&16&227&29\\\ \hline\cr 24)&16&251&5\\\ \hline\cr\end{array}$
2. (ii)
Below, we provide a listing of all combinations of integers $n,p,m,N$; such
that $n,p,N>0,\ m<0,\ p$ and $N$ are both primes, $N=p-n^{2},\ m=-N,$ and with
either $n\equiv 0\ ({\rm mod}\ 4)$ and $p\equiv 3\ ({\rm mod}\ 8)$, or
alternatively, with $n\equiv 2\ ({\rm mod}\ 4)$ and $p\equiv 7\ ({\rm mod}\
8)$. Under the constraint $p\leq 251$, there are 29 such combinations.
$\begin{array}[]{|r|c|c|c|c|}\hline\cr&p&n&N&m\\\ \hline\cr 1)&7&2&3&-3\\\
\hline\cr 2)&23&2&19&-19\\\ \hline\cr 3)&47&2&43&-43\\\ \hline\cr
4)&47&6&11&-11\\\ \hline\cr 5)&59&4&43&-43\\\ \hline\cr 6)&67&8&3&-3\\\
\hline\cr 7)&71&2&67&-67\\\ \hline\cr 8)&79&2&73&-73\\\ \hline\cr
9)&79&6&43&-43\\\ \hline\cr 10)&83&4&67&-67\\\ \hline\cr 11)&83&8&19&-19\\\
\hline\cr 12)&103&6&67&-67\\\ \hline\cr 13)&103&10&3&-3\\\ \hline\cr
14)&107&8&43&-43\\\ \hline\cr 15)&131&8&67&-67\\\ \hline\cr\end{array}\hskip
36.135pt\begin{array}[]{|r|c|c|c|c|}\hline\cr&p&n&N&m\\\ \hline\cr
16)&163&12&19&-19\\\ \hline\cr 17)&167&2&163&-163\\\ \hline\cr
18)&167&6&131&-131\\\ \hline\cr 19)&167&10&67&-67\\\ \hline\cr
20)&179&4&163&-163\\\ \hline\cr 21)&199&6&163&-163\\\ \hline\cr
22)&199&14&3&-3\\\ \hline\cr 23)&211&12&67&-67\\\ \hline\cr
24)&227&4&211&-211\\\ \hline\cr 25)&227&8&163&-163\\\ \hline\cr
26)&227&12&83&-83\\\ \hline\cr 27)&239&10&139&-139\\\ \hline\cr
28)&239&14&43&-43\\\ \hline\cr 29)&251&12&107&-107\\\ \hline\cr&&&&\\\
\hline\cr\end{array}$
## 5 Historical Commentary
Mathematical research on diophantine equations of the form
$ax^{4}+bx^{2}y^{2}+cy^{4}=dz^{2}$ (13)
dates back to the early 17th century. The most comprehensive source of results
on such equations in the 300 year-period from the early 17th century to about
1920, is L. E. Dickson’s monumental book History of the Theory of Numbers,
Vol. II, (see [1]).
All or almost all results (at least the referenced ones) of that period can be
found in that book. Various researchers during that time period employed
descent methods to tackle such equations. Perhaps all the significant results
achieved in that 300-year period can be attributed to about 40-50
investigators. We list the names of thirty-two of them:
Fermat, Frenicle, St. Martin, Genocci, Lagrange, Legendre, Lebesgue, Euler,
Adrain, Gerardin, Aubry, Fauquenbergue, Sucksdorff, Gleizes, Mathieu, Moret-
Blank, Rignaux, Kausler, Fuss, Auric, Realis, Mantel, Desboves, Kramer,
Escott, Thue, Cunningham, Pepin, Lucas, Werebrusov, Carmichael, Pocklington.
A detailed account of the results obtained by these mathematicians is given in
[1], pages 615-639.
On the other hand, the last 75 years or so (from the early 1930’s to the
present) are marked by the introduction and development of what is known as
local methods as well as the connection/association of equations (13) with
elliptic curves. In particular, the beginning of the 75 year period (early
thirties) is characterized by a landmark, the Hasse Principle:
If $F\in{\mathbb{Z}}[x_{1},\ldots,x_{n}]$ is a homogeneous polynomial of degree
$2$, then $F(x_{1},\ldots,x_{n})$ has a nontrivial solution in
${\mathbb{Z}}^{n}$ if, and only if,
1. (a)
it has a nontrivial solution in ${\mathbb{R}}^{n}$ and
2. (b)
it has a primitive solution modulo $p^{k}$ , for all primes $p$ and exponents
$k\geq 1$.
Here, a solution $(a_{1},\ldots,a_{n})$ is understood to be nontrivial if at
least one of the $a_{i}$’s is not zero. It is primitive if one of the
$a_{i}$’s is not divisible by $p$.
In 1951, E. Selmer (see [3]) presented an example of a homogeneous polynomial
in three variables and degree $3$ which fails the Hasse Principle. This is
the equation $3x^{3}+4y^{3}+5z^{3}=0$, whose only solution in
${\mathbb{Z}}^{3}$ is $(0,0,0)$ (so it has no nontrivial solutions). But it
obviously has nontrivial solutions in ${\mathbb{R}}^{3}$; and it has primitive
solutions modulo each prime power.
In their paper W. Aitken and F. Lemmermeyer, (see [4]), show that equation
(13) has a nontrivial solution in ${\mathbb{Z}}^{3}$ if, and only if, the
diophantine system (in four variables $u,v,w,z$)
$\left\\{\begin{array}[]{ll}{\rm with}\ b^{2}-4ac\neq
0,&au^{2}+bv^{2}+cw^{2}=dz^{2}\\\ \\\ {\rm and}\ d\ {\rm
squarefree},&uw=v^{2}\end{array}\right\\}$ (14)
has a nontrivial solution in ${\mathbb{Z}}^{4}$. This also holds when
${\mathbb{Z}}$ is replaced by any ring containing ${\mathbb{Z}}$. In
particular, it holds for ${\mathbb{R}}$.
Furthermore, (14) has a primitive solution modulo $p^{k}$ if, and only if,
(13) has a primitive solution modulo $p^{k}$; and $k\geq 2$. (If $p$ is not a
divisor of $d$, this can be extended to $k=1$.)
In 1940 and 1942 respectively, C.-E. Lind and H. Reichardt (see [5] and [6])
found another counterexample to the Hasse Principle: the diophantine equation
(13) with $a=1,\ b=0,\ c=-17$, and $d=2$; that is the equation
$x^{4}-17y^{4}=2z^{2}$.
Aitken and Lemmermeyer generalized the Lind and Reichardt example by taking
$a=1,\ b=0,\ c=-q$, such that $q$ is a prime with $q\equiv 1\ ({\rm mod}\
16),\ d$ is squarefree, $d$ is a nonzero square but not a fourth power modulo
$q$, and $q$ is a fourth power modulo $p$ for every odd prime $p$ dividing
$d$. Thus, they obtained a family of diophantine equations (13) (or
equivalently, systems (14)) which fail the Hasse Principle. Their proofs of
the nontrivial insolvability (of each member of that family) in
${\mathbb{Z}}^{3}$ only involves quadratic reciprocity arguments. The harder
part is to give an elementary proof that the above equations have primitive
solutions modulo all prime powers.
Variants of the Hasse Principle, and the manner in which these principles
fail, can be found in a paper by B. Mazur (see [7]). Also, there is the
seminal work by J. Silverman (see [8]), which provides a comprehensive study
for the links between equations (13) and elliptic curves.
Alongside these developments of the last 75 years, there have been some
results obtained by elementary means only. For example, A. Wakulitz (see [9])
has offered an elementary proof that the diophantine equation
$x^{4}+9x^{2}y^{2}+27y^{4}=z^{2}$ has no solution in $({\mathbb{Z}}^{+})^{3}$.
A corollary of this (in the paper in [9]), is that the equation
$x^{3}+y^{3}=2z^{3}$ has no solution in ${\mathbb{Z}}^{3}$ with $x\neq y$ and
$z\neq 0$.
## References
* [1] Dickson, L. E., History of the Theory of Numbers, Vol. II, Chelsea Publishing, Providence, Rhode Island, (1992), 803 pp. ISBN: 0-8218-1935-6
* [2] Zelator, K., The diophantine equation $x^{2}+ky^{2}=z^{2}$ and integral triangles with a cosine value of $\frac{1}{n}$, Mathematics and Computer Education, Fall 2006.
* [3] Selmer, E., The diophantine equation $ax^{3}+by^{3}+cz^{3}=0$, Acta Math. 85 (1951), 203-362.
* [4] Aitken, W., Lemmermeyer, F., Counterexamples to Hasse Principle: An elementary introduction,
http://public.csusm.edu/aitken_html/m372/diophantine.pdf.
* [5] Lind, C.-E, Untersuchungen über die rationalen Punkte der ebenen kubischen kurven vom Geschlect Eins, Diss. Univ. Uppsala 1940.
* [6] Reichardt, H., Einige im Kleinen überall lösbare, im Grossen unlösbare diophantische Gleichungen, J. Reine Angew. Math. 184 (1942), 12-18.
* [7] Mazur, B., On the passage from local to global in number theory, Bull. Amer. Math. Soc. (N.S.) 29 (1993), no. 1, 14-50.
* [8] Silverman, J., The arithmetic of elliptic curves, Springer-Verlag 1986.
* [9] Wakulitz, A., On the equation $x^{3}+y^{3}=2z^{3}$, Colloq. Math., 5 (1957), 11-15.
|
arxiv-papers
| 2009-05-20T16:28:27 |
2024-09-04T02:49:02.791050
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Konstantine Zelator",
"submitter": "Konstantine Zelator",
"url": "https://arxiv.org/abs/0905.3346"
}
|
0905.3378
|
#
Interpretations of the Web of Data
Marko A. Rodriguez
T-5, Center for Nonlinear Studies
Los Alamos National Laboratory
Los Alamos, New Mexico 87545 E-mail address: marko@lanl.gov
###### Abstract
The emerging Web of Data utilizes the web infrastructure to represent and
interrelate data. The foundational standards of the Web of Data include the
Uniform Resource Identifier (URI) and the Resource Description Framework
(RDF). URIs are used to identify resources and RDF is used to relate
resources. While RDF has been posited as a logic language designed
specifically for knowledge representation and reasoning, it is more generally
useful if it can conveniently support other models of computing. In order to
realize the Web of Data as a general-purpose medium for storing and processing
the world’s data, it is necessary to separate RDF from its logic language
legacy and frame it simply as a data model. Moreover, there is significant
advantage in seeing the Semantic Web as a particular interpretation of the Web
of Data that is focused specifically on knowledge representation and
reasoning. By doing so, other interpretations of the Web of Data are exposed
that realize RDF in different capacities and in support of different computing
models.
## 1 Introduction
The common conception of the World Wide Web is that of a large-scale,
distributed file repository [6]. The typical files found on the World Wide Web
are Hyper-Text Markup Language (HTML) documents and other media such as image,
video, and audio files. The “World Wide” aspect of the World Wide Web pertains
to the fact that all of these files have an accessible location that is
denoted by a Uniform Resource Locator (URL) [55]; a URL denotes what physical
machine is hosting the file (i.e. what domain name/IP address), where in that
physical machine the file is located (i.e. what directory), and finally, which
protocol to use to retrieve that file from that machine (e.g. http, ftp,
etc.). The “Web” aspect of the World Wide Web pertains to the fact that a file
(typically an HTML document) can make reference (typically an href citation)
to another file. In this way, a file on machine $A$ can link to a file on
machine $B$ and in doing so, a network/graph/web of files emerges. The
ingenuity of the World Wide Web is that it combines remote file access
protocols and hypermedia and as such, has fostered a revolution in the way in
which information is disseminated and retrieved—in an open, distributed
manner. From this relatively simple foundation, a rich variety of uses
emerges: from the homepage, to the blog, to the online store.
The World Wide Web is primarily for human consumption. While HTML documents
are structured according to a machine understandable syntax, the content of
the documents are written in human readable/writable language (i.e. natural
human language). It is only through computationally expensive and relatively
inaccurate text analysis algorithms that a machine can determine the meaning
of such documents. For this reason, computationally inexpensive keyword
extraction and keyword-based search engines are the most prevalent means by
which the World Wide Web is machine processed. However, the human-readable
World Wide Web is evolving to support a machine-readable Web of Data. The
emerging Web of Data utilizes the same referencing paradigm as the World Wide
Web, but instead of being focused primarily on URLs and files, it is focused
on Uniform Resource Identifiers (URI) [7] and data.111The URI is the parent
class of both the URL and the Uniform Resource Name (URN) [55]. The “Data”
aspect of the Web of Data pertains to the fact that a URI can denote anything
that can be assigned an identifier: a physical entity, a virtual entity, an
abstract concept, etc. The “Web” aspect of the Web of Data pertains to the
fact that an identified resource can be related to other resources by means of
the Resource Description Framework (RDF). Among other things, RDF is an
abstract data model that specifies the syntactic rules by which resources are
connected. If $U$ is the set of all URIs, $B$ the set of all blank or
anonymous nodes, and $L$ the set of all literals, then the Web of Data is
defined as
$\mathcal{W}\subseteq((U\cup B)\times U\times(U\cup B\cup L)).$
A single statement (or triple) in $\mathcal{W}$ is denoted $(s,p,o)$, where
$s$ is called the subject, $p$ the predicate, and $o$ the object. On the Web
of Data
> “[any man or machine can] start with one data source and then move through a
> potentially endless Web of data sources connected by RDF links. Just as the
> traditional document Web can be crawled by following hypertext links, the
> Web of Data can be crawled by following RDF links. Working on the crawled
> data, search engines can provide sophisticated query capabilities, similar
> to those provided by conventional relational databases. Because the query
> results themselves are structured data, not just links to HTML pages, they
> can be immediately processed, thus enabling a new class of applications
> based on the Web of Data.” [9]
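The idea quoted above, crawling and querying by following RDF links, can be illustrated in a few lines of code. The sketch below is a toy in-memory triple store with wildcard pattern matching; the URIs are invented placeholders rather than terms from any real vocabulary.

```python
# A toy "web of data": a set of (subject, predicate, object) triples and a naive
# pattern-match query in which None acts as a wildcard.  All URIs are made up.
triples = {
    ("http://example.org/alice", "http://example.org/knows", "http://example.org/bob"),
    ("http://example.org/bob",   "http://example.org/knows", "http://example.org/carol"),
    ("http://example.org/bob",   "http://example.org/name",  '"Bob"'),
}

def match(s=None, p=None, o=None):
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

# "Crawling" by following RDF links: start at alice and follow knows-edges twice.
for _, _, friend in match("http://example.org/alice", "http://example.org/knows"):
    for _, _, fof in match(friend, "http://example.org/knows"):
        print(fof)   # http://example.org/carol
```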
As a data model, RDF can conveniently represent commonly used data structures.
From the knowledge representation and reasoning perspective, RDF provides the
means to make assertions about the world and infer new statements given
existing statements. From the network/graph analysis perspective, RDF supports
the representation of various network data structures. From the programming
and systems engineering perspective, RDF can be used to encode objects,
instructions, stacks, etc. The Web of Data, with its general-purpose data
model and supporting technological infrastructure, provides various computing
models a shared, global, distributed space. Unfortunately, this general-
purpose, multi-model vision was not the original intention of the designers of
RDF. RDF was created for the domain of knowledge representation and reasoning.
Moreover, it caters to a particular monotonic subset of this domain [29]. RDF
is not generally understood as supporting different computing models. However,
if the Web of Data is to be used as just that, a “web of data”, then it is up
to the applications leveraging this data to interpret what that data means and
what it can be used for.
The URI address space is analogous, in many ways, to
the address space that exists in the local memory of the physical machines
that support the representation of the Web of Data. With physical memory,
information is contained at an address. For a 64-bit machine, that information
is a 64-bit word. That 64-bit word can be interpreted as a literal primitive
(e.g. a byte, an integer, a floating point value) or yet another 64-bit
address (i.e. a pointer). This is how address locations denote data and link
to each other, respectively. On the Web of Data, a URI is simply an address as
it does not contain content.222This is not completely true. Given that a URL
is a subtype of a URI, and a URL can “contain” a file, it is possible for a
URI to “contain” information. It is through RDF that a URI address has
content. For instance, with RDF, a URI can reference a literal (i.e. xsd:byte,
xsd:integer, xsd:float) or another URI. Thus, RDF, as a data model, has many
similarities to typical local memory. However, the benefit of URIs and RDF is
that they create an inherently distributed and theoretically infinite space.
Thus, the Web of Data can be interpreted as a large-scale, distributed memory
structure. What is encoded and processed in that memory structure should not
be dictated at the level of RDF, but instead dictated by the domains that
leverage this medium for various application scenarios. The Web of Data should
be realized as an application agnostic memory structure that supports a rich
variety of uses: from Semantic Web reasoning, to Giant Global Graph analysis,
to Web of Process execution.
The intention of this article is to create a conceptual splinter that
separates RDF from its legacy use as a logic language and demonstrate that it
is more generally applicable when realized as only a data model. In this way,
RDF as the foundational standard for the Web of Data makes the Web of Data
useful to anyone wishing to represent information and compute in a global,
distributed space. Three specific interpretations of the Web of Data are
presented in order to elucidate the many ways in which the Web of Data is
currently being used. Moreover, within these different presentations, various
standards and technologies are discussed. These presentations are provided as
summaries, not full descriptions. In short, this article is more of a survey
of a very large and multi-domained landscape. The three interpretations that
will be discussed are enumerated below.
1. 1.
The Web of Data as a knowledge base (see §2).
* •
The Semantic Web is an interpretation of the Web of Data.
* •
RDF is the means by which a model of a world is created.
* •
There are many types of logic: logics of truth and logics of thought.
* •
Scalable solutions exist for reasoning on the Web of Data.
2. 2.
The Web of Data as a multi-relational network (see §3).
* •
The Giant Global Graph is an interpretation of the Web of Data.
* •
RDF is the means by which vertices are connected together by labeled edges.
* •
Single-relational network analysis algorithms can be applied to multi-
relational networks.
* •
Scalable solutions exist for network analysis on the Web of Data.
3. 3.
The Web of Data as an object repository (see §4).
* •
The Web of Process is an interpretation of the Web of Data.
* •
RDF is the means by which objects are represented and related to other
objects.
* •
An object’s representation can include both its fields and its methods.
* •
Scalable solutions exist for object-oriented computing on the Web of Data.
The landscape presented in this article is by no means complete and only
provides a glimpse into these different areas. Moreover, within each of these
three presented interpretations, applications and use-cases are not provided.
What is provided is a presentation of common computing models that have been
mapped to the Web of Data in order to take unique advantage of the Web as a
computing infrastructure.
## 2 A Distributed Knowledge Base
The Web of Data can be interpreted as a distributed knowledge base—a Semantic
Web. A knowledge base is composed of a set of statements about some “world”.
These statements are written in some language. Inference rules designed for
that language can be used to derive new statements from existing statements.
In other words, inference rules can be used to make explicit what is implicit.
This process is called reasoning. The Semantic Web initiative is primarily
concerned with this interpretation of the Web of Data.
> “For the Semantic Web to function, computers must have access to structured
> collections of information and sets of inference rules that they can use to
> conduct automated reasoning.” [8]
Currently, the Semantic Web interpretation of the Web of Data forces strict
semantics on RDF. That is, RDF is not simply a data model, but a logic
language. As a data model, it specifies how a statement $\tau$ is constructed
(i.e. $\tau\in((U\cup B)\times U\times(U\cup B\cup L))$). As a logic language,
it specifies language constructs and semantics—a way of interpreting
what statements mean. Because RDF was developed in concert with requirements
provided by the knowledge representation and reasoning domain, RDF and the
Semantic Web have been very strongly aligned for many years. This is perhaps
the largest conceptual stronghold that exists as various W3C documents make
this point explicit.
> “RDF is an assertional logic, in which each triple expresses a simple
> proposition. This imposes a fairly strict monotonic discipline on the
> language, so that it cannot express closed-world assumptions, local default
> preferences, and several other commonly used non-monotonic constructs.” [29]
RDF is monotonic in that any asserted statement $\tau\in\mathcal{W}$ cannot
be made “false” by future assertions. In other words, the truth-value of a
statement, once stated, does not change. RDF makes use of the open-world
assumption in that if a statement is not asserted, this does not entail that
it is “false”. The open-world assumption is contrasted to the closed-world
assumption found in many systems, where the lack of data is usually
interpreted as that data being “false”.
From this semantic foundation, extended semantics for RDF have been defined.
The two most prevalent language extensions are the RDF Schema (RDFS) [14] and
the Web Ontology Language (OWL) [39]. It is perhaps this stack of standards
that forms the most common conception of what the Semantic Web is. However, if
the Semantic Web is to be just that, a “semantic web”, then there should be a
way to represent other languages with different semantics. If RDF is forced to
be a monotonic, open-world language, then this immediately pigeonholes what
can be represented on the Semantic Web. If RDF is interpreted strictly as a
data model, devoid of semantics, then any other knowledge representation
language can be represented in RDF and thus, contribute to the Semantic Web.
This section will discuss three logic languages: RDFS, OWL, and the Non-
Axiomatic Logic (NAL) [58]. RDFS and OWL are generally understood in the
Semantic Web community as these are the primary logic languages used. However,
NAL is a multi-valent, non-monotonic language that, if it is to be implemented
on the Semantic Web, requires that RDF be interpreted as a data model, not as a
logic language. Moreover, NAL is an attractive language for the Semantic Web
because its reasoning process is inherently distributed, can handle
conflicting, inconsistent data, and was designed on the assumption of
insufficient knowledge and computing resources.
### 2.1 RDF Schema
RDFS is a simple language with a small set of inference rules [14]. In RDF,
resources (e.g. URIs and blank nodes) maintain properties (i.e. rdf:Property).
These properties are used to relate resources to other resources and literals.
In RDFS, classes and properties can be formally defined. Class definitions
organize resources into abstract categories. Property definitions specify the
way in which these resources are related to one another. For example, it is
possible to state there exist people and dogs (i.e. classes) and people have
dogs as pets (i.e. a property). This is represented in RDFS in Figure 1.
Figure 1: An RDFS ontology that states that a person has a dog as a pet.
RDFS inference rules are used to derive new statements given existing
statements that use the RDFS language. RDFS inference rules make use of
statements with the following URIs:
* •
rdfs:Class: denotes a class as opposed to an instance.
* •
rdf:Property: denotes a property/role.
* •
rdfs:domain: denotes what a property projects from.
* •
rdfs:range: denotes what a property projects to.
* •
rdf:type: denotes that an instance is a type of class.
* •
rdfs:subClassOf: denotes that a class is a subclass of another.
* •
rdfs:subPropertyOf: denotes that a property is a sub-property of another.
* •
rdfs:Resource: denotes a generic resource.
* •
rdfs:Datatype: denotes a literal primitive class.
* •
rdfs:Literal: denotes a generic literal class.
RDFS supports two general types of inference: subsumption and realization.
Subsumption determines which classes are a subclass of another. The RDFS
inference rules that support subsumption are
$(?x,\texttt{rdf:type},\texttt{rdfs:Class})\implies(?x,\texttt{rdfs:subClassOf},\texttt{rdfs:Resource}),$
$(?x,\texttt{rdf:type},\texttt{rdfs:Datatype})\implies(?x,\texttt{rdfs:subClassOf},\texttt{rdfs:Literal}),$
$\displaystyle(?x,\texttt{rdfs:subPropertyOf},?y)\,\wedge\,$
$\displaystyle(?y,\texttt{rdfs:subPropertyOf},?z)$
$\displaystyle\implies(?x,\texttt{rdfs:subPropertyOf},?z).$
and finally,
$\displaystyle(?x,\texttt{rdfs:subClassOf},?y)\,\wedge\,$
$\displaystyle(?y,\texttt{rdfs:subClassOf},?z)$
$\displaystyle\implies(?x,\texttt{rdfs:subClassOf},?z).$
Thus, if both
(lanl:Chihuahua, rdfs:subClassOf, lanl:Dog)
(lanl:Dog, rdfs:subClassOf, lanl:Mammal)
are asserted, then it can be inferred that
(lanl:Chihuahua, rdfs:subClassOf, lanl:Mammal).
Next, realization is used to determine if a resource is an instance of a
class. The RDFS inference rules that support realization are
$(?x,?y,?z)\implies(?x,\texttt{rdf:type},\texttt{rdfs:Resource}),$
$(?x,?y,?z)\implies(?y,\texttt{rdf:type},\texttt{rdf:Property}),$
$(?x,?y,?z)\implies(?z,\texttt{rdf:type},\texttt{rdfs:Resource}),$
$(?x,\texttt{rdf:type},?y)\wedge(?y,\texttt{rdfs:subClassOf},?z)\implies(?x,\texttt{rdf:type},?z),$
$(?w,\texttt{rdfs:domain},?x)\wedge(?y,?w,?z)\implies(?y,\texttt{rdf:type},?x),$
and finally,
$(?w,\texttt{rdfs:range},?x)\wedge(?y,?w,?z)\implies(?z,\texttt{rdf:type},?x).$
Thus if, along with the statements in Figure 1,
(lanl:marko, lanl:pet, lanl:fluffy)
is asserted, then it can be inferred that
(lanl:marko, rdf:type, lanl:Person)
(lanl:fluffy, rdf:type, lanl:Dog).
Given a knowledge base containing statements, these inference rules continue
to execute until they no longer produce novel statements. It is the purpose of
an RDFS reasoner to efficiently execute these rules. There are two primary
ways in which inference rules are executed: at insert time and at query time.
With respect to insert time, if a statement is inserted (i.e. asserted) into
the knowledge base, then the RDFS inference rules execute to determine what is
entailed by this new statement. These newly entailed statements are then
inserted in the knowledge base and the process continues. While this approach
ensures fast query times (as all entailments are guaranteed to exist at query
time), it greatly increases the number of statements generated. For instance,
given a deep class hierarchy, if a resource is a type of one of the leaf
classes, then it is asserted that it is a type of all the super classes of that
leaf class. In order to alleviate the issue of “statement bloat,” inference
can instead occur at query time. When a query is executed, the reasoner
determines what other implicit statements should be returned with the query.
The benefits and drawbacks of each approach are benchmarked, like much of
computing, according to space vs. time.
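To make the insert-time (forward-chaining) execution of these rules concrete, the following Python sketch runs the subClassOf-transitivity and type-propagation rules from the examples above to a fixpoint over a small set of triples. It is a minimal illustration under assumed, illustrative lanl: URIs, not an RDFS-compliant reasoner.

```python
# Minimal forward-chaining sketch of two RDFS rules (subClassOf transitivity
# and type propagation), run to a fixpoint at insert time. Illustrative only;
# a compliant reasoner implements the full RDFS rule set.
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(triples):
    kb = set(triples)
    while True:
        new = set()
        for (s1, p1, o1) in kb:
            for (s2, p2, o2) in kb:
                # (?x subClassOf ?y) ^ (?y subClassOf ?z) => (?x subClassOf ?z)
                if p1 == SUBCLASS and p2 == SUBCLASS and o1 == s2:
                    new.add((s1, SUBCLASS, o2))
                # (?x type ?y) ^ (?y subClassOf ?z) => (?x type ?z)
                if p1 == RDF_TYPE and p2 == SUBCLASS and o1 == s2:
                    new.add((s1, RDF_TYPE, o2))
        if new <= kb:          # fixpoint reached: no novel statements
            return kb
        kb |= new

kb = rdfs_closure({
    ("lanl:Chihuahua", SUBCLASS, "lanl:Dog"),
    ("lanl:Dog", SUBCLASS, "lanl:Mammal"),
    ("lanl:fluffy", RDF_TYPE, "lanl:Chihuahua"),
})
assert ("lanl:Chihuahua", SUBCLASS, "lanl:Mammal") in kb
assert ("lanl:fluffy", RDF_TYPE, "lanl:Mammal") in kb
```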
### 2.2 Web Ontology Language
OWL is a more complicated language which extends RDFS by providing more
expressive constructs for defining classes [39]. Moreover, beyond subsumption
and realization, OWL provides inference rules to determine class and instance
equivalence. There are many OWL specific inference rules. In order to give the
flavor of OWL, without going into the many specifics, this subsection will
only present some examples of the more commonly used constructs. For an
in-depth review of OWL, please refer to [36].
Perhaps the most widely used language URI in OWL is owl:Restriction. In RDFS,
a property can only have a domain and a range. In OWL, a class can apply the
following restrictions to a property:
* •
owl:cardinality
* •
owl:minCardinality
* •
owl:maxCardinality
* •
owl:hasValue
* •
owl:allValuesFrom
* •
owl:someValuesFrom
Cardinality restrictions are used to determine equivalence and inconsistency.
For example, in an OWL ontology, it is possible to state that a country can
only have one president. This is expressed in OWL as diagrammed in Figure 2.
The _:1234 resource is a blank node that denotes a restriction on the country
class’s lanl:president property.
Figure 2: An OWL ontology that states that the president of a country is a
person and there can be at most one president for a country.
Next, if usa:barack and usa:obama are both asserted to be the president of the
United States with the statements
(usa:barack, lanl:president, usa:United_States)
(usa:obama, lanl:president, usa:United_States),
then it can be inferred (according to OWL rules) that these resources are
equivalent. This equivalence relationship is made possible because the maximum
cardinality of the lanl:president property of a country is $1$. Therefore, if
there are “two” people that are president, then they must be the same person.
This is made explicit when the reasoner asserts the statements
(usa:barack, owl:sameAs, usa:obama)
(usa:obama, owl:sameAs, usa:barack).
Next, if lanl:herbertv is asserted to be different from usa:barack (which,
as above, was asserted to be the same as usa:obama) and lanl:herbertv is
also asserted to be the president of the United States, then an inconsistency
is detected. Thus, given the ontology asserted in Figure 2 and the previous
assertions, asserting
(lanl:herbertv, owl:differentFrom, usa:barack)
(lanl:herbertv, lanl:president, usa:United_States)
causes an inconsistency. This inconsistency is due to the fact that a country
can only have one president and lanl:herbertv is not usa:barack.
Two other useful language URIs for properties in OWL are
* •
owl:SymmetricProperty
* •
owl:TransitiveProperty
In short, if the property $y$ is symmetric and $(x,y,z)$ is asserted, then
$(z,y,x)$ can be inferred. Next, if the property $y$ is transitive and both
$(w,y,x)$ and $(x,y,z)$ are asserted, then $(w,y,z)$ can be inferred.
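The following Python sketch illustrates how these two entailments could be computed over a small set of triples. It is a minimal illustration rather than an OWL reasoner, and the lanl:friend and lanl:ancestor properties are assumed, illustrative URIs.

```python
# Sketch of the owl:SymmetricProperty and owl:TransitiveProperty entailments
# described above, run to a fixpoint; not a full OWL reasoner.
def owl_closure(triples, symmetric, transitive):
    kb = set(triples)
    while True:
        new = set()
        for (x, y, z) in kb:
            if y in symmetric:
                new.add((z, y, x))               # (x,y,z) => (z,y,x)
            if y in transitive:
                for (x2, y2, z2) in kb:
                    if y2 == y and x2 == z:
                        new.add((x, y, z2))      # (x,y,z) ^ (z,y,z2) => (x,y,z2)
        if new <= kb:
            return kb
        kb |= new

kb = owl_closure(
    {("lanl:marko", "lanl:friend", "ucla:apepe"),
     ("lanl:fluffy", "lanl:ancestor", "lanl:rex"),
     ("lanl:rex", "lanl:ancestor", "lanl:spot")},
    symmetric={"lanl:friend"},
    transitive={"lanl:ancestor"},
)
assert ("ucla:apepe", "lanl:friend", "lanl:marko") in kb
assert ("lanl:fluffy", "lanl:ancestor", "lanl:spot") in kb
```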
There are various reasoners that exist for the OWL language. A popular OWL
reasoner is Pellet [44]. The purpose of Pellet is to execute the OWL rules
given existing statements in the knowledge base. For many large-scale
knowledge base applications (i.e. triple- or quad-stores), the application
provides its own reasoner. Popular knowledge bases that make use of the OWL
language are OWLim [34], Oracle Spatial [3], and AllegroGraph [1]. It is noted
that due to the complexity (in terms of implementation and running times),
many knowledge base reasoners only execute subsets of the OWL language. For
instance, AllegroGraph’s reasoner is called RDFS++ as it implements all of the
RDFS rules and only some of the OWL rules. However, it is also noted that
RacerPro [26] can be used with AllegroGraph to accomplish complete OWL
reasoning. Finally, OpenSesame [16] can be used for RDFS reasoning. Because
OpenSesame is both a knowledge base and an API, knowledge base applications
that implement the OpenSesame interfaces can automatically leverage the
OpenSesame RDFS reasoner; though there may be speed issues as the reasoner is
not natively designed for that knowledge base application.
### 2.3 Non-Axiomatic Logic
If RDF is strictly considered a monotonic, open-world logic language, then the
Semantic Web is solidified as an open-world, monotonic logic environment. If
reasoning is restricted to the legacy semantics of RDF, then it will become
more difficult to reason on the Semantic Web as it grows in size and as more
inconsistent knowledge is introduced. Given the number of statements on the
Semantic Web, computational hurdles are met when reasoning with RDFS and OWL.
With inconsistent statements on the Semantic Web, it is difficult to reason as
inconsistencies are not handled gracefully in RDFS or OWL. In general, sound
and complete reasoning will not be feasible as the Semantic Web continues to
grow. In order to meet these challenges, the Large Knowledge Collider project
(LarKC) is focused on developing a reasoning platform to handle incomplete and
inconsistent data [21].
> “Researchers have developed methods for reasoning in rather small, closed,
> trustworthy, consistent, and static domains. They usually provide a small
> set of axioms and facts. [OWL] reasoners can deal with $10^{5}$ axioms
> (concept definitions), but they scale poorly for large instance sets. […]
> There is a deep mismatch between reasoning on a Web scale and efficient
> reasoning algorithms over restricted subsets of first-order logic. This is
> rooted in underlying assumptions of current systems for computational logic:
> small set of axioms, small number of facts, completeness of inference,
> correctness of inference rules and consistency, and static domains.” [21]
There is a need for practical methods to reason on the Semantic Web. One
promising logic was founded on the assumption of insufficient knowledge and
resources. This logic is called the Non-Axiomatic Logic (NAL) [57].
Unfortunately for the Semantic Web as it is now, NAL breaks the assumptions of
RDF semantics as NAL is multi-valent, non-monotonic, and makes use of
statements with a subject-predicate form. However, if RDF is considered simply
a data model, then it is possible to represent NAL statements and make use of
its efficient, distributed reasoning system. Again, for the massive-scale,
inconsistent world of the Semantic Web, sound and complete approaches are
becoming increasingly impractical.
#### 2.3.1 Language
There are currently 8 NAL languages. Each language, from NAL-0 to NAL-8,
builds on the constructs of the previous in order to support more complex
statements. The following list itemizes the various languages and what can be
expressed in each.
* •
NAL-0: binary inheritance.
* •
NAL-1: inference rules.
* •
NAL-2: sets and variants of inheritance.
* •
NAL-3: intersections and differences.
* •
NAL-4: products, images, and ordinary relations.
* •
NAL-5: statement reification.
* •
NAL-6: variables.
* •
NAL-7: temporal statements.
* •
NAL-8: procedural statements.
Every NAL language is based on a simple inheritance relationship. For example,
in NAL-0, which assumes all statements are binary,
$\texttt{lanl:marko}\rightarrow\texttt{lanl:Person}$
states that Marko (subject) inherits ($\rightarrow$) from person (predicate).
Given that all subjects and predicates are joined by inheritance, there is no
need to represent the copula when formally representing a statement. (This is
not completely true, as different types of inheritance are defined in NAL-2,
such as instance $\circ\\!\\!\\!\\!\rightarrow$, property
$\rightarrow\\!\\!\\!\circ$, and instance-property
$\circ\\!\\!\\!\\!\rightarrow\\!\\!\\!\circ$ inheritance. However, these $3$
types of inheritance can also be represented using the basic $\rightarrow$
inheritance. Moreover, the RDF representation presented can support the
explicit representation of other inheritance relationships if desired.) If
RDF, as a data model, is to represent NAL, then one possible representation
for the above statement is
(lanl:marko, lanl:1234, lanl:Person),
where lanl:1234 serves as a statement pointer. This pointer could be, for
example, a 128-bit Universally Unique Identifier (UUID) [37]. It is important
to maintain a statement pointer as beyond NAL-0, statements are not simply
“true” or “false”. A statement’s truth is not defined by its existence, but
instead by extra numeric metadata associated with the statement. NAL maintains
an
> “experience-grounded semantics [where] the truth value of a judgment
> indicates the degree to which the judgment is supported by the system’s
> experience. Defined in this way, truth value is system-dependent and time-
> dependent. Different systems may have conflicting opinions, due to their
> different experiences.” [58]
A statement has a particular truth value associated with it that is defined as
the frequency of supporting evidence (denoted $f\in[0,1]$) and the confidence
in the stability of that frequency (denoted $c\in[0,1]$). For example, beyond
NAL-0, the statement “Marko is a person” is not “100% true” simply because it
exists. Instead, every time that aspects of Marko coincide with aspects of
person, then $f$ increases. Likewise, every time aspects of Marko do not
coincide with aspects of person, $f$ decreases. (The idea of “aspects
coinciding” is formally defined in NAL, but is not discussed here for the sake
of brevity. In short, a statement’s $f$ is modulated by both the system’s
“external” experiences and “internal” reasoning—both create new evidence. See
[60] for an in depth explanation.) Thus, NAL is non-monotonic as its statement
evidence can increase and decrease. To demonstrate $f$ and $c$, the above
“Marko is a person” statement can be represented in NAL-1 as
$\texttt{lanl:marko}\rightarrow\texttt{lanl:Person}\;<0.9,0.8>,$
where, for the sake of this example, $f=0.9$ and $c=0.8$. In an RDF
representation, this can be denoted
(lanl:marko, lanl:1234, lanl:Person)
(lanl:1234, nal:frequency, "0.9"^^xsd:float)
(lanl:1234, nal:confidence, "0.8"^^xsd:float),
where the lanl:1234 serves as a statement pointer allowing NAL’s nal:frequency
and nal:confidence constructs to reference the inheritance statement.
NAL-4 supports statements that are more analogous to the subject-object-
predicate form of RDF. If Marko is denoted by the URI lanl:marko, Alberto by
the URI ucla:apepe, and friendship by the URI lanl:friend, then in NAL-4, the
statement “Alberto is a friend of Marko” is denoted in RDF as
(ucla:apepe, lanl:friend, lanl:marko).
In NAL-4 this is represented as
$(\texttt{ucla:apepe}\times\texttt{lanl:marko})\rightarrow\texttt{lanl:friend}\;<0.8,0.5>,$
where $f=0.8$ and $c=0.5$ are provided for the sake of the example. This
statement states that the set $(\texttt{ucla:apepe},\texttt{lanl:marko})$
inherits the property of friendship to a certain degree and stability as
defined by $f$ and $c$, respectively. The RDF representation of this NAL-4
construct can be denoted
(lanl:2345, nal:_1, ucla:apepe)
(lanl:2345, nal:_2, lanl:marko)
(lanl:2345, lanl:3456, lanl:friend)
(lanl:3456, nal:frequency, "0.8"^^xsd:float)
(lanl:3456, nal:confidence, "0.5"^^xsd:float).
In the triples above, lanl:2345 serves as a set (the pair of Alberto and
Marko) and thus, this set inherits from friendship. That is, Alberto and Marko
inherit the property of friendship.
#### 2.3.2 Reasoning
> “In traditional logic, a ‘valid’ or ‘sound’ inference rule is one that never
> derives a false conclusion (that is, it will be contradicted by the future
> experience of the system) from true premises [19]. [In NAL], a ‘valid
> conclusion’ is one that is most consistent with the evidence in the past
> experience, and a ‘valid inference rule’ is one whose conclusions are
> supported by the premises used to derive them.” [60]
Given that NAL is predicated on insufficient knowledge, there is no guarantee
that reasoning will produce “true” knowledge with respect to the world that
the statements are modeling as only a subset of that world is ever known.
However, this does not mean that NAL reasoning is random, instead, it is
consistent with respect to what the system knows. In other words,
> “the traditional definition of validity of inference rules—that is to get
> true conclusions from true premises—no longer makes sense in [NAL]. With
> insufficient knowledge and resources, even if the premises are true with
> respect to the past experience of the system there is no way to get
> infallible predictions about the future experience of the system even though
> the premises themselves may be challenged by new evidence.” [58]
The inference rules in NAL are all syllogistic in that they are based on
statements sharing similar terms (i.e. URIs) [45]. The typical inference rule
in NAL has the following form
$(\tau_{1}<f_{1},c_{1}>\;\wedge\;\tau_{2}<f_{2},c_{2}>)\;\vdash\;\tau_{3}<f_{3},c_{3}>,$
where $\tau_{1}$ and $\tau_{2}$ are statements that share a common term. There
are four standard syllogisms used in NAL reasoning. These are enumerated
below.
1. 1.
deduction: $(x\rightarrow y<f_{1},c_{1}>\;\wedge\;y\rightarrow
z<f_{2},c_{2}>)\;\vdash\;x\rightarrow z<f_{3},c_{3}>$.
2. 2.
induction: $(x\rightarrow y<f_{1},c_{1}>\;\wedge\;z\rightarrow
y<f_{2},c_{2}>)\;\vdash\;x\rightarrow z<f_{3},c_{3}>$.
3. 3.
abduction: $(x\rightarrow y<f_{1},c_{1}>\;\wedge\;x\rightarrow
z<f_{2},c_{2}>)\;\vdash\;y\rightarrow z<f_{3},c_{3}>$.
4. 4.
exemplification: $(x\rightarrow y<f_{1},c_{1}>\;\wedge\;y\rightarrow
z<f_{2},c_{2}>)\;\vdash\;z\rightarrow x<f_{3},c_{3}>$.
Two other important inference rules not discussed here are choice (i.e. what to
do with contradictory evidence) and revision (i.e. how to update existing
evidence with new evidence). Each of the inference rules has a different
formula for deriving $<f_{3},c_{3}>$ from $<f_{1},c_{1}>$ and
$<f_{2},c_{2}>$. (Note that when the entailed statement already exists, its
$<f_{3},c_{3}>$ component is revised according to the revision rule; revision
is not discussed in this article.) These formulas are enumerated below.
1. 1.
deduction: $f_{3}=f_{1}f_{2}$ and $c_{3}=f_{1}c_{1}f_{2}c_{2}$.
2. 2.
induction: $f_{3}=f_{1}$ and
$c_{3}=\frac{f_{1}c_{1}c_{2}}{f_{1}c_{1}c_{2}+k}$.
3. 3.
abduction: $f_{3}=f_{2}$ and
$c_{3}=\frac{f_{2}c_{1}c_{2}}{f_{2}c_{1}c_{2}+k}$.
4. 4.
exemplification: $f_{3}=1$ and
$c_{3}=\frac{f_{1}c_{1}f_{2}c_{2}}{f_{1}c_{1}f_{2}c_{2}+k}$.
The variable $k\in\mathbb{N}^{+}$ is a system specific parameter used in the
determination of confidence.
To demonstrate deduction, suppose the two statements
$\texttt{lanl:marko}\rightarrow\texttt{lanl:Person}<0.5,0.5>$
$\texttt{lanl:Person}\rightarrow\texttt{lanl:Mammal}<0.9,0.9>.$
Given these two statements and the inference rule for deduction, it is
possible to infer
$\texttt{lanl:marko}\rightarrow\texttt{lanl:Mammal}<0.45,0.2025>.$
Next suppose the statement
$\texttt{lanl:Dog}\rightarrow\texttt{lanl:Mammal}<0.9,0.9>.$
Given the existing statements, induction, and $k=1$, it is possible to infer
$\texttt{lanl:marko}\rightarrow\texttt{lanl:Dog}<0.45,0.0758>.$
Thus, while the system is not confident, according to all that the system
knows, Marko is a type of dog. This is because there are aspects of Marko that
coincide with aspects of dog—they are both mammals. However, future evidence,
such as fur, four legs, sloppy tongue, etc. will be further evidence that
Marko and dog do not coincide and thus, the $f$ of
$\texttt{lanl:marko}\rightarrow\texttt{lanl:Dog}$ will decrease.
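The following Python sketch encodes the deduction and induction truth-value functions listed above and reproduces this worked example with $k=1$; the function names are illustrative.

```python
# Sketch of the NAL deduction and induction truth-value functions, reproducing
# the worked example above (k = 1). Values are illustrative.
def deduction(f1, c1, f2, c2):
    return f1 * f2, f1 * c1 * f2 * c2

def induction(f1, c1, f2, c2, k=1):
    return f1, (f1 * c1 * c2) / (f1 * c1 * c2 + k)

# lanl:marko -> lanl:Person <0.5,0.5> and lanl:Person -> lanl:Mammal <0.9,0.9>
f, c = deduction(0.5, 0.5, 0.9, 0.9)
print(f, c)                    # 0.45 0.2025  (lanl:marko -> lanl:Mammal)

# with lanl:Dog -> lanl:Mammal <0.9,0.9> and the entailment just derived
f, c = induction(0.45, 0.2025, 0.9, 0.9)
print(f, round(c, 4))          # 0.45 0.0758  (lanl:marko -> lanl:Dog)
```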
The significance of NAL reasoning is that all inference is based on local
areas of the knowledge base. That is, all inference requires only two degrees
of separation from the resource being inferred on. Moreover, reasoning is
constrained by available computational resources, not by a requirement for
logical completeness. Because of these two properties, the implemented
reasoning system is inherently distributed and, when computational resources
are not available, the system does not break; it simply yields fewer
conclusions. For the Semantic Web, it may be best to adopt a logic that is
better able to take advantage of its size and inconsistency. With a reasoner
that is distributable and functions under variable computational resources,
and by making use of a language that is non-monotonic and supports degrees of
“truth”, NAL may serve as a more practical logic for the Semantic Web.
However, this is only possible if the RDF data model is separated from the RDF
semantics and NAL’s subject-predicate form can be legally represented.
There are many other language constructs in NAL that are not discussed here.
For an in-depth review of NAL, please refer to the de facto reference [60].
Moreover, for a discussion of the difference between logics of truth
(i.e. mathematical logic—modern predicate logic) and logics of thought (i.e.
cognitive logic—NAL), see [59].
## 3 A Distributed Multi-Relational Network
The Web of Data can be interpreted as a distributed multi-relational network—a
Giant Global Graph. (The term “graph” is used in the mathematical domain of
graph theory and the term “network” is used primarily in the physics and
computer science domain of network theory. In this article, both terms are
used depending on their source and are deemed synonymous with each other.) A
multi-relational network denotes a set of vertices (i.e. nodes) that are
connected to one another by a set of labeled edges (i.e. typed links). (A
multi-relational network is also known as a directed labeled graph or a
semantic network.) In the graph and
network theory community, the multi-relational network is less prevalent. The
more commonly used network data structure is the single-relational network,
where all edges are of the same type and thus, there is no need to label
edges. Unfortunately, most network algorithms have been developed for the
single-relational network data structure. However, it is possible to port all
known single-relational network algorithms over to the multi-relational
domain. In doing so, it is possible to leverage these algorithms on the Giant
Global Graph. The purpose of this section is to
1. 1.
formalize the single-relational network (see §3.1),
2. 2.
formalize the multi-relational network (see §3.2),
3. 3.
present a collection of common single-relational network algorithms (see
§3.3), and then finally,
4. 4.
present a method for porting all known single-relational network algorithms
over to the multi-relational domain (see §3.4).
Network algorithms are useful in many respects and have been generally applied
to analysis and querying. If the network models an aspect of the world, then
network analysis techniques can be used to elucidate general structural
properties of the network and thus, the world. Moreover, network query
algorithms have been developed for searching and ranking. When these
algorithms can be effectively and efficiently applied to the Giant Global
Graph, the Giant Global Graph can serve as a medium for network analysis and
query.
### 3.1 Single-Relational Networks
The single-relational network represents a set of vertices that are related to
one another by a homogeneous set of edges. For instance, in a single-relational
coauthorship network, all vertices denote authors and all edges denote a
coauthoring relationship. Coauthorship exists between two authors if they have
both written an article together. Moreover, coauthorship is symmetric—if
person $x$ coauthored with person $y$, then person $y$ has coauthored with
person $x$. In general, these types of symmetric networks are known as
undirected, single-relational networks and can be denoted
$G^{\prime}=(V,E\subseteq\\{V\times V\\}),$
where $V$ is the set of vertices and $E$ is the set of undirected edges. The
edge $\\{i,j\\}\in E$ states that vertex $i$ and $j$ are connected to each
other. Figure 3 diagrams an undirected coauthorship edge between two author
vertices.
Figure 3: An undirected edge between two authors in an undirected single-
relational network.
Single-relational networks can also be directed. For instance, in a single-
relational citation network, the set of vertices denote articles and the set
of edges denote citations between the articles. In this scenario, the edges
are not symmetric as one article citing another does not imply that the cited
article cites the citing article. Directed single-relational networks can be
denoted
$G=(V,E\subseteq(V\times V)),$
where $(i,j)\in E$ states that vertex $i$ is connected to vertex $j$. Figure 4
diagrams a directed citation edge between two article vertices.
Figure 4: A directed edge between two articles in a directed single-
relational network.
Both undirected and directed single-relational networks have a convenient
matrix representation. This matrix is known as an adjacency matrix and is
denoted
$\mathbf{A}_{i,j}=\begin{cases}1&\text{if}\;(i,j)\in E\\\
0&\text{otherwise,}\end{cases}$
where $\mathbf{A}\in\\{0,1\\}^{|V|\times|V|}$. If $\mathbf{A}_{i,j}=1$, then
vertex $i$ is adjacent (i.e. connected) to vertex $j$. It is important to note
that there exists an information-preserving, bijective mapping between the
set-theoretic and matrix representations of a network. Throughout the
remainder of this section, depending on the algorithm presented, one or the
other form of a network is used. Finally, note that the remainder of this
section is primarily concerned with directed networks as a directed network
can model an undirected network. In other words, the undirected edge
$\\{i,j\\}$ can be represented as the two directed edges $(i,j)$ and $(j,i)$.
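As a minimal illustration of the correspondence between the set-theoretic and matrix representations, the following Python sketch builds an adjacency matrix from a small, assumed directed edge set.

```python
import numpy as np

# Sketch: a directed single-relational edge set mapped to its adjacency
# matrix A; the vertex names are illustrative.
V = ["a", "b", "c"]
E = {("a", "b"), ("b", "c"), ("a", "c")}
idx = {v: n for n, v in enumerate(V)}

A = np.zeros((len(V), len(V)))
for (i, j) in E:
    A[idx[i], idx[j]] = 1.0

# the undirected edge {i,j} would be encoded as both (i,j) and (j,i)
print(A)
```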
### 3.2 Multi-Relational Networks
The multi-relational network is a more complicated structure that can be used
to represent multiple types of relationships between vertices. For instance,
it is possible to not only represent researchers, but also their articles in a
network of edges that represent authorship, citation, etc. A directed multi-
relational network can be denoted
$M=(V,\mathbb{E}=\\{E_{1},E_{2},\ldots,E_{m}\subseteq(V\times V)\\}),$
where $\mathbb{E}$ is a family of edge sets such that any
$E_{k}\in\mathbb{E}:1\leq k\leq m$ is a set of edges with a particular meaning
(e.g. authorship, citation, etc.). A multi-relational network can be
interpreted as a collection of single-relational networks that all share the
same vertex set. Another representation of a multi-relational network is
similar to the one commonly employed to define an RDF graph. This
representation is denoted
$M^{\prime}\subseteq(V\times\Omega\times V),$
where $\Omega$ is the set of edge labels. In this representation if $i,j\in V$
and $k\in\Omega$, then the triple $(i,k,j)$ states that vertex $i$ is
connected to vertex $j$ by the relationship type $k$.
Figure 5 diagrams multiple relationship types between scholars and articles in
a multi-relational network.
Figure 5: Multiple types of edges between articles and scholars in a directed
multi-relational network.
Like the single-relational network and its accompanying adjacency matrix, the
multi-relational network has a convenient $3$-way tensor representation. This
$3$-way tensor is denoted
$\mathcal{A}^{k}_{i,j}=\begin{cases}1&\text{if}\;(i,j)\in E_{k}:1\leq k\leq
m\\\ 0&\text{otherwise.}\end{cases}$
This representation can be interpreted as a collection of adjacency matrix
“slices”, where each slice is a particular edge type. In other words, if
$\mathcal{A}^{k}_{i,j}=1$, then $(i,k,j)\in M^{\prime}$. Like the relationship
between the set-theoretic and matrix forms of a single-relational network,
$M$, $M^{\prime}$, and $\mathcal{A}$ can all be mapped onto one another
without loss of information. Each representation will be used depending on the
usefulness of its form with respect to the idea being expressed.
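The following Python sketch illustrates the $3$-way tensor representation for a small, assumed set of triples; the vertex and edge-label URIs are illustrative.

```python
import numpy as np

# Sketch: a 3-way tensor view of a multi-relational network, one adjacency
# "slice" per edge label.
V = ["lanl:marko", "lanl:article1", "lanl:article2"]
labels = ["lanl:authored", "lanl:cites"]
triples = {("lanl:marko", "lanl:authored", "lanl:article1"),
           ("lanl:article1", "lanl:cites", "lanl:article2")}

v_idx = {v: n for n, v in enumerate(V)}
l_idx = {l: n for n, l in enumerate(labels)}

A = np.zeros((len(labels), len(V), len(V)))
for (i, k, j) in triples:
    A[l_idx[k], v_idx[i], v_idx[j]] = 1.0

# A[k] is the single-relational adjacency matrix slice for edge label k
print(A[l_idx["lanl:authored"]])
```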
On the Giant Global Graph, RDF serves as the specification for graphing
resources. Vertices are denoted by URIs, blank nodes, and literals and the
edge labels are denoted by URIs. Multi-relational network algorithms can be
used to exploit the Giant Global Graph. However, there are few algorithms
dedicated specifically to multi-relational networks. Most network algorithms
have been designed for single-relational networks. The remainder of this
section will discuss some of the more popular single-relational network
algorithms and then present a method for porting these algorithms (as well as
other single-relational network algorithms) over to the multi-relational
domain. This section concludes with a distributable and scalable method for
executing network algorithms on the Giant Global Graph.
### 3.3 Single-Relational Network Algorithms
The design and study of graph and network algorithms is conducted primarily by
mathematicians (graph theory) [17], physicists and computer scientists
(network theory) [12], and social scientists (social network analysis) [61].
Many of the algorithms developed in these domains can be used together and
form the general-purpose “toolkit” for researchers doing network analysis and
for engineers developing network-based services. The following itemized list
presents a collection of the single-relational network algorithms that will be
reviewed in this subsection. As denoted with its name in the itemization, each
algorithm can be used to identify properties of vertices, paths, or the
network. Vertex metrics assign a real value to a vertex. Path metrics assign a
real value to a path. And finally, network metrics assign a real value to the
network as a whole.
* •
shortest path: path metric (§3.3.1)
* •
eccentricity: vertex metric (§3.3.2)
* •
radius: network metric (§3.3.2)
* •
diameter: network metric (§3.3.2)
* •
closeness: vertex metric (§3.3.3)
* •
betweenness: vertex metric (§3.3.3)
* •
stationary probability distribution: vertex metric (§3.3.4)
* •
PageRank: vertex metric (§3.3.5)
* •
spreading activation: vertex metric (§3.3.6)
* •
assortative mixing: network metric (§3.3.7)
A simple intuitive approach to determine the appropriate algorithm to use for
an application scenario is presented in [35]. In short, various factors come
into play when selecting a network algorithm such as the topological features
of the network (e.g. its connectivity and its size), the computational
requirements of the algorithm (e.g. its complexity), the type of results that
are desired (e.g. personalized or global), and the meaning of the algorithm’s
result (e.g. geodesic-based, flow-based, etc.). The following sections will
point out which features describe the presented algorithms.
#### 3.3.1 Shortest Path
The shortest path metric is the foundation of all other geodesic metrics. The
other geodesic metrics discussed are eccentricity, radius, diameter,
closeness, and betweenness. A shortest path is defined for any two vertices
$i,j\in V$ such that the sink vertex $j$ is reachable from the source vertex
$i$. If $j$ is unreachable from $i$, then the shortest path between $i$ and
$j$ is undefined. Thus, for geodesic metrics, it is important to only
consider strongly connected networks, or strongly connected components of a
network. (Do not confuse a strongly connected network with a fully connected
network. A fully connected network is one where every vertex is connected to
every other vertex directly. A strongly connected network is one where every
vertex is connected to every other vertex indirectly, i.e. there exists a path
from any $i$ to any $j$.) The shortest path between any two vertices $i$ and
$j$ in a single-relational network is the path with the fewest edges among all
paths between $i$ and $j$. If $\rho:V\times V\rightarrow Q$ is a function that takes two
vertices and returns the set of all paths $Q$ where for any $q\in Q$,
$q=(i,\dots,j)$, then the length of the shortest path between $i$ and $j$ is
$min(\bigcup_{q\in Q}|q|-1)$, where $min$ returns the smallest value of its
domain. The shortest path function is denoted $s:V\times
V\rightarrow\mathbb{N}$ with the function rule
$s(i,j)=min\left(\bigcup_{q\in\rho(i,j)}|q|-1\right).$
There are many algorithms to determine the shortest path between vertices in a
network. Dijkstra’s method is perhaps the most popular as it is the typical
algorithm taught in introductory algorithms classes [20]. However, if the
network is unweighted, then a simple breadth-first search is a more efficient
way to determine the shortest path between $i$ and $j$. Starting from $i$ a
“fan-out” search for $j$ is executed where at each time step, adjacent
vertices are traversed to. The first path that reaches $j$ is the shortest
path from $i$ to $j$.
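The following Python sketch computes $s(i,j)$ by breadth-first search on a small, assumed adjacency structure; it assumes an unweighted directed network and is illustrative only.

```python
from collections import deque

# Sketch: breadth-first search for the shortest path length s(i, j) in an
# unweighted directed network given as an adjacency dict.
def shortest_path_length(adj, i, j):
    if i == j:
        return 0
    seen, queue = {i}, deque([(i, 0)])
    while queue:
        v, d = queue.popleft()
        for w in adj.get(v, ()):
            if w == j:
                return d + 1           # first path to reach j is shortest
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))
    return None                        # j unreachable from i: s(i, j) undefined

adj = {"a": ["b"], "b": ["c"], "c": []}
print(shortest_path_length(adj, "a", "c"))  # 2
```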
#### 3.3.2 Eccentricity, Radius, and Diameter
The radius and diameter of a network require the determination of the
eccentricity of every vertex in $V$. The eccentricity of a vertex $i$ is the
largest shortest path between $i$ and all other vertices in $V$ such that the
eccentricity function $e:V\rightarrow\mathbb{N}$ has the rule
$e(i)=max\left(\bigcup_{j\in V}s(i,j):i\neq j\right),$
where $max$ returns the largest value of its domain [28]. The eccentricity
metric requires the calculation of $|V|-1$ shortest paths from a particular
vertex.
The radius of the network is the minimum eccentricity of all vertices in $V$
[61]. The function $r:G\rightarrow\mathbb{N}$ has the rule
$r(G)=min\left(\bigcup_{i\in V}e(i)\right).$
Finally, the diameter of a network is the maximum eccentricity of the vertices
in $V$ [61]. The function $d:G\rightarrow\mathbb{N}$ has the rule
$d(G)=max\left(\bigcup_{i\in V}e(i)\right).$
The diameter of a network is, in some cases, telling of the growth properties
of the network (i.e. the general principle by which new vertices and edges are
added). For instance, if the network is randomly generated (edges are randomly
assigned between vertices), then the diameter of the network is much larger
than if the network is generated according to a more “natural growth” function
such as a preferential attachment model, where highly connected vertices tend
to get more edges (colloquially captured by the phrase “the rich get richer”)
[11]. Thus, in general, natural networks tend to have a much smaller diameter.
This was evinced by an empirical study of the World Wide Web citation network,
where the diameter of the network was concluded to be only $19$ [2].
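As an illustration, the following Python sketch computes the eccentricity of each vertex, and from it the radius and diameter, for a small, assumed, strongly connected network.

```python
from collections import deque

# Sketch: eccentricity, radius, and diameter over a strongly connected
# directed network (adjacency dict); the network is illustrative.
def bfs_lengths(adj, i):
    dist, queue = {i: 0}, deque([i])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

adj = {"a": ["b"], "b": ["c"], "c": ["a"]}   # a 3-cycle
ecc = {i: max(d for j, d in bfs_lengths(adj, i).items() if j != i) for i in adj}
print(ecc)                 # eccentricity of each vertex
print(min(ecc.values()))   # radius
print(max(ecc.values()))   # diameter
```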
#### 3.3.3 Closeness and Betweenness Centrality
Closeness and betweenness centrality are popular network metrics for
determining the “centralness” of a vertex and have been used in sociology
[61], bioinformatics [43], and bibliometrics [10]. Centrality is a loose term
that describes the intuitive notion that some vertices are more
connected/integral/central/influential within the network than others.
Closeness centrality is one such centrality measure and is defined as the
inverse of the sum of the shortest path lengths between some vertex $i$ and
all the other vertices in $V$ [5, 38, 52]. The function
$c:V\rightarrow\mathbb{R}$ has the rule
$c(i)=\frac{1}{\sum_{j\in V}s(i,j)}.$
Betweenness centrality is defined for a vertex in $V$ [23, 13]. The
betweenness of $i\in V$ is the number of shortest paths that exist between all
vertices $j,k\in V$ that have $i$ in their path divided by the total number of
shortest paths between $j$ and $k$, where $i\neq j\neq k$. If $\sigma:V\times
V\rightarrow Q$ is the function that returns the set of shortest paths between
any two vertices $j$ and $k$ such that
$\sigma(j,k)=\bigcup_{q\in\rho(j,k)}q:|q|-1=s(j,k)$
and $\hat{\sigma}:V\times V\times V\rightarrow Q$ is the set of shortest paths
between two vertices $j$ and $k$ that have $i$ in the path, where
$\hat{\sigma}(j,k,i)=\bigcup_{q\in\rho(j,k)}q:(|q|-1=s(j,k)\;\wedge\;i\in q),$
then the betweenness function $b:V\rightarrow\mathbb{R}$ has the rule
$b(i)=\sum_{i\neq j\neq k\in V}\frac{|\hat{\sigma}(j,k,i)|}{|\sigma(j,k)|}.$
There are many variations of the standard representations presented above. For
a more in-depth review of these metrics, see [61] and [12]. Finally,
centrality is not restricted only to geodesic metrics. The next three
algorithms are centrality metrics based on random walks or “flows” through a
network.
#### 3.3.4 Stationary Probability Distribution
A Markov chain is used to model the states of a system and the probability of
transition between states [27]. A Markov chain is best represented by a
probabilistic, single-relational network where the states are vertices, the
edges are transitions, and the edge weights denote the probability of
transition. A probabilistic, single-relational network can be denoted
$G^{\prime\prime}=\left(V,E\subseteq(V\times
V),\omega:E\rightarrow[0,1]\right)$
where $\omega$ is a function that maps each edge in $E$ to a probability
value. The outgoing edges of any vertex form a probability distribution that
sums to $1.0$. In this section, all outgoing probabilities from a particular
vertex are assumed to be equal. Thus, $\forall
j,k\in\Gamma^{+}(i):\omega(i,j)=\omega(i,k)$, where $\Gamma^{+}(i)\subseteq V$
is the set of vertices adjacent to $i$.
A random walker is a useful way to visualize the transitioning between
vertices. A random walker is a discrete element that exists at a particular
$i\in V$ at a particular point in time $t\in\mathbb{N}^{+}$. If the vertex at
time $t$ is $i$ then the next vertex at time $t+1$ will be one of the vertices
adjacent to $i$ in $\Gamma^{+}(i)$. In this manner, the random walker makes a
probabilistic jump to a new vertex at every time step. As time $t$ goes to
infinity a unique stationary probability distribution emerges if and only if
the network is aperiodic and strongly connected. The stationary probability
distribution expresses the probability that the random walker will be at a
particular vertex in the network. In matrix form, the stationary probability
distribution is represented by a row vector $\pi\in[0,1]^{|V|}$, where
$\pi_{i}$ is the probability that the random walker is at $i$ and $\sum_{i\in
V}\mathbf{\pi}_{i}=1.0$. If the network is represented by the row-stochastic
adjacency matrix
$\mathbf{A}_{i,j}=\begin{cases}\frac{1}{|\Gamma^{+}(i)|}&\text{if }(i,j)\in
E\\\ 0&\text{otherwise}\end{cases}$
and if the network is aperiodic and strongly connected, then there exists some
$\pi$ such that $\pi\mathbf{A}=\pi$. Thus, the stationary probability
distribution is the primary eigenvector of $\mathbf{A}$. The primary
eigenvector of a network is useful in ranking its vertices as those vertices
that are more central are those that have a higher probability in $\pi$. Thus,
intuitively, where the random walker is likely to be is an indicator of how
central the vertex is. However, if the network is not strongly connected (very
likely for most natural networks), then a stationary probability distribution
does not exist.
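The following Python sketch approximates the stationary probability distribution by power iteration on a small, assumed row-stochastic matrix; it is illustrative rather than a general-purpose solver.

```python
import numpy as np

# Sketch: stationary probability distribution (pi A = pi) of an aperiodic,
# strongly connected network, approximated by power iteration.
A = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])    # row-stochastic adjacency matrix

pi = np.full(3, 1.0 / 3.0)         # start from the uniform distribution
for _ in range(1000):
    pi = pi @ A                    # repeatedly transition the distribution
print(pi, pi.sum())                # converges to the primary eigenvector
```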
#### 3.3.5 PageRank
PageRank makes use of the random walker model previously presented [15].
However, in PageRank, the random walker does not simply traverse the single-
relational network by moving between adjacent vertices, but instead has a
probability of jumping, or “teleporting”, to some random vertex in the
network. In some instances, the random walker will follow an outgoing edge
from its current vertex location. In other instances, the random walker will
jump to some other random vertex in the network that is not necessarily
adjacent to it. The benefit of this model is that it ensures that the network
is strongly connected and aperiodic and thus, there exists a stationary
probability distribution. In order to calculate PageRank, two networks are
used. The standard single-relational network is represented as the row-
stochastic adjacency matrix
$\mathbf{A}_{i,j}=\begin{cases}\frac{1}{|\Gamma^{+}(i)|}&\text{if }(i,j)\in
E\\\ \frac{1}{|V|}&\text{if }\Gamma^{+}(i)=\emptyset\\\ 0&\text{otherwise.}\end{cases}$
Any $i\in V$ where $\Gamma^{+}(i)=\emptyset$ is called a “rank-sink”. Rank-
sinks prevent the network from being strongly connected. To rectify this
connectivity problem, every rank-sink vertex is connected to every
other vertex with probability $\frac{1}{|V|}$ (the second case above). Next, for teleportation, a
fully connected network is created that is denoted
$\mathbf{B}_{i,j}=\frac{1}{|V|}$.
The random walker will choose to use $\mathbf{A}$ or $\mathbf{B}$ at time step
$t$ as its transition network depending on the probability value
$\alpha\in(0,1]$, where in practice, $\alpha=0.85$. This means that 85% of the
time the random walker will use the edges in $\mathbf{A}$ to traverse, and the
other 15% of the time, the random walker will use the edges in $\mathbf{B}$.
The $\alpha$-biased union of the networks $\mathbf{A}$ and $\mathbf{B}$
guarantees that the random walker is traversing a strongly connected and
aperiodic network. The random walker’s traversal network can be expressed by
the matrix
$\mathbf{C}=\alpha\mathbf{A}+(1-\alpha)\mathbf{B}.$
The PageRank row vector $\pi\in[0,1]^{|V|}$ has the property
$\pi\mathbf{C}=\pi$. Thus, the PageRank vector is the primary eigenvector of
the modified single-relational network. Moreover, $\pi$ is the stationary
probability distribution of $\mathbf{C}$. From a certain perspective, the
primary contribution of the PageRank algorithm is not in the way it is
calculated, but in how the network is modified to support a convergence to a
stationary probability distribution. PageRank has been popularized by the
Google search engine and has been used as a ranking algorithm in various
domains. Relative to the geodesic centrality algorithms presented previously,
PageRank is a more efficient way to determine a centrality score for all
vertices in a network. However, calculating the stationary probability
distribution of a network is not cheap and, for large networks, cannot be
accomplished in real time. Local rank algorithms are more useful for real-time
results in large-scale networks such as the Giant Global Graph.
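The following Python sketch follows the construction described above on a small, assumed network: rank-sink rows are made uniform, the teleportation matrix $\mathbf{B}$ is mixed in with $\alpha=0.85$, and the PageRank vector is approximated by power iteration.

```python
import numpy as np

# Sketch of the PageRank construction: rank-sink rows become uniform rows,
# then A is mixed with the teleportation matrix B. The network is illustrative.
alpha, n = 0.85, 4
E = {(0, 1), (1, 2), (2, 0), (2, 1)}           # vertex 3 is a rank-sink

A = np.zeros((n, n))
for (i, j) in E:
    A[i, j] = 1.0
out = A.sum(axis=1)
for i in range(n):
    A[i] = A[i] / out[i] if out[i] > 0 else np.full(n, 1.0 / n)

B = np.full((n, n), 1.0 / n)                   # teleportation network
C = alpha * A + (1 - alpha) * B                # alpha-biased union

pi = np.full(n, 1.0 / n)
for _ in range(1000):
    pi = pi @ C                                # converges: pi C = pi
print(pi)
```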
#### 3.3.6 Spreading Activation
Both the stationary probability distribution and PageRank are global rank
metrics. That is, they rank all vertices relative to all vertices and as such,
require a full network perspective. However, for many applications, a local
rank metric is desired. Local rank metrics rank a subset of vertices relative
to some set of source vertices. Local rank metrics have the benefit of being
faster to compute and being relative to a particular area of the network. For
large-scale networks, local rank metrics are generally more practical for
real-time queries.
Perhaps the most popular local rank metric is spreading activation. Spreading
activation is a network analysis technique that was inspired by the spreading
activation potential found in biological neural networks [18, 4, 30]. This
algorithm (and its many variants) has been used extensively in semantic
network reasoning and recommender systems. The purpose of the algorithm is to
expose, in a computationally efficient manner, those vertices which are
closest (in terms of a flow distance) to a particular set of vertices. For
example, given $i,j,k\in V$, if there exists many short recurrent paths
between vertex $i$ and vertex $j$ and not so between $i$ and $k$, then it can
be assumed that vertex $i$ is more “similar” to vertex $j$ than $k$. Thus, the
returned ranking will rank $j$ higher than $k$ relative to $i$. In order to
calculate this distance, “energy” is assigned to vertex $i$. Let
$x\in[0,1]^{|V|}$ denote the energy vector, where at the first time step all
energy is at $i$ such that $x_{i}^{1}=1.0$. The energy vector is propagated
over $\mathbf{A}$ for $\hat{t}\in\mathbb{N}^{+}$ number of steps by the
equation $x^{t+1}=x^{t}\mathbf{A}:t+1\leq\hat{t}$. Moreover, at every time
step, $x$ is decayed some amount by $\delta\in[0,1]$. At the end of the
process, the vertex that had the most energy flow through it (as recorded by
$\pi\in\mathbb{R}^{|V|}$) is considered the vertex that is most related to
vertex $i$. Algorithm 1 presents this spreading activation algorithm. The
resultant $\pi$ provides a ranking of all vertices at most $\hat{t}$ steps
away from $i$.
begin
  $t=1$
  while $t\leq\hat{t}$ do
    $\mathbf{\pi}=\mathbf{\pi}+x$
    $x=(\delta x)\mathbf{A}$
    $t=t+1$
  end
  return $\mathbf{\pi}$
end
Algorithm 1: A spreading activation algorithm.
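A direct Python transcription of Algorithm 1 might read as follows; the network, $\hat{t}$, and $\delta$ values are illustrative.

```python
import numpy as np

# Sketch of Algorithm 1: energy starts at a source vertex, is decayed by
# delta, and propagated over the row-stochastic matrix A for t_hat steps.
def spreading_activation(A, source, t_hat=5, delta=0.85):
    n = A.shape[0]
    x = np.zeros(n)
    x[source] = 1.0          # all energy at the source vertex at step 1
    pi = np.zeros(n)
    for _ in range(t_hat):
        pi = pi + x          # record the energy flowing through each vertex
        x = (delta * x) @ A  # decay and propagate
    return pi                # local ranking relative to the source vertex

A = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
print(spreading_activation(A, source=0))
```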
A class of algorithms known as “priors” algorithms perform computations
similar to the local rank spreading activation algorithm, but do so using a
stationary probability distribution [62]. Much like the PageRank algorithm
distorts the original network, priors algorithms distort the local
neighborhood of the graph and require at every time step, with some
probability, that all random walkers return to their source vertex. The long
run behavior of such systems yield a ranking biased towards (or relative to)
the source vertices and thus, can be characterized as local rank metrics.
#### 3.3.7 Assortative Mixing
The final single-relational network algorithm discussed is assortative mixing.
Assortative mixing is a network metric that determines if a network is
assortative (colloquially captured by the phrase “birds of a feather flock
together”), disassortative (colloquially captured by the phrase “opposites
attract”), or uncorrelated. An assortative mixing algorithm returns values in
$[-1,1]$, where $1$ is assortative, $-1$ is disassortative, and $0$ is
uncorrelated. Given a collection of vertices and metadata about each vertex,
it is possible to determine the assortative mixing of the network. There are
two assortative mixing algorithms: one for scalar or numeric metadata (e.g.
age, weight, etc.) and one for nominal or categorical metadata (e.g.
occupation, sex, etc.). In general, an assortative mixing algorithm can be
used to answer questions such as:
* •
Do friends in a social network tend to be the same age?
* •
Do colleagues in a coauthorship network tend to be from the same university?
* •
Do relatives in a kinship network tend to like the same foods?
Note that to calculate the assortative mixing of a network, vertices must have
metadata properties. The typical single-relational network $G=(V,E)$ does not
capture this information. Therefore, assume some other data structure that
stores metadata about each vertex.
The original publication defining the assortative mixing metric for scalar
properties used the parametric Pearson correlation of two vectors [40]. (Note
that for metadata property distributions that are not normally distributed, a
non-parametric correlation such as the Spearman $\rho$ or the Kendall $\tau$
may be the more useful correlation coefficient.) One vector is the scalar value of
the vertex property for the vertices on the tail of all edges. The other
vector is the scalar value of the vertex property for the vertices on the head
of all the edges. Thus, the length of both vectors is $|E|$ (i.e. the total
number of edges in the network). Formally, the Pearson correlation-based
assortativity is defined as
$r=\frac{|E|\sum_{i}j_{i}k_{i}-\sum_{i}j_{i}\sum_{i}k_{i}}{\sqrt{\left[|E|\sum_{i}j^{2}_{i}-\left(\sum_{i}j_{i}\right)^{2}\right]\left[|E|\sum_{i}k^{2}_{i}-\left(\sum_{i}k_{i}\right)^{2}\right]}},$
where $j_{i}$ is the scalar value of the vertex on the tail of edge $i$, and
$k_{i}$ is the scalar value of the vertex on the head of edge $i$. For nominal
metadata, the equation
$r=\frac{\sum_{p}e_{pp}-\sum_{p}a_{p}b_{p}}{1-\sum_{p}a_{p}b_{p}}$
yields a value in $[-1,1]$ as well, where $e_{pp}$ is the number of edges in
the network that have property value $p$ on both ends, $a_{p}$ is the number
of edges in the network that have property value $p$ on their tail vertex, and
$b_{p}$ is the number of edges that have property value $p$ on their head
vertex [41].
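The following Python sketch computes the scalar (Pearson) assortativity for a small, assumed network whose vertices carry an age property; the data are illustrative.

```python
import numpy as np

# Sketch of scalar assortative mixing: the Pearson correlation between the
# metadata value on the tail and on the head of every edge.
age = {"a": 25, "b": 27, "c": 60, "d": 62}
E = [("a", "b"), ("b", "a"), ("c", "d"), ("d", "c")]

tails = np.array([age[i] for i, _ in E], dtype=float)
heads = np.array([age[j] for _, j in E], dtype=float)
r = np.corrcoef(tails, heads)[0, 1]
print(r)   # close to +1: similarly aged vertices tend to be connected
```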
### 3.4 Porting Algorithms to the Multi-Relational Domain
All the aforementioned algorithms are intended for single-relational networks.
However, it is possible to map these algorithms over to the multi-relational
domain and thus, apply them to the Giant Global Graph. In the most simple
form, it is possible to ignore edge labels and simply treat all edges in a
multi-relational network as being equal. This method, of course, does not take
advantage of the rich structured data that multi-relational networks offer. If
only a particular single-relational slice of the multi-relational network is
desired (e.g. a citation network, lanl:cites), then this single-relational
component can be isolated and subjected to the previously presented single-
relational network algorithms. However, if a multi-relational network is to be
generally useful, then a method that takes advantage of the various types of
edges in the network is desired. The methods presented next define
abstract/implicit paths through a network. By doing so, a multi-relational
network can be redefined as a “semantically rich” single-relational network.
For example, in Figure 5, there are no lanl:authorCites edges (i.e. if
person $i$ wrote an article that cites the article of person $j$, then it is
true that $i$ lanl:authorCites $j$). However, this edge can be automatically
generated by making use of the lanl:authored and lanl:cites edges. In this
way, a breadth-first search or a random walk can use these automatically
generated, semantically rich edges. By using generated edges, it is possible
to treat a multi-type subset of the multi-relational network as a single-
relational network.
#### 3.4.1 A Multi-Relational Path Algebra
A path algebra is presented to map a multi-relational network to a single-
relational network in order to expose the multi-relational network to single-
relational network algorithms. The multi-relational path algebra summarized
here is discussed at length in [50]. In short, the path algebra manipulates a multi-
relational tensor, $\mathcal{A}\in\\{0,1\\}^{|V|\times|V|\times|\mathbb{E}|}$,
in order to derive a semantically-rich, weighted single-relational adjacency
matrix, $\mathbf{A}\in\mathbb{R}^{|V|\times|V|}$. Uses of the algebra can be
generally defined as
$\Delta:\\{0,1\\}^{|V|\times|V|\times|\mathbb{E}|}\rightarrow\mathbb{R}^{|V|\times|V|},$
where $\Delta$ is the path operation defined.
There are two primary operations used in the path algebra: traverse and
filter. (Other operations not discussed in this section are merge and weight;
for an in-depth presentation of the multi-relational path algebra, see [50].)
The traverse operation is denoted
$\cdot:\mathbb{R}^{|V|\times|V|}\times\mathbb{R}^{|V|\times|V|}\rightarrow\mathbb{R}^{|V|\times|V|}$ and uses
standard matrix multiplication as its function rule. Traverse is used to
“walk” the multi-relational network. The idea behind traverse is first
described using a single-relational network example. If a single-relational
adjacency matrix is raised to the second power (i.e. multiplied with itself)
then the resultant matrix denotes how many paths of length $2$ exist between
vertices [17]. That is, $\mathbf{A}^{(2)}_{i,j}$ (i.e.
$(\mathbf{A}\cdot\mathbf{A})_{i,j}$) denotes how many paths of length $2$ go
from vertex $i$ to vertex $j$. In general, for any power $p$,
$\mathbf{A}_{i,j}^{(p)}=\sum_{l\in
V}\mathbf{A}_{i,l}^{(p-1)}\cdot\mathbf{A}_{l,j}:p\geq 2.$
This property can be applied to a multi-relational tensor. If
$\mathcal{A}^{1}$ and $\mathcal{A}^{2}$ are multiplied together then the
resultant adjacency matrix denotes the number of paths of type $1\rightarrow 2$
that exist between vertices. For example, if $\mathcal{A}^{1}$ is the
authorship adjacency matrix (i.e. the lanl:authored slice), then the adjacency
matrix $\mathbf{Z}=\mathcal{A}^{1}\cdot{\mathcal{A}^{1}}^{\top}$ denotes how
many coauthorship paths exist between vertices, where $\top$ transposes the matrix
(i.e. inverts the edge directionality). In other words if Marko (vertex $i$)
and Johan (vertex $j$) have written $19$ papers together, then
$\mathbf{Z}_{i,j}=19$. However, given that the identity element
$\mathbf{Z}_{i,i}$ may be greater than $0$ (i.e. a person has coauthored with
themselves), it is important to remove all such reflexive coauthoring paths
back to the original author. In order to do this, the filter operation is
used. Given the identity matrix $\mathbf{I}$ and the all-ones matrix
$\mathbf{1}$,
$\mathbf{Z}=\left(\mathcal{A}^{1}\cdot{\mathcal{A}^{1}}^{\top}\right)\circ\left(\mathbf{1}-\mathbf{I}\right),$
yields a true coauthorship adjacency matrix, where
$\circ:\mathbb{R}^{|V|\times|V|}\times\mathbb{R}^{|V|\times|V|}\rightarrow\mathbb{R}^{|V|\times|V|}$ is the entry-
wise Hadamard matrix multiplication operation [31]. Hadamard matrix
multiplication is defined as
$\mathbf{A}\circ\mathbf{B}=\left[\begin{array}{ccc}\mathbf{A}_{1,1}\cdot\mathbf{B}_{1,1}&\cdots&\mathbf{A}_{1,m}\cdot\mathbf{B}_{1,m}\\ \vdots&\ddots&\vdots\\ \mathbf{A}_{n,1}\cdot\mathbf{B}_{n,1}&\cdots&\mathbf{A}_{n,m}\cdot\mathbf{B}_{n,m}\end{array}\right].$
In this example, the Hadamard entry-wise multiplication operation applies an
“identity filter” to
$\left(\mathcal{A}^{1}\cdot{\mathcal{A}^{1}}^{\top}\right)$ that removes all
paths back to the source vertices (i.e. back to the identity vertices) as it
sets $\mathbf{Z}_{i,i}=0$. This example demonstrates that a multi-relational
network can be mapped to a semantically-rich, single-relational network. In
the original multi-relational network, there exists no coauthoring
relationship. However, this relation exists implicitly by means of traversing
and filtering particular paths.111111While not explored in [50], it is
possible to use the path algebra to create inference rules in a manner
analogous to the Semantic Web Rule Language (SWRL) [32]. Moreover, as explored
in [50], it is possible to perform any arbitrary SPARQL query [46] using the
path algebra (save for greater-than/less-than comparisons of, and regular
expressions on, literals).
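To make the traverse and filter operations concrete, the following NumPy sketch (an illustration, not code from [50]) computes the coauthorship matrix $\mathbf{Z}$ for a small, hypothetical lanl:authored slice, restricted to a people-by-articles block for brevity, and applies the Hadamard filter $(\mathbf{1}-\mathbf{I})$ to remove the reflexive paths.

import numpy as np

# Hypothetical lanl:authored slice, restricted to a people-by-articles block:
# rows are people, columns are articles.
authored = np.array([[1, 1, 0],   # person 0 wrote articles 0 and 1
                     [1, 0, 1],   # person 1 wrote articles 0 and 2
                     [0, 1, 0]])  # person 2 wrote article 1

# Traverse: authored . authored^T counts coauthorship paths, including
# the reflexive paths back to each author.
Z = authored @ authored.T

# Filter: the Hadamard product with (1 - I) sets Z[i, i] = 0, leaving a
# true coauthorship adjacency matrix.
Z = Z * (np.ones(Z.shape) - np.eye(Z.shape[0]))

print(Z)  # Z[i, j] = number of articles coauthored by persons i and j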
The benefit of the summarized path algebra is that it can express various
abstract paths through a multi-relational tensor in an algebraic form. Thus,
given the theorems of the algebra, it is possible to simplify expressions in
order to derive more computationally efficient paths that yield the same
information. The primary drawback of the algebra is that it is a matrix
algebra that globally operates on adjacency matrix slices of the multi-
relational tensor $\mathcal{A}$. Given the size of the Giant Global Graph, it
is not practical to execute global matrix operations. However, these path
expressions can be used as an abstract path that a discrete “walker” can take
when traversing local areas of the graph. This idea is presented next.
#### 3.4.2 Multi-Relational Grammar Walkers
Previously, both the stationary probability distribution, PageRank, and
spreading activation were defined as matrix operations. However, it is
possible to represent these algorithms using discrete random walkers. In fact,
in many cases, this is the more natural representation both in terms of
intelligibility and scalability. For many, it is more intuitive to think of
these algorithms as being executed by a discrete random walker moving from
vertex to vertex recording the number of times it has traversed each vertex.
In terms of scalability, all of these algorithms can be approximated by using
fewer walkers and thus fewer computational resources. Moreover, when
represented as a swarm of discrete walkers, the algorithm is inherently
distributed as a walker is only aware of its current vertex and those vertices
adjacent to it.
For multi-relational networks, this same principle applies. However, instead
of randomly choosing an adjacent vertex to traverse to, the walker chooses a
vertex that is dependent upon an abstract path description defined for the
walker. Walkers of this form are called grammar-based random walkers [48]. A
path for a walker can be defined using any language such as the path algebra
presented previously, or SPARQL [46]. The following examples are provided in
SPARQL as it is the de facto query language for the Web of Data. Given the
coauthorship path description
$\left(\mathcal{A}^{1}\cdot{\mathcal{A}^{1}}^{\top}\right)\circ\left(\mathbf{1}-\mathbf{I}\right),$
it is possible to denote this as a local walker computation in SPARQL as
SELECT ?dest WHERE {
@ lanl:authored ?x .
?dest lanl:authored ?x .
FILTER (@ != ?dest)
}
where the symbol @ denotes the current location of the walker (i.e. a
parameter to the query) and ?dest is a collection of potential locations for
the walker to move to (i.e. the return set of the query). It is important to
note that the path algebra expression performs a global computation while the
SPARQL query representation distributes the computation to the individual
vertices (and thus, individual walkers). Given the set of resources that bind
to ?dest, the walker selects a single resource from that set and traverses to
it, at which point @ is updated to that selected resource value. This process
continues indefinitely and, in the long run, the walker’s location
probability over $V$ denotes the stationary distribution of the walker in the
Giant Global Graph according to the abstract coauthorship path description.
The SPARQL query redefines what is meant by an adjacent vertex by allowing
longer paths to be represented as single edges. Again, this is why it is
stated that such mechanisms yield semantically rich, single-relational
networks.
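A minimal sketch of a single grammar-walker step follows, assuming a hypothetical run_sparql(query) helper that submits a query against the Web of Data and returns the resources bound to ?dest; the helper and the parameterized query string are illustrative and not part of [48].

import random

def run_sparql(query):
    """Hypothetical helper: submit a SPARQL query against the Web of Data
    and return the list of resources bound to ?dest."""
    raise NotImplementedError("stand-in for a real SPARQL endpoint")

# The coauthorship path description as a parameterized SPARQL query;
# {current} plays the role of the @ symbol in the text.
COAUTHOR_QUERY = """
SELECT ?dest WHERE {{
  <{current}> lanl:authored ?x .
  ?dest lanl:authored ?x .
  FILTER (<{current}> != ?dest)
}}
"""

def walker_step(current):
    # Bind @ to the walker's current vertex and locate the "adjacent" vertices.
    destinations = run_sparql(COAUTHOR_QUERY.format(current=current))
    if not destinations:
        return current                   # rank sink: no coauthors to move to
    return random.choice(destinations)   # traverse one semantically rich edge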
In the previous coauthorship example, the grammar walker, at every vertex it
encounters, executes the same SPARQL query to locate “adjacent” vertices. In
more complex grammars, it is possible to chain together SPARQL queries into a
graph of expressions such that the walker moves not only through the Giant
Global Graph, but also through a web of SPARQL queries. Each SPARQL query
defines a different abstract edge to be traversed. This idea is diagrammed in
Figure 6, where the grammar walker “walks” both the grammar and the Giant
Global Graph.
Figure 6: A grammar walker maintains its state in the Giant Global Graph (its
current vertex location) and its state in the grammar (its current grammar
location—SPARQL query). After executing its current SPARQL query, the walker
moves to a new vertex in the Giant Global Graph as well as to a new grammar
location in the grammar.
To demonstrate a multiple SPARQL query grammar, a PageRank coauthorship
grammar is defined using two queries. The first query was defined above and
the second query is
SELECT ?dest WHERE {
?dest rdf:type lanl:Person
}
This rule serves as the “teleportation” function utilized in PageRank to
ensure a strongly connected network. Thus, if there is an $\alpha$ probability
that the first query will be executed and a $(1-\alpha)$ probability that the
second rule will be executed, then coauthorship PageRank in the Giant Global
Graph is computed. Of course, the second rule can be computationally
expensive, but it serves to elucidate the idea.121212Note that this
description is not completely accurate as “rank sinks” in the first query
(when $\texttt{?dest}=\emptyset$) will halt the process. Thus, in such cases,
when the process halts, the second query should be executed. At which point,
rank sinks are alleviated and PageRank is calculated. It is noted that the
stationary probability distribution and the PageRank of the Giant Global Graph
can be very expensive to compute if the grammar does not reduce the traverse
space to some small subset of the full Giant Global Graph. In many cases,
grammar walkers are more useful for calculating semantically meaningful
spreading activations. In this form, the Giant Global Graph can be searched
efficiently from a set of seed resources and a set of walkers that do not
iterate indefinitely, but instead, for some finite number of steps.
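Continuing the previous sketch (and reusing its run_sparql helper and COAUTHOR_QUERY), the two-query PageRank grammar can be approximated by a walker that executes the coauthorship query with probability $\alpha$ and the teleportation query otherwise; the step count and the counting loop below are illustrative assumptions.

import random
from collections import Counter

TELEPORT_QUERY = "SELECT ?dest WHERE { ?dest rdf:type lanl:Person }"

def pagerank_walk(start, alpha=0.85, steps=10_000):
    visits = Counter()
    current = start
    for _ in range(steps):
        if random.random() < alpha:
            dests = run_sparql(COAUTHOR_QUERY.format(current=current))
        else:
            dests = run_sparql(TELEPORT_QUERY)
        if not dests:                     # rank sink: force the teleportation query
            dests = run_sparql(TELEPORT_QUERY)
        current = random.choice(dests)
        visits[current] += 1
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}  # estimated coauthorship PageRank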
The geodesic algorithms previously defined in §3.3 can be executed in an
analogous fashion using grammar-based geodesic walkers [51]. The difference
between a geodesic walker and a random walker is that the geodesic walker
creates a “clone” walker each time it is adjacent to multiple vertices. This
is contrasted to the random walker, where the random walker randomly chooses a
single adjacent vertex. This cloning process implements a breadth-first
search. It is noted that geodesic algorithms have high algorithmic complexity
and thus, unless the grammar can be defined such that only a small subset of
the Giant Global Graph is traversed, then such algorithms should be avoided.
In general, the computational requirements of the algorithms in single-
relational networks also apply to multi-relational networks. However, in
multi-relational networks, given that adjacency is determined through queries,
multi-relational versions of these algorithms are more costly. Given that the
Giant Global Graph will soon grow to become the largest network instantiation
in existence, being aware of such computational requirements is necessary.
Finally, a major concern with the Web of Data as it is right now is that data
is pulled to a machine for processing. That is, by resolving an http-based
URI, an RDF subgraph is returned to the retrieving machine. This is the method
advocated by the Linked Data community [9]. Thus, walking the Giant Global
Graph requires pulling large amounts of data over the wire. For large network
traversals, instead of moving the data to the process, it may be better to
move the process to the data. By discretizing the process (e.g. using walkers)
it is possible to migrate walkers between the various servers that support the
Giant Global Graph. These ideas are being further developed in future work.
## 4 A Distributed Object Repository
The Web of Data can be interpreted as a distributed object repository—a Web of
Process. An object, from the perspective of object-oriented programming, is
defined as a discrete entity that maintains
* •
fields: properties associated with the object. These may be pointers to
literal primitives such as characters, integers, etc. or pointers to other
objects.
* •
methods: behaviors associated with the object. These are the instructions that
an object executes in order to change its state and the state of the objects
it references.
Objects are abstractly defined in source code. Source code is written in a
human readable/writeable language. An example Person class defined in the Java
language is presented below. This particular class has two fields (i.e. age
and friends) and one method (i.e. makeFriend).
public class Person {
int age;
Collection<Person> friends;
public void makeFriend(Person p) {
this.friends.add(p);
}
}
There is an important distinction between a class and an object. A class is an
abstract description of an object. Classes are written in source code.
Objects are created during the run-time of the executed code and embody the
properties of their abstract class. In this way, objects instantiate (or
realize) classes. Before objects can be created, a class described in source
code must be compiled so that the machine can more efficiently process the
code. In other words, the underlying machine has a very specific instruction
set (or language) that it uses. It is the role of the compiler to translate
source code into machine-readable instructions. Instructions can be
represented in the native language of the hardware processor (i.e. according
to its instruction set) or it can be represented in an intermediate language
that can be processed by a virtual machine (i.e. software that simulates the
behavior of a hardware machine). If a virtual machine language is used, it is
ultimately the role of the virtual machine to translate the instructions it is
processing into the instruction set used by the underlying hardware machine.
However, the computing stack does not end there. It is ultimately up to the
“laws of physics” to alter the state of the hardware machine. As the hardware
machine changes states, it alters the state of all the layers of abstraction
built atop it.
Object-oriented programming is perhaps the most widely used software
development paradigm and is part of the general knowledge of most computer
scientists and engineers. Examples of the more popular object-oriented
languages include C++, Java, and Python. Some of the benefits of object-
oriented programming are itemized below.
* •
abstraction: representing a problem intuitively as a set of interacting
objects.
* •
encapsulation: methods and fields are “bundled” with particular objects.
* •
inheritance: subclasses inherit the fields and methods of their parent
classes.
In general, as systems scale, the management of large bodies of code is made
easier through the use of object-oriented programming.
There exist many similarities between the RDFS and OWL Semantic Web ontology
languages discussed in §2 and the typical object-oriented programming
languages previously mentioned. For example, in the ontology languages, there
exists the notion of classes, their instances (i.e. objects), and instance
properties (i.e. fields).131313It is noted that the semantics of inheritance
and properties in object-oriented languages are different than those of RDFS
and OWL. Object-oriented languages are frame-based and tend to assume a closed
world [56]. Also, there does not exist the notion of sub-properties in object-
oriented languages as fields are not “first-class citizens.” However, the
biggest differentiator is that objects in object-oriented environments
maintain methods. The only computations that occur in RDFS and OWL are through
the inference rules of the logic they implement and as such are not specific
to particular classes. Even if rules are implemented for particular classes
(for example, in SWRL [32]), such rule languages are not typically Turing-
complete [54] and thus, do not support general-purpose computing.
In order to bring general-purpose, object-oriented computing to the Web of
Data, various object-oriented languages have been developed that represent
their classes and their objects in RDF. Much like rule languages such as SWRL
have an RDF encoding, these object-oriented languages do as well. However,
they are general-purpose imperative languages that can be used to perform any
type of computation. Moreover, they are object-oriented so that they have the
benefits associated with object-oriented systems itemized previously. When
human readable/writeable source code written in an RDF programming language is
compiled, it is compiled into RDF. By explicitly encoding methods in RDF—their
instruction-level data—the Web of Data is transformed into a Web of
Process.141414It is noted that the Web of Process is not specifically tied to
object-oriented languages. For example, the Ripple programming language is a
relational language where computing instructions are stored in rdf:Lists [53].
Ripple is generally useful for performing complex query and insert operations
on the Web of Process. Moreover, because programs are denoted by URIs, it is
easy to link programs together by referencing URIs. The remainder of this
section will discuss three computing models on the Web of Process:
1. 1.
partial object repository: where typical object-oriented languages utilize the
Web of Process to store object field data, not class descriptions and methods.
2. 2.
full object repository: where RDF-based object-oriented languages encode
classes, object fields, and object methods in RDF.
3. 3.
virtual machine repository: where RDF-based classes, objects, and virtual
machines are represented in the Web of Process.
### 4.1 Partial Object Repository
The Web of Process can be used as a partial object repository. In this sense,
objects represented in the Web of Process only maintain their fields, not
their methods. It is the purpose of some application represented external to
the Web of Process to store and retrieve object data from the Web of Process.
In many ways, this model is analogous to a “black board” tuple-space
[24].151515Object-spaces such as JavaSpaces are a modern object-oriented use of
a tuple-space [22]. By converting the data that is encoded in the Web of
Process to an object instance, the Web of Process serves as a database for
populating the objects of an application. It is the role of this application
to provide a mapping from the RDF encoded object to its object representation
in the application (and vice versa for storage). A simple mapping is that a
URI can denote a pointer to a particular object. The predicates of the
statements that have the URI as a subject are seen as the field names. The
objects of those statements are the values of those fields. For example, given
the Person class previously defined, an instance in RDF can be represented as
(lanl:1234, rdf:type, lanl:Person)
(lanl:1234, lanl:age, "29"^^xsd:int)
(lanl:1234, lanl:friend, lanl:2345)
(lanl:1234, lanl:friend, lanl:3456)
(lanl:1234, lanl:friend, lanl:4567),
where lanl:1234 represents the Person object and the lanl:friend property
points to three different Person instances. This simple mapping can be useful
for many types of applications. However, it is important to note that there
exists a mismatch between the semantics of RDF, RDFS, and OWL and typical
object-oriented languages. In order to align both languages it is possible
either to 1.) ignore RDF/RDFS/OWL semantics and interpret RDF as simply a data
model for representing an object or 2.) make use of complicated mechanisms to
ensure that the external object-oriented environment is faithful to such
semantics [33].
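As a concrete sketch of the simple mapping just described (option 1, ignoring RDF/RDFS/OWL semantics and treating RDF purely as a data model), the triples above can be loaded into a plain Person object; the code is illustrative and is not taken from any of the cited mappers.

# The RDF triples for the lanl:1234 instance, as (subject, predicate, object) tuples.
TRIPLES = [
    ("lanl:1234", "rdf:type", "lanl:Person"),
    ("lanl:1234", "lanl:age", 29),
    ("lanl:1234", "lanl:friend", "lanl:2345"),
    ("lanl:1234", "lanl:friend", "lanl:3456"),
    ("lanl:1234", "lanl:friend", "lanl:4567"),
]

class Person:
    def __init__(self, uri):
        self.uri = uri        # the URI is the pointer to the object
        self.age = None
        self.friends = []

def populate(uri, triples):
    # Predicates become field names; the objects of the statements become field values.
    person = Person(uri)
    for s, p, o in triples:
        if s != uri:
            continue
        if p == "lanl:age":
            person.age = o
        elif p == "lanl:friend":
            person.friends.append(o)
    return person

obj = populate("lanl:1234", TRIPLES)
print(obj.age, obj.friends)   # 29 ['lanl:2345', 'lanl:3456', 'lanl:4567']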
Various RDF-to-object mappers exist. Examples include
Schemagen161616Schemagen is currently available at
http://jena.sourceforge.net/., Elmo171717Elmo is currently available at
http://www.openrdf.org/., and ActiveRDF [42]. RDF-to-object mappers usually
provide support to 1.) automatically generate class definitions in the non-RDF
language, 2.) automatically populate these objects using RDF data, and 3.)
automatically write these objects to the Web of Process. With RDF-to-object
mapping, what is preserved in the Web of Process is the description of the
data contained in an object (i.e. its fields), not an explicit representation
of the object’s process information (i.e. its methods). However, there exist
RDF object-oriented programming languages that represent methods and their
underlying instructions in RDF.
### 4.2 Full Object Repository
The following object-oriented languages compile human readable/writeable
source code into RDF: Adenine [47], Adenosine, FABL [25], and Neno [49]. The
compilation process creates a full RDF representation of the classes defined.
The instantiated objects of these classes are also represented in RDF. Thus,
the object fields and their methods are stored in the Web of Process. Each
aforementioned RDF programming language has an accompanying virtual machine.
It is the role of the respective virtual machine to query the Web of Process
for objects, execute their methods, and store any changes to the objects back
into the Web of Process.
Given that these languages are designed specifically for an RDF environment
and in many cases, make use of the semantics defined for RDFS and OWL, the
object-oriented nature of these languages tends to differ from that of typical
languages such as C++ and Java. Multiple inheritance, properties as classes,
methods as classes, unique SPARQL-based language constructs, etc. can be found
in these languages. To demonstrate methods as classes and unique SPARQL-based
language constructs, two examples are provided from Adenosine and Neno,
respectively. In Adenosine, methods are declared irrespective of a class and
can be assigned to classes as needed.
(lanl:makeFriend, rdf:type, std:Method)
(lanl:makeFriend, std:onClass, lanl:Person)
(lanl:makeFriend, std:onClass, lanl:Dog).
Next, in Neno, it is possible to make use of the inverse query capabilities of
SPARQL. The Neno statement
rpi:josh.lanl:friend.lanl:age;
is typical in many object-oriented languages: the age of the friends of
Josh.181818Actually, this is not that typical as fields cannot denote multiple
objects in most object-oriented languages. In order to reference multiple
objects, fields tend to reference an abstract “collection” object that
contains multiple objects within it (e.g. an array). However, the statement
rpi:josh..lanl:friend.lanl:age;
is not. This statement makes use of “dot dot” notation and is called inverse
field referencing. This particular example returns the age of all the people
that are friends with Josh. That is, it determines all the lanl:Person objects
that are a lanl:friend of lanl:josh and then returns the xsd:int of their
lanl:age. This expression resolves to the SPARQL query
SELECT ?y WHERE {
?x <lanl:friend> <rpi:josh> .
?x <lanl:age> ?y }.
In RDF programming languages, there does not exist the impedance mismatch that
occurs when integrating typical object-oriented languages with the Web of
Process. Moreover, such languages can leverage many of the standards and
technologies associated with the Web of Data in general. In typical object-
oriented languages, the local memory serves as the object storage environment.
In RDF object-oriented languages, the Web of Process serves this purpose. An
interesting consequence of this model is that because compiled classes and
instantiated objects are stored in the Web of Process, RDF software can easily
reference other RDF software in the Web of Process. Instead of pointers being
$32$- or $64$-bit addresses in local memory, pointers are URIs. In this
medium, the Web of Process is a shared memory structure by which all the
world’s software and data can be represented, interlinked, and executed.
> “The formalization of computation within RDF allows active content to be
> integrated seamlessly into RDF repositories, and provides a programming
> environment which simplifies the manipulation of RDF when compared to use of
> a conventional language via an API.” [25]
A collection of the previously mentioned benefits of RDF programming are
itemized below.
* •
the language and RDF are strongly aligned: there is a more direct mapping of
the language constructs and the underlying RDF representation.
* •
compile-time type checking: unlike generic RDF APIs, which cannot guarantee the
validity of an RDF object at compile time, an RDF programming language can.
* •
unique language constructs: Web of Data technology and standards are more
easily adopted into RDF programming languages.
* •
reflection: language reflection is made easier because everything is
represented in RDF.
* •
reuse: software can reference other software by means of URIs.
There are many issues with this model that are not discussed here. For
example, issues surrounding security, data integrity, and computational
resource consumption make themselves immediately apparent. Many of these
issues are discussed, to varying degrees of detail, in the publications
describing these languages.
### 4.3 Virtual Machine Repository
In the virtual machine repository model, the Web of Process is made to behave
like a general-purpose computer. In this model, software, data, and virtual
machines are all encoded in the Web of Process. The Fhat RDF virtual machine
(RVM) is a virtual machine that is represented in RDF [49]. The Fhat RVM has
an architecture that is similar to other high-level virtual machines such as
the Java virtual machine (JVM). For example, it maintains a program counter
(e.g. a pointer to the current instruction being executed), various stacks
(e.g. operand stack, return stack, etc.), variable frames (e.g. memory for
declared variables), etc. However, while the Fhat RVM is represented in the
Web of Process, it does not have the ability to alter its state without the
support of some external process. An external process that has a reference to
a Fhat RVM can alter it by moving its program location through a collection of
instructions, by updating its stacks, by altering the objects in its heap,
etc. Again, the Web of Process (and more generally, the Web of Data) is simply
a data structure. While it can represent process information, it is up to
machines external to the Web of Process to manipulate it and thus, alter its
state.
In this computing model, a full computational stack is represented in the Web
of Process. Computing, at this level, is agnostic to the physical machines
that support its representation. The lowest-levels of access are URIs and
their RDF relations. There is no pointer to physical memory, disks, network
cards, video cards, etc. Such RDF software and RVMs exist completely in an
abstract URI and RDF address space—in the Web of Process. In this way, if an
external process that is executing an RVM stops, the RVM simply “freezes” at
its current instruction location. The state of the RVM halts. Any other
process with a reference to that RVM can continue to execute it.191919In
analogy, if the laws of physics stopped “executing” the world, the state of
the world would “freeze” awaiting the process to continue. Similarly, an RVM
represented on one physical machine can compute an object represented on
another physical machine. However, for the sake of efficiency, given that RDF
subgraphs can be easily downloaded by a physical machine, the RVMs can be
migrated between data stores—the process is moved to the data, not the data to
the process. Many issues surrounding security, data integrity, and
computational resource consumption are discussed in [49]. Currently there
exists the concept, the consequences, and a prototype of an RVM. Future work
in this area will hope to transform the Web of Process (and more generally,
the Web of Data) into a massive-scale, distributed, general-purpose computer.
## 5 Conclusion
A URI can denote anything. It can denote a term, a vertex, an instruction.
However, by itself, a single URI is not descriptive. When a URI is interpreted
within the context of other URIs and literals, it takes on a richer meaning
and is more generally useful. RDF is the means of creating this context. Both
the URI and RDF form the foundational standards of the Web of Data. From the
perspective of the domain of knowledge representation and reasoning, the Web
of Data is a distributed knowledge base—a Semantic Web. In this
interpretation, according to whichever logic is used, existing knowledge can
be used to infer new knowledge. From the perspective of the domain of network
analysis, the Web of Data is a distributed multi-relational network—a Giant
Global Graph. In this interpretation, network algorithms provide structural
statistics and can support network-based information retrieval systems. From
the perspective of the domain of object-oriented programming, the Web of Data
is a distributed object repository—a Web of Process. In this interpretation, a
complete computing environment exists that yields a general-purpose, Web-
based, distributed computer. For other domains, other interpretations of the
Web of Data can exist. Ultimately, the Web of Data can serve as a general-
purpose medium for storing and relating all the world’s data. As such,
machines can usher in a new era of global-scale data management and
processing.
## Acknowledgements
Joshua Shinavier of the Rensselaer Polytechnic Institute and Joe Geldart of
the University of Durham both contributed through thoughtful discussions and
review of the article.
## References
* [1] Jans Aasman. Allegro graph. Technical Report 1, Franz Incorporated, 2006.
* [2] Reka Albert and Albert-Laszlo Barabasi. Diameter of the world wide web. Nature, 401:130–131, September 1999.
* [3] Nicole Alexander and Siva Ravada. RDF object type and reification in the database. In Proceedings of the International Conference on Data Engineering, pages 93–103, Washington, DC, 2006. IEEE.
* [4] John R. Anderson. A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behaviour, 22:261–295, 1983.
* [5] Alex Bavelas. Communication patterns in task oriented groups. The Journal of the Acoustical Society of America, 22:271–282, 1950.
* [6] Tim Berners-Lee, Robert Cailliau, Ari Luotonen, Henrik F. Nielsen, and Arthur Secret. The World-Wide Web. Communications of the ACM, 37:76–82, 1994.
* [7] Tim Berners-Lee, Roy T. Fielding, Day Software, Larry Masinter, and Adobe Systems. Uniform Resource Identifier (URI): Generic Syntax, January 2005.
* [8] Tim Berners-Lee, James A. Hendler, and Ora Lassila. The Semantic Web. Scientific American, pages 34–43, May 2001.
* [9] Christian Bizer, Tom Heath, Kingsley Idehen, and Tim Berners-Lee. Linked data on the web. In Proceedings of the International World Wide Web Conference, Linked Data Workshop, Beijing, China, April 2008.
* [10] Johan Bollen, Herbert Van de Sompel, and Marko A. Rodriguez. Towards usage-based impact metrics: first results from the MESUR project. In Proceedings of the Joint Conference on Digital Libraries, pages 231–240, New York, NY, 2008. IEEE/ACM.
* [11] Béla Bollobás and Oliver Riordan. The diameter of a scale-free random graph. Combinatorica, 24(1):5–34, 2004.
* [12] Ulrik Brandes and Thomas Erlebach, editors. Network Analysis: Methodological Foundations. Springer, Berlin, DE, 2005.
* [13] Ulrik Brandes. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology, 25(2):163–177, 2001.
* [14] Dan Brickley and Ramanathan V. Guha. RDF vocabulary description language 1.0: RDF schema. Technical report, World Wide Web Consortium, 2004.
* [15] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1–7):107–117, 1998.
* [16] Jeen Broekstra, Arjohn Kampman, and Frank van Harmelen. Sesame: A generic architecture for storing and querying RDF. In Proceedings of the International Semantic Web Conference, Sardinia, Italy, June 2002.
* [17] Gary Chartrand. Introductory Graph Theory. Dover, 1977.
* [18] Allan M. Collins and Elizabeth F. Loftus. A spreading activation theory of semantic processing. Psychological Review, 82:407–428, 1975.
* [19] Irving M. Copi. Introduction to Logic. Macmillan Publishing Company, New York, NY, 1982.
* [20] Edsger W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.
* [21] Dieter Fensel, Frank van Harmelen, Bo Andersson, Paul Brennan, Hamish Cunningham, Emanuele Della Valle, Florian Fischer, Zhisheng Huang, Atanas Kiryakov, Tony Kyung il Lee, Lael School, Volker Tresp, Stefan Wesner, Michael Witbrock, and Ning Zhong. Towards larkc: a platform for web-scale reasoning. In Proceedings of the IEEE International Conference on Semantic Computing, Los Alamitos, CA, 2008. IEEE.
* [22] Eric Freeman, Susanne Hupfer, and Ken Arnold. JavaSpaces: Principles, Patterns, and Practice. Prentice Hall, 2008.
* [23] Linton C. Freeman. A set of measures of centrality based on betweenness. Sociometry, 40(35–41), 1977.
* [24] David Gelernter and Nicholas Carriero. Coordination languages and their significance. Communications of the ACM, 35(2):97–107, 1992.
* [25] Chris Goad. Describing computation within RDF. In Proceedings of the International Semantic Web Working Symposium, 2004.
* [26] Volker Haarslev and Ralf Möller. Racer: A core inference engine for the Semantic Web. In Proceedings of the 2nd International Workshop on Evaluation of Ontology-based Tools, pages 27–36, 2003.
* [27] Olle Häggström. Finite Markov Chains and Algorithmic Applications. Cambridge University Press, 2002.
* [28] Frank Harary and Per Hage. Eccentricity and centrality in networks. Social Networks, 17:57–63, 1995.
* [29] Patrick Hayes and Brian McBride. RDF semantics. Technical report, World Wide Web Consortium, February 2004.
* [30] Simon Haykin. Neural Networks. A Comprehensive Foundation. Prentice Hall, New Jersey, USA, 1999.
* [31] Roger Horn and Charles Johnson. Topics in Matrix Analysis. Cambridge University Press, 1994.
* [32] Ian Horrocks, Peter F. Patel-Schneider, Harold Boley, Said Tabet, Benjamin Grosof, and Mike Dean. SWRL: A Semantic Web rule language combining OWL and RuleML. Technical report, World Wide Web Consortium, May 2004.
* [33] A. Kalyanpur, D. Pastor, S. Battle, and J. Padget. Automatic mapping of OWL ontologies into java. In Proceedings of Software Engineering. - Knowledge Engineering, Banff, Canada, 2004.
* [34] Atanas Kiryakov, Damyan Ognyanov, and Dimitar Manov. OWLIM – a pragmatic semantic repository for OWL. In International Workshop on Scalable Semantic Web Knowledge Base Systems, volume LNCS 3807, pages 182–192, New York, NY, November 2005. Springer-Verlag.
* [35] Dirk Koschützki, Katharina Anna Lehmann, Dagmar Tenfelde-Podehl, and Oliver Zlotowski. Network Analysis: Methodological Foundations, volume 3418 of Lecture Notes in Computer Science, chapter Advanced Centrality Concepts, pages 83–111. Springer-Verlag, 2004.
* [36] Lee W. Lacy. OWL: Representing Information Using the Web Ontology Language. Trafford Publishing, 2005.
* [37] Paul J. Leach. A Universally Unique IDentifier (UUID) URN Namespace. Technical report, Network Working Group, 2005.
* [38] Harold J. Leavitt. Some effects of communication patterns on group performance. Journal of Abnormal and Social Psychology, 46:38–50, 1951.
* [39] Deborah L. McGuinness and Frank van Harmelen. OWL web ontology language overview, February 2004.
* [40] Mark E. J. Newman. Assortative mixing in networks. Physical Review Letters, 89(20):208701, 2002.
* [41] Mark E. J. Newman. Mixing patterns in networks. Physical Review E, 67(2):026126, Feb 2003.
* [42] Eyal Oren, Benjamin Heitmann, and Stefan Decker. ActiveRDF: Embedding semantic web data into object-oriented languages. Web Semantics: Science, Services and Agents on the World Wide Web, 6(3):191–202, 2008.
* [43] Arzucan Ozgur, Thuy Vu, Gunes Erkan, and Dragomir R. Radev. Identifying gene-disease associations using centrality on a literature mined gene-interaction network. Bioinformatics, 24(13):277–285, July 2008.
* [44] Bijan Parsia and Evren Sirin. Pellet: An OWL DL reasoner. In Proceedings of the International Semantic Web Conference, volume 3298 of Lecture Notes in Computer Science, Hiroshima, Japan, November 2004. Springer-Verlag.
* [45] Gunther Patzig. Aristotle’s Theory of The Syllogism. D. Reidel Publishing Company, Boston, Massachusetts, 1968.
* [46] Eric Prud’hommeaux and Andy Seaborne. SPARQL query language for RDF. Technical report, World Wide Web Consortium, October 2004.
* [47] Dennis Quan, David F. Huynh, Vineet Sinha, and David Karger. Adenine: A metadata programming language. Technical report, Massachusetts Institute of Technology, February 2003.
* [48] Marko A. Rodriguez. Grammar-based random walkers in semantic networks. Knowledge-Based Systems, 21(7):727–739, 2008.
* [49] Marko A. Rodriguez. Emergent Web Intelligence, chapter General-Purpose Computing on a Semantic Network Substrate. Springer-Verlag, Berlin, DE, 2009.
* [50] Marko A. Rodriguez and Joshua Shinavier. Exposing multi-relational networks to single-relational network analysis algorithms. Technical Report LA-UR-08-03931, Los Alamos National Laboratory, 2008\.
* [51] Marko A. Rodriguez and Jennifer H. Watkins. Grammar-based geodesics in semantic networks. Technical Report LA-UR-07-4042, Los Alamos National Laboratory, 2007.
* [52] Gert Sabidussi. The centrality index of a graph. Psychometrika, 31:581–603, 1966.
* [53] Joshua Shinavier. Functional programs as linked data. In 3rd Workshop on Scripting for the Semantic Web, Innsbruck, Austria, 2007.
* [54] Alan M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42(2):230–265, 1937.
* [55] W3C/IETF. URIs, URLs, and URNs: Clarifications and recommendations 1.0, September 2001.
* [56] Hai H. Wang, Natasha Noy, Alan Rector, Mark Musen, Timothy Redmond, Daniel Rubin, Samson Tu, Tania Tudorache, Nick Drummond, Matthew Horridge, and Julian Sedenberg. Frames and OWL side by side. In 10th International Protégé Conference, Budapest, Hungary, July 2007.
* [57] Pei Wang. Non-axiomatic reasoning system (version 2.2). Technical Report 75, Center for Research on Concepts and Cognition at Indiana University, 1993.
* [58] Pei Wang. From inheritance relation to non-axiomatic logic. International Journal of Approximate Reasoning, 11:281–319, 1994.
* [59] Pei Wang. Cognitive logic versus mathematical logic. In Proceedings of the Third International Seminar on Logic and Cognition, May 2004.
* [60] Pei Wang. Rigid Flexibility: The Logic Of Intelligence. Springer, 2006.
* [61] Stanley Wasserman and Katherine Faust. Social Network Analysis: Methods and Applications. Cambridge University Press, Cambridge, UK, 1994.
* [62] Scott White and Padhraic Smyth. Algorithms for estimating relative importance in networks. In Proceedings of the International Conference on Knowledge Discovery and Data Mining, pages 266–275, New York, NY, 2003. ACM Press.
|
arxiv-papers
| 2009-05-20T19:48:10 |
2024-09-04T02:49:02.798479
|
{
"license": "Public Domain",
"authors": "Marko A. Rodriguez",
"submitter": "Marko A. Rodriguez",
"url": "https://arxiv.org/abs/0905.3378"
}
|
0905.3447
|
# On convex problems in chance-constrained stochastic model predictive control
Eugenio Cinquemani cinquemani@control.ee.ethz.ch Mayank Agarwal
mayankag@stanford.edu Debasish Chatterjee chatterjee@control.ee.ethz.ch John
Lygeros lygeros@control.ee.ethz.ch Automatic Control Laboratory, Physikstrasse
3, ETH Zürich, 8092 Zürich, Switzerland Department of Electrical Engineering,
Stanford University, California, USA
###### Abstract
We investigate constrained optimal control problems for linear stochastic
dynamical systems evolving in discrete time. We consider minimization of an
expected value cost over a finite horizon. Hard constraints are introduced
first, and then reformulated in terms of probabilistic constraints. It is
shown that, for a suitable parametrization of the control policy, a wide class
of the resulting optimization problems are convex, or admit reasonable convex
approximations.
###### keywords:
Stochastic control; Convex optimization; Probabilistic constraints
††thanks: This paper was not presented at any IFAC meeting. Research supported
in part by the Swiss National Science Foundation under grant 200021-122072.
Corresponding author Eugenio Cinquemani. Tel. +41 (0)44 632 86 61; Fax +41
(0)44 632 12 11.
## 1 Introduction
This work stems from the attempt to address the optimal infinite-horizon
constrained control of discrete-time stochastic processes by a model
predictive control strategy [1, 14, 12, 13, 15, 29, 2, 4, 7]. We focus on
linear dynamical systems driven by stochastic noise and a control input, and
consider the problem of finding a control policy that minimizes an expected
cost function while simultaneously fulfilling constraints on the control input
and on the state evolution. In general, no control policy exists that
guarantees satisfaction of deterministic (hard) constraints over the whole
infinite horizon. One way to cope with this issue is to relax the constraints
in terms of probabilistic (soft) constraints [25, 26]. This amounts to
requiring that constraints will not be violated with sufficiently large
probability or, alternatively, that an expected reward for the fulfillment of
the constraints is kept sufficiently large.
Two considerations lead to the reformulation of an infinite horizon problem in
terms of subproblems of finite horizon length. First, given any bounded set
(e.g. a safe set), the state of a linear stochastic dynamical system is
guaranteed to exit the set at some time in the future with probability one
whatever the control policy. Therefore, soft constraints may turn the original
(infeasible) hard-constrained optimization problem into a feasible problem
only if the horizon length is finite. Second, even if the constraints are
reformulated so that an admissible infinite-horizon policy exists, the
computation of such a policy is generally intractable. The aim of this note is
to show that, for certain parameterizations of the policy space [21, 6, 17]
and the constraints, the resulting finite horizon optimization problem is
tractable.
An approach to infinite horizon constrained control problems that has proved
successful in many applications is model predictive control [22]. In model
predictive control, at every time $t$, a finite-horizon approximation of the
infinite-horizon problem is solved but only the first control of the resulting
policy is implemented. At the next time $t+1$, a measurement of the state is
taken, a new finite-horizon problem is formulated, the control policy is
updated, and the process is repeated in a receding horizon fashion. Under
time-invariance assumptions, the finite-horizon optimal control problem is the
same at all times, giving rise to a stationary optimal control policy that can
be computed offline.
Motivated by the previous considerations, here we study the convexity of
certain stochastic finite-horizon control problems with soft constraints.
Convexity is central for the fast computation of the solution by way of
numerical procedures, hence convex formulations [8] or convex approximations
[23, 9] of the stochastic control problems are commonly sought. However, for
many of the classes of problems considered here, tight convex approximations
are usually difficult to derive. One may argue that non-convex problems can be
tackled by randomized algorithms [28, 27, 30]. However, randomized solutions
are typically time-consuming and can only provide probabilistic guarantees. In
particular, this is critical in the case where the system dynamics or the
problem constraints are time-varying, since in that case optimization must be
performed in real-time.
Here we provide conditions for the convexity of chance constrained stochastic
optimal control problems. We derive and compare several explicit convex
approximations of chance constraints for Gaussian noise processes and for
polytopic and ellipsoidal constraint functions. Finally, we establish
conditions for the convexity of a class of expectation-type constraints that
includes standard integrated chance constraints [19, 20] as a special case.
For integrated chance constraints on Gaussian processes with polytopic
constraint functions, an explicit formulation of the optimization problem is
also derived.
The optimal constrained control problem we concentrate on is formulated in
Section 2. A convenient parametrization of the control policies and the
convexity of the objective function are discussed at this stage. Next, two
probabilistic formulations of the constraints and conditions for the convexity
of the space of admissible control policies are discussed: Section 3 is
dedicated to chance constraints, while Section 4 is dedicated to integrated
chance constraints. In Section 5, numerical simulations are reported to
illustrate and discuss the results of the paper.
## 2 Problem statement
Let $\mathbb{N}=\\{1,2,\ldots\\}$ and
$\mathbb{N}_{0}\triangleq\mathbb{N}\cup\\{0\\}$. Consider the following
dynamical model: for $t\in\mathbb{N}_{0}$,
$x(t+1)=Ax(t)+Bu(t)+w(t)$ (1)
where $x(t)\in\mathbb{R}^{n}$ is the state, $u(t)\in\mathbb{R}^{m}$ is the
control input, $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, and
$w(t)$ is a stochastic noise input defined on an underlying probability space
$(\Omega,\mathfrak{F},\mathbb{P})$. No assumption on the probability
distribution of the process $w$ is made at this stage. We assume that at any
time $t\in\mathbb{N}_{0}$, $x(t)$ is observed exactly and that, for given
$x_{0}\in\mathbb{R}^{n}$, $x(0)=x_{0}$.
Fix a horizon length $N\in\mathbb{N}$. The evolution of the system from $t=0$
through $t=N$ can be described in compact form as follows:
$\bar{x}=\bar{A}x_{0}+\bar{B}\bar{u}+\bar{D}\bar{w},$ (2)
where
$\bar{x}\triangleq\begin{bmatrix}x(0)\\ x(1)\\ \vdots\\ x(N)\end{bmatrix},\qquad\bar{u}\triangleq\begin{bmatrix}u(0)\\ u(1)\\ \vdots\\ u(N-1)\end{bmatrix},\qquad\bar{w}\triangleq\begin{bmatrix}w(0)\\ w(1)\\ \vdots\\ w(N-1)\end{bmatrix},\qquad\bar{A}\triangleq\begin{bmatrix}I_{n}\\ A\\ \vdots\\ A^{N}\end{bmatrix},$
$\bar{B}\triangleq\begin{bmatrix}0_{n\times m}&\cdots&\cdots&0_{n\times m}\\ B&\ddots&&\vdots\\ AB&B&\ddots&\vdots\\ \vdots&&\ddots&0_{n\times m}\\ A^{N-1}B&\cdots&AB&B\end{bmatrix},\qquad\bar{D}\triangleq\begin{bmatrix}0_{n}&\cdots&\cdots&0_{n}\\ I_{n}&\ddots&&\vdots\\ A&I_{n}&\ddots&\vdots\\ \vdots&&\ddots&0_{n}\\ A^{N-1}&\cdots&A&I_{n}\end{bmatrix}.$
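The stacked matrices in (2) are easy to assemble numerically. The following NumPy sketch (an illustration, not part of the paper) builds $\bar{A}$, $\bar{B}$, and $\bar{D}$ from $A$, $B$, and the horizon $N$.

import numpy as np

def lift_dynamics(A, B, N):
    """Assemble Abar, Bbar, Dbar of (2) so that xbar = Abar x0 + Bbar ubar + Dbar wbar."""
    n, m = B.shape
    Abar = np.vstack([np.linalg.matrix_power(A, t) for t in range(N + 1)])
    Bbar = np.zeros(((N + 1) * n, N * m))
    Dbar = np.zeros(((N + 1) * n, N * n))
    for t in range(1, N + 1):          # block row t (block row 0 is zero)
        for k in range(t):             # block column k
            Ap = np.linalg.matrix_power(A, t - 1 - k)
            Bbar[t * n:(t + 1) * n, k * m:(k + 1) * m] = Ap @ B
            Dbar[t * n:(t + 1) * n, k * n:(k + 1) * n] = Ap
    return Abar, Bbar, Dbar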
Let $V:\mathbb{R}^{(N+1)n+Nm}\to\mathbb{R}$ and
$\eta:\mathbb{R}^{(N+1)n+Nm}\to\mathbb{R}^{r}$, with $r\in\mathbb{N}$,
be measurable functions. We are interested in constrained optimization
problems of the following kind:
$\inf_{\bar{u}\in\mathscr{U}}\;\mathbb{E}[V(\bar{x},\bar{u})]\quad\text{subject to (2) and}\quad\eta(\bar{x},\bar{u})\leq 0$ (3)
where the expectation $\mathbb{E}[\cdot]$ is defined in terms of the
underlying probability space $(\Omega,\mathfrak{F},\mathbb{P})$, $\mathscr{U}$
is a class of causal deterministic state-feedback control policies and the
inequality in (3) is interpreted componentwise.
###### Example 1.
In the (unconstrained) linear stochastic control problem [3], $w$ is Gaussian
white noise and the aim is to minimize
$\mathbb{E}\left[\sum_{t=0}^{N-1}\left(x^{T}(t)Q(t)x(t)+u^{T}(t)R(t)u(t)\right)+x^{T}(N)Q(N)x(N)\right],$
where the matrices $Q(t)\in\mathbb{R}^{n\times n}$ and
$R(t)\in\mathbb{R}^{m\times m}$ are positive definite for all $t$, with
respect to causal feedback policies subject to the system dynamics (1). This
problem fits easily in our framework; it suffices to define
$V(\bar{x},\bar{u})=\begin{bmatrix}\bar{x}^{T}&\bar{u}^{T}\end{bmatrix}M\begin{bmatrix}\bar{x}\\\
\bar{u}\end{bmatrix},$ (4)
with
$M=\textrm{diag}\big{(}Q(0),Q(1),\ldots,Q(N),R(0),R(1),\ldots,R(N-1)\big{)}>0$
(the notation $M>0$ indicates that $M$ is a positive definite matrix). In our
framework, though, the input noise sequence may have an arbitrary correlation
structure, the cost function may be non-quadratic and, most importantly,
constraints may be present.
Standard constraints on the state and the input are also formulated easily.
For instance, sequential ellipsoidal constraints of the type
$\begin{bmatrix}x^{T}(t)&u^{T}(t)\end{bmatrix}S(t)\begin{bmatrix}x(t)\\ u(t)\end{bmatrix}\leq 1,\qquad t=0,1,\ldots,N-1,\qquad x^{T}(N)S(N)x(N)\leq 1,$
with $0<S(t)\in\mathbb{R}^{(n+m)\times(n+m)}$ for $t=0,1,\ldots,N-1$ and
$0<S(N)\in\mathbb{R}^{n\times n}$, are captured by the definition
$\eta(\bar{x},\bar{u})=\begin{bmatrix}\eta_{0}(\bar{x},\bar{u})&\eta_{1}(\bar{x},\bar{u})&\cdots&\eta_{N}(\bar{x})\end{bmatrix}^{T}$
where, for $t=0,1,\ldots,N$,
$\eta_{t}(\bar{x},\bar{u})=\begin{bmatrix}\bar{x}^{T}&\bar{u}^{T}\end{bmatrix}\Xi_{t}\begin{bmatrix}\bar{x}\\\
\bar{u}\end{bmatrix}-1,$
and each matrix $\Xi_{t}$ is immediately constructed in terms of the $S(t)$.
Our framework additionally allows for cross-constraints between states and
inputs at different times.
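For the quadratic cost of Example 1, the block-diagonal matrix $M$ in (4) can be assembled directly; the sketch below is illustrative only and assumes SciPy's block_diag is available.

import numpy as np
from scipy.linalg import block_diag

def quadratic_cost_matrix(Qs, Rs):
    """M = diag(Q(0), ..., Q(N), R(0), ..., R(N-1)) as in (4)."""
    return block_diag(*Qs, *Rs)

def quadratic_cost(xbar, ubar, M):
    # Evaluate [xbar' ubar'] M [xbar; ubar] for one realization.
    z = np.concatenate([xbar, ubar])
    return float(z @ M @ z)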
### 2.1 Feedback from the noise input
By the hypothesis that the state is observed without error, one may
reconstruct the noise sequence from the sequence of observed states and inputs
by the formula
$w(t)=x(t+1)-Ax(t)-Bu(t),\qquad t\in\mathbb{N}_{0}.$ (5)
In light of this, and following [21, 17, 6], we shall consider policies of the
form:
$u(t)=\sum_{i=0}^{t-1}G_{t,i}w(i)+d_{t},$ (6)
where the feedback gains $G_{t,i}\in\mathbb{R}^{m\times n}$ and the affine
terms $d_{t}\in\mathbb{R}^{m}$ must be chosen based on the control objective.
With this definition, the value of $u$ at time $t$ depends on the values of
$w$ up to time $t-1$. Using (5) we see that $u(t)$ is a function of the
observed states up to time $t$. It was shown in [17] that there exists a
(nonlinear) bijection between control policies in the form (6) and the class
of affine state feedback policies. That is, provided one is interested in
affine state feedback policies, parametrization (6) constitutes no loss of
generality. Of course, this choice is generally suboptimal, since there is no
reason to expect that the optimal policy is affine, but it will ensure the
tractability of a large class of optimal control problems. In compact
notation, the control sequence up to time $N-1$ is given by
$\bar{u}=\bar{G}\bar{w}+\bar{d},$ (7)
where
$\bar{G}\triangleq\begin{bmatrix}0_{m\times n}\\ G_{1,0}&0_{m\times n}\\ \vdots&\ddots&\ddots\\ G_{N-1,0}&\cdots&G_{N-1,N-2}&0_{m\times n}\end{bmatrix},\qquad\bar{d}\triangleq\begin{bmatrix}d_{0}\\ d_{1}\\ \vdots\\ d_{N-1}\end{bmatrix};$ (8)
note the lower triangular structure of $\bar{G}$ that enforces causality. The
resulting closed-loop system dynamics can be written compactly as the equality
constraint
$\bar{x}=\bar{A}x_{0}+\bar{B}(\bar{G}\bar{w}+\bar{d})+\bar{D}\bar{w}.$ (9)
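A short sketch (reusing the lift_dynamics helper from the sketch after (2)) of the disturbance-feedback parametrization (7) and the closed-loop map (9); the causality requirement of (8) is imposed by zeroing the blocks $G_{t,i}$ with $i\geq t$. The sketch is illustrative, not part of the paper.

import numpy as np

def causal_gain(G_dense, n, m, N):
    """Zero the blocks G_{t,i} with i >= t so that u(t) depends only on w(0),...,w(t-1)."""
    G = np.array(G_dense, dtype=float, copy=True)
    for t in range(N):
        G[t * m:(t + 1) * m, t * n:] = 0.0
    return G

def closed_loop_states(A, B, x0, Gbar, dbar, wbar):
    """Evaluate xbar = Abar x0 + Bbar (Gbar wbar + dbar) + Dbar wbar for one noise draw."""
    N = dbar.size // B.shape[1]
    Abar, Bbar, Dbar = lift_dynamics(A, B, N)
    ubar = Gbar @ wbar + dbar            # affine policy (7)
    return Abar @ x0 + Bbar @ ubar + Dbar @ wbar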
Let us denote the parameters of the control policy by
$\theta=(\bar{G},\bar{d})$ and write $(\bar{x}_{\theta},\bar{u}_{\theta})$ to
emphasize the dependence of $\bar{x}$ and $\bar{u}$ on $\theta$. From now on
we will consider the optimization problem
$\inf_{\theta\in\Theta}\;\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ (10)
subject to (7), (9), and (11)
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0,$ (12)
where $\Theta$ is the linear space of optimization parameters in the form (8).
###### Remark 2.
With the above parametrization of the control policy, both $\bar{u}_{\theta}$
and $\bar{x}_{\theta}$ are affine functions of the parameters $\theta$ (for
fixed $\bar{w}$) and of the process noise $\bar{w}$ (for fixed $\theta$). Most
of the results developed below rely essentially on this property. It was
noticed in [5] that a parametric causal feedback control policy with the same
property can be easily defined based on indirect observations of the state,
provided the measurement model is linear. The method enables one to extend the
results of this paper to the case of linear output feedback. For the sake of
conciseness, this extension will not be pursued here.
### 2.2 Optimal control problem with relaxed constraints
In general, no control policy can ensure that the constraint (12) is satisfied
for all outcomes of the stochastic input $\bar{w}$. In the standard LQG
setting, for instance, any nontrivial constraint on the system state would be
violated with nonzero probability. We therefore consider relaxed formulations
of the constrained optimization problem (10)–(12) of the form
$\inf_{\theta\in\Theta}\;\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ (13)
subject to (7), (9), and (14)
$\mathbb{E}[\phi\circ\eta(\bar{x}_{\theta},\bar{u}_{\theta})]\leq 0,$ (15)
where $\phi:\mathbb{R}^{r}\to\mathbb{R}^{R}$, with $R\in\mathbb{N}$, is a
convenient measurable function and the inequality is again interpreted
componentwise. For appropriate choices of $\phi$, this formulation embraces
most common probabilistic constraint relaxations, including chance constraints
(see e.g. [23]), integrated chance constraints [19, 20], and expectation
constraints (see e.g. [26]).
We are interested in the convexity of the optimization problem (13)–(15).
First we establish a general convexity result.
###### Proposition 3.
Let $(\Omega,\mathfrak{F},\mathbb{P})$ be a probability space, $\Theta$ be a
convex subset of a vector space and $\mathscr{D}\subseteq\mathbb{R}$ be
convex. Let $\gamma:\Omega\times\Theta\to\mathscr{D}$ and
$\varphi:\mathscr{D}\to\mathbb{R}$ be measurable functions and define
$J(\theta)\triangleq\mathbb{E}[\varphi\circ\gamma(\omega,\theta)].$
Assume that:
1. _(i)_
the mapping $\gamma(\omega,\cdot):\Theta\to\mathbb{R}$ is convex for almost
all $\omega\in\Omega$;
2. _(ii)_
$\varphi$ is monotone nondecreasing and convex;
3. _(iii)_
$J(\theta)$ is finite for all $\theta\in\Theta$.
Then the mapping $J:\Theta\to\mathbb{R}$ is convex.
###### Proof.
Fix a generic $\omega\in\Omega$. Since $\gamma(\omega,\theta)$ is convex in
$\theta$ and $\varphi$ is monotone nondecreasing, for any
$\theta,\theta^{\prime}\in\Theta$ and any $\alpha\in[0,1]$,
$\varphi\big{(}\gamma(\omega,\alpha\theta+(1-\alpha)\theta^{\prime})\big{)}\leq\varphi\big{(}\alpha\gamma(\omega,\theta)+(1-\alpha)\gamma(\omega,\theta^{\prime})\big{)}.$
Moreover, since $\varphi$ is convex,
$\varphi\big{(}\alpha\gamma(\omega,\theta)+(1-\alpha)\gamma(\omega,\theta^{\prime})\big{)}\leq\alpha\varphi\big{(}\gamma(\omega,\theta)\big{)}+(1-\alpha)\varphi\big{(}\gamma(\omega,\theta^{\prime})\big{)}.$
Since these inequalities hold for almost all $\omega\in\Omega$, it follows
that
$\mathbb{E}[\varphi\big{(}\gamma(\omega,\alpha\theta+(1-\alpha)\theta^{\prime})\big{)}]\leq\mathbb{E}[\alpha\varphi\big{(}\gamma(\omega,\theta)\big{)}+(1-\alpha)\varphi\big{(}\gamma(\omega,\theta^{\prime})\big{)}]=\alpha\mathbb{E}[\varphi\big{(}\gamma(\omega,\theta)\big{)}]+(1-\alpha)\mathbb{E}[\varphi\big{(}\gamma(\omega,\theta^{\prime})\big{)}],$
which proves the assertion. ∎
Assumption (iii) can be replaced by either of the following:
* (iii′)
$J(\theta)\in\mathbb{R}\cup\\{+\infty\\}$, $\forall\theta\in\Theta$.
* (iii′′)
$J(\theta)\in\mathbb{R}\cup\\{-\infty\\}$, $\forall\theta\in\Theta$.
Let us now make the following standing assumption.
###### Assumption 1.
$V(\bar{x},\bar{u})$ is a convex function of $(\bar{x},\bar{u})$ and
$\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ is finite for all
$\theta\in\Theta$.
###### Proposition 4.
Under Assumption 1, $\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ is a
convex function of $\theta$.
###### Proof.
First, note that the set $\Theta$ of admissible parameters $\theta$ is a
linear space. Let us write $\bar{w}(\omega)$, $\bar{x}_{\theta}(\omega)$, and
$\bar{u}_{\theta}(\omega)$ to express the dependence of $\bar{w}$,
$\bar{x}_{\theta}$ and $\bar{u}_{\theta}$ on the random event
$\omega\in\Omega$. Fix $\omega$ arbitrarily. Since the mapping
$\theta\mapsto\begin{bmatrix}\bar{x}_{\theta}(\omega)\\\
\bar{u}_{\theta}(\omega)\end{bmatrix}=\begin{bmatrix}\bar{A}x_{0}+\bar{B}(\bar{G}\bar{w}(\omega)+\bar{d})+\bar{D}\bar{w}(\omega)\\\
\bar{G}\bar{w}(\omega)+\bar{d}\end{bmatrix}$
is affine and the mapping $(\bar{x},\bar{u})\mapsto V(\bar{x},\bar{u})$ is
assumed convex, their composition $\theta\mapsto
V(\bar{x}_{\theta},\bar{u}_{\theta})$ is a convex function of $\theta$. Then,
the result follows from Proposition 3 with
$\gamma(\omega,\theta)=V\big{(}\bar{x}_{\theta}(\omega),\bar{u}_{\theta}(\omega)\big{)}$
and $\varphi$ equal to the identity map. ∎
By virtue of the alternative assumptions (iii′) and (iii′′) of Proposition 3,
the requirement that $\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ be
finite for all $\theta$ may be relaxed. A sufficient requirement is that there
exist no two values $\theta$ and $\theta^{\prime}$ such that
$J_{x_{0}}(\theta)=+\infty$ and $J_{x_{0}}(\theta^{\prime})=-\infty$. In
particular, the result applies to quadratic cost functions of the type (4),
with $M\geq 0$.
In general, the relaxed constraint (15) is nonconvex even if the components
$\eta_{i}:\mathbb{R}^{(N+1)n+Nm}\to\mathbb{R}$, with $i=1,\ldots,r$, of
the vector function $\eta$ are convex. In the next sections we will study the
convexity and provide convex approximations of (15) for different approaches
to probabilistic relaxation of hard constraints, i.e. for different choices of
the function $\phi$.
## 3 Chance Constraints
For a given $\alpha\in\;]0,1[$, we relax the hard constraint
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0$ by requiring that it be
satisfied with probability $1-\alpha$. Hence we address the optimization
problem
$\inf_{\theta\in\Theta}\;\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ (16)
subject to (7), (9), and (17)
$\mathbb{P}(\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0)\geq 1-\alpha.$ (18)
The smaller $\alpha$, the better the approximation of the hard constraint (12)
at the expense of a more constrained optimization problem. This problem is
obtained as a special case of Problem (13)–(15) by setting $R=1$ and defining
$\phi$ as
$\phi_{CC}(\eta)=1-\prod_{i=1}^{r}\mathbf{1}_{]-\infty,0]}(\eta_{i})-\alpha,$
where $\mathbf{1}_{]-\infty,0]}(\cdot)$ is the standard indicator function. We
now study the convexity of (18) with respect to $\theta$.
### 3.1 The fixed feedback case
First assume that the feedback term $\bar{G}$ in (7) is fixed and consider the
convexity of the optimization problem (16)–(18) with respect to the open loop
control action $\bar{d}$. That is, for a given $\bar{G}$ in the form (8), the
parameter space $\Theta$ becomes the set
$\\{(\bar{G},\bar{d}),~{}\forall\bar{d}\in\mathbb{R}^{Nm}\\}$. For the given
$\bar{G}$ and $i=1,\ldots,r$ define
$g_{i}(\bar{d},\bar{w})=\eta_{i}(\bar{A}x_{0}+\bar{B}(\bar{G}\bar{w}+\bar{d})+\bar{D}\bar{w},\bar{G}\bar{w}+\bar{d}),$
where $\eta_{i}:\mathbb{R}^{(N+1)n+Nm}\to\mathbb{R}$ is the $i$-th element of
the constraint function $\eta$. Define
$p(\bar{d})=\mathbb{P}[g_{1}(\bar{d},\bar{w})\leq 0,\ldots,g_{r}(\bar{d},\bar{w})\leq 0]$ (19)
and $\mathscr{F}_{CC}\triangleq\\{\bar{d}:~{}p(\bar{d})\geq 1-\alpha\\}$.
Observe that $\mathscr{F}_{CC}$ corresponds to the constraint set dictated by
(17)–(18) when $\bar{G}$ is fixed.
###### Proposition 5.
Assume that $\bar{w}$ has a continuous distribution with log-concave
probability density and that, for $i=1,\ldots,r$,
$g_{i}:\mathbb{R}^{Nm+Nn}\to\mathbb{R}$ is quasi-convex. Then, for any value
of $\alpha\in]\,0,1[$, $\mathscr{F}_{CC}$ is convex. As a consequence, under
Assumption 1 and for any $\alpha\in]\,0,1[$, the optimization problem
$\displaystyle\inf_{\theta\in\Theta}\quad$
$\displaystyle\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$
$\displaystyle\mathrm{subject~{}to}\quad$
$\displaystyle\eqref{e:ubardef},~{}\eqref{eq:cloopsys}~{}\mathrm{and}~{}p(\bar{d})\geq
1-\alpha$
is convex.
###### Proof.
It follows from [24, Theorem 10.2.1] that (19) is a log-concave function of
$\bar{d}$, i.e. the mapping $\bar{d}\mapsto\log p(\bar{d})$ is concave. Since
$\log$ is a monotone increasing function, we may write that
$\mathscr{F}_{CC}=\\{\bar{d}:~{}\log p(\bar{d})\geq\log(1-\alpha)\\}$. Hence,
$\mathscr{F}_{CC}$ is a convex set. The convexity of the optimization problem
follows readily from Assumption 1. ∎
Among others, Gaussian, exponential and uniform distributions are continuous
with log-concave probability density. As for the functions $g_{i}$, one case
of interest where the assumptions of Proposition 5 are fulfilled is when
$\eta(\bar{x},\bar{u})$ is affine in $\bar{x}$ and $\bar{u}$. This is the case
of polytopic constraints, which will be treated extensively in the next
section. However, this convexity result cannot be applied to the
ellipsoidal constraints treated subsequently in Section 3.3, nor can it be
extended to the general constraint (18) with both $\bar{d}$ and $\bar{G}$
varying. Loosely speaking, the latter is because the functions $g_{i}$ are not
jointly quasi-convex in $\bar{w}$ and $\bar{G}$. In the next sections
we will develop convex conservative approximations of (18) for various
definitions of $\eta$.
### 3.2 Polytopic Constraint Functions
Throughout the rest of Section 3 we shall rely on the following assumption.
###### Assumption 2.
$\bar{w}$ is a Gaussian random vector with mean zero and covariance matrix
$\bar{\Sigma}>0$, denoted by $\bar{w}\sim\mathcal{N}(0,\bar{\Sigma})$.
Polytopic constraint functions
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})=T^{x}\bar{x}_{\theta}+T^{u}\bar{u}_{\theta}-y,$
(20)
where $T^{x}\in\mathbb{R}^{r\times(N+1)n},T^{u}\in\mathbb{R}^{r\times Nm}$,
and $y\in\mathbb{R}^{r}$, describe one of the most common types of
constraints. In light of (7) and (9),
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})=h_{\theta}+P_{\theta}\bar{w},$ (21)
where $h_{\theta}=(T^{x}\bar{A}x_{0}-y)+(T^{x}\bar{B}+T^{u})\bar{d},$ and
$P_{\theta}=(T^{x}\bar{D}+(T^{x}\bar{B}+T^{u})\bar{G})$. It is thus apparent
that $\eta(\bar{x}_{\theta},\bar{u}_{\theta})$ is affine in the parameters
$\theta$. Yet, in general, constraint (18) is nonconvex. We now describe three
approaches that replace constraint (18) with convex conservative
constraints.
#### 3.2.1 Approximation via constraint separation
Constraint (18) requires us to satisfy
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0$ with probability of at least
$1-\alpha$. Here $\eta\in\mathbb{R}^{r}$ and the inequality $\eta\leq 0$ is
interpreted componentwise. One idea is to select coefficients
$\alpha_{i}\in(0,1)$ such that $\sum_{i=1}^{r}\alpha_{i}=\alpha$ and to
satisfy the inequalities $\mathbb{P}(\eta_{i}\leq 0)\geq 1-\alpha_{i}$, with
$i=1,\ldots,r$. Note that this choice is obtained in (15) by setting $R=r$ and
$\phi_{i}(\eta)=1-\mathbf{1}_{]-\infty,0]}(\eta_{i})-\alpha_{i}$ where, for
$i=1,\ldots,r$, $\phi_{i}:\mathbb{R}^{r}\to\mathbb{R}$ denotes the $i$-th
component of function $\phi$.
Let $h_{i,\theta}$ and $P_{i,\theta}^{T}$ be the $i$-th entry of $h_{\theta}$
and the $i$-th row of $P_{\theta}$, respectively, and let
$\bar{\Sigma}^{\frac{1}{2}}$ be a symmetric real matrix square root of
$\bar{\Sigma}$.
###### Proposition 6.
Let $\alpha_{i}\in]0,1[$. Under Assumption 2, the constraint
$\mathbb{P}(\eta_{i}(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0)\geq
1-\alpha_{i},$
with $\eta$ defined as in (20), is equivalent to the second-order cone
constraint in the parameters $\theta\in\Theta$
$h_{i,\theta}+\beta_{i}\left\lVert{\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}}\right\rVert\leq
0$
where $\beta_{i}=\sqrt{2}\operatorname{erf}^{-1}(1-2\alpha_{i})$ and
$\operatorname{erf}^{-1}(\cdot)$ is the inverse of the standard error function
$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}\mathrm{e}^{-u^{2}}\>\mathrm{d}u$.
As a consequence, under Assumption 1 and if
$\alpha_{1}+\ldots+\alpha_{r}=\alpha$, the problem
$\displaystyle\inf_{\theta\in\Theta}\quad$
$\displaystyle\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$
$\displaystyle\mathrm{subject~{}to}\quad$
$\displaystyle\eqref{e:ubardef},~{}\eqref{eq:cloopsys}~{}\mathrm{and}~{}h_{i,\theta}+\beta_{i}\left\lVert{\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}}\right\rVert\leq
0,~{}i=1,\ldots,r$
is a convex conservative approximation of Problem (16)–(18).
###### Proof.
From Eq. (21) we can write $\eta_{i}=h_{i,\theta}+P_{i,\theta}^{T}\bar{w}$.
Since $\bar{w}\sim\mathcal{N}(0,\bar{\Sigma})$, $\eta_{i}$ is also Gaussian
with distribution
$\mathcal{N}(h_{i,\theta},P_{i,\theta}^{T}\bar{\Sigma}P_{i,\theta})$. It is
easily seen that, for any scalar Gaussian random variable $X$ with
distribution $\mathcal{N}(\mu,\sigma^{2})$,
$\mathbb{P}(X\leq 0)\geq 1-\alpha\iff\mu+\beta\sigma\leq 0,$
where $\beta=\sqrt{2}\operatorname{erf}^{-1}(1-2\alpha)$. Hence the constraint
$\mathbb{P}(\eta_{i}(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0)\geq
1-\alpha_{i}$ is equivalent to
$h_{i,\theta}+\beta_{i}\left\lVert{\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}}\right\rVert\leq
0$, where $\beta_{i}=\sqrt{2}\operatorname{erf}^{-1}(1-2\alpha_{i})$. Since
$h_{i,\theta}$ and $P_{i,\theta}$ are both affine in the parameters
$\theta=(\bar{G},\bar{d})$, the above constraint is a second-order cone
constraint in $\theta$. Finally, by the union bound,
$\mathbb{P}\big{(}\exists\,i:\eta_{i}(\bar{x}_{\theta},\bar{u}_{\theta})>0\big{)}\leq\sum_{i=1}^{r}\mathbb{P}(\eta_{i}(\bar{x}_{\theta},\bar{u}_{\theta})>0)\leq\sum_{i=1}^{r}\alpha_{i}=\alpha$,
so this choice guarantees that
$\mathbb{P}(\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0)\geq 1-\alpha$. ∎
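As a quick numerical illustration of the scalar reduction used in this proof (not part of the original development; the values of $\mu$, $\sigma$ and $\alpha$ below are arbitrary toy values), one may verify the equivalence directly, e.g. in Python:

```python
# Sanity check of the scalar reduction behind Proposition 6 (illustrative sketch;
# mu, sigma, alpha are arbitrary toy values chosen for the example).
import numpy as np
from scipy.special import erfinv
from scipy.stats import norm

alpha = 0.1
beta = np.sqrt(2.0) * erfinv(1.0 - 2.0 * alpha)      # the beta_i of Proposition 6

mu, sigma = -0.5, 0.3                                 # X ~ N(mu, sigma^2)
p_exact = norm.cdf(0.0, loc=mu, scale=sigma)          # P(X <= 0)
soc_ok = mu + beta * sigma <= 0.0                     # second-order cone surrogate

print(f"P(X <= 0) = {p_exact:.4f} (target >= {1 - alpha})")
print(f"mu + beta*sigma = {mu + beta * sigma:+.4f} -> satisfied: {soc_ok}")
# The chance constraint P(X <= 0) >= 1 - alpha holds exactly when mu + beta*sigma <= 0.
```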
#### 3.2.2 Approximation via confidence ellipsoids
The approach of Section 3.2.1 may be too conservative since the probability of
a union of events is approximated by the sum of the probabilities of the
individual events. Alternatively, one can approximate the probability of the
joint event at once. Constraint (18) with $\eta(\bar{x}_{\theta},\bar{u}_{\theta})$
as in (21) restricts the choice of $P_{\theta}$ and $h_{\theta}$ to be such
that, with a probability of $1-\alpha$ or more, the realization of random
vector $\eta(\bar{x}_{\theta},\bar{u}_{\theta})$ lies in the negative orthant
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0$. In general, it is difficult
to describe this constraint explicitly since it involves the integration of
the probability density of $\eta$ over the negative orthant. However, an
explicit approximation of the constraint can be computed by ensuring that the
$100(1-\alpha)\%$ confidence ellipsoid of $\eta$ is contained in the negative
orthant. Fulfilling this requirement automatically implies that the
probability of $\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0$ is strictly
greater than $1-\alpha$.
Since $\bar{w}\sim\mathcal{N}(0,\bar{\Sigma})$, it follows that
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})=h_{\theta}+P_{\theta}\bar{w}\sim\mathcal{N}(h_{\theta},\bar{\Sigma}_{\theta})$,
with $\bar{\Sigma}_{\theta}:=P_{\theta}\bar{\Sigma}P_{\theta}^{T}$. Consider
the case where $\bar{\Sigma}_{\theta}$ is invertible. Define the
$r$-dimensional ellipsoid
$\mathcal{E}(h_{\theta},\bar{\Sigma}_{\theta},\beta)=\bigl{\\{}\eta\in\mathbb{R}^{r}:\;(\eta-
h_{\theta})^{T}\bar{\Sigma}_{\theta}^{-1}(\eta-
h_{\theta})\leq\beta^{2}\bigr{\\}},$ (22)
where $\beta>0$ is a parameter specifying the size of the ellipsoid. Notice
that, in general, $\bar{\Sigma}_{\theta}$ is invertible when $r\leq Nn$ (i.e.
the number of constraints does not exceed the total dimension of the process
noise). If $r>Nn$ then, since there are only $Nn$ independent random variables
in the optimization problem, the following result still holds with $Nn$ in place of
$r$.
###### Proposition 7.
Let $\alpha\in\;]0,1[$. Under Assumption 2, the constraint
$\mathbb{P}[\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0]\geq 1-\alpha$ with
$\eta$ defined as in (20) is conservatively approximated by the constraint
$\mathcal{E}\big{(}h_{\theta},\bar{\Sigma}_{\theta},\beta(\alpha)\big{)}\subset(-\infty,0\,]\,^{r},$
(23)
where $\beta(\alpha)=\sqrt{F^{-1}(1-\alpha)}$ and $F(\cdot)$ is the
probability distribution function of a $\chi^{2}$ random variable with $r$
degrees of freedom. Moreover, (23) can be reformulated as the set of second-
order cone constraints
$h_{i,\theta}+\beta(\alpha)\left\lVert{\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}}\right\rVert\leq
0,\quad i=1,\ldots,r.$ (24)
As a consequence, under Assumption 1, the problem
$\displaystyle\inf_{\theta\in\Theta}\quad$
$\displaystyle\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$
$\displaystyle\mathrm{subject~{}to}\quad$
$\displaystyle\eqref{e:ubardef},~{}\eqref{eq:cloopsys}~{}\mathrm{and}~{}\eqref{eq:equivconfellip}$
is a convex conservative approximation of Problem (16)–(18).
###### Proof.
Since
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})\sim\mathcal{N}(h_{\theta},\bar{\Sigma}_{\theta})$,
the random variable
$\big{(}\eta(\bar{x}_{\theta},\bar{u}_{\theta})-h_{\theta}\big{)}^{T}\bar{\Sigma}_{\theta}^{-1}\big{(}\eta(\bar{x}_{\theta},\bar{u}_{\theta})-h_{\theta}\big{)}$
is $\chi^{2}$ with $r$ degrees of freedom. Then, choosing $\beta$ such that
$F(\beta^{2})=1-\alpha$ guarantees that
$\mathcal{E}\big{(}h_{\theta},\bar{\Sigma}_{\theta},\beta(\alpha)\big{)}$ is
the $100(1-\alpha)\%$ confidence ellipsoid for
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})$. Finally, under (23),
$\mathbb{P}[\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq
0]\geq\mathbb{P}[\eta(\bar{x}_{\theta},\bar{u}_{\theta})\in\mathcal{E}\big{(}h_{\theta},\bar{\Sigma}_{\theta},\beta(\alpha)\big{)}]=1-\alpha$,
which proves the first claim. To prove the second claim, note that (22) can
alternatively be represented as
$\mathcal{E}\big{(}h_{\theta},\bar{\Sigma}_{\theta},\beta\big{)}=\bigl{\\{}\eta\in\mathbb{R}^{r}\>\big{|}\>\eta=h_{\theta}+M_{\theta}u,\;\left\lVert{u}\right\rVert\leq
1\bigr{\\}},$ (25)
where $M_{\theta}=\beta\bar{\Sigma}_{\theta}^{\frac{1}{2}}$. Since $\eta\leq
0$ if and only if $e_{i}^{T}\eta\leq 0,\>i=1,\cdots,r$, where the $e_{i}$
denote the standard basis vectors in $\mathbb{R}^{r}$, we may rewrite (23) as
$e_{i}^{T}(h_{\theta}+M_{\theta}u)\leq
0\quad\forall\left\lVert{u}\right\rVert\leq 1,$
or equivalently
$\displaystyle\sup_{\left\lVert{u}\right\rVert\leq
1}e_{i}^{T}(h_{\theta}+M_{\theta}u)\leq 0$
for $i=1,\cdots,r$. For each $i$, the supremum is attained with
$u=M_{\theta}^{T}e_{i}/\left\lVert{M_{\theta}^{T}e_{i}}\right\rVert$;
therefore, the above is equivalent to
$e_{i}^{T}h_{\theta}+\left\lVert{M_{\theta}^{T}e_{i}}\right\rVert\leq 0$.
Clearly, $e_{i}^{T}h_{\theta}=h_{i,\theta}$. Moreover, since
$M_{\theta}=\beta\bar{\Sigma}_{\theta}^{\frac{1}{2}}$, we have
$\left\lVert{M_{\theta}^{T}e_{i}}\right\rVert=\beta\sqrt{e_{i}^{T}\bar{\Sigma}_{\theta}e_{i}}=\beta\sqrt{e_{i}^{T}P_{\theta}\bar{\Sigma}P_{\theta}^{T}e_{i}}=\beta\left\lVert{\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}}\right\rVert.$
Therefore, constraint (23) reduces to (24). Since the variables $h_{i,\theta}$
and $P_{i,\theta}$ are affine in the original parameters
$\theta=(\bar{G},\bar{d})$, this is an intersection of second order cone
constraints. As a result, under the additional Assumption 1, the optimization
of $\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ with (24) in place of
(18) is a convex conservative approximation of (16)–(18). ∎
#### 3.2.3 Comparison of constraint separation and confidence ellipsoid
methods
In light of Propositions 6 and 7, both approaches lead to formulating
constraints of the form
$\mu_{i}+\beta\sigma_{i}\leq 0,$
with $i=1,\ldots,r$, where $\mu_{i}$ and $\sigma_{i}$ are the mean and the
standard deviation of the scalar random variables
$\eta_{i}(\bar{x}_{\theta},\bar{u}_{\theta})$, and smaller values of the
constant $\beta$ correspond to less conservative approximations of the
original chance constraint. For a given value of $\alpha$, the value of
$\beta$ depends on the number of constraints $r$ in a way that differs in the
two cases. In the confidence ellipsoid method, in particular, the value of
$\beta$ is determined by $Nn$ (the total dimension of the process noise) when
$r\geq Nn$. In Figure 1, we compare the growth of $\beta$ in the two
approaches under the assumption that $r$ grows linearly with the horizon
length $N$. (For the constraint separation method we choose
$\alpha_{1}=\ldots=\alpha_{r}=\alpha/r$, so that the value of $\beta$ is the
same for all constraints.)
Figure 1: Values of $\beta$ as a function of the horizon length $N$ for the
ellipsoidal method (dashed line) and the constraint separation method (solid
lines). It is assumed that $r=N\cdot\bar{r}\cdot(n+m)$, where $n=5$ is the
dimension of the system state and of the process noise, $m=2$ is the dimension
of the input $u$, and $\bar{r}$ is the number of constraints per stage; we
take $\bar{r}=2,10,100,1000$. The value of $\beta$ is invariant with respect
to $\bar{r}$ (single dashed line) for the ellipsoidal method, while it
increases with $\bar{r}$ (multiple solid lines) for the constraint separation
method.
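The following short Python sketch (our own illustration, reproducing only the qualitative behaviour of Figure 1; the dimensions $n$, $m$ and the per-stage constraint counts $\bar{r}$ are the ones quoted in the caption) shows how the two values of $\beta$ can be computed and compared:

```python
# Growth of beta with the number of constraints for the two approximation methods
# (illustrative sketch reproducing the comparison of Figure 1; names are ours).
import numpy as np
from scipy.special import erfinv
from scipy.stats import chi2

alpha, n, m = 0.1, 5, 2

def beta_separation(r):
    # constraint separation with alpha_i = alpha / r for every constraint
    return np.sqrt(2.0) * erfinv(1.0 - 2.0 * alpha / r)

def beta_ellipsoid(N):
    # confidence-ellipsoid method: beta(alpha) = sqrt(F^{-1}(1 - alpha)),
    # chi^2 quantile with min(r, N*n) degrees of freedom (here r > N*n)
    return np.sqrt(chi2.ppf(1.0 - alpha, df=N * n))

N = 50
for rbar in (2, 10, 100, 1000):
    r = N * rbar * (n + m)
    print(f"rbar={rbar:4d}, N={N}: beta_sep={beta_separation(r):.2f}, "
          f"beta_ell={beta_ellipsoid(N):.2f}")
```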
The increase of $\beta$ is quite rapid in the confidence ellipsoid method,
which is only effective for a small number of constraints. An explanation of
this phenomenon is provided by the following fact, better known under the
name of (classical) “concentration of measure” inequalities; proofs may be
found in, e.g., [10].
###### Proposition 8.
Let $\Gamma_{h,\Sigma^{\prime}}$ be the $r$-dimensional Gaussian measure with
mean $h$ and (nonsingular) covariance $\Sigma^{\prime}$, i.e.,
$\Gamma_{h,\Sigma^{\prime}}(\mathrm{d}\xi)=\frac{1}{(2\pi)^{r/2}\sqrt{\det\Sigma^{\prime}}}\exp\biggl{(}-\frac{1}{2}\left\langle{\xi-h},{\Sigma^{\prime-1}(\xi-h)}\right\rangle\biggr{)}\mathrm{d}\xi.$
Then for $\varepsilon\in\;]0,1[$,
1. (i)
$\displaystyle{\Gamma_{h,\Sigma^{\prime}}\biggl{(}\biggl{\\{}\xi\;\bigg{|}\left\lVert{\xi-h}\right\rVert_{\Sigma^{\prime-1}}>\sqrt{\frac{r}{1-\varepsilon}}\biggr{\\}}\biggr{)}\leq\mathrm{e}^{-\frac{r\varepsilon^{2}}{4}}}$;
2. (ii)
$\displaystyle{\Gamma_{h,\Sigma^{\prime}}\bigl{(}\bigl{\\{}\xi\big{|}\left\lVert{\xi-h}\right\rVert_{\Sigma^{\prime-1}}<\sqrt{r(1-\varepsilon)}\bigr{\\}}\bigr{)}\leq\mathrm{e}^{-\frac{r\varepsilon^{2}}{4}}}$.
The above proposition states that as the dimension $r$ of the Gaussian measure
increases, its mass concentrates in an ellipsoidal shell of ‘mean-size’
$\sqrt{r}$. It readily follows that since
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})$ is an $r$-dimensional Gaussian random
vector, its mass concentrates around a shell of size $\sqrt{r}$. Note that the
bounds corresponding to (i) and (ii) of Proposition 8 in the case of
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})$ are independent of the optimization
parameters $\theta$; of course the relative sizes of the confidence ellipsoids
change with $\theta$ (because the mean and the covariance of
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})$ depend on $\theta$), but Proposition
8 shows that the size of the confidence ellipsoids grows quite rapidly with the
dimension of the noise and the length of the optimization horizon. Intuitively
one would expect the ellipsoidal constraint approximation method to be more
effective than the cruder approximation by constraint separation. Figure 1 and
Proposition 8 however suggest that this is not the case in general; for large
numbers of constraints (e.g. longer MPC prediction horizons) the constraint
separation method is the less conservative of the two.
#### 3.2.4 Approximation via expectations
For any $r$-dimensional random vector $\eta$, we have
$1-\mathbb{P}\bigl{(}\eta\leq
0\bigr{)}=\mathbb{E}\left[1-\prod_{i=1}^{r}\mathbf{1}_{]-\infty,0]}(\eta_{i})\right]$.
Using this fact one can arrive at conservative convex approximations of the
chance-constraint (18) by replacing the function in the expectation with
appropriate approximating functions. For $t_{i}>0$, $i=1,\ldots,r$, consider
$\varphi(\eta)=\sum_{i=1}^{r}{\exp(t_{i}\eta_{i})}.$
###### Lemma 9.
For any $r$-dimensional random vector $\eta$, $\mathbb{E}[\varphi(\eta)]\geq
1-\mathbb{P}[\eta\leq 0]$.
###### Proof.
For every fixed value of $\eta$, it holds that $\varphi(\eta)\geq
1-\prod_{i=1}^{r}\mathbf{1}_{]-\infty,0]}(\eta_{i})$. Hence
$\mathbb{E}[\varphi(\eta)]\geq\mathbb{E}[1-\prod_{i=1}^{r}\mathbf{1}_{]-\infty,0]}(\eta_{i})]=1-\mathbb{P}[\eta\leq
0]$. ∎
###### Proposition 10.
Under Assumption 2, for $\eta$ defined as in (20), it holds that
$\mathbb{E}\big{[}\varphi\big{(}\eta(\bar{x}_{\theta},\bar{u}_{\theta})\big{)}\big{]}=\sum_{i=1}^{r}\exp\Big{(}t_{i}h_{i,\theta}+\frac{t_{i}^{2}}{2}||\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}||^{2}\Big{)}.$
As a consequence, under Assumption 1 and for any choice of $t_{i}>0$,
$i=1,\ldots,r$, the problem
$\displaystyle\inf_{\theta\in\Theta}\quad$
$\displaystyle\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$
$\displaystyle\mathrm{subject~{}to}\quad$
$\displaystyle\eqref{e:ubardef},~{}\eqref{eq:cloopsys}~{}\mathrm{and}~{}\sum_{i=1}^{r}\exp\Big{(}t_{i}h_{i,\theta}+\frac{t_{i}^{2}}{2}||\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}||^{2}\Big{)}\leq\alpha$
is a convex conservative approximation of Problem (16)–(18).
###### Proof.
It is easily seen that, for any $r$-dimensional Gaussian random vector $\eta$
with mean $\mu$ and covariance matrix $\Sigma^{\prime}$, and any vector
$c\in\mathbb{R}^{r}$,
$\mathbb{E}\bigl{[}\exp(c^{T}\eta)\bigr{]}=\exp\bigl{(}c^{T}\mu+\frac{1}{2}c^{T}\Sigma^{\prime}c\bigr{)}$.
Let us now write $\eta_{\theta}$ in place of
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})$ for shortness. By the hypotheses on
$\bar{w}$, in the light of (21), $\eta_{\theta}$ is Gaussian with mean
$h_{\theta}$ and covariance $P_{\theta}\bar{\Sigma}P_{\theta}^{T}$. Then, for
a vector $c_{i}\in\mathbb{R}^{r}$ with zero entries except for a coefficient
$t_{i}$ in the $i$-th position,
$\mathbb{E}[\exp(c_{i}^{T}\eta_{\theta})]=\exp\Big{(}c_{i}^{T}h_{\theta}+\frac{1}{2}c_{i}^{T}P_{\theta}\bar{\Sigma}P_{\theta}^{T}c_{i}\Big{)}=\exp\Big{(}t_{i}h_{i,\theta}+\frac{t_{i}^{2}}{2}P_{i,\theta}^{T}\bar{\Sigma}P_{i,\theta}\Big{)}.$
Summing up for $i=1,\ldots,r$ yields the first result. In order to prove the
second statement, note that
$\displaystyle\mathbb{E}\bigl{[}\varphi(\eta_{\theta})\bigr{]}\leq\alpha\iff$
$\displaystyle\log\mathbb{E}\bigl{[}\varphi(\eta_{\theta})\bigr{]}\leq\log\alpha$
$\displaystyle\iff$
$\displaystyle\log\Biggl{(}\sum_{i=1}^{r}{\exp\Bigl{(}t_{i}h_{i,\theta}+\frac{t_{i}^{2}}{2}\left\lVert{\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}}\right\rVert^{2}\Bigr{)}}\Biggr{)}\leq\log(\alpha).$
For each $i$,
$t_{i}h_{i,\theta}+\frac{t_{i}^{2}}{2}\left\lVert{\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}}\right\rVert^{2}$
is a convex function of the optimization parameters $\theta$. By [11, Example
3.14], given convex functions $g_{1}(\theta),\ldots,g_{r}(\theta)$, the
function
$f(\theta)=\log\bigl{(}e^{g_{1}(\theta)}+\cdots+e^{g_{r}(\theta)}\bigr{)}$ is
itself convex. It follows that
$\log\mathbb{E}\bigl{[}\varphi(\eta_{\theta})\bigr{]}$ is a convex function of
$\theta$ and that the constraint set
$\\{\theta\in\Theta:~{}\log\mathbb{E}[\varphi(\eta_{\theta})]\leq\log\alpha\\}=\\{\theta\in\Theta:~{}\mathbb{E}[\varphi(\eta_{\theta})]\leq\alpha\\}$
is convex. Finally, from Lemma 9, if
$\mathbb{E}[\varphi(\eta_{\theta})]\leq\alpha$ then
$\mathbb{P}[\eta_{\theta}\leq 0]\geq 1-\mathbb{E}[\varphi(\eta_{\theta})]\geq
1-\alpha$. Together with Assumption 1, this implies that the optimization
problem with constraint
$\sum_{i=1}^{r}\exp\Big{(}t_{i}h_{i,\theta}+\frac{t_{i}^{2}}{2}||\bar{\Sigma}^{\frac{1}{2}}P_{i,\theta}||^{2}\Big{)}\leq\alpha$
in place of (18) is a convex conservative approximation of (16)–(18). ∎
This convex approximation of Problem (16)–(18) is obtained in (13)–(15) by
setting $R=1$ and $\phi(\eta)=\varphi(\eta)-\alpha$. The result that the
approximation is conservative relies essentially on the fact that, with this
choice, $\phi(\eta)\geq\phi_{CC}(\eta)$ $\forall\eta\in\mathbb{R}^{r}$ (see
Lemma 9). This result can be generalized: Given any two functions
$\phi^{\prime},\phi^{\prime\prime}:\mathbb{R}^{r}\to\mathbb{R}$ such that
$\phi^{\prime}(\eta)\geq\phi^{\prime\prime}(\eta)$
$\forall\eta\in\mathbb{R}^{r}$, constraint (15) with $\phi=\phi^{\prime}$ is
more conservative than the same constraint with $\phi=\phi^{\prime\prime}$.
This type of analysis can be exploited to compare different probabilistic
constraints and to minimize the conservatism of the convex approximations with
respect to the tunable parameters, but this is not fully pursued here.
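As an illustration (a minimal sketch with arbitrary toy data, not part of the original development), the closed-form expectation of Proposition 10 and the bound of Lemma 9 can be checked by Monte Carlo simulation:

```python
# Monte Carlo check of the closed-form expectation of Proposition 10 and of the
# bound of Lemma 9 (illustrative sketch with small random toy data; names are ours).
import numpy as np

rng = np.random.default_rng(0)
r, nw = 3, 6                                  # number of constraints, dim of wbar
h = 0.3 * rng.normal(size=r) - 2.0            # h_theta (toy values)
P = 0.2 * rng.normal(size=(r, nw))            # P_theta (toy values)
t = np.array([1.0, 2.0, 0.5])                 # tunable coefficients t_i > 0

# Closed form: sum_i exp(t_i h_i + t_i^2/2 * ||P_i||^2)   (here Sigma = I)
closed = np.sum(np.exp(t * h + 0.5 * t**2 * np.sum(P**2, axis=1)))

# Monte Carlo estimate of E[sum_i exp(t_i eta_i)] with eta = h + P wbar, wbar ~ N(0, I)
w = rng.normal(size=(200_000, nw))
eta = h + w @ P.T
mc = np.mean(np.sum(np.exp(t * eta), axis=1))
viol = 1.0 - np.mean(np.all(eta <= 0, axis=1))

print(f"closed form = {closed:.4f}, Monte Carlo = {mc:.4f}")
print(f"1 - P(eta <= 0) = {viol:.4f}  <=  E[phi(eta)] = {closed:.4f}   (Lemma 9)")
```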
### 3.3 Ellipsoidal Constraint Functions
Consider the constraint function
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})=\left(\begin{bmatrix}\bar{x}_{\theta}\\\
\bar{u}_{\theta}\end{bmatrix}-\delta\right)^{T}\Xi\left(\begin{bmatrix}\bar{x}_{\theta}\\\
\bar{u}_{\theta}\end{bmatrix}-\delta\right)-1,$
where $\Xi\geq 0$ and $\delta$ are given. Then the constraint
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})\leq 0$ restricts the vector
$\begin{bmatrix}\bar{x}_{\theta}^{T}&\bar{u}_{\theta}^{T}\end{bmatrix}^{T}$ to
an ellipsoid with center $\delta$ and shape determined by $\Xi$. We now
provide an approximation of the chance constraint (18) that is a semi-definite
program in the optimization parameters $\theta=(\bar{G},\bar{d})$. Similar to
§3.2.2, the idea is to ensure that the $100(1-\alpha)\%$ confidence ellipsoid
of $\bar{w}$ is such that (18) holds. To this end, let
$y_{\theta}=\begin{bmatrix}\bar{x}_{\theta}\\\
\bar{u}_{\theta}\end{bmatrix}-\delta=\begin{bmatrix}\bar{A}x_{0}+\bar{B}(\bar{G}\bar{w}+\bar{d})+\bar{D}\bar{w}\\\
\bar{G}\bar{w}+\bar{d}\end{bmatrix}-\delta=h_{\theta}^{\prime}+P_{\theta}^{\prime}\bar{w},$
where $h_{\theta}^{\prime}=\begin{bmatrix}\bar{A}x_{0}+\bar{B}\bar{d}\\\
\bar{d}\end{bmatrix}-\delta$ and
$P^{\prime}_{\theta}=\begin{bmatrix}\bar{B}\bar{G}+\bar{D}\\\
\bar{G}\end{bmatrix}$.
###### Proposition 11.
Define
$S_{\theta}=\beta(\alpha)\Xi^{\frac{1}{2}}P^{\prime}_{\theta}\bar{\Sigma}^{\frac{1}{2}}$,
with $\beta(\alpha)$ as in Proposition 7, and
$\xi_{\theta}=\Xi^{\frac{1}{2}}h_{\theta}^{\prime}$. Then
$\begin{bmatrix}-\lambda+1&0&\xi_{\theta}^{T}\\\ 0&\lambda
I&(S_{\theta})^{T}\\\ \xi_{\theta}&S_{\theta}&I\end{bmatrix}\geq
0,\qquad\lambda>0,$ (26)
is a Linear Matrix Inequality (LMI) in the unknown parameters $\theta$ and
$\lambda$. If $(\theta,\lambda)$ is a solution of (26), then $\theta$
satisfies (18). As a consequence, under Assumption 1, the problem
$\displaystyle\inf_{\theta\in\Theta,\lambda}\quad$
$\displaystyle\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$
$\displaystyle\mathrm{subject~{}to}\quad$
$\displaystyle\eqref{e:ubardef},~{}\eqref{eq:cloopsys}~{}\mathrm{and}~{}\eqref{e:bigge}$
is a convex conservative approximation of Problem (16)–(18).
###### Proof.
The inequality (18) may be equivalently represented as
$\displaystyle\mathbb{P}\bigl{(}y_{\theta}^{T}\Xi y_{\theta}$
$\displaystyle-1\leq 0\bigr{)}\geq 1-\alpha$ $\displaystyle\iff$
$\displaystyle\;\;\mathbb{P}\bigl{(}\|\Xi^{\frac{1}{2}}y_{\theta}\|^{2}-1\leq
0\bigr{)}\geq 1-\alpha$ $\displaystyle\iff$
$\displaystyle\;\;\mathbb{P}\bigl{(}\|\Xi^{\frac{1}{2}}(h^{\prime}_{\theta}+P^{\prime}_{\theta}\bar{w})\|^{2}-1\leq
0\bigr{)}\geq 1-\alpha$ $\displaystyle\iff$
$\displaystyle\;\;\mathbb{P}\bigl{(}\|\xi_{\theta}+S^{\prime}_{\theta}\bar{w}\|^{2}-1\leq
0\bigr{)}\geq 1-\alpha,$ (27)
where $S^{\prime}_{\theta}=\Xi^{\frac{1}{2}}P^{\prime}_{\theta}$. Since
$\bar{w}\sim\mathcal{N}(0,\bar{\Sigma})$, one can compute $\beta(\alpha)>0$
such that
$\left\lVert{\bar{\Sigma}^{-1/2}\bar{w}}\right\rVert^{2}\leq\beta(\alpha)^{2}$
specifies the required $100(1-\alpha)\%$ confidence ellipsoid of $\bar{w}$.
Hence, we need to ensure that
$\left\lVert{\bar{\Sigma}^{-1/2}\bar{w}}\right\rVert^{2}\leq\beta(\alpha)^{2}\Rightarrow\left\lVert{\xi_{\theta}+S^{\prime}_{\theta}\bar{w}}\right\rVert^{2}\leq
1$. This is equivalent to
$\sup_{\left\lVert{\bar{\Sigma}^{-1/2}\bar{w}}\right\rVert^{2}\leq\beta(\alpha)^{2}}\left\lVert{\xi_{\theta}+S^{\prime}_{\theta}\bar{w}}\right\rVert^{2}\leq
1\quad\iff\quad\sup_{\left\lVert{\bar{v}}\right\rVert^{2}\leq
1}\left\lVert{\xi_{\theta}+S_{\theta}\bar{v}}\right\rVert^{2}\leq 1.$
It follows from [11, p. 653] that $\sup_{\left\lVert{\bar{v}}\right\rVert\leq
1}\left\lVert{\xi_{\theta}+S_{\theta}\bar{v}}\right\rVert^{2}\leq 1$ if and
only if there exists $\lambda\geq 0$ such that
$\begin{bmatrix}-\xi_{\theta}^{T}\xi_{\theta}-\lambda+1&\xi_{\theta}^{T}S_{\theta}\\\
(S_{\theta})^{T}\xi_{\theta}&\lambda
I-(S_{\theta})^{T}S_{\theta}\end{bmatrix}\geq 0.$
Using Schur complements the last relation can be rewritten equivalently as
(26). Therefore, any solution of (26) implies (18). To verify that (26) is an
LMI, note that $S_{\theta}$ and $\xi_{\theta}$ are affine in the optimization
variables. Together with the assumed convexity of
$\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$, the last statement of the
proposition follows. ∎
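The S-procedure/Schur-complement step used in this proof can also be checked numerically on toy data; the following sketch (our own illustration, with arbitrary low-dimensional $\xi_{\theta}$ and $S_{\theta}$ and a simple grid search over $\lambda$ sized for this toy scale) verifies that the robust condition and the feasibility of the LMI (26) agree:

```python
# Numerical check of the S-procedure / Schur-complement step in Proposition 11
# (illustrative sketch with random low-dimensional data; all names are ours).
import numpy as np

rng = np.random.default_rng(1)
k, nw = 2, 3                          # dim of xi_theta, dim of wbar
xi = 0.4 * rng.normal(size=k)         # xi_theta (toy values)
S = 0.3 * rng.normal(size=(k, nw))    # S_theta (toy values)

# Left-hand side: sup over the unit ball of ||xi + S v||^2 (approximated by sampling
# the unit sphere, where the supremum of this convex function is attained)
v = rng.normal(size=(200_000, nw))
v /= np.linalg.norm(v, axis=1, keepdims=True)
robust_ok = np.max(np.sum((xi + v @ S.T)**2, axis=1)) <= 1.0

# Right-hand side: existence of lambda > 0 making the LMI (26) positive semidefinite
def lmi_feasible(xi, S, lambdas=np.linspace(1e-3, 5.0, 500)):
    for lam in lambdas:
        M = np.block([
            [np.array([[1.0 - lam]]), np.zeros((1, nw)), xi[None, :]],
            [np.zeros((nw, 1)),       lam * np.eye(nw),  S.T],
            [xi[:, None],             S,                 np.eye(k)],
        ])
        if np.linalg.eigvalsh(M).min() >= -1e-9:
            return True
    return False

print("robust constraint holds:", robust_ok)
print("LMI (26) feasible:      ", lmi_feasible(xi, S))
```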
## 4 Integrated chance constraints
In this section we focus on the problem
$\displaystyle\inf_{\theta\in\Theta}\quad$
$\displaystyle\mathbb{E}[V(\bar{x}_{\theta},\bar{u}_{\theta})]$ (28) subject
to $\displaystyle\eqref{e:ubardef},~{}\eqref{eq:cloopsys}\textrm{ and}$ (29)
$\displaystyle J_{i}(\theta)\leq\beta_{i},~{}i=1,\ldots,r$ (30)
where, for $i=1,\ldots,r$,
$J_{i}(\theta)\triangleq\mathbb{E}[\varphi_{i}\circ\eta_{i}(\bar{x}_{\theta},\bar{u}_{\theta})]$,
functions $\varphi_{i}:\mathbb{R}\to\mathbb{R}$ are measurable and the
$\beta_{i}>0$ are fixed parameters. This problem corresponds to Problem
(13)–(15) when setting $R=r$ and $\phi_{i}(\eta)=\varphi_{i}(\eta)-\beta_{i}$,
with $i=1,\ldots,r$. For the choice
$\varphi_{i}(z)=\begin{cases}0,&\textrm{if }z\leq 0,\\\
z,&\textrm{otherwise},\end{cases}$ (31)
constraints of the form (30) are known as integrated chance constraints [19,
20]. In fact, one may write (dropping the dependence of $\eta_{i}$ on
$\bar{x}_{\theta}$ and $\bar{u}_{\theta}$ to simplify the notation)
$\mathbb{E}[\varphi_{i}(\eta_{i})]=\mathbb{E}\left[\int_{0}^{+\infty}\mathbf{1}_{[0,\eta_{i})}(s)\,\mathrm{d}s\right]=\int_{0}^{+\infty}\mathbb{P}\bigl{(}\eta_{i}>s\bigr{)}\,\mathrm{d}s,$
where $\mathbf{1}_{S}(\cdot)$ is the indicator function of set $S$ and the
second equality follows from Tonelli’s theorem [16, Theorem 4.4.5]. Therefore,
constraint (30) is equivalent to
$\int_{0}^{+\infty}\mathbb{P}\bigl{(}\eta_{i}(\bar{x}_{\theta},\bar{u}_{\theta})>s\bigr{)}\,\mathrm{d}s\leq\beta_{i},$
(32)
whence the name integrated chance constraint. Note that $\varphi_{i}$ plays
the role of a penalty (or barrier) function that penalizes violations of the
inequality $\eta_{i}(\bar{x},\bar{u})\leq 0$, and $\beta_{i}$ is a maximum
allowable cost in the sense of (32). Of course, different choices of
$\varphi_{i}$ will not guarantee the equivalence between (30) and (32).
However, they may be useful in deriving other quantitative chance constraint-
type approximations.
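As a numerical illustration of identity (32) (our own sketch, with an arbitrary scalar Gaussian playing the role of $\eta_{i}$), the two sides can be compared by Monte Carlo simulation, up to sampling and discretization error:

```python
# Monte Carlo illustration of the identity E[max(eta,0)] = integral_0^inf P(eta > s) ds,
# which is the content of (32) (illustrative sketch; the toy distribution is ours).
import numpy as np

rng = np.random.default_rng(4)
eta = rng.normal(-0.5, 1.0, size=200_000)          # any scalar constraint value eta_i

lhs = np.mean(np.maximum(eta, 0.0))                # E[phi_i(eta_i)] with phi_i of (31)
s_grid = np.linspace(0.0, 6.0, 2001)
ds = s_grid[1] - s_grid[0]
rhs = ds * sum(np.mean(eta > s) for s in s_grid)   # Riemann sum of P(eta > s)

print(f"E[max(eta,0)] = {lhs:.4f},  integral of P(eta > s) ds = {rhs:.4f}")
```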
### 4.1 Convexity of Integrated Chance Constraints
We now establish sufficient conditions on the $\eta_{i}$ and $\varphi_{i}$ for
the convexity of the constraint set
$\mathscr{F}_{\rm
ICC}\triangleq\\{\theta:~{}J_{i}(\theta)\leq\beta_{i},~{}i=1,\ldots,r\\}.$
(33)
The result is again a consequence of Proposition 3.
###### Proposition 12.
Let the mappings $\eta_{i}:\mathbb{R}^{(N+1)n+Nm}\to\mathbb{R}$ be
measurable and convex, and let the $\varphi_{i}:\mathbb{R}\to\mathbb{R}$ be
measurable, monotone nondecreasing and convex. Assume that the $J_{i}(\theta)$
are finite for all $\theta$. Then each $J_{i}(\theta)$ is a convex function of
$\theta$ and $\mathscr{F}_{\rm ICC}$ is a convex set. As a consequence, under
Assumption 1, (28)–(30) is a convex optimization problem.
###### Proof.
Fix $\omega\in\Omega$ arbitrarily. Since the mapping
$\theta\mapsto\big{(}\bar{x}_{\theta}(\omega),\bar{u}_{\theta}(\omega)\big{)}$
is affine and $(\bar{x},\bar{u})\mapsto\eta_{i}(\bar{x},\bar{u})$ is convex by
assumption, their composition
$\theta\mapsto\eta_{i}\big{(}\bar{x}_{\theta}(\omega),\bar{u}_{\theta}(\omega)\big{)}$ is a
convex function of $\theta$. Using the assumption that $\varphi_{i}$ is
monotone nondecreasing and convex, we may apply Proposition 3 with
$\gamma(\omega,\theta)=\eta_{i}\big{(}\bar{x}_{\theta}(\omega),\bar{u}_{\theta}(\omega)\big{)}$
and $\varphi_{i}$ in place of $\varphi$ to conclude that
$J_{i}(\theta)=\mathbb{E}[\varphi_{i}\circ\gamma(\omega,\theta)]$ is convex.
Hence, for any choice of $\beta_{i}$, the set
$\mathscr{F}_{i}\triangleq\\{\theta:~{}J_{i}(\theta)\leq\beta_{i}\\}$ is
convex. Since $\mathscr{F}_{\rm ICC}=\bigcap_{i=1}^{r}\mathscr{F}_{i}$, the
convexity of $\mathscr{F}_{\rm ICC}$ follows. Together with Assumption 1, this
proves that (28)–(30) is a convex optimization problem. ∎
It is worth noting that the function $\varphi(\cdot)$ of Section 3.2.4
satisfies analogous monotonicity and convexity assumptions with respect to
each of the $\eta_{i}$, with $i=1,\ldots,r$. Unlike those of Section 3, this
convexity result is independent of the probability distribution of $\bar{w}$.
By virtue of the alternative assumptions (iii′) and (iii′′) of Proposition 3,
the requirement that the $J_{i}(\theta)$ be finite for all $\theta$ may be relaxed. A
sufficient requirement is that there exist no two values $\theta$ and
$\theta^{\prime}$ such that $J_{i}(\theta)=+\infty$ and
$J_{i}(\theta^{\prime})=-\infty$. In particular, provided the $\eta_{i}$ are
measurable and convex, the choice (31) satisfies all the requirements of Proposition 12.
###### Example 13.
The (scalar) polytopic constraint function
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})=T^{x}\bar{x}_{\theta}+T^{u}\bar{u}_{\theta}-y$
fulfills the hypotheses of Proposition 12. Hence, the corresponding integrated
chance constraint is convex.
###### Example 14.
Following Example 1, an interesting case is that of ellipsoidal constraints.
For a positive-semidefinite real matrix $\Xi$ of order $(N+1)n+Nm$
and a vector $\delta\in\mathbb{R}^{(N+1)n+Nm}$, define
$\eta(\bar{x}_{\theta},\bar{u}_{\theta})\triangleq\left(\begin{bmatrix}\bar{x}_{\theta}\\\
\bar{u}_{\theta}\end{bmatrix}-\delta\right)^{T}\Xi\left(\begin{bmatrix}\bar{x}_{\theta}\\\
\bar{u}_{\theta}\end{bmatrix}-\delta\right)-1.$ (34)
This is a convex function of the vector
$\begin{bmatrix}\bar{x}_{\theta}^{T}&\bar{u}_{\theta}^{T}\end{bmatrix}^{T}$
(it is the composition of the convex mapping $\xi^{T}\Xi\xi$, $\Xi\geq 0$,
with the affine mapping
$\xi=\begin{bmatrix}\bar{x}_{\theta}^{T}&\bar{u}_{\theta}^{T}\end{bmatrix}^{T}-\delta$)
and hence Proposition 12 applies.
###### Remark 15.
A problem setting similar to Example 14 with quadratic expected-type cost
function and ellipsoidal constraints has been adopted in [26], where hard
constraints are relaxed to expected-type constraints of the form
$\mathbb{E}[\eta(\bar{x},\bar{u})]\leq\beta$. This formulation can be seen as
a special case of integrated chance constraints with $\varphi(z)=z$ for all
$z$. The choice of $\varphi$ within a large class of functions is an extra
degree of freedom provided by our framework that may be exploited to establish
tight bounds on the probability of violating the original hard constraints,
see Lemma 9 for an example.
### 4.2 Numerical solution of optimization problems with ICC
Even though ICC problems are convex in general, deriving efficient algorithms
to solve them is still a major challenge [19, 20]. For certain ICCs it is,
however, possible to derive explicit expressions for the gradients of the
constraint function. Provided the cost has a simple (e.g. quadratic) form,
this allows one to implement standard algorithms (e.g. interior point methods
[11]) for the solution of the optimization problem.
Let $\bar{w}$ satisfy Assumption 2 with, for simplicity, $\bar{\Sigma}$ equal
to the identity matrix, i.e. $\bar{w}\sim\mathcal{N}(0,I)$. Consider the
problem with one scalar constraint (the generalization to multiple (joint)
constraints is straightforward):
$\displaystyle\min_{\theta}$
$\displaystyle\mathbb{E}\left[\begin{bmatrix}\bar{x}_{\theta}^{T}&\bar{u}_{\theta}^{T}\end{bmatrix}M\begin{bmatrix}\bar{x}_{\theta}\\\
\bar{u}_{\theta}\end{bmatrix}\right]$ (35) subject to
$\displaystyle\eqref{e:ubardef},~{}\eqref{eq:cloopsys}\textrm{ and
}\mathbb{E}[\varphi(T^{x}\bar{x}_{\theta}+T^{u}\bar{u}_{\theta}-y)]\leq\beta,$
where $\varphi$ is defined as in (31).
###### Lemma 16.
Let $z$ be a Gaussian random variable with mean $\mu$ and variance
$\sigma^{2}>0$. Then
$\mathbb{E}[\varphi(z)]=\sigma g\Bigl{(}\frac{\mu}{\sigma}\Bigr{)}$
where
$g(x)=\frac{x}{2}\operatorname{erfc}\Bigl{(}\frac{-x}{\sqrt{2}}\Bigr{)}+\frac{1}{\sqrt{2\pi}}\exp\Bigl{(}\frac{-x^{2}}{2}\Bigr{)}$
and $\operatorname{erfc}(x)=1-\operatorname{erf}(x)$ is the standard
complementary error function.
###### Proof.
Since $z\sim\mathcal{N}(\mu,\sigma^{2})$ it holds that
$\mathbb{E}[\varphi(z)]=\frac{1}{\sqrt{2\pi\sigma^{2}}}\int_{0}^{\infty}t\exp\left(-\frac{(t-\mu)^{2}}{2\sigma^{2}}\right)\;dt.$
By the change of variable $y=\frac{t-\mu}{\sqrt{2}\sigma}$ one gets
$\displaystyle\mathbb{E}[\varphi(z)]=$
$\displaystyle\,\,\frac{1}{\sqrt{\pi}}\int_{\frac{-\mu}{\sqrt{2}\sigma}}^{\infty}(\mu+\sqrt{2}\sigma
y)\exp(-y^{2})dy$ $\displaystyle=$
$\displaystyle\,\,\frac{\mu}{\sqrt{\pi}}\int_{\frac{-\mu}{\sqrt{2}\sigma}}^{\infty}\exp(-y^{2})dy+\sigma\sqrt{\frac{2}{\pi}}\int_{\frac{-\mu}{\sqrt{2}\sigma}}^{\infty}y\exp(-y^{2})dy$
$\displaystyle=$
$\displaystyle\,\,\frac{\mu}{2}\operatorname{erfc}\left(\frac{-\mu}{\sqrt{2}\sigma}\right)+\frac{\sigma}{\sqrt{2\pi}}\exp\left(-\frac{\mu^{2}}{2\sigma^{2}}\right),$
which is equal to $\sigma g(\frac{\mu}{\sigma})$. ∎
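The closed-form expression of Lemma 16 is easily checked by Monte Carlo simulation; the following sketch (our own illustration, with arbitrary values of $\mu$ and $\sigma$) compares it with a sample average of $\varphi(z)$:

```python
# Monte Carlo check of Lemma 16: E[max(z,0)] = sigma * g(mu/sigma) for z ~ N(mu, sigma^2)
# (illustrative sketch; mu and sigma are arbitrary toy values).
import numpy as np
from scipy.special import erfc

def g(x):
    return 0.5 * x * erfc(-x / np.sqrt(2.0)) + np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

mu, sigma = -0.3, 0.8
rng = np.random.default_rng(2)
z = rng.normal(mu, sigma, size=1_000_000)

print(f"closed form : {sigma * g(mu / sigma):.5f}")
print(f"Monte Carlo : {np.mean(np.maximum(z, 0.0)):.5f}")
```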
Simple calculations and the application of this lemma yield the following
result.
###### Proposition 17.
Problem (35) is equivalent to
minimize $\displaystyle h_{1}(\bar{d})+h_{2}(\bar{G})$ (36) subject to
$\displaystyle\sigma g\Bigl{(}\frac{\mu}{\sigma}\Bigr{)}\leq\beta$
where
$\displaystyle\mu=$
$\displaystyle(T^{x}\bar{A}x_{0}-\bar{y})+(T^{x}\bar{B}+T^{u})\bar{d},$
$\displaystyle\sigma=$
$\displaystyle||(T^{x}\bar{D}+(T^{x}\bar{B}+T^{u})\bar{G})^{T}||_{2},$
$\displaystyle h_{1}(\bar{d})=$
$\displaystyle\begin{bmatrix}(\bar{A}x_{0}+\bar{B}\bar{d})^{T}&\bar{d}^{T}\end{bmatrix}M\begin{bmatrix}\bar{A}x_{0}+\bar{B}\bar{d}\\\
\bar{d}\end{bmatrix},$ $\displaystyle h_{2}(\bar{G})=$
$\displaystyle\text{Tr}\Biggl{(}\begin{bmatrix}(\bar{B}\bar{G}+\bar{D})^{T}&\bar{G}^{T}\end{bmatrix}M\begin{bmatrix}\bar{B}\bar{G}+\bar{D}\\\
\bar{G}\end{bmatrix}\Biggr{)}.$
Note that expectations have been integrated out. Now it is possible to put the
problem in a standard form for numerical optimization. Let $\bar{G}_{i}$ and
$\bar{D}_{i}$ be the $i$-th column of $\bar{G}$ and $\bar{D}$, respectively.
Redefine the optimization variable $\theta$ as the vector
$\bar{\theta}=\begin{bmatrix}\bar{d}^{T}&\bar{G}_{1}^{T}&\cdots&\bar{G}_{Nn}^{T}\end{bmatrix}^{T}$.
Define $e$, $f$, $v$ and $L$ by
$\displaystyle\mu=(T^{x}\bar{A}x_{0}-\bar{y})+(T^{x}\bar{B}+T^{u})\bar{d}=$
$\displaystyle e+f^{T}\bar{\theta},$
$\displaystyle(T^{x}\bar{D}+(T^{x}\bar{B}+T^{u})\bar{G})^{T}=$ $\displaystyle
v+L\bar{\theta}.$
###### Corollary 18.
Problem (35) is equivalent to
$\displaystyle\text{minimize }\quad f_{0}(\bar{\theta})=$ $\displaystyle
g_{0}(\bar{\theta})+\sum_{i=1}^{Nn}g_{i}(\bar{\theta})$
$\displaystyle\text{subject to }\quad H\bar{\theta}=$ $\displaystyle 0,$
$\displaystyle f_{1}(\bar{\theta})\leq$ $\displaystyle 0$
where the matrix $H$ in the equality constraint accounts for the causal
structure of $\bar{G}$, while
$\displaystyle g_{0}(\bar{\theta})=$
$\displaystyle\Bigl{(}\begin{bmatrix}\bar{A}x_{0}\\\
0\end{bmatrix}+\begin{bmatrix}\bar{B}\\\
I\end{bmatrix}H_{0}\bar{\theta}\Bigr{)}^{T}M\Bigl{(}\begin{bmatrix}\bar{A}x_{0}\\\
0\end{bmatrix}+\begin{bmatrix}\bar{B}\\\
I\end{bmatrix}H_{0}\bar{\theta}\Bigr{)},$ $\displaystyle g_{i}(\bar{\theta})=$
$\displaystyle\Bigl{(}\begin{bmatrix}\bar{D}_{i}\\\
0\end{bmatrix}+\begin{bmatrix}\bar{B}\\\
I\end{bmatrix}H_{i}\bar{\theta}\Bigr{)}^{T}M\Bigl{(}\begin{bmatrix}\bar{D}_{i}\\\
0\end{bmatrix}+\begin{bmatrix}\bar{B}\\\
I\end{bmatrix}H_{i}\bar{\theta}\Bigr{)},$ $\displaystyle f_{1}(\bar{\theta})=$
$\displaystyle\left\lVert{v+L\bar{\theta}}\right\rVert
g\Bigl{(}\frac{e+f^{T}\bar{\theta}}{\left\lVert{v+L\bar{\theta}}\right\rVert}\Bigr{)}-\beta\leq
0,$
and $H_{0}$ and $H_{i}$ are selection matrices such that
$H_{0}\bar{\theta}=\bar{d}$ and $H_{i}\bar{\theta}=\bar{G}_{i}$.
We conclude the section by documenting the expressions of the gradient and the
Hessian of the constraint function $f_{1}(\bar{\theta})$.
$\nabla
f_{1}(\bar{\theta})=\frac{1}{\sqrt{2\pi}}L^{T}\frac{v+L\bar{\theta}}{\left\lVert{v+L\bar{\theta}}\right\rVert}\exp{\Bigl{(}-\frac{(e+f^{T}\bar{\theta})^{2}}{2\left\lVert{v+L\bar{\theta}}\right\rVert^{2}}\Bigr{)}}+\frac{1}{2}\operatorname{erfc}{\Bigl{(}-\frac{e+f^{T}\bar{\theta}}{\sqrt{2}\left\lVert{v+L\bar{\theta}}\right\rVert}\Bigr{)}}f$
$\nabla^{2}f_{1}(\bar{\theta})=\frac{1}{\sqrt{2\pi}}\exp{\Bigl{(}-\frac{(e+f^{T}\bar{\theta})^{2}}{2\left\lVert{v+L\bar{\theta}}\right\rVert^{2}}\Bigr{)}}\Bigl{[}J_{1}(\bar{\theta})+J_{2}(\bar{\theta})-J_{3}(\bar{\theta})\Bigr{]}$
where
$\displaystyle J_{1}(\bar{\theta})$
$\displaystyle=\frac{1}{\left\lVert{v+L\bar{\theta}}\right\rVert}(L^{T}L+ff^{T}),$
$\displaystyle J_{2}(\bar{\theta})$
$\displaystyle=\Bigl{(}\frac{(e+f^{T}\bar{\theta})^{2}-\left\lVert{v+L\bar{\theta}}\right\rVert^{2}}{\left\lVert{v+L\bar{\theta}}\right\rVert^{5}}\Bigr{)}(L^{T}(v+L\bar{\theta})(v+L\bar{\theta})^{T}L),$
$\displaystyle J_{3}(\bar{\theta})$
$\displaystyle=\Bigl{(}\frac{e+f^{T}\bar{\theta}}{\left\lVert{v+L\bar{\theta}}\right\rVert^{3}}\Bigl{)}(L^{T}(v+L\bar{\theta})f^{T}+f(v+L\bar{\theta})^{T}L).$
The expressions of the gradient and Hessian of the quadratic function
$f_{0}(\bar{\theta})$, used e.g. by interior-point solvers, are standard and
are not reported here.
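These expressions can be validated against finite differences; the sketch below (our own illustration, with random toy data $e$, $f$, $v$, $L$ of small dimension) compares the analytic gradient of $f_{1}$ with a central-difference approximation:

```python
# Finite-difference check of the gradient of f1 (illustrative sketch; all data are toy values).
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(3)
p = 4                                    # dimension of theta-bar (toy size)
e, f = 0.2, rng.normal(size=p)
v, L = rng.normal(size=3), rng.normal(size=(3, p))
beta = 1.0

def g(x):
    return 0.5 * x * erfc(-x / np.sqrt(2.0)) + np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def f1(th):
    s = np.linalg.norm(v + L @ th)
    return s * g((e + f @ th) / s) - beta

def grad_f1(th):
    u = v + L @ th
    s = np.linalg.norm(u)
    mu = e + f @ th
    return (np.exp(-mu**2 / (2 * s**2)) / np.sqrt(2 * np.pi)) * (L.T @ u / s) \
           + 0.5 * erfc(-mu / (np.sqrt(2.0) * s)) * f

th0 = rng.normal(size=p)
num = np.array([(f1(th0 + 1e-6 * np.eye(p)[i]) - f1(th0 - 1e-6 * np.eye(p)[i])) / 2e-6
                for i in range(p)])
print("max |analytic - numeric| =", np.abs(grad_f1(th0) - num).max())
```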
## 5 Simulation results
We illustrate some of our results with the help of a simple example. Consider
the mechanical system shown in Figure 2.
Figure 2: A mechanical system with springs and masses.
$d_{1},\cdots,d_{4}$ are displacements from an equilibrium position,
$u_{1},\cdots,u_{3}$ are forces acting on the masses. In particular, $u_{1}$
is a tension between the first and the second mass, $u_{2}$ is a tension
between the third and the fourth mass, and $u_{3}$ is a force between the wall
(at left) and the second mass. We assume all mass and stiffness constants to
be equal to unity, i.e. $m_{1}=\cdots=m_{4}=1,k_{1}=\cdots=k_{4}=1$. We
consider a discrete-time model of this system with noise in the dynamics,
$x(t+1)=Ax(t)+Bu(t)+w(t),$
where $w$ is an i.i.d. noise process,
$w(t)\sim\mathcal{N}(0,\sigma^{2}I),\sigma=0.05$ for all $t$, and
$x=\begin{bmatrix}d_{1},d_{2},d_{3},d_{4},\dot{d}_{1},\dot{d}_{2},\dot{d}_{3},\dot{d}_{4}\end{bmatrix}^{T}$.
The discrete-time dynamics are obtained by uniform sampling of a continuous-
time model at times $t\cdot h$, with sampling time $h=1$ and $t=0,1,2,\ldots$,
under the assumption that the control action $u(t)$ is piecewise constant over
the sampling intervals $[t\cdot h,t\cdot h+h)$. Hence, $A=e^{hA_{c}}$ and
$B=A_{c}^{-1}(e^{hA_{c}}-I)B_{c}$, where $A_{c}$ and $B_{c}$, defined as
$\displaystyle A_{c}=$ $\displaystyle\,\,\begin{bmatrix}0_{4\times
4}&I_{4\times 4}\\\ A_{c,21}&0_{4\times 4}\end{bmatrix},$ $\displaystyle
A_{c,21}=$ $\displaystyle\,\,\begin{bmatrix}-2&1&0&0\\\ 1&-2&1&0\\\
0&1&-2&1\\\ 0&0&1&-1\\\ \end{bmatrix},$ $\displaystyle B_{c}=$
$\displaystyle\,\,\begin{bmatrix}0_{4\times 3}\\\ B_{c,21}\end{bmatrix},$
$\displaystyle B_{c,21}=$ $\displaystyle\,\,\begin{bmatrix}1&0&0\\\ -1&0&-1\\\
0&1&0\\\ 0&-1&0\\\ \end{bmatrix}$
are the state and input matrices of the standard ODE model of the system.
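For reference, the discretization described above can be reproduced with a few lines of code; the following sketch (our own illustration, in Python rather than the Matlab/CVX toolchain used for the experiments) builds $A_{c}$, $B_{c}$ and the discrete-time matrices $A$ and $B$:

```python
# Construction of the discrete-time model from the continuous-time matrices
# (a minimal sketch of the zero-order-hold discretization described in the text).
import numpy as np
from scipy.linalg import expm

Ac21 = np.array([[-2.,  1.,  0.,  0.],
                 [ 1., -2.,  1.,  0.],
                 [ 0.,  1., -2.,  1.],
                 [ 0.,  0.,  1., -1.]])
Bc21 = np.array([[ 1.,  0.,  0.],
                 [-1.,  0., -1.],
                 [ 0.,  1.,  0.],
                 [ 0., -1.,  0.]])
Ac = np.block([[np.zeros((4, 4)), np.eye(4)],
               [Ac21,             np.zeros((4, 4))]])
Bc = np.vstack([np.zeros((4, 3)), Bc21])

h = 1.0                                   # sampling time
A = expm(h * Ac)                          # A = e^{h A_c}
# B = A_c^{-1} (e^{h A_c} - I) B_c, computed via a linear solve rather than an explicit inverse
B = np.linalg.solve(Ac, (A - np.eye(8)) @ Bc)
```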
We are interested in computing the control policy that minimizes the cost
function
$\mathbb{E}\left[\sum_{t=0}^{N-1}\bigl{(}x(t)^{T}Qx(t)+u(t)^{T}Ru(t)\bigr{)}+x(N)^{T}Qx(N)\right],$
where the horizon length is fixed to $N=5$, the weight matrices are defined as
$Q=\begin{bmatrix}I&0\\\ 0&0\end{bmatrix}$ (penalizing displacements but not
their derivatives) and $R=I$. The initial state is set to
$x_{0}=[0,0,0,1,0,0,0,0]^{T}$. In the absence of constraints, this is a finite
horizon LQG problem whose optimal solution is the linear time-varying feedback
from the state
$u(t)=-\big{(}B^{T}P(t+1)B+R\big{)}^{-1}B^{T}P(t+1)Ax(t),$
where the matrices $P(t)$ are computed by solving, for $t=N-1,\ldots,0$, the
backward dynamic programming recursion
$P(t)=Q+A^{T}P(t+1)A-A^{T}P(t+1)B(B^{T}P(t+1)B+R)^{-1}B^{T}P(t+1)A,$
with $P(N)=Q$. Simulated runs of the controlled system are shown in Figure 3.
Figure 3: 1000 sample paths of the system with LQG control policy. Above:
control input. Below: mass displacements.
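The unconstrained LQG benchmark is obtained from the backward recursion above; the sketch below (our own illustration, continuing the previous snippet and reusing its $A$ and $B$) computes the time-varying gains and the first control move from $x_{0}$:

```python
# Backward Riccati recursion for the unconstrained finite-horizon LQG benchmark
# (a minimal sketch following the recursion in the text; A and B come from the previous snippet).
import numpy as np

N = 5
n, m = A.shape[0], B.shape[1]
Q = np.block([[np.eye(4), np.zeros((4, 4))],
              [np.zeros((4, 4)), np.zeros((4, 4))]])   # penalize displacements only
R = np.eye(m)

P = [None] * (N + 1)
K = [None] * N
P[N] = Q
for t in range(N - 1, -1, -1):
    S = B.T @ P[t + 1] @ B + R
    K[t] = np.linalg.solve(S, B.T @ P[t + 1] @ A)      # u(t) = -K[t] x(t)
    P[t] = Q + A.T @ P[t + 1] @ A - A.T @ P[t + 1] @ B @ K[t]

x0 = np.array([0., 0., 0., 1., 0., 0., 0., 0.])        # initial state from the text
u0 = -K[0] @ x0                                        # first LQG control move
```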
We shall now introduce constraints on the state and the control input and
study the feasibility of the problem with the methods of Section 3. The convex
approximations to the chance-constrained optimization problems are solved
numerically in Matlab by the toolbox CVX [18]. In all cases we shall compute a
$5$-stage affine optimal control policy and apply it to repeated runs of the
system. Based on this we will discuss the feasibility of the hard constrained
problem and the probability of constraint violation.
### 5.1 Polytopic constraints
Let us impose bounds on the control inputs, $|u_{1}(t)|\leq 0.1$,
$|u_{2}(t)|\leq 0.3$ and $|u_{3}(t)|\leq 0.15$, with $t=0,\ldots,N-1$, and
bounds on the mass displacements, $|d_{i}(t)|\leq 10$, for $i=1,\cdots,4$ and
with $t=1,\ldots,N$. In the notation of Section 3.2, these constraints are
captured by the equation
$\eta(\bar{x},\bar{u})=T^{x}\bar{x}+T^{u}\bar{u}-y\leq 0$ where
$T^{x}=\begin{bmatrix}M^{T}&0\end{bmatrix}^{T}$ and
$T^{u}=\begin{bmatrix}0&H^{T}\end{bmatrix}^{T}$, with
$M=\begin{bmatrix}M_{1}&&\\\ &\ddots&\\\ &&M_{1}\end{bmatrix},\quad
M_{1}=\begin{bmatrix}I_{4\times 4}&0_{4\times 4}\\\ -I_{4\times 4}&0_{4\times
4}\end{bmatrix},\quad H=\begin{bmatrix}I\\\ -I\end{bmatrix},$
and
$y=\begin{bmatrix}y_{1}\\\ y_{2}\end{bmatrix},\quad y_{1}=\begin{bmatrix}10\\\
\vdots\\\ 10\end{bmatrix},\quad y_{2}=\begin{bmatrix}y^{\prime}\\\ \vdots\\\
y^{\prime}\end{bmatrix},\quad y^{\prime}=\begin{bmatrix}0.10\\\ 0.30\\\
0.15\end{bmatrix}.$
This hard constraint is relaxed to the probabilistic constraint
$\mathbb{P}[\eta(\bar{x},\bar{u})\leq 0]\geq 1-\alpha$. The resulting optimal
control problem is then addressed by constraint separation (Section 3.2.1) and
ellipsoidal approximation (Section 3.2.2).
With constraint separation, the problem is feasible for $\alpha\geq 0.05$. For
$\alpha=0.1$, the application of the suboptimal control policy computed as in
Proposition 6 yields the results shown in Figure 4.
Figure 4: 1000 sample paths of the system with control policy computed via
constraint separation. Above: control input. Below: mass displacements.
Horizontal straight lines show bounds.
With this policy, the control input saturates at the required bounds,
whereas the mass displacements stay well within their bounds. In fact, although the
required probability of constraint satisfaction is $0.9$, constraints were
never violated in 1000 simulation runs. This suggests that the approximation
incurred by constraint separation is quite conservative, mainly due to the
relatively large number of constraints. It may also be noticed that the
variability of the applied control input is rather small. This hints that the
computed control policy is essentially open-loop, i.e. the linear feedback
gain is small compared to the affine control term.
With the ellipsoidal approximation method, for the same probability level, the
problem turns out to be infeasible, in accordance with the conclusions of
Section 3.2.3. For the sake of investigation, we loosened the bounds on the
mass displacements to $|d_{i}(t)|\leq 100$ for all $i$ and $t$. The problem of
Proposition 7 is then feasible and the results from simulation of the
controlled system are reported in Figure 5.
Figure 5: 1000 sample paths of the system with control policy computed via
ellipsoidal approximation. Above: control input. Below: mass displacements.
Horizontal straight lines show bounds.
Although the controller has been computed under much looser bounds, the
control performance is similar to the one obtained with constraint separation,
a clear sign that the ellipsoidal approximation is overly conservative in this
case. Further evidence of this conservatism is the fact that, while the control
inputs get closer to the bounds, the magnitude of the displacements is not
reduced. As in the case of constraint separation, the applied control input is
insensitive to the specific simulation run, i.e. the control policy is
essentially open loop.
### 5.2 Ellipsoidal constraints
Consider the constraint function
$\sum_{k=1}^{N}\Bigl{(}d_{1}(k)^{2}+d_{2}(k)^{2}+d_{3}(k)^{2}+d_{4}(k)^{2}\Bigr{)}+\left\lVert{\bar{u}}\right\rVert^{2}\leq
N\cdot c$
with $c=(1^{2}+1^{2}+1^{2}+1^{2}+0.1^{2}+0.15^{2}+0.3^{2})=4.1225$. Unlike the
previous section, we do not impose bounds on $d_{i}(k)$ and $u_{i}(k)$ at each
$k$ but instead require that the total “spending” on $x$ and $u$ does not
exceed a total “budget”. This constraint can be modelled in the form of
Section 3.3, namely
$\eta(\bar{x},\bar{u})=\Biggl{(}\begin{bmatrix}\bar{x}\\\
\bar{u}\end{bmatrix}-\delta\Biggr{)}^{T}\Xi\Biggl{(}\begin{bmatrix}\bar{x}\\\
\bar{u}\end{bmatrix}-\delta\Biggr{)}-1\leq 0,$
with $\delta=0$ and $\Xi=\frac{1}{N\cdot c}\begin{bmatrix}L&0\\\
0&I\end{bmatrix}$, where
$L=\begin{bmatrix}L_{1}&&\\\ &\ddots&\\\
&&L_{1}\end{bmatrix},L_{1}=\begin{bmatrix}I_{4\times 4}&0_{4\times 4}\\\
0_{4\times 4}&0_{4\times 4}\end{bmatrix}.$
The constrained control policy for $N=5$ and $\alpha=0.1$ is computed by
solving the LMI problem of Proposition 11. Results from simulations of the
closed-loop system are reported in Figure 6. Once again, constraints were not
violated over 1000 simulated runs, showing the conservatism of the
approximation. It is interesting to note that the displacements of the masses
are generally smaller than those obtained by the controller computed under
affine constraints, at the cost of a slightly more expensive control action.
In contrast with the affine constraints case, the control action obtained here
is much more sensitive to the noise in the dynamics, i.e. the feedback action
is more pronounced.
Figure 6: 1000 sample paths of the system with control policy computed via
ellipsoidal constraints. Above: control input. Below: mass displacements.
Horizontal straight lines show bounds.
## 6 Conclusions
We have studied the convexity of optimization problems with probabilistic
constraints arising in model predictive control of stochastic dynamical
systems. We have given conditions for the convexity of expectation-type
objective functions and constraints. Convex approximations have been derived
for nonconvex probabilistic constraints. Results have been exemplified by a
numerical simulation study.
Open issues that will be addressed in the future are the role of the tunable
parameters (e.g. the $\alpha_{i}$ in Section 3.2.1, the $t_{i}$ of Section
3.2.4 and the $\beta_{i}$ in Section 4) in the various optimization problems,
and the effect of different choices of the ICC functions $\varphi_{i}$
(Section 4). Directions of future research also include the extension of the
results presented here to the case of noisy state measurements, the exact or
approximate solution of the stochastic optimization problems in terms of
explicit control laws and the control of stochastic systems with probabilistic
constraints on the state via bounded control laws.
## References
* [1] M. Agarwal, E. Cinquemani, D. Chatterjee, and J. Lygeros, On convexity of stochastic optimization problems with constraints, in Proceedings of the European Control Conference (ECC’09), 2009. Accepted.
* [2] J. M. Alden and R. L. Smith, Rolling horizon procedures in nonhomogeneous Markov decision processes, Operations Research, 40 (1992), pp. S183–S194.
* [3] K. Åström, Introduction to Stochastic Control Theory, Academic Press, New York, 1970.
* [4] I. Batina, Model predictive control for stochastic systems by randomized algorithms, PhD thesis, Technische Universiteit Eindhoven, 2004.
* [5] A. Ben-Tal, S. Boyd, and A. Nemirovski, Extending scope of robust optimization, Mathematical Programming, 107 (2006), pp. 63–89.
* [6] A. Ben-Tal, A. Goryashko, E. Guslitzer, and A. Nemirovski, Adjustable robust solutions of uncertain linear programs, Mathematical Programming, 99 (2004), pp. 351–376.
* [7] D. P. Bertsekas, Dynamic programming and suboptimal control: A survey from ADP to MPC, European Journal of Control, 11 (2005).
* [8] D. Bertsimas and D. B. Brown, Constrained stochastic LQC: a tractable approach, IEEE Transactions on Automatic Control, 52 (2007), pp. 1826–1841.
* [9] D. Bertsimas and M. Sim, Tractable approximations to robust conic optimization problems, Mathematical Programming, 107 (2006), pp. 5–36.
* [10] V. I. Bogachev, Gaussian Measures, vol. 62 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, 1998.
* [11] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004.
* [12] D. Chatterjee, E. Cinquemani, G. Chaloulos, and J. Lygeros, Stochastic control up to a hitting time: optimality and rolling-horizon implementation. http://arxiv.org/abs/0806.3008, jun 2008.
* [13] D. Chatterjee, E. Cinquemani, and J. Lygeros, Maximizing the probability of attaining a target prior to extinction. http://arxiv.org/abs/0904.4143, apr 2009.
* [14] D. Chatterjee, P. Hokayem, and J. Lygeros, Stochastic model predictive control with bounded control inputs: a vector space approach. http://arxiv.org/abs/0903.5444, mar 2009.
* [15] P. D. Couchman, M. Cannon, and B. Kouvaritakis, Stochastic MPC with inequality stability constraints, Automatica J. IFAC, 42 (2006), pp. 2169–2174.
* [16] R. M. Dudley, Real Analysis and Probability, vol. 74 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, 2002. Revised reprint of the 1989 original.
* [17] P. J. Goulart, E. C. Kerrigan, and J. M. Maciejowski, Optimization over state feedback policies for robust control with constraints, Automatica, 42 (2006), pp. 523–533.
* [18] M. Grant and S. Boyd, CVX: Matlab software for disciplined convex programming, feb 2009. (web page and software) http://stanford.edu/$\sim$boyd/cvx.
* [19] W. K. K. Haneveld, On integrated chance constraints, in Stochastic programming (Gargnano), vol. 76 of Lecture Notes in Control and Inform. Sci., Springer, Berlin, 1983, pp. 194–209.
* [20] W. K. K. Haneveld and M. H. van der Vlerk, Integrated chance constraints: reduced forms and an algorithm, Computational Management Science, 3 (2006), pp. 245–269.
* [21] J. Löfberg, Minimax Approaches to Robust Model Predictive Control, PhD thesis, Linköping Universitet, Apr 2003.
* [22] J. Maciejowski, Predictive Control with Constraints, Prentice Hall, 2002.
* [23] A. Nemirovski and A. Shapiro, Convex approximations of chance constrained programs, SIAM Journal on Optimization, 17 (2006), pp. 969–996.
* [24] A. Prékopa, Stochastic Programming, vol. 324 of Mathematics and its Applications, Kluwer Academic Publishers Group, Dordrecht, 1995.
* [25] J. Primbs, A soft constraint approach to stochastic receding horizon control, in Proceedings of the 46th IEEE Conference on Decision and Control, 2007, pp. 4797 – 4802.
* [26] J. Primbs, Stochastic receding horizon control of constrained linear systems with state and control multiplicative noise, in Proceedings of the 26th American Control Conference, 2007.
* [27] G. Calafiore, R. Tempo, and F. Dabbene, Randomized Algorithms for Analysis and Control of Uncertain Systems, Springer-Verlag, 2005.
* [28] J. Spall, Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, Wiley, Hoboken, NJ, 2003.
* [29] D. H. van Hessem and O. H. Bosgra, A full solution to the constrained stochastic closed-loop mpc problem via state and innovations feedback and its receding horizon implementation, in Proceedings of the 42nd IEEE Conference on Decision and Control, vol. 1, 9-12 Dec. 2003, pp. 929–934.
* [30] M. Vidyasagar, Randomized algorithms for robust controller synthesis using statistical learning theory, Automatica, 37 (2001), pp. 1515–1528.
# Galactic diffuse gamma-ray flux at the energy about 175 TeV
Romen M. Martirosov1, Samvel V. Ter-Antonyan2, Anatoly D. Erlykin3, Alexandr
P. Garyaka1,
Natalya M. Nikolskaya3, Yves A. Gallant4 and Lawrence W. Jones5
1Yerevan Physics Institute, Yerevan, Armenia 2Department of Physics, Southern
University, Baton Rouge, LA, USA 3 Russian Academy of Sciences, Lebedev
Physical Institute, Moscow, Russia 4 Lab. de Physique Theorique et
Astroparticules, Universite Montpellier II, Montpellier, FRANCE 5 Department
of Physics, University of Michigan, Ann Arbor, USA
###### Abstract
We present an upper limit on the galactic diffuse gamma-ray flux, measured with
the GAMMA experiment at an energy of about 175 TeV. The results were obtained by
selecting muon-poor extensive air showers at mountain level (700 g/cm2, Mt.
Aragats, Armenia) with a 5 GeV muon energy threshold. The upper limit on the
differential flux at energy $E_{\gamma}\simeq 175^{+25}_{-20}$ TeV is
$(5.8-7.0)\cdot 10^{-12}$ $(erg\cdot m^{2}\cdot s\cdot sr)^{-1}$ at the $95\%$
confidence level.
Corresponding author: S.V. Ter-Antonyan, E-mail: samvel_terantonyan@subr.edu.
Keywords: cosmic rays, gamma rays, energy spectrum
## 1 Introduction
Ultra-high-energy ($E>100$ TeV) galactic gamma radiation is an important
source of information about the origin of cosmic rays and their propagation.
According to the conventional model of cosmic rays, the expected flux of ultra-high-energy
$\gamma$-rays in the energy range $0.1-1$ PeV is presumed to be of
hadronic origin, from sources distributed within radii of $\sim 1-0.01$ Mpc [1],
respectively.
In the very-high-energy (TeV) region the $\gamma$-ray flux has been measured by ground-
based systems and single Cherenkov telescopes (Whipple [2], CANGAROO [3],
MAGIC [4], H.E.S.S. [5], MILAGRO [6]). Measurements in the ultra-high-
energy region ($E>100$ TeV) are still scarce and have been made only with extensive air
shower (EAS) arrays at Chacaltaya [7], MSU [8], Tien-Shan [9], CASA-MIA [11],
EAS-TOP [12], KASCADE [13] and Grapes-III [10].
This paper is devoted to measurements of diffuse gamma rays with the GAMMA EAS
array [14, 15] at the Aragats mountain observatory, where the correlation of
observable shower parameters with the primary energy is about $0.98$. Showers from
primary nuclei (predominantly $H$) and $\gamma$-showers were discriminated using the
absence of a muon signal in the underground muon scintillation carpet.
## 2 GAMMA experiment
The GAMMA experiment [14, 15] is an ongoing study of the primary energy spectra in
the range of $10^{14}-10^{17}$ eV using an EAS array with 33 concentrically arranged
scintillator stations (each of 3$\times$1 m2) and an underground scintillation
carpet with 150 scintillators (each of 1 m2) to detect the shower muon
component with energy $E_{\mu}>5\,\mathrm{GeV}/\cos\theta$, where $\theta$ is the shower
zenith angle. The layout of the GAMMA facility is presented in Fig. 1.
Figure 1: Layout of the GAMMA experiment.
A detailed description of the experiment and the results on the energy spectra of
$p$, $He$, $O$-like, and $Fe$-like primary nuclei derived from the
parametrized EAS inverse-problem solution are presented in [14]. The all-
particle primary energy spectrum obtained from an event-by-event analysis is
published in [15].
Here, the discrimination of $\gamma$-showers from showers induced by primary nuclei
is performed on the basis of the following six conditions:
1) the reconstructed shower core lies within a radius of
$R<15$ m;
2) the shower zenith angle $\theta<30^{\circ}$;
3) the reconstructed shower size $N_{ch}>10^{5}$;
4) the reconstructed shower age parameter ($s$) lies within
$0.4<s<1.5$;
5) the goodness-of-fit test for the reconstructed shower gives $\chi^{2}<2.5$;
6) no muon signal is recorded for detected showers satisfying the previous 5
conditions.
The selection criteria and the $\gamma$-shower discrimination rule (6) above were
obtained using the CORSIKA shower simulation code [16] in the NKG and EGS modes
in the framework of the SIBYLL [17] interaction model. Simulations were done
for 4 nuclear species, $p,He,O,Fe$, using a common energy spectral index
$\gamma=-2.7$ [14]. The simulated samples were $7.5\times 10^{5}$, $10^{5}$,
$7\times 10^{4}$ and $5\times 10^{4}$ events for $p,He,O,Fe$ nuclei in the NKG mode of
CORSIKA. The samples for the EGS mode of CORSIKA were equal to $2.5\times
10^{4}$ for primary $\gamma$-quanta, $7.5\times 10^{4}$ for primary protons
and $3\times 5000$ for $He,O$ and $Fe$ primary nuclei. The simulation strategy
and reconstruction method for the shower size ($N_{ch}$), age parameter ($s$),
core coordinates ($x_{0},y_{0}$) and shower zenith angle ($\theta$) were the
same as in [14].
The shower trigger efficiency and the shower size reconstruction errors
($\Delta_{N_{ch}}$ and $\sigma_{N_{ch}}$) are presented in Figs. 2 and 3
respectively. The observed differences of the reconstructed shower size biases
$\Delta_{N_{ch}}$ for different primary particles (Fig. 3, lower panel) stem
from differences in the corresponding lateral distribution functions of the
shower particles.
Figure 2: Trigger efficiency of GAMMA EAS array for different primary
particles. Dashed lines correspond to the exponential approximations:
$W_{trg}(A,N_{ch})=1-\alpha_{A}\exp{(-N_{ch}/N_{0,A})}$, where $\alpha_{A}$
and $N_{0,A}$ are parameters that depend on the primary particle ($A$). Figure 3:
Expected reconstruction error (upper panel) and average bias (lower panel) of the
shower size ($N_{ch}$) for different primary particles (symbols).
$\Delta_{N_{ch}}=\ln(N_{ch}^{*}/N_{ch})$, where $N_{ch}^{*}$ is the estimate of
the shower size $N_{ch}$.
The distribution of the detected shower age parameter (GAMMA data), compared with
the expected distributions for primary $p,He,O,Fe$ nuclei, is presented in Fig. 4
(left panel).
Figure 4: Shower age parameter ($s$) distribution for all showers (left panel)
and for no-muon detected showers ($N_{\mu}=0$, right panel). Simulated data for
the primary mixed composition ($All$ $nuclei$, left panel) and for primary gamma
rays (right panel) are normalized to the corresponding GAMMA experimental data.
The primary elemental composition and energy spectra were taken from the
parametrized solution of the EAS inverse problem [14] within the framework of the
SIBYLL interaction model (Fig. 5, shaded area [14]) and were extrapolated to the
100 TeV energy region. The reliability of these extrapolations is supported by
the data of [18].
Figure 5: Primary energy spectra and all-particle energy spectrum taken from
[14] (shaded area) and corresponding extrapolations to the 100 TeV energy
region.
The right panel of Fig. 4 shows the distribution of the shower age parameter for
the selected no-muon events (shaded GAMMA data area) in comparison with the
corresponding expected distributions for simulated $\gamma$-showers and background
proton showers. It is evident that the EAS age parameter also carries information
about the primary particle ($\gamma$-showers are younger). However, we have not
yet included the age parameter in the $\gamma$-shower selection criteria; the
results in Fig. 4 are used only as an indication of the agreement between the
simulated and the corresponding detected distributions.
The detected muon number spectra, normalized as probability density functions,
for different shower size thresholds ($N_{ch}>10^{5},2\times 10^{5}$ and
$4\times 10^{5}$) are presented in Fig. 6 (hollow symbols) in comparison with the
corresponding expected spectra for different primary particles ($\gamma,p,He,O,Fe$)
and different simulation modes (NKG, EGS) of CORSIKA.
Figure 6: Normalized detected muon number ($N_{\mu}$) spectra for different
shower size thresholds ($10^{5},2\times 10^{5},4\times 10^{5}$). Hollow
symbols (circle, square and triangle) are GAMMA experimental data. The symbols
in the $\gamma_{EGS}$, $(p-Fe)_{EGS}$ and $(p-Fe)_{NKG}$ columns correspond to
simulated data for primary $\gamma$ rays and for the mixed composition
($p,He,O,Fe$ [14]) computed using the EGS and NKG modes of CORSIKA.
The energy spectra and elemental composition of primary nuclei (Fig. 5) used in
Figs. 4 and 6 were taken from [14] and applied to the energy region $E>100$ TeV.
### 2.1 Energy estimation
The energy of the primary particle is estimated using the event-by-event method
[15] according to the empirical expression
$\ln{E_{A}}=A_{1}\ln{N_{ch}}+A_{2}/\cos{\theta}+A_{3}$, where $E$ is in GeV and
the parameters $A_{1},A_{2}$ and $A_{3}$ are determined by a goodness-of-fit test
on the simulated database and depend on the primary particle $A$.
The corresponding accuracies, obtained for $\chi^{2}\simeq 1$, are described by
the log-linear function
$\sigma(E_{A})=\varepsilon_{A}-\delta_{A}\ln{(E_{A}/10^{5})}$, where
$\varepsilon\equiv 0.22$, $0.30$, $0.33$ and $\delta\equiv 0.01$, $0.02$,
$0.05$ for primary $\gamma$, primary protons, and proton-induced no-muon showers
($p_{0}$) respectively.
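As a minimal sketch of this event-by-event estimator, the snippet below evaluates
the empirical expression and the quoted log-linear accuracies; the coefficients
$A_{1},A_{2},A_{3}$ are not given in the text, so the values used here are purely
illustrative placeholders.

```python
import math

# Illustrative placeholder coefficients (A1, A2, A3); the real values are
# obtained from a goodness-of-fit to the simulated CORSIKA database.
COEFFS = {"gamma": (0.93, 2.0, 4.5), "proton": (0.95, 2.2, 4.2)}

# Quoted accuracy parameters (epsilon, delta) for gamma, proton and
# proton-induced no-muon showers (p0): sigma(E) = eps - delta*ln(E/1e5).
SIGMA_PARAMS = {"gamma": (0.22, 0.01), "proton": (0.30, 0.02), "p0": (0.33, 0.05)}

def primary_energy_gev(n_ch: float, theta_deg: float, particle: str = "gamma") -> float:
    """ln E = A1*ln(N_ch) + A2/cos(theta) + A3, with E in GeV."""
    a1, a2, a3 = COEFFS[particle]
    return math.exp(a1 * math.log(n_ch) + a2 / math.cos(math.radians(theta_deg)) + a3)

def relative_energy_error(energy_gev: float, particle: str = "gamma") -> float:
    """Log-linear parametrization of the energy reconstruction accuracy."""
    eps, delta = SIGMA_PARAMS[particle]
    return eps - delta * math.log(energy_gev / 1e5)

# Example: a shower of size 2e5 arriving at 15 degrees, treated as a gamma candidate.
e_est = primary_energy_gev(2e5, 15.0, "gamma")
print(e_est, relative_energy_error(e_est, "gamma"))
```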
The primary energy reconstruction efficiencies are presented in Fig. 7 for
different primary particles. The lines represent the log-linear empirical
expression above for primary proton showers (dashed line) and $\gamma$-showers
(solid line) respectively. It is seen that proton-induced no-muon events (hollow
circles) behave practically the same as $\gamma$-showers. The inset histograms
show the distributions of the $E_{p,\gamma}/N_{ch}$ ratio for $p$ and $\gamma$
primary particles respectively.
Figure 7: Primary energy ($E$) and corresponding shower size ($N_{ch}$)
distributions at observation level for 5000 primary protons (bold dot symbols)
and 5000 $\gamma$ (gray dot symbols). Hollow circles correspond to the proton
showers with no-muon signal from underground muon carpet ($N_{\mu}=0$). Solid
and dashed lines are the log-linear approximations (see text) for primary
$\gamma$ and $p$ correspondingly. Inset histograms are $E_{p,\gamma}/N_{ch}$
distributions for primary proton and $\gamma$ (shaded area).
## 3 Gamma ray flux
$98000$ shower events were selected for operation time $T=3970$ h of GAMMA
experiment. Number of detected shower events versus number of detected muons
($N_{\mu}$) for different shower size thresholds $N_{ch}>10^{5},2\times
10^{5}$ and $4\times 10^{5}$ are presented in Fig. 8 (histograms with shaded
area). The symbols in Fig. 8 are the corresponding expected number of events
simulated using the CORSIKA code for primary energy spectra [14] presented in
Fig. 5. The simulations were carried out for two modes of CORSIKA to get high
accuracy of simulation (EGS mode) and large simulated sample (NKG mode).
The agreement of simulated and detected muon spectra in all measurement region
and lack of statistically significant excess of no-muon signal events (Fig. 8)
allowed us to estimate only an upper limit of $\gamma$-ray flux according to
the expression
Figure 8: Detected (histogram lines) and expected (symbols) muon number
($N_{\mu}$) spectra for different shower size thresholds ($10^{5},2\times
10^{5},4\times 10^{5}$) and different mode (NKG, EGS) of CORSIKA.
$J=\frac{2\sqrt{M_{0}/(W_{\gamma,N_{\mu}=0}W_{trg,\gamma})}}{S\Omega
T\overline{\cos{\theta}}}\cdot\frac{1}{\Delta E}$ (1)
where $M_{0}$ is the number of no-muon detected showers,
$W_{\gamma,N_{\mu}=0}$ is the probability to detect no-muon signal for
$\gamma$-showers (see Fig. 6), $W_{trg}(E_{\gamma})$ is the trigger efficiency
(see Fig. 2), $\overline{\cos{\theta}}=0.94$ is the average shower zenith
angle, $S$ and $\Omega$ are the EAS core detection area and corresponding
solid angle.
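For concreteness, the sketch below evaluates Eq. (1) exactly as written; only
$\overline{\cos\theta}=0.94$ and $T=3970$ h are quoted in the text, so the
remaining inputs ($M_0$, the efficiencies, the acceptance $S\Omega$ and the bin
width $\Delta E$) are illustrative placeholders, not the values actually used in
the analysis.

```python
import math

def flux_upper_limit(m0, w_gamma_nomu, w_trg_gamma, s_omega_m2_sr,
                     t_hours, cos_theta_mean, delta_e):
    """Upper limit J from Eq. (1): 2*sqrt(M0/(W_gamma,Nmu=0 * W_trg,gamma))
    divided by (S*Omega*T*<cos theta>*Delta E)."""
    t_seconds = t_hours * 3600.0
    numerator = 2.0 * math.sqrt(m0 / (w_gamma_nomu * w_trg_gamma))
    return numerator / (s_omega_m2_sr * t_seconds * cos_theta_mean * delta_e)

# Illustrative placeholder inputs; only T = 3970 h and <cos theta> = 0.94
# are quoted in the text.
j_limit = flux_upper_limit(m0=100, w_gamma_nomu=0.8, w_trg_gamma=0.9,
                           s_omega_m2_sr=200.0, t_hours=3970.0,
                           cos_theta_mean=0.94, delta_e=45.0)
print(f"Upper limit: {j_limit:.3e} per (energy unit * m^2 * s * sr)")
```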
The obtained upper limit on the differential $\gamma$-ray flux in the energy range
100-300 TeV is presented in Fig. 9 (black downward triangle symbol) in
comparison with the CASA-MIA [11], KASCADE [13] and EAS-TOP [12] data.
Figure 9: Upper limit of gamma ray flux derived from detected no-muon showers
(black downward triangle symbol). The gray symbols are the CASA-MIA [11],
KASCADE [13] and EAS-TOP [12] data taken from [13]. The lines are expected
Galactic diffuse background flux from [19].
## 4 Conclusion
The upper limit on the differential $\gamma$-ray flux at energy $E_{\gamma}\simeq
175^{+25}_{-20}$ TeV obtained with the GAMMA experiment is
$(5.8-7.0)\cdot 10^{-12}$ $(erg\cdot m^{2}\cdot s\cdot sr)^{-1}$ at the $95\%$
confidence level, in close agreement with the CASA-MIA data [11].
The primary energy spectra and elemental composition obtained with the GAMMA
experiment [14] can be extended toward lower energies, down to about 100 TeV.
We plan to increase the underground muon carpet area to 250 m2 in order to
improve the $\gamma$/proton shower discrimination efficiency.
## 5 Acknowledgment
This work has been partly supported by the research grant no. 090 from the
Armenian government, the RFBR grant 07-02-00491 in Russia, by the Moscow P.N.
Lebedev Physical Institute and the Hayastan All-Armenian Fund.
## References
* [1] F.A. Aharonian, “Very High Energy Cosmic Gamma Radiation”, World Scientific (2004).
* [2] M. Schroedter et al., Astrophys.J. 634 (2005) 947.
* [3] M. Ohishi et al., Astropart. Phys., 30 (2008) 47.
* [4] E. Aliu, et al., arXiv:0810.3561v1 [astro-ph] (2008).
* [5] F. Aharonian et al. (H.E.S.S. collaboration), A&A 477 (2008) 481.
* [6] A.A. Abdo et al., Astrophys. J. 688 (2008) 1078.
* [7] A. Castellina et al., Proceedings of the 27th ICRC, Hamburg, Germany (2001) 2735.
* [8] A.M. Anokhina et al., Astrophysics and Space Science, 209, 1 (1993) 19.
* [9] S.I. Nikolsky, J.N. Stamenov, S.Z. Ushev, 1984, JETP (Journal of Experimental and Theoretical Physics), 60, 10.
* [10] P.K. Mohanty, S.K. Gupta et al., 29th ICRC, Pune 6 (2005) 21.
* [11] Chantell, M. C. et al. 1997, Phys. Rev. Lett. 79, 1805./ A. Borione et al., Astrophys. J., 493:175-179 (1998).
* [12] Aglietta, M. et al. 1996, Astropart. Phys., 6, 71.
* [13] G. Schatz et al., Proc. 28th ICRC , Tsukuba (2003) 2293.
* [14] A.P. Garyaka, R.M. Martirosov, S.V. Ter-Antonyan et al., Astropart. Phys., 28, 2 (2007) 169/ arXiv:0704.3200v1 [astro-ph].
* [15] A.P. Garyaka, R.M. Martirosov, S.V. Ter-Antonyan et al., J of Phys, G: Nucl. and Part. Phys., 35 (2008) 115201/ arXiv:0704.3200v1 [astro-ph].
* [16] D. Heck, J. Knapp, J.N. Capdevielle, G. Schatz, T. Thouw, Forschungszentrum Karlsruhe Report, FZKA 6019 (1998).
* [17] R.S. Fletcher, T.K. Gaisser, P. Lipari, T. Stanev, Phys.Rev. D50 (1994) 5710.
* [18] A.A. Kochanov, T.S. Sinegovskaya and S.I. Sinegovsky, Astropart. Phys. 30 (2008) 219.
* [19] Aharonian, F. A., A. M. Atoyan 2000, Astron. Astrophys. 362, 937.
Discrete Cylindrical Vector Beam Generation from an Array of Optical Fibers
R. Steven Kurti1,∗, Klaus Halterman1, Ramesh K. Shori,1 and Michael J.
Wardlaw2
1Research Division, Physics and Computational Sciences Branch, Naval Air
Warfare Center,
China Lake, California, 93555, USA
2Office of Naval Research, Arlington, Virginia, 22203, USA.
∗Corresponding author: steven.kurti@navy.mil
###### Abstract
A novel method is presented for the beam shaping of far field intensity
distributions of coherently combined fiber arrays. The fibers are arranged
uniformly on the perimeter of a circle, and the linearly polarized beams of
equal shape are superimposed such that the far field pattern represents an
effective radially polarized vector beam, or discrete cylindrical vector (DCV)
beam. The DCV beam is produced by three or more linearly polarized beams whose
polarization orientation varies from beam to beam. The beams are appropriately distributed in
the near field such that the far field intensity distribution has a central
null. This result is in contrast to the situation of parallel linearly
polarized beams, where the intensity peaks on axis.
OCIS codes: 140.3298, 140.3300, 060.3510.
The propagation of electromagnetic fields from multiple fibers can result in
complex far-field intensity profiles that depend crucially on the individual
near field polarization and phase. Coherently phased arrays have been used in
defense and communications systems for many years, where the antenna ensemble
forms a diffraction pattern that can be altered by changing the antenna
spacing, amplitude, and relative phase relationships. In the optical regime,
similar systems have recently been formed by actively or passively locking the
phases of two or more identical optical beams [1, 2, 3, 4]. While coherently
combined fiber lasers are increasingly gaining acceptance as sources for high
power applications, most approaches involve phasing a rectangular or hexagonal
grid of fibers with collimated beams that are linearly polarized along the
same axis [5]. The resultant far field diffraction pattern therefore is peaked
in the center. If the polarization is allowed to vary, as it does in radial
vector beams, the diffraction pattern vanishes in the center [6], which enables a
variety of device applications, discussed below.
Several types of polarization states have been investigated, including radial
and azimuthal [7, 8, 6, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. These beams
are used in mitigating thermal effects in high power lasers [19, 20, 21],
laser machining [22, 23], and particle acceleration interactions [24, 25].
They can even be used to generate longitudinal electric fields when tightly
focused [26, 27], and although they are typically formed in free space laser
cavities using a conical lens, or axicon, they can also be formed and guided
in fibers [28, 29, 30]. Typical methods for creating radially polarized beams
fall into two categories: the first involves an intracavity axicon in a laser
resonator to generate the laser mode, the second begins with a single beam and
rotates the polarization of portions of the beam to create an inhomogeneously
(typically radially or azimuthally) polarized beam. In this Letter we
demonstrate a novel approach that takes an array of Gaussian beams, each with an
appropriately oriented linear polarization, and superimposes them in the
far field, effectively creating a DCV beam.
Our starting point in determining the propagation of polarized electromagnetic
beams is the vector Helmholtz equation, $\nabla\times\nabla\times{\bf
E}({\boldsymbol{\rho}},z)-k_{0}^{2}{\bf E}({\boldsymbol{\rho}},z)=0$. We use
cylindrical coordinates, so that $\boldsymbol{\rho}$ is the usual transverse
coordinate, $k_{0}=\omega/c$, and the usual sinusoidal time dependence has
been factored out. Applying Lagrange’s formula permits the wave equation to be
written in terms of the vector Laplacian: $\nabla^{2}{\bf
E}({\boldsymbol{\rho}},z)+k_{0}^{2}{\bf E}({\boldsymbol{\rho}},z)=0$, which
can then be reduced to the paraxial wave equation. The paraxial solutions are
then inserted into the familiar Fraunhofer diffraction integral, which for
propagation at sufficiently large $z$, yields the electric field,
$\displaystyle{\bf
E}(k_{x},k_{y})=\exp\Bigl{(}\frac{ik_{0}\rho^{2}}{2z}\Bigr{)}\frac{\exp(ik_{0}z)}{i\lambda
z}$
$\displaystyle\times\sum_{j=1}^{N}\int_{0}^{a}rdr\int_{0}^{2\pi}d\phi\,e^{-i{\bf
k}\cdot({\bf r}+{\bf s}_{j})}{\bf E}^{0}({\bf r}+{\bf s}_{j},0),$ (1)
where ${\bf E}^{0}({\bf r}+{\bf s}_{j},0)$ is the incident electric field in
the plane of the fiber array.
Fig. 1: Example of the multiple fiber setup. A specific case of $N=3$ is
shown, where each beam is represented by a circle of radius $a$ and arranged
uniformly on a circle of radius $R$. Points within the $i$th hole are located
by ${\bf d}={\bf s}_{i}+{\bf r}$, with the vector ${\bf r}$ originating at the
center of each hole.
It is clear that longitudinal polarization is not considered here, consistent
with the paraxial approximation. The far field intensity will be a complex
diffraction pattern that depends on the individual beam intensity profile in
the near field as well its polarization state and optical phase. A diagram
illustrating the geometry used is shown in Fig. 1, where an array of three
circular holes of radius $a$ are equally distributed on the circumference of a
circle of radius $R$. The center of each hole is located at ${\bf
s}_{j}=R\hat{\bf r}_{j}$, where $\hat{\bf r}_{j}\equiv(\hat{\bf
x}\cos\theta_{j}+\hat{\bf y}\sin\theta_{j})$ is the unit radial vector, and
${\bf r}$ is the coordinate relative to each center. The center of each fiber
is separated by an angle $\theta_{j}=2\pi(j-1)/N$. One can scale to any number
of beams, constrained only by the radius $R$. For identical Gaussian beams,
the incident field is expressed as, ${\bf E}^{0}={E}^{0}({\bf r})\hat{\bf
r}_{j}$, where ${E}^{0}({\bf r})=E_{0}e^{-r^{2}/w_{0}^{2}}$. This permits Eq.
(1) to be separated,
$\displaystyle{\bf E}(k_{x},k_{y})={\cal F}(\rho,z)\sum_{j=1}^{N}e^{-i{\bf
k}\cdot{\bf s}_{j}}\hat{\bf r}_{j},$ (2)
where ${\cal
F}(\rho,z)=E_{0}k_{0}/z\int^{a}_{0}rdr\,J_{0}(k_{0}r\rho/z)e^{-r^{2}/w_{0}^{2}}$,
and the prefactors that do not contribute to the intensity have been
suppressed. It is clear from Eq. (2) that the far field transform preserves
the radially symmetric polarization state of the system, as expected. Note
that in the limit $w_{0}/a\gg 1$, and using the relation
$xJ_{0}(x)=[xJ_{1}(x)]^{\prime}$, we have ${\cal F}(\rho,z)\approx
E_{0}(a/\rho)J_{1}(k_{0}a\rho/z)$, which gives the expected Fraunhofer
diffraction pattern for a circular aperture of radius $a$. In the opposite
limit, where the Gaussian profile is narrow enough so that the aperture
geometry has little effect, we have ${\cal F}(\rho,z)\approx
E_{0}k_{0}w_{0}^{2}/(2z)\exp[-(k_{0}\rho w_{0}/(2z))^{2}]$. For the multiple
beam arrangements investigated, corresponding to $N=3,4,6$, we then can
calculate the intensity, $I_{N}$, explicitly:
$\displaystyle{\cal K}_{3}(\phi)$
$\displaystyle=3-\cos({\sqrt{3}}k_{y}R)-2\cos(\frac{\sqrt{3}}{2}k_{y}R)\cos\Bigl{(}\frac{3}{2}k_{x}R\Bigr{)},$
(3) $\displaystyle{\cal K}_{4}(\phi)$
$\displaystyle=4\bigl{[}\sin^{2}(k_{x}R)+\sin^{2}(k_{y}R)\bigr{]},$ (4)
$\displaystyle{\cal K}_{6}(\phi)$
$\displaystyle=4\Bigl{[}\Bigl{(}\cos\Bigl{(}\frac{\sqrt{3}}{2}k_{y}R\Bigr{)}\sin\Bigl{(}\frac{1}{2}k_{x}R\Bigr{)}+\sin\bigl{(}k_{x}R\bigr{)}\Bigr{)}^{2}$
$\displaystyle\,\,\,\,\,\,+3\cos^{2}\Bigl{(}\frac{1}{2}k_{x}R\Bigr{)}\sin^{2}\Bigl{(}\frac{\sqrt{3}}{2}k_{y}R\Bigr{)}\Bigr{]},$
(5)
where $I_{N}\equiv{\cal F}^{2}(\rho,z){\cal K}_{N}(\phi;\rho,z)$,
$k_{x}=(k_{0}/z)\rho\cos\phi$ and $k_{y}=(k_{0}/z)\rho\sin\phi$. To rotate the
array, one can perform a standard rotation ${\bf r}^{\prime}={\cal
R}(\phi^{\prime}){\bf r}$, so e.g., a $\pi/4$ rotation would give, ${\cal
K}_{4}(\phi^{\prime})=4(1-\cos(\sqrt{2}k_{x}R)\cos(\sqrt{2}k_{y}R))$.
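As a quick numerical sketch of Eqs. (2)-(5), the snippet below evaluates the
far-field intensity $I_N={\cal F}^2{\cal K}_N$ on a grid in the narrow-Gaussian
limit of ${\cal F}$; the beam parameters echo those quoted for Fig. 3, while the
effective distance $z$ is an assumed placeholder, so only the qualitative pattern
(in particular the central null) should be read from it.

```python
import numpy as np

# Illustrative parameters loosely based on the N=4 case of Fig. 3
# (w0 = 42 um, R = 2.7*w0); z is an assumed placeholder standing in
# for the effective far-field (transform-lens) distance.
lam = 1.064e-6            # wavelength, m
k0 = 2 * np.pi / lam
w0 = 42e-6                # beam waist, m
R = 2.7 * w0              # array radius, m
z = 0.5                   # assumed effective far-field distance, m

# Observation-plane grid
x = np.linspace(-0.02, 0.02, 801)
X, Y = np.meshgrid(x, x)
kx, ky = k0 * X / z, k0 * Y / z
rho = np.hypot(X, Y)

# Narrow-Gaussian envelope F(rho,z) ~ exp[-(k0*rho*w0/(2z))^2] (prefactor dropped)
F = np.exp(-(k0 * rho * w0 / (2 * z)) ** 2)

# Array factors K_N from Eqs. (3)-(5)
K3 = 3 - np.cos(np.sqrt(3) * ky * R) \
       - 2 * np.cos(np.sqrt(3) / 2 * ky * R) * np.cos(1.5 * kx * R)
K4 = 4 * (np.sin(kx * R) ** 2 + np.sin(ky * R) ** 2)
K6 = 4 * ((np.cos(np.sqrt(3) / 2 * ky * R) * np.sin(0.5 * kx * R)
           + np.sin(kx * R)) ** 2
          + 3 * np.cos(0.5 * kx * R) ** 2 * np.sin(np.sqrt(3) / 2 * ky * R) ** 2)

for name, K in (("N=3", K3), ("N=4", K4), ("N=6", K6)):
    I = F ** 2 * K
    print(name, "on-axis intensity:", I[400, 400])   # central null expected
```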
We employ two different methods to create the DCV beams previously described.
In each method, the experimental configuration utilizes collimated Gaussian
beams that are in phase. The arrays are cylindrically symmetric both with
respect to intensity and polarization. We focus the beams with a transform
lens in order to simulate the far field intensity at the lens focus. The focal
spot is then imaged onto a camera by a microscope objective in order to fill
the CCD array. The laser source is a $100$ mW continuous wave (CW) Nd:YAG
operating at $\lambda=1.064\mu$m.
Fig. 2: Diagram of the experimental setup for DCV beam generation. The
oscillator output is split and phase modulated (S/PM). The output of each
fiber is collimated by means of a lens, L1, and then the polarization is
rotated by a half-wave plate (HWP). The beam ensemble is then focused to a
point which is imaged by a microscope objective (MO) onto a charge-coupled-
device (CCD) array.
The first method (for the case of $N=4$) involves expanding a beam from the
Nd:YAG by means of a telescope beam expander to a diameter of about 1 cm. A
diffractive optical element then creates an $8\times 8$ array of beams. Four
of the beams are reflected and made nearly parallel by a segmented mirror, and
then linearly polarized upon passing through a set of four half-wave plates.
The phases of the beams are made identical by passing them through articulated
glass slides.
Fig. 3: Far field intensity profiles of effectively radially polarized beams.
The left panels correspond to the experimental images and the right panels
represent the theoretical results. We consider a 3 (top row), 4 (middle row),
and 6 (bottom row) beam arrangement. For $N=3$, we take $w_{0}=32$ $\mu$m and
$R=5w_{0}$. For $N=4$, $w_{0}=42$ $\mu$m, and $R=2.7w_{0}$. For $N=6$, we have
$w_{0}=43$ $\mu$m and $R=3.8w_{0}$. Note that in general, a radially polarized
beam undergoes a discontinuity at the origin, and thus the intensity must
vanish there.
The second method (for the cases of $N=3$ and $6$), which is simpler for larger
arrays, is shown in Fig. 2. In this method the Nd:YAG is coupled into a
polarization-maintaining fiber. The signal is then split into multiple copies
by means of a lithium niobate waveguide 8-way splitter and 8-channel electro-
optic modulator. Each path can then be given a separate phase modulation to
ensure proper phasing of the beams. These signals are propagated through fiber,
coupled out, and collimated by a 1 in. lens. The lens is slightly overfilled so
that the Gaussian outputs of the fibers are truncated at $1.1\times$ the
$e^{-1}$ radius.
The experimental results for radial vector beam generation are in good agreement
with the theoretical calculations, as illustrated in Fig. 3. Clearly, the
interference patterns and symmetry correlate well with the corresponding
calculations in the three beam arrangements considered. In particular, for $N=4$
(middle row), the central null is surrounded by a checkerboard peak structure.
This pattern differs from the linearly polarized case by a rotation of $\pi/4$,
based on a simple phase argument. The $N=6$ case is shown in the bottom row,
where clearly the intensity vanishes at the origin, followed by the formation of
bright hexagonal rings. The three-beam configuration is shown in the top row,
where again there is satisfactory agreement between the calculated and measured
results. Any observed discrepancies may be reduced with an appropriately
incorporated feedback system. A substantial fraction of the energy in the plots
of Fig. 3 is contained within the first regions of high-intensity peaks. We
illustrate this for the $N=4$ case, where the net intensity within a circular
region of radius $\rho$, $U(\rho)$, is calculated to be approximately
$U(\rho)\approx k_{0}\rho z/(2R)[k_{0}\rho R/z-J_{1}(2k_{0}\rho R/z)]$. Inserting
the appropriate parameters and normalizing to the intensity integrated over the
entire image plane shows that $50\%$ of the distribution is contained in the
neighborhood of the first main peaks.
In conclusion, it has been shown that, by employing two different configurations
and by carefully controlling the polarization, a central null can be created in
the far field. In previous work, the central portion of a phased array of lasers
has typically been a peaked function. Now, for the first time to our knowledge,
a central null has been formed. Similar results have been shown for
single-aperture lasers using an axicon to generate an annular mode inside a
laser cavity. In the present case, however, there is direct control over which
mode will be generated from the same system. As for scaling, there is no
fundamental limitation on the number of beams used. Several methods are
available to incorporate a greater number of beams, including concentric beam
placement (following the same beam-combining prescription described in this
Letter) that could scale in roughly the same pattern as a Bessel beam. It is
also possible to use fiber lasers with active phasing, thus potentially scaling
to many hundreds of beams [5]. Our system also has a practical advantage over
typical high-power Gaussian beam applications, which have the drawback of
concentrating the beam near the center, where the reflecting beam director (such
as a telescope obscuration) resides. Our device, on the other hand, has no power
propagating along the central axis of the beam, and thus offers an advantage for
high-power applications requiring a beam director with a central obscuration.
We thank S. Feng for many useful discussions.
## References
* [1] “Theory of electronically phased coherent beam combination without a reference beam”, T. M. Shay, Opt. Express 14, 12188 (2006).
* [2] “Self-organized coherence in fiber laser arrays”, H. Bruesselbach, D. C. Jones, M. S. Mangir, M. I. Minden, and J. L. Rogers, Opt. Lett. 30, 1339, (2003).
* [3] “Coherent combining of spectrally broadened fiber lasers,” T. B. Simpson, F. Doft, P. R. Peterson, and A. Gavrielides, Opt. Express 15, 11731 (2007)
* [4] “Coherent addition of fiber lasers by use of a fiber coupler,” A. Shirakawa, T. Saitou, T. Sekiguchi, and K. Ueda, Opt. Express 10, 1167 (2002).
* [5] “Self-Synchronous and Self-Referenced Coherent Beam Combination for Large Optical Arrays,” T.M. Shay, V. Benham, J.T. Baker, A.D. Sanchez, D. Pilkington, C.A. Lu, IEEE J. Sel. Top. Quant. El. 13, 480 (2007).
* [6] “The formation of laser beams with pure azimuthal or radial polarization,” R. Oron, S. Blit, N. Davidson, A. A. Friesem, Z. Bomzon, and E. Hasman, Appl. Phys. Lett. 77, 3322 (2000).
* [7] “Efficient extracavity generation of radially and azimuthally polarized beams,” G. Machavariani, Y. Lumer, I. Moshe, A. Meir, and S. Jackel, Opt. Lett. 32, 1468 (2007).
* [8] “Intracavity generation of radially polarized CO2 laser beams based on a simple binary dielectric diffraction grating,” T. Moser, J. Balmer, D. Delbeke, P. Muys, S. Verstuyft, and R. Baets, Appl. Opt. 45, 8517 (2006).
* [9] “Simple interferometric technique for generation of a radially polarized light beam,” N. Passilly, R. de Saint Denis, K. Aït-Ameur, F. Treussart, R. Hierle, and J.-F. Roch, J. Opt. Soc. Am. A 22, 984 (2005).
* [10] “Generating radially polarized beams interferometrically,” S. C. Tidwell, D. H. Ford, and W. D. Kimura, Appl. Opt. 29, 2234 (1990).
* [11] “Laser beams with axially symmetric polarization,” A V Nesterov et al., J. Phys. D: Appl. Phys. 33 1817 (2000).
* [12] “Generation of a cylindrically symmetric, polarized laser beam with narrow linewidth and fine tunability,” T. Hirayama, Y. Kozawa, T. Nakamura, and S. Sato, Opt. Express 14, 12839 (2006).
* [13] “Radially and azimuthally polarized beams generated by space-variant dielectric subwavelength gratings,” Z. Bomzon, G. Biener, V. Kleiner, and E. Hasman, Opt. Lett. 27, 285 (2002).
* [14] “Generation of a radially polarized laser beam by use of the birefringence of a c-cut Nd:YVO4 crystal,” K. Yonezawa, Y. Kozawa, and S. Sato, Opt. Lett. 31, 2151 (2006).
* [15] “A versatile and stable device allowing the efficient generation of beams with radial, azimuthal or hybrid polarizations,” T. Grosjean, A. Sabac and D. Courjon, Opt. Commun. 252, 12 (2005)
* [16] “Generation of the rotationally symmetric TE01 and TM01 modes from a wavelength-tunable laser,” J. J. Wynne, IEEE J. Quantum Electron. 10, 125 (1974).
* [17] “Generation of a radially polarized laser beam by use of a conical Brewster prism,” Y. Kozawa and S. Sato, Opt. Lett. 30, 3063 (2005).
* [18] “Fields of a radially polarized Gaussian laser beam beyond the paraxial approximation,” Y. I. Salamin, Opt. Lett. 31, 2619 (2006).
* [19] “2kW, M2 $<$ 10 radially polarized beams from aberration-compensated rod-based Nd:YAG lasers,” I. Moshe, S. Jackel, A. Meir, Y. Lumer, and E. Leibush, Opt. Lett. 32, 47 (2007).
* [20] “Production of radially or azimuthally polarized beams in solid-state lasers and the elimination of thermally induced birefringence effects,” I. Moshe, S. Jackel, and A. Meir, Opt. Lett. 28, 807 (2003).
* [21] “Generation of radially polarized beams in a Nd:YAG laser with self-adaptive overcompensation of the thermal lens,” M. Roth, E. Wyss, H. Glur, and H.P. Weber Opt. Lett. 30, 1665 (2005).
* [22] “Influence of beam polarization on laser cutting efficiency,” V. G. Niziev, A. V. Nesterov, J. Phys. D: Appl. Phys. 32, 1455 (1999).
* [23] “Linear, annular, and radial focusing with axicons and applications to laser machining,” M. Rioux, R. Tremblay, and P.-A. Belanger, Appl. Opt. 17, 1532 (1978).
* [24] “Mono-energetic GeV electrons from ionization in a radially polarized laser beam,” Y. I. Salamin, Opt. Lett. 32, 90 (2007).
* [25] “A tunable doughnut laser beam for cold-atom experiments,” S. M. Iftiquar, J. Opt. B: Quantum Semiclass. Opt. 5, 40 (2003).
* [26] “Focusing of high numerical aperture cylindrical-vector beams,” K. Youngworth and T. Brown, Opt. Express 7, 77 (2000).
* [27] “Sharper focus for a radially polarized beam,” R. Dorn, S. Quabis, and G. Leuchs, Phys. Rev. Lett. 91, 233901 (2003).
* [28] “An all-fiber device for generating radially and other polarized light beams,” T. Grosjean, D. Courjon and M. Spajer, Opt. Commun. 203, 1 (2002).
* [29] “Generation of cylindrical vector beams with few-mode fibers excited by Laguerre Gaussian beams,” G. Volpe and D. Petrov, Opt. Commun. 237, 89 (2004).
* [30] “Generation of radially polarized mode in Yb fiber laser by using a dual conical prism,” J. Li, K. Ueda, M. Musha, A. Shirakawa, and L. Zhong, Opt. Lett. 31, 2969 (2006).
# Coevolutionary Genetic Algorithms for Establishing Nash Equilibrium in
Symmetric Cournot Games
Mattheos Protopapas (Department of Statistics, University of Rome “La Sapienza”,
Aldo Moro Square 5, 00185 Rome, Italy; tel. +393391457307; e-mail:
matteo.protopapas@uniroma1.it), Francesco Battaglia (Department of Statistics,
University of Rome “La Sapienza”, Aldo Moro Square 5, 00185 Rome, Italy; tel.
+390649910440; e-mail: francesco.battaglia@uniroma1.it), Elias Kosmatopoulos
(Department of Production Engineering and Management, Technical University of
Crete, Agiou Titou Square; tel. +302821037306; e-mail: kosmatop@dssl.tuc.gr)
Abstract. We use co-evolutionary genetic algorithms to model the players’
learning process in several Cournot models, and we evaluate them in terms of
their convergence to the Nash Equilibrium. The “social-learning” versions of
the two co-evolutionary algorithms we introduce establish the Nash Equilibrium in
those models, in contrast to the “individual-learning” versions which, as we
show here, do not lead the players’ strategies to converge to the Nash outcome.
When players use “canonical co-evolutionary genetic algorithms” as learning
algorithms, the process of the game is an ergodic Markov chain. We therefore
analyze the simulation results using both the relevant Markov-chain methodology
and more general statistical tests, and we find that in the “social” case, states
leading to NE play are highly frequent at the stationary distribution of the
chain, whereas in the “individual-learning” case NE is not reached at all in our
simulations; we find that the expected Hamming distance of the states at the
limiting distribution from the “NE state” is significantly smaller in the
“social” than in the “individual-learning” case; we estimate the expected time
that the “social” algorithms need to reach the “NE state” and verify their
robustness; and finally we show that a large fraction of the games played are
indeed at the Nash Equilibrium.
Keywords: Genetic Algorithms, Cournot oligopoly, Evolutionary Game Theory,
Nash Equilibrium.
## 1 Introduction
The “Cournot Game” models an oligopoly of two or more firms that simultaneously
choose the quantities they supply to the market, which in turn determine both the
market price and the equilibrium quantity in the market. Co-evolutionary genetic
algorithms have been used for studying Cournot games since Arifovic [3] studied
the cobweb model. In contrast to the classical genetic algorithms used for
optimization, the co-evolutionary versions differ in the treatment of the
objective function. In a classical genetic algorithm the objective function to be
optimized is given beforehand, while in the co-evolutionary case the objective
function changes during the course of play, since it is based on the choices of
the players. So the players’ strategies and, consequently, the genetic algorithms
that are used to determine the players’ choices, co-evolve with the goals of these
algorithms, within the dynamic process of the system under consideration.
Arifovic [3] used four different co-evolutionary genetic algorithms to model
players’ learning and decision making: two single-population algorithms, where
each player’s choice is represented by a single chromosome in the population of
the single genetic algorithm that is used to determine the evolution of the
system, and two multi-population algorithms, where each player has its own
population of chromosomes and its own genetic algorithm to determine its
strategy. Arifovic links the chromosomes’ fitness to the profit established after
a round of play, during which the algorithms define the quantities that players
choose to produce and sell on the market. The quantities chosen define, in turn,
the total quantity and the price on the market, leading to a specific profit for
each player. Thus the fitness function depends on the actions of the players in
the previous round, and the co-evolutionary “nature” of the algorithms is
established.
In Arifovic’s algorithms [3], as well as in all other algorithms we use here,
each chromosome’s fitness is proportional to its profit, as given by
$\pi(q_{i})=Pq_{i}-c_{i}(q_{i})$ (1)
where $c_{i}(q_{i})$ is the player’s cost for producing $q_{i}$ items of
product and P is the market price, as determined by all players’ quantity
choices, from the inverse demand function
$P=a-b\sum_{i=1}^{n}q_{i}$ (2)
In Arifovic’s algorithms, populations are updated after every single Cournot game
is played, and the resulting strategies converge to the Walrasian (competitive)
equilibrium and not to the Nash equilibrium [2],[14]. Convergence to the competitive equilibrium
means that agents’ actions -as determined by the algorithm- tend to maximize
(1), with price regarded as given, instead of
$\max_{q_{i}}\pi(q_{i})=P(q_{i})q_{i}-c_{i}(q_{i})$ (3)
that gives the Nash Equilibrium in pure strategies [2]. Later variants of
Arifovic’s model [5],[7] share the same properties.
Vriend was the first to present a co-evolutionary genetic algorithm in which
the equilibrium price and quantity on the market -but not the strategies of
the individual players, as we will see later- converge to the respective values
of the Nash Equilibrium [15]. In his individual-learning, multi-population
algorithm, which is one of the two algorithms that we study -and transform- in
this article, a chromosome’s fitness is calculated only after the chromosome has
been used in a game, and the population is updated after a given number of games
have been played with the chromosomes of the current populations. Each player
has its own population of chromosomes, from which it picks one chromosome at
random to determine its quantity choice in the current round. The fitness of the
chromosome, based on the profit acquired from the current game, is then
calculated, and after a given number of rounds the population is updated by the
usual genetic algorithm operators (crossover and mutation). Since the
populations are updated separately, the algorithm is regarded as individual
learning. These settings yield Nash Equilibrium values for the total quantity on
the market and, consequently, for the price as well, as proven by Vallee and
Yildizoglou [14].
Finally Alkemade et al. [1] present the first (single population) social
learning algorithm that yields Nash Equilibrium values for the total quantity
and the price. The four players pick at random one chromosome from a single
population, in order to define their quantity for the current round. Then
profits are calculated and the fitness value of the active chromosomes is
updated, based on the profit of the player who has chosen them. The population
is updated by crossover and mutation, after all chromosomes have been used. As
Alkemade et al. [1] point out, the algorithm leads the total quantities and
the market price to the values corresponding to the NE for these measures.
## 2 The Models
In all the above models, researchers assume symmetric cost functions (all
players have identical cost functions), which implies that the Cournot games
studied are symmetric. Additionally, Vriend [15], Alkemade et al. [1] and
Arifovic [3] -in one of the models she investigates- use linear (and
decreasing) cost functions. If a symmetric Cournot Game has, in addition,
indivisibilities (discrete, but closed strategy sets), it is a pseudo-potential
game [6] and the following theorem holds:
Theorem 1. “Consider a n-player Cournot Game. We assume that the inverse
demand function P is strictly decreasing and log-concave; the cost function
$c_{i}$ of each firm is strictly increasing and left-continuous; and each
firm’s monopoly profit becomes negative for large enough $q$. The strategy
sets $S^{i}$, consisting of all possible levels of output producible by firm
$i$, are not required to be convex, but just closed. Under the above
assumptions, the Cournot Game has a Nash Equilibrium [in pure strategies]”
[6].
This theorem is relevant when one investigates Cournot Game equilibrium using
Genetic Algorithms, because a chromosome can have only a finite number of
values and, therefore, it is the discrete version of the Cournot Game that is
investigated, in principle. Of course, if one uses a dense enough discretization
of the strategy space, so that the NE value of the continuous version of the
Cournot Game is included in the chromosomes’ admissible values, then the NE of
the continuous and of the discrete version under investigation coincide.
In all three models we investigate in this paper, the assumptions of the above
theorem hold, and hence there is a Nash Equilibrium in pure strategies. We
investigate those models for the cases of $n=4$ and $n=20$ players.
The first model we use is the linear model used in [1]: The inverse demand is
given by
$P=256-Q$ (4)
with $Q=\sum_{i=1}^{n}q_{i}$, and the common cost function of the $n$ players
is
$c(q_{i})=56q_{i}$ (5)
The Nash Equilibrium quantity choice of each of the 4 players is $\hat{q}=40$
[1]. In the case of 20 players we have, by solving (3), $\hat{q}=9.5238$. The
second model has a polynomial inverse demand function,
$P=aQ^{3}+b$ (6)
and linear symmetric cost function
$c=xq_{i}+y$ (7)
If we assume $a<0$ and $x>0$ the demand and cost functions will be decreasing
and increasing, respectively, and the assumptions of theorem (1) hold. We set
$a=-1$, $b=7.36\times 10^{7}+10$, $x=y=10$, so $\hat{q}=20$ for $n=20$ and
$\hat{q}=86.9401$ for $n=4$.
Finally, in the third model, we use a radical inverse demand function
$P=aQ^{\frac{3}{2}}+b$ (8)
and the linear cost function (7). For $a=-1$, $b=8300$, $x=100$ and $y=10$
theorem (1) holds and $\hat{q}=19.3749$ for $n=20$, while $\hat{q}=82.2143$
for $n=4$.
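As a sanity check on the quoted equilibrium quantities, the sketch below solves
the symmetric first-order condition $P(nq)+q\,P'(nq)-c'(q)=0$ numerically for the
three models; it is a minimal illustration, not the genetic-algorithm procedure
used in the paper.

```python
from scipy.optimize import brentq

def symmetric_nash_q(P, dP, marginal_cost, n, q_hi=500.0):
    """Solve P(n q) + q P'(n q) - c'(q) = 0 for the symmetric Nash quantity."""
    foc = lambda q: P(n * q) + q * dP(n * q) - marginal_cost
    return brentq(foc, 1e-9, q_hi)

models = {
    # name: (P(Q), P'(Q), marginal cost c'(q))
    "linear":     (lambda Q: 256 - Q,             lambda Q: -1.0,          56.0),
    "polynomial": (lambda Q: -Q**3 + 7.36e7 + 10, lambda Q: -3 * Q**2,     10.0),
    "radical":    (lambda Q: -Q**1.5 + 8300,      lambda Q: -1.5 * Q**0.5, 10.0),
}

for name, (P, dP, mc) in models.items():
    for n in (4, 20):
        print(f"{name:10s} n={n:2d}  q_hat = {symmetric_nash_q(P, dP, mc, n):.4f}")
# Expected: linear 40 and 9.5238; polynomial 86.9401 and 20; radical 82.2143 and 19.3749.
```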
## 3 The Algorithms
We use two multi-population (each player has its own population of chromosomes
representing its alternative choices at any round) co-evolutionary genetic
algorithms: Vriend’s individual-learning algorithm [15] and co-evolutionary
programming, a similar algorithm that has been used for the game of prisoner’s
dilemma [10] and, unsuccessfully, for the Cournot duopoly [13]. Since those two
algorithms do not, as will be seen, lead to convergence to the NE in the models
under consideration, we also introduce two different versions of the algorithms,
which are characterized by the use of the opponents’ choices when the new
generation of each player’s chromosome population is created, and which can
therefore be regarded as “socialized” versions of the two algorithms. The
difference between the “individual” and the “social” learning versions of the
algorithms is that in the former case the population of each player is updated
on its own (i.e. only the chromosomes of the specific player’s population are
taken into account when the new generation is formed), while in the latter, all
chromosomes are copied into a common “pool”, then the usual genetic operators
(crossover and mutation) are used to form the new generation of that aggregate
population, and finally each chromosome of the new generation is copied back to
its corresponding player’s population. Thus we have “social learning”, since the
alternative strategic choices of a given player at a specific generation, as
given by the chromosomes that comprise its population, are affected by the
chromosomes (the “ideas”, so to speak) all other players had at the previous
generation.
Vriend’s individual-learning algorithm is presented in pseudo-code [14]:
1. “A set of strategies [chromosomes representing quantities] is randomly drawn
for each player.
2. While $Period<T$:
   (a) (If $Period\ mod\ GArate=0$): Using GA procedures {such as roulette wheel
selection, single random-point crossover and mutation, for generating a new set
of strategies for each player [15]}, a new set of strategies is created for each
firm.
   (b) Each player selects one strategy. The realized profit is calculated [and
the fitness of the corresponding chromosomes is defined, based on that profit].”
Co-evolutionary programming is quite similar, with the difference that the
random match-ups between the chromosomes of the players’ populations at a given
generation are finished when all chromosomes have participated in a game, and
then the population is updated, instead of having a parameter (GArate) that
defines the generations at which the population update takes place. The
algorithm, described in pseudo-code, is as follows [13]:
1. Initialize the strategy population of each player.
2. Choose one strategy from the population of each player at random, among the
strategies that have not already been assigned profits. Input the strategy
information to the tournament. The result of the tournament will decide profit
and fitness values for these chosen strategies.
3. Repeat step (2) until all strategies have a profit value assigned.
4. Apply the evolutionary operators [selection, crossover, mutation] to each
player’s population. Keep the best strategy of the current generation alive
(elitism).
5. Repeat steps (2)-(4) until the maximum number of generations has been reached.
In our implementation we do not use elitism. The reason is that, by using only
selection proportional to fitness, single (random) point crossover and, finally,
mutation with a fixed mutation rate for each chromosome bit throughout the
simulation, we ensure that the algorithms can be classified as canonical
economic GAs (Riechmann 2001) and that their underlying stochastic process
forms an ergodic Markov chain [12].
In order to ensure convergence to the Nash Equilibrium, we introduce the two
“social” versions of the above algorithms. Vriend’s multi-population algorithm
is transformed to:
1. A set of strategies [chromosomes representing quantities] is randomly drawn
for each player.
2. While $Period<T$:
   (a) (If $Period\ mod\ GArate=0$): Use GA procedures (roulette wheel selection,
single random-point crossover and mutation) to create a new generation of
chromosomes from a population consisting of the chromosomes belonging to the
union of the players’ populations. Copy the chromosomes of the new generation to
the corresponding players’ populations, to form a new set of strategies for each
player.
   (b) Each player selects one strategy. The realized profit is calculated (and
the fitness of the corresponding chromosomes is defined, based on that profit).
And social co-evolutionary programming is defined as:
1. Initialize the strategy population of each player.
2. Choose one strategy from the population of each player at random, among the
strategies that have not already been assigned profits. Input the strategy
information to the tournament. The result of the tournament will decide profit
values for these chosen strategies.
3. Repeat step (2) until all strategies are assigned a profit value.
4. Apply the evolutionary operators (selection, crossover, mutation) to the
union of the players’ populations. Copy the chromosomes of the new generation to
the corresponding players’ populations to form the new sets of strategies.
5. Repeat steps (2)-(4) until the maximum number of generations has been reached.
So the difference between the social and individual-learning variants is that the
chromosomes are first copied into an aggregate population, and the new generation
of chromosomes is formed from the chromosomes of this aggregate population (a
minimal sketch of this update step follows this paragraph). From an economic
point of view, this means that the players take into account their opponents’
choices when they update their sets of alternative strategies. So we have a
social variant of learning, and since each player has its own population, the
algorithms should be classified as “social multi-population economic Genetic
Algorithms” [11],[12]. It is important to note that the settings of the game
allow the players to observe their opponents’ choices after every game is played
and, consequently, to take them into account when they update their strategy sets.
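The sketch below is a minimal, self-contained illustration (not the authors’
exact implementation) of the distinction just described: the same canonical
operators — fitness-proportional selection, single-point crossover and bitwise
mutation — are applied either to each player’s population separately (individual
learning) or to the pooled chromosomes of all players (social learning), after
which the pooled offspring are redistributed to the players.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_generation(pop, fitness, p_mut):
    """Canonical GA step on a (K, L) bit array: roulette-wheel selection,
    single random-point crossover (crossover probability 1) and bitwise mutation."""
    K, L = pop.shape
    probs = fitness / fitness.sum()                         # assumes positive fitness
    parents = pop[rng.choice(K, size=(K, 2), p=probs)]      # pairs of parents
    cut = rng.integers(1, L, size=K)                        # crossover points
    mask = np.arange(L) < cut[:, None]
    children = np.where(mask, parents[:, 0, :], parents[:, 1, :])
    flip = rng.random((K, L)) < p_mut                       # bitwise mutation
    return np.where(flip, 1 - children, children)

def individual_update(pops, fitnesses, p_mut):
    """Each player's (K, L) population evolves on its own."""
    return np.stack([next_generation(p, f, p_mut) for p, f in zip(pops, fitnesses)])

def social_update(pops, fitnesses, p_mut):
    """All n*K chromosomes are pooled, evolved together, and redistributed."""
    n, K, L = pops.shape
    pooled = next_generation(pops.reshape(n * K, L), fitnesses.reshape(n * K), p_mut)
    return pooled.reshape(n, K, L)

# Tiny example: 4 players, 20 chromosomes of 20 bits, dummy positive fitnesses
# (in the actual algorithms the fitnesses come from the realized profits).
pops = rng.integers(0, 2, size=(4, 20, 20))
fit = rng.random((4, 20)) + 0.1
print(social_update(pops, fit, p_mut=0.01).shape)   # -> (4, 20, 20)
```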
It is not difficult to show that the stochastic processes of all the algorithms
presented here form regular Markov chains [9]. In the co-evolutionary programming
algorithms (both individual and social), since the match-ups are made at random,
the expected profit of the $j_{i}$-th chromosome of player $i$'s population,
$q_{ij_{i}}$, is (we assume $n$ players and $K$ chromosomes in each population)
$E[\pi(q_{ij_{i}})]=\frac{1}{K^{n-1}}\sum_{j_{1}=1}^{K}\ldots\sum_{j_{i-1}=1}^{K}\sum_{j_{i+1}=1}^{K}\ldots$
$\sum_{j_{n}=1}^{K}\pi(q_{ij_{i}};q_{1j_{1}},\ldots,q_{(i-1)(j_{i-1})},q_{(i+1)(j_{i+1})},\ldots,q_{nj_{n}})$
The expected profit in Vriend’s algorithm is [14]
$E[\pi(q_{ij};Q_{-i})]=\bar{p}q_{ij}-C(q_{ij})$
with
$\bar{p}=\sum_{l\neq i}p(q_{ij},\sum_{l}q_{lj})f(q_{lj}|GArate)$
where $f(q_{lj}|GArate)$ is the frequency of each individual strategy of the
other firms, conditioned on the strategy selection process and GArate.
Any fitness function that is defined on the profit of the chromosomes, either
proportional to profit, scaled or ordered, has a value that is solely
dependent on the chromosomes of the current population. And, since the
transition probabilities of the underlying stochastic process depend only on
the fitness and, additionally, the state of the chain is defined by the
chromosomes of the current population, the transition probabilities from one
state of the GA to another, are solely dependent on the current state (see
also [12]). The stochastic process of the populations is therefore, a Markov
Chain. And since the final operator used in all the algorithms presented here
is the mutation operator, there is a positive -and fixed- probability that any
bit of the chromosomes in the population is negated. Therefore any state (set
of populations) is reachable from any other state -in just one step actually-
and the chain is regular.
Having a Markov chain implies that the usual performance measures -namely the
mean value and variance- are not adequate for statistical inference, since the
observed values in the course of the genetic algorithm are inter-dependent. In a
regular Markov chain, however, one can estimate the limiting probabilities of the
chain by estimating the components of the fixed frequency vector the chain
converges to, via
$\hat{\pi_{i}}=\frac{N_{i}}{N}$ (9)
where $N_{i}$ is the number of observations in which the chain is at state $i$
and $N$ is the total number of observations [4]. In the algorithms presented
here, however, the number of states is extremely large. If we have $n$ players,
with $K$ chromosomes consisting of $l$ bits in each player’s population, the
total number of possible states is $2^{Knl}$, making the estimation of the
limiting probabilities of all possible states practically impossible. On the
other hand, one can estimate the limiting probability of one or more given states
without needing to estimate the limiting probabilities of all the other states.
A state of importance is the state in which all chromosomes of all populations
represent the Nash Equilibrium quantity (which is the same for all players, since
we have a symmetric game). We call this state the Nash State.
Another solution is the introduction of lumped states [9]. Lumped states are
disjoint aggregate states, each consisting of more than one state, whose union is
the entire state space. Although the resulting stochastic process is not
necessarily Markovian, the expected frequency of the lumped states can still be
estimated from (9). The definition of the lumped states can be based on the
average Hamming distance between the chromosomes in the populations and the
chromosome denoting the Nash Equilibrium quantity. Denoting by $q_{ij}$ the
$j^{th}$ chromosome of the $i^{th}$ player’s population, and by $NE$ the
chromosome denoting the Nash Equilibrium quantity, the Hamming distance
$d(q_{ij},NE)$ between $q_{ij}$ and $NE$ is equal to the number of bits that
differ in the two chromosomes, and the average Hamming distance of the
chromosomes in the populations from the Nash chromosome is
$\bar{d}=\frac{1}{nK}\sum_{i=1}^{n}\sum_{j=1}^{K}d(q_{ij},NE)$ (10)
where $n$ is the number of players in the game and $K$ is the number of
chromosomes in each player’s population. We define the $i^{th}$ lumped state
$S_{i}$ as the set of states $s_{i}$ in which the chromosomes’ average Hamming
distance from the Nash chromosome is greater than $i-1$ and less than or equal
to $i$:
Definition 1. $S_{i}=\\{s_{i}\,|\,i-1<\bar{d}\left(q_{ij}\in s_{i},NE\right)\leq
i\\}$, for $i=1,\ldots,L$.
The maximum value of $\bar{d}$ is equal to the maximum value of the Hamming
distance between a given chromosome and the Nash chromosome. The maximum distance
between two chromosomes is obtained when all bits differ, and it is equal to the
length of the chromosomes $L$. Therefore we have $L$ different lumped states
$S_{1},S_{2},\ldots,S_{L}$. We also define $S_{0}$ to be the individual Nash
state (the state reached when all populations consist only of the single
chromosome that corresponds to the Nash Equilibrium quantity), which gives us a
total of $L+1$ states. This ensures that the union of the $S_{i}$ is the entire
population space, and they therefore constitute a set of lumped states [9]. A
minimal sketch of this bookkeeping is given below.
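The lumped-state index of a set of populations can be computed directly from
Definition 1; the snippet below is a small sketch of that bookkeeping, assuming
the populations are stored as a 0/1 array of shape (n, K, L).

```python
import numpy as np

def lumped_state(pops, ne_chromosome):
    """Return the lumped-state index of an (n, K, L) bit array:
    0 for the Nash State, otherwise ceil(d_bar) per Definition 1."""
    d = (pops != ne_chromosome).sum(axis=2)      # Hamming distance of each chromosome to NE
    d_bar = d.mean()                             # average over all n*K chromosomes, Eq. (10)
    if d_bar == 0:
        return 0                                 # Nash State S_0
    return int(np.ceil(d_bar))                   # S_i with i-1 < d_bar <= i

# Example: 4 players, 20 chromosomes of length 20; NE chromosome 0101...01.
L = 20
ne = np.array([0, 1] * (L // 2))
rng = np.random.default_rng(1)
pops = rng.integers(0, 2, size=(4, 20, L))
print(lumped_state(pops, ne))                    # some index between 1 and L
print(lumped_state(np.tile(ne, (4, 20, 1)), ne)) # 0: the Nash State
```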
## 4 Simulation Settings
We use two variants of each of the three models in our simulations: one with
$n=4$ players and one with $n=20$ players. We use 20-bit chromosomes for the
$n=4$ case and 8-bit chromosomes for the $n=20$ case. The usual mechanism
[3],[15] is used to transform chromosome values into quantities. After an
arbitrary choice of the maximum quantity, the quantity that corresponds to a
given chromosome is given by:
$q=\frac{q_{max}}{2^{L}-1}\sum_{k=1}^{L}q_{ijk}2^{k-1}$ (11)
where $L$ is the length of the chromosome and $q_{ijk}$ is the value of the
$k$-th bit of the given chromosome (0 or 1). According to (11) the feasible
quantities lie in the interval $[0,q_{max}]$. By setting
$q_{max}=3\hat{q}$ (12)
where $\hat{q}$ is the Nash Equilibrium quantity of the corresponding model, we
ensure that the Nash Equilibrium of the continuous model is one of the feasible
solutions of the discrete model analyzed by the genetic algorithms, and
therefore that the NE of the discrete model is the same as the one of the
continuous case. Moreover, as can easily be proven by induction, the chromosome
corresponding to the Nash Equilibrium quantity is always $0101\ldots 01$,
provided that the chromosome length is an even number.
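A two-line check of this encoding (under the normalization $q_{max}/(2^{L}-1)$
used in Eq. (11), reading the chromosome most significant bit first) confirms
that the alternating chromosome decodes exactly to $\hat{q}$:

```python
def decode(bits, q_max):
    """Map a bit string (most significant bit first) to a quantity in [0, q_max]."""
    L = len(bits)
    return q_max / (2 ** L - 1) * int(bits, 2)

q_hat = 40.0                         # linear model, n = 4
print(decode("01" * 10, 3 * q_hat))  # 20-bit chromosome 0101...01 -> 40.0
```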
The $GArate$ parameter needed in the original and the “socialized” versions of
Vriend’s algorithm is set to $GArate=50$, an efficient value suggested in the
literature [15],[14]. We use single-point crossover, with the point at which the
chromosomes are combined [8] chosen at random. The probability of crossover is
always set to 1, i.e. all the chromosomes of a new generation are products of
the crossover operation between selected parents. The probability of mutating
any single bit of a chromosome is fixed throughout any given simulation,
something that ensures the homogeneity of the underlying Markov process. The
values that have been used (for both cases of $n=4$ and $n=20$) are
$p_{m}=0.1,0.075,\ldots,0.000025,0.00001.$
We used populations consisting of
$pop=20,30,40,50$
chromosomes. These choices were made after preliminary tests that evaluated the
convergence properties of the algorithms for various population sizes, and they
are in accordance with the population sizes used in the literature ([15],[1], etc.).
Finally, the maximum numbers of generations that a given simulation runs were
$T=10^{3},2\times 10^{3},5\times 10^{3},10^{4},2\times 10^{4},5\times 10^{4}$
Note that the total number of iterations (number of games played) of Vriend’s
individual and social algorithms is $GArate$ times the number of generations,
while in the co-evolutionary programming algorithms it is the number of
generations times the number of chromosomes in a population, which is the number
of match-ups per generation.
We ran 300 independent simulations for each set of settings for all the
algorithms, so that the test statistics and the expected time to reach the Nash
Equilibrium (NE state, or first game with NE played) are estimated effectively.
## 5 Presentation of Selected Results
Although the individual-learning versions of the two algorithms led the
estimated expected value of the average quantity (as given in eq.(13))
$\bar{Q}=\frac{1}{nT}\sum_{t=1}^{T}\sum_{i=1}^{n}q_{it}$ (13)
($T=$ number of iterations, $n=$ number of players) close to the corresponding
average quantity of the NE, the strategies of the individual players converged
to different quantities. This fact can be seen in figures 1 to 3, which show the
outcome of some representative runs of the two individual-learning algorithms in
the polynomial model (6). The trajectory of the average market quantity in
Vriend’s algorithm,
$Q=\frac{1}{n}\sum_{i=1}^{n}q_{it}$ (14)
(calculated by (14) and shown in figure 1), is quite similar to the trajectory of
the same measure in the co-evolutionary case, so the figure for the second case
is omitted. The estimated average values of the two measures (eq.(13)) were
86.2807 and 88.5472 respectively, while the NE quantity in the polynomial model
(6) is 86.9401. The unbiased estimators of the standard deviation of $Q$
(eq.(15)) were 3.9776 and 2.6838, respectively.
$s_{Q}=\frac{1}{T-1}\sum_{t=1}^{T}(Q_{t}-\bar{Q})^{2}$ (15)
Figure 1: Mean Quantity in one execution of Vriend’s individual learning
algorithm in the polynomial model for $n=4$ players.
$pop=50,GArate=50,p_{cr}=1,p_{mut}=0.01,T=2,000$ generations.
The evolution of the individual players’ strategies can be seen in figures 2
and 3.
Figure 2: Players’ quantities in one execution of Vriend’s individual learning
algorithm in the polynomial model for $n=4$ players.
$pop=50,GArate=50,p_{cr}=1,p_{mut}=0.01,T=2,000$ generations. Figure 3:
Players’ quantities in one execution of the individual - learning version of
the co-evolutionary programming algorithm in the polynomial model for $n=4$
players. $pop=50,p_{cr}=1,p_{mut}=0.01,T=2,000$ generations.
The estimates of the mean values of each player’s quantities (calculated by
eq.(16))
$\bar{q}_{i}=\frac{1}{T}\sum_{t=1}^{T}q_{it}$ (16)
are given in table 1, while the frequencies of the lumped states in these
simulations are given in table 2.
Player | Vriend’s algorithm | Co-evol. programming
---|---|---
1 | 91.8309 | 77.6752
2 | 65.3700 | 97.8773
3 | 93.9287 | 93.9287
4 | 93.9933 | 93.9933
Table 1: Mean values of players’ quantities in two runs of the individual-learning algorithms in the polynomial model for $n=4$ players. $pop=50,GArate=50,p_{cr}=1,p_{mut}=0.01,T=2,000$ generations.
| $s_{0}$ | $s_{1}$ | $s_{2}$ | $s_{3}$ | $s_{4}$ | $s_{5}$ | $s_{6}$ | $s_{7}$ | $s_{8}$ | $s_{9}$ | $s_{10}$
---|---|---|---|---|---|---|---|---|---|---|---
VI | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | .8725 | .0775
| $s_{11}$ | $s_{12}$ | $s_{13}$ | $s_{14}$ | $s_{15}$ | $s_{16}$ | $s_{17}$ | $s_{18}$ | $s_{19}$ | $s_{20}$ |
| .05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $s_{0}$ | $s_{1}$ | $s_{2}$ | $s_{3}$ | $s_{4}$ | $s_{5}$ | $s_{6}$ | $s_{7}$ | $s_{8}$ | $s_{9}$ | $s_{10}$
CP | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | .0025 | .1178 | .867
| $s_{11}$ | $s_{12}$ | $s_{13}$ | $s_{14}$ | $s_{15}$ | $s_{16}$ | $s_{17}$ | $s_{18}$ | $s_{19}$ | $s_{20}$ |
| .0127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Table 2: Lumped states frequencies in two runs of the individual-learning
algorithms in the polynomial model for $n=4$ players.
$pop=50,p_{cr}=1,p_{mut}=0.01,T=100,000$ generations.
This significant difference between the mean values of the players' quantities was observed in all simulations of the individual-learning algorithms, in all models, for both $n=4$ and $n=20$, and for all the parameter sets used (which were described in the previous section). We used a sample of 300 simulation runs for each parameter set and model for hypothesis testing. The hypothesis $H_{0}:\bar{Q}=q_{Nash}$ was accepted at $a=.05$ in all cases. On the other hand, the hypotheses $H_{0}:\bar{q}_{i}=q_{Nash}$ were rejected for all players in all models, where $a=.05$ is the probability of rejecting the hypothesis under the assumption that it is correct. Not a single Nash Equilibrium game was played in any of the simulations of the two individual-learning algorithms.
In the social-learning versions of the two algorithms, both hypotheses $H_{0}:\bar{Q}=q_{Nash}$ and $H_{0}:\bar{q}_{i}=q_{Nash}$ were accepted at $a=.05$, for all models and parameter sets. We again used a sample of 300 independent simulations for every parameter set in those cases.
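The tests above can be reproduced with a few lines of Python. The sketch below is only an illustration: it applies a one-sample t-test, one standard choice for testing a mean against a fixed value (the paper does not state the exact test statistic used), and the array `Q_bar_runs` of 300 per-run averages and the value `q_nash` are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: one average quantity per simulation run (300 runs)
# and the known Nash Equilibrium quantity of the model under study.
rng = np.random.default_rng(1)
Q_bar_runs = rng.normal(loc=86.9, scale=2.0, size=300)  # placeholder data
q_nash = 86.9401

# Test H0: E[Q_bar] = q_nash at significance level a = .05.
t_stat, p_value = stats.ttest_1samp(Q_bar_runs, popmean=q_nash)
print("reject H0" if p_value < 0.05 else "accept H0", t_stat, p_value)
```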
The evolution of the individual players' quantities in one simulation of the social-learning version of Vriend's algorithm on the polynomial model (the analogue of fig. 2) is shown in fig. 4.
Figure 4: Players’ quantities in one execution of the social - learning
version of Vriend’s algorithm in the polynomial model for $n=4$ players.
$pop=40,GArate=50,p_{cr}=1,p_{mut}=0.00025,T=10,000$ generations.
Notice that all the players' quantities now have practically the same mean values (eq. (16)).
The mean values of the individual players' quantities for $pop=40,p_{cr}=1,p_{mut}=0.00025,T=10,000$ generations are given in Table 3, for one simulation of each of the algorithms (social and individual versions).
Player | Social | Social | Individual | Individual
---|---|---|---|---
| Vriend’s alg. | Co-evol. prog. | Vriend’s alg. | Co-evol. prog.
1 | 86.9991 | 87.0062 | 93.7536 | 97.4890
2 | 86.9905 | 87.0089 | 98.4055 | 74.9728
3 | 86.9994 | 87.0103 | 89.4122 | 82.4704
4 | 87.0046 | 86.9978 | 64.6146 | 90.4242
Table 3: Mean values of players' quantities in single runs of the social- and individual-learning versions of the two algorithms in the polynomial model for $n=4$ players. $pop=40,p_{cr}=1,p_{mut}=0.00025,T=10,000$ generations.
On the issue of establishing NE in some of the games played and of reaching the Nash state (all chromosomes of every population equal to the chromosome corresponding to the NE quantity), there are two alternative outcomes. For one subset of the parameter sets, the social-learning algorithms managed to reach the NE state, and in a significant fraction of the games played all players used the NE strategy (these parameter sets are shown in Table 4).
Model | Algorithm | pop | $p_{mut}$ | T
---|---|---|---|---
4-Linear | Vriend | 20-40 | $.001-.0001$ | $\geq 5000$
4-Linear | Co-evol | 20-40 | $.001-.0001$ | $\geq 5000$
20-Linear | Vriend | 20 | $.00075-.0001$ | $\geq 5000$
20-Linear | Co-evol | 20 | $.00075-.0001$ | $\geq 5000$
4-poly | Vriend | 20-40 | $.001-.0001$ | $\geq 5000$
4-poly | Co-evol | 20-40 | $.001-.0001$ | $\geq 5000$
20-poly | Vriend | 20 | $.00075-.0001$ | $\geq 5000$
20-poly | Co-evol | 20 | $.00075-.0001$ | $\geq 5000$
4-radic | Vriend | 20-40 | $.001-.0001$ | $\geq 5000$
4-radic | Co-evol | 20-40 | $.001-.0001$ | $\geq 5000$
20-radic | Vriend | 20 | $.00075-.0001$ | $\geq 5000$
20-radic | Co-evol | 20 | $.00075-.0001$ | $\geq 5000$
Table 4: Parameter sets that yield NE. Holds true for both social - learning
algorithms.
In the cases where the mutation probability was too large, the “Nash” chromosomes were altered significantly and therefore the populations could not converge to the NE state (within the given number of iterations). On the other hand, when the mutation probability was too low, the number of iterations was not enough for convergence. A larger population requires more generations to converge to the “NE state” as well. The estimators of the limiting probabilities for representative cases of these two outcomes (a run that did not reach the NE state and one that did) are given in Table 5.
| $s_{0}$ | $s_{1}$ | $s_{2}$ | $s_{3}$ | $s_{4}$ | $s_{5}$ | $s_{6}$ | $s_{7}$ | $s_{8}$ | $s_{9}$ | $s_{10}$
---|---|---|---|---|---|---|---|---|---|---|---
No NE | 0 | 0 | .6448 | .3286 | .023 | .0036 | 0 | 0 | 0 | 0 | 0
| $s_{11}$ | $s_{12}$ | $s_{13}$ | $s_{14}$ | $s_{15}$ | $s_{16}$ | $s_{17}$ | $s_{18}$ | $s_{19}$ | $s_{20}$ |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $s_{0}$ | $s_{1}$ | $s_{2}$ | $s_{3}$ | $s_{4}$ | $s_{5}$ | $s_{6}$ | $s_{7}$ | $s_{8}$ | $s_{9}$ | $s_{10}$
NE | .261 | .4332 | .2543 | .0515 | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $s_{11}$ | $s_{12}$ | $s_{13}$ | $s_{14}$ | $s_{15}$ | $s_{16}$ | $s_{17}$ | $s_{18}$ | $s_{19}$ | $s_{20}$ |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Table 5: Lumped-state frequencies in a run of a social-learning algorithm that could not reach NE and in another that reached it. 20 players, polynomial model, Vriend's algorithm, $pop=20$ and $T=10,000$ in both cases, $p_{mut}=.001$ in the $1^{st}$ case, $p_{mut}=.0001$ in the $2^{nd}$.
Evidently, the Nash state $s_{0}$ has nonzero frequency in the simulations that reach it. The estimated time needed to reach the Nash state (in generations), the time needed to return to it after departing from it, and the percentage of total games played that were played at the NE, are presented in Table 6. (Notation for Table 6: $GenNE=$ average number of generations needed to reach $s_{0}$, starting from populations having all chromosomes equal to the opposite of the NE chromosome, over the 300 simulations; $RetTime=$ interarrival time of $s_{0}$, i.e. the average number of generations needed to return to $s_{0}$, over the 300 simulations; $NEGames=$ percentage of games played that were NE games, over the 300 simulations.)
Model | Algorithm | pop | $p_{mut}$ | T | Gen NE | Ret Time | NE Games
---|---|---|---|---|---|---|---
4-Linear | Vriend | 30 | $.001$ | 10,000 | 3,749.12 | 3.83 | 5.54
4-Linear | Co-evol | 40 | $.0005$ | 10,000 | 2,601.73 | 6.97 | 73.82
20-Linear | Vriend | 20 | $.0005$ | 20,000 | 2,712.45 | 6.83 | 88.98
20-Linear | Co-evol | 20 | $.0001$ | 20,000 | 2,321.32 | 6.53 | 85.64
4-poly | Vriend | 40 | $.00025$ | 10,000 | 2,483.58 | 3.55 | 83.70
4-poly | Co-evol | 40 | $.0005$ | 10,000 | 2,067.72 | 8.77 | 60.45
20-poly | Vriend | 20 | $.0005$ | 20,000 | 2,781.24 | 9.58 | 67.60
20-poly | Co-evol | 20 | $.0005$ | 50,000 | 2,297.72 | 6.63 | 83.94
4-radic | Vriend | 40 | $.00075$ | 10,000 | 2,171.32 | 4.41 | 81.73
4-radic | Co-evol | 40 | $.0005$ | 10,000 | 2,917.92 | 5.83 | 73.69
20-radic | Vriend | 20 | $.0005$ | 20,000 | 2,136.31 | 7.87 | 75.34
20-radic | Co-evol | 20 | $.0005$ | 20,000 | 2,045.81 | 7.07 | 79.58
Table 6: Markov and other statistics for NE.
We have seen that the original individual-learning versions of the multi-population algorithms do not lead to convergence of the individual players' choices to the Nash Equilibrium quantity. On the contrary, the “socialized” versions introduced here accomplish that goal and, for a given set of parameters, establish a very frequent Nash state, making games with NE quite frequent as well during the course of the simulations. The statistical tests employed indicate that the expected quantities chosen by players converge to the NE in the social-learning versions, while that convergence is not achieved in the individual-learning versions of the two algorithms. Therefore it can be argued that the learning process is qualitatively better in the case of social learning. The ability of the players to take their opponents' strategies into consideration when they update their own, and to base their new choices on the totality of ideas used in the previous period (as in [1]), forces the strategies under consideration to converge to one another, and to the NE strategy as well. Of course this option would not be possible if the profit functions of the individual players were not the same or, to state that condition in an equivalent way, if there were no symmetry in the cost functions. If the cost functions are symmetric, a player can take note of its opponents' realized strategies in the course of play and use them as they are when updating his own ideas, since the effect of these strategies on his individual profit will be the same. Therefore the inadequate learning process of individually based learning can be remedied in the symmetric case. One should note that the convergence to almost identical values displayed in the representative cases above holds for every parameter set used, in all the models presented in this paper.
The stability properties of the algorithms are identified by the frequencies of the lumped states and by the expected inter-arrival times estimated in the previous section (Table 6). The inter-arrival times of the representative cases shown there are less than 10 generations, and they were in the same range when the other parameter sets that yielded convergence to the “Nash state” were used. The frequencies of the lumped states show that the “Nash state” $s_{0}$ was quite frequent (in the cases where it was reached, of course) and that the populations whose chromosomes differ, on average, by less than one bit from the Nash state define the most frequent lumped state ($s_{1}$). As a matter of fact, the sum of the frequencies of these two lumped states, $s_{0}$ and $s_{1}$, was usually higher than $.90$. As has already been shown [4], the estimators of the limiting probabilities calculated by (9), and presented for given simulation runs in Tables 2 and 5, are unbiased and efficient estimators of the expected frequencies of the algorithm's performance ad infinitum. The high expected frequencies of the lumped states that are “near” the NE and the low inter-arrival time of the NE state itself ensure the stability of the algorithms.
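For completeness, the following minimal Python sketch shows how the lumped-state frequencies and the inter-arrival (return) time of the Nash state could be estimated from a recorded run; the list `states` of lumped-state indices is a hypothetical placeholder, not output of the actual simulations.

```python
from collections import Counter

# Hypothetical recorded trajectory of lumped states, one entry per generation
# (0 denotes the Nash state s_0, 1 denotes s_1, and so on).
states = [2, 1, 0, 0, 1, 0, 3, 1, 0, 0]  # placeholder data

# Estimated limiting probabilities: relative visit frequencies of each state.
freq = {s: c / len(states) for s, c in Counter(states).items()}

# Inter-arrival (return) time of s_0: average gap between successive visits.
visits = [t for t, s in enumerate(states) if s == 0]
gaps = [b - a for a, b in zip(visits, visits[1:])]
ret_time = sum(gaps) / len(gaps) if gaps else float("inf")

print(freq, ret_time)
```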
Using these two algorithms as heuristics to discover unknown NE requires a way to distinguish the potential Nash Equilibrium chromosomes. When VS (the social-learning version of Vriend's algorithm) or CS (the social-learning version of co-evolutionary programming) converges, in the sense mentioned above, to the “Nash state”, most chromosomes in the populations of several of the generations at the end of the simulation should be identical or almost identical (differing in a small number of bits) to the Nash Equilibrium chromosome. Using this qualitative rule, one should be able to find some potential chromosomes to check for Nash Equilibrium. A more precise way would be to record the games in which all players used the same quantity. Since symmetric profit functions imply a symmetric NE, one can confine attention to these games among all the games played. In order to check whether any of these quantities is the NE quantity, one could assume that all but one player use that quantity and then solve (analytically, numerically or by a heuristic, depending on the complexity of the model investigated) the single-variable maximization problem for the remaining player's profit, given that the other players choose the quantity under consideration. If the solution of the problem is the same quantity, then that quantity is the Nash Equilibrium quantity.
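A minimal sketch of this best-response check is given below. It is only an illustration: the linear-demand profit function `profit` is a hypothetical stand-in (it is not the polynomial model (6) or any other model used in the paper), and the candidate quantity is checked by numerically maximizing one player's profit while the opponents are held fixed at that quantity.

```python
from scipy.optimize import minimize_scalar

def profit(q_i, q_others, a=256.0, b=1.0, c=56.0):
    """Hypothetical symmetric Cournot profit: linear demand, linear cost."""
    price = max(a - b * (q_i + sum(q_others)), 0.0)
    return price * q_i - c * q_i

def is_nash_candidate(q_cand, n=4, tol=1e-3):
    """Accept q_cand if it is (approximately) a best response to all
    opponents playing q_cand, i.e. a symmetric Nash Equilibrium quantity."""
    others = [q_cand] * (n - 1)
    res = minimize_scalar(lambda q: -profit(q, others),
                          bounds=(0.0, 200.0), method="bounded")
    return abs(res.x - q_cand) < tol

# For this hypothetical model the symmetric NE quantity is (a-c)/(b(n+1)) = 40.
print(is_nash_candidate(40.0), is_nash_candidate(25.0))  # True False
```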
## 6 Conclusions
We have seen that the social-learning multi-population algorithms introduced here lead to convergence of the individual quantities to the Nash Equilibrium quantity in several Cournot models. That convergence was achieved for given parameter sets (mutation probability, number of generations, etc.) and holds in a “Lyapunov” sense, i.e. the strategies chosen fluctuated inside a region around the NE, while their expected values were equal (as shown by a series of statistical tests) to the desired value. This property, which does not hold for the individual-learning variants of the two algorithms, allows one to construct heuristic algorithms to discover an unknown Nash Equilibrium in symmetric games, provided the parameters used are suitable and the NE belongs to the feasible set of the chromosomes' values. Finally, the stability properties of the social-learning versions of the algorithms allow one to use them as modeling tools in a multi-agent learning environment that leads to effective learning of the Nash strategy. Paths for future research could include simulating these algorithms for different bit-lengths of the chromosomes in the populations, since the use of more bits for chromosome encoding implies more feasible values for the chromosomes and therefore makes the inclusion of an unknown NE in these sets more probable. Another idea would be to use different models, especially models that do not have a unique NE. Finally, one could try to apply the algorithms introduced here to other game-theoretic problems.
Acknowledgements
Funding by the EU Commission through COMISEF MRTN-CT-2006-034270 is gratefully
acknowledged. Mattheos Protopapas would also like to thank Professors Peter
Winker, Manfred Gilli, Dietmar Maringer and Thomas Wagner for their extremely
helpful courses.
References
* [1]
Alkemade F, La Poutre H, Amman H (2007) On Social Learning and Robust
Evolutionary Algorithm Design in the Cournot Oligopoly Game. Comput Intell 23:
162–175.
* [2]
Alos-Ferrer C, Ania A (2005) The Evolutionary Stability of Perfectly
Competitive Behavior. Econ Theor 26: 497–516.
* [3]
Arifovic J (1994) Genetic Algorithm Learning and the Cobweb Model. J Econ
Dynam Contr 18: 3–28.
* [4]
Basawa IV, Rao P (1980) Statistical Inference for Stochastic Processes.
Academic Press, London.
* [5]
Dawid H, Kopel M (1998) On Economic Applications of the Genetic Algorithm: A
Model of the Cobweb Type. J Evol Econ 8: 297–315.
* [6]
Dubey P, Haimanko O, Zapechelnyuk A (2006) Strategic Complements and Substitutes and Potential Games. Game Econ Behav 54: 77–94.
* [7]
Franke R (1998) Coevolution and Stable Adjustments in the Cobweb Model. J Evol
Econ 8: 383–406.
* [8]
Goldberg DE (1989) Genetic Algorithms in Search, Optimization and Machine
Learning. Addison - Wesley, Reading MA.
* [9]
Kemeny J, Snell J (1960) Finite Markov Chains. D. Van Nostrand Company Inc., Princeton, NJ.
* [10]
Price TC (1997) Using Co-Evolutionary Programming to Simulate Strategic
Behavior in Markets. J Evol Econ 7: 219–254.
* [11]
Riechmann T (1999) Learning and Behavioral Stability. J Evol Econ 9: 225–242.
* [12]
Riechmann T (2001) Genetic Algorithm Learning and Evolutionary Games. J Econ
Dynam Contr 25: 1019–1037.
* [13]
Son YS, Baldick R (2004) Hybrid Coevolutionary Programming for Nash
Equilibrium Search in Games with Local Optima. IEEE Trans Evol Comput 8:
305–315.
* [14]
Vallee T, Yildizoglou M (2007) Convergence in Finite Cournot Oligopoly with
Social and Individual Learning. Working Papers of GRETha, 2007-07. Available
by GRETha ( http://www.gretha.fr ) Accessed 10 November 2007.
* [15]
Vriend N (2000) An Illustration of the Essential Difference between Individual
and Social Learning, and its Consequences for Computational Analyses. J Econ
Dynam Contr 24: 1–19.
|
arxiv-papers
| 2009-05-22T19:07:21 |
2024-09-04T02:49:02.839895
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Mattheos K. Protopapas, Elias B. Kosmatopoulos, Francesco Battaglia",
"submitter": "Mattheos Protopapas",
"url": "https://arxiv.org/abs/0905.3640"
}
|
0905.3695
|
# Semiclassical suppression of weak anisotropies of a generic Universe
Marco Valerio Battisti battisti@icra.it ICRA - International Center for
Relativistic Astrophysics Centre de Physique Théorique de Luminy, Université
de la Méditerranée F-13288 Marseille, FR Dipartimento di Fisica (G9),
Università di Roma “Sapienza” P.le A. Moro 5 00185 Rome, IT Riccardo
Belvedere riccardo.belvedere@icra.it ICRA - International Center for
Relativistic Astrophysics Dipartimento di Fisica (G9), Università di Roma
“Sapienza” P.le A. Moro 5 00185 Rome, IT Giovanni Montani montani@icra.it
ICRA - International Center for Relativistic Astrophysics Dipartimento di
Fisica (G9), Università di Roma “Sapienza” P.le A. Moro 5 00185 Rome, IT ENEA
C.R. Frascati (Dipartimento FPN), Via E. Fermi 45 00044 Frascati Rome, IT
ICRANET C.C. Pescara, P.le della Repubblica 10 65100 Pescara, IT
###### Abstract
A semiclassical mechanism which suppresses the weak anisotropies of an
inhomogeneous cosmological model is developed. In particular, a wave function
of this Universe having a meaningful probabilistic interpretation is obtained
that is in agreement with the Copenhagen School. It describes the evolution of
the anisotropies with respect to the isotropic scale factor which is regarded
as a semiclassical variable playing an observer-like role. Near the
cosmological singularity the solution spreads over all values of the
anisotropies while, when the Universe expands sufficiently, the closed
Friedmann-Robertson-Walker model appears to be the favorite state.
###### pacs:
98.80.Qc; 04.60.Ds; 04.60.Bc
## I Introduction
Quantum cosmology denotes the application of the quantum theory to the entire
Universe as described by cosmological models (for reviews see QC ). All but a
finite number of degrees of freedom are frozen out by imposing symmetries
(homogeneity or isotropy) and the resulting finite dimensional configuration
space of the theory is known as minisuperspace. The resulting framework is
thus a natural arena to test ideas and constructions introduced in the (not
yet found) quantum theory of gravity.
In quantum cosmology the Universe is described by a single wave function
$\Psi$ providing puzzling interpretations as soon as the differences with
respect to ordinary quantum mechanics are addressed Vil ; pqc (see also Pin
). Quantum cosmology differs from ordinary quantum mechanics in the following two respects. (i) The analyzed model is the Universe as a whole, so there is no longer an a priori splitting between classical and quantum worlds. No external measurement apparatus is available, and an internal one cannot play the observer-like role because of the extreme conditions to which a primordial Universe is subjected. (ii)
In general relativity time is an arbitrary label and clocks, being parts of
the Universe, are also described by the wave function $\Psi$. Time is thus
included in the configuration space and the integral of $|\Psi|^{2}$ over the
whole minisuperspace diverges as in quantum mechanics when the time coordinate
is included in the configuration-space element. As a result the standard
interpretation of quantum mechanics (the Copenhagen interpretation) does not
work in quantum cosmology. Observations in the sense of ordinary quantum theory can take place only on a given (space-time) background structure.
In this work a wave function of a generic inhomogeneous Universe, which has a
clear probabilistic interpretation, is obtained. It can be meaningfully
interpreted because of a separation between semiclassical degrees of freedom,
in the Wentzel-Kramers-Brillouin (WKB) sense, and quantum ones. In particular,
the quantum dynamics of weak anisotropies (the physical degrees of freedom of
the Universe) is traced with respect to the isotropic scale factor which plays
an observer-like role as soon as the Universe expands sufficiently.
A generic inhomogeneous cosmological model, describing a Universe in which any
specific symmetry has been removed, represents a generic cosmological solution
of the Einstein field equations BKL . Belinski-Khalatnikov-Lifshitz (BKL)
showed that such a geometry evolves asymptotically to the singularity as an
ensemble, one for each causal horizon, of independent Bianchi IX homogeneous
Universes Mis . This model represents the best description we have of the
(classical) physics near a space-like cosmological singularity.
Our main result is that the wave function of the Universe is spread over all
values of anisotropies near the cosmological singularity, but it is
asymptotically peaked around the isotropic configuration. The closed
Friedmann-Robertson-Walker (FRW) cosmological model is then the naturally
privileged state as soon as a sufficiently large volume of the Universe is
taken into account. A semiclassical isotropization mechanism for the Universe
is thus predicted.
This model can be regarded as a concrete implementation, to a physically
interesting cosmological problem, of the semiclassical approach to quantum
cosmology Vil . An isotropization mechanism is in fact necessary to explain
the transition between a very early Universe and the observed one. The
isotropic FRW model accurately describes the evolution of the Universe until decoupling time, i.e. until $10^{-3}-10^{-2}s$ after the big-bang Kolb . On the other hand, the description of its primordial stages requires more general models. It is thus fundamental to recover a mechanism which can match these two cosmological epochs. Although many efforts have been made within the classical theory KM ; iso (especially through the use of the inflaton field), no quasiclassical (or purely quantum) isotropization mechanism has yet been developed in detail (for different attempts see qiso ). In particular, this work improves on MR , in which no clear (unique) probabilistic interpretation of the wave function could be formulated at all.
The paper is organized as follows. In Section II the wave function of a
generic Universe and its interpretation are analyzed. Section III is devoted
to the study of an isotropization mechanism. Finally, in Section IV the
validity of the model is discussed. Concluding remarks follow.
We adopt natural units $\hbar=c=1$ except where the classical limits are discussed.
## II Wave function in quantum cosmology
In this Section a wave function of a generic inhomogeneous Universe, which has
a meaningful probabilistic interpretation, is described. There are some
reliable indications that the early stages of the Universe evolution are
characterized by such a degree of generality KM ; ben . In the quantum regime,
dealing with the absence of global symmetry, it is however required by
indeterminism. On different causal regions the geometry has to fluctuate
independently so preventing global isometries.
The dynamics of a generic cosmological model is summarized, asymptotically to
the cosmological singularity, in the action
$I=\int_{\mathcal{M}}dtd^{3}x\left(p_{i}\partial_{t}q^{i}-N\mathcal{H}\right)$
ben (for a review see rev ). Here $q^{i}$ are the three scale factors,
$p_{i}$ the three conjugate momenta, $N$ the lapse function and
$\mathcal{H}=0$ is the scalar constraint which, in the Misner scheme, reads
$\mathcal{H}(x^{i})=\kappa\left[-\frac{p_{a}^{2}}{a}+\frac{1}{a^{3}}\left(p_{+}^{2}+p_{-}^{2}\right)\right]+\frac{a}{4\kappa}V(\beta_{\pm})+U(a)=0.$
(1)
The potential term $V(\beta_{\pm})$ accounts for the spatial curvature of the
model and is given by
$V=\lambda_{1}^{2}\left(e^{-8\beta_{+}}-2e^{4\beta_{+}}\right)+\lambda_{2}^{2}\left(e^{4(\beta_{+}+\sqrt{3}\beta_{-})}-2e^{-2(\beta_{+}-\sqrt{3}\beta_{-})}\right)+\lambda_{3}^{2}\left(e^{4(\beta_{+}-\sqrt{3}\beta_{-})}-2e^{-2(\beta_{+}+\sqrt{3}\beta_{-})}\right)+\lambda_{in}^{2}.$ (2)
In this expression $l_{in}^{i}=a/\lambda_{i}$
($\lambda_{i}=\lambda_{i}(x^{i})$) denotes the co-moving physical scale of
inhomogeneities BM06 ,
$\lambda^{2}=\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}$ and $\kappa=8\pi
G$ is the Einstein constant. The function $a=a(t,x^{i})$ describes the
isotropic expansion of the Universe while its shape changes (the anisotropies)
are associated with $\beta_{\pm}=\beta_{\pm}(t,x^{i})$. It is worth remarking on an important feature of such a model. Via the BKL scenario BKL the dynamics of a generic inhomogeneous Universe reduces, point by point, to that of a Bianchi IX model. More precisely, the spatial points dynamically decouple toward the singularity, and a generic Universe is thus described by a collection of causal regions, each of which evolves independently as a homogeneous model, in general as Bianchi IX. This picture holds as long as the inhomogeneities are stretched beyond the cosmological horizon $l_{h}\sim t$, i.e. as long as the inequality $l_{in}\gg l_{h}$ holds BKL . In each space point the
phase space is then six dimensional with coordinates
$(a,p_{a},\beta_{\pm},p_{\pm})$ and the cosmological singularity appears as
$a\rightarrow 0$. In this system the matter terms are regarded as negligible with
respect to the cosmological constant, i.e. the isotropic potential $U$ in (1)
reads
$U(a)=-\frac{a\lambda^{2}}{4\kappa}+\frac{\Lambda a^{3}}{\kappa}$ (3)
$\Lambda$ being the cosmological constant. Far enough from the singularity the
cosmological constant term dominates over the ordinary matter fields. Such a
contribution is necessary in order for the inflationary scenario to take place
KM ; iso ; Kolb .
As we said, a correct definition of probability (positive semidefinite) in
quantum cosmology can be formulated by distinguishing between semiclassical
and quantum variables Vil . Variables satisfying the Hamilton-Jacobi equation
are regarded as semiclassical and it is assumed that the semiclassical
dynamics is not affected by quantum dynamics. In this respect the quantum
variables describe a small subsystem of the Universe. It is thus natural to
regard the isotropic scale factor $a$ as semiclassical and to consider the
anisotropies $\beta_{\pm}$ (the two physical degrees of freedom of the
Universe) as quantum variables. In other words, we assume ab initio that the
radius of the Universe is of different nature with respect to its shape
changes. As we will see, the isotropic part of the scalar constraint becomes
semiclassical before the anisotropic one. In agreement with such a reasoning
the wave functional of the Universe $\Psi=\Psi(a,\beta_{\pm})$ reads Vil
$\Psi\stackrel{{\scriptstyle a\rightarrow
0}}{{\longrightarrow}}\prod_{i}\Psi_{i}(x^{i}),\quad\Psi_{i}=\psi_{0}\chi=A(a)e^{iS(a)}\chi(a,\beta_{\pm})$
(4)
where the factorization is due to the decoupling of the spatial points. This wave
function is WKB-like in $a$ (amplitude and phase depend only on the
semiclassical variable). The additional function $\chi$ depends on the quantum
variables $\beta_{\pm}$ and parametrically only, in the sense of the Born-
Oppenheimer approximation, on the scale factor. The effects of the
anisotropies on the Universe expansion, as well as the effects of the
electrons on the dynamics of nuclei, are thus regarded as negligible.
The canonical quantization of this model is achieved by the use of the Dirac
prescription for quantizing constrained systems HT , i.e. imposing that the
physical states are those annihilated by the self-adjoint operator
$\hat{\mathcal{H}}$ corresponding to the classical counterpart (1). We
represent the minisuperspace canonical commutation relations
$[\hat{q}_{i},\hat{p}_{j}]=i\delta_{ij}$ in the coordinate space, i.e.
$\hat{q}_{i}$ and $\hat{p}_{i}$ act as multiplicative and derivative operators
respectively. The Wheeler-DeWitt (WDW) equation for this model leads,
considering (4), to three different equations. We obtain the Hamilton-Jacobi
equation for $S$ and the equation of motion for $A$, which respectively read
$-\kappa
A\left(S^{\prime}\right)^{2}+aUA+\mathcal{V}_{q}=0,\qquad\frac{1}{A}\left(A^{2}S^{\prime}\right)^{\prime}=0.$
(5)
Here $(\cdot)^{\prime}=\partial_{a}$ and $\mathcal{V}_{q}=\kappa
A^{\prime\prime}$ is the so-called quantum potential which is negligible far
from the singularity even if the $\hbar\rightarrow 0$ limit is not taken into
account (see below). The action $S(a)$ defines a congruence of classical
trajectories, while the second equation in (5) is the continuity equation for
the amplitude $A(a)$.
The third equation we achieve describes the evolution of the quantum subsystem
and is given by
$a^{2}\left(2A^{\prime}\partial_{a}\chi+A\partial_{a}^{2}\chi+2iAS^{\prime}\partial_{a}\chi\right)+A\hat{H}_{q}\chi=0,$
(6)
where
$H_{q}=p_{+}^{2}+p_{-}^{2}+\frac{a^{4}}{4\kappa^{2}}V(\beta_{\pm})$ (7)
represents the quantum Hamiltonian. The first two terms in (6) are of higher
order in $\hbar$ than the third and can be neglected. We then deal with a
Schrödinger-like equation for the quantum wave function $\chi$
$-2ia^{2}S^{\prime}\partial_{a}\chi=\hat{H}_{q}\chi.$ (8)
Such an equation is in agreement with the assumption that the anisotropies
describe a quantum subsystem of the whole Universe, i.e. that the wave
function $\chi$ depends on $\beta_{\pm}$ only (in the Born-Oppenheimer sense).
The smallness of such a quantum subsystem can be formulated requiring that its
Hamiltonian $H_{q}$ is of order $\mathcal{O}(\epsilon^{-1})$, where $\epsilon$
is a small parameter proportional to $\hbar$. Since the action of the
semiclassical Hamiltonian operator
$\hat{H}_{0}=a^{2}\partial_{a}^{2}+a^{3}U/\kappa$ on the wave function
$\Psi_{i}$ is of order $\mathcal{O}(\epsilon^{-2})$, the idea that the
anisotropies do not influence the isotropic expansion of the Universe can be
formulated as $\hat{H}_{q}\Psi_{i}/\hat{H}_{0}\Psi_{i}=\mathcal{O}(\epsilon)$.
Such a requirement is physically reasonable since the semiclassical properties
of the Universe, as well as the smallness of the quantum subsystem, are both
related to the fact that the Universe is sufficiently large Vil .
A pure Schrödinger equation for $\chi$ is obtained taking into account the
tangent vector to the classical path. From the first of equations (5) we find
$p_{a}=S^{\prime}=-\frac{a}{\kappa}\sqrt{\Lambda
a^{2}-\frac{\lambda^{2}}{4}},$ (9)
and the minus sign is chosen to have compatibility between the time gauge
$da/dt=1$ and a positive lapse function $2N=(\Lambda
a^{2}-\lambda^{2}/4)^{-1/2}$. It is then possible to define a new time
variable $\tau$ such that $d\tau=(N\kappa/a^{3})da$ and, considering the lapse
function $N$, it reads
$\tau=\frac{\kappa}{a^{2}}\left[\frac{\sqrt{4\Lambda
a^{2}-\lambda^{2}}}{2}-2\Lambda a^{2}\tan^{-1}\left(\frac{1}{\sqrt{4\Lambda
a^{2}-\lambda^{2}}}\right)\right].$ (10)
This equation can be simplified in the asymptotic region
$a\gg\lambda/\sqrt{\Lambda}$ where $\tau$ behaves as
$\tau=(\kappa/12\sqrt{\Lambda})a^{-3}+\mathcal{O}(a^{-5})$. Such a region
deserves interest since the variable $a$ can be considered as semiclassical
and an isotropization mechanism for the Universe takes place (see below).
Choosing $\tau$ as the time coordinate, equation (8) can be rewritten as
$i\partial_{\tau}\chi=\hat{H}_{q}\chi=\left(-\Delta_{\beta}+\frac{a^{4}}{4\kappa^{2}}V(\beta_{\pm})\right)\chi,$
(11)
which has the desired form.
Let us now discuss the implications of this approach for the definition of the
probability distribution. The wave function (4) defines a probability
distribution $\rho(a,\beta_{\pm})=\rho_{0}(a)\rho_{\chi}(a,\beta_{\pm})$,
where $\rho_{0}(a)$ is the classical probability distribution for the
semiclassical variable $a$. On the other hand, $\rho_{\chi}=|\chi|^{2}$
denotes the probability distribution for the quantum variables $\beta_{\pm}$
on the classical trajectories (5) where the wave function $\chi$ can be
normalized. An ordinary interpretation (in the Copenhagen sense) of a wave
function tracing a subsystem of the Universe is therefore recovered.
## III The isotropization mechanism
A wave function which naturally leads to an isotropic configuration of the
Universe is here obtained. As we said the generic inhomogeneous cosmological
model is described, toward the singularity, by a collection of $\infty^{3}$
independent Bianchi IX models each of which referred to a different spatial
point BKL . Bianchi IX (the Mixmaster Universe Mis ) is, together with Bianchi VIII, the most general homogeneous cosmological model, and its spatial geometry is invariant under the $SO(3)$ group rev ; RS ; chaos . This system
generalizes the closed FRW cosmological dynamics if the isotropy hypothesis is
relaxed.
In order to enforce the idea that the anisotropies can be considered as the
only quantum degrees of freedom of the Universe we address the quasi-isotropic
regime, i.e. $|\beta_{\pm}|\ll 1$. Moreover, since we are interested in the
link existing between the isotropic and anisotropic dynamics, the Universe has to pass through such a quasi-isotropic era. In this regime the potential
term reads
$V(\beta_{\pm})=8\lambda^{2}(\beta_{+}^{2}+\beta_{-}^{2})+\mathcal{O}(\beta^{3})$
while for the equation (11) we get
$i\partial_{\tau}\chi=\frac{1}{2}\left(-\Delta_{\beta}+\omega^{2}(\tau)(\beta_{+}^{2}+\beta_{-}^{2})\right)\chi,$
(12)
where $\tau$ has been rescaled by a factor $2$ and
$\omega^{2}(\tau)=C/\tau^{4/3}$ ($C$ being a constant in each space point
given by $2C=\lambda^{2}((6)^{4/3}(\kappa\Lambda)^{2/3})^{-1}$). The dynamics
of the Universe's anisotropy subsystem can then be regarded as a time-dependent two-dimensional harmonic oscillator with frequency $\omega(\tau)$.
The construction of the quantum theory for a time-dependent, linear, dynamical
system has remarkable differences with respect to the time-independent one
Wald . If the Hamiltonian fails to be time-independent, solutions which
oscillate with purely positive frequency do not exist at all (the dynamics is
not carried out by a unitary time operator). In particular, in the absence of a time translation symmetry, no natural preferred choice of the Hilbert space is available. In the finite-dimensional case (where the Stone-von Neumann theorem holds) no real ambiguity arises, since for any choice of the Hilbert space the theory is unitarily equivalent to the standard one. On the other
hand, as soon as a field theory is taken into account serious difficulties
appear and an algebraic approach is required Wald .
The quantum theory of a harmonic oscillator with time-dependent frequency is
known and the solution of the Schrödinger equation can be obtained
analytically harosc . The analysis is mainly based on the use of the “exact
invariants method” and on some time-dependent transformations. An exact
invariant $J(\tau)$ is a constant of motion (namely $\dot{J}\equiv
dJ/d\tau=\partial_{\tau}J-i[J,\hat{H}_{q}]=0$), is hermitian ($J^{\dagger}=J$)
and for the Hamiltonian $H_{q}$ as in (12) it explicitly reads
$J_{\pm}=\frac{1}{2}\left(\rho^{-2}\beta_{\pm}^{2}+(\rho
p_{\pm}-\dot{\rho}\beta_{\pm})^{2}\right).$ (13)
Here $\rho=\rho(\tau)$ is any function satisfying the auxiliary non-linear
differential equation $\ddot{\rho}+\omega^{2}\rho=\rho^{-3}$. The point of using such invariants (13) is that they map the wave function of a time-independent harmonic oscillator onto the time-dependent one. Let $\phi_{n}(\beta,\tau)$ be the eigenfunctions of $J$, forming a complete orthonormal set and corresponding to the time-independent eigenvalues $k_{n}=n+1/2$. These states turn out to be related to the eigenfunctions $\tilde{\phi}_{n}(\xi)$ ($\xi=\beta/\rho$) of a time-independent harmonic oscillator via the unitary transformation $U=\exp(-i\dot{\rho}\beta^{2}/2\rho)$ as $\tilde{\phi}_{n}=\rho^{1/2}U\phi_{n}$. The non-trivial (and in general unavailable) step in this construction is to obtain an exact solution of the auxiliary equation for $\rho$. However, in our case it can be constructed and explicitly reads
$\rho=\sqrt{\frac{\tau}{\sqrt{C}}\left(1+\frac{\tau^{-2/3}}{9C}\right)}.$ (14)
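When a closed form such as (14) is not available, the auxiliary (Ermakov) equation can still be solved numerically. The following Python sketch is only a generic illustration of that fallback, with an arbitrary value of $C$ and arbitrary initial conditions; it is not part of the derivation above.

```python
import numpy as np
from scipy.integrate import solve_ivp

C = 1.0  # arbitrary illustrative constant

def omega2(tau):
    # Time-dependent frequency of eq. (12): omega^2(tau) = C / tau^(4/3).
    return C / tau**(4.0 / 3.0)

def ermakov(tau, y):
    """y = [rho, drho/dtau]; Ermakov equation rho'' + omega^2 rho = rho^-3."""
    rho, drho = y
    return [drho, rho**-3 - omega2(tau) * rho]

# Integrate over a finite tau interval (tau = 0 is excluded, omega diverges there).
sol = solve_ivp(ermakov, t_span=(0.1, 10.0), y0=[1.0, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)
print(sol.y[0][-1])  # rho at the final time
```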
Finally, the solution of the Schrödinger equation (12) is connected to the
$J$-eigenfunctions $\phi_{n}$ by the relation
$\chi_{n}(\beta,\tau)=e^{i\alpha_{n}(\tau)}\phi_{n}(\beta,\tau)$. (The general
solution of (12) can thus be written as the linear combination
$\chi(\beta,\tau)=\sum_{n}c_{n}\chi_{n}(\beta,\tau)$, $c_{n}$ being
constants.) Here the time-dependent phase $\alpha_{n}(\tau)$ is given by
$\alpha_{n}=-\left(n+\frac{1}{2}\right)\int\frac{d\tau}{\rho^{2}(\tau)}=\\\
=\frac{3\sqrt{C}}{2}\left(n+\frac{1}{2}\right)\left[\ln(9C^{3/2})+\ln\left(\frac{1}{9C^{3/2}}+\frac{\tau^{2/3}}{\sqrt{C}}\right)\right].$
(15)
Collecting these results the wave function $\chi_{n}(\beta_{\pm},\tau)$, which
describes the evolution of the anisotropies of the Universe with respect to
the scale factor, reads
$\chi_{n}=\chi_{n_{+}}(\beta_{+},\tau)\chi_{n_{-}}(\beta_{-},\tau)$. Here
$n=n_{+}+n_{-}$ and
$\chi_{n_{\pm}}(\beta_{\pm},\tau)=A\frac{e^{i\alpha_{n}(\tau)}}{\sqrt{\rho}}h_{n}(\xi_{\pm})\exp\left[\frac{i}{2}\left(\dot{\rho}\rho^{-1}+i\rho^{-2}\right)\beta_{\pm}^{2}\right],$
(16)
where $A$ is the normalization constant, $h_{n}$ are the usual Hermite
polynomials of order $n$, and $\rho(\tau)$ and the phase $\alpha(\tau)$ are
given by (14) and (15), respectively. It is immediate to verify that, as
$\omega(\tau)\rightarrow\omega_{0}$ and
$\rho(\tau)\rightarrow\rho_{0}=1/\sqrt{\omega_{0}}$ (namely
$\alpha(\tau)\rightarrow-\omega_{0}k_{n}\tau$), the solution of the time-
independent harmonic oscillator is recovered.
Let us investigate the probability density of finding the quantum subsystem of the Universe in a given state. The anisotropies turn out to be probabilistically suppressed as soon as the Universe expands sufficiently far from the cosmological singularity (which appears for $a\rightarrow 0$, i.e. $\tau\rightarrow\infty$). Such a feature can be read off from the behavior of the squared modulus of the wave function (16), which is given by
$|\chi_{n}|^{2}\propto\frac{1}{\rho^{2}}|h_{n_{+}}(\xi_{+})|^{2}|h_{n_{-}}(\xi_{-})|^{2}e^{-\beta^{2}/\rho^{2}},$
(17)
where $\beta^{2}=\beta^{2}_{+}+\beta^{2}_{-}$. This probability density is still time-dependent through $\rho=\rho(\tau)$ and $\xi_{\pm}=\beta_{\pm}/\rho$, since the evolution of the wave function $\chi$ is not generated by a unitary time operator. As we can see from (17), when a large enough isotropic cosmological region is considered (namely in the limit $a\rightarrow\infty$, or $\tau\rightarrow 0$) the probability density is sharply peaked around the isotropic configuration $\beta_{\pm}=0$. In this limit (which corresponds to $\rho\rightarrow 0$) the probability density $|\chi_{n=0}|^{2}$ of the ground state $n=0$ behaves as $|\chi_{n=0}|^{2}\stackrel{{\scriptstyle\tau\rightarrow 0}}{{\longrightarrow}}\delta(\beta,0)$. It is thus proportional to the Dirac $\delta$-distribution centered on $\beta=0$ (see Fig. 1).
Figure 1: The absolute value of the ground state of the wave function
$\chi(\beta_{\pm},\tau)$ far from the cosmological singularity. In the plot we
take $C=1$.
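The narrowing of the ground-state probability density as $\rho\rightarrow 0$ is easy to visualize numerically. The short Python sketch below only illustrates the Gaussian factor of (17) for the $n=0$ state of one anisotropy degree of freedom, with arbitrary values of $\rho$.

```python
import numpy as np

def ground_state_density(beta, rho):
    """Normalized |chi_0|^2 for one anisotropy: a Gaussian of width rho."""
    return np.exp(-beta**2 / rho**2) / (np.sqrt(np.pi) * rho)

beta = np.linspace(-2.0, 2.0, 5)
for rho in (1.0, 0.3, 0.05):   # decreasing rho mimics tau -> 0 (large Universe)
    print(rho, ground_state_density(beta, rho))
```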
Summarizing, when the Universe moves away from the cosmological singularity,
the probability density to find it is asymptotically peaked (as a Dirac
$\delta$-distribution) around the closed FRW configuration. Near the
singularity all values of the anisotropies $\beta_{\pm}$ are almost equally
favored from a probabilistic point of view. On the other hand, as the volume
of the Universe grows, the isotropic state becomes the most probable state of
the Universe.
The key feature of such a result relies on the fact that the isotropic scale
factor has been considered as an intrinsic variable with respect to the
anisotropies. It has been treated semiclassically (WKB) while the two physical
degrees of freedom of the Universe ($\beta_{\pm}$) have been described as real
quantum coordinates (the validity of this assumption is discussed in what
follows). In this way a positive semidefinite probability density can be
constructed for the wave function of the quantum subsystem of the Universe.
## IV Physical considerations on the model
To complete our analysis we investigate the range of validity of the model. In
particular, (i) we want to analyze in which sense it is correct to regard the
scale factor $a$ as a WKB variable and (ii) up to which regime the hypothesis
of a quasi-isotropic potential is reasonable.
A variable can be considered semiclassical in the WKB sense if its dynamics is
completely described by the zero-order Hamilton-Jacobi equation Sak . If the
dynamics of a system no longer evolves according to the Schrödinger equation
but all the informations are summarized in the Hamilton-Jacobi equation, then
such a system can be regarded as classical. In order to understand in which
regime the Hamiltonian $H_{0}=-a^{2}p_{a}^{2}+a^{3}U/\kappa$ is semiclassical
(or equivalently when $a$ is a WKB variable) the roles of the quantum
potential $\mathcal{V}_{q}$ and the WKB wave function $\psi_{0}$ have to be
investigated. The term $\mathcal{V}_{q}$ in the Hamilton-Jacobi equation (5)
can be easily computed. From the continuity equation (the second of (5)) and
(9), the amplitude $A$ turns out to be
$A=\sqrt{\frac{\sqrt{\kappa}}{a}}\left(\Lambda
a^{2}-\frac{\lambda^{2}}{4}\right)^{-1/4}.$ (18)
In the $a\gg\lambda/\sqrt{\Lambda}$ limit ($\tau\rightarrow 0$) the quantum
potential behaves like $\mathcal{V}_{q}\sim\mathcal{O}(1/a^{3})$. In other
words $\mathcal{V}_{q}$ can be neglected as soon as the Universe sufficiently
expands, even if the limit $\hbar\rightarrow 0$ is not taken into account. It
is also easy to verify that the function $\psi_{0}=\exp(iS+\ln A)$ approaches
the quasi-classical limit $e^{iS}$ for sufficiently large values of $a$. In
fact, considering (9) and (18), $\ln(\psi_{0})$ is given by
$i\left[-\frac{(4\Lambda
a^{2}-\lambda^{2})^{3/2}}{24\Lambda\kappa}+i\ln\left(\sqrt{\frac{a}{\sqrt{\kappa}}}\left(\Lambda
a^{2}-\frac{\lambda^{2}}{4}\right)^{1/4}\right)\right].$ (19)
The logarithmic term decays with respect to $S$ as soon as
$a\gg\lambda/\sqrt{\Lambda}$ and in this region $\psi_{0}\sim e^{iS}$. It is
worth noting that to have a self-consistent scheme both relations $l_{in}\gg
l_{h}\sim a$ (which ensures the validity range of the BKL picture) and
$l_{in}\gg l_{\Lambda}$ ($l_{\Lambda}\equiv 1/\sqrt{\Lambda}$ being the
inflation characteristic length) have to hold. (We stress that the
inflationary scenario usually takes place if $a\gg l_{\Lambda}$ Kolb ; KM ;
iso ). These constraints state the degree of inhomogeneity that is allowed in
our model so that the generic Universe can be described, point by point, by a
Mixmaster model. At the same time the scale factor $a$ can be considered as a
semiclassical variable playing the internal observer-like role for the quantum
dynamics of the Universe anisotropies.
Let us now analyze the hypothesis of a quasi-isotropic potential. A maximum quantum number, above which the mean value of the quantum Hamiltonian is no longer compatible with the harmonic approximation of the potential, has to be found. The expectation
value of the Hamiltonian $H_{q}$ in state $|n\rangle$ is given by
$\langle
H_{q}\rangle_{n}=\frac{1}{2}\left(n+\frac{1}{2}\right)\left(\rho^{-2}+\omega^{2}\rho^{2}+\dot{\rho}^{2}\right).$
(20)
These values are equally spaced at every instant as for a time-independent
harmonic oscillator. The ground state is obtained for $n=0$. The limit of
applicability of our scheme relies on an upper bound of the occupation number
$n$. Taking into account the expressions for $\omega^{2}(\tau)$ (12) and
$\rho(\tau)$ (14), the expectation value (20), far from the cosmological
singularity, behaves like
$\langle H_{q}\rangle_{n}\stackrel{{\scriptstyle\tau\rightarrow
0}}{{\sim}}\frac{1}{\tau^{5/2}}\left(n+\frac{1}{2}\right).$ (21)
It is then possible to obtain the maximum admissible quantum number
$n_{\max}$. The approximation of small anisotropies breaks down for values of $\beta_{\pm}$ of order unity (for example $\beta_{\pm}\sim\mathcal{O}(1)$). Such a hypothesis is thus incorrect as soon as $\langle H_{q}\rangle_{n}\sim V^{\star}=V(\beta_{\pm}\sim\mathcal{O}(1))$. The value $n_{\max}$ then depends on time $\tau$ and reads $n_{\max}\sim V^{\star}\tau^{5/2}$. In this way, when a suitably large configuration of the Universe is taken into account ($\tau\rightarrow 0$), not many excited states can be considered. This is not,
however, a severe limitation since it is expected that, when the Universe
moves away from the classical singularity, the ground state becomes the
favored configuration BM06 .
## V Concluding remarks
In this paper we have shown how a semiclassical isotropization mechanism for a
quasi-isotropic inhomogeneous Universe takes place. The wave function of the
Universe describes its two physical degrees of freedom (the shape changes
$\beta_{\pm}$), and it is meaningfully interpreted as soon as the isotropic
scale factor $a$ plays the observer-like role. This condition is satisfied for
large values of the volume, and the dynamics of the anisotropies can be
probabilistically interpreted since it describes a small quantum subsystem.
The probability density of the possible Universe configurations is spread
over all values of anisotropies near the cosmological singularity. On the
other hand, at large scales the probability density is sharply peaked around
the closed FRW model. In this way, as the Universe expands from the initial
singularity, the FRW state becomes more and more probable.
## Acknowledgments
M.V.B. thanks “Fondazione Angelo Della Riccia” for financial support.
## References
* (1) J.J.Halliwell, arXiv:gr-qc/0208018; D.L.Wiltshire, gr-qc/0101003; C.Kiefer and B.Sandhoefer, arXiv:0804.0672.
* (2) A.Vilenkin, Phys.Rev.D 39 (1989) 1116; Phys.Rev.D 33 (1986) 3560.
* (3) J.B.Hartle and S.W.Hawking, Phys.Rev.D 28 (1983) 2960; J.Halliwell and S.Hawking, Phys.Rev.D 31 (1985) 1777; J.J.Halliwell, gr-qc/9208001; F.Embacher, gr-qc/9605019.
* (4) N.Pinto-Neto, Found.Phys. 35 (2005) 577.
* (5) V.A.Belinski, I.M.Khalatnikov and E.M.Lifshitz, Adv.Phys. 19 (1970) 525; Adv.Phys. 31 (1982) 639.
* (6) C.Misner, Phys.Rev.Lett. 22 (1969) 1071; Phys.Rev. 186 (1969) 1319.
* (7) E.W.Kolb and M.S.Turner, The Early Universe, (Addison-Wesley, New York, 1990).
* (8) A.A.Kirillov and G.Montani, Phys.Rev.D 66 (2002) 064010.
* (9) S.Coleman and E.Weinberg, Phys.Rev.D 7 (1973) 1888; R.M.Wald, Phys.Rev.D 28 (1983) 2118; X.F.Lin and R.M.Wald, Phys.Rev.D 41 (1990) 2444.
* (10) N.Pinto-Neto, A.F.Velasco and R.Colistete, Phys.Lett.A 277 (2000) 194; W.A.Wright and I.G.Moss, Phys.Lett.B 154 (1985) 115; P.Amsterdamski, Phys.Rev.D 31 (1985) 3073; B.K.Berger and C.N.Vogeli, Phys.Rev.D 32 (1985) 2477; S.Del Campo and A.Vilenkin, Phys.Lett.B 224 (1989) 45.
* (11) V.Moncrief and M.P.Ryan, Phys.Rev.D 44 (1991) 2375.
* (12) R.Benini and G.Montani, Phys.Rev.D 70 (2004) 103527; A.A.Kirillov and G.Montani, Phys.Rev.D 56 (1997) 6225.
* (13) G.Montani, M.V.Battisti, R.Benini and G.Imponente, Int.J.Mod.Phys.A 23 (2008) 2353.
* (14) M.V.Battisti and G.Montani, Phys.Lett.B 637 (2006) 203.
* (15) M.Henneaux and C.Teitelboim, Quantization of Gauge Systems (PUP, Princeton, 1992).
* (16) M.P.Ryan and L.C.Shepley, Homogeneous Relativistic Cosmologies (PUP, Princeton, 1975).
* (17) G.P.Imponente and G.Montani, Phys.Rev.D 63 (2001) 103501; T.Damour, M.Henneaux and H.Nicolai, Class.Quant.Grav. 20 (2003) R145.
* (18) R.M.Wald, Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics (University of Chicago Press, Chicago, 1994).
* (19) H.R.Lewis, J.Math.Phys. 9 (1968) 1976; H.R.Lewis and W.B.Riesenfeld, J.Math.Phys. 10 (1969) 1458; C.M.A.Dantas, I.A.Pedrosa and B.Baseia, Phys.Rev.A 45 (1992) 1320.
* (20) J.J.Sakurai, Modern Quantum Mechanics (Addison-Wesley, New York, 1994).
|
arxiv-papers
| 2009-05-22T14:47:40 |
2024-09-04T02:49:02.849082
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Marco Valerio Battisti, Riccardo Belvedere, Giovanni Montani",
"submitter": "Marco Valerio Battisti",
"url": "https://arxiv.org/abs/0905.3695"
}
|
0905.3708
|
# The coming of age of X-ray polarimetry
Martin C. Weisskopf (NASA/Marshall Space Flight Center)
###### Abstract
We briefly discuss the history of X-ray polarimetry for astronomical
applications including a guide to the appropriate statistics. We also provide
an introduction to some of the new techniques discussed in more detail
elsewhere in these proceedings. We conclude our discussion with our concerns
over adequate ground calibration, especially with respect to unpolarized
beams, and at the system level.
## Chapter 0 X-Ray Polarimetry: Historical Remarks and Other Considerations
### 1 Introduction
Sensitive X-ray polarimetry promises to reveal unique and crucial information
about physical processes in and structures of neutron stars, black holes, and
ultimately all classes of X-ray sources. We do not review the astrophysical
problems for which X-ray polarization measurements will provide new insights,
as these will be discussed in some detail in many of the presentations at this
conference.
Despite major progress in X-ray imaging, spectroscopy, and timing, there have
been only modest attempts at X-ray polarimetry. The last such dedicated
experiment, conducted by Bob Novick (Columbia University) over three decades
ago, had such limited observing time (and sensitivity) that even a $\sim 10\%$ degree of polarization would not have been detected from some of the brightest
X-ray sources in the sky. Statistically-significant X-ray polarization was
detected in only one X-ray source, the Crab Nebula.
#### 1 History
The first positive detection of X-ray polarization Netal72 was performed in a
sounding rocket experiment that viewed the Crab Nebula in 1971. Using the
X-ray polarimeter on the Orbiting Solar Observatory (OSO)-8, this result was
confirmed We76 with a 19-$\sigma$ detection ($P=19.2\%\pm 1.0$%),
conclusively proving the synchrotron origin of the X-ray emission.
Unfortunately, because of low sensitivity, only 99%-confidence upper limits
were found for polarization from other bright X-ray sources (e.g., $\leq
13.5\%$ and $\leq 60\%$ for accreting X-ray pulsars Cen X-3 and Her X-1,
respectively Setal79 ). Since that time, although there have been several
missions that had planned to include X-ray polarimeters — such as the original
Einstein Observatory and Spectrum-X (v1) — no X-ray polarimeter has actually
managed to be launched.
### 2 Instrumental approaches
There are a limited number of ways to measure linear polarization in the
0.1–50 keV band that are sufficiently sensitive for astronomical sources. We discuss
four techniques here, but see also G. Frazier’s contribution for a discussion
of other techniques. We emphasize that meaningful X-ray polarimetry is
difficult:
(i) In general, we do not expect sources to be strongly ($\gg$10%) polarized.
For example, the maximum polarization from scattering in an optically-thick,
geometrically-thin, accretion disc is only about 10% at the most favorable
(edge-on) viewing angle. Hence, most of the X rays from such a source carry no
polarization information and thus merely increase the background (noise) in
the polarization measurement.
(ii) With one notable exception — namely, the Bragg-crystal polarimeter — the
modulation of the polarization signal in the detector, the signature of
polarization, is much less than 100% (typically, 20%–40%) (and energy-
dependent) even for a 100%-polarized source.
(iii) The degree of linear polarization is positive definite, so that any
polarimeter will always measure a (not necessarily statistically
significant) polarization signal, even from an unpolarized source.
Consequently, the statistical analysis is less familiar to X-ray
astronomers. For a detailed discussion of polarimeter statistics see We09 .
The relevant equations are also summarized in slides 18-20 of our
presentation (http://projects.iasf-roma.inaf.it/xraypol/).
Concerning the statistics, one of the most important formulas is the minimum
detectable polarization (MDP) at a certain confidence level. In the absence of
any instrumental systematic effects, the 99%-confidence-level MDP is
${\rm MDP}_{99}=\frac{4.29}{MR_{S}}[\frac{R_{S}+R_{B}}{T}]^{1/2}.$ (1)
where the “modulation factor”, $M$, is the degree of modulation expected in
the absence of background for a $100$%-polarized beam, $R_{S}$ and $R_{B}$
are, respectively, the source and background counting rates, and $T$ is the
observing time.
The MDP is not the uncertainty in the polarization measurement, but rather the
degree of polarization which has, in this case, only a 1% probability of
being equalled or exceeded by chance. One may form an analogy with the
difference between measuring a handful of counts, say 9, with the Chandra
X-Ray Observatory and thus having high confidence (many sigmas) that one has
detected a source, yet understanding that the value of the flux is still
highly uncertain — 30% at the 1-sigma level in this example. We emphasize this
point because the MDP often serves as the figure of merit for polarimetry.
While it is a figure of merit that is useful and meaningful, a polarimeter
appropriate for attacking astrophysical problems must have an MDP
significantly smaller than the degree of polarization to be measured, a point
that is often overlooked.
As P. Kaaret noted during his summary, consider an instrument with no background, a modulation factor of 0.5, and the desire to obtain an MDP of 1%: this requires detection of $10^{6}$ counts! The statistics will be superb, but
the understanding of the response function needs to be compatible. I know of
no observatory where the response function is known so well that it may deal
with a million count spectrum.
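Equation (1) and the count estimate just quoted are straightforward to reproduce numerically; the short Python sketch below is purely illustrative, with made-up rates.

```python
import math

def mdp99(M, R_s, R_b, T):
    """99%-confidence minimum detectable polarization, eq. (1)."""
    return (4.29 / (M * R_s)) * math.sqrt((R_s + R_b) / T)

# Background-free case: MDP = 4.29 / (M * sqrt(N)), with N = R_s * T the
# number of source counts, so N = (4.29 / (M * MDP))**2 for a target MDP.
M, target = 0.5, 0.01
N_required = (4.29 / (M * target)) ** 2
print(N_required)                                    # ~7.4e5, i.e. of order 10^6 counts
print(mdp99(M=0.5, R_s=1.0, R_b=0.0, T=N_required))  # recovers the ~1% MDP
```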
#### 1 Crystal polarimeters
The first successful X-ray polarimeter for astronomical application utilized
the polarization dependence of Bragg reflection. In We72 we describe the
first sounding-rocket experiment using a crystal polarimeter, the use of which
Schnopper & Kalata SK69 had first suggested. The principal of operation is
summarized in slide 7 of the presentation and a photograph of the one of two
crystal panels that focused the X-rays onto a proportional counter was shown
in slide 8.
Only three crystal polarimeters (ignoring the crystal spectrometer on Ariel-5
which also served as a polarimeter) have ever been constructed for extra-solar
X-ray applications. Only two — both using graphite crystals without X-ray
telescopes — were ever flown (sounding rocket, We72 ; OSO-8 satellite,We76 ;
Spectrum-X (v1) (not flown), Ketal94 .)
One of the virtues of the crystal polarimeter is, for Bragg angles near 45
degrees, that modulation of the reflected flux approaches 100%. From Eq 1 we
see that this is very powerful, all other things being equal, since the MDP
scales directly with the inverse of the modulation factor but only as the
square root of the other variables. Obviously, a disadvantage is the narrow
bandwidth for Bragg reflection.
#### 2 Scattering polarimeters
There are two scattering processes from bound electrons — coherent and
incoherent scattering. A comprehensive discussion of these processes may be
found in many textbooks (see, e.g. Ja65 ).
Various factors dominate the consideration of the design of a scattering
polarimeter. The most important are these: (1) to scatter as large a fraction
of the incident flux as possible while avoiding multiple scatterings; (2) to
achieve as large a modulation factor as possible; (3) to collect as many of
the scattered X-rays as possible; and (4) to minimize the detector background.
The scattering competes with photoelectric absorption in the material, both on
the way in and on the way out. The collection efficiency competes with the
desire to minimize the background and to maximize the modulation factor.
Not counting more recent higher energy payloads being developed for balloon
and future satellite flights discussed elsewhere in these proceedings, only
two polarimeters of this type have ever been constructed for extra-solar X-ray
applications. The only ones ever flown were suborbital: in 1968 (see Aetal69 ), in 1969 (see Wetal70 ), and in 1971 (see, e.g., Netal72 ). The scattering
polarimeter Ketal94 built for the Spectrum-X (v1) satellite was never flown.
The virtue of the scattering polarimeter is that it has reasonable efficiency
over a moderately large energy bandwidth, thus facilitating energy-resolved
polarimetry. The principal disadvantage is a modulation factor less than unity, since only for scattering at $90$ degrees will the modulation approach 1.0, in the absence of background and for a $100$%-polarized beam. To
obtain reasonable efficiency requires integrating over a range of scattering
angles and realistic modulation factors are under 50%, unless the device is
placed at the focus of a telescope. The modulation factor for the scattering
polarimeter on Spectrum-X (v1) reached $\sim$75%. At the focus it is feasible
to make the scattering volume small which then limits the range of possible
scattering angles.
### 3 New approaches
In this conference we will hear detailed presentations of a number of new
approaches to X-ray (and higher energy) polarimeters. We mention two of these
approaches here.
#### 1 Photoelectron tracking
The angular distribution (e.g., H54 ) of the K-shell photoelectron emitted as
a result of the photoelectric absorption process depends upon the polarization
of the incident photon. The considerations for the design of a polarimeter
that exploits this effect are analogous to those for the scattering
polarimeter. In this case the competing effects are the desire for a high
efficiency for converting the incident X-ray flux into photoelectrons and the
desire for those photoelectrons to travel large distances before interacting
with elements of the absorbing material.
Here we concentrate on polarimeters that use gas mixtures to convert the
incident X-rays to photoelectrons. Currently there are three approaches to
electron tracking polarimetry that use this effect.
To our knowledge, the first electron-tracking polarimeter specifically
designed to address polarization measurements for X-ray astronomy, and using a
gas as the photoelectron-emitting material, was that designed by Austin &
Ramsey at NASA/Marshall Space Flight Center (AR92 ; see also AR93 ; AMR93 ).
They used the light emitted by the electron avalanches which occur, after the
release of the initial photoelectron, in a parallel-plate proportional counter.
The light was then focused and detected by a CCD camera.
Another gas-detector approach, first discussed by Cetal01 , uses “pixellated”
proportional counters (gas electron multipliers) to record the avalanche of
secondary electrons that results from gas multiplication in a high field after
the drift into a region where this multiplication may take place. A second
approach to such devices, suggested by Black B07 , exploits the time of
flight and rotates the readout plane to be at right angles to the incident
flux. This device sacrifices angular resolution when placed at the focus of a
telescope but gains efficiency by providing a greater absorption depth.
Detecting the direction of the emitted photoelectron (relative to the
direction of the incident flux) is not simple because the electrons, when they
interact with matter, give up most of their energy at the end of their track,
not the beginning. Of course, in the process of giving up energy to the local
medium in which the initial photo-ionization took place, the electron changes
its trajectory, thus losing the information as to the initial direction and
hence polarization.
It is instructive to examine the image of a track: Figure 1 shows one obtained
under relatively favorable conditions with an optical imaging chamber. The
initial photoionization has taken place at the small concentration of light to
the left in the figure. The size of the leftmost spot also indicates the short
track of the Auger electron. As the primary photoelectron travels through the
gas, it either changes direction through elastic scattering or both changes
direction and loses energy through ionization. As these interactions
occur, the path strays from the direction determined by the incident photon’s
polarization. Of course, the ionization process is energy dependent and most
of the electron’s energy is lost at the end, not the beginning, of its track.
It should be clear from this discussion that, even under favorable conditions
— where the range of the photoelectron is quite large compared to its
interaction length — the ability to determine a precise angular distribution
depends upon the capability and sophistication of the track-recognition
software, not just the spatial resolution of the detection system. The burden
falls even more heavily on the software at lower energies, where the
photoelectron track becomes very short and diffusion in the drifting
photoelectron cloud conspires to mask the essential track information. Thus,
the signal processing algorithms (rarely discussed) form an important part of
the experiment, are a possible source of systematic effects, and may
themselves reduce the efficiency for detecting polarized X-rays.
Figure 1: The two-dimensional projection of a track produced when a 54 keV
X-ray was absorbed in 2 atm of a mixture of argon (90%), CH4 (5%), and
trimethylamine (5%). This particular track is $\simeq 14$ mm in length.
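As a schematic illustration of the final step such track-reconstruction
software must perform, the sketch below draws emission azimuths from the
expected $\cos^{2}(\phi-\phi_{0})$-type distribution and fits the standard
modulation curve; the event counts, true modulation, and position angle are
purely illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
true_mu, phi0, n_events = 0.4, np.radians(30.0), 200_000

# Draw reconstructed emission azimuths with probability ~ 1 + mu*cos(2(phi - phi0))
# (equivalent to the A + B*cos^2(phi - phi0) form) by acceptance-rejection.
phi = rng.uniform(-np.pi, np.pi, 4 * n_events)
accept = rng.uniform(0.0, 1.0 + true_mu, phi.size) < 1.0 + true_mu * np.cos(2 * (phi - phi0))
phi = phi[accept][:n_events]

counts, edges = np.histogram(phi, bins=36)
centers = 0.5 * (edges[:-1] + edges[1:])

def modulation_curve(p, a, b, p0):
    return a + b * np.cos(2 * (p - p0))

(a, b, p0_fit), _ = curve_fit(modulation_curve, centers, counts,
                              p0=[counts.mean(), 0.1 * counts.mean(), 0.0],
                              sigma=np.sqrt(counts))
print("recovered modulation:", b / a, "position angle (deg):", np.degrees(p0_fit) % 180)
```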
Polarimeters exploiting the photoelectric effect have been discussed in the
literature, and two will be discussed in this conference. However, no device
of this type has ever been flown and those built have undergone relatively
limited testing in the laboratory. In some cases, performance claims depend
more upon Monte-Carlo simulations than actual experiments. We eagerly await
experimental verification of performance at lower energies, around 2-3 keV,
where the overall sensitivity peaks.
#### 2 Transmission filters
The potential advent of extremely large-area telescope missions, such as the
International X-ray Observatory (IXO), may provide an opportunity to exploit
the polarization dependence of narrow-band dichroic transmission filters, as
discussed by G. Frazier elsewhere in these proceedings. The extremely narrow
band, the consequent requirement for a detector with a few-eV energy
resolution, the low efficiency of the filter, and the association with a major
observatory are all issues to be addressed. Regarding the latter, the history
of X-ray polarimetry on major observatories has not been positive. The OSO-8
polarimeter received only a very limited amount of observing time as the
result of conflicting pointing requirements. The Spectrum-X (v1) polarimeter,
one of at least two
detectors at the focus of its telescope, was allocated only 11 days in the
plan for the first year’s observing. The polarimeter on the original Einstein
Observatory was “descoped”. No polarimeter was selected to be part of either
the Chandra or XMM-Newton missions, despite the important capabilities that
each of these missions — especially XMM-Newton with its larger collecting area
— might provide.
### 4 Concluding remarks: systematic effects
Only a few people in the world have any flight experience with X-ray
polarimeters and it behooves one to take advantage of this experience.
Precision X-ray polarimetry depends crucially on the elimination of potential
systematic effects. This is especially true for polarimeters with modulation
factors less than unity. Consider a polarimeter with a modulation factor of
40% observing a 5%-polarized source. In the absence of any background, this
means one is dealing with a signal of only 2% modulation in the detector. To
validate a detection means that systematic effects must be understood and
calibrated well below the 1% level, a non-trivial task. If present, systematic
effects alter the statistics discussed previously, further reducing
sensitivity: it is harder to detect two signals at the same frequency than
one!
To achieve high accuracy requires extremely careful calibration with
unpolarized beams, as a function of energy, at the detector and at the system
level! For example, suppose that the systematic error in the measured signal
of an unpolarized source were 1%. Then for a modulation factor of 40%, the
3-sigma upper limit due to systematic effects alone would be 7.5%
polarization. Thus, if a polarimeter is to measure few-percent polarized
sources with acceptable confidence, systematic effects in the modulated signal
must be understood at $\leq 0.2\%$. Careful calibration — over the full energy
range of performance — is essential.
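The arithmetic of that example, in a minimal sketch (the 1% spurious modulation
and 40% modulation factor are the illustrative numbers used above):

```python
def systematic_polarization_limit(spurious_modulation, mod_factor, n_sigma=3):
    """A spurious modulation s measured on an unpolarized source mimics a
    polarization s / M; quoting it at n_sigma gives the systematic floor."""
    return n_sigma * spurious_modulation / mod_factor

print(systematic_polarization_limit(0.01, 0.40))  # 0.075, i.e. 7.5% polarization
```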
* (1) Angel, J.R.P., Novick, R., vanden Bout, P., & Wolff, R. (1969). Phys. Rev. Lett. 22, 861.
* (2) Austin, R.A. & Ramsey, B.D. (1992). Proc. SPIE 1743, 252.
* (3) Austin, R.A. & Ramsey, B.D. (1993). Optical Engineering 32, 1900.
* (4) Austin, R.A., Minamitani, T., & Ramsey, B.D. (1993). Proc. SPIE 2010, 118.
* (5) Black, J.K. (2007). Journal of Physics: Conference Series 65, 012005.
* (6) Costa, E., Soffitta, P., Bellazzini, R., Brez, A., Lumb, N., & Spandre, G. (2001). Nature 411, 662.
* (7) Heitler, W. (1954). The Quantum Theory of Radiation, Third Edition (Dover Publications, Inc., New York).
* (8) James, R.W. (1965). The Optical Principles of the Diffraction of X-rays (Cornell University Press, Ithaca, NY).
* (9) Kaaret, P.E. et al. (1994). Proc. SPIE 2010, 22.
* (10) Schnopper, H.W. & Kalata, K. (1969). A.J. 74, 854.
* (11) Novick, R., Weisskopf, M.C., Berthelsdorf, R., Linke, R., & Wolff, R.S. (1972). Ap.J. 174, L1.
* (12) Weisskopf, M.C., Elsner, R.F., Kaspi, V.M., O'Dell, S.L., Pavlov, G.G., & Ramsey, B.D. (2009). In Neutron Stars and Pulsars, ed. W. Becker (Springer-Verlag, Berlin Heidelberg).
* (13) Silver, E.H., Weisskopf, M.C., Kestenbaum, H.L., Long, K.S., Novick, R., & Wolff, R.S. (1979). Ap.J. 232, 248.
* (14) Weisskopf, M.C., Berthelsdorf, R., Epstein, G., Linke, R., Mitchell, D., Novick, R., & Wolff, R.S. (1972). Rev. Sci. Instr. 43, 967.
* (15) Weisskopf, M.C., Cohen, C.G., Kestenbaum, H.L., Long, K.S., Novick, R., & Wolff, R.S. (1976). Ap.J. 208, L125.
* (16) Wolff, R.S., Angel, J.R.P., Novick, R., & vanden Bout, P. (1970). Ap.J. 160, L21.
|
arxiv-papers
| 2009-05-22T15:35:39 |
2024-09-04T02:49:02.856527
|
{
"license": "Public Domain",
"authors": "Martin C. Weisskopf",
"submitter": "Martin C. Weisskopf",
"url": "https://arxiv.org/abs/0905.3708"
}
|
0905.3916
|
# Energy renormalization and integrability
within the massive neutrinos model
Łukasz Andrzej Glinka
E-mail: laglinka@gmail.com
_International Institute for Applicable_
_Mathematics & Information Sciences,_
_Hyderabad (India) & Udine (Italy),_
_B.M. Birla Science Centre,_
_Adarsh Nagar, 500 063 Hyderabad, India_
###### Abstract
In this paper the massive neutrinos model arising from the Snyder
noncommutative geometry, proposed recently by the author, is partially
developed. By straightforward calculation it is shown that the masses of the
chiral left- and right-handed Weyl fields, treated as parameters fixed by
experiments, lead to a consistent physical picture of the noncommutative
geometry, and consequently yield a renormalization of the energy of a
relativistic particle and exact integrability within the proposed model. This
feature of the model both defines and emphasizes its significance and possible
usefulness for theory as well as phenomenology in high-energy physics and
astrophysics.
Keywords: models of neutrino mass; noncommutative geometry; Snyder model;
energy renormalization; exactly integrable systems; Planck scale effects
PACS: 14.60.St; 03.65.Pm; 02.40.Gh; 03.30.+p
## 1 Introduction
It was established recently by the author that the Dirac equation modified by
the $\gamma^{5}$-term arising from the Snyder noncommutative geometry model
yields the conventional Dirac theory with a nonhermitian mass, or equivalently
the massive neutrinos model given by the Weyl equation with a diagonal and
hermitian mass matrix. The model describes 4 massive chiral fields related to
any original, _i.e._ non-modified, massive or massless quantum state. Through a
spontaneous global chiral symmetry breaking mechanism it leads to an isospin-
symmetric effective field theory, namely a composed chiral condensate of
massive neutrinos. All these results violate CP symmetry manifestly; however,
their possible physical applications can be considered in diverse ways. On the
one hand the effective theory is beyond the Standard Model, yet it can be
considered as part of it owing to the noncommutative geometry contribution.
On the other hand, in the massive neutrinos model the masses of the two left- and two
right-handed chiral Weyl fields arise from the mass and energy of an original
state and a minimal scale (_e.g._ the Planck scale), and its quantum
mechanical interpretation remains a mystery.
This paper concentrates mostly on the quantum mechanical aspect. It is shown
that the model itself yields a consistent physical explanation of the Snyder
noncommutative geometry model and consequently leads to an energy renormalization
of the original quantum relativistic particle. Computations arising directly
from the Schrödinger formulation of both the Dirac and the Weyl equations are
presented. First, the manifestly nonhermitian modified Dirac Hamiltonian is
discussed. Its integrability is formulated by straightforward application of
the Zassenhaus formula for the exponentialization of a sum of two noncommuting
operators. It is shown, however, that this approach does not lead to well-
defined solutions; in this case the exponents are still sums of two
noncommuting operators, so that the procedure runs into a cyclic problem that
never terminates and for this reason is not an algorithm. To solve the problem
we employ, instead of the Dirac equation, the Weyl equation with a purely hermitian
mass matrix rewritten in the Schrödinger form. Its integration is
straightforward and elementary. We present computations in both the Dirac and
the Weyl representations of the Dirac gamma matrices.
The paper is organized as follows. Section 2 presents the motivation for
further studies: the massive neutrinos model is recalled briefly. Next, in
Section 3 the particle's energy renormalization is discussed. In Section
4 we present the integrability problem for the modified Dirac equation. Section 5
is devoted to the integrability of the massive Weyl equation, and Section 6
discusses a special case related to ultra-high-energy physics. Finally, in
Section 7 the results of the entire paper are summarized briefly.
## 2 The massive neutrinos model
Let us recall briefly the massive neutrinos model resulting from [1]. The
starting point is the noncommutative geometry model [2] of phase-space and
space of a relativistic particle due to a fundamental scale $\ell$ proposed by
Snyder [3] (Cf. also Ref. [4]), and given by the following lattice model [5]
$x=ndx\quad,\quad dx=\ell\quad,\quad
n\in\mathbb{Z}\quad\longrightarrow\quad\ell=\dfrac{l_{0}}{n}e^{1/n}\quad,\quad\lim_{n\rightarrow\infty}\ell=0,$
(1)
where $l_{0}>0$ is a constant, together with the De Broglie formula relating
the coordinate $x$ with its conjugate momentum $p$
$p=\dfrac{\hbar}{x}.$ (2)
Application of the Kontsevich star-product [6] to the phase space $(x,p)$ and
two space points $x$ and $y$
$\displaystyle x\star p$ $\displaystyle=$ $\displaystyle
px+\sum_{n=1}^{\infty}\left(\dfrac{\alpha i\hbar}{2}\right)^{n}C_{n}(x,p),$
(3) $\displaystyle x\star y$ $\displaystyle=$ $\displaystyle
xy+\sum_{n=1}^{\infty}\left(\dfrac{i\beta}{2}\right)^{n}C_{n}(x,y),$ (4)
where for correctness $\alpha\sim 1$ and $\beta$ are dimensionless constants, and
$C_{n}(f,g)$ are the Hochschild cochains, yields the deformed Lie
brackets (for a review of deformation quantization see _e.g._ Ref. [7])
$\displaystyle\left[x,p\right]_{\star}$ $\displaystyle=$
$\displaystyle\left[x,p\right]+\sum_{n=1}^{\infty}\left(\dfrac{\alpha
i\hbar}{2}\right)^{n}B_{n}(x,p),$ (5) $\displaystyle\left[x,y\right]_{\star}$
$\displaystyle=$
$\displaystyle\left[x,y\right]+\sum_{n=1}^{\infty}\left(\dfrac{i\beta}{2}\right)^{n}B_{n}(x,y),$
(6)
where $B_{n}(f,g)\equiv C_{n}(f,g)-C_{n}(g,f)$ are the Chevalley cochains.
Using $[x,p]=-i\hbar$ and $[x,y]=0$, and keeping only the first approximation, one
obtains
$\displaystyle\left[x,p\right]_{\star}=-i\hbar+\dfrac{\alpha
i\hbar}{2}B_{1}(x,p)\quad,\quad\left[x,y\right]_{\star}=\dfrac{i\beta}{2}B_{1}(x,y).$
(7)
or in the Dirac “method of classical analogy” form [8]
$\displaystyle\dfrac{1}{i\hbar}\left[p,x\right]_{\star}=1-\dfrac{\alpha}{2}B_{1}(x,p)\quad,\quad\dfrac{1}{i\hbar}\left[x,y\right]_{\star}=\dfrac{\beta}{2\hbar}B_{1}(x,y).$
(8)
Because for any $f,g\in C^{\infty}(M)$ one has $B_{1}(f,g)=2\theta(df\wedge
dg)$, one obtains
$\displaystyle\dfrac{1}{i\hbar}\left[p,x\right]_{\star}=1-\dfrac{\alpha}{\hbar}(dx\wedge
dp)\quad,\quad\dfrac{1}{i\hbar}\left[x,y\right]_{\star}=\dfrac{\beta}{\hbar}dx\wedge
dy,$ (9)
where $\hbar$ in the first relation was introduced for dimensional correctness.
Applying now the lattice model (1) and the De Broglie relation (2) one
obtains
$\displaystyle\dfrac{i}{\hbar}\left[x,p\right]_{\star}=1+\dfrac{\alpha}{\hbar^{2}}\ell^{2}p^{2}\quad,\quad\dfrac{i}{\hbar}\left[x,y\right]_{\star}=-\dfrac{\beta}{\hbar}\ell^{2},$
(10)
which defines the Snyder model. Note that this model was studied in some
respects by previous authors [9], but the present work is related to that
direction only to a slight degree. The model developed in this paper arises
mostly from the ideas of the papers [1].
If we consider $\ell$ as a minimal scale, _e.g._ Planck or Compton scale, then
the model (10) can be rewritten in terms of the maximal energy $\epsilon$
$\dfrac{i}{\hbar}[x,p]=1+\dfrac{1}{\epsilon^{2}}c^{2}p^{2}\quad,\quad\dfrac{i}{\hbar}[x,y]=O\left(\dfrac{1}{\epsilon^{2}}\right)\quad,\quad\epsilon\equiv\dfrac{\hbar
c}{\sqrt{\alpha}\ell}.$ (11)
The lattice model (11) straightforwardly yields a contribution to the
Einstein Hamiltonian constraint of Special Relativity
$E^{2}-c^{2}p^{2}\equiv(\gamma^{\mu}p_{\mu})^{2}=m^{2}c^{4}+\dfrac{1}{\epsilon^{2}}c^{4}p^{4}\quad,\quad
p_{\mu}=[E,cp],$ (12)
where $m$ and $E$ are the mass and energy of a particle, and consequently leads to
the generalized Sidharth $\gamma^{5}$-term within the usual Dirac equation
$\left(\gamma^{\mu}\hat{p}_{\mu}\pm
mc^{2}\pm\dfrac{1}{\epsilon}c^{2}\hat{p}^{2}\gamma^{5}\right)\psi=0\quad,\quad\hat{p}_{\mu}=i\hbar[\partial_{0},c\partial_{i}],$
(13)
which violates Lorentz symmetry manifestly. In fact equation (13) describes
4 cases, depending on the choice of the signs of the mass $m$ and of the
$\gamma^{5}$-term. Here, however, we will consider the positive-signs case only.
The negative ones follow from the changes $\epsilon\rightarrow-\epsilon$ and
$m\rightarrow-m$ in the results obtained for the positive-signs case.
Preservation of the Minkowski momentum space structure within the modified
Einstein constraint (12)
$p_{\mu}p^{\mu}=\left(\gamma^{\mu}p_{\mu}\right)^{2}=m^{2}c^{4},$ (14)
moves the considerations back to the generic Einstein theory with
$\epsilon\equiv\infty$, while application of the hyperbolic relation (14)
within the modified Dirac equation (13) leads to two conventional Dirac
theories
$\left(\gamma^{\mu}\hat{p}_{\mu}+M_{\pm}c^{2}\right)\psi^{\pm}=0,$ (15)
where $\psi^{\pm}$ are the Dirac fields related to the nonhermitian mass
matrices $M_{\pm}$, which in general depend on the energy $E$ and the mass
$m$ of an original quantum relativistic particle. This is of course the case of
positive signs in the equation (13). In fact for each choice of signs there are 2
Dirac fields, so that equation (13) describes 8 Dirac fields in total.
Using the basis of projectors, $M_{\pm}$ can be decomposed as follows
$\displaystyle
M_{\pm}=\mu_{R}^{\pm}\dfrac{1+\gamma^{5}}{2}+\mu_{L}^{\pm}\dfrac{1-\gamma^{5}}{2},$
(16)
$\displaystyle\mu_{R}^{\pm}=-\dfrac{1}{c^{2}}\left(\dfrac{\epsilon}{2}\pm\sqrt{{\epsilon^{2}-4\epsilon
mc^{2}-4E^{2}}}\right),$ (17)
$\displaystyle\mu_{L}^{\pm}=\dfrac{1}{c^{2}}\left(\dfrac{\epsilon}{2}\pm\sqrt{{\epsilon^{2}+4\epsilon
mc^{2}-4E^{2}}}\right),$ (18)
where $\mu^{\pm}_{R,L}$ are the projected masses; equivalently, $M_{\pm}$ can be
presented as a sum of its hermitian $\mathfrak{H}(M)$ and antihermitian
$\mathfrak{A}(M)$ parts
$\displaystyle M_{\pm}=\mathfrak{H}(M_{\pm})+\mathfrak{A}(M_{\pm}),$ (19)
$\displaystyle\mathfrak{H}(M_{\pm})=\dfrac{\mu_{R}^{\pm}+\mu_{L}^{\pm}}{2}\mathbf{1}_{4}\quad,\quad\mathfrak{A}(M_{\pm})=\dfrac{\mu_{R}^{\pm}-\mu_{L}^{\pm}}{2}\gamma^{5}.$
(20)
Introducing the chiral right- and left-handed Weyl fields $\psi_{R,L}^{\pm}$
related to the Dirac field $\psi^{\pm}$ according to the standard
transformation $\psi_{R,L}^{\pm}=\dfrac{1\pm\gamma^{5}}{2}\psi^{\pm}$ one
obtains two massive Weyl equations with diagonal and hermitian mass
matrices $\mu_{\pm}$
$\left(\gamma^{\mu}\hat{p}_{\mu}+\mu_{\pm}c^{2}\right)\left[\begin{array}[]{c}\psi_{R}^{\pm}\\\
\psi_{L}^{\pm}\end{array}\right]=0\quad,\quad\mu^{\pm}=\left[\begin{array}[]{cc}\mu_{R}^{\pm}&0\\\
0&\mu_{L}^{\pm}\end{array}\right],$ (21)
so that in total we have 16 chiral fields described by the Weyl equations (21)
obtained from the Dirac equations (13). The massive Weyl theories (21) are the
Euler–Lagrange equations of motion for the gauge field theory with chiral
symmetry $SU(3)_{C}^{TOT}=SU(3)_{C}^{+}\oplus SU(3)_{C}^{-}$
$\mathcal{L}=\mathcal{L}^{+}+\mathcal{L}^{-},$ (22)
where $\mathcal{L}^{\pm}$ are the Lagrangians associated with the fields
$\psi^{\pm}_{R,L}$ as follows
$\displaystyle\mathcal{L}^{\pm}=\bar{\psi}^{\pm}_{R}\gamma^{\mu}\hat{p}_{\mu}\psi_{R}^{\pm}+\bar{\psi}_{L}^{\pm}\gamma^{\mu}\hat{p}_{\mu}\psi_{L}^{\pm}+\mu_{R}^{\pm}c^{2}\bar{\psi}_{R}\psi_{R}^{\pm}+\mu_{L}^{\pm}c^{2}\bar{\psi}_{L}\psi_{L}^{\pm},$
(23)
which is spontaneously broken to the composed gauge field theory with the
isospin symmetry $SU(2)_{V}^{TOT}=SU(2)_{V}^{+}\oplus SU(2)_{V}^{-}$
$\displaystyle\mathcal{L}$ $\displaystyle=$
$\displaystyle\bar{\psi^{+}}\left(\gamma^{\mu}\hat{p}_{\mu}+\mu_{eff}^{+}c^{2}\right)\psi^{+}+\bar{\psi^{-}}\left(\gamma^{\mu}\hat{p}_{\mu}+\mu_{eff}^{-}c^{2}\right)\psi^{-}=$
(24) $\displaystyle=$
$\displaystyle\bar{\Psi}\left(\gamma^{\mu}\hat{p}_{\mu}+M_{eff}c^{2}\right)\Psi,$
(25)
where $\mu_{eff}^{\pm}$ are the effective mass matrices of the gauge fields
$\psi^{\pm}$, and $M_{eff}$ is the mass matrix of the effective composed field
$\Psi=\left[\begin{array}[]{c}{\psi^{+}}\\\ {\psi^{-}}\end{array}\right]$
given by
$\displaystyle\mu_{eff}^{\pm}=\dfrac{\mu_{R}^{\pm}-\mu_{L}^{\pm}}{2}\gamma^{5}\quad,\quad
M_{eff}=\left[\begin{array}[]{cc}{\mu^{+}_{eff}}&0\\\
0&{\mu^{-}_{eff}}\end{array}\right].$ (28)
The Lagrangian (25) describes the effective field theory – a composed chiral
condensate of massive neutrinos.
In this paper we will consider both the Dirac equations (15) and the massive
Weyl equations (21), but we are not going to discuss the gauge field theory
(25), which will be studied in our next topical papers. We will assume that
both of the neutrino masses (17) and (18) are real numbers, _i.e._ we will
consider the situation when the maximal energy $\epsilon$ deforming Special
Relativity is determined by the characteristics of a relativistic particle as follows
$\displaystyle\epsilon$ $\displaystyle\in$
$\displaystyle\left(-\infty,-2mc^{2}\left(1+\sqrt{{1+\left(\dfrac{E}{mc^{2}}\right)^{2}}}\right)\right]\cup$
(29) $\displaystyle\cup$
$\displaystyle\left[-2mc^{2}\left(1-\sqrt{{1+\left(\dfrac{E}{mc^{2}}\right)^{2}}}\right),2mc^{2}\left(1+\sqrt{{1+\left(\dfrac{E}{mc^{2}}\right)^{2}}}\right)\right]\cup$
$\displaystyle\cup$
$\displaystyle\left[2mc^{2}\left(1+\sqrt{{1+\left(\dfrac{E}{mc^{2}}\right)^{2}}}\right),\infty\right),$
or, for the case of a massless relativistic particle possessing an energy $E$,
$\displaystyle\epsilon\in\left(-\infty,-2|E|\right]\cup\left[2|E|,\infty\right).$
(30)
## 3 Energy renormalization
In fact the existence of the massive neutrinos allows one to explain in a
consistent physical way the nature of the Snyder noncommutative geometry
model. By direct elementary algebraic manipulations the relations for the
masses of the left- and right-handed chiral Weyl fields, (17) and (18), can be
rewritten in the form of the system of equations
$\left\\{\begin{array}[]{c}\left(\mu_{R}^{\pm}c^{2}+\dfrac{\epsilon}{2}\right)^{2}=\epsilon^{2}-4\epsilon
mc^{2}-4E^{2}\\\
\left(\mu_{L}^{\pm}c^{2}-\dfrac{\epsilon}{2}\right)^{2}=\epsilon^{2}+4\epsilon
mc^{2}-4E^{2}\end{array}\right.$ (31)
which allows one to study the dependence of the deformation energy parameter
$\epsilon$ and the particle energy $E$ on the masses
$m,\mu_{R}^{\pm},\mu_{L}^{\pm}$, treated as physically measurable quantities.
Subtracting the first equation of (31) from the second, one establishes
the relation
$\left(\mu_{L}^{\pm}c^{2}-\dfrac{\epsilon}{2}\right)^{2}-\left(\mu_{R}^{\pm}c^{2}+\dfrac{\epsilon}{2}\right)^{2}=8\epsilon
mc^{2},$ (32)
which after elementary algebraic manipulations allows one to derive
the deformation energy of the Snyder model (11) as
$\epsilon=\dfrac{\left(\mu_{L}^{\pm}-\mu_{R}^{\pm}\right)c^{2}}{1-\dfrac{8m}{\mu_{L}^{\pm}+\mu_{R}^{\pm}}}.$
(33)
The maximal energy (33) does not vanish for all
$\mu_{L}^{\pm}\neq\mu_{R}^{\pm}$, and is finite for all
$\mu_{R}^{\pm}+\mu_{L}^{\pm}\neq 8m$. Here $m$ is the mass of the original
quantum state, and both $\mu_{R}^{\pm}$ and $\mu_{L}^{\pm}$ are assumed to be
physical quantities; in principle all the masses can be fixed by
experiments. In the case when the original state is massless, one obtains
$\epsilon(m=0)=\left(\mu_{L}^{\pm}-\mu_{R}^{\pm}\right)c^{2}\equiv\epsilon_{0},$
(34)
which is finite for finite masses and nonvanishing for
$\mu_{L}^{\pm}\neq\mu_{R}^{\pm}$. In this manner we have
$\epsilon=\epsilon_{0}\left[1+\dfrac{8m}{\mu_{R}^{\pm}+\mu_{L}^{\pm}}+O\left(\left(\dfrac{8m}{\mu_{R}^{\pm}+\mu_{L}^{\pm}}\right)^{2}\right)\right],$
(35)
for all $|\mu_{R}^{\pm}+\mu_{L}^{\pm}|>8m$, and
$\epsilon=\epsilon_{0}\left[\dfrac{\mu_{R}^{\pm}+\mu_{L}^{\pm}}{8m}+O\left(\left(\dfrac{\mu_{R}^{\pm}+\mu_{L}^{\pm}}{8m}\right)^{2}\right)\right],$
(36)
for all $|\mu_{R}^{\pm}+\mu_{L}^{\pm}|<8m$. On the other hand,
adding the second equation of (31) to the first gives the relation
$\left(\mu_{L}^{\pm}c^{2}-\dfrac{\epsilon}{2}\right)^{2}+\left(\mu_{R}^{\pm}c^{2}+\dfrac{\epsilon}{2}\right)^{2}=2\left(\epsilon^{2}-4E^{2}\right),$
(37)
which can be treated as a constraint on the energy $E$ of a relativistic
particle, immediately solved with respect to $E$, and presented in
canonical quadratic form with respect to the energy parameter $\epsilon$
$E^{2}=\dfrac{3}{16}\left\\{\left[\epsilon+\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{3}c^{2}\right]^{2}-\left[\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{3}c^{2}\right]^{2}\left[7+\dfrac{12\mu_{L}^{\pm}\mu_{R}^{\pm}}{\left(\mu_{L}^{\pm}-\mu_{R}^{\pm}\right)^{2}}\right]\right\\}.$
(38)
Inserting the deformation energy parameter (33) into the energy
constraint (38) of a relativistic particle, one obtains the formula
$E^{2}=\dfrac{\left[\left(\mu_{L}^{\pm}-\mu_{R}^{\pm}\right)c^{2}\right]^{2}}{48}\left\\{\left(\dfrac{4-\dfrac{8m}{\mu_{L}^{\pm}+\mu_{R}^{\pm}}}{1-\dfrac{8m}{\mu_{L}^{\pm}+\mu_{R}^{\pm}}}\right)^{2}-\left[7+\dfrac{12\mu_{L}^{\pm}\mu_{R}^{\pm}}{\left(\mu_{L}^{\pm}-\mu_{R}^{\pm}\right)^{2}}\right]\right\\},$
(39)
which for the case of an originally massless state reduces to the form
$E^{2}(m=0)=\dfrac{1}{16}\left[\left(\mu_{L}^{\pm}-\mu_{R}^{\pm}\right)c^{2}\right]^{2}\left[3-4\dfrac{\mu_{L}^{\pm}\mu_{R}^{\pm}}{\left(\mu_{L}^{\pm}-\mu_{R}^{\pm}\right)^{2}}\right]\equiv
E^{2}_{0}.$ (40)
In fact, for a given $E_{0}$ the equation (40) can be used to establish the
relation between the neutrino masses. As a result one obtains two
possible solutions
$\mu_{R}^{\pm}=\dfrac{4}{3}\mu_{L}^{\pm}\left[\dfrac{5}{4}\pm\sqrt{{1+3\left(\dfrac{\mu_{0}}{\mu_{L}^{\pm}}\right)^{2}}}\right]\quad,\quad\mu_{0}\equiv\dfrac{E_{0}}{c^{2}},$
(41)
which for the value $\mu_{0}\equiv 0$ reduce to the values
$\mu_{R}^{\pm}=\left\\{3\mu_{L}^{\pm},\dfrac{1}{3}\mu_{L}^{\pm}\right\\}.$
(42)
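A quick symbolic cross-check of (41) and (42), written as a sketch with sympy
(the variable names are ours and $\mu_{0}=E_{0}/c^{2}$ as above):

```python
import sympy as sp

mu_L, mu_R, mu_0 = sp.symbols('mu_L mu_R mu_0', real=True)

# Eq. (40) with mu_0 = E_0/c^2, rewritten as a constraint to be solved for mu_R.
constraint = sp.Eq(16 * mu_0**2,
                   (mu_L - mu_R)**2 * (3 - 4 * mu_L * mu_R / (mu_L - mu_R)**2))

solutions = [sp.simplify(s) for s in sp.solve(constraint, mu_R)]
print(solutions)
# Expected to reproduce Eq. (41): mu_R = 5*mu_L/3 +/- (4/3)*sqrt(mu_L**2 + 3*mu_0**2).
print([s.subs(mu_0, 0) for s in solutions])   # -> mu_L/3 and 3*mu_L, i.e. Eq. (42)
```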
Interestingly, there is the possibility of a single solution relating the masses
$\mu_{L}^{\pm}$ and $\mu_{R}^{\pm}$, obtained by taking $\mu_{0}$ to be a
tachyonic mass
$\mu_{0}=i\dfrac{\mu_{L}^{\pm}}{\sqrt{3}},$ (43)
and results in the relation
$\mu_{R}^{\pm}=\dfrac{5}{3}\mu_{L}^{\pm}.$ (44)
For all $|\mu_{R}^{\pm}+\mu_{L}^{\pm}|>8m$ the constraint (39) can be
approximated by
$E^{2}-E_{0}^{2}=\left(\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{2}c^{2}\right)^{2}\left[2\dfrac{8m}{\mu_{L}^{\pm}+\mu_{R}^{\pm}}+O\left[\left(\dfrac{8m}{\mu_{L}^{\pm}+\mu_{R}^{\pm}}\right)^{2}\right]\right],$
(45)
and for $|\mu_{R}^{\pm}+\mu_{L}^{\pm}|<8m$ the leading approximation is
$E^{2}-E_{0}^{2}=\left(\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{2}c^{2}\right)^{2}\left[-\dfrac{5}{4}-\dfrac{1}{2}\dfrac{8m}{\mu_{L}^{\pm}+\mu_{R}^{\pm}}+O\left[\left(\dfrac{8m}{\mu_{L}^{\pm}+\mu_{R}^{\pm}}\right)^{2}\right]\right],$
(46)
From the relation (37) one sees that, because the LHS, as a sum of two squares
of real numbers, is always non-negative, the RHS must be non-negative as
well. As a result we obtain a renormalization of the allowed values of the
relativistic particle's energy $E$:
$-\dfrac{\epsilon}{2}\leqslant E\leqslant\dfrac{\epsilon}{2},$ (47)
Naturally, for the generic case of Special Relativity we have
$\epsilon\equiv\infty$, and then the values of the energy $E$ are not limited. In this
manner the Snyder noncommutative geometry in fact results in an energy
renormalization of a relativistic particle.
## 4 Integrability I: The Dirac equation
The modified Dirac equation (15) can be straightforwardly rewritten in the
form of a Schrödinger evolution equation (see _e.g._ the papers [11] and the
books [12])
$i\hbar\partial_{0}\psi^{\pm}=\hat{H}\psi^{\pm},$ (48)
where in the present case the Hamilton operator $\hat{H}$ can be established
as
$\hat{H}=-i\hbar
c\gamma^{0}\gamma^{i}\partial_{i}-\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\gamma^{0}+\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{2}c^{2}\gamma^{0}\gamma^{5},$
(49)
and split into its hermitian $\mathfrak{H}(\hat{H})$ and antihermitian
$\mathfrak{A}(\hat{H})$ parts
$\displaystyle\hat{H}$ $\displaystyle=$
$\displaystyle\mathfrak{H}(\hat{H})+\mathfrak{A}(\hat{H}),$ (50)
$\displaystyle\mathfrak{H}(\hat{H})$ $\displaystyle=$ $\displaystyle-i\hbar
c\gamma^{0}\gamma^{i}\partial_{i}-\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\gamma^{0},$
(51) $\displaystyle\mathfrak{A}(\hat{H})$ $\displaystyle=$
$\displaystyle\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{2}c^{2}\gamma^{0}\gamma^{5},$
(52)
with (anti)hermiticity defined in the standard way
$\displaystyle\int d^{3}x\bar{\psi^{\pm}}\mathfrak{H}(\hat{H})\psi^{\pm}$
$\displaystyle=$ $\displaystyle\int
d^{3}x\overline{\mathfrak{H}(\hat{H})\psi^{\pm}}\psi^{\pm},$ (53)
$\displaystyle\int d^{3}x\bar{\psi^{\pm}}\mathfrak{A}(\hat{H})\psi^{\pm}$
$\displaystyle=$ $\displaystyle-\int
d^{3}x\overline{\mathfrak{A}(\hat{H})\psi^{\pm}}\psi^{\pm}.$ (54)
Note that in the case of equal masses $\mu_{R}^{\pm}=\mu_{L}^{\pm}\equiv\mu$
the antihermitian part (52) vanishes identically, so that the hermitian one
(51) gives the full contribution, and consequently (50) becomes the usual
Dirac Hamiltonian
$\hat{H}_{D}=-\gamma^{0}\left(i\hbar c\gamma^{i}\partial_{i}+\mu
c^{2}\right).$ (55)
For this usual case, however, from (33) one concludes that
$\epsilon\equiv 0,$ (56)
so in fact the minimal scale formally becomes infinite, $\ell\equiv\infty$, and
by (39) the relativistic particle's energy becomes $E=i\dfrac{1}{2}\mu c^{2}$ with
some mass $\mu$. If, however, we take into account the tachyonic mass case
$\mu\rightarrow i\mu=\mu^{\prime}$, then (55) becomes
$\hat{H}_{D}=-\gamma^{0}\left(i\hbar
c\gamma^{i}\partial_{i}+i\mu^{\prime}c^{2}\right),$ (57)
and $E\equiv\dfrac{1}{2}\mu^{\prime}c^{2}$. The relation (47), however, is not
valid in this case.
The full modified Hamiltonian (49) is evidently nonhermitian, so
the time evolution (48) is manifestly nonunitary. Its formal
integration, however, can be carried out in the usual way with the following
time evolution operator
$\psi^{\pm}(x,t)=G(t,t_{0})\psi^{\pm}(x,t_{0})\quad,\quad
G(t,t_{0})\equiv\exp\left\\{-\dfrac{i}{\hbar}\int_{t_{0}}^{t}d\tau\hat{H}(\tau)\right\\}.$
(58)
For this reason, the integrability problem for (48) is contained in the
appropriate Zassenhaus formula
$\displaystyle\exp\left\\{A+B\right\\}$ $\displaystyle=$
$\displaystyle\exp(A)\exp(B)\prod_{n=2}^{\infty}\exp{C_{n}},$ (59)
$\displaystyle C_{2}$ $\displaystyle=$ $\displaystyle-\frac{1}{2}C,$ (60)
$\displaystyle C_{3}$ $\displaystyle=$
$\displaystyle-\frac{1}{6}(2[C,B]+[C,A]),$ (61) $\displaystyle C_{4}$
$\displaystyle=$
$\displaystyle-\frac{1}{24}([[C,A],A]+3[[C,A],B]+3[[C,B],B]),$
$\displaystyle\ldots$
where $C=[A,B]$. By the formula (58) one has identification
$\displaystyle A$ $\displaystyle\equiv$ $\displaystyle
A(t)=-\frac{i}{\hbar}\int_{t_{0}}^{t}d\tau\mathfrak{H}(\hat{H})(\tau),$ (63)
$\displaystyle B$ $\displaystyle\equiv$ $\displaystyle
B(t)=-\frac{i}{\hbar}\int_{t_{0}}^{t}d\tau\mathfrak{A}(\hat{H})(\tau),$ (64)
so that the commutator $C$ is established as
$C=-\dfrac{1}{\hbar^{2}}\int_{t_{0}}^{t}d\tau^{\prime}\int_{t_{0}}^{t}d\tau^{\prime\prime}\mathfrak{C}\left(\tau^{\prime},\tau^{\prime\prime}\right),$
(65)
where
$\mathfrak{C}\left(\tau^{\prime},\tau^{\prime\prime}\right)\equiv\left[\mathfrak{H}(\hat{H})(\tau^{\prime}),\mathfrak{A}(\hat{H})(\tau^{\prime\prime})\right].$
(66)
Straightforward calculation of $\mathfrak{C}$ can be done by elementary
algebra
$\displaystyle\mathfrak{C}$ $\displaystyle=$
$\displaystyle\left(i\hbar\dfrac{\mu_{R}^{\pm}-\mu_{L}^{\pm}}{2}c^{3}\partial_{i}\right)\gamma^{0}\gamma^{i}\gamma^{0}\gamma^{5}+\left(\dfrac{(\mu_{R}^{\pm})^{2}-(\mu_{L}^{\pm})^{2}}{4}c^{4}\right)\gamma^{0}\gamma^{0}\gamma^{5}-$
(67) $\displaystyle-$
$\displaystyle\left(i\hbar\dfrac{\mu_{R}^{\pm}-\mu_{L}^{\pm}}{2}c^{3}\partial_{i}\right)\gamma^{0}\gamma^{5}\gamma^{0}\gamma^{i}-\left(\dfrac{(\mu_{R}^{\pm})^{2}-(\mu_{L}^{\pm})^{2}}{4}c^{4}\right)\gamma^{0}\gamma^{5}\gamma^{0}=$
(68) $\displaystyle=$ $\displaystyle
2\left(i\hbar\dfrac{\mu_{R}^{\pm}-\mu_{L}^{\pm}}{2}c^{3}\partial_{i}\right)\gamma^{0}\gamma^{i}\gamma^{0}\gamma^{5}+2\left(\dfrac{(\mu_{R}^{\pm})^{2}-(\mu_{L}^{\pm})^{2}}{4}c^{4}\right)\gamma^{0}\gamma^{0}\gamma^{5},$
(69)
where we have applied the relations
$\gamma^{0}\gamma^{5}\gamma^{0}\gamma^{i}=-\gamma^{0}\gamma^{i}\gamma^{0}\gamma^{5}\quad,\quad\gamma^{0}\gamma^{5}\gamma^{0}=-\gamma^{0}\gamma^{0}\gamma^{5},$
(70)
arising by employing the usual Clifford algebra of the Dirac matrices
$\left\\{\gamma^{\mu},\gamma^{\nu}\right\\}=2\eta^{\mu\nu}\mathbf{1}_{4}\quad,\quad\left\\{\gamma^{5},\gamma^{\mu}\right\\}=0\quad,\quad\gamma^{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}.$
(71)
So, consequently one obtains the result
$\mathfrak{C}(\tau^{\prime},\tau^{\prime\prime})=2\mathfrak{H}(\hat{H})(\tau^{\prime})\mathfrak{A}(\hat{H})(\tau^{\prime\prime}),$
(72)
which leads to the equivalent statement that for any times $\tau^{\prime}$ and
$\tau^{\prime\prime}$ the anticommutator of the hermitian
$\mathfrak{H}(\hat{H})(\tau^{\prime})$ and the antihermitian
$\mathfrak{A}(\hat{H})(\tau^{\prime\prime})$ parts of the total Hamiltonian
(50) vanishes:
$\left\\{\mathfrak{H}(\hat{H})(\tau^{\prime}),\mathfrak{A}(\hat{H})(\tau^{\prime\prime})\right\\}=0.$
(73)
Naturally, by simple factorization one obtains also
$C=2AB\quad,\quad\\{A,B\\}=0,$ (74)
and consequently
$\displaystyle\left[C,A\right]$ $\displaystyle=$ $\displaystyle CA,$ (75)
$\displaystyle\left[C,B\right]$ $\displaystyle=$ $\displaystyle CB,$ (76)
$\displaystyle\left[\left[C,A\right],A\right]$ $\displaystyle=$ $\displaystyle
2\left[C,A\right]A,$ (77) $\displaystyle\left[\left[C,A\right],B\right]$
$\displaystyle=$ $\displaystyle 2\left[C,A\right]B,$ (78)
$\displaystyle\left[\left[C,B\right],A\right]$ $\displaystyle=$ $\displaystyle
2\left[C,B\right]A,$ (79)
and so on. As a result, the 4th-order approximation of the Zassenhaus formula
(59) in the present case is
$\displaystyle\exp\left\\{A+B\right\\}$ $\displaystyle\approx$
$\displaystyle\exp(A)\exp(B)\exp{C_{2}}\exp{C_{3}}\exp{C_{4}},$ (80)
$\displaystyle C_{2}$ $\displaystyle=$ $\displaystyle-\frac{1}{2}C,$ (81)
$\displaystyle C_{3}$ $\displaystyle=$ $\displaystyle-\frac{1}{6}(CA+2CB),$
(82) $\displaystyle C_{4}$ $\displaystyle=$
$\displaystyle-\frac{1}{12}\left(CA^{2}+3CB^{2}+\dfrac{3}{2}C^{2}\right).$
(83)
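Before specializing to constant masses, the truncated Zassenhaus product can be
sanity-checked numerically for generic operators; in the sketch below random
$4\times 4$ matrices stand in for $A$ and $B$, the exponents through $C_{3}$ of
(60)-(61) are kept, and the residual is seen to shrink roughly as the fourth
power of the operator norm:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def zassenhaus_residual(scale):
    """|| exp(A+B) - exp(A) exp(B) exp(C2) exp(C3) || for random A, B of a given size,
    with C2 = -[A,B]/2 and C3 = -(2[[A,B],B] + [[A,B],A])/6 as in Eqs. (60)-(61)."""
    A = scale * rng.standard_normal((4, 4))
    B = scale * rng.standard_normal((4, 4))
    C = A @ B - B @ A
    C2 = -0.5 * C
    C3 = -(2 * (C @ B - B @ C) + (C @ A - A @ C)) / 6
    truncated = expm(A) @ expm(B) @ expm(C2) @ expm(C3)
    return np.linalg.norm(expm(A + B) - truncated)

print(zassenhaus_residual(0.2), zassenhaus_residual(0.02))  # roughly a 10**4 ratio
```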
For the case of masses $\mu_{R}^{\pm}$ and $\mu_{L}^{\pm}$ constant in time,
one determines the relations
$\displaystyle A$ $\displaystyle=$
$\displaystyle\dfrac{i}{\hbar}(t-t_{0})\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)\gamma^{0},$
(84) $\displaystyle B$ $\displaystyle=$
$\displaystyle\dfrac{i(\mu_{L}^{\pm}-\mu_{R}^{\pm})c^{2}}{2\hbar}(t-t_{0})\gamma^{5}\gamma^{0},$
(85) $\displaystyle C$ $\displaystyle=$
$\displaystyle\dfrac{(\mu_{L}^{\pm}-\mu_{R}^{\pm})c^{2}}{\hbar^{2}}(t-t_{0})^{2}\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)\gamma^{5},$
(86)
and consequently by elementary algebraic manipulations one establishes the
Zassenhaus exponents as
$\displaystyle C_{2}$ $\displaystyle=$
$\displaystyle-\dfrac{(\mu_{L}^{\pm}-\mu_{R}^{\pm})c^{2}}{2\hbar^{2}}(t-t_{0})^{2}\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)\gamma^{5},$
(87) $\displaystyle C_{3}$ $\displaystyle=$
$\displaystyle-\dfrac{i}{6\hbar^{3}}(\mu_{L}^{\pm}-\mu_{R}^{\pm})c^{2}(t-t_{0})^{3}\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)\times$
(88) $\displaystyle\times$ $\displaystyle\left[\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)\gamma^{5}+(\mu_{L}^{\pm}-\mu_{R}^{\pm})c^{2}\right]\gamma^{0},$
$\displaystyle C_{4}$ $\displaystyle=$
$\displaystyle\dfrac{(\mu_{L}^{\pm}-\mu_{R}^{\pm})c^{2}}{12\hbar^{4}}(t-t_{0})^{4}\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)\times$
(89) $\displaystyle\times$ $\displaystyle\Bigg{\\{}\left[\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)^{2}+3\left(\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{2}c^{2}\right)^{2}\right]\gamma^{5}+$
$\displaystyle+$ $\displaystyle
3\dfrac{\mu_{L}^{\pm}-\mu_{R}^{\pm}}{2}c^{2}\left(-i\hbar
c\gamma^{i}\partial_{i}+\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}c^{2}\right)\Bigg{\\}}.$
The exponents $C_{n}$ show manifestly that the integrability problem is not
well defined. Namely, the Zassenhaus coefficients $C_{n}$ are still sums of
two noncommuting operators. The fundamental stage, _i.e._ the
exponentialization procedure, must therefore be applied again, and in
the next step one faces the same problem, _i.e._ the cyclic problem. This
recurrence is therefore not an algorithm, which is a symptom of the
non-integrability of (48).
## 5 Integrability II: The Weyl equation
To solve the problem, let us consider the integrability procedure for
the massive Weyl equation (21). This equation can be
straightforwardly rewritten in the form of a Schrödinger time evolution
$i\hbar\partial_{0}\left[\begin{array}[]{c}\psi^{\pm}_{R}(x,t)\\\
\psi^{\pm}_{L}(x,t)\end{array}\right]=\hat{H}\left(\partial_{i}\right)\left[\begin{array}[]{c}\psi^{\pm}_{R}(x,t)\\\
\psi^{\pm}_{L}(x,t)\end{array}\right],$ (90)
where the hermitian Hamilton operator $\hat{H}$ defines a unitary
evolution
$\hat{H}=-\gamma^{0}\left(i\hbar
c\gamma^{i}\partial_{i}+\left[\begin{array}[]{cc}\mu_{R}^{\pm}c^{2}&0\\\
0&\mu_{L}^{\pm}c^{2}\end{array}\right]\right),$ (91)
so that the integration can be done in the usual quantum mechanical way.
Integrability of (90) is well defined. The solutions are
$\left[\begin{array}[]{c}\psi^{\pm}_{R}(x,t)\\\
\psi^{\pm}_{L}(x,t)\end{array}\right]=U(t,t_{0})\left[\begin{array}[]{c}\psi^{\pm}_{R}(x,t_{0})\\\
\psi^{\pm}_{L}(x,t_{0})\end{array}\right],$ (92)
where $U(t,t_{0})$ is the unitary time-evolution operator, which for
constant masses is explicitly given by
$U(t,t_{0})=\exp\left\\{-\dfrac{i}{\hbar}(t-t_{0})\hat{H}\right\\},$ (93)
and $\psi^{\pm}_{R,L}(x,t_{0})$ are the initial-time ($t_{0}$) eigenstates with
definite momenta
$i\hbar\sigma^{i}\partial_{i}\psi^{\pm}_{R,L}(x,t_{0})={p_{R,L}^{\pm}}^{0}\psi^{\pm}_{R,L}(x,t_{0}),$
(94)
where the momenta ${p_{R}^{\pm}}^{0}$ and ${p_{L}^{\pm}}^{0}$ are related to
the right- $\psi^{\pm}_{R}(x,t_{0})$ or left-handed $\psi^{\pm}_{L}(x,t_{0})$
chiral fields, respectively. The eigenequation (94), however, can be
straightforwardly integrated. The result can be presented in the compact form
$\psi^{\pm}_{R,L}(x,t_{0})=\exp\left\\{-\dfrac{i}{\hbar}{p_{R,L}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}\psi^{\pm}_{R,L}(x_{0},t_{0}),$
(95)
or after direct exponentialization
$\displaystyle\psi^{\pm}_{R,L}(x,t_{0})=\Bigg{\\{}\mathbf{1}_{2}\cos\left|\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}(x-x_{0})_{i}\right|-$
(96) $\displaystyle-$ $\displaystyle
i\left[\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}(x-x_{0})_{i}\sigma^{i}\right]\dfrac{\sin\left|\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}(x-x_{0})_{i}\right|}{\left|\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}(x-x_{0})_{i}\right|}\Bigg{\\}}\psi^{\pm}_{R,L}(x_{0},t_{0}).$
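The exponentialization used in (96) is the standard Pauli-matrix identity
$e^{-i\,a\cdot\sigma}=\cos|a|\,\mathbf{1}_{2}-i\,(a\cdot\sigma/|a|)\sin|a|$; a
small numerical check, with an arbitrary vector standing in for
$({p_{R,L}^{\pm}}^{0}/\hbar)(x-x_{0})_{i}$, is sketched below:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

a = np.array([0.3, -1.1, 0.7])            # stands for (p0/hbar)*(x - x0)_i
a_sigma = a[0] * sx + a[1] * sy + a[2] * sz
norm = np.linalg.norm(a)

closed_form = np.cos(norm) * np.eye(2) - 1j * np.sin(norm) / norm * a_sigma
print(np.allclose(expm(-1j * a_sigma), closed_form))   # True
```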
The embarrassing problem encountered in the integration procedure of
the Dirac equation, discussed in the previous section, is now absent. The
Zassenhaus formula is no longer troublesome because the matrix $\gamma^{5}$ is
by definition absorbed into the Weyl fields, so that the Hamilton operator
(91) is purely hermitian, and consequently the exponentialization (93) can be
done in the usual way. At first glance, however, the presence of the mass matrix in
the Hamilton operator (91) means that one can choose at least two nonequivalent
representations of the Dirac $\gamma$ matrices. A straightforward analogy with the
massless Weyl equation suggests that the appropriate choice is the Weyl basis. On
the other hand, the Hamilton operator (91) is the usual hermitian Dirac
Hamiltonian, so the Dirac basis would appear to be the right representation. In
this manner we should in fact consider both the chiral fields and the
time evolution operator (93) as strictly related to the chosen representation $(r)$
$\displaystyle U(t,t_{0})$ $\displaystyle\rightarrow$ $\displaystyle
U^{r}(t,t_{0}),$ (97) $\displaystyle\psi^{\pm}_{R,L}(x,t_{0})$
$\displaystyle\rightarrow$ $\displaystyle(\psi^{\pm}_{R,L})^{r}(x,t_{0}),$
(98) $\displaystyle\psi^{\pm}_{R,L}(x_{0},t_{0})$ $\displaystyle\rightarrow$
$\displaystyle(\psi^{\pm}_{R,L})^{r}(x_{0},t_{0})$ (99)
where the upper index $r=D,W$ means that the quantities are taken in the Dirac
or the Weyl basis. The eigenequation (94), however, is independent of the
representation choice, so it is a physical condition: the fields have
measurable momenta ${p_{R,L}^{\pm}}^{0}$. For full correctness, let us test
both choices.
### 5.1 The Dirac basis
The Dirac basis of the gamma matrices is defined as
$\gamma^{0}=\left[\begin{array}[]{cc}I&0\\\
0&-I\end{array}\right]\quad,\quad\gamma^{i}=\left[\begin{array}[]{cc}0&\sigma^{i}\\\
-\sigma^{i}&0\end{array}\right]\quad,\quad\gamma^{5}=\left[\begin{array}[]{cc}0&I\\\
I&0\end{array}\right],$ (100)
where $I$ is the $2\times 2$ unit matrix, and
$\sigma^{i}=[\sigma_{x},\sigma_{y},\sigma_{z}]$ is a vector of the $2\times 2$
Pauli matrices
$\sigma_{x}=\left[\begin{array}[]{cc}0&1\\\
1&0\end{array}\right]\quad,\quad\sigma_{y}=\left[\begin{array}[]{cc}0&-i\\\
i&0\end{array}\right]\quad,\quad\sigma_{z}=\left[\begin{array}[]{cc}1&0\\\
0&-1\end{array}\right].$ (101)
Consequently, using (100) the Hamilton operator (91) becomes
$\hat{H}=\left[\begin{array}[]{cc}\mu_{R}^{\pm}&i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\\\
i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}&-\mu_{L}^{\pm}\end{array}\right]c^{2},$
(102)
and for the case of neutrino masses constant in time the solution is given by the
unitary time evolution operator $U$
$U^{D}=\exp\left\\{-i\dfrac{c^{2}}{\hbar}(t-t_{0})\left[\begin{array}[]{cc}\mu_{R}^{\pm}&i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\\\
i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}&-\mu_{L}^{\pm}\end{array}\right]\right\\}.$
(103)
After straightforward exponentialization (103) can be written in the compact
form
$\displaystyle U^{D}$ $\displaystyle=$
$\displaystyle\Bigg{\\{}\left[\begin{array}[]{cc}I&0\\\
0&I\end{array}\right]\cos\left[\dfrac{t-t_{0}}{\hbar}c^{2}\sqrt{{\left(\dfrac{\mu_{R}^{\pm}+\mu_{L}^{\pm}}{2}\right)^{2}}+\left(i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\right)^{2}}\right]-$
(106) $\displaystyle-$ $\displaystyle
i\left[\begin{array}[]{cc}\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}&i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\\\
i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}&-\dfrac{\mu_{L}^{\pm}+\mu_{R}^{\pm}}{2}\end{array}\right]\times$
$\displaystyle\times$
$\displaystyle\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}c^{2}\sqrt{{\left(\dfrac{\mu_{R}^{\pm}+\mu_{L}^{\pm}}{2}\right)^{2}+\left(i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\right)^{2}}}\right]}{\sqrt{{\left(\dfrac{\mu_{R}^{\pm}+\mu_{L}^{\pm}}{2}\right)^{2}+\left(i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\right)^{2}}}}\Bigg{\\}}\times$
$\displaystyle\times$
$\displaystyle\exp\left\\{-i\dfrac{(\mu_{R}^{\pm}-\mu_{L}^{\pm})c^{2}}{2\hbar}(t-t_{0})\right\\},$
(110)
where it is understood that all the functions are defined by the appropriate
Taylor series expansions.
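The compact form (106) can be checked numerically on a momentum eigenstate. In
the sketch below (natural units $\hbar=c=1$; the masses, momentum eigenvalue,
and time are illustrative, and the operator $i(\hbar/c)\sigma^{i}\partial_{i}$
is represented by $q\,\sigma_{z}$ as a simplifying assumption) the matrix
exponential of the Hamiltonian (102) agrees with the closed form and is unitary:

```python
import numpy as np
from scipy.linalg import expm

mu_R, mu_L, q, t = 1.3, 0.7, 0.9, 2.5      # illustrative values, hbar = c = 1
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])

H = np.block([[mu_R * I2, q * sz],
              [q * sz,   -mu_L * I2]])      # Hamiltonian of Eq. (102) on the eigenstate

U_numeric = expm(-1j * t * H)

# Closed form of Eq. (106): a scalar phase times cos/sin of the traceless part.
mbar = 0.5 * (mu_R + mu_L)
omega = np.sqrt(mbar**2 + q**2)
M = np.block([[mbar * I2, q * sz],
              [q * sz,   -mbar * I2]])
U_closed = np.exp(-1j * 0.5 * (mu_R - mu_L) * t) * (
    np.cos(omega * t) * np.eye(4) - 1j * np.sin(omega * t) / omega * M)

print(np.allclose(U_numeric, U_closed))                          # True
print(np.allclose(U_numeric.conj().T @ U_numeric, np.eye(4)))    # unitary
```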
### 5.2 The Weyl basis
Equivalently, however, one can consider employing the Weyl representation
of the Dirac $\gamma$ matrices. This basis is defined as
$\gamma^{0}=\left[\begin{array}[]{cc}0&I\\\
I&0\end{array}\right]\quad,\quad\gamma^{i}=\left[\begin{array}[]{cc}0&\sigma^{i}\\\
-\sigma^{i}&0\end{array}\right]\quad,\quad\gamma^{5}=\left[\begin{array}[]{cc}-I&0\\\
0&I\end{array}\right].$ (111)
For the representation choice (111) the massive Weyl equation
(90) is governed by the Hamilton operator (91), which takes the following form
$\hat{H}=\left[\begin{array}[]{cc}i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}&-\mu_{L}^{\pm}\\\
-\mu_{R}^{\pm}&-i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\end{array}\right]c^{2}.$
(112)
Consequently, for the case of neutrino masses constant in time one
establishes the unitary time evolution operator in the following formal form
$U^{W}=\exp\left\\{-i\dfrac{c^{2}}{\hbar}(t-t_{0})\left[\begin{array}[]{cc}i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}&-\mu_{L}^{\pm}\\\
-\mu_{R}^{\pm}&-i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\end{array}\right]\right\\},$
(113)
which after a straightforward elementary exponentialization can be
presented in the form
$\displaystyle U^{W}$ $\displaystyle=$
$\displaystyle\left[\begin{array}[]{cc}I&0\\\
0&I\end{array}\right]\cos\left[\dfrac{t-t_{0}}{\hbar}c^{2}\sqrt{{\mu_{L}^{\pm}\mu_{R}^{\pm}+\left(i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\right)^{2}}}\right]-$
(116) $\displaystyle-$ $\displaystyle
i\left[\begin{array}[]{cc}i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}&-\mu_{L}^{\pm}\\\
-\mu_{R}^{\pm}&-i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\end{array}\right]\times$
(119) $\displaystyle\times$
$\displaystyle\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}c^{2}\sqrt{{\mu_{L}^{\pm}\mu_{R}^{\pm}+\left(i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\right)^{2}}}\right]}{\sqrt{{\mu_{L}^{\pm}\mu_{R}^{\pm}+\left(i\dfrac{\hbar}{c}\sigma^{i}\partial_{i}\right)^{2}}}}.$
(120)
Evidently, the time evolution operator derived in the Weyl representation (116)
has a simpler form than its Dirac equivalent (106). In this sense the choices
are not physically equivalent, _i.e._ they yield different solutions of the
same equation. This is not, however, so strange a result: the two
choices can be related to physics in different energy regions, so it is
useful to solve the massive Weyl equation in both of the mentioned representations.
It must be emphasized that, strictly speaking, the results obtained in this
subsection refer to the massive Weyl equations presented in the
Schrödinger time-evolution form (90).
### 5.3 The space-time evolution
One can now employ the results obtained above, _i.e._ the momentum
eigenequations (94), the spatial evolutions (95), and the unitary time
evolution operators (106) and (116), for an exact determination of the
appropriate solutions of the massive Weyl equation (90) in both the Dirac and
the Weyl representations of the Dirac gamma matrices.
#### 5.3.1 Dirac-like solutions
Applying first the Dirac representation, by elementary algebraic manipulations
one straightforwardly obtains the right-handed chiral Weyl fields in the
following form
$\displaystyle(\psi^{\pm}_{R})^{D}(x,t)=\Bigg{\\{}\Bigg{[}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{D}({p_{R}^{\pm}}^{0})\right]-$
(121) $\displaystyle-$ $\displaystyle
i\mu_{\pm}^{D}c^{2}\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}({p_{R}^{\pm}}^{0})\right]}{E^{D}({p_{R}^{\pm}}^{0})}\Bigg{]}\exp\left\\{-\dfrac{i}{\hbar}{p_{R}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{R})^{D}_{0}-$
$\displaystyle-$ $\displaystyle
i{p_{L}^{\pm}}^{0}c\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}({p_{L}^{\pm}}^{0})\right]}{E^{D}({p_{L}^{\pm}}^{0})}\exp\left\\{-\dfrac{i}{\hbar}{p_{L}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{L})^{D}_{0}\Bigg{\\}}\times$
$\displaystyle\times$
$\displaystyle\exp\left\\{-i\dfrac{(\mu_{R}^{\pm}-\mu_{L}^{\pm})c^{2}}{2\hbar}(t-t_{0})\right\\},$
where, in shortened notation,
$(\psi^{\pm}_{R,L})^{D}_{0}=(\psi^{\pm}_{R,L})^{D}(x_{0},t_{0})$,
${\mu_{\pm}^{D}}=\dfrac{\mu_{R}^{\pm}+\mu_{L}^{\pm}}{2}$ and
$E^{D}({p_{R}^{\pm}}^{0})\equiv
c^{2}\sqrt{{\left(\mu_{\pm}^{D}\right)^{2}+\left(\dfrac{{p_{R}^{\pm}}^{0}}{c}\right)^{2}}}.$
(122)
Similarly, the left-handed chiral Weyl fields can also be determined
exactly; the result is analogous to the right-handed case
$\displaystyle(\psi^{\pm}_{L})^{D}(x,t)=\Bigg{\\{}\Bigg{[}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{D}({p_{L}^{\pm}}^{0})\right]+$
(123) $\displaystyle+$ $\displaystyle
i\mu_{\pm}^{D}c^{2}\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}({p_{L}^{\pm}}^{0})\right]}{E^{D}({p_{L}^{\pm}}^{0})}\Bigg{]}\exp\left\\{-\dfrac{i}{\hbar}{p_{L}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{L})^{D}_{0}-$
$\displaystyle-$ $\displaystyle
i{p_{R}^{\pm}}^{0}c\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}({p_{R}^{\pm}}^{0})\right]}{E^{D}({p_{R}^{\pm}}^{0})}\exp\left\\{-\dfrac{i}{\hbar}{p_{R}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{R})^{D}_{0}\Bigg{\\}}\times$
$\displaystyle\times$
$\displaystyle\exp\left\\{-i\dfrac{(\mu_{R}^{\pm}-\mu_{L}^{\pm})c^{2}}{2\hbar}(t-t_{0})\right\\}.$
#### 5.3.2 Weyl-like solutions
A similar line of thought can be carried out in the Weyl basis. An elementary
calculation leads to the right-handed chiral Weyl fields in the form
$\displaystyle(\psi^{\pm}_{R})^{W}(x,t)=\Bigg{\\{}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{W}({p_{R}^{\pm}}^{0})\right]-$
(124) $\displaystyle-$ $\displaystyle
i{p^{\pm}_{R}}^{0}c\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}({p_{R}^{\pm}}^{0})\right]}{E^{W}({p_{R}^{\pm}}^{0})}\Bigg{\\}}\exp\left\\{-\dfrac{i}{\hbar}{p_{R}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{R})^{W}_{0}+$
$\displaystyle+$ $\displaystyle
i\mu_{L}^{\pm}c^{2}\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}({p_{L}^{\pm}}^{0})\right]}{E^{W}({p_{L}^{\pm}}^{0})}\exp\left\\{-\dfrac{i}{\hbar}{p_{L}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{L})^{W}_{0},$
where, similarly to the Dirac-like case, we have introduced the shortened
notation $(\psi^{\pm}_{R,L})^{W}_{0}=(\psi^{\pm}_{R,L})^{W}(x_{0},t_{0})$,
$\mu_{\pm}^{W}=\sqrt{{\mu_{L}^{\pm}\mu_{R}^{\pm}}}$ and
$E^{W}({p_{R}^{\pm}}^{0})\equiv
c^{2}\sqrt{{\left(\mu_{\pm}^{W}\right)^{2}+\left(\dfrac{{p_{R}^{\pm}}^{0}}{c}\right)^{2}}}.$
(125)
For the left-handed chiral Weyl fields one obtains the formula
$\displaystyle(\psi^{\pm}_{L})^{W}(x,t)=\Bigg{\\{}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{W}({p_{L}^{\pm}}^{0})\right]-$
(126) $\displaystyle+$ $\displaystyle
i{p^{\pm}_{L}}^{0}c\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}({p_{L}^{\pm}}^{0})\right]}{E^{W}({p_{L}^{\pm}}^{0})}\Bigg{\\}}\exp\left\\{-\dfrac{i}{\hbar}{p_{L}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{L})^{W}_{0}+$
$\displaystyle+$ $\displaystyle
i\mu_{R}^{\pm}c^{2}\dfrac{\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}({p_{R}^{\pm}}^{0})\right]}{E^{W}({p_{R}^{\pm}}^{0})}\exp\left\\{-\dfrac{i}{\hbar}{p_{R}^{\pm}}^{0}(x-x_{0})_{i}\sigma^{i}\right\\}(\psi^{\pm}_{R})^{W}_{0}.$
In this manner one sees that the difference between the obtained solutions is
crucial. A direct comparison of the Weyl-like solutions (124) and (126) with the
Dirac-like solutions (121) and (123) shows that in the Dirac basis there
are different coefficients of the cosines and sines, there is an additional
time exponent, and moreover the functions $E^{D}({p_{R}^{\pm}}^{0})$ and
$E^{W}({p_{R}^{\pm}}^{0})$, which have a basic status for both solutions, also
have different forms depending on the choice of the representation of the Dirac
$\gamma$ matrices.
### 5.4 Probability density. Normalization
If the chiral Weyl fields are known, then in the Dirac representation one can
recover the usual Dirac fields in the following way
$(\psi^{\pm})^{D}=\left[\begin{array}[]{cc}\dfrac{(\psi^{\pm}_{R})^{D}+(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}&\dfrac{(\psi^{\pm}_{R})^{D}-(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}\\\
\dfrac{(\psi^{\pm}_{R})^{D}-(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}&\dfrac{(\psi^{\pm}_{R})^{D}+(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}\end{array}\right],$
(127)
where in shortened notation $(\psi^{\pm})^{D}=(\psi^{\pm})^{D}(x,t)$ and
$(\psi^{\pm}_{R,L})^{D}=(\psi^{\pm}_{R,L})^{D}(x,t)$. Similarly, in the Weyl
representation the Dirac fields can be determined as
$(\psi^{\pm})^{W}=\left[\begin{array}[]{cc}(\psi^{\pm}_{L})^{W}\mathbf{1}_{2}&\mathbf{0}_{2}\\\
\mathbf{0}_{2}&(\psi^{\pm}_{R})^{W}\mathbf{1}_{2}\end{array}\right],$ (128)
where we have again used the shortened notation
$(\psi^{\pm})^{W}=(\psi^{\pm})^{W}(x,t)$ and
$(\psi^{\pm}_{R,L})^{W}=(\psi^{\pm}_{R,L})^{W}(x,t)$. It is now evident that
in general these two cases are different from the physical, mathematical, and
computational points of view. In this manner, if we consider the quantum
mechanical probability density and its normalization, we are forced to relate
the Lorentz-invariant probability density to the chosen representation
$\Omega^{D,W}\equiv(\bar{\psi^{\pm}})^{D,W}(\psi^{\pm})^{D,W},$ (129) $\int
d^{3}x\Omega^{D,W}=\mathbf{1}_{4}.$ (130)
Using (127), by elementary calculation one obtains
$\Omega^{D}=\left[\begin{array}[]{cc}\dfrac{(\bar{\psi^{\pm}}_{R})^{D}(\psi^{\pm}_{R})^{D}+(\bar{\psi^{\pm}}_{L})^{D}(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}&\dfrac{(\bar{\psi^{\pm}}_{R})^{D}(\psi^{\pm}_{R})^{D}-(\bar{\psi^{\pm}}_{L})^{D}(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}\\\
\dfrac{(\bar{\psi^{\pm}}_{R})^{D}(\psi^{\pm}_{R})^{D}-(\bar{\psi^{\pm}}_{L})^{D}(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}&\dfrac{(\bar{\psi^{\pm}}_{R})^{D}(\psi^{\pm}_{R})^{D}+(\bar{\psi^{\pm}}_{L})^{D}(\psi^{\pm}_{L})^{D}}{2}\mathbf{1}_{2}\end{array}\right],$
(131)
By application of (128) the probability density (129) becomes
$\Omega^{W}=\left[\begin{array}[]{cc}(\bar{\psi^{\pm}}_{R})^{W}(\psi^{\pm}_{R})^{W}\mathbf{1}_{2}&\mathbf{0}_{2}\\\
\mathbf{0}_{2}&(\bar{\psi^{\pm}}_{L})^{W}(\psi^{\pm}_{L})^{W}\mathbf{1}_{2}\end{array}\right].$
(132)
Employing the normalization condition (130) in the Dirac representation one
obtains the system of equations
$\dfrac{1}{2}\left(\int
d^{3}x(\bar{\psi^{\pm}}_{R})^{D}(\psi^{\pm}_{R})^{D}+\int
d^{3}x(\bar{\psi^{\pm}}_{L})^{D}(\psi^{\pm}_{L})^{D}\right)=1,$ (133)
$\dfrac{1}{2}\left(\int
d^{3}x(\bar{\psi^{\pm}}_{R})^{D}(\psi^{\pm}_{R})^{D}-\int
d^{3}x(\bar{\psi^{\pm}}_{L})^{D}(\psi^{\pm}_{L})^{D}\right)=0,$ (134)
which leads to
$\displaystyle\int d^{3}x(\bar{\psi^{\pm}}_{R})^{D}(\psi^{\pm}_{R})^{D}$
$\displaystyle=$ $\displaystyle 1,$ (135) $\displaystyle\int
d^{3}x(\bar{\psi^{\pm}}_{L})^{D}(\psi^{\pm}_{L})^{D}$ $\displaystyle=$
$\displaystyle 1.$ (136)
In the case of the Weyl representation one straightforwardly obtains
$\displaystyle\int d^{3}x(\bar{\psi^{\pm}}_{R})^{W}(\psi^{\pm}_{R})^{W}$
$\displaystyle=$ $\displaystyle 1,$ (137) $\displaystyle\int
d^{3}x(\bar{\psi^{\pm}}_{L})^{W}(\psi^{\pm}_{L})^{W}$ $\displaystyle=$
$\displaystyle 1.$ (138)
In this manner one sees that in fact both sets of conditions, (135), (136) and
(137), (138), are invariant with respect to the choice of the gamma matrix
representation
$\int d^{3}x(\bar{\psi^{\pm}}_{R,L})^{D,W}(\psi^{\pm}_{R,L})^{D,W}=1,$ (139)
which means they are physical. Using the fact that the full space-time evolution
is determined as
$\displaystyle(\psi^{\pm}_{R,L})^{D,W}(x,t)=U^{D,W}(t,t_{0})(\psi^{\pm}_{R,L})^{D,W}(x,t_{0}),$
(140)
$\displaystyle\left[U^{D,W}(t,t_{0})\right]^{\dagger}U^{D,W}(t,t_{0})=\mathbf{1}_{2},$
(141)
one finds easily the condition
$\int
d^{3}x(\bar{\psi^{\pm}}_{R,L})^{D,W}(x,t_{0})(\psi^{\pm}_{R,L})^{D,W}(x,t_{0})=1.$
(142)
Using the spatial evolution (96) one obtains the relation
$C\int
d^{3}x\left(\mathbf{1}_{2}+\dfrac{(x-x_{0})_{i}}{|x-x_{0}|}\Im\sigma^{i}\sin\left|2\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}(x-x_{0})_{i}\right|\right)=1,$
(143)
where $C\equiv\left|(\psi^{\pm}_{R,L})^{D,W}(x_{0},t_{0})\right|^{2}$ is a
constant, and $\Im{\sigma^{i}}=\dfrac{\sigma^{i}-\sigma^{i\dagger}}{2i}$ is the
imaginary part of the vector $\sigma^{i}$. The decomposition
$\sigma_{i}=[\sigma_{x},0,\sigma_{z}]+i[0,-i\sigma_{y},0]$ yields
$\Im\sigma^{i}=[0,-i\sigma_{y},0]$, and the equation (143) becomes
$C\int
d^{3}x\left(\mathbf{1}_{2}-i\dfrac{(x-x_{0})_{y}}{|x-x_{0}|}\sigma_{y}\sin\left|2\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}(x-x_{0})_{i}\right|\right)=1.$
(144)
Introducing the change of variables $(x-x_{0})_{i}\rightarrow{x^{\prime}}_{i}$
in the following way
${x^{\prime}}_{i}\equiv 2\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}(x-x_{0})_{i},$
(145)
and the effective volume $V^{\prime}$ due to the vector ${x^{\prime}}_{i}$
$V^{\prime}\mathbf{1}_{2}=\int
d^{3}x^{\prime}\left\\{\mathbf{1}_{2}-i\sigma_{y}{x^{\prime}}_{y}\dfrac{\sin|x^{\prime}|}{|x^{\prime}|}\right\\},$
(146)
the equation (144) can be rewritten in the form
$\int d^{3}x^{\prime}V^{\prime}\mathbf{1}_{2}=\dfrac{1}{C}\mathbf{1}_{2},$
(147)
so that one obtains easily
$(\psi^{\pm}_{R,L})^{D,W}(x_{0},t_{0})=\left(2\dfrac{{p_{R,L}^{\pm}}^{0}}{\hbar}\right)^{3/2}\dfrac{1}{\sqrt{V^{\prime}}}\exp{i\theta_{\pm}},$
(148)
where $\theta_{\pm}$ are arbitrary constant phases. The volume (146) differs
from the standard one by the presence of the extra axial (y) volume $V_{y}$
$V_{y}=-i\sigma_{y}\int
d^{3}x^{\prime}{x^{\prime}}_{y}\dfrac{\sin|x^{\prime}|}{|x^{\prime}|},$ (149)
which is the axial effect and has nontrivial feature, namely
$V_{y}=\left\\{\begin{array}[]{cc}0&\mathrm{on}\leavevmode\nobreak\
\mathrm{finite}\leavevmode\nobreak\ \mathrm{symmetrical}\leavevmode\nobreak\
\mathrm{spaces}\\\ \infty&\mathrm{on}\leavevmode\nobreak\
\mathrm{infinite}\leavevmode\nobreak\ \mathrm{symmetrical}\leavevmode\nobreak\
\mathrm{spaces}\\\ <\infty&\mathrm{on}\leavevmode\nobreak\
\mathrm{sections}\leavevmode\nobreak\ \mathrm{of}\leavevmode\nobreak\
\mathrm{symmetrical}\leavevmode\nobreak\ \mathrm{spaces}\end{array}\right..$
(150)
One sees now that the normalization is strictly speaking dependent on the
choice of a region of integrability. For infinite symmetric spatial regions
this procedure is not well defined, because the axial effect is infinite.
However, one can consider some reasonable cases that consider the quantum
theory on finite symmetric spatial regions. Moreover, the problem of
integrability is defined with respect to choice of the initial momentum of the
Weyl chiral fields ${p_{R,L}^{\pm}}^{0}$. In fact there are many possible
nonequivalent physical situations connected with a concrete choice of this
eigenvalue. The one of this type situations related to a finite symmetric
spatial region, we are going to discuss in the next section as the example of
the massive neutrinos model, which in general was solved in this paper.
## 6 The reasonable case
Let us consider finally the reasonable case, that is based on the
normalization in a finite symmetrical box and putting by hands the value of
initial momenta of the chiral Weyl fields according to the Special Relativity
${p_{R,L}^{\pm}}^{0}=\mu_{R,L}^{\pm}c.$ (151)
For that simplified case the normalization discussed in the previous section
leads to the following initial data condition
$(\psi^{\pm}_{R,L})^{D,W}(x_{0},t_{0})=\sqrt{\left(2\dfrac{c}{\hbar}\right)^{3}\dfrac{\mu^{3}_{R,L}}{V^{\prime}}}\exp{i\theta_{\pm}}=\dfrac{1}{\sqrt{V}}\exp{i\theta_{\pm}},$
(152)
where $V=\int d^{3}x$. Introducing the function $E^{D}(x,y)$
$E^{D}(x,y)\equiv c^{2}\sqrt{{\left(\dfrac{x+y}{2}\right)^{2}}+x^{2}},$ (153)
the right- and the left-hand chiral Weyl fields in the Dirac representation
take the following form
$\displaystyle(\psi^{\pm}_{R})^{D}(x,t)=\Bigg{\\{}\Bigg{[}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{D}(\mu_{R}^{\pm},\mu_{L}^{\pm})\right]-$
(154) $\displaystyle-$ $\displaystyle
i\dfrac{\mu_{\pm}^{D}c^{2}}{E^{D}(\mu_{R}^{\pm},\mu_{L}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}(\mu_{R}^{\pm},\mu_{L}^{\pm})\right]\Bigg{]}\exp\left\\{-\dfrac{ic}{\hbar}\mu_{R}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}-$
$\displaystyle-$ $\displaystyle
i\dfrac{\mu_{L}^{\pm}c^{2}}{E^{D}(\mu_{L}^{\pm},\mu_{R}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}(\mu_{L}^{\pm},\mu_{R}^{\pm})\right]\exp\left\\{-\dfrac{ic}{\hbar}\mu_{L}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}\Bigg{\\}}\times$
$\displaystyle\times$
$\displaystyle\dfrac{1}{\sqrt{V}}\exp\left\\{i\left[\theta_{\pm}-\dfrac{(\mu_{R}^{\pm}-\mu_{L}^{\pm})c^{2}}{2\hbar}(t-t_{0})]\right]\right\\}.$
and
$\displaystyle(\psi^{\pm}_{L})^{D}(x,t)=\Bigg{\\{}\Bigg{[}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{D}(\mu_{L}^{\pm},\mu_{R}^{\pm})\right]+$
(155) $\displaystyle+$ $\displaystyle
i\dfrac{\mu_{\pm}^{D}c^{2}}{E^{D}(\mu_{L}^{\pm},\mu_{R}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}(\mu_{L}^{\pm},\mu_{R}^{\pm})\right]\Bigg{]}\exp\left\\{-\dfrac{ic}{\hbar}\mu_{L}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}-$
$\displaystyle-$ $\displaystyle
i\dfrac{\mu_{R}^{\pm}c^{2}}{E^{D}(\mu_{R}^{\pm},\mu_{L}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{D}(\mu_{R}^{\pm},\mu_{L}^{\pm})\right]\exp\left\\{-\dfrac{ic}{\hbar}\mu_{R}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}\Bigg{\\}}\times$
$\displaystyle\times$
$\displaystyle\dfrac{1}{\sqrt{V}}\exp\left\\{i\left[\theta_{\pm}-\dfrac{(\mu_{R}^{\pm}-\mu_{L}^{\pm})c^{2}}{2\hbar}(t-t_{0})]\right]\right\\}.$
Similarly, introducing the function $E^{W}(x,y)$
$E^{W}(x,y)\equiv c^{2}\sqrt{xy+x^{2}},$ (156)
for the studied case the right- and the left-hand chiral Weyl fields in the
Weyl representation have a form
$\displaystyle(\psi^{\pm}_{R})^{W}(x,t)=\dfrac{\exp{i\theta_{\pm}}}{\sqrt{V}}\Bigg{\\{}\Bigg{[}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{W}(\mu_{R}^{\pm},\mu_{L}^{\pm})\right]-\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ $ (157) $\displaystyle-$
$\displaystyle\dfrac{i\mu_{R}^{\pm}c^{2}}{E^{W}(\mu_{R}^{\pm},\mu_{L}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}(\mu_{R}^{\pm},\mu_{L}^{\pm})\right]\Bigg{]}\exp\left\\{-\dfrac{ic}{\hbar}\mu_{R}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}+\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ $ $\displaystyle+$
$\displaystyle\dfrac{i\mu_{L}^{\pm}c^{2}}{E^{W}(\mu_{L}^{\pm},\mu_{R}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}(\mu_{L}^{\pm},\mu_{R}^{\pm})\right]\exp\left\\{-\dfrac{ic}{\hbar}\mu_{L}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}\Bigg{\\}},\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ $
and
$\displaystyle(\psi^{\pm}_{L})^{W}(x,t)=\dfrac{\exp{i\theta_{\pm}}}{\sqrt{V}}\Bigg{\\{}\Bigg{[}\cos\left[\dfrac{t-t_{0}}{\hbar}E^{W}(\mu_{L}^{\pm},\mu_{R}^{\pm})\right]-\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ $ (158) $\displaystyle-$
$\displaystyle\dfrac{i\mu_{L}^{\pm}c^{2}}{E^{W}(\mu_{L}^{\pm},\mu_{R}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}(\mu_{L}^{\pm},\mu_{R}^{\pm})\right]\Bigg{]}\exp\left\\{-\dfrac{ic}{\hbar}\mu_{L}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}+\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ $ $\displaystyle+$
$\displaystyle\dfrac{i\mu_{R}^{\pm}c^{2}}{E^{W}(\mu_{R}^{\pm},\mu_{L}^{\pm})}\sin\left[\dfrac{t-t_{0}}{\hbar}E^{W}(\mu_{R}^{\pm},\mu_{L}^{\pm})\right]\exp\left\\{-\dfrac{ic}{\hbar}\mu_{R}^{\pm}(x-x_{0})_{i}\sigma^{i}\right\\}\Bigg{\\}}.\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ $
The ”reasonable case” considered above is only the example following from the
massive neutrinos model given by the massive Weyl equations (21) obtained due
to the Snyder model of noncommutative geometry (11), and naturally it is not
the only case. Actually there are many other possibilities for determination
of the relation between the initial eigenmomentum values ${p_{R,L}^{\pm}}^{0}$
and the masses $\mu_{R,L}^{\pm}$ of the right- and left- hand chiral Weyl
fields $\psi_{R,L}^{\pm}$. However, the concrete choice (151) tested in this
section presents a crucial reasonability contained in its special-
relativistic-like character. Naturally this choice is connected with the
special equivalence principle applied to the massive neutrinos in at the
beginning of their space-time evolution, _i.e._
$E_{R,L}^{\pm}=\mu^{\pm}_{R,L}c^{2}={p_{R,L}^{\pm}}^{0}c$. This case, however,
is also nontrivial from the high energy physics point of view [13], namely it
is related to the region of ultra-high energies, widely considered in the
modern astrophysics (See _e.g._ [14] and suitable references therein). So, the
presented reasonable case of the massive neutrinos evolution in fact describes
their physics in this region, and has possible natural application in ultra-
high energy astrophysics.
## 7 Outlook
In this paper we have discussed in some detail the consequences of the massive
neutrinos model arising due to the Snyder model of noncommutative geometry.
The massive neutrinos model is a consequence of the Dirac equations for a
usual relativistic quantum state supplemented by the generalized
$\gamma^{5}$-term [1]. In fact, Sidharth has suggested that this term could
give a neutrino mass, however, in spite of a good physical intuition he has
finished considerations on a laconic statement only, with no any concrete
calculations and propositions for a generation mechanism of neutrinos masses
[15].
First we have considered the physical status of the Snyder model. By detailed
calculation we have shown that, in contrast to Special Relativity theory,
within the massive neutrinos model an energy of any original relativistic
massive or massless quantum state is strictly renormalized due to a maximal
energy, directly related to a minimal scale $\ell$ being the deformation
parameter in the Snyder noncommutative geometry. In this manner the Snyder
model has received a deep physical sense, that is in some partial relation to
the Markov–Kadyshevsky approach [9].
Next the integrability problem of the massive neutrinos model was detailed
discussed. First we have considered the modified Dirac equations, which
rewritten in the Schrödinger form have yielded manifestly nonhermitian
Hamiltonian being a sum of a hermitian and a antihermitian parts. By employing
the 4th order approximation of the Zassenhaus formula we have proven that the
procedure is not algorithm by the presence of the cyclic problem in
exponentialization. Consequently, the Dirac equations are not integrable
exactly. By this formal reason we have redefined the integrability problem
with respect to the massive Weyl equations corresponding to the Dirac
equations. The massive Weyl equations was also rewritten in the Schrödinger
form, and by using of both the Dirac and the Weyl representations of the Dirac
$\gamma$ matrices, we have constructed its analytical exact solutions. We have
shown that the normalizability of a solution is correctly defined only for
special regions of spatial integration, _i.e._ sections of symmetric spaces or
finite symmetric spaces.
Finally, the case related to ultra-high energy physics and astrophysics was
shortly discussed. The all obtained results in general present interesting new
physical content. In itself the investigated quantum-mechanical approach to
the massive Weyl equation is novel. There are still possible applications of
the proposed massive neutrinos model to phenomenology of particle physics and
astrophysics, especially in the ultra-high energy region. The open question is
the gauge field theory related to the massive neutrinos model, possessing some
features of QCD.
## Acknowledgements
The author benefitted many valuable discussions from Profs. A. B. Arbuzov, I.
Ya. Aref’eva, K. A. Bronnikov, I. L. Buchbinder, and V. N. Pervushin. Special
thanks are directed to Profs. B. G. Sidharth and S. R. Valluri.
## References
* [1] L. A. Glinka, [arXiv:0812.0551 [hep-th]] ; [arXiv:0902.4811 [hep-ph]]
* [2] A. Connes, Noncommutative Geometry. Academic Press (1994);
N. Seiberg and E. Witten, JHEP 09, 032 (1999) [arXiv:hep-th/9908142];
D. J. Gross, A. Hashimoto, and N. Itzhaki, Adv. Theor. Math. Phys. 4, 893-928
(2000) [arXiv:hep-th/0008075];
M. R. Douglas and N. A. Nekrasov, Rev. Mod. Phys. 73, 977-1029 (2002)
[arXiv:hep-th/0106048];
R. J. Szabo, Phys. Rep. 378, 207-299 (2003) [arXiv:hep-th/0109162];
M. Chaichian, K. Nishijima and A. Tureanu, Phys. Lett. B 568, 146-152 (2003)
[arXiv:hep-th/0209008];
L. Alvarez-Gaume and M. A. Vazquez-Mozo, Nucl. Phys. B 668, 293-321 (2003)
[arXiv:hep-th/0305093];
A. Berard and H. Mohrbach, Phys. Rev. D 69, 127701 (2004) [arXiv:hep-
th/0310167];
A. Das and J. Frenkel, Phys. Rev. D 69, 065017 (2004) [arXiv:hep-th/0311243];
M. Chaichian, M. N. Mnatsakanova, K. Nishijima, A. Tureanu, and Yu. A. Vernov,
[arXiv:hep-th/0402212];
D. H. T. Franco and C. M. M. Polito, J. Math. Phys. 46, 083503 (2005)
[arXiv:hep-th/0403028];
M. Chaichian, P. P. Kulish, K. Nshijima, and A. Tureanu, Phys. Lett. B 604,
98-102 (2004) [arXiv:hep-th/0408069];
C. D. Fosco and G. Torroba, Phys. Rev. D 71, 065012 (2005) [arXiv:hep-
th/0409240];
C.-S. Chu, K. Furuta, and T. Inami, Int. J. Mod. Phys. A 21, 67 (2006)
[arXiv:hep-th/0502012];
B. Schroer, Annals Phys. 319, 92 (2005) [arXiv:hep-th/0504206];
O. W. Greenberg, Phys. Rev. D 73, 045014 (2006) [arXiv:hep-th/0508057];
M. A. Soloviev, Theor. Math. Phys. 147, 660-669 (2006), FIAN/TD/7-06
[arXiv:hep-th/0605249]; Theor. Math. Phys. 153, 1351-1363 (2007), FIAN-
TD/2007-16 [arXiv:0708.0811 [math-ph]];
E. Harikumar and V. O. Rivelles, Class. Quantum Grav. 23, 7551-7560 (2006)
[arXiv:hep-th/0607115];
G. Fiore and J. Wess, Phys. Rev. D 75, 105022 (2007) [arXiv:hep-th/0701078];
M. Chaichian, M. N. Mnatsakanova, A. Tureanu, and Yu. A. Vernov, JHEP 0809,
125 (2008) [arXiv:0706.1712 [hep-th]].
* [3] H. S. Snyder, Phys. Rev. 71, 38-41 (1947); _ibid._ 72, 68-71 (1947).
* [4] M. V. Battisti and S. Meljanac, Phys. Rev. D 79, 067505 (2009) [arXiv:0812.3755 [hep-th]].
* [5] I. Montvay and G. Münster, Quantum Fields on a Lattice. Cambridge University Press 1994.
* [6] M. Kontsevich, Lett. Math. Phys. 66, 157-216 (2003) [arXiv:q-alg/9709040].
* [7] G. Dito and D. Sternheimer, Lect. Math. Theor. Phys. 1, 9-54, (2002) [arXiv:math/0201168].
* [8] P. A. M. Dirac, The Principles of Quantum Mechanics. Clarendon Press (1958).
* [9] M. A. Markov, Prog. Theor. Phys. Suppl. E65, 85-95 (1965); Sov. Phys. JETP 24, 584 (1967);
V. G. Kadyshevsky, Sov. Phys. JETP 14, 1340-1346 (1962); Nucl. Phys. B 141,
477 (1978); in Group Theoretical Methods in Physics: Seventh International
Colloquium and Integrative Conference on Group Theory and Mathematical
Physics, Held in Austin, Texas, September 11–16, 1978. ed. by W. Beiglböck, A.
Böhm, and E. Takasugi, Lect. Notes Phys. 94, 114-124 (1978); Phys. Elem.
Chast. Atom. Yadra 11, 5 (1980);
V. G. Kadyshevsky and M. D. Mateev, Phys. Lett. B 106, 139 (1981); Nuovo Cim.
A 87, 324 (1985).
M. V. Chizhov, A. D. Donkov, V. G. Kadyshevsky, and M. D. Mateev, Nuovo Cim. A
87, 350 (1985); Nuovo Cim. A 87, 373 (1985).
V. G. Kadyshevsky, Phys. Part. Nucl. 29, 227 (1998).
V. G. Kadyshevsky, M. D. Mateev, V. N. Rodionov, and A. S. Sorin, Dokl. Phys.
51, 287 (2006) [arXiv:hep-ph/0512332]; CERN-TH/2007-150, [arXiv:0708.4205
[hep-ph]];
V. N. Rodionov, [arXiv:0903.4420 [hep-ph]].
* [10] B. G. Sidharth, Int. J. Mod. Phys. E 14, 927-929 (2005).
* [11] E. Schrödinger, Ann. Phys. 4, 79 (1926); _ibid._ , 80 (1926); _ibid._ , 81 (1926); Phys. Rev. 28, 6, 1049 (1926).
* [12] L. I. Schiff, Quantum Mechanics. 3rd ed., McGraw-Hill Company 1968;
I. Białynicki-Birula and Z. Białynicka-Birula, Quantum Electrodynamics.
Pergamon Press 1975;
W. Greiner, Relativistic Quantum Mechanics. Wave Equations. 3rd ed., Springer
2000;
J. Schwinger, Quantum Mechanics. Symbolism of Atomic Measurements. ed. by B.G.
Englert, Springer 2001;
F. Gross, Relativistic Quantum Mechanics and Field Theory. Wiley-VCH Verlag
GmBH 2004;
F. Dyson and D. Derbes, Advanced Quantum Mechanics. World Scientific 2007.
* [13] G. ’t Hooft, Phys. Lett. B 198, 61-63 (1987).
* [14] M. Lemoine and G. Sigl (eds.), Physics and Astrophysics of Ultra-High-Energy Cosmic Rays. Lect. Notes Phys. 576, Springer 2001;
L. Maccione, A. M. Taylor, D. M. Mattingly, and S. Liberati, [arXiv:0902.1756
[astro-ph.HE]];
F. W. Stecker and S. T. Scully, [arXiv:0906.1735 [astro-ph.HE]].
* [15] B. G. Sidharth, _Private Communication_ , May 2009.
|
arxiv-papers
| 2009-05-24T19:25:04 |
2024-09-04T02:49:02.867844
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "L. A. Glinka",
"submitter": "Lukasz Andrzej Glinka",
"url": "https://arxiv.org/abs/0905.3916"
}
|
0905.3937
|
# Incompressible limit of the compressible magnetohydrodynamic equations with
vanishing viscosity coefficients
Song Jiang LCP, Institute of Applied Physics and Computational Mathematics,
P.O. Box 8009, Beijing 100088, P.R. China jiang@iapcm.ac.cn , Qiangchang Ju
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009-28,
Beijing 100088, P.R. China qiangchang_ju@yahoo.com and Fucai Li∗ Department
of Mathematics, Nanjing University, Nanjing 210093, P.R. China fli@nju.edu.cn
###### Abstract.
This paper is concerned with the incompressible limit of the compressible
magnetohydrodynamic equations with vanishing viscosity coefficients and
general initial data in the whole space $\mathbb{R}^{d}$ $(d=2$ or $3$). It is
rigorously showed that, as the Mach number, the shear viscosity coefficient
and the magnetic diffusion coefficient simultaneously go to zero, the weak
solutions of the compressible magnetohydrodynamic equations converge to the
strong solution of the ideal incompressible magnetohydrodynamic equations as
long as the latter exists.
###### Key words and phrases:
compressible MHD equations, ideal incompressible MHD equations, low Mach
number, zero viscosities
###### 2000 Mathematics Subject Classification:
76W05, 35B40
∗ Corresponding author
## 1\. Introduction
Magnetohydrodynamics (MHD) studies the dynamics of compressible quasineutrally
ionized fluids under the influence of electromagnetic fields. The applications
of MHD cover a very wide range of physical objects, from liquid metals to
cosmic plasmas. The compressible viscous magnetohydrodynamic equations in the
isentropic case take the form ([11, 12, 17])
$\displaystyle\partial_{t}\tilde{\rho}+{\rm div}(\tilde{\rho}\tilde{{\bf
u}})=0,$ (1.1) $\displaystyle\partial_{t}(\tilde{\rho}\tilde{{\bf u}})+{\rm
div}(\tilde{\rho}\tilde{{\bf u}}\otimes\tilde{{\bf u}})+\nabla\tilde{P}=({\rm
curl\,}\tilde{{\bf H}})\times\tilde{{\bf H}}+\tilde{\mu}\Delta\tilde{{\bf
u}}+(\tilde{\mu}+\tilde{\lambda})\nabla({\rm div}\tilde{{\bf u}}),$ (1.2)
$\displaystyle\partial_{t}\tilde{{\bf H}}-{\rm curl\,}(\tilde{{\bf
u}}\times\tilde{{\bf H}})=-{\rm curl\,}(\tilde{\nu}\,{\rm curl\,}\tilde{{\bf
H}}),\quad{\rm div}\tilde{{\bf H}}=0.$ (1.3)
Here $x\in\mathbb{R}^{d},d=2$ or $3$, $t>0$, the unknowns $\tilde{\rho}$
denotes the density, $\tilde{{\bf
u}}=(\tilde{u}_{1},\dots,\tilde{u}_{d})\in\mathbb{R}^{d}$ the velocity, and
$\tilde{{\bf H}}=(\tilde{H}_{1},\dots,\tilde{H}_{d})\in\mathbb{R}^{d}$ the
magnetic field, respectively. The constants $\tilde{\mu}$ and
$\tilde{\lambda}$ are the shear and bulk viscosity coefficients of the flow,
respectively, satisfying $\tilde{\mu}>0$ and
$2\tilde{\mu}+d\tilde{\lambda}\geq 0$; the constant $\tilde{\nu}>0$ is the
magnetic diffusivity acting as a magnetic diffusion coefficient of the
magnetic field. $\tilde{P}(\tilde{\rho})$ is the pressure-density function and
here we consider the case
$\tilde{P}(\tilde{\rho})=a\tilde{\rho}^{\gamma},\quad a>0,\gamma>1.$ (1.4)
There are a lot of studies on the compressible MHD equations in the
literature. Here we mention some results on the multi-dimensional case. For
the two-dimensional case, Kawashima [10] obtained the global existence of
smooth solutions to the general electromagnetofluid equations when the initial
data are small perturbations of some given constant state. For the three-
dimensional compressible MHD equations, Umeda, Kawashima and Shizuta [19]
obtained the global existence and the time decay of smooth solutions to the
linearized MHD equations. Li and Yu [13] obtained the optimal decay rate of
classical solutions to the compressible MHD equations around a constant
equilibrium. The local strong solution to the compressible MHD equations with
general initial data was obtained by Vol’pert and Khudiaev [20], and Fan and
Yu [4]. Recently, Hu and Wang [6, 7], and Fan and Yu [5] established the
existence of global weak solutions to the compressible MHD equations with
general initial data; while in [21] Zhang, Jiang and Xie discussed a MHD model
describing the screw pinch problem in plasma physics and showed the global
existence of weak solutions with symmetry.
From the physical point of view, one can formally derive the incompressible
models from the compressible ones when the Mach number goes to zero and the
density becomes almost constant. Recently, Hu and Wang [8] obtained the
convergence of weak solutions of the compressible MHD equations (1.1)-(1.3) to
the weak solution of the incompressible viscous MHD equations. In [9], the
authors have employed the modulated energy method to verify the limit of weak
solutions of the compressible MHD equations (1.1)-(1.3) in the torus to the
strong solution of the incompressible viscous or partial viscous MHD equations
(the shear viscosity coefficient is zero but magnetic diffusion coefficient is
a positive constant). It is worth mentioning that the analysis in [9] turns
out that the magnetic diffusion is very important to control the interaction
between the oscillations and the magnetic field in the torus. The rigorous
justification of convergence for compressible MHD equations to the _ideal
incompressible MHD equations_ (inviscid, incompressible MHD equations) in the
torus is left open in [9] for general initial data.
The purpose of this paper is to derive the ideal incompressible MHD equations
from the compressible MHD equations (1.1)-(1.3) in the whole space
$\mathbb{R}^{d}$ $(d=2$ or $3$) with general initial data. In fact, when the
viscosities (including the shear viscosity coefficient and the magnetic
diffusion coefficient) also go to zero, we lose the spatial compactness
property of the velocity and the magnetic field, and the arguments in [8] can
not work in this case. In order to surmount this difficulty, we shall
carefully use the energy arguments. Precisely, we shall describe the
oscillations caused by the general initial data and include them in the energy
estimates. Some ideas of this type were introduced by Schochet [18] and
extended to the case of vanishing viscosity coefficients in [15, 16]. To begin
our argument, we first give some formal analysis. Formally, by utilizing the
identity
$\nabla(|\tilde{{\bf H}}|^{2})=2(\tilde{{\bf H}}\cdot\nabla)\tilde{{\bf
H}}+2\tilde{{\bf H}}\times{\rm curl\,}\tilde{{\bf H}},$
we can rewrite the momentum equation (1.2) as
$\partial_{t}(\tilde{\rho}\tilde{{\bf u}})+{\rm div}(\tilde{\rho}\tilde{{\bf
u}}\otimes\tilde{{\bf u}})+\nabla\tilde{P}=(\tilde{{\bf
H}}\cdot\nabla)\tilde{{\bf H}}-\frac{1}{2}\nabla(|\tilde{{\bf
H}}|^{2})+\tilde{\mu}\Delta\tilde{{\bf
u}}+(\tilde{\mu}+\tilde{\lambda})\nabla({\rm div}\tilde{{\bf u}}).$ (1.5)
By the identities
${\rm curl\,}{\rm curl\,}\tilde{{\bf H}}=\nabla\,{\rm div}\tilde{{\bf
H}}-\Delta\tilde{{\bf H}}$
and
${\rm curl\,}(\tilde{{\bf u}}\times\tilde{{\bf H}})=\tilde{{\bf u}}({\rm
div}\tilde{{\bf H}})-\tilde{{\bf H}}({\rm div}\tilde{{\bf u}})+(\tilde{{\bf
H}}\cdot\nabla)\tilde{{\bf u}}-(\tilde{{\bf u}}\cdot\nabla)\tilde{{\bf H}},$
together with the constraint ${\rm div}\tilde{{\bf H}}=0$, the magnetic field
equation (1.3) can be expressed as
$\partial_{t}\tilde{{\bf H}}+({\rm div}\tilde{{\bf u}})\tilde{{\bf
H}}+(\tilde{{\bf u}}\cdot\nabla)\tilde{{\bf H}}-(\tilde{{\bf
H}}\cdot\nabla)\tilde{{\bf u}}=\tilde{\nu}\Delta\tilde{{\bf H}}.$ (1.6)
We introduce the scaling
$\tilde{\rho}(x,t)=\rho^{\epsilon}(x,\epsilon t),\quad\tilde{{\bf
u}}(x,t)=\epsilon{\bf u}^{\epsilon}(x,\epsilon t),\quad\tilde{{\bf
H}}(x,t)=\epsilon{\bf H}^{\epsilon}(x,\epsilon t)$
and assume that the viscosity coefficients $\tilde{\mu}$, $\tilde{\lambda}$
and $\tilde{\nu}$ are small constants and scaled like
$\tilde{\mu}=\epsilon\mu^{\epsilon},\quad\tilde{\lambda}=\epsilon\lambda^{\epsilon},\quad\tilde{\nu}=\epsilon\nu^{\epsilon},$
(1.7)
where $\epsilon\in(0,1)$ is a small parameter and the normalized coefficients
$\mu^{\epsilon}$, $\lambda^{\epsilon}$ and $\nu^{\epsilon}$ satisfy
$\mu^{\epsilon}>0$, $2\mu^{\epsilon}+d\lambda^{\epsilon}\geq 0$, and
$\nu^{\epsilon}>0$. With the preceding scalings and using the pressure
function (1.4), the compressible MHD equations (1.1), (1.5) and (1.6) take the
form
$\displaystyle\partial_{t}{\rho}^{\epsilon}+\text{div}({\rho}^{\epsilon}{{\bf
u}}^{\epsilon})=0,$ (1.8) $\displaystyle\partial_{t}({\rho}^{\epsilon}{{\bf
u}}^{\epsilon})+\text{div}({\rho}^{\epsilon}{{\bf u}}^{\epsilon}\otimes{{\bf
u}}^{\epsilon})+\frac{a\nabla(\rho^{\epsilon})^{\gamma}}{\epsilon^{2}}=({{\bf
H}}^{\epsilon}\cdot\nabla){{\bf H}}^{\epsilon}-\frac{1}{2}\nabla(|{{\bf
H}}^{\epsilon}|^{2})$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+{\mu}^{\epsilon}\Delta{{\bf
u}}^{\epsilon}+({\mu}^{\epsilon}+{\lambda}^{\epsilon})\nabla({\rm div}{{\bf
u}}^{\epsilon}),$ (1.9) $\displaystyle\partial_{t}{{\bf H}}^{\epsilon}+({\rm
div}{{\bf u}}^{\epsilon}){{\bf H}}^{\epsilon}+({{\bf
u}}^{\epsilon}\cdot\nabla){{\bf H}}^{\epsilon}-({{\bf
H}}^{\epsilon}\cdot\nabla){{\bf u}}^{\epsilon}={\nu}^{\epsilon}\Delta{{\bf
H}}^{\epsilon},\ \ {\rm div}{{\bf H}}^{\epsilon}=0.$ (1.10)
Formally if we let $\epsilon\rightarrow 0$, we obtain from the momentum
equation (1.9) that $\rho^{\epsilon}$ converges to some function
$\bar{\rho}(t)\geq 0$ . If we further assume that the initial datum
$\rho^{\epsilon}_{0}$ is of order $1+O(\epsilon)$ (this can be guaranteed by
the initial energy bound (2.5) below), then we can expect that $\bar{\rho}=1$.
If the limits ${{\bf u}}^{\epsilon}\rightarrow{\bf u}$ and ${{\bf
H}}^{\epsilon}\rightarrow{\bf H}$ exist, then the continuity equation (1.8)
gives ${\rm div}\,{{\bf u}}=0$. Assuming that
$\mu^{\epsilon}\rightarrow 0,\quad\nu^{\epsilon}\rightarrow
0\quad\text{as}\quad\epsilon\rightarrow 0,$ (1.11)
we obtain the following ideal incompressible MHD equations
$\displaystyle\partial_{t}{\bf u}+({\bf u}\cdot\nabla){\bf u}-({{\bf
H}}\cdot\nabla){{\bf H}}+\nabla p+\frac{1}{2}\nabla(|{{\bf H}}|^{2})=0,$
(1.12) $\displaystyle\partial_{t}{{\bf H}}+({{\bf u}}\cdot\nabla){{\bf
H}}-({{\bf H}}\cdot\nabla){{\bf u}}=0,$ (1.13) $\displaystyle{\rm div}{\bf
u}=0,\quad{\rm div}{\bf H}=0.$ (1.14)
In the present paper we will prove rigorously that a weak solution of the
compressible MHD equations (1.8)-(1.10) with general initial data converges
to, as the small parameter $\epsilon$ goes to $0$, the strong solution of the
ideal incompressible MHD equations (1.12)-(1.14) in the time interval where
the strong solution of (1.12)-(1.14) exists.
Before ending the introduction, we give the notations used throughout the
present paper. We denote the space $L^{q}_{2}(\mathbb{R}^{d})\,(q\geq 1)$ by
$L^{q}_{2}(\mathbb{R}^{d})=\\{f\in L_{loc}(\mathbb{R}^{d}):f1_{\\{|f|\leq
1/2\\}}\in L^{2},f1_{\\{|f|\geq 1/2\\}}\in L^{q}\\}.$
The letters $C$ and $C_{T}$ denote various positive constants independent of
$\epsilon$, but $C_{T}$ may depend on $T$. For convenience, we denote by
$W^{s,r}\equiv W^{s,r}(\mathbb{R}^{d})$ the standard Sobolev space. In
particular, if $r=2$ we denote $W^{s,r}\equiv H^{s}$. For any vector field
$\mathbf{v}$, we denote by $P\mathbf{v}$ and $Q\mathbf{v}$, respectively, the
divergence-free part of $\mathbf{v}$ and the gradient part of $\mathbf{v}$,
namely, $Q\mathbf{v}=\nabla\Delta^{-1}(\text{div}\mathbf{v})$ and
$P\mathbf{v}=\mathbf{v}-Q\mathbf{v}$.
We state our main results in Section 2 and present the proofs in Section 3.
## 2\. main results
We first recall the local existence of strong solution to the ideal
incompressible MHD equations (1.12)-(1.14). The proof can be found in [3].
###### Proposition 2.1 ([3]).
Assume that the initial data $({\bf u}(x,t),{\bf H}(x,t))|_{t=0}=({\bf
u}_{0}(x),\linebreak{\bf H}_{0}(x))$ satisfy ${\bf u}_{0},{\bf
H}_{0}\in{H}^{s}$, $s>d/2+1$, and ${\rm div}{\bf u}_{0}=0,{\rm div}{\bf
H}_{0}=0$. Then, there exist a $T^{*}\in(0,\infty)$ and a unique solution
$({\bf u},{\bf H})\in L^{\infty}([0,T^{*}),{H}^{s})$ to the ideal
incompressible MHD equations (1.12)-(1.14) satisfying, for any $0<T<T^{*}$,
${\rm div}{\bf u}=0$, ${\rm div}{\bf H}=0$, and
$\sup_{0\leq t\leq T}\\!\big{\\{}\|({\bf u},{\bf
H})(t)\|_{H^{s}}+\|(\partial_{t}{\bf u},\partial_{t}{\bf
H})(t)\|_{H^{s-1}}\big{\\}}\leq C_{T}.$ (2.1)
We prescribe the initial conditions to the compressible MHD equations
(1.8)-(1.10) as
$\rho^{\epsilon}|_{t=0}=\rho^{\epsilon}_{0}(x),\quad\rho^{\epsilon}{\bf
u}^{\epsilon}|_{t=0}=\rho^{\epsilon}_{0}(x){\bf
u}^{\epsilon}_{0}(x)\equiv\mathbf{m}^{\epsilon}_{0}(x),\quad{\bf
H}^{\epsilon}|_{t=0}={\bf H}^{\epsilon}_{0}(x),$ (2.2)
and assume that
$\rho_{0}^{\epsilon}\geq 0,\,\,\rho^{\epsilon}_{0}-1\in
L^{\gamma}_{2},\,\,\rho^{\epsilon}_{0}|{\bf u}^{\epsilon}_{0}|^{2}\in
L^{1},\,\,\mathbf{m}^{\epsilon}_{0}=0\,\,\text{for
a.e.}\,\,{\rho^{\epsilon}_{0}=0},$ (2.3)
and
${\bf H}^{\epsilon}_{0}\in L^{2},\;\;{\rm div}{\bf H}^{\epsilon}_{0}=0.$ (2.4)
Moreover, we assume that the initial data also satisfy the following uniform
bound
$\int_{\mathbb{R}^{d}}\Big{[}\frac{1}{2}\rho^{\epsilon}_{0}|{\bf
u}^{\epsilon}_{0}|^{2}+\frac{1}{2}|{\bf
H}^{\epsilon}_{0}|^{2}+\frac{a}{\epsilon^{2}{(\gamma-1)}}\big{(}(\rho^{\epsilon}_{0})^{\gamma}-1-\gamma(\rho^{\epsilon}_{0}-1)\big{)}\Big{]}dx\leq
C.$ (2.5)
Under the above assumptions, it was pointed out in [6] that the Cauchy problem
of the compressible MHD equations (1.8)-(1.10) has a global weak solution,
which can be stated as follows (see also [8]).
###### Proposition 2.2 ([6, 8]).
Let $\gamma>d/2$. Supposing that the initial data
$(\rho^{\epsilon}_{0},\mathbf{m}^{\epsilon}_{0},{\bf H}^{\epsilon}_{0})$
satisfy the assumptions (2.3)-(2.5), then the compressible MHD equations
(1.8)-(1.10) with the initial data (2.2) enjoy at least one global weak
solution $(\rho^{\epsilon},{\bf u}^{\epsilon},{\bf H}^{\epsilon})$ satisfying
1. (1)
$\rho^{\epsilon}-1\in L^{\infty}(0,\infty;L^{\gamma}_{2})\cap
C([0,\infty),L^{r}_{2})$ for all $1\leq r<\gamma$, $\rho^{\epsilon}|{\bf
u}^{\epsilon}|^{2}\in L^{\infty}(0,\infty;L^{1})$, ${\bf H}^{\epsilon}\in
L^{\infty}(0,\infty;L^{2})$, ${\bf u}^{\epsilon}\in L^{2}(0,T;H^{1}),$
$\rho^{\epsilon}{\bf u}^{\epsilon}\in
C([0,T],L^{\frac{2\gamma}{\gamma+1}}_{weak})$, and ${\bf H}^{\epsilon}\in
L^{2}(0,T;H^{1})\cap C([0,T],L^{\frac{2\gamma}{\gamma+1}}_{weak})$ for all
$T\in(0,\infty)$;
2. (2)
the energy inequality
$\mathcal{E}^{\epsilon}(t)+\int^{t}_{0}\mathcal{D}^{\epsilon}(s)ds\leq\mathcal{E}^{\epsilon}(0)$
(2.6)
holds with the finite total energy
$\qquad\qquad\mathcal{E}^{\epsilon}(t)\equiv\int_{\mathbb{R}^{d}}\bigg{[}\frac{1}{2}\rho^{\epsilon}|{\bf
u}^{\epsilon}|^{2}+\frac{1}{2}|{\bf
H}^{\epsilon}|^{2}+\frac{a}{\epsilon^{2}{(\gamma-1)}}\big{(}(\rho^{\epsilon})^{\gamma}-1-\gamma(\rho^{\epsilon}-1)\big{)}\bigg{]}(t)$
(2.7)
and the dissipation energy
$\mathcal{D}^{\epsilon}(t)\equiv\int_{\mathbb{R}^{d}}\big{[}\mu^{\epsilon}|\nabla{\bf
u}^{\epsilon}|^{2}+(\mu^{\epsilon}+\lambda^{\epsilon})|{\rm div}{\bf
u}^{\epsilon}|^{2}+\nu^{\epsilon}|\nabla{\bf H}^{\epsilon}|^{2}\big{]}(t);$
(2.8)
3. (3)
the continuity equation is satisfied in the sense of renormalized solutions,
i.e.,
$\partial_{t}b(\rho^{\epsilon})+{\rm div}(b(\rho^{\epsilon}){\bf
u}^{\epsilon})+\big{(}b^{\prime}(\rho^{\epsilon})\rho^{\epsilon}-b(\rho^{\epsilon})\big{)}{\rm
div}{\bf u}^{\epsilon}=0$ (2.9)
for any $b\in C^{1}(\mathbb{R})$ such that $b^{\prime}(z)$ is a constant for
$z$ large enough;
4. (4)
the equations (1.8)-(1.10) hold in
$\mathcal{D}^{\prime}(\mathbb{R}^{d}\times(0,\infty))$.
The initial energy inequality (2.5) implies that $\rho^{\epsilon}_{0}$ is of
order $1+O(\epsilon)$. We write $\rho^{\epsilon}=1+\epsilon\varphi^{\epsilon}$
and denote
$\Pi^{\epsilon}(x,t)=\frac{1}{\epsilon}\sqrt{\frac{2a}{\gamma-1}\big{(}(\rho^{\epsilon})^{\gamma}-1-\gamma(\rho^{\epsilon}-1)\big{)}}.$
We will use the above approximation $\Pi^{\epsilon}(x,t)$ instead of
$\varphi^{\epsilon}$, since we can not obtain any bound for
$\varphi^{\epsilon}$ in $L^{\infty}(0,T;L^{2})$ directly if $\gamma<2$.
We also need to impose the following conditions on the solution
$(\rho^{\epsilon},{\bf u}^{\epsilon},{\bf H}^{\epsilon})$ at infinity
$\rho^{\epsilon}\rightarrow 1,\quad{\bf u}^{\epsilon}\rightarrow\mathbf{0},\ \
{\bf
H}^{\epsilon}\rightarrow\mathbf{0}\quad\text{as}\quad|x|\rightarrow+\infty.$
The main results of this paper can be stated as follows.
###### Theorem 2.3.
Let $s>{d}/{2}+2$ and the conditions in Proposition 2.2 hold. Assume that the
shear viscosity $\mu^{\epsilon}$ and the magnetic diffusion coefficient
$\nu^{\epsilon}$ satisfy
$\displaystyle\mu^{\epsilon}=\epsilon^{\alpha},\quad\nu^{\epsilon}=\epsilon^{\beta}$
(2.10)
for some constants $\alpha,\beta>0$ satisfying $0<\alpha+\beta<2$. Moreover we
assume that $\sqrt{\rho^{\epsilon}_{0}}{\bf u}^{\epsilon}_{0}$ converges
strongly in $L^{2}$ to some $\tilde{{\bf u}}_{0}$, ${\bf H}^{\epsilon}_{0}$
converges strongly in $L^{2}$ to some ${\bf H}_{0}$, and $\Pi^{\epsilon}_{0}$
converges strongly in $L^{2}$ to some $\varphi_{0}$. Let $({\bf u},{\bf H})$
be the smooth solution to the ideal incompressible MHD equations (1.12)-(1.14)
defined on $[0,T^{*})$ with $({\bf u},{\bf H})|_{t=0}=({\bf u}_{0},{\bf
H}_{0}),{\bf u}_{0}=P\tilde{{\bf u}}_{0}$. Then, for any $0<T<T^{*}$, the
global weak solution $(\rho^{\epsilon},{\bf u}^{\epsilon},{\bf H}^{\epsilon})$
of the compressible MHD equations (1.8)-(1.10) established in Proposition 2.2
satisfies
1. (1)
$\rho^{\epsilon}$ converges strongly to $1$ in
$C([0,T],L^{\gamma}_{2}(\mathbb{R}^{d}))$;
2. (2)
${\bf H}^{\epsilon}$ converges strongly to ${\bf H}$ in
$L^{\infty}(0,T;L^{2}(\mathbb{R}^{d}))$;
3. (3)
$P(\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon})$ converges strongly to ${\bf u}$
in $L^{\infty}(0,T;L^{2}(\mathbb{R}^{d}))$;
4. (4)
$\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}$ converges strongly to ${\bf u}$ in
$L^{r}(0,T;L^{2}_{\mathrm{loc}}(\mathbb{R}^{d}))$ for all $1\leq r<+\infty$.
The proof of above results is based on the combination of the modulated energy
method, motivated by Brenier [1], the Strichartz’s estimate of linear wave
equations [2] and the weak convergence method. Masmoudi [16] made use of such
ideas to consider the incompressible, inviscid convergence of weak solutions
of compressible Navier-Stokes equations to the strong solution of the
incompressible Euler equations in the whole space and the torus. We shall
follow and adapt the method used in [16] to prove our results. Besides the
difficulties pointed out in [16], we have to overcome the additional
difficulties caused by the strong coupling of the hydrodynamic motion and the
magnetic fields. Very careful energy estimates are employed to estimate those
nonlinear terms for large data, see Step 4 in the proof of Theorem 2.3 below.
On the other hand, because the general initial data are considered in the
present paper, the oscillations appear and propagate with the solution. We
will use the Strichartz’s estimate of linear wave equations to deal with such
oscillations and their interactions with velocity and magnetic fields.
Finally, the weak convergence method and refined energy analysis are employed
to obtain the desired convergence results.
###### Remark 2.1.
The assumption that $\Pi^{\epsilon}_{0}$ converges strongly in $L^{2}$ to some
$\varphi_{0}$ in fact implies that $\varphi^{\epsilon}_{0}$ converges strongly
to $\varphi_{0}$ in $L^{\gamma}_{2}$.
###### Remark 2.2.
The condition (2.10) is required in our proof to control the terms caused by
the strong coupling between the hydrodynamic motion and the magnetic fields,
see (3) below on the estimates of $S^{\epsilon}_{4}(t)$, where
$\rho^{\epsilon},{\bf u}^{\epsilon}$ and ${\bf H}^{\epsilon}$ are involved.
Notice that such condition is not needed in the proof of the inviscid and
incompressible limit for the compressible Navier-Stokes equations, see [16].
Thus it shows the presence of magnetic effect for our problem.
###### Remark 2.3.
Compared with the results obtained in [9] on the periodic case where the
convergence of compressible MHD equations to partial viscous MHD equations
(shear viscosity coefficient is zero but magnetic diffusion coefficient is a
positive constant) is rigorously proved, here we can allow the magnetic
diffusion coefficient also goes to zero due to the dispersive property of the
acoustic wave in the whole space.
###### Remark 2.4.
Our arguments in this paper can be applied, after slight modifications, to the
case
$\mu^{\epsilon}\rightarrow\mu^{0},\quad\nu^{\epsilon}\rightarrow\nu^{0}\quad\text{as}\quad\epsilon\rightarrow
0,$
where $\mu^{0}$ and $\nu^{0}$ are nonnegative constants satisfying (1)
$\mu_{0}>0,\nu^{0}=0$, or (2) $\mu_{0}=0,\nu^{0}>0$ or (3)
$\mu_{0}>0,\nu^{0}>0$. Hence we can also obtain the convergence of
compressible MHD equations (1.8)-(1.10) to the incompressible MHD equations
$\displaystyle\partial_{t}{\bf u}+({\bf u}\cdot\nabla){\bf u}-({{\bf
H}}\cdot\nabla){{\bf H}}-\mu^{0}\Delta{\bf u}+\nabla
p+\frac{1}{2}\nabla(|{{\bf H}}|^{2})=0,$ $\displaystyle\partial_{t}{{\bf
H}}+({{\bf u}}\cdot\nabla){{\bf H}}-({{\bf H}}\cdot\nabla){{\bf
u}}-\nu^{0}\Delta{\bf H}=0,$ $\displaystyle{\rm div}{\bf u}=0,\quad{\rm
div}{\bf H}=0.$
We omit the details for conciseness.
## 3\. Proof of Theorem 2.3
In this section, we shall prove our convergence results by combining the
modulated energy method with the Strichartz’s estimate of linear wave
equations and employing the weak convergence method.
###### Proof of Theorem 2.3.
We divide the proof into several steps.
_Step 1: Basic energy estimates and compact arguments._
By the assumption on the initial data we obtain, from the energy inequality
(2.6), that the total energy $\mathcal{E}^{\epsilon}(t)$ has a uniform upper
bound for a.e. $t\in[0,T],T>0$. This uniform bound implies that
$\rho^{\epsilon}|{\bf u}^{\epsilon}|^{2}$ and
$\big{(}(\rho^{\epsilon})^{\gamma}-1-\gamma(\rho^{\epsilon}-1)\big{)}/{\epsilon^{2}}$
are bounded in $L^{\infty}(0,T;L^{1})$ and ${\bf H}^{\epsilon}$ is bounded in
$L^{\infty}(0,T;L^{2})$. By an analysis similar to that used in [14], we find
$\int_{\mathbb{R}^{d}}\frac{1}{\epsilon^{2}}|\rho^{\epsilon}-1|^{2}1_{\\{|\rho^{\epsilon}-1|\leq\frac{1}{2}\\}}+\int_{\mathbb{R}^{d}}\frac{1}{\epsilon^{2}}|\rho^{\epsilon}-1|^{\gamma}1_{\\{|\rho^{\epsilon}-1|\geq\frac{1}{2}\\}}\leq
C,$ (3.1)
which implies that
$\rho^{\epsilon}\rightarrow 1\ \ \text{strongly in}\ \
C([0,T],L^{\gamma}_{2}).$ (3.2)
By the results of [2], we have the following estimate on ${\bf u}^{\epsilon}$,
$\displaystyle\|{\bf u}^{\epsilon}\|^{2}_{L^{2}}\leq
C+C\epsilon^{4/d}\|\nabla{\bf u}^{\epsilon}\|^{2}_{L^{2}},\quad d=2,3.$ (3.3)
Furthermore, the fact that $\rho^{\epsilon}|{\bf u}^{\epsilon}|^{2}$ and
$|{\bf H}^{\epsilon}|^{2}$ are bounded in $L^{\infty}(0,T;L^{1})$ implies the
following convergence (up to the extraction of a subsequence $\epsilon_{n}$):
$\displaystyle\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}\ \text{converges
weakly-$\ast$}\,\,\text{to
some}\,\mathbf{J}\,\,\text{in}\,L^{\infty}(0,T;L^{2}),$ $\displaystyle{\bf
H}^{\epsilon}\ \text{converges weakly-$\ast$}\,\,\text{to
some}\,\mathbf{K}\,\,\text{in}\,L^{\infty}(0,T;L^{2}).$
Our main task in this section is to show that $\mathbf{J}={\bf u}$ and
$\mathbf{K}={\bf H}$ in some sense, where $({\bf u},{\bf H})$ is the strong
solution to the ideal incompressible MHD equations (1.12)-(1.14).
_Step 2: Description and cancelation of the oscillations._
In order to describe the oscillations caused by the initial data, we use the
ideas introduced in [18, 16] and the dispersion property of the linear wave
equations [2, 16].
We project the momentum equation (1.9) on the “gradient vector-fields” to find
$\displaystyle\partial_{t}Q(\rho^{\epsilon}{\bf u}^{\epsilon})+Q[{\rm
div}(\rho^{\epsilon}{\bf u}^{\epsilon}\otimes{\bf
u}^{\epsilon})]-(2\mu^{\epsilon}+\lambda^{\epsilon})\nabla{\rm div}{\bf
u}^{\epsilon}+\frac{1}{2}\nabla(|{{\bf H}}^{\epsilon}|^{2})$
$\displaystyle\quad-Q[({{\bf H}}^{\epsilon}\cdot\nabla){{\bf
H}}^{\epsilon}]+\frac{a}{\epsilon^{2}}\nabla\big{(}(\rho^{\epsilon})^{\gamma}-1-\gamma(\rho^{\epsilon}-1)\big{)}+\frac{a\gamma}{\epsilon^{2}}\nabla(\rho^{\epsilon}-1)=0.$
(3.4)
Below we assume $a\gamma=1$ for simplicity, otherwise we can change
$\varphi^{\epsilon}$ to $\varphi^{\epsilon}/({a\gamma}).$ Noticing
$\rho^{\epsilon}=1+\epsilon\varphi^{\epsilon}$, we can write (1.8) and (3) as
$\displaystyle\epsilon\partial_{t}\varphi^{\epsilon}+{\rm
div}Q(\rho^{\epsilon}{\bf u}^{\epsilon})=0,$
$\displaystyle\epsilon\partial_{t}Q(\rho^{\epsilon}{\bf
u}^{\epsilon})+\nabla\varphi^{\epsilon}=\epsilon\mathbf{F}^{\epsilon},$
where $\mathbf{F}^{\epsilon}$ is given by
$\displaystyle\mathbf{F}^{\epsilon}=$ $\displaystyle-Q[{\rm
div}(\rho^{\epsilon}{\bf u}^{\epsilon}\otimes{\bf
u}^{\epsilon})]+(2\mu^{\epsilon}+\lambda^{\epsilon})\nabla{\rm div}{\bf
u}^{\epsilon}-\frac{1}{2}\nabla(|{{\bf H}}^{\epsilon}|^{2})$
$\displaystyle+Q[({{\bf H}}^{\epsilon}\cdot\nabla){{\bf
H}}^{\epsilon}]-\frac{1}{\gamma\epsilon^{2}}\nabla\big{(}(\rho^{\epsilon})^{\gamma}-1-\gamma(\rho^{\epsilon}-1)\big{)}.$
Therefore, we introduce the following group defined by
$\mathcal{L}(\tau)=e^{\tau L}$, $\tau\in\mathbb{R}$, where $L$ is the operator
defined on $\mathcal{D}^{\prime}\times(\mathcal{D}^{\prime})^{d}$ by
$L\Big{(}\begin{array}[]{c}\phi\\\
\mathbf{v}\end{array}\Big{)}=\Big{(}\begin{array}[]{c}-\text{div}\mathbf{v}\\\
-\nabla\phi\end{array}\Big{)}.$
Then, it is easy to check that $e^{\tau L}$ is an isometry on each
$H^{r}\times(H^{r})^{d}$ for all $r\in\mathbb{R}$ and for all
$\tau\in\mathbb{R}$. Denoting
$\Big{(}\begin{array}[]{c}\bar{\phi}(\tau)\\\
\bar{\mathbf{v}}(\tau)\end{array}\Big{)}=e^{\tau
L}\Big{(}\begin{array}[]{c}\phi\\\ \mathbf{v}\end{array}\Big{)},$
then we have
$\frac{\partial\bar{\phi}}{\partial\tau}=-\text{div}\bar{\mathbf{v}},\quad\frac{\partial\bar{\mathbf{v}}}{\partial\tau}=-\nabla\bar{\phi}.$
Thus, $\frac{\partial^{2}\bar{\phi}}{\partial\tau^{2}}-\Delta\bar{\phi}=0$.
Let $(\phi^{\epsilon},\mathbf{g}^{\epsilon}=\nabla q^{\epsilon})$ be the
solution of the following system
$\displaystyle\frac{\partial\phi^{\epsilon}}{\partial
t}=-\frac{1}{\epsilon}\text{div}{\bf
g}^{\epsilon},\qquad\phi^{\epsilon}|_{t=0}=\Pi^{\epsilon}_{0},$ (3.5)
$\displaystyle\frac{\partial{\bf g}^{\epsilon}}{\partial
t}=-\frac{1}{\epsilon}\nabla\phi^{\epsilon},\qquad\ \ \ {\bf
g}^{\epsilon}|_{t=0}=Q(\sqrt{\rho^{\epsilon}_{0}}{\bf u}^{\epsilon}_{0}).$
(3.6)
Here we have used $Q(\sqrt{\rho^{\epsilon}_{0}}{\bf u}^{\epsilon}_{0})$ as an
approximation of $Q({\rho^{\epsilon}_{0}}{\bf u}^{\epsilon}_{0})$ since
$\|Q({\rho^{\epsilon}_{0}}{\bf
u}^{\epsilon}_{0})-Q(\sqrt{\rho^{\epsilon}_{0}}{\bf
u}^{\epsilon}_{0})\|_{L^{1}}\rightarrow 0$ as $\epsilon\rightarrow 0$.
Our main idea is to use $\phi^{\epsilon}$ and ${\bf g}^{\epsilon}$ as test
functions and plug them into the total energy $\mathcal{E}^{\epsilon}(t)$ to
cancel the oscillations. We introduce the following regularization for the
initial data, $\Pi^{\epsilon,\delta}_{0}=\Pi^{\epsilon}_{0}\ast\chi^{\delta}$,
$Q^{\delta}(\sqrt{\rho^{\epsilon}_{0}}{\bf
u}^{\epsilon}_{0})=Q(\sqrt{\rho^{\epsilon}_{0}}{\bf
u}^{\epsilon}_{0})\ast\chi^{\delta}$, and denote by
$(\phi^{\epsilon,\delta},{\bf g}^{\epsilon,\delta}=\nabla
q^{\epsilon,\delta})$ the corresponding solution to the equations (3.5)-(3.6)
with initial data $\phi^{\epsilon,\delta}|_{t=0}=\Pi^{\epsilon,\delta}_{0}$,
${\bf
g}^{\epsilon,\delta}|_{t=0}=Q^{\delta}(\sqrt{\rho^{\epsilon}_{0}}u^{\epsilon}_{0})$.
Here $\chi\in C^{\infty}_{0}(\mathbb{R}^{d})$ is the Friedrich’s mollifier,
i.e., $\int_{\mathbb{R}^{d}}\chi=1$ and
$\chi^{\delta}(x)=(1/\delta^{d})\chi(x/\delta)$. Since the equations
(3.5)-(3.6) are linear, it is easy to verify that
$\phi^{\epsilon,\delta}=\phi^{\epsilon}\ast\chi^{\delta},{\bf
g}^{\epsilon,\delta}={\bf g}^{\epsilon}\ast\chi^{\delta}$.
Using the Strichartz estimate of the linear wave equations [2], we have
$\bigg{\|}\Big{(}\begin{array}[]{c}\phi^{\epsilon,\delta}\\\ \nabla
q^{\epsilon,\delta}\end{array}\Big{)}\bigg{\|}_{L^{l}(\mathbb{R},W^{s,r})}\leq
C\epsilon^{1/l}\bigg{\|}\bigg{(}\begin{array}[]{c}\Pi^{\epsilon,\delta}_{0}\\\
Q^{\delta}(\sqrt{\rho^{\epsilon}_{0}}{\bf
u}^{\epsilon}_{0})\end{array}\Big{)}\bigg{\|}_{H^{s+\sigma}}$ (3.7)
for all $l,r>2$ and $\sigma>0$ such that
$\frac{2}{r}=(d-1)\Big{(}\frac{1}{2}-\frac{1}{l}\Big{)},\qquad\sigma=\frac{d+1}{d-1}.$
The estimate (3.7) implies that for arbitrary but fixed $\delta$ and for all
$s\in\mathbb{R}$, we have
$\phi^{\epsilon,\delta},\,\,{\bf g}^{\epsilon,\delta}\rightarrow
0\quad\text{in}\ \ L^{l}(\mathbb{R},W^{s,r})\ \ \text{as}\ \
\epsilon\rightarrow 0.$ (3.8)
_Step 3: The modulated energy functional and uniform estimates._
We first recall the energy inequality of the compressible MHD equations
(1.8)-(1.10) for almost all $t$,
$\displaystyle\frac{1}{2}\int_{\mathbb{R}^{d}}\Big{[}\rho^{\epsilon}(t)|{\bf
u}^{\epsilon}|^{2}(t)+|{\bf
H}^{\epsilon}|^{2}(t)+(\Pi^{\epsilon}(t))^{2}\Big{]}+\mu^{\epsilon}\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}|\nabla{\bf
u}^{\epsilon}|^{2}$ $\displaystyle\ \
+(\mu^{\epsilon}+\lambda^{\epsilon})\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}|{\rm
div}{\bf
u}^{\epsilon}|^{2}+\nu^{\epsilon}\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}|\nabla{\bf
H}^{\epsilon}|^{2}$ $\displaystyle\ \ \
\leq\frac{1}{2}\int_{\mathbb{R}^{d}}\Big{[}\rho_{0}^{\epsilon}|{\bf
u}_{0}^{\epsilon}|^{2}+|{\bf
H}_{0}^{\epsilon}|^{2}+(\Pi^{\epsilon}_{0})^{2}\Big{]}.$ (3.9)
The conservation of energy for the ideal incompressible MHD equations
(1.12)-(1.14) reads
$\frac{1}{2}\int_{\mathbb{R}^{d}}\big{[}|{\bf u}|^{2}(t)+|{\bf
H}|^{2}(t)\big{]}=\frac{1}{2}\int_{\mathbb{R}^{d}}\big{[}|{\bf
u}_{0}|^{2}+|{\bf H}_{0}|^{2}\big{]}.$ (3.10)
From the system (3.5)-(3.6) we get that, for all $t$,
$\frac{1}{2}\int_{\mathbb{R}^{d}}\Big{[}|\phi^{\epsilon,\delta}|^{2}+|{\bf
g}^{\epsilon,\delta}|^{2}\Big{]}(t)=\frac{1}{2}\int_{\mathbb{R}^{d}}\Big{[}|\phi^{\epsilon,\delta}|^{2}+|{\bf
g}^{\epsilon,\delta}|^{2}\Big{]}(0).$ (3.11)
Using $\phi^{\epsilon,\delta}$ as a test function and noticing
$\rho^{\epsilon}=1+\epsilon\varphi^{\epsilon}$, we obtain the following weak
formulation of the continuity equation (1.8)
$\int_{\mathbb{R}^{d}}\phi^{\epsilon,\delta}(t)\varphi^{\epsilon}(t)+\frac{1}{\epsilon}\int^{t}_{0}\int_{\mathbb{R}^{d}}\big{[}\text{div}(\nabla
q^{\epsilon,\delta})\varphi^{\epsilon}+\mbox{div}(\rho^{\epsilon}{\bf
u}^{\epsilon})\phi^{\epsilon,\delta}\big{]}=\int_{\mathbb{R}^{d}}\phi^{\epsilon,\delta}(0)\varphi^{\epsilon}_{0}.$
(3.12)
We use ${\bf u}$ and ${\bf g}^{\epsilon,\delta}=\nabla q^{\epsilon,\delta}$ as
test functions to the momentum equation (1.9) respectively to deduce
$\displaystyle\int_{\mathbb{R}^{d}}(\rho^{\epsilon}{\bf u}^{\epsilon}\cdot{\bf
u})(t)+\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\big{[}({\bf u}\cdot\nabla){\bf u}-({\bf H}\cdot\nabla){\bf
H}+\nabla p+\frac{1}{2}\nabla(|{\bf H}|^{2})\big{]}$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}\big{[}(\rho^{\epsilon}{\bf
u}^{\epsilon}\otimes{\bf u}^{\epsilon})\cdot\nabla{\bf u}+({\bf
H}^{\epsilon}\cdot\nabla){\bf H}^{\epsilon}\cdot{\bf
u}-\mu^{\epsilon}\nabla{\bf u}^{\epsilon}\cdot\nabla{\bf
u}\big{]}=\int_{\mathbb{R}^{d}}\rho^{\epsilon}_{0}{\bf
u}^{\epsilon}_{0}\cdot{\bf u}_{0},$ (3.13)
and
$\displaystyle\int_{\mathbb{R}^{d}}(\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\nabla
q^{\epsilon,\delta})(t)+\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\Big{(}\frac{1}{\epsilon}\nabla\phi^{\epsilon,\delta}\Big{)}-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}{\bf
u}^{\epsilon}\otimes{\bf u}^{\epsilon})\cdot\nabla{\bf g}^{\epsilon,\delta}$
$\displaystyle\quad+\int^{t}_{0}\int_{\mathbb{R}^{d}}\big{[}\mu^{\epsilon}\nabla{\bf
u}^{\epsilon}\cdot\nabla{\bf
g}^{\epsilon,\delta}+(\mu^{\epsilon}+\lambda^{\epsilon})\text{div}{\bf
u}^{\epsilon}\text{div}\,{\bf g}^{\epsilon,\delta}\big{]}$
$\displaystyle\quad-\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf
H}^{\epsilon}\cdot\nabla){\bf H}^{\epsilon}\cdot{\bf
g}^{\epsilon,\delta}-\int^{t}_{0}\int_{\mathbb{R}^{d}}\frac{1}{2}|{\bf
H}^{\epsilon}|^{2}{\rm div}{\bf g}^{\epsilon,\delta}$
$\displaystyle\quad-\int^{t}_{0}\int_{\mathbb{R}^{d}}\Big{(}\frac{1}{\epsilon}\varphi^{\epsilon}+\frac{\gamma-1}{2}(\Pi^{\epsilon})^{2}\Big{)}\text{div}\,{\bf
g}^{\epsilon,\delta}=\int_{\mathbb{R}^{d}}\rho^{\epsilon}_{0}{\bf
u}^{\epsilon}_{0}\cdot{\bf g}^{\epsilon,\delta}(0).$ (3.14)
Similarly, using ${\bf H}$ as a test function to the magnetic field equation
(1.10), we get
$\displaystyle\int_{\mathbb{R}^{d}}({\bf H}^{\epsilon}\cdot{\bf
H})(t)+\int^{t}_{0}\int_{\mathbb{R}^{d}}{\bf H}^{\epsilon}\cdot\big{[}({\bf
u}\cdot\nabla){\bf H}-({\bf H}\cdot\nabla){\bf
u}\big{]}+\nu^{\epsilon}\int^{t}_{0}\int_{\mathbb{R}^{d}}\nabla{\bf
H}^{\epsilon}\cdot\nabla{\bf H}$
$\displaystyle\quad+\int^{t}_{0}\int_{\mathbb{R}^{d}}\big{[}({\rm div}{{\bf
u}}^{\epsilon}){{\bf H}}^{\epsilon}+({{\bf u}}^{\epsilon}\cdot\nabla){{\bf
H}}^{\epsilon}-({{\bf H}}^{\epsilon}\cdot\nabla){{\bf
u}}^{\epsilon}\big{]}\cdot{\bf H}=\int_{\mathbb{R}^{d}}{\bf
H}^{\epsilon}_{0}\cdot{\bf H}_{0}.$ (3.15)
Summing up (3), (3.10) and (3.11), and inserting (3.12)-(3) into the resulting
inequality, we can deduce the following inequality by a straightforward
computation
$\displaystyle\frac{1}{2}\int_{\mathbb{R}^{d}}\Big{\\{}|\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}-{\bf u}-{\bf g}^{\epsilon,\delta}|^{2}(t)+|{\bf
H}^{\epsilon}-{\bf
H}|^{2}(t)+(\Pi^{\epsilon}-\phi^{\epsilon,\delta})^{2}(t)\Big{\\}}$
$\displaystyle\quad+{\mu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\nabla{\bf
u}^{\epsilon}|^{2}+(\mu^{\epsilon}+\lambda^{\epsilon})\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}|{\rm
div}{\bf
u}^{\epsilon}|^{2}+{\nu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\nabla{\bf
H}^{\epsilon}|^{2}$
$\displaystyle\leq\frac{1}{2}\int_{\mathbb{R}^{d}}\Big{\\{}|\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}-{\bf u}-{\bf g}^{\epsilon,\delta}|^{2}(0)+|{\bf
H}^{\epsilon}-{\bf
H}|^{2}(0)+(\Pi^{\epsilon}_{0}-\phi^{\epsilon,\delta}(0))^{2}\Big{\\}}$
$\displaystyle\quad+{\mu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}\nabla{\bf
u}^{\epsilon}\cdot\nabla{\bf
u}+{\nu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}\nabla{\bf
H}^{\epsilon}\cdot\nabla{\bf
H}+R_{1}^{\epsilon}(t)+R_{2}^{\epsilon}(t)+R_{3}^{\epsilon,\delta}(t),$ (3.16)
where
$\displaystyle R_{1}^{\epsilon}(t)=$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot[({\bf H}\cdot\nabla){\bf
H}]-\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf H}^{\epsilon}\cdot\nabla){\bf
H}^{\epsilon}\cdot{\bf u}$
$\displaystyle+\int^{t}_{0}\int_{\mathbb{R}^{d}}{\bf
H}^{\epsilon}\cdot\big{[}({\bf u}\cdot\nabla){\bf H}-({\bf H}\cdot\nabla){\bf
u}\big{]}+\frac{1}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\nabla(|{\bf H}|^{2})$
$\displaystyle+\int^{t}_{0}\int_{\mathbb{R}^{d}}\big{[}({\rm div}{{\bf
u}}^{\epsilon}){{\bf H}}^{\epsilon}+({{\bf u}}^{\epsilon}\cdot\nabla){{\bf
H}}^{\epsilon}-({{\bf H}}^{\epsilon}\cdot\nabla){{\bf
u}}^{\epsilon}\big{]}\cdot{\bf H},$ $\displaystyle R_{2}^{\epsilon}(t)=$
$\displaystyle\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\big{[}({\bf u}\cdot\nabla){\bf u}+\nabla
p\big{]}-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}{\bf
u}^{\epsilon}\otimes{\bf u}^{\epsilon})\cdot\nabla u,$ $\displaystyle
R_{3}^{\epsilon,\delta}(t)=$
$\displaystyle\int_{\mathbb{R}^{d}}\sqrt{\rho^{\epsilon}}-1)\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}\cdot{\bf
g}^{\epsilon,\delta}(t)-\int_{\mathbb{R}^{d}}\sqrt{\rho^{\epsilon}}-1)\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}\cdot{\bf g}^{\epsilon,\delta}(0)$
$\displaystyle\int_{\mathbb{R}^{d}}(\sqrt{\rho^{\epsilon}}-1)\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}\cdot{\bf
g}^{\epsilon,\delta}(t)-\int_{\mathbb{R}^{d}}\big{[}(\Pi^{\epsilon}-\varphi^{\epsilon})\phi^{\epsilon,\delta}\big{]}(t)$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}{\bf
u}^{\epsilon}\otimes{\bf u}^{\epsilon})\cdot\nabla{\bf
g}^{\epsilon,\delta}+\int^{t}_{0}\int_{\mathbb{R}^{d}}\big{[}\mu^{\epsilon}\nabla{\bf
u}^{\epsilon}\cdot\nabla{\bf
g}^{\epsilon,\delta}+(\mu^{\epsilon}+\lambda^{\epsilon})\text{div}{\bf
u}^{\epsilon}\text{div}{\bf g}^{\epsilon,\delta}\big{]}$
$\displaystyle-\frac{\gamma-1}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}(\Pi^{\epsilon})^{2}\text{div}{\bf
g}^{\epsilon,\delta}-\int_{\mathbb{R}^{d}}(\sqrt{\rho^{\epsilon}}-1)\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}\cdot{\bf g}^{\epsilon,\delta}(0)$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}\frac{1}{2}|{\bf
H}^{\epsilon}|^{2}{\rm div}{\bf
g}^{\epsilon,\delta}-\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf
H}^{\epsilon}\cdot\nabla){\bf H}^{\epsilon}\cdot{\bf
g}^{\epsilon,\delta}+\int_{\mathbb{R}^{d}}(\Pi^{\epsilon}_{0}-\varphi^{\epsilon}_{0})\phi^{\epsilon,\delta}(0).$
We first deal with $R_{1}^{\epsilon}(t)$ and $R_{2}^{\epsilon}(t)$ on the
right-hand side of the inequality (3). Denoting
$\mathbf{w}^{\epsilon,\delta}=\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}-{\bf
u}-{\bf g}^{\epsilon,\delta}$, $\mathbf{Z}^{\epsilon}={\bf H}^{\epsilon}-{\bf
H}$, integrating by parts and making use of the facts that ${\rm div}\,{\bf
H}^{\epsilon}=0,{\rm div}\,{\bf u}=0,$ and ${\rm div}\,{\bf H}=0$, we obtain
that
$\displaystyle R_{1}^{\epsilon}(t)=$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot[({\bf H}\cdot\nabla){\bf
H}]+\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf H}^{\epsilon}\cdot\nabla){\bf
u}\cdot{\bf H}^{\epsilon}$
$\displaystyle+\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf u}\cdot\nabla){\bf
H}\cdot{\bf H}^{\epsilon}-\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf
H}\cdot\nabla){\bf u}\cdot{\bf H}^{\epsilon}$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}({{\bf
u}}^{\epsilon}\cdot\nabla){\bf H}\cdot{{\bf
H}}^{\epsilon}+\int^{t}_{0}\int_{\mathbb{R}^{d}}({{\bf
H}}^{\epsilon}\cdot\nabla){\bf H}\cdot{{\bf
u}}^{\epsilon}+\frac{1}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\nabla(|{\bf H}|^{2})$ $\displaystyle=$
$\displaystyle\int^{t}_{0}\int_{\mathbb{R}^{d}}(1-\rho^{\epsilon}){\bf
u}^{\epsilon}\cdot[({\bf H}\cdot\nabla){\bf
H}]+\int^{t}_{0}\int_{\mathbb{R}^{d}}[({\bf H}^{\epsilon}-{\bf
H})\cdot\nabla]{\bf u}\cdot({\bf H}^{\epsilon}-{\bf H})$
$\displaystyle+\int^{t}_{0}\int_{\mathbb{R}^{d}}[({\bf H}^{\epsilon}-{\bf
H})\cdot\nabla]{\bf H}\cdot({\bf u}^{\epsilon}-{\bf
u})-\int^{t}_{0}\int_{\mathbb{R}^{d}}[({\bf u}^{\epsilon}-{\bf
u})\cdot\nabla]{\bf H}\cdot({\bf H}^{\epsilon}-{\bf H})$
$\displaystyle+\frac{1}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}-1){{\bf
u}}^{\epsilon}\nabla(|{\bf H}|^{2})$ $\displaystyle\leq$
$\displaystyle\int^{t}_{0}\int_{\mathbb{R}^{d}}(1-\rho^{\epsilon}){\bf
u}^{\epsilon}\cdot[({\bf H}\cdot\nabla){\bf
H})]+\int^{t}_{0}\|\mathbf{Z}^{\epsilon}(s)\|^{2}_{L^{2}}\|\nabla{\bf
u}(s)\|_{L^{\infty}}ds$
$\displaystyle+\int^{t}_{0}\big{[}\|\mathbf{w}^{\epsilon,\delta}(s)\|^{2}_{L^{2}}+\|\mathbf{Z}^{\epsilon}(s)\|^{2}_{L^{2}}\big{]}\|\nabla{\bf
H}(s)\|_{L^{\infty}}ds$
$\displaystyle+\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}(\mathbf{Z}^{\epsilon}\cdot\nabla){\bf
H}\cdot[(1-\sqrt{\rho^{\epsilon}}){\bf u}^{\epsilon}+{\bf
g}^{\epsilon,\delta}]$
$\displaystyle-\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}\big{\\{}[(1-\sqrt{\rho^{\epsilon}}){\bf
u}^{\epsilon}+{\bf g}^{\epsilon,\delta}]\cdot\nabla\big{\\}}{\bf
H}\cdot\mathbf{Z}^{\epsilon}+\frac{1}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}-1){{\bf
u}}^{\epsilon}\nabla(|{\bf H}|^{2})$ (3.17)
and
$\displaystyle R_{2}^{\epsilon}(t)=$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\mathbf{w}^{\epsilon,\delta}\otimes\mathbf{w}^{\epsilon,\delta})\cdot\nabla{\bf
u}+\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}-\sqrt{\rho^{\epsilon}}){\bf
u}^{\epsilon}\cdot(({\bf u}\cdot\nabla){\bf u})$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf
g}^{\epsilon,\delta}\cdot\nabla){\bf
u}\cdot\mathbf{w}^{\epsilon,\delta}+\int^{t}_{0}\int_{\mathbb{R}^{d}}[(\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}-{\bf u})\cdot\nabla]{\bf u}\cdot{\bf g}^{\epsilon,\delta}$
$\displaystyle+\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\nabla
p-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}-{\bf u})\cdot\nabla\big{(}\frac{|{\bf u}|^{2}}{2}\big{)}.$
(3.18)
Substituting (3) and (3) into the inequality (3), we conclude
$\displaystyle\|\mathbf{w}^{\epsilon,\delta}(t)\|^{2}_{L^{2}}+\|\mathbf{Z}^{\epsilon}(t)\|^{2}_{L^{2}}+\|\Pi^{\epsilon}(t)-\phi^{\epsilon,\delta}(t)\|^{2}_{L^{2}}+2{\mu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\nabla{\bf
u}^{\epsilon}|^{2}$
$\displaystyle+2(\mu^{\epsilon}+\lambda^{\epsilon})\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}|{\rm
div}{\bf
u}^{\epsilon}|^{2}+2{\nu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\nabla{\bf
H}^{\epsilon}|^{2}$ $\displaystyle\leq$
$\displaystyle\|\mathbf{w}^{\epsilon,\delta}(0)\|^{2}_{L^{2}}+\|\mathbf{Z}^{\epsilon}(0)\|^{2}_{L^{2}}+\|\Pi^{\epsilon}_{0}-\phi^{\epsilon,\delta}(0)\|^{2}_{L^{2}}$
$\displaystyle+2C\int^{t}_{0}\big{(}\|\mathbf{w}^{\epsilon,\delta}(s)\|^{2}_{L^{2}}+\|\mathbf{Z}^{\epsilon}(s)\|^{2}_{L^{2}}\big{)}\big{(}\|\nabla{\bf
u}(s)\|_{L^{\infty}}+\|\nabla{\bf H}(s)\|_{L^{\infty}}\big{)}ds$
$\displaystyle+2S^{\epsilon,\delta}_{1}(t)+2\sum_{i=2}^{7}S^{\epsilon}_{2}(t),$
(3.19)
where
$\displaystyle S^{\epsilon,\delta}_{1}(t)=$ $\displaystyle
R^{\epsilon,\delta}_{1}(t)+\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}(\mathbf{Z}^{\epsilon}\cdot\nabla){\bf
H}\cdot{\bf g}^{\epsilon,\delta}-\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}({\bf
g}^{\epsilon,\delta}\cdot\nabla){\bf H}\cdot\mathbf{Z}^{\epsilon}$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}({\bf
g}^{\epsilon,\delta}\cdot\nabla){\bf
u}\cdot\mathbf{w}^{\epsilon,\delta}+\int^{t}_{0}\int_{\mathbb{R}^{d}}[(\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}-{\bf u})\cdot\nabla]{\bf u}\cdot{\bf g}^{\epsilon,\delta},$
$\displaystyle S^{\epsilon}_{2}(t)=$
$\displaystyle{\mu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}\nabla{\bf
u}^{\epsilon}\cdot\nabla{\bf
u}+{\nu^{\epsilon}}\int^{t}_{0}\int_{\mathbb{R}^{d}}\nabla{\bf
H}^{\epsilon}\cdot\nabla{\bf H},$ $\displaystyle S^{\epsilon}_{3}(t)=$
$\displaystyle\int_{\mathbb{R}^{d}}\big{[}(\sqrt{\rho^{\epsilon}}-1)\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}\cdot{\bf
u}\big{]}(t)-\int_{\mathbb{R}^{d}}\big{[}(\sqrt{\rho^{\epsilon}}-1)\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}\cdot{\bf u}\big{]}(0)$
$\displaystyle+\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}-\sqrt{\rho^{\epsilon}}){\bf
u}^{\epsilon}\cdot(({\bf u}\cdot\nabla){\bf u}),$ $\displaystyle
S^{\epsilon}_{4}(t)=$
$\displaystyle\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}(\mathbf{Z}^{\epsilon}\cdot\nabla){\bf
H}\cdot[(1-\sqrt{\rho^{\epsilon}}){\bf
u}^{\epsilon}]-\int^{t}_{0}\\!\int_{\mathbb{R}^{d}}\big{\\{}[(1-\sqrt{\rho^{\epsilon}}){\bf
u}^{\epsilon}]\cdot\nabla\big{\\}}{\bf H}\cdot\mathbf{Z}^{\epsilon},$
$\displaystyle S^{\epsilon}_{5}(t)=$
$\displaystyle\int^{t}_{0}\int_{\mathbb{R}^{d}}(1-\rho^{\epsilon}){\bf
u}^{\epsilon}\cdot[({\bf H}\cdot\nabla){\bf
H}]+\frac{1}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}-1){\bf
u}^{\epsilon}\cdot\nabla(|{\bf H}|^{2}),$ $\displaystyle S^{\epsilon}_{6}(t)=$
$\displaystyle\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\nabla p,$ $\displaystyle S^{\epsilon}_{7}(t)=$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}-{\bf u})\cdot\nabla\big{(}\frac{|{\bf u}|^{2}}{2}\big{)}.$
_Step 4: Convergence of the modulated energy functional._
To show the convergence of the modulated energy functional and to finish our
proof, we have to estimate the remainders $S^{\epsilon,\delta}_{1}(t)$ and
$S^{\epsilon}_{i}(t),i=2,\dots,7$. We remark that the terms
$S^{\epsilon}_{3}(t)$, $S^{\epsilon}_{6}(t)$ and $S^{\epsilon}_{7}(t)$, in which
the magnetic field does not appear, are estimated in a similar way as in
[16].
First in view of (3.1) and the following two elementary inequalities
$\displaystyle|\sqrt{x}-1|^{2}\leq
M|x-1|^{\gamma},\;\;\;|x-1|\geq\delta,\;\;\gamma\geq 1,$ (3.20)
$\displaystyle|\sqrt{x}-1|^{2}\leq M|x-1|^{2},\;\;\;x\geq 0$ (3.21)
for some positive constant $M$ and $0<\delta<1$, we obtain
$\displaystyle\int_{\mathbb{R}^{d}}|\sqrt{\rho^{\epsilon}}-1|^{2}=$
$\displaystyle\int_{|\rho^{\epsilon}-1|\leq
1/2}|\sqrt{\rho^{\epsilon}}-1|^{2}+\int_{|\rho^{\epsilon}-1|\geq
1/2}|\sqrt{\rho^{\epsilon}}-1|^{2}$ $\displaystyle\leq$ $\displaystyle
M\int_{|\rho^{\epsilon}-1|\leq
1/2}|\rho^{\epsilon}-1|^{2}+M\int_{|\rho^{\epsilon}-1|\geq
1/2}|\rho^{\epsilon}-1|^{\gamma}$ $\displaystyle\leq$ $\displaystyle
M\epsilon^{2}.$ (3.22)
Then, by using Hölder's inequality, the estimates (3.22) and (3.7), the
assumption on the initial data, the strong convergence of $\rho^{\epsilon}$,
the estimates on ${\bf u}^{\epsilon}$, $\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}$ and ${\bf H}^{\epsilon}$, and the regularities of ${\bf u}$ and
${\bf H}$, we can show that $S^{\epsilon,\delta}_{1}(t)$ converges to $0$ for
all fixed $\delta$ and almost all $t$, uniformly in $t$, as $\epsilon$ goes
to $0$. Similar arguments can be found in [2] and [8].
Next, we begin to estimate the terms $S^{\epsilon}_{i}(t),i=2,\dots,7$. For
the term $S^{\epsilon}_{2}(t)$, by Young’s inequality and the regularity of
${\bf u}$ and ${\bf H}$, we have
$\displaystyle|S^{\epsilon}_{2}(t)|\leq\frac{\mu^{\epsilon}}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\nabla{\bf
u}^{\epsilon}|^{2}+\frac{\nu^{\epsilon}}{2}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\nabla{\bf
H}^{\epsilon}|^{2}+C_{T}\mu^{\epsilon}+C_{T}\nu^{\epsilon}.$ (3.23)
For the term $S^{\epsilon}_{3}(t)$, by Hölder’s inequality, the estimate
(3.22), the assumption on the initial data, the estimate on
$\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}$, and the regularity of ${\bf u}$,
we infer that
$\displaystyle|S^{\epsilon}_{3}(t)|\leq$ $\displaystyle C\epsilon+\|{\bf
u}(t)\|_{L^{\infty}}\Big{(}\int_{\mathbb{R}^{d}}|\sqrt{\rho^{\epsilon}}-1|^{2}\Big{)}^{\frac{1}{2}}\Big{(}\int_{\mathbb{R}^{d}}\rho^{\epsilon}|{\bf
u}^{\epsilon}|^{2}\Big{)}^{\frac{1}{2}}$ $\displaystyle+\|[({\bf
u}\cdot\nabla){\bf
u}](t)\|_{L^{\infty}}\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\sqrt{\rho^{\epsilon}}-1|^{2}\Big{)}^{\frac{1}{2}}\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}|{\bf
u}^{\epsilon}|^{2}\Big{)}^{\frac{1}{2}}$ $\displaystyle\leq$ $\displaystyle
C_{T}\epsilon.$ (3.24)
For the term $S^{\epsilon}_{4}(t)$, making use of the basic inequality (2.6),
the regularity of ${\bf H}$, and Hölder’s inequality, we get
$\displaystyle|S^{\epsilon}_{4}(t)|\leq$ $\displaystyle(\|[({\bf
H}\cdot\nabla){\bf H}](t)\|_{L^{\infty}}+\|\nabla{\bf
H}(t)\|_{L^{\infty}}\cdot\|{\bf H}(t)\|_{L^{\infty}})$
$\displaystyle\times\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\sqrt{\rho^{\epsilon}}-1|^{2}\Big{)}^{\frac{1}{2}}\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}|{\bf
u}^{\epsilon}|^{2}\Big{)}^{\frac{1}{2}}$ $\displaystyle+\|\nabla{\bf
H}(t)\|_{L^{\infty}}\int^{t}_{0}\Big{[}\Big{(}\int_{\mathbb{R}^{d}}|\sqrt{\rho^{\epsilon}}-1|^{2}\Big{)}^{\frac{1}{2}}\|{\bf
u}^{\epsilon}(\tau)\|_{L^{6}}\|{\bf H}^{\epsilon}(\tau)\|_{L^{3}}\Big{]}d\tau$
$\displaystyle\leq$ $\displaystyle
C_{T}\epsilon\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}|{\bf
u}^{\epsilon}|^{2}\Big{)}^{\frac{1}{2}}+C_{T}\epsilon\int^{t}_{0}\|{\bf
u}^{\epsilon}(\tau)\|_{L^{6}}\|{\bf H}^{\epsilon}(\tau)\|_{L^{3}}d\tau\equiv
Y^{\epsilon}(t).$ (3.25)
Note that there is no uniform estimate on $\|{\bf
u}^{\epsilon}\|_{L^{2}(0,T;L^{2})}$ in $Y^{\epsilon}(t)$ (see the estimate
(3.3)). We have to deal with $Y^{\epsilon}(t)$ in dimensions two and three
separately. In the case of $d=2$, using Sobolev’s imbedding, the basic
inequality (2.6), the assumption (2.10), and the inequality (3.3), we obtain
$\displaystyle Y^{\epsilon}(t)\leq$ $\displaystyle
C_{T}\epsilon\big{[}1+\epsilon(\mu^{\epsilon})^{-\frac{1}{2}}\big{]}+C_{T}\epsilon\Big{(}\int^{t}_{0}\|{\bf
u}^{\epsilon}(\tau)\|^{2}_{H^{1}(\mathbb{R}^{2})}d\tau\Big{)}^{\frac{1}{2}}\Big{(}\int^{t}_{0}\|{\bf
H}^{\epsilon}(\tau)\|^{2}_{H^{1}(\mathbb{R}^{2})}d\tau\Big{)}^{\frac{1}{2}}$
$\displaystyle=$ $\displaystyle
C_{T}\epsilon\big{[}1+\epsilon(\mu^{\epsilon})^{-\frac{1}{2}}\big{]}+C_{T}\epsilon(\|{\bf
u}^{\epsilon}\|_{L^{2}(0,T;L^{2}(\mathbb{R}^{2}))}+\|\nabla{\bf
u}^{\epsilon}\|_{L^{2}(0,T;L^{2}(\mathbb{R}^{2}))})$
$\displaystyle\quad\times(\|{\bf
H}^{\epsilon}\|_{L^{2}(0,T;L^{2}(\mathbb{R}^{2}))}+\|\nabla{\bf
H}^{\epsilon}\|_{L^{2}(0,T;L^{2}(\mathbb{R}^{2}))})$ $\displaystyle\leq$
$\displaystyle
C_{T}\epsilon+C_{T}\epsilon\big{[}1+(1+\epsilon)(\mu^{\epsilon})^{-\frac{1}{2}}\big{]}\cdot\big{[}1+(\nu^{\epsilon})^{-\frac{1}{2}}\big{]}$
$\displaystyle\leq$ $\displaystyle C_{T}\epsilon+C_{T}\epsilon^{\sigma}\leq
C_{T}\epsilon^{\sigma},$ (3.26)
where $\sigma=1-(\alpha+\beta)/2>0$ due to $0<\alpha+\beta<2$. For $d=3$,
using Sobolev’s imbedding, the basic inequality (2.6), the assumption (2.10),
the inequality (3.3), and $\|{\bf u}^{\epsilon}\|_{L^{6}(\mathbb{R}^{3})}\leq
C\|\nabla{\bf u}^{\epsilon}\|_{L^{2}(\mathbb{R}^{3})}$, we obtain
$\displaystyle Y^{\epsilon}(t)\leq$ $\displaystyle
C_{T}\epsilon\big{[}1+\epsilon^{\frac{2}{3}}(\mu^{\epsilon})^{-\frac{1}{2}}\big{]}$
$\displaystyle+C_{T}\epsilon\Big{(}\int^{t}_{0}\|\nabla{\bf
u}^{\epsilon}(\tau)\|^{2}_{L^{2}(\mathbb{R}^{3})}d\tau\Big{)}^{\frac{1}{2}}\Big{(}\int^{t}_{0}\|{\bf
H}^{\epsilon}(\tau)\|^{2}_{H^{1}(\mathbb{R}^{3})}d\tau\Big{)}^{\frac{1}{2}}$
$\displaystyle=$ $\displaystyle
C_{T}\epsilon\big{[}1+\epsilon^{\frac{2}{3}}(\mu^{\epsilon})^{-\frac{1}{2}}\big{]}$
$\displaystyle+C_{T}\epsilon\|\nabla{\bf
u}^{\epsilon}\|_{L^{2}(0,T;L^{2}(\mathbb{R}^{3}))}(\|{\bf
H}^{\epsilon}\|_{L^{2}(0,T;L^{2}(\mathbb{R}^{3}))}+\|\nabla{\bf
H}^{\epsilon}\|_{L^{2}(0,T;L^{2}(\mathbb{R}^{3}))})$ $\displaystyle\leq$
$\displaystyle
C_{T}\epsilon+C_{T}\epsilon^{\frac{5}{3}}(\mu^{\epsilon})^{-\frac{1}{2}}+C_{T}\epsilon(\mu^{\epsilon})^{-\frac{1}{2}}\big{(}1+(\nu^{\epsilon})^{-\frac{1}{2}}\big{)}$
$\displaystyle\leq$ $\displaystyle
C_{T}\Big{(}\epsilon+\epsilon^{\frac{5}{3}-\frac{\alpha}{2}}+\epsilon^{1-\frac{\alpha+\beta}{2}}\Big{)}\leq
C_{T}\epsilon^{\sigma},$ (3.27)
where $\sigma=1-(\alpha+\beta)/2>0$.
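For orientation, the exponent bookkeeping behind $\sigma$ in (3.26) and (3.27) amounts to the following elementary computation; it uses the scalings $\mu^{\epsilon}\sim\epsilon^{\alpha}$ and $\nu^{\epsilon}\sim\epsilon^{\beta}$ that are implicit in the assumption (2.10) (not restated in this section):
$$\epsilon\,(\mu^{\epsilon})^{-\frac{1}{2}}(\nu^{\epsilon})^{-\frac{1}{2}}\;\sim\;\epsilon\cdot\epsilon^{-\frac{\alpha}{2}}\cdot\epsilon^{-\frac{\beta}{2}}\;=\;\epsilon^{1-\frac{\alpha+\beta}{2}}\;=\;\epsilon^{\sigma}\;\longrightarrow\;0\quad\text{as }\epsilon\rightarrow 0,$$
precisely because $0<\alpha+\beta<2$; the remaining terms above carry at least as large a power of $\epsilon$ (for $\alpha,\beta\geq 0$) and are therefore absorbed into $C_{T}\epsilon^{\sigma}$.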
For the term $S^{\epsilon}_{5}(t)$, one can utilize the inequality (3.22), the
inequality (3.3), the estimate on $\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}$,
the regularity of ${\bf H}$, and
$\rho^{\epsilon}-1=\rho^{\epsilon}-\sqrt{\rho^{\epsilon}}+\sqrt{\rho^{\epsilon}}-1$
to deduce
$\displaystyle|S^{\epsilon}_{5}(t)|\leq$ $\displaystyle(\|[({\bf
H}\cdot\nabla){\bf H}](t)\|_{L^{\infty}}+\|\nabla(|{\bf
H}|^{2})\|_{L^{\infty}})\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}|\sqrt{\rho^{\epsilon}}-1|^{2}\Big{)}^{\frac{1}{2}}$
$\displaystyle\times\bigg{[}\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}|{\bf
u}^{\epsilon}|^{2}\Big{)}^{\frac{1}{2}}+\Big{(}\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}|{\bf
u}^{\epsilon}|^{2}\Big{)}^{\frac{1}{2}}\bigg{]}$ $\displaystyle\leq$
$\displaystyle\left\\{\begin{array}[]{ll}C_{T}\epsilon\big{[}1+\epsilon(\mu^{\epsilon})^{-\frac{1}{2}}\big{]}=C_{T}\epsilon+C_{T}\epsilon^{2-\frac{\alpha}{2}}\leq
C_{T}\epsilon,&{d=2;}\\\
C_{T}\epsilon\big{[}1+\epsilon^{\frac{2}{3}}(\mu^{\epsilon})^{-\frac{1}{2}}\big{]}=C_{T}\epsilon+C_{T}\epsilon^{\frac{5}{3}-\frac{\alpha}{2}},&{d=3.}\end{array}\right.$
(3.30)
Here $2-{\alpha}/{2}>1$ and $5/3-{\alpha}/{2}>0$ since $0<\alpha<2$.
Using (2.1), (3.1) and (3.2), the term $S^{\epsilon}_{6}(t)$ can be bounded as
follows.
$\displaystyle|S^{\epsilon}_{6}(t)|=$
$\displaystyle\Big{|}\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\nabla p\Big{|}$ $\displaystyle=$
$\displaystyle\Big{|}\int_{\mathbb{R}^{d}}\big{\\{}((\rho^{\epsilon}-1)p)(t)-((\rho^{\epsilon}-1)p)(0)\big{\\}}-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}-1)\partial_{t}p\Big{|}$
$\displaystyle\leq$ $\displaystyle\Big{(}\int_{|\rho^{\epsilon}-1|\leq
1/2}|\rho^{\epsilon}-1|^{2}\Big{)}^{\frac{1}{2}}\Big{[}\Big{(}\int_{\mathbb{R}^{d}}|p(t)|^{2}\Big{)}^{\frac{1}{2}}+\Big{(}\int_{\mathbb{R}^{d}}|p(0)|^{2}\Big{)}^{\frac{1}{2}}\Big{]}$
$\displaystyle+\Big{(}\int_{|\rho^{\epsilon}-1|\geq
1/2}|\rho^{\epsilon}-1|^{\gamma}\Big{)}^{\frac{1}{\gamma}}\Big{[}\Big{(}\int_{\mathbb{R}^{d}}|p(t)|^{\frac{\gamma}{\gamma-1}}\Big{)}^{\frac{\gamma-1}{\gamma}}+\Big{(}\int_{\mathbb{R}^{d}}|p(0)|^{\frac{\gamma}{\gamma-1}}\Big{)}^{\frac{\gamma-1}{\gamma}}\Big{]}$
$\displaystyle+\int^{t}_{0}\Big{(}\int_{|\rho^{\epsilon}-1|\leq
1/2}|\rho^{\epsilon}-1|^{2}\Big{)}^{\frac{1}{2}}\Big{(}\int_{\mathbb{R}^{d}}|\partial_{t}p(t)|^{2}\Big{)}^{\frac{1}{2}}$
$\displaystyle+\int^{t}_{0}\Big{(}\int_{|\rho^{\epsilon}-1|\geq
1/2}|\rho^{\epsilon}-1|^{\gamma}\Big{)}^{{\frac{1}{\gamma}}}\Big{(}\int_{\mathbb{R}^{d}}|\partial_{t}p(t)|^{\frac{\gamma}{\gamma-1}}\Big{)}^{{\frac{\gamma-1}{\gamma}}}$
$\displaystyle\leq$ $\displaystyle C_{T}(\epsilon+\epsilon^{2/\kappa})\leq
C_{T}\epsilon,$ (3.31)
where $\kappa=\min\\{2,\gamma\\}$ and we have used the conditions $s>d/2+2$
and $\gamma>1$.
Finally, to estimate the term $S^{\epsilon}_{7}(t)$, we rewrite it as
$\displaystyle S^{\epsilon}_{7}(t)=$
$\displaystyle-\int^{t}_{0}\int_{\mathbb{R}^{d}}(\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}-{\bf u})\cdot\nabla\big{(}\frac{|{\bf u}|^{2}}{2}\big{)}$
$\displaystyle=$
$\displaystyle\int^{t}_{0}\int_{\mathbb{R}^{d}}\sqrt{\rho^{\epsilon}}(\sqrt{\rho^{\epsilon}}-1){\bf
u}^{\epsilon}\cdot\nabla\big{(}\frac{|{\bf
u}|^{2}}{2}\big{)}-\int^{t}_{0}\int_{\mathbb{R}^{d}}\rho^{\epsilon}{\bf
u}^{\epsilon}\cdot\nabla\big{(}\frac{|{\bf u}|^{2}}{2}\big{)}$
$\displaystyle=$ $\displaystyle S^{\epsilon}_{71}(t)+S^{\epsilon}_{72}(t),$
(3.32)
where
$\displaystyle S^{\epsilon}_{71}(t)$
$\displaystyle=\int^{t}_{0}\int_{\mathbb{R}^{d}}\sqrt{\rho^{\epsilon}}(\sqrt{\rho^{\epsilon}}-1){\bf
u}^{\epsilon}\cdot\nabla\big{(}\frac{|{\bf u}|^{2}}{2}\big{)},$ $\displaystyle
S^{\epsilon}_{72}(t)$
$\displaystyle=\int^{t}_{0}\int_{\mathbb{R}^{d}}(\rho^{\epsilon}-1)\partial_{t}\big{(}\frac{|{\bf
u}|^{2}}{2}\big{)}-\int_{\mathbb{R}^{d}}\Big{[}\Big{(}(\rho^{\epsilon}-1)\big{(}\frac{|{\bf
u}|^{2}}{2}\big{)}\Big{)}(t)-\Big{(}(\rho^{\epsilon}-1)\big{(}\frac{|{\bf
u}|^{2}}{2}\big{)}\Big{)}(0)\Big{]}.$
Applying arguments similar to those used for $S^{\epsilon}_{3}(t)$ and
$S^{\epsilon}_{6}(t)$, we arrive at the following bound:
$\displaystyle|S^{\epsilon}_{7}(t)|\leq|S^{\epsilon}_{71}(t)|+|S^{\epsilon}_{72}(t)|\leq
C_{T}\epsilon.$ (3.33)
Thus, by collecting all the estimates above and applying Gronwall's inequality,
we deduce that, for almost all $t\in(0,T)$,
$\displaystyle\|\mathbf{w}^{\epsilon,\delta}(t)\|^{2}_{L^{2}}+\|\mathbf{Z}^{\epsilon}(t)\|^{2}_{L^{2}}+\|\Pi^{\epsilon}(t)-\phi^{\epsilon,\delta}(t)\|^{2}_{L^{2}}$
$\displaystyle\leq$
$\displaystyle\bar{C}\Big{[}\|\mathbf{w}^{\epsilon,\delta}(0)\|^{2}_{L^{2}}+\|\mathbf{Z}^{\epsilon}(0)\|^{2}_{L^{2}}+\|\Pi^{\epsilon}_{0}-\phi^{\epsilon,\delta}(0)\|^{2}_{L^{2}}+C_{T}\epsilon^{\sigma}+\sup_{0\leq
s\leq t}S^{\epsilon,\delta}_{1}(s)\Big{]},$ (3.34)
where
$\bar{C}=\exp{\Big{\\{}C\int^{T}_{0}\big{[}\|\nabla{\bf
u}(s)\|_{L^{\infty}}+\|\nabla{\bf
H}(s)\|_{L^{\infty}}\big{]}ds\Big{\\}}}<+\infty.$
Noticing that the projection $P$ is a bounded linear mapping from $L^{2}$ to
$L^{2}$, we obtain that
$\displaystyle\sup_{0\leq t\leq T}\|P(\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon})-{\bf u}\|_{L^{2}}$ $\displaystyle=\sup_{0\leq t\leq
T}\Big{\|}P\Big{(}\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}-{\bf u}-{\bf
g}^{\epsilon,\delta}\Big{)}\Big{\|}_{L^{2}}$ $\displaystyle\leq\sup_{0\leq
t\leq T}\|\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}-{\bf u}-{\bf
g}^{\epsilon,\delta}\|_{L^{2}}.$ (3.35)
Thanks to (3), we further obtain that
$\displaystyle\overline{\lim}_{\epsilon\rightarrow
0}\|P(\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon})-{\bf
u}\|_{L^{\infty}(0,T;L^{2})}$ $\displaystyle\leq$ $\displaystyle
C\bar{C}\lim_{\delta\rightarrow 0}\Big{[}\|\mathbf{J}_{0}-{\bf
u}_{0}-Q(\mathbf{J}_{0})\ast\chi_{\delta}\|^{2}_{L^{2}}+\|\varphi_{0}-\varphi_{0}\ast\chi_{\delta}\|^{2}_{L^{2}}\Big{]}=0.$
The uniform convergence (in $t$) of $P(\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon})$ to ${\bf u}$ in $L^{2}(\mathbb{R}^{d})$ is proved.
Now we claim that $\mathbf{J}={\bf u}$ and $\mathbf{K}={\bf H}$. Since ${\bf
g}^{\epsilon,\delta}$ tends weakly-$\ast$ to zero by (3.7),
$\mathbf{w}^{\epsilon,\delta}$ tends weakly-$\ast$ to $\mathbf{J}-{\bf u}$. It
is easy to see that $\mathbf{Z}^{\epsilon}$ tends weakly-$\ast$ to
$\mathbf{K}-{\bf H}$. Hence by (3), we obtain that
$\displaystyle\|\mathbf{J}-{\bf
u}\|^{2}_{L^{\infty}(0,T;L^{2})}+\|\mathbf{K}-{\bf
H}\|^{2}_{L^{\infty}(0,T;L^{2})}$ $\displaystyle\quad\leq
C\bar{C}\Big{[}\|\mathbf{J}_{0}-{\bf
u}_{0}-Q(\mathbf{J}_{0})\ast\chi_{\delta}\|^{2}_{L^{2}}+\|\varphi_{0}-\varphi_{0}\ast\chi_{\delta}\|^{2}_{L^{2}}\Big{]},$
(3.36)
where we have used the fact that
$\|\text{weak-}\ast\text{--}\lim\theta^{\epsilon}\|_{L^{\infty}(0,T;L^{2})}\leq\varlimsup_{\epsilon\rightarrow
0}\|\theta^{\epsilon}\|_{L^{\infty}(0,T;L^{2})}$ for any $\theta^{\epsilon}\in
L^{\infty}(0,T;L^{2})$. Thus we deduce $\mathbf{J}={\bf u}$ and
$\mathbf{K}={\bf H}$ in $L^{\infty}(0,T;L^{2})$ by letting $\delta$ go to $0$.
Finally, we show the local strong convergence of $\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}$ to ${\bf u}$ in $L^{r}(0,T;L^{2}(\Omega))$ for all $1\leq
r<+\infty$ on any bounded domain $\Omega\subset\mathbb{R}^{d}$. In fact, for
all $t\in[0,T]$, we have
$\displaystyle\|\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}-{\bf
u}\|_{L^{2}(\Omega)}$ $\displaystyle\leq
C(\Omega)\|\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}-{\bf u}-{\bf
g}^{\epsilon,\delta}\|_{L^{2}(\Omega)}+C(\Omega)\|{\bf
g}^{\epsilon,\delta}\|_{L^{2}(\Omega)}$ $\displaystyle\leq
C(\Omega)\|\sqrt{\rho^{\epsilon}}{\bf u}^{\epsilon}-{\bf u}-{\bf
g}^{\epsilon,\delta}\|_{L^{2}(\Omega)}+C(\Omega)\|{\bf
g}^{\epsilon,\delta}\|_{L^{q}(\Omega)}$
for any $q>2$. Using the estimate (3.8), we can take the limit in $\epsilon$
and then in $\delta$, as above, to deduce that $\sqrt{\rho^{\epsilon}}{\bf
u}^{\epsilon}$ converges to ${\bf u}$ in $L^{r}(0,T;L^{2}(\Omega))$. This
completes the proof. ∎
Acknowledgement: The authors are very grateful to the referees for their
constructive comments and helpful suggestions, which considerably improved the
earlier version of this paper. This work was partially done while Fucai Li was
visiting the Institute of Applied Physics and Computational Mathematics; he
would like to thank the institute for its hospitality. Jiang was supported by the
National Basic Research Program (Grant No. 2005CB321700) and NSFC (Grant No.
40890154). Ju was supported by NSFC (Grant No. 10701011). Li was supported by
NSFC (Grant No. 10971094).
## References
* [1] Y. Brenier, Convergence of the Vlasov-Poisson system to the incompressible Euler equations, Comm. Partial Differential Equations, 25 (2000), pp. 737–754.
* [2] B. Desjardins and E. Grenier, Low Mach number limit of viscous compressible flows in the whole space, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci., 455 (1999), pp. 2271–2279.
* [3] J. I. Díaz and M. B. Lerena, On the inviscid and non-resistive limit for the equations of incompressible magnetohydrodynamics, Math. Models Meth. Appl. Sci., 12 (2002), pp. 1401–1419.
* [4] J. S. Fan and W. H. Yu, Strong solution to the compressible MHD equations with vacuum, Nonlinear Anal. Real World Appl., 10 (2009), pp. 392–409.
* [5] J. S. Fan and W. H. Yu, Global variational solutions to the compressible MHD equations, Nonlinear Anal., 69 (2008), pp. 3637–3660.
* [6] X. P. Hu and D. H. Wang, Global existence and large-time behavior of solutions to the three-dimensional equations of compressible magnetohydrodynamic flows, Arch. Ration. Mech. Anal., 197 (2010), pp. 203–238.
* [7] X. P. Hu and D. H. Wang, Global solutions to the three-dimensional full compressible magnetohydrodynamic flows, Comm. Math. Phys., 283 (2008), pp. 255–284.
* [8] X. P. Hu and D. H. Wang, Low Mach number limit of viscous compressible magnetohydrodynamic flows, SIAM J. Math. Anal., 41 (2009), pp. 1272–1294.
* [9] S. Jiang, Q. C. Ju and F. C. Li, Incompressible limit of the compressible Magnetohydrodynamic equations with periodic boundary conditions, Comm. Math. Phys., 297 (2010), pp. 371–400.
* [10] S. Kawashima, Smooth global solutions for two-dimensional equations of electromagnetofluid dynamics, Japan J. Appl. Math., 1 (1984), pp. 207–222.
* [11] A. G. Kulikovskiy and G. A. Lyubimov, Magnetohydrodynamics, Addison-Wesley, Reading, Massachusetts, 1965.
* [12] L. D. Landau and E. M. Lifshitz, Electrodynamics of Continuous Media, 2nd ed., Pergamon, New York, 1984.
* [13] F.C. Li and H. J. Yu, Optimal decay rate of classical solution to the compressible magnetohydrodynamic equations, Proc. Roy. Soc. Edinburgh Sect. A, (2010), in press.
* [14] P.-L. Lions and N. Masmoudi, Incompressible limit for a viscous compressible fluid, J. Math. Pures Appl., 77 (1998), pp. 585–627.
* [15] N. Masmoudi, Ekman layers of rotating fluids, the case of general initial data, Comm. Pure Appl. Math., 53 (2000), pp. 432–483.
* [16] N. Masmoudi, Incompressible, inviscid limit of the compressible Navier-Stokes system, Ann. Inst. H. Poincaré Anal. Non Linéaire, 18 (2001), pp. 199–224.
* [17] R. V. Polovin and V. P. Demutskii, Fundamentals of Magnetohydrodynamics, Consultants, Bureau, New York, 1990.
* [18] S. Schochet, Fast singular limits of hyperbolic PDEs, J. Differential Equations, 114 (1994), pp. 476–512.
* [19] T. Umeda, S. Kawashima and Y. Shizuta, On the decay of solutions to the linearized equations of electromagnetofluid dynamics, Japan J. Appl. Math., 1 (1984), pp. 435–457.
* [20] A. I. Vol’pert and S. I. Khudiaev, On the Cauchy problem for composite systems of nonlinear equations, Mat. Sb., 87 (1972), pp. 504–528.
* [21] J. W. Zhang, S. Jiang and F. Xie, Global weak solutions of an initial boundary value problem for screw pinches in plasma physics, Math. Models Methods Appl. Sci., 19 (2009), pp. 833–875.
Łukasz Jeż
# A $\frac{4}{3}$-competitive randomized algorithm for online scheduling of
packets with agreeable deadlines
Ł. Jeż University of Wrocław, Institute of Computer Science
lje@cs.uni.wroc.pl http://www.ii.uni.wroc.pl/ lje/
###### Abstract.
In 2005 Li et al. gave a $\phi$-competitive deterministic online algorithm for
scheduling of packets with agreeable deadlines [12] with a very interesting
analysis. This is known to be optimal due to a lower bound by Hajek [7]. We
claim that the algorithm by Li et al. can be slightly simplified, while
retaining its competitive ratio. Then we introduce randomness to the modified
algorithm and argue that the competitive ratio against oblivious adversary is
at most $\frac{4}{3}$. Note that this still leaves a gap, as the best known
lower bound, due to Chin et al. [5], is $\frac{5}{4}$ for randomized
algorithms against an oblivious adversary.
###### Key words and phrases:
online algorithms, scheduling, buffer management
###### 1991 Mathematics Subject Classification:
F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms
and Problems
This work has been supported by MNiSW grants no. N N206 1723 33, 2007–2010 and
N N206 490638, 2010–2011
## 1\. Introduction
We consider the problem of buffer management with bounded delay (aka _packet
scheduling_), introduced by Kesselman et al. [11]. It models the behaviour of
a single network switch. We assume that time is slotted and divided into
steps. At the beginning of a time step, any number of packets may arrive at a
switch and are stored in its buffer. A packet has a positive weight and a
deadline, which is the number of the step right before which the packet expires:
unless it has already been transmitted, it is removed from the buffer at the
very beginning of that step and thus can no longer be transmitted. Only one
packet can be transmitted in a single step. The goal is to maximize the
_weighted throughput_, i.e., the total weight of transmitted packets.
As the process of managing the packet queue is inherently a real-time task, we
investigate the online variant of the problem. This means that the algorithm
has to base its decision on which packet to transmit solely on the packets
that have already arrived at the switch, without any knowledge of the future.
### 1.1. Competitive Analysis.
To measure the performance of an online algorithm, we use a standard notion of
competitive analysis [3], which compares the gain of the algorithm to the gain
of the optimal solution on the same input sequence. For any algorithm Alg, we
denote its gain on the input sequence $I$ by $\mathcal{G}_{\mathrm{ALG}}(I)$;
we denote the optimal offline algorithm by Opt. We say that a deterministic
algorithm Alg is $\mathcal{R}$-competitive if on any input sequence $I$, it
holds that
$\mathcal{G}_{\mathrm{ALG}}(I)\geq\frac{1}{\mathcal{R}}\cdot\mathcal{G}_{\mathrm{OPT}}(I)$.
When analysing the performance of an online algorithm Alg, we view the process
as a game between Alg and an adversary. The adversary controls the packets’
injection into the buffer and chooses which of them to send. The goal is then
to show that the adversary’s gain is at most $\mathcal{R}$ times Alg’s gain.
If the algorithm is randomized, we consider its expected gain,
$\mathbf{E}[\mathcal{G}_{\mathrm{ALG}}(I)]$, where the expectation is taken
over all possible random choices made by Alg. However, in the randomized case,
the power of the adversary has to be further specified. Following Ben-David et
al. [1], we distinguish between an oblivious and adaptive-online adversary
(called adaptive for short). An oblivious adversary has to construct the whole
input sequence in advance, not knowing the random bits used by an algorithm.
The expected gain of Alg is compared to the gain of the optimal offline
solution on $I$. An adaptive adversary decides packet injections upon seeing
which packets are transmitted by the algorithm. However, it has to provide an
answering entity Adv, which creates a solution on-line (in parallel to Alg)
and cannot change it afterwards. We say that Alg is $\mathcal{R}$-competitive
against an adaptive adversary if for any input sequence $I$ created adaptively
and any answering algorithm Adv, it holds that
$\mathbf{E}[\mathcal{G}_{\mathrm{ALG}}(I)]\geq\frac{1}{\mathcal{R}}\cdot\mathbf{E}[\mathcal{G}_{\mathrm{ADV}}(I)]$.
We note that Adv is (wlog) deterministic, but as Alg is randomized, so is the
input sequence $I$.
In the literature on online algorithms (see e.g. [3]), the definition of the
competitive ratio sometimes allows an additive constant, i.e., a deterministic
algorithm is $\mathcal{R}$-competitive if there exists a constant $\alpha\geq
0$ such that
$\mathcal{G}_{\mathrm{ALG}}(I)\geq\frac{1}{\mathcal{R}}\cdot\mathcal{G}_{\mathrm{OPT}}(I)-\alpha$
holds for every input sequence $I$. An analogous definition applies to the
randomized case. Our upper bounds hold for $\alpha=0$.
### 1.2. Previous work
The best known deterministic and randomized algorithms for general instances
have competitive ratios at most $2\sqrt{2}-1\approx 1.828$ [6] and
$e/(e-1)\approx 1.582$ [4], respectively. A recent analysis of the latter
algorithm shows that it retains its competitive ratio even against adaptive-
online adversary [8].
The best known lower bounds on competitive ratio against either adversary type
use rather restricted _$2$-bounded_ sequences in which every packet has
lifespan (deadline $-$ release time) either $1$ or $2$. The lower bounds in
question are $\phi\approx 1.618$ for deterministic algorithms [7],
$\frac{4}{3}$ for randomized algorithms against adaptive adversary [2], and
$\frac{5}{4}$ for randomized algorithms against oblivious adversary [5]. All
these bounds are tight for $2$-bounded sequences [11, 2, 4].
We restrict ourselves to sequences with _agreeable deadlines_, in which
packets released later have deadlines at least as large as those released
before ($r_{i}<r_{j}$ implies $d_{i}\leq d_{j}$). These strictly generalize
the $2$-bounded sequences. Sequences with agreeable deadlines also properly
contain _s-uniform_ sequences for all $s$, i.e., sequences in which every
packet has lifespan exactly $s$. An optimal $\phi$-competitive deterministic
algorithm for sequences with agreeable deadlines is known [12].
Jeżabek studied the impact of _resource augmentation_ on the deterministic
competitive ratio [10, 9]. It turns out that while allowing the deterministic
algorithm to transmit $k$ packets in a single step for any constant $k$ cannot
make it $1$-competitive (compared to the single-speed offline optimum) on
unrestricted sequences [10], $k=2$ is sufficient for sequences with agreeable
deadlines [9].
### 1.3. Our contribution
Motivated by aforementioned results for sequences with agreeable deadlines, we
investigate randomized algorithms for such instances. We devise a
$\frac{4}{3}$-competitive randomized algorithm against oblivious adversary.
The algorithm and its analysis are inspired by those by Li et al. [12] for
deterministic case. The key insight is as follows. The algorithm MG by Li et
al. [12] can be simplified by making it always send either $e$, the heaviest
among the earliest non-dominated packets, or $h$, the earliest among the
heaviest non-dominated packets. We call this algorithm $\textsc{MG}^{\prime}$,
and prove that it remains $\phi$-competitive. Then we turn it into a
randomized algorithm RG, simply by making it always transmit $e$ with
probability $\frac{w_{e}}{w_{h}}$ and $h$ with the remaining probability. The
proof of RG’s $\frac{4}{3}$-competitiveness against oblivious adversary
follows by similar analysis.
## 2\. Preliminaries
We denote the release time, weight, and deadline of a packet $j$ by $r_{j}$,
$w_{j}$, and $d_{j}$, respectively. A packet $j$ is _pending_ at step $t$ if
$r_{j}\leq t$, it has not yet been transmitted, and $d_{j}>t$. We introduce a
linear order $\unlhd$ on the packets as follows: $i\unlhd j$ if either
$\displaystyle d_{i}<d_{j}\enspace\text{, or}$ $\displaystyle
d_{i}=d_{j}\enspace\text{and }w_{i}>w_{j}\enspace\text{, or}$ $\displaystyle
d_{i}=d_{j}\enspace\text{and }w_{i}=w_{j}\enspace\text{and }r_{i}\leq
r_{j}\enspace.$
To make $\unlhd$ truly linear we assume that in every single step the packets
are released one after another rather than all at once, e.g. that they have
unique fractional release times.
A _schedule_ is a mapping from time steps to packets to be transmitted in
those time steps. A schedule is _feasible_ if it is injective and for every
time step $t$ the packet that $t$ maps to is pending at $t$. It is convenient
to view a feasible schedule $S$ differently, for example as the set
$\\{S(t)\colon t>0\\}$, the sequence $S(1),S(2),\ldots$, or a matching in the
_schedulability graph_. The schedulability graph is a bipartite graph, one of
whose partition classes is the set of packets and the other is the set of time
steps. Each packet $j$ is connected precisely to each of the time steps $t$
such that $r_{j}\leq t<d_{j}$ by an edge of weight $w_{j}$; an example is
given in Figure 1. Observe that optimal offline schedules correspond to
maximum weight matchings in the schedulability graph. Thus an optimal offline
schedule can be found in polynomial time using the classic “Hungarian
algorithm”, see for example [13]. One may have to remove appropriately chosen
time step vertices first, so that the remaining ones match the number of
packet vertices, though.
Figure 1. Schedulability graph for packets $j_{1},j_{2},\ldots,j_{5}$, whose
release times and deadlines are (2,3), (2,4), (3,7), (4,7), (6,7)
respectively; we ignore packet weights in the figure. Packets are represented
by discs, time steps by squares.
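For illustration, the following Python sketch (ours, not part of the paper) computes an optimal offline schedule along these lines; it assumes packets are given as `(release, deadline, weight)` triples and delegates the maximum weight matching to SciPy's `linear_sum_assignment` rather than a hand-coded Hungarian algorithm. Since that routine accepts rectangular matrices, no time step vertices need to be removed: forbidden packet/slot pairs simply receive gain $0$, which leaves the optimal value unchanged (any such pair in the output contributes nothing and can be dropped).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def offline_optimum(packets):
    """Optimal offline schedule via a maximum-weight matching in the schedulability graph.

    `packets` is a list of (release, deadline, weight) triples; a packet may be
    transmitted in any step t with release <= t < deadline (weights are positive).
    Returns (total transmitted weight, list of (step, packet)).
    """
    if not packets:
        return 0.0, []
    first = min(r for r, d, w in packets)
    last = max(d for r, d, w in packets)        # the last usable step is `last - 1`
    slots = range(first, last)
    gain = np.zeros((len(packets), len(slots)))
    for i, (r, d, w) in enumerate(packets):
        for j, t in enumerate(slots):
            if r <= t < d:
                gain[i, j] = w                  # an edge of the schedulability graph
    rows, cols = linear_sum_assignment(gain, maximize=True)
    schedule = [(first + j, packets[i]) for i, j in zip(rows, cols) if gain[i, j] > 0]
    return float(gain[rows, cols].sum()), schedule
```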
Given any linear order $\preceq$ on packets and a (feasible) schedule $S$, we
say that $S$ is consistent with $\preceq$, or that $S$ is a
$\preceq$-schedule, if for every $t$ the packet $S(t)$ is the minimum pending
packet with respect to $\preceq$. It is fairly easy to observe that if
$\preceq$ is any _earliest deadline first_ , with ties broken in an arbitrary
way, then any feasible schedule can be turned to a unique $\preceq$-schedule
by reordering its packets; in particular this applies to $\unlhd$.
Recall that the _oblivious_ adversary prepares the whole input sequence in
advance and cannot alter it later on. Thus its solution is simply the offline
optimal schedule for the complete sequence. Nevertheless, we still refer to
the answering entity Adv rather than Opt in our analysis, as it involves
altering the set of packets pending for the adversary, which may well be
viewed as altering the input sequence. Now we introduce two schedules that are
crucial for our algorithms and our analyses.
###### Definition 2.1.
The _oblivious schedule_ at time step $t$, denoted $O_{t}$, is any fixed
optimal feasible $\unlhd$-schedule over all the packets pending at step $t$.
For fixed $O_{t}$, a packet $j$ pending at $t$ is called _dominated_ if
$j\notin O_{t}$, and _non-dominated_ otherwise. For fixed $O_{t}$ let $e$
denote $O_{t}(t)$, the $\unlhd$-minimal of all non-dominated packets, and $h$
denote the $\unlhd$-minimal of all non-dominated maximum-weight packets.
Note that both the adversary and the algorithm can calculate their oblivious
schedules at any step, and that these will coincide if their buffers are the
same.
###### Definition 2.2.
For a fixed input sequence, the _clairvoyant schedule_ at time step $t$,
denoted $C_{t}$, is any fixed optimal feasible schedule over all the packets
pending at step $t$ and all the packets that will arrive in the future.
Naturally, the adversary can calculate the clairvoyant schedule, as it knows
the fixed input sequence, while the algorithm cannot, since it only knows the
part of input revealed so far. However, the oblivious schedule gives some
partial information about the clairvoyant schedule: intuitively, if $p$ is
dominated at $t$, it makes no sense to transmit it at $t$. Formally, (wlog)
dominated packets are not included in the clairvoyant schedule, as stated in
the following.
###### Fact 1.
For any fixed input sequence, time step $t$, and oblivious schedule $O_{t}$,
there is a clairvoyant schedule $C^{*}_{t}$ such that $C^{*}_{t}\cap\\{j\colon
r_{j}\leq t\\}\subseteq O_{t}$.
###### Proof 2.3.
This is a standard alternating path argument about matchings. If you are
unfamiliar with these concepts, refer to a book by A. Schrijver [13] for
example.
Let $O_{t}$ be the oblivious schedule and $C_{t}$ be any clairvoyant schedule.
Treat both as matchings in the schedulability graph and consider their
symmetric difference $C_{t}\oplus O_{t}$. Consider any job $j\in
C_{t}\setminus O_{t}$ such that $r_{j}\leq t$. It is an endpoint of an
alternating path $P$ in $C_{t}\oplus O_{t}$. Note that all the jobs on $P$ are
already pending at time $t$: this is certainly true about $j$, and all the
successive jobs belong to $O_{t}$, so they are pending as well.
First we prove that $P$ has even length, i.e., it ends in a node corresponding
to a job. Assume for contradiction that $P$’s length is odd, and that $P$ ends
in a node corresponding to a timestep $t^{\prime}$. Note that no job is
assigned to $t^{\prime}$ in $O_{t}$. Then $O_{t}\oplus P$ is a feasible
schedule that, treated as a set, satisfies $O_{t}\subseteq O_{t}\oplus P$ and
$j\in O_{t}\oplus P$. This contradicts optimality of $O_{t}$. See Figure 2a
for illustration.
Thus $P$ has even length and ends with a job $j^{\prime}\in O_{t}\setminus
C_{t}$. By optimality of both $O_{t}$ and $C_{t}$, $w_{j}=w_{j^{\prime}}$
holds. Thus $C_{t}\oplus P$ is an optimal feasible schedule: in terms of sets
the only difference between $C_{t}$ and $C_{t}\oplus P$ is that $j$ has been
replaced by $j^{\prime}$, a job of the same weight. See Figure 2b for
illustration.
(a) $P$ cannot have odd length: in such case the assignment of jobs on $P$ in
$O_{t}$ could be changed to match the strictly better assignment of $C_{t}$.
(b) $P$ has even length: now the assignment of jobs on $P$ in $C_{t}$ can be
changed to match the assignment of $O_{t}$ so that the value of $\Delta$
drops.
Figure 2. The alternating path $P$. Packets are represented by discs, time
steps by squares. Dashed lines represent $C_{t}$, solid lines represent
$O_{t}$.
Applying such changes iteratively transforms $C_{t}$ to a clairvoyant schedule
$C^{*}_{t}$ as announced. To observe that a finite number of iterations
suffices, define $\Delta(S):=|S\cap\\{j\colon r_{j}\leq t\\}\setminus O_{t}|$
for any schedule $S$. It follows that $\Delta(C_{t}\oplus P)=\Delta(C_{t})-1$.
Since $\Delta$ is non-negative and its value drops by one with each iteration,
$C^{*}_{t}$ is obtained in a finite number of steps.
###### Definition 2.4.
We say that a clairvoyant schedule $C_{t}$ conforms with an oblivious schedule
$O_{t}$ if $C_{t}$ is a $\unlhd$-schedule, $C_{t}\cap\\{j\colon r_{j}\leq
t\\}\subseteq O_{t}$, and for all $i\in O_{t}$ such that $i\lhd j=C_{t}(t)$,
$w_{i}<w_{j}$ holds.
###### Fact 2.
For every oblivious schedule $O_{t}$ there is a conforming clairvoyant
schedule $C^{*}_{t}$.
###### Proof 2.5.
Let $C_{t}$ be a clairvoyant schedule such that $C_{t}\cap\\{j\colon r_{j}\leq
t\\}\subseteq O_{t}$; Fact 1 guarantees its existence. Let $C^{*}_{t}$ be the
schedule obtained from $C_{t}$ by first turning it into a $\unlhd$-schedule
$C^{\prime}_{t}$ and then replacing $j=C^{\prime}_{t}(t)$ with a
$\unlhd$-minimal non-dominated packet $j^{\prime}$ of the same weight.
If $j^{\prime}=j$, then $C^{*}_{t}=C^{\prime}_{t}$, and thus it is a
clairvoyant $\unlhd$-schedule. Assume $j^{\prime}\neq j$, i.e.,
$j^{\prime}\lhd j$. Then $j^{\prime}\notin C^{\prime}_{t}$, since
$C^{\prime}_{t}$ is a $\unlhd$-schedule. Thus $C^{*}_{t}$ is feasible as we
replace $C^{\prime}_{t}$’s very first packet by another pending packet which
was not included in $C_{t}$. Observe that $C^{*}_{t}$ is indeed a clairvoyant
$\unlhd$-schedule: optimality follows from $w_{j^{\prime}}=w_{j}$, while
consistency with $\unlhd$ follows from $j^{\prime}\lhd j$.
It remains to prove that for every $i\in O_{t}$ such that $i\lhd j^{\prime}$,
$w_{i}<w_{j^{\prime}}=w_{j}$ holds. Note that $i\notin C^{*}_{t}$ as
$C^{*}_{t}$ is a $\unlhd$-schedule, and that $w_{i}\neq w_{j^{\prime}}=w_{j}$
holds by the choice of $j^{\prime}$. Assume for contradiction that
$w_{i}>w_{j}$. Then $C^{*}_{t}$ with $j$ replaced by $i$ is a feasible
schedule contradicting optimality of $C^{*}_{t}$.
Now we inspect some properties of conforming schedules.
###### Fact 3.
Let $C_{t}$ be a clairvoyant schedule conforming with an oblivious schedule
$O_{t}$. If $i,j\in O_{t}$, $w_{i}<w_{j}$ and $d_{i}<d_{j}$ (or, equivalently
$w_{i}<w_{j}$ and $i\lhd j$), and $i\in C_{t}$, then also $j\in C_{t}$.
###### Proof 2.6.
Assume for contradiction that $j\notin C_{t}$. Then $C_{t}$ with $i$ replaced
by $j$ is a feasible schedule contradicting optimality of $C_{t}$.
###### Lemma 2.7.
Let $C_{t}$ be a clairvoyant schedule conforming with an oblivious schedule
$O_{t}$. Suppose that $e=O_{t}(t)\notin C_{t}$. Then there is a clairvoyant
schedule $C^{*}_{t}$ obtained from $C_{t}$ by reordering of packets such that
$C^{*}_{t}(t)=h$.
###### Proof 2.8.
Let $j=C_{t}(t)\neq h$ and let $O_{t}=p_{1},p_{2},\ldots,p_{s}$. Observe that
$h\in C_{t}$ by Fact 3. So in particular $e=p_{1}$, $j=p_{k}$, and $h=p_{l}$
for some $1<k<l\leq s$. Let $d_{i}$ denote the deadline of $p_{i}$ for $1\leq
i\leq s$. Since $O_{t}$ is feasible in the absence of future arrivals,
$d_{i}\geq t+i$ for $i=1,\ldots,s$.
Recall that $p_{k},p_{l}\in C_{t}$ and that there can be some further packets
$p\in C_{t}$ such that $p_{k}\lhd p\lhd p_{l}$; some of these packets may be
not pending yet. We construct a schedule $C^{\prime}_{t}$ by reordering
$C_{t}$. Precisely, we put all the packets from $C_{t}$ that are not yet
pending at $t$ after all the packets from $C_{t}$ that are already pending,
keeping the order between the pending packets and between those not yet
pending. By the agreeable deadlines property, this is an earliest deadline
first order, so $C^{\prime}_{t}$ is a clairvoyant schedule.
As $e=p_{1}\notin C^{\prime}_{t}$ and $d_{i}\geq t+i$ for $i=1,\ldots,s$, all
the packets $x\in C^{\prime}_{t}$ preceding $h$ in $C^{\prime}_{t}$ (i.e.,
$x\in C^{\prime}_{t}$ such that $r_{x}\leq t$ and $x\lhd p_{l}=h$) have
_slack_ in $C^{\prime}_{t}$, i.e., each of them could also be scheduled one
step later. Hence $h=p_{l}$ can be moved to the very front of $C^{\prime}_{t}$
while keeping its feasibility, i.e.,
$C^{\prime}_{t}=p_{k},p_{k^{\prime}},\ldots,p_{l^{\prime}},p_{l}$ can be
transformed to a clairvoyant schedule
$C^{*}_{t}=p_{l},p_{k},p_{k^{\prime}},\ldots,p_{l^{\prime}}$. The reordering
is illustrated in Figure 3.
Figure 3. Construction of the schedule $C^{*}_{t}$. Packets are represented by
circles: the ones included in $C_{t}$ ($C^{*}_{t}$) are filled, the remaining
ones are hollow.
## 3\. Algorithms and their analyses
### 3.1. The Algorithms
The algorithm MG [12] works as follows: at the beginning of each step $t$ it
considers the packets in the buffer and the newly arrived packets, and
calculates $O_{t}$. Then MG identifies the packets $e$ and $h$. If $\phi
w_{e}\geq w_{h}$, MG sends $e$. Otherwise, it sends the $\unlhd$-minimal
packet $f$ such that $w_{f}\geq\phi w_{e}$ and $\phi w_{f}\geq w_{h}$; the
latter exists as $h$ itself is a valid candidate. Our deterministic algorithm
MG′ does exactly the same with one exception: if $\phi w_{e}<w_{h}$, it sends
$h$ rather than $f$. Our randomized algorithm RG also works in a similar
fashion: it transmits $e$ with probability $\frac{w_{e}}{w_{h}}$ and $h$ with
the remaining probability. For completeness, we provide pseudo-codes of all
three algorithms in Figure 4.
* MG (step $t$)
  1. $O_{t}\leftarrow$ oblivious schedule at $t$
  2. $e\leftarrow$ the $\unlhd$-minimal packet from $O_{t}$
  3. $h\leftarrow$ the $\unlhd$-minimal of all the heaviest packets from $O_{t}$
  4. if $\phi w_{e}\geq w_{h}$ then transmit $e$
  5. else $f\leftarrow$ the $\unlhd$-minimal of all $j\in O_{t}$ s.t. $w_{j}\geq\phi w_{e}$ and $\phi w_{j}\geq w_{h}$; transmit $f$
* MG′ (step $t$)
  1. $O_{t}\leftarrow$ oblivious schedule at $t$
  2. $e\leftarrow$ the $\unlhd$-minimal packet from $O_{t}$
  3. $h\leftarrow$ the $\unlhd$-minimal of all the heaviest packets from $O_{t}$
  4. if $\phi w_{e}\geq w_{h}$ then transmit $e$ else transmit $h$
* RG (step $t$)
  1. $O_{t}\leftarrow$ oblivious schedule at $t$
  2. $e\leftarrow$ the $\unlhd$-minimal packet from $O_{t}$
  3. $h\leftarrow$ the $\unlhd$-minimal of all the heaviest packets from $O_{t}$
  4. transmit $e$ with probability $\frac{w_{e}}{w_{h}}$ and $h$ with probability $1-\frac{w_{e}}{w_{h}}$
Figure 4. The three algorithms
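To make the pseudo-code concrete, here is a minimal Python sketch of RG (it is ours, not taken from [12] or the present paper; the `Packet` representation, the greedy computation of $O_{t}$, and the function names are illustrative assumptions). The oblivious schedule is obtained greedily, scanning pending packets by decreasing weight and keeping a packet whenever the selected set remains schedulable by an EDF test; since the schedulable sets of pending packets form a matroid, this greedy choice yields a maximum-weight feasible set, and listing it in the $\unlhd$ order gives an admissible $O_{t}$ in the sense of Definition 2.1 (tie-breaking may differ from any particular canonical choice).

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    release: int
    deadline: int       # the packet expires at the very beginning of step `deadline`
    weight: float
    index: int = 0      # tie-breaker, emulating unique fractional release times

def order_key(p):
    # the linear order "⊴": earlier deadline, then larger weight, then earlier release
    return (p.deadline, -p.weight, p.index)

def feasible(packets, t):
    # EDF test: can all chosen packets be sent in steps t, t+1, ... before they expire?
    return all(t + i < p.deadline
               for i, p in enumerate(sorted(packets, key=lambda q: q.deadline)))

def oblivious_schedule(pending, t):
    # Greedy by weight with an EDF feasibility test, then listed in the "⊴" order.
    chosen = []
    for p in sorted(pending, key=lambda q: (-q.weight, q.deadline, q.index)):
        if feasible(chosen + [p], t):
            chosen.append(p)
    return sorted(chosen, key=order_key)

def rg_step(pending, t, rng=random):
    """One step of RG: returns the transmitted packet, or None if nothing is pending."""
    O_t = oblivious_schedule(pending, t)
    if not O_t:
        return None
    e = O_t[0]                                          # the ⊴-minimal non-dominated packet
    w_max = max(p.weight for p in O_t)
    h = min((p for p in O_t if p.weight == w_max), key=order_key)
    sent = e if rng.random() < e.weight / w_max else h  # e with prob. w_e/w_h, else h
    pending.remove(sent)
    return sent
```

Between calls, the caller is responsible for adding newly released packets to `pending` and discarding expired ones; replacing the random choice by `e if (1 + 5 ** 0.5) / 2 * e.weight >= w_max else h` turns this sketch into MG′.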
### 3.2. Analysis Idea
The analysis of Li et al. [12] uses the following idea: in each step, after
both MG and Adv transmitted their packets, modify Adv’s buffer in such a way
that it remains the same as MG’s and that this change can only improve Adv’s
gain, both in this step and in the future. Sometimes Adv’s schedule is also
modified to achieve this goal, specifically, the packets in it may be
reordered, and Adv may sometimes be allowed to transmit two packets in a
single step. It is proved that in each such step the ratio of Adv’s to MG’s
gain is at most $\phi$. As was already noticed by Li et al. [12], this is
essentially a potential function argument. To simplify the analysis, it is
assumed (wlog) that Adv transmits its packets in the $\unlhd$ order.
Our analysis follows the outline of the one by Li et al., but we make it more
formal. Observe that there may be multiple clairvoyant schedules, and that Adv
can transmit $C_{t}(t)$ at every step $t$, where $C_{t}$ is a clairvoyant
schedule chosen arbitrarily in step $t$. As our algorithms
$\textsc{MG}^{\prime}$ and RG determine the oblivious schedule $O_{t}$ at each
step, we assume that every $C_{t}$ is a clairvoyant schedule conforming with
$O_{t}$.
There is one exception though. Sometimes, when a reordering in $C_{t}$ does
not hinder Adv’s performance (taking future arrivals into account), we “force”
Adv to follow the reordered schedule. This is the situation described in Lemma
2.7: when $e\notin C_{t}$, there is a clairvoyant schedule $C^{*}_{t}$ such
that $h=C^{*}_{t}(t)$. In such case we may assume that Adv follows $C^{*}_{t}$
rather than $C_{t}$, i.e., that it transmits $h$ at $t$. Indeed, we make that
assumption whenever our algorithm (either MG′ or RG) transmits $h$ at such
step: then Adv and MG′ (RG) transmit the same packet, which greatly simplifies
the analysis.
Our analysis of $\textsc{MG}^{\prime}$ is essentially the same as the original
analysis of MG by Li et al. [12], but lacks one case which is superfluous due
to our modification. As our algorithm $\textsc{MG}^{\prime}$ always transmits
either $e$ or $h$, and the packet $j$ that Adv transmits always satisfies
$j\unlhd h$ by definition of the clairvoyant schedule conforming with $O_{t}$,
the case in which MG transmits $f$ such that $e\lhd f\lhd j$ does not occur for
$\textsc{MG}^{\prime}$. The same observation applies to RG, whose analysis
also follows the ideas of Li et al.
### 3.3. Analysis of the Deterministic Algorithm
We analyze this algorithm as mentioned before, i.e., assuming (wlog) that at
every step $t$ Adv transmits $C_{t}(t)$, where $C_{t}$ is a clairvoyant
schedule conforming with $O_{t}$.
###### Theorem 3.1.
MG′ is $\phi$-competitive on sequences with agreeable deadlines.
###### Proof 3.2.
Note that whenever $\textsc{MG}^{\prime}$ and Adv transmit the same packet, clearly
their gains are the same, as are their buffers right after such step. In
particular this happens when $e=h$ as then $\textsc{MG}^{\prime}$ transmits
$e=h$ and Adv does the same: in such case $h$ is both the heaviest packet and
the $\unlhd$-minimal non-dominated packet, so $h=C_{t}(t)$ by definition of
the clairvoyant schedule conforming with $O_{t}$.
In what follows we inspect the three remaining cases.
#### $\phi w_{e}\geq w_{h}\colon\textsc{MG}^{\prime}\text{ transmits }e.\
\textsc{Adv}\text{ transmits }j\neq e.$
To make the buffers of MG′ and Adv identical right after this step, we replace
$e$ in Adv’s buffer by $j$. This is advantageous for Adv as $d_{j}\geq d_{e}$
and $w_{j}\geq w_{e}$ follows from $e\unlhd j$ and the definition of a
clairvoyant schedule conforming with $O_{t}$. As $\phi w_{e}\geq w_{h}$, the
ratio of gains is
$\frac{w_{j}}{w_{e}}\leq\frac{w_{h}}{w_{e}}\leq\phi\enspace.$
#### $\phi w_{e}<w_{h}\colon\textsc{MG}^{\prime}\text{ transmits }h.\
\textsc{Adv}\text{ transmits }e.$
Note that Adv’s clairvoyant schedule from this step contains $h$ by Fact 3. We
let Adv transmit both $e$ and $h$ in this step and keep $e$ in its buffer,
making it identical to the buffer of MG′. Keeping $e$, as well as transmitting
two packets at a time is clearly advantageous for Adv. As $\phi w_{e}<w_{h}$,
the ratio of gains is
$\frac{w_{e}+w_{h}}{w_{h}}\leq\frac{1}{\phi}+1=\phi\enspace.$
#### $\phi w_{e}<w_{h}\colon\textsc{MG}^{\prime}\text{ transmits }h.\
\textsc{Adv}\text{ transmits }j\neq e.$
Note that $j\unlhd h$: by definition of the clairvoyant schedule conforming
with $O_{t}$, for every $i\in O_{t}$ such that $i\lhd j$, $w_{i}<w_{j}$ holds.
There are two cases: either $j=h$, or $w_{j}<w_{h}$ and $d_{j}<d_{h}$. In the
former one both players do the same and end up with identical buffers. Thus we
focus on the latter case. Fact 3 implies that $h\in C_{t}$. By Lemma 2.7,
$C_{t}$ remains feasible when $h$ is moved to its very beginning. Hence we
assume that Adv transmits $h$ in the current step. As this is the packet that
MG′ sends, the gains of Adv and $\textsc{MG}^{\prime}$ are the same and no
changes need be made to Adv’s buffer.
### 3.4. Analysis of the Randomized Algorithm
We analyze this algorithm as mentioned before, i.e., assuming (wlog) that at
every step $t$ Adv transmits $C_{t}(t)$, where $C_{t}$ is a clairvoyant
schedule conforming with $O_{t}$.
###### Theorem 3.3.
RG is $\frac{4}{3}$-competitive against oblivious adversary on sequences with
agreeable deadlines.
###### Proof 3.4.
Observe that if $e=h$, then RG transmits $e=h$ and Adv does the same: as in
such case $h$ is both the heaviest packet and the $\unlhd$-minimal non-
dominated packet, $h=C_{t}(t)$ by definition of the clairvoyant schedule
conforming with $O_{t}$. In such case the gains of RG and Adv are clearly the
same, as are their buffers right after step $t$. Thus we assume $e\neq h$ from
now on.
Let us first bound the algorithm’s expected gain in one step. It equals
$\displaystyle\mathcal{G}_{\mathrm{RG}}$
$\displaystyle=\frac{w_{e}}{w_{h}}\cdot
w_{e}+\left(1-\frac{w_{e}}{w_{h}}\right)\cdot w_{h}$
$\displaystyle=\frac{1}{w_{h}}\left(w_{e}^{2}-w_{e}w_{h}+w_{h}^{2}\right)$
$\displaystyle=\frac{1}{w_{h}}\left(\left(w_{e}-\frac{w_{h}}{2}\right)^{2}+\frac{3}{4}w_{h}^{2}\right)$
$\displaystyle\geq\frac{3}{4}w_{h}\enspace.$ (1)
Now we describe the changes to Adv's scheduling policy and buffer in the given
step. These make Adv's and RG's buffers identical, and, furthermore, make the
expected gain of the adversary equal exactly $w_{h}$. This, together with
(1), yields the desired bound. To this end we consider cases depending on
Adv's choice.
1. (1)
Adv transmits $e$. Note that Adv’s clairvoyant schedule from this step
contains $h$ by Fact 3.
If RG transmits $e$, which it does with probability $\frac{w_{e}}{w_{h}}$,
both players gain $w_{e}$ and no changes are required.
Otherwise RG transmits $h$, and we let Adv transmit both $e$ and $h$ in this
step and keep $e$ in its buffer, making it identical to RG’s buffer. Keeping
$e$, as well as transmitting two packets at a time is clearly advantageous for
Adv.
Thus in this case the adversary’s expected gain is
$\mathcal{G}_{\mathrm{ADV}}=\frac{w_{e}}{w_{h}}\cdot
w_{e}+\left(1-\frac{w_{e}}{w_{h}}\right)\left(w_{e}+w_{h}\right)=w_{e}+(w_{h}-w_{e})=w_{h}\enspace.$
2. (2)
Adv transmits $j\neq e$. Note that $j\unlhd h$: by definition of the
clairvoyant schedule conforming with $O_{t}$, for every $i\in O_{t}$ such that
$i\lhd j$, $w_{i}<w_{j}$ holds.
If RG sends $e$, which it does with probability $\frac{w_{e}}{w_{h}}$, we
simply replace $e$ in Adv’s buffer by $j$. This is advantageous for Adv as
$w_{j}>w_{e}$ and $d_{j}>d_{e}$ follow from $e\lhd j$ and the definition of
the clairvoyant schedule conforming with $O_{t}$.
Otherwise RG sends $h$, and we claim that (wlog) Adv does the same. Suppose
that $j\neq h$, which implies that $w_{j}<w_{h}$ and $d_{j}<d_{h}$. Then $h\in
C_{t}$, by Fact 3. Thus, by Lemma 2.7, $C_{t}$ remains feasible when $h$ is
moved to its very beginning. Hence we assume that Adv transmits $h$ in the
current step. No further changes need be made to Adv’s buffer as RG also sends
$h$.
Thus in this case the adversary’s expected gain is $w_{h}$.
## 4\. Conclusion and Open Problems
We have shown that, as long as the adversary is oblivious, the ideas of Li et
al. [12] can be applied to randomized algorithms, and devised a
$\frac{4}{3}$-competitive algorithm this way. However, the gap between the
$\frac{5}{4}$ lower bound and our $\frac{4}{3}$ upper bound remains.
Some parts of our analysis hold even in the adaptive adversary model [8]. On
the other hand, other parts do not extend to the adaptive adversary model, since
in general such an adversary's schedule is a random variable depending on the
algorithm's random choices. Therefore it is not possible to assume that this
"schedule" is ordered by deadlines, let alone to perform a reordering like the one
in the proof of Lemma 2.7.
This makes bridging either the $\left[\frac{5}{4},\frac{4}{3}\right]$ gap in
the oblivious adversary model, or the $\left[\frac{4}{3},\frac{e}{e-1}\right]$
gap in the adaptive adversary model all the more interesting.
### Acknowledgements
I would like to thank my brother, Artur Jeż, for numerous comments on the
draft of this paper.
## References
* [1] S. Ben-David, A. Borodin, R. M. Karp, G. Tardos, and A. Wigderson. On the power of randomization in on-line algorithms. Algorithmica, 11(1):2–14, 1994. Also appeared in Proc. of the 22nd STOC, 1990.
* [2] M. Bieńkowski, M. Chrobak, and Ł. Jeż. Randomized algorithms for Buffer Management with 2-Bounded Delay. In Proc. of the 6th WAOA, pages 92–104, 2008.
* [3] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
* [4] F. Y. L. Chin, M. Chrobak, S. P. Y. Fung, W. Jawor, J. Sgall, and T. Tichý. Online competitive algorithms for maximizing weighted throughput of unit jobs. J. Discrete Algorithms, 4(2):255–276, 2006.
* [5] F. Y. L. Chin and S. P. Y. Fung. Online scheduling with partial job values: Does timesharing or randomization help? Algorithmica, 37(3):149–164, 2003.
* [6] M. Englert and M. Westermann. Considering suppressed packets improves buffer management in QoS switches. In Proc. of the 18th SODA, pages 209–218, 2007.
* [7] B. Hajek. On the competitiveness of online scheduling of unit-length packets with hard deadlines in slotted time. In Conference in Information Sciences and Systems, pages 434–438, 2001.
* [8] Ł. Jeż. Randomised buffer management with bounded delay against adaptive adversary. CoRR, abs/0907.2050, 2009.
* [9] J. Jeżabek. Resource augmentation for QoS Buffer Management with Agreeable Deadlines. Unpublished, $2009^{+}$.
* [10] J. Jeżabek. Increasing machine speed in on-line scheduling of weighted unit-length jobs in slotted time. In Proc. of the 35th SOFSEM, pages 329–340, 2009.
* [11] A. Kesselman, Z. Lotker, Y. Mansour, B. Patt-Shamir, B. Schieber, and M. Sviridenko. Buffer overflow management in QoS switches. SIAM J. Comput., 33(3):563–583, 2004.
* [12] F. Li, J. Sethuraman, and C. Stein. An optimal online algorithm for packet scheduling with agreeable deadlines. In Proc. of the 16th SODA, pages 801–802, 2005.
* [13] A. Schrijver. Combinatorial Optimization, volume A. Springer, 2003.
# Unitary dual of $\mathbf{GL}(n)$ at archimedean places and global Jacquet-
Langlands correspondence
I. A. Badulescu and D. Renard
Centre de Mathématiques Laurent Schwartz, École Polytechnique
###### Abstract.
In [7], results about the global Jacquet-Langlands correspondence, (weak and
strong) multiplicity-one theorems and the classification of automorphic
representations for inner forms of the general linear group over a number
field are established, under the condition that the local inner forms are
split at archimedean places. In this paper, we extend the main local results
of [7] to archimedean places so that this assumption can be removed. Along the
way, we collect several results about the unitary dual of general linear
groups over $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$ of independent
interest.
###### Contents
1. 1 Introduction
2. 2 Notation
3. 3 Langlands classification
4. 4 Jacquet-Langlands correspondence
5. 5 Support and infinitesimal character
6. 6 Bruhat $G$-order
7. 7 Unitary dual
8. 8 Classification of generic irreducible unitary representations
9. 9 Classification of discrete series : archimedean case
10. 10 $U(3)$ for $A=\mathbb{H}$
11. 11 $U(1)$ : archimedean case
12. 12 Vogan’s classification and $U(0)$ in the archimedean case
13. 13 Jacquet-Langlands correspondence in the archimedean case
14. 14 Character formulas and ends of complementary series
15. 15 Compatibility and further comments
16. 16 Notation for the global case
17. 17 Second insight of some local results
18. 18 Global results
19. 19 $L$-functions $\epsilon$-factors and transfer
## 1\. Introduction
In [7], results about the global Jacquet-Langlands correspondence, (weak and
strong) multiplicity-one theorems and the classification of automorphic
representations for inner forms of the general linear group over a number
field are established, under the condition that the local inner forms are
split at archimedean places. The main goal of this paper is to remove this
hypothesis. The paper consists of two parts. In the first part, we extend the
main local results of [7] to archimedean places. In the second part, we
explain how to use these local results to establish the global results in
their full generality. Along the way, we collect several results about the
unitary dual of general linear groups over $\mathbb{R}$, $\mathbb{C}$ or
$\mathbb{H}$ of independent interest. Let us now explain in more detail the
content of this paper.
### 1.1. Some notation
Let $A$ be one of the division algebras $\mathbb{R}$, $\mathbb{C}$ or
$\mathbb{H}$. If $A=\mathbb{R}$ or $A=\mathbb{C}$ and
$n\in\mathbb{N}^{\times}$, we denote by $\det$ the determinant map on
$\mathbf{GL}(n,A)$ (taking values in $A$). If $A=\mathbb{H}$, let $RN$ be the
reduced norm map on $\mathbf{GL}(n,\mathbb{H})$ (taking values in
$\mathbb{R}^{\times}_{+}$).
If $n\in\mathbb{N}$ and $\sum_{i=1}^{s}n_{i}=n$ is a partition of $n$, the
group
$\mathbf{GL}({n_{1}},A)\times\mathbf{GL}({n_{2}},A)\times...\times\mathbf{GL}({n_{s}},A)$
is identified with the subgroup of $\mathbf{GL}(n,A)$ of block diagonal
matrices of sizes $n_{1},\ldots,n_{s}$. Let us denote by
$G_{(n_{1},\ldots,n_{s})}$ this subgroup and by $P_{(n_{1},\ldots,n_{s})}$ the
parabolic subgroup of $\mathbf{GL}(n,A)$ containing $G_{(n_{1},\ldots,n_{s})}$ and the
Borel subgroup of invertible upper triangular matrices. For $1\leq i\leq s$
let $\pi_{i}$ be an admissible representation of $\mathbf{GL}({n_{i}},A)$ of
finite length. We then write $\pi_{1}\times\pi_{2}\times...\times\pi_{s}$ for
the representation induced from the representation
$\pi_{1}\otimes\pi_{2}\otimes...\otimes\pi_{s}$ of $G_{(n_{1},\ldots,n_{s})}$
with respect to $P_{(n_{1},\ldots,n_{s})}$. We will also use this notation for
the image of representations in the Grothendieck group of virtual characters,
which makes the above product commutative. We also often do not distinguish
between a representation and its isomorphy class and write “equal” for
“isomorphic”.
### 1.2. Classification of unitary representations
We first recall Tadić's classification of the unitary dual of the groups
$\mathbf{GL}(n,\mathbb{R})$ and $\mathbf{GL}(n,\mathbb{C})$, following [46].
The classification is similar to the one for non archimedean local fields
([41],[43]) and is explained in detail in Section 7. Some of the arguments do
not appear in the literature in the case of $\mathbf{GL}(n,\mathbb{H})$, so we
give the complete proofs in Sections 10, 11 and 12, using Vogan’s
classification [51].
Let $X_{\mathbb{C}}$ be the set of unitary characters of
$\mathbb{C}^{\times}$. If $\chi\in X_{\mathbb{C}}$, $n\in\mathbb{N}^{\times}$
let $\chi_{n}$ be the character $\chi\circ\det$ of
$\mathbf{GL}(n,\mathbb{C})$. Let $\nu_{n}$ be the character of
$\mathbf{GL}(n,\mathbb{C})$ given by the square of the modulus of the
determinant. If $\sigma$ is a representation of $\mathbf{GL}(n,\mathbb{C})$
and $\alpha\in\mathbb{R}$, write $\pi(\sigma,\alpha)$ for the representation
$\nu_{n}^{\alpha}\sigma\times\nu_{n}^{-\alpha}\sigma$ of
$\mathbf{GL}(2n,\mathbb{C})$. Set
$\mathcal{U}_{\mathbb{C}}=\\{\chi_{n},\pi(\chi_{n},\alpha)\ |\ \ \chi\in
X_{\mathbb{C}},\,n\in\mathbb{N}^{\times},\,\alpha\in]0,\frac{1}{2}[\\}.$
Let $X_{\mathbb{R}}$ be the set of unitary characters of
$\mathbb{R}^{\times}$. Let sgn denote the sign character. If $\chi\in
X_{\mathbb{R}}$, $n\in\mathbb{N}^{\times}$ let $\chi_{n}$ be the character
$\chi\circ\det$ of $\mathbf{GL}(n,\mathbb{R})$ and $\chi^{\prime}_{n}$ the
character $\chi\circ RN$ of $\mathbf{GL}(n,\mathbb{H})$. For fixed $n$, the
map $\chi\mapsto\chi_{n}$ is an isomorphism from the group of unitary
characters of $\mathbb{R}^{\times}$ to the group of unitary characters of
$\mathbf{GL}(n,\mathbb{R})$, while $\chi\mapsto\chi^{\prime}_{n}$ is a
surjective map from the group of unitary characters of $\mathbb{R}^{\times}$
to the group of unitary characters of $\mathbf{GL}(n,\mathbb{H})$, with kernel
$\\{1,\mathrm{sgn}\\}$.
Let $\nu_{n}$ (resp. $\nu^{\prime}_{n}$) be the character of
$\mathbf{GL}(n,\mathbb{R})$ (resp. $\mathbf{GL}(n,\mathbb{H})$) given by the
absolute value (resp. the reduced norm) of the determinant. If $\sigma$ is a
representation of $\mathbf{GL}(n,\mathbb{R})$ (resp.
$\mathbf{GL}(n,\mathbb{H})$) and $\alpha\in\mathbb{R}$, write
$\pi(\sigma,\alpha)$ for the representation
$\nu_{n}^{\alpha}\sigma\times\nu_{n}^{-\alpha}\sigma$ of
$\mathbf{GL}(2n,\mathbb{R})$ (resp. the representation
${\nu^{\prime}}_{n}^{\alpha}\sigma\times{\nu^{\prime}}_{n}^{-\alpha}\sigma$ of
$\mathbf{GL}(2n,\mathbb{H})$).
Let $D^{u}_{2}$ be the set of isomorphism classes of square integrable (modulo
center) representations of $\mathbf{GL}(2,\mathbb{R})$. For $\delta\in
D^{u}_{2}$ and $k\in\mathbb{N}^{\times}$, write $u(\delta,k)$ for the
Langlands quotient of the representation
$\nu_{2}^{\frac{k-1}{2}}\delta\times\nu_{2}^{\frac{k-3}{2}}\delta\times\nu_{2}^{\frac{k-5}{2}}\delta\times...\times\nu_{2}^{-\frac{k-1}{2}}\delta$.
Then $u(\delta,k)$ is a representation of $\mathbf{GL}({2k},\mathbb{R})$. Set
$\displaystyle\mathcal{U}_{\mathbb{R}}=\\{\chi_{n},\pi(\chi_{n},\alpha)\ |\ \
\chi\in
X_{\mathbb{R}},\,n\in\mathbb{N}^{\times},\,\alpha\in]0,\frac{1}{2}[\\}$
$\displaystyle\cup\\{u(\delta,k),\pi(u(\delta,k),\alpha)\ |\ \ \delta\in
D^{u}_{2},\,k\in\mathbb{N}^{\times},\ \alpha\in]0,\frac{1}{2}[\\}.$
Let now $D$ be the set of irreducible unitary representations of
$\mathbb{H}^{\times}$ which are not one-dimensional. For $\delta\in D$ and
$k\in\mathbb{N}^{\times}$, write $u(\delta,k)$ for the Langlands quotient of
the representation
${\nu^{\prime}}_{1}^{\frac{k-1}{2}}\delta\times{\nu^{\prime}}_{1}^{\frac{k-3}{2}}\delta\times{\nu^{\prime}}_{1}^{\frac{k-5}{2}}\delta\times...\times{\nu^{\prime}}_{1}^{-\frac{k-1}{2}}\delta$.
Then $u(\delta,k)$ is a representation of $\mathbf{GL}(k,\mathbb{H})$.
Set
$\displaystyle\mathcal{U}_{\mathbb{H}}=\\{\chi^{\prime}_{n},\pi(\chi^{\prime}_{n},\alpha)\
|\ \ \chi\in X_{\mathbb{R}},\,n\in\mathbb{N}^{\times},\,\alpha\in]0,1[\\}$
$\displaystyle\cup\ \ \\{u(\delta,k),\pi(u(\delta,k),\alpha)\ |\ \ \delta\in
D,\,k\in\mathbb{N}^{\times},\,\alpha\in]0,\frac{1}{2}[\\}.$
###### Theorem 1.1.
For $A=\mathbb{C},\mathbb{R},\mathbb{H}$, any representation in
$\mathcal{U}_{A}$ is irreducible and unitary, any product of representations
in $\mathcal{U}_{A}$ is irreducible and unitary, and any irreducible unitary
representation $\pi$ of $\mathbf{GL}(n,A)$ can be written as a product of
elements in $\mathcal{U}_{A}$. Moreover, $\pi$ determines the factors of the
product (up to permutation).
Notice the two different ranges for the possible values of $\alpha$ in the
case $A=\mathbb{H}$.
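To make the statement concrete in the smallest non-trivial case: for $A=\mathbb{C}$ and $n=2$, Theorem 1.1 says that every irreducible unitary representation of $\mathbf{GL}(2,\mathbb{C})$ is of exactly one of the forms
$\chi_{2},\qquad\chi_{1}\times\tilde{\chi}_{1}\quad(\chi,\tilde{\chi}\in X_{\mathbb{C}}),\qquad\pi(\chi_{1},\alpha)=\nu_{1}^{\alpha}\chi_{1}\times\nu_{1}^{-\alpha}\chi_{1}\quad(\alpha\in]0,\tfrac{1}{2}[),$
i.e. a unitary character, a unitarily induced principal series, or a complementary series, and the factors are determined by the representation.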
### 1.3. Jacquet-Langlands for unitary representations
Any element in $\mathbf{GL}(n,\mathbb{H})$ has a characteristic polynomial of
degree $2n$ with coefficients in $\mathbb{R}$. We say that two elements
$g\in\mathbf{GL}(2n,\mathbb{R})$ and $g^{\prime}\in\mathbf{GL}(n,\mathbb{H})$
correspond (to each other) if they have the same characteristic polynomial and
this polynomial has distinct roots in $\mathbb{C}$ (this last condition means
that $g$ and $g^{\prime}$ are regular semisimple). We then write
$g\leftrightarrow g^{\prime}$.
Let $\mathbf{C}$ denote the Jacquet-Langlands correspondence between
irreducible square integrable representations of $\mathbf{GL}(2,\mathbb{R})$
and irreducible unitary representations of $\mathbb{H}^{\times}$ ([23]). This
correspondence can be extended to a correspondence $|\mathbf{LJ}|$ between all
irreducible unitary representations of $\mathbf{GL}(2n,\mathbb{R})$ and
$\mathbf{GL}(n,\mathbb{H})$ (it comes from a ring morphism $\mathbf{LJ}$
between the respective Grothendieck groups, defined in Section 4, which
explains the notation). In what follows, it is understood that each time we
write the relation $|\mathbf{LJ}|(\pi)=\pi^{\prime}$ for $\pi$ and
$\pi^{\prime}$ representations of $\mathbf{GL}(2n,\mathbb{R})$ and
$\mathbf{GL}(n,\mathbb{H})$ respectively, then $\pi$ and $\pi^{\prime}$
satisfy the character relation
$\Theta_{\pi}(g)=\varepsilon(\pi)\Theta_{\pi^{\prime}}(g^{\prime})$ for all
$g\leftrightarrow g^{\prime}$ where $\varepsilon(\pi)$ is an explicit sign
($\pi$ clearly determines $\pi^{\prime}$ and $\varepsilon$). The
correspondence $|\mathbf{LJ}|$ for unitary representations is given first on
elements in $\mathcal{U}_{\mathbb{R}}$:
(a) $|\mathbf{LJ}|(\chi_{2n})=\chi^{\prime}_{n}$ and
$|\mathbf{LJ}|(\pi(\chi_{2n},\alpha))=\pi(\chi^{\prime}_{n},\alpha)$ for all
$\chi\in X_{\mathbb{R}}$ and $\alpha\in]0,\frac{1}{2}[$;
(b) If $\delta\in D^{u}_{2}$ is such that $\mathbf{C}(\delta)$ is in $D$ (i.e.
is not one-dimensional) then
$|\mathbf{LJ}|(u(\delta,k))=u(\mathbf{C}(\delta),k)$ and
$|\mathbf{LJ}|(\pi(u(\delta,k),\alpha))=\pi(u(\mathbf{C}(\delta),k),\alpha)$ for all
$\alpha\in]0,\frac{1}{2}[$;
(c) If $\delta\in D^{u}_{2}$ is such that $\mathbf{C}(\delta)$ is a one-
dimensional representation $\chi^{\prime}_{1}$, then
— $|\mathbf{LJ}|(u(\delta,k))=\pi(\chi^{\prime}_{\frac{k}{2}},\frac{1}{2})$,
$|\mathbf{LJ}|(\pi(u(\delta,k),\alpha))=\pi(\pi(\chi^{\prime}_{\frac{k}{2}},\frac{1}{2}),\alpha)$
if $k$ is even and $\alpha\in]0,\frac{1}{2}[$.
—
$|\mathbf{LJ}|(u(\delta,k))=\chi^{\prime}_{\frac{k+1}{2}}\times\chi^{\prime}_{\frac{k-1}{2}}$,
$|\mathbf{LJ}|(\pi(u(\delta,k),\alpha))=\pi(\chi^{\prime}_{\frac{k+1}{2}},\alpha)\times\pi(\chi^{\prime}_{\frac{k-1}{2}},\alpha)$
if $k\neq 1$ odd and $\alpha\in]0,\frac{1}{2}[$.
— $|\mathbf{LJ}|(\delta)=\chi^{\prime}_{1}$,
$|\mathbf{LJ}|(\pi(\delta,\alpha))=\pi(\chi^{\prime}_{1},\alpha)$,
$\alpha\in]0,\frac{1}{2}[$.
Let $\pi$ be an irreducible unitary representation of
$\mathbf{GL}(2n,\mathbb{R})$. If writing $\pi$ as a product of elements in
$\mathcal{U}_{\mathbb{R}}$ involves a factor not listed in (a), (b) or (c) it
is easy to show that $\pi$ has a character which vanishes on elements which
correspond to elements of $\mathbf{GL}(n,\mathbb{H})$, and we set
$|\mathbf{LJ}|(\pi)=0$. If all the factors $\sigma_{i}$ of $\pi$ are among those
listed in (a), (b) or (c) above, $|\mathbf{LJ}|(\pi)$ is the product of the
$|\mathbf{LJ}|(\sigma_{i})$ (an irreducible unitary representation of
$\mathbf{GL}(n,\mathbb{H})$). The elements of $\mathcal{U}_{\mathbb{R}}$ not
listed in (a), (b) or (c) are of type $\chi$ or $\pi(\chi,\alpha)$, with $\chi$ a
character of some $\mathbf{GL}(k,\mathbb{R})$ and $k$ odd.
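As an illustration of this computation, combining (a), (b) with $k=1$, and the product rule just described: for $\chi\in X_{\mathbb{R}}$ and $\delta\in D^{u}_{2}$ with $\mathbf{C}(\delta)\in D$, the irreducible unitary representation $\pi=\chi_{2}\times\delta$ of $\mathbf{GL}(4,\mathbb{R})$ satisfies
$|\mathbf{LJ}|(\pi)=|\mathbf{LJ}|(\chi_{2})\times|\mathbf{LJ}|(\delta)=\chi^{\prime}_{1}\times\mathbf{C}(\delta),$
an irreducible unitary representation of $\mathbf{GL}(2,\mathbb{H})$; on the other hand, a product such as $\chi_{1}\times\tilde{\chi}_{3}$ ($\chi,\tilde{\chi}\in X_{\mathbb{R}}$) involves characters of odd-degree groups, so $|\mathbf{LJ}|(\chi_{1}\times\tilde{\chi}_{3})=0$.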
Notice that some unitary irreducible representations of
$\mathbf{GL}(n,\mathbb{H})$ are not in the image of this map (if $n\geq 2$).
For instance, when $\chi\in X_{\mathbb{R}}$ and $\frac{1}{2}<\alpha<1$, then
both $\pi(\chi_{2},\alpha)$ and $\pi(\chi^{\prime}_{1},\alpha)$ are
irreducible and correspond to each other by the character relation, but
$\pi(\chi^{\prime}_{1},\alpha)$ is unitary while $\pi(\chi_{2},\alpha)$ is
not. Using the classification of unitary representations for
$\mathbf{GL}(4,\mathbb{R})$ and basic information on the infinitesimal
character, it is clear that no other unitary representation of
$\mathbf{GL}(4,\mathbb{R})$ has a character matching that of
$\pi(\chi^{\prime}_{1},\alpha)$.
As a consequence of the above results, we get:
###### Theorem 1.2.
Let $u$ be a unitary irreducible representation of
$\mathbf{GL}(2n,\mathbb{R})$. Then either the character $\Theta_{u}$ of $u$
vanishes on the set of elements of $\mathbf{GL}(2n,\mathbb{R})$ which
correspond to some element of $\mathbf{GL}(n,\mathbb{H})$, or there exists a
unique irreducible unitary (smooth) representation $u^{\prime}$ of
$\mathbf{GL}(n,\mathbb{H})$ such that
$\Theta_{u}(g)=\varepsilon(u)\Theta_{u^{\prime}}(g^{\prime})$
for all $g\leftrightarrow g^{\prime}$, where $\varepsilon(u)\in\\{-1,1\\}$.
The above results are proved in Section 13 and are based on the fact that
$\mathbf{GL}(2n,\mathbb{R})$ and $\mathbf{GL}(n,\mathbb{H})$ share Levi
subgroups (of $\theta$-stable parabolic subgroups, the ones used in
cohomological induction ([26])) which are products of
$\mathbf{GL}(n_{i},\mathbb{C})$. The underlying principle (a nice instance of
Langlands’ functoriality) is that the Jacquet-Langlands morphism $\mathbf{LJ}$
commutes with cohomological induction. The same principle, with Kazhdan-
Patterson lifting instead of Jacquet-Langlands correspondence, was already
used in [1].
### 1.4. Character identities and ends of complementary series
In Section 14, we give the composition series of ends of complementary series
in most cases. This is not directly related to the main purpose of the paper,
which is the global theory of the second part, but it solves some old
conjectures of Tadić which are important in understanding the topology of the
unitary dual of the groups $\mathbf{GL}(n,A)$,
$A=\mathbb{R},\mathbb{C},\mathbb{H}$. The starting point is Zuckerman's formula
for the trivial representation of $\mathbf{GL}(n,A)$. Together with
cohomological induction, it gives character formulas for unitary
representations of the groups $\mathbf{GL}(n,A)$. In case $A=\mathbb{C}$,
Zuckerman's formula is given by a determinant (see formula (14.2)), and the
Lewis Carroll identity of [14] allows us to deduce formulas (14.3), (14.5),
(14.6), (14.7), (14.10) for the ends of complementary series.
### 1.5. Global results
Let ${\mathcal{F}}$ be a global field of characteristic zero and
${\mathcal{D}}$ a central division algebra over ${\mathcal{F}}$ of dimension
$d^{2}$. Let $n\in\mathbb{N}^{*}$. Set $A^{\prime}=M_{n}({\mathcal{D}})$. For
each place $v$ of ${\mathcal{F}}$ let ${\mathcal{F}}_{v}$ be the completion of
${\mathcal{F}}$ at $v$ and set
$A^{\prime}_{v}=A^{\prime}\otimes{\mathcal{F}}_{v}$. For every place $v$ of
${\mathcal{F}}$, $A^{\prime}_{v}$ is isomorphic to the matrix algebra
$M_{r_{v}}({\mathcal{D}}_{v})$ for some positive integer $r_{v}$ and some
central division algebra ${\mathcal{D}}_{v}$ of dimension $d_{v}^{2}$ over
${\mathcal{F}}_{v}$ such that $r_{v}d_{v}=nd$. We will fix once and for all an
isomorphism and identify these two algebras. Let $V$ be the (finite) set of
places where $M_{n}({\mathcal{D}})$ is not split (i.e. $d_{v}\neq 1$).
Let $G^{\prime}({\mathcal{F}})$ be the group
$A^{\prime\times}=\mathbf{GL}(n,{\mathcal{D}})$. For every place $v\in V$, set
$G^{\prime}_{v}=A^{\prime\times}_{v}=\mathbf{GL}(r_{v},{\mathcal{D}}_{v})$ and
$G_{v}=\mathbf{GL}(n,{\mathcal{F}}_{v})$. For a given place $v$ (clear from
the context) write $g\leftrightarrow g^{\prime}$ if $g\in G_{v}$ and
$g^{\prime}\in G_{v}^{\prime}$ are regular semisimple and have equal
characteristic polynomial.
If $v\notin V$, the groups $G_{v}$ and $G^{\prime}_{v}$ are isomorphic and we
fix once and for all an isomorphism which allows us to identify them (as every
automorphism of $G_{v}$ or $G^{\prime}_{v}$ is inner, the choice of this
isomorphism will not be relevant when working with equivalence classes of
representations).
Theorem 1.2 has also been proved in the $p$-adic case ([45], [7]). So, for $v\in
V$, with the same notation and conventions in the $p$-adic case as in the
archimedean case, we have:
###### Theorem 1.3.
Let $u$ be a unitary irreducible smooth representation of $G_{v}$. Then
exactly one of the two following statements holds:
(i) the character $\Theta_{u}$ of $u$ vanishes on the set of elements of
$G_{v}$ which correspond to elements of $G^{\prime}_{v}$,
(ii) there exists a unique unitary smooth irreducible representation
$u^{\prime}$ of $G^{\prime}_{v}$ such that
$\Theta_{u}(g)=\varepsilon(u)\Theta_{u^{\prime}}(g^{\prime})$
for any $g\leftrightarrow g^{\prime}$, where $\varepsilon(u)\in\\{-1,1\\}$.
In the second case (ii) we say $u$ is compatible. We denote the map $u\mapsto
u^{\prime}$ defined on the set of compatible (unitary) representations by
$|\mathbf{LJ}_{v}|$.
Let ${\mathbb{A}}$ be the ring of adeles of ${\mathcal{F}}$, and let
$G^{\prime}({\mathbb{A}})$ (resp. $G({\mathbb{A}})$) denote the corresponding
adelic group. The group $G^{\prime}({\mathcal{F}})$ (resp.
$G({\mathcal{F}})$) is a discrete subgroup of $G^{\prime}({\mathbb{A}})$
(resp. $G({\mathbb{A}})$). The centers of $G^{\prime}$ and $G$ consist of
scalar nonzero matrices and can be identified, so both will be denoted by $Z$.
We endow these local and global groups with measures as in [3] and for every
unitary smooth character $\omega$ of $Z({\mathbb{A}})$ trivial on
$Z({{\mathcal{F}}})$, we let
$L^{2}(G^{\prime}({\mathcal{F}})Z({\mathbb{A}})\backslash
G^{\prime}({\mathbb{A}});\omega)$ be the space of functions $f$ defined on
$G^{\prime}({\mathbb{A}})$ with values in ${\mathbb{C}}$ such that
i) $f$ is left invariant under $G^{\prime}({\mathcal{F}})$,
ii) $f(zg)=\omega(z)f(g)$ for all $z\in Z({\mathbb{A}})$ and all $g\in
G^{\prime}({\mathbb{A}})$,
iii) $|f|^{2}$ is integrable over
$G^{\prime}({\mathcal{F}})Z({\mathbb{A}})\backslash G^{\prime}({\mathbb{A}})$.
Let us denote by $R^{\prime}_{\omega}$ the representation of
$G^{\prime}({\mathbb{A}})$ on $L^{2}(G^{\prime}(F)Z({\mathbb{A}})\backslash
G^{\prime}({\mathbb{A}});\omega)$ by right translations. A discrete series of
$G^{\prime}({\mathbb{A}})$ is the equivalence class of an irreducible
subrepresentation $\pi$ of $R^{\prime}_{\omega}$ for some smooth unitary character
$\omega$ of $Z({\mathbb{A}})$ trivial on $Z({\mathcal{F}})$; then $\omega$ is
the central character of $\pi$. Let $R^{\prime}_{\omega,disc}$ be the
subrepresentation of $R^{\prime}_{\omega}$ generated by irreducible
subrepresentations. It is known that a discrete series representation of
$G^{\prime}({\mathbb{A}})$ appears with finite multiplicity in
$R^{\prime}_{\omega,disc}$.
Similar definitions and statements can be made with $G$ instead of
$G^{\prime}$, with obvious notation. Every discrete series $\pi$ of
$G^{\prime}({\mathbb{A}})$ (resp. $G({\mathbb{A}})$) is “isomorphic” to a
restricted Hilbert tensor product of irreducible unitary smooth
representations $\pi_{v}$ of the groups $G^{\prime}_{v}$ (resp. $G_{v}$) - see
[17] for a precise statement and proof. The local components $\pi_{v}$ are
determined by $\pi$.
Denote by $DS$ (resp. $DS^{\prime}$) the set of discrete series of
$G({\mathbb{A}})$ (resp. $G^{\prime}({\mathbb{A}})$). Let us say that a
discrete series $\pi$ of $G({\mathbb{A}})$ is ${\mathcal{D}}$-compatible if
$\pi_{v}$ is compatible for all places $v\in V$.
###### Theorem 1.4.
(a) There exists a unique map ${\bf G}:DS^{\prime}\to DS$ such that for every
$\pi^{\prime}\in DS^{\prime}$, if $\pi={\bf G}(\pi^{\prime})$, one has
— $\pi$ is $\mathcal{D}$-compatible,
— if $v\notin V$, then $\pi_{v}=\pi^{\prime}_{v}$ and
— if $v\in V$, then $|\mathbf{LJ}_{v}|(\pi_{v})=\pi^{\prime}_{v}$.
The map ${\bf G}$ is injective. The image of ${\bf G}$ is the set of all
${\mathcal{D}}$-compatible discrete series of $G({\mathbb{A}})$.
(b) If $\pi^{\prime}\in DS^{\prime}$, then the multiplicity of $\pi^{\prime}$
in the discrete spectrum is one (Multiplicity One Theorem).
(c) If $\pi^{\prime},\pi^{\prime\prime}\in DS^{\prime}$, if
$\pi^{\prime}_{v}\simeq\pi^{\prime\prime}_{v}$ for almost all $v$, then
$\pi^{\prime}=\pi^{\prime\prime}$ (Strong Multiplicity One Theorem).
With $\mathcal{D}$ fixed, we need now to consider all possible
$n\in\mathbb{N}^{*}$ at the same time and we add a subscript in the notation :
$A_{n}=M_{n}({\mathcal{F}})$, $A^{\prime}_{n}=M_{n}({\mathcal{D}})$, $G_{n}$,
$G^{\prime}_{n}$, $DS_{n}$, $DS^{\prime}_{n}$… We recall the Moeglin-
Waldspurger classification of the residual spectrum for the groups
$G_{n}({\mathbb{A}})$, $n\in\mathbb{N}^{*}$. Let $\nu$ be the character of
$G_{n}({\mathbb{A}})$ or $G^{\prime}_{n}({\mathbb{A}})$ given by the
restricted product of the characters $\nu_{v}=|\det|_{v}$, where $|\,\cdot\,|_{v}$ is
the normalized absolute value on ${\mathcal{F}}_{v}$ and $\det$ is the reduced norm at the place $v$. Let
$m\in\mathbb{N}^{*}$ and $\rho\in DS_{m}$ be a cuspidal representation. If
$k\in\mathbb{N}^{*}$, then the induced representation to
$G_{mk}({\mathbb{A}})$ from $\otimes_{i=0}^{k-1}(\nu^{\frac{k-1}{2}-i}\rho)$
has a unique constituent (in the sense of [29]) $\pi$ which is a discrete
series (i.e. $\pi\in DS_{mk}$). We set then $\pi=MW(\rho,k)$. Discrete series
$\pi$ of groups $G_{n}({\mathbb{A}})$, $n\in\mathbb{N}^{*}$, are all of this
type, $k$ and $\rho$ are determined by $\pi$. The discrete series $\pi$ is
cuspidal if $k=1$ and residual if $k>1$. These results are proved in [31].
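For instance, for $m=1$ and $\rho=\chi$ a unitary Hecke character of ${\mathbb{A}}^{\times}$ trivial on ${\mathcal{F}}^{\times}$, the discrete series $MW(\chi,k)$ is the one-dimensional representation $\chi\circ\det$ of $G_{k}({\mathbb{A}})$; in particular, the trivial representation of $G_{k}({\mathbb{A}})$ is residual for $k>1$.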
The proof of the following propositions and corollary is the same as in [7],
once the local and global transfers are established without any condition at the
archimedean places. Firstly, concerning cuspidal representations of
$G^{\prime}({\mathbb{A}})$, we get:
###### Proposition 1.5.
Let $m\in\mathbb{N}^{*}$ and let $\rho\in DS_{m}$ be a cuspidal
representation. Then
(a) There exists $s_{\rho,{\mathcal{D}}}\in\mathbb{N}^{*}$ such that, for
$k\in\mathbb{N}^{*}$, $MW(\rho,k)$ is ${\mathcal{D}}$-compatible if and only
if $s_{\rho,{\mathcal{D}}}|k$. We have $s_{\rho,{\mathcal{D}}}|d$.
(b) ${{\bf G}}^{-1}(MW(\rho,s_{\rho,{\mathcal{D}}}))=\rho^{\prime}\in
DS^{\prime}_{\frac{ms_{\rho,{\mathcal{D}}}}{d}}$ is cuspidal. The map ${\bf
G}^{-1}$ sends cuspidal $\mathcal{D}$-compatible representations to cuspidal
representations.
(c) Every cuspidal representation in
$DS^{\prime}_{\frac{ms_{\rho,{\mathcal{D}}}}{d}}$ is obtained as in (b).
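For instance, if ${\mathcal{D}}$ is a quaternion algebra ($d=2$), then $s_{\rho,{\mathcal{D}}}\in\\{1,2\\}$ by (a): either $\rho$ itself is already ${\mathcal{D}}$-compatible ($s_{\rho,{\mathcal{D}}}=1$), or only the $MW(\rho,k)$ with $k$ even are; in the latter case, (b) says that ${{\bf G}}^{-1}(MW(\rho,2))$ is a cuspidal representation of $G^{\prime}_{m}({\mathbb{A}})$.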
Let us call an essentially cuspidal representation the twist of a cuspidal
representation by a real power of $\nu$. If $n_{1},n_{2},...,n_{k}$ are
positive integers such that $\sum_{i=1}^{k}n_{i}=n$, then the subgroup $L$ of
$G^{\prime}_{n}({\mathbb{A}})$ of block diagonal matrices with blocks of sizes
$n_{1}$, $n_{2}$,…,$n_{k}$ will be called a standard Levi subgroup of
$G^{\prime}_{n}({\mathbb{A}})$. We then identify $L$ with
$\times_{i=1}^{k}G^{\prime}_{n_{i}}({\mathbb{A}})$. All the definitions extend
in an obvious way to $L$. The two statements in the following Proposition
generalize respectively [31] and Theorem 4.4 in [24].
###### Proposition 1.6.
(a) Let $\rho^{\prime}\in DS^{\prime}_{m}$ be a cuspidal representation and
let $k\in\mathbb{N}^{*}$. The induced representation from
$\otimes_{i=0}^{k-1}(\nu_{\rho^{\prime}}^{\frac{k-1}{2}-i}\rho^{\prime})$ has
a unique constituent $\pi^{\prime}$ which is a discrete series, denoted by
$\pi^{\prime}=MW^{\prime}(\rho^{\prime},k)$. Every discrete series
$\pi^{\prime}$ of a group $G^{\prime}_{n}({\mathbb{A}})$,
$n\in\mathbb{N}^{*}$, is of this type, and $k$ and $\rho^{\prime}$ are
determined by $\pi^{\prime}$. The representation $\pi^{\prime}$ is cuspidal if
$k=1$, and residual if $k>1$. If $\pi^{\prime}=MW^{\prime}(\rho^{\prime},k)$,
then ${{\bf G}}(\rho^{\prime})=MW(\rho,s_{\rho,{\mathcal{D}}})$ if and only if
${{\bf G}}(\pi^{\prime})=MW(\rho,ks_{\rho,{\mathcal{D}}})$.
(b) Let $(L_{i},\rho^{\prime}_{i})$, $i=1,2$, be such that $L_{i}$ is a
standard Levi subgroup of $G^{\prime}_{n}({\mathbb{A}})$ and
$\rho^{\prime}_{i}$ is an essentially cuspidal representation of $L_{i}$ for
$i=1,2$. Fix any finite set of places $V^{\prime}$ containing the infinite
places and all the finite places $v$ where $\rho^{\prime}_{1,v}$ or
$\rho^{\prime}_{2,v}$ is ramified (i.e. has no non-zero vector fixed under
$K_{v}$). If, for all places $v\notin V^{\prime}$, the unramified subquotients
of the induced representations from $\rho^{\prime}_{i,v}$ to
$G^{\prime}_{n}({\mathcal{F}}_{v})$ are equal, then $(L_{1},\rho^{\prime}_{1})$ and
$(L_{2},\rho^{\prime}_{2})$ are conjugate.
We know by [29] that if $\pi^{\prime}$ is an automorphic representation of
$G^{\prime}_{n}$, then there exists $(L,\rho^{\prime})$ where $L$ is a
standard Levi subgroup of $G^{\prime}_{n}$ and $\rho^{\prime}$ is an
essentially cuspidal representation of $L$ such that $\pi^{\prime}$ is a
constituent of the induced representation from $\rho^{\prime}$ to
$G^{\prime}_{n}$. A corollary of point (b) of the proposition is:
###### Corollary 1.7.
$(L,\rho^{\prime})$ is unique up to conjugation.
### 1.6. Final comment and acknowledgment
Let us say a word about the length of the paper, which is explained by our
desire to give complete proofs and/or references for all the statements. For
instance, the proof of $U(3)$ for $\mathbf{GL}(n,\mathbb{H})$ in Section 10 is
quite long in itself, and requires the material about the Bruhat $G$-order
introduced in Section 6, which is not needed otherwise. We could have saved
four or five pages by referring to [46] which gives the proof of $U(3)$ for
$\mathbf{GL}(n,\mathbb{R})$ and $\mathbf{GL}(n,\mathbb{C})$, but [46] is at
the time of writing still unpublished, and our arguments using the Bruhat $G$-order could be
used to simplify the proofs in [46]. Our exposition is also intended for the
reader who is interested in comparing the archimedean and non-archimedean
theory, by making them as similar as possible. Our discussion of Vogan’s
classification in Section 12 is also longer than strictly needed, but we feel
that it is important that the relation between Vogan's and Tadić's classifications
is explained somewhere in some detail.
We would like to thank D. Vogan for answering many questions concerning his
work.
## 2\. Notation
### 2.1. Multisets
Let $X$ be a set. We denote by $M(X)$ the set of functions from $X$ to
$\mathbb{N}$ with finite support, and we consider an element $m\in M(X)$ as a
‘set with multiplicities’. Such an element $m\in M(X)$ will be typically
denoted by
$m=(x_{1},x_{2},\ldots,x_{r}).$
It is an (unordered) list of elements $x_{i}$ of $X$.
The set $M(X)$ is endowed with the structure of a monoid induced from the
one on $\mathbb{N}$ : if $m=(x_{1},\ldots,x_{r})$, $n=(y_{1},\ldots,y_{s})$
are in $M(X)$, then
$m+n=(x_{1},\ldots,x_{r},y_{1},\ldots,y_{s}).$
### 2.2. Local fields and division algebras
We will use the following notation : $F$ is a local field,
$|\,.\,|_{F}$ is the normalized absolute value on $F$ and $A$ is a central
division algebra over $F$ with $\dim_{F}(A)=d^{2}$.
If $F$ is archimedean, then either $F=\mathbb{R}$ and $A=\mathbb{R}$ or
$A=\mathbb{H}$, the algebra of quaternions, or $F=A=\mathbb{C}$.
### 2.3. $\mathbf{GL}$
For $n\in\mathbb{N}^{*}$, we set $G_{n}=\mathbf{GL}(n,A)$ and $G_{0}=\\{1\\}$.
We denote the reduced norm on $G_{n}$ by
$RN\,:\,G_{n}\rightarrow F^{\times}.$
We set :
$\nu_{n}\,:\,G_{n}\rightarrow\mathbb{R}^{\times}_{+},\qquad g\mapsto|RN(g)|_{F}.$
When the value of $n$ is not relevant to the discussion, we will simply put
$G=G_{n}$ and $\nu=\nu_{n}$.
###### Remark 2.1.
If $A=F$, the reduced norm is just the determinant.
The character $\nu$ of $G$ is unramified and in fact the group of unramified
characters of $G$ is
$\mathcal{X}(G)=\\{\nu^{s},\,s\in\mathbb{C}\\}.$
If $G$ is one of the groups $G_{n}$, or more generally, the group of rational
points of any reductive algebraic connected group defined over $F$, we denote
by $\mathcal{M}(G)$ the category of smooth representations of $G$ (in the non
archimedean case), or the category of Harish-Chandra modules (in the
archimedean case), with respect to a fixed maximal compact subgroup $K$ of
$G$. For $\mathbf{GL}(n,\mathbb{R})$, $\mathbf{GL}(n,\mathbb{C})$ and
$\mathbf{GL}(n,\mathbb{H})$, these maximal compact subgroups are respectively
chosen to be $\mathbf{O}(n)$, $\mathbf{U}(n)$ and $\mathbf{Sp}(n)$, embedded
in the standard way. Then $\mathcal{R}(G)$ denotes the Grothendieck group of
the category of finite length representations in $\mathcal{M}(G)$. This is the
free $\mathbb{Z}$-module with basis $\mathbf{Irr}(G)$, the set of equivalence
classes of irreducible representations in $\mathcal{M}(G)$. If
$\pi\in\mathcal{M}(G)$, of finite length, we will again denote by $\pi$ its
image in $\mathcal{R}(G)$. When confusion may occur, we will state precisely
if we consider $\pi$ as a representation or as an element in $\mathcal{R}(G)$.
Set
$\mathbf{Irr}_{n}=\mathbf{Irr}(G_{n}),\qquad\mathbf{Irr}=\coprod_{n\in\mathbb{N}^{\times}}\mathbf{Irr}_{n},\quad\mathcal{R}=\bigoplus_{n\in\mathbb{N}}\mathcal{R}(G_{n}).$
If $\tau\in\mathcal{M}(G_{n})$ or $\mathcal{R}(G_{n})$, we set $\deg\tau=n$.
### 2.4. Standard parabolic and Levi subgroups
Let $n\in\mathbb{N}$ and let $\sum_{i=1}^{s}n_{i}=n$ be a partition of $n$.
The group
$\prod_{i=1}^{s}G_{n_{i}}$
is identified with the subgroup of $G_{n}$ of block diagonal matrices of
respective sizes $n_{1},\ldots,n_{s}$. Let us denote this subgroup by $G_{(n_{1},\ldots,n_{s})}$
and by $P_{(n_{1},\ldots,n_{s})}$ (resp.
$\bar{P}_{(n_{1},\ldots,n_{s})}$) the parabolic subgroup of $G_{n}$ containing
$G_{(n_{1},\ldots,n_{s})}$ and the Borel subgroup of invertible upper
triangular matrices (resp. lower triangular). The subgroup
$G_{(n_{1},\ldots,n_{s})}$ is a Levi factor of the standard parabolic subgroup
$P_{(n_{1},\ldots,n_{s})}$.
In this setting, we denote by $i_{(n_{1},\ldots,n_{s})}$ (resp.
$\underline{i}_{(n_{1},\ldots,n_{s})}$) the functor of normalized parabolic
induction from $\mathcal{M}(G_{(n_{1},\ldots,n_{s})})$ to $\mathcal{M}(G_{n})$
with respect to the parabolic subgroup $P_{(n_{1},\ldots,n_{s})}$ (resp.
$\bar{P}_{(n_{1},\ldots,n_{s})}$).
###### Definition 2.2.
Let $\pi_{1}\in\mathcal{M}(G_{n_{1}})$ and $\pi_{2}\in\mathcal{M}(G_{n_{2}})$,
both of finite length. We can then form the induced representation :
$\pi_{1}\times\pi_{2}:=i_{(n_{1},n_{2})}(\pi_{1}\otimes\pi_{2}).$
We still denote by $\pi_{1}\times\pi_{2}$ the image of
$i_{(n_{1},n_{2})}(\pi_{1}\otimes\pi_{2})$ in the Grothendieck group
$\mathcal{R}_{n_{1}+n_{2}}$. This extends linearly to a product
$\times:\mathcal{R}\times\mathcal{R}\rightarrow\mathcal{R}.$
###### Remark 2.3.
Again we warn the reader that it is important to know when we consider
$\pi_{1}\times\pi_{2}$ as a representation or an element in $\mathcal{R}$. For
instance $\pi_{1}\times\pi_{2}=\pi_{2}\times\pi_{1}$ in $\mathcal{R}$ (see
below), but $i_{(n_{1},n_{2})}(\pi_{1}\otimes\pi_{2})$ is not isomorphic to
$i_{(n_{1},n_{2})}(\pi_{2}\otimes\pi_{1})$ in general.
###### Proposition 2.4.
The ring $(\mathcal{R},\times)$ is a commutative graded ring (graded by degree). Its identity is the
unique element in $\mathbf{Irr}_{0}$.
## 3\. Langlands classification
We recall how to combine the Langlands classification of $\mathbf{Irr}$ in terms
of irreducible essentially tempered representations with the fact that, for the
groups $G_{n}$, tempered representations are fully induced from irreducible
square integrable modulo center representations, in order to obtain a
classification of $\mathbf{Irr}$ in terms of irreducible essentially square
integrable modulo center representations.
Let us denote respectively by
$D_{n}^{u}\subset\mathbf{Irr}_{n},\qquad D_{n}\subset\mathbf{Irr}_{n},$
the set of equivalence classes of irreducible, square integrable modulo center
(respectively essentially square integrable modulo center) representations of
$G_{n}$ and set
$D^{u}=\coprod_{n\in\mathbb{N}^{*}}D_{n}^{u},\qquad
D=\coprod_{n\in\mathbb{N}^{*}}D_{n}.$
Similarly,
$T^{u}_{n}\subset\mathbf{Irr}_{n},\qquad T_{n}\subset\mathbf{Irr}_{n},$
denote respectively the sets of equivalence classes of irreducible tempered
representations of $G_{n}$ and equivalence classes of irreducible essentially
tempered representations of $G_{n}$. Set
$T^{u}=\coprod_{n\in\mathbb{N}^{*}}T^{u}_{n},\qquad
T=\coprod_{n\in\mathbb{N}^{*}}T_{n}.$
For all $\tau\in T$, there exists a unique $e(\tau)\in\mathbb{R}$ and a unique
$\tau^{u}\in T^{u}$ such that
$\tau=\nu^{e(\tau)}\tau^{u}.$
###### Theorem 3.1.
Let $d=(\delta_{1},\ldots,\delta_{l})\in M(D^{u})$. Then
$\delta_{1}\times\delta_{2}\times\ldots\times\delta_{l}$
is irreducible, therefore in $T^{u}$. This defines a one-to-one correspondence
between $M(D^{u})$ and $T^{u}$.
This is due to Jacquet and Zelevinsky in the case $A=F$ non archimedean ([21]
or [52]). For a non archimedean division algebra, this is established in [16].
In the archimedean case, reducibility of representations induced from square integrable
ones is well understood in terms of $R$-groups (Knapp-Zuckerman
[27]), and for the groups $G_{n}$, the $R$-groups are trivial.
###### Definition 3.2.
Let $t=(\tau_{1},\ldots,\tau_{l})\in M(T)$. We say that $t$ is written in a
standard order if
$e(\tau_{1})\geq\ldots\geq e(\tau_{l}).$
###### Theorem 3.3.
Let $d=(d_{1},\ldots,d_{l})\in M(D)$ be written in a standard order, i.e.
$e(d_{1})\geq e(d_{2})\geq\ldots\geq e(d_{l}).$
Then :
— $(i)$ the representation :
$\lambda(d)=d_{1}\times\ldots\times d_{l}$
has a unique irreducible quotient $\mathbf{Lg}(d)$, appearing with
multiplicity one in a Jordan-Hölder sequence of $\lambda(d)$. It is also the
unique irreducible subrepresentation of
$d_{l}\times d_{l-1}\times\ldots\times d_{2}\times d_{1}.$
— $(ii)$ Up to a multiplicative scalar, there is a unique intertwining
operator
$J:d_{1}\times\ldots\times d_{l}\longrightarrow d_{l}\times\ldots\times
d_{1}.$
We have then $\mathbf{Lg}(d)\simeq\lambda(d)/\ker J\simeq\mathrm{Im}\,J$.
— $(iii)$ The map
$d\mapsto\mathbf{Lg}(d)$
is a bijection between $M(D)$ and $\mathbf{Irr}$.
For a proof in the non archimedean case, the reader may consult [35].
Representations of the form $\lambda(d)=d_{1}\times\ldots\times d_{l}$ with
$d=(d_{1},\ldots,d_{l})\in M(D)$ written in a standard order are called
standard representations.
###### Remark 3.4.
If $d$ is a multiset of representations in $\mathbf{Irr}$, we denote by $\deg
d$ the sum of the degrees of representations in $d$. Let $M(D)_{n}$ be the
subset of $M(D)$ of multisets of degree $n$. Then the theorem gives a one-to-
one correspondence between $M(D)_{n}$ and $\mathbf{Irr}_{n}$.
###### Proposition 3.5.
The ring $\mathcal{R}$ is isomorphic to $\mathbb{Z}[D]$, the ring of polynomials in the
variables $X_{d}$, $d\in D$, with coefficients in $\mathbb{Z}$; equivalently,
$\\{\lambda(a)\\}_{a\in M(D)}$ is a $\mathbb{Z}$-basis of $\mathcal{R}$.
See [52], Prop. 8.5 for a proof.
We give some easy consequences of the proposition :
###### Corollary 3.6.
$(i)$ The ring $\mathcal{R}$ is a factorial domain.
— $(ii)$ If $\delta\in D$, then $[\delta]$ is prime in $\mathcal{R}$.
— $(iii)$ If $\pi\in\mathcal{R}$ is homogeneous and
$\pi=\sigma_{1}\times\sigma_{2}$ in $\mathcal{R}$, then $\sigma_{1}$ and
$\sigma_{2}$ are homogeneous.
— $(iv)$ The group of invertible elements in $\mathcal{R}$ is
$\\{\pm\mathbf{Irr}_{0}\\}.$
## 4\. Jacquet-Langlands correspondence
In this section, we fix a central division algebra $A$ of dimension $d^{2}$
over the local field $F$. We recall the Jacquet-Langlands correspondence
between $\mathbf{GL}(n,A)$ and $\mathbf{GL}(nd,F)$. Since we need
simultaneously both $F$ and $A$ in the notation, we set $G_{n}^{A}$,
$G_{n}^{F}$ respectively for $\mathbf{GL}(n,A)$, $\mathbf{GL}(n,F)$, and
similarly with other notation e.g. $\mathcal{R}(G^{A}_{n})$ or
$\mathcal{R}(G^{F}_{n})$, $D_{n}^{A}$ or $D_{n}^{F}$, etc.
There is a standard way of defining the determinant and the characteristic
polynomial for elements of $G_{n}$, in spite of $A$ being non commutative (see
for example [33] Section 16), and the reduced norm $RN$ introduced above is
given, up to sign, by the constant term of the characteristic polynomial. If $g\in
G_{n}$, then the characteristic polynomial of $g$ has coefficients in $F$, it
is monic and has degree $nd$. If $g\in G_{n}$ for some $n$, we say $g$ is
regular semisimple if the characteristic polynomial of $g$ has distinct roots
in an algebraic closure of $F$.
If $\pi\in\mathcal{R}(G_{n})$, then we let $\Theta_{\pi}$ denote the character
of $\pi$, viewed as a conjugation-invariant function defined on the set of regular
semisimple elements of $G_{n}$ (locally constant in the non archimedean case,
analytic in the archimedean case).
We say that $g^{\prime}\in G_{n}^{A}$ corresponds to $g\in G_{nd}^{F}$ if $g$
and $g^{\prime}$ are regular semisimple and have the same characteristic
polynomial, and we write then $g^{\prime}\leftrightarrow g$. Notice that if
$g^{\prime}\leftrightarrow g$ and if $g_{1}^{\prime}$ and $g_{1}$ are
respectively conjugate to $g^{\prime}$ and $g$, then
$g^{\prime}_{1}\leftrightarrow g_{1}$. In other words,
$\leftrightarrow$ is really a correspondence between conjugacy classes.
###### Theorem 4.1.
There is a unique bijection $\mathbf{C}:D_{nd}^{F}\rightarrow D_{n}^{A}$ such
that for all $\pi\in D^{F}_{nd}$ we have
$\Theta_{\pi}(g)=(-1)^{nd-n}\Theta_{\mathbf{C}(\pi)}(g^{\prime})$
for all $g\in G^{F}_{nd}$ and $g^{\prime}\in G^{A}_{n}$ such that
$g^{\prime}\leftrightarrow g$.
For the proof, see [16] if the characteristic of the base field $F$ is zero
and [4] for the non zero characteristic case. In the archimedean case, see
Sections 9.2 and 9.3, and Remark 9.6 for more details about this
correspondence ([23],[16]).
We identify the centers of $G^{F}_{nd}$ and $G^{A}_{n}$ via the canonical
isomorphisms with $F^{\times}$. Then the correspondence $\mathbf{C}$ preserves
central characters, so in particular $\sigma\in D^{F}_{nd}$ is unitary if and only if
$\mathbf{C}(\sigma)$ is.
The correspondence $\mathbf{C}$ may be extended in a natural way to a
correspondence $\mathbf{LJ}$ between Grothendieck groups :
\- If $\sigma\in D^{F}_{nd}$, viewed as an element in
$\mathcal{R}(G^{F}_{nd})$, we set
$\mathbf{LJ}(\sigma)=(-1)^{nd-n}{\bf C}(\sigma),$
viewed as an element in $\mathcal{R}(G^{A}_{n})$.
\- If $\sigma\in D^{F}_{r}$, where $r$ is not divisible by $d$, we set
$\mathbf{LJ}(\sigma)=0$.
\- Since $\mathcal{R}^{F}$ is a polynomial algebra in the variables $d\in
D^{F}$, one can extend $\mathbf{LJ}$ in a unique way to an algebra morphism
between $\mathcal{R}^{F}$ and $\mathcal{R}^{A}$. It is clear that
$\mathbf{LJ}$ is surjective.
The fact that $\mathbf{LJ}$ is a ring morphism means that “it commutes with
parabolic induction”. Let us describe how to compute (theoretically)
$\mathbf{LJ}(\pi)$, $\pi\in\mathcal{R}^{F}$. Since $\\{\lambda(a)\\}_{a\in
M(D^{F})}$ is a basis of $\mathcal{R}^{F}$, we first write $\pi$ in this basis
as
$\pi=\sum_{a\in M(D^{F})}M(a,\pi)\lambda(a),$
with $M(a,\pi)\in\mathbb{Z}$ (see Section 6). Since $\mathbf{LJ}$ is linear,
$\mathbf{LJ}(\pi)=\sum_{a\in M(D^{F})}M(a,\pi)\;\mathbf{LJ}(\lambda(a)),$
so it remains to describe $\mathbf{LJ}(\lambda(a))$. If $a=(d_{1},\ldots
d_{k})$, then
$\lambda(a)=d_{1}\times\ldots\times d_{k}$
(since we consider $\lambda(a)$ as an element in $\mathcal{R}^{F}$, the order
of the $d_{j}$ is not important). Since $\mathbf{LJ}$ is an algebra morphism
$\mathbf{LJ}(\lambda(a))=\mathbf{LJ}(d_{1})\times\ldots\times\mathbf{LJ}(d_{k}).$
If $d$ does not divide one of the $\deg d_{i}$, this is $0$, and if $d$
divides all the $\deg d_{i}$, setting $\sum_{i}\deg d_{i}=nd$, we get
$\prod_{i=1}^{k}(-1)^{\deg d_{i}-\deg d_{i}/d}{\bf
C}(d_{1})\times\ldots\times{\bf C}(d_{k})=(-1)^{nd-n}{\bf
C}(d_{1})\times\ldots\times{\bf C}(d_{k}).$
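As a consistency check on the signs: take $d=2$ and $a=(d_{1},d_{2})$ with $\deg d_{1}=\deg d_{2}=2$, so that $\lambda(a)\in\mathcal{R}^{F}$ has degree $nd=4$ and $n=2$. Each factor contributes $\mathbf{LJ}(d_{i})=(-1)^{2-1}{\bf C}(d_{i})=-{\bf C}(d_{i})$, hence $\mathbf{LJ}(\lambda(a))=(-1)^{2}\,{\bf C}(d_{1})\times{\bf C}(d_{2})={\bf C}(d_{1})\times{\bf C}(d_{2})$, in agreement with the global sign $(-1)^{nd-n}=(-1)^{2}=1$.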
## 5\. Support and infinitesimal character
The goal of this section is again to introduce some notation and to recall
well known results, but we want to adopt a uniform terminology for archimedean
and non archimedean cases. In the non archimedean case, some authors, by
analogy with the archimedean case, call ‘infinitesimal character’ the cuspidal
support of a representation (a multiset of irreducible supercuspidal
representations). We take the opposite view of considering infinitesimal
characters in the archimedean case as multisets of complex numbers.
### 5.1. Non archimedean case
We start with the case $F$ non archimedean. We denote by $C$ (resp. $C^{u}$)
the subset of $\mathbf{Irr}$ of supercuspidal representations (resp. unitary
supercuspidal, i.e. such that $e(\rho)=0$).
For all $\pi\in\mathbf{Irr}$, there exist $\rho_{1},\ldots,\rho_{n}\in C$ such
that $\pi$ is a subquotient of
$\rho_{1}\times\rho_{2}\times\ldots\times\rho_{n}$. The multiset
$(\rho_{1},\ldots,\rho_{n})\in M(C)$ is uniquely determined by $\pi$, and we
denote it by $\mathbf{Supp}\,(\pi)$. It is called the cuspidal support of
$\pi$. When $\pi$ is a finite length representation whose irreducible
subquotients all have the same cuspidal support, we also denote this common support by
$\mathbf{Supp}\,(\pi)$. If $\tau=\pi_{1}\times\pi_{2}$, with
$\pi_{1},\pi_{2}\in\mathbf{Irr}$ we have
(5.1)
$\mathbf{Supp}\,(\tau)=\mathbf{Supp}\,(\pi_{1})+\mathbf{Supp}\,(\pi_{2})$
For all $\omega\in M(C)$, denote by $\mathbf{Irr}_{\omega}$ the set of
$\pi\in\mathbf{Irr}$ whose cuspidal support is $\omega$. We obtain a
decomposition :
(5.2) $\mathbf{Irr}=\coprod_{\omega\in M(C)}\mathbf{Irr}_{\omega}.$
Set
$\mathcal{R}_{\omega}=\bigoplus_{\pi\in\mathbf{Irr}_{\omega}}\mathbb{Z}\,\pi.$
Then
(5.3) $\mathcal{R}=\bigoplus_{\omega\in M(C)}\mathcal{R}_{\omega}$
is a grading of $\mathcal{R}$ by $M(C)$.
We recall the following well known result.
###### Proposition 5.1.
Let $\omega\in M(C)$. Then $\mathbf{Irr}_{\omega}$ is finite.
### 5.2. Archimedean case
Denote by $\mathfrak{g}_{n}$ the complexification of the Lie algebra of
$G_{n}$, $\mathfrak{U}_{n}=\mathfrak{U}(\mathfrak{g}_{n})$ its enveloping
algebra, and $\mathfrak{Z}_{n}$ the center of the latter. Let
$\mathfrak{h}_{n}$ be a Cartan subalgebra of $\mathfrak{g}_{n}$, and
$W_{n}=W(\mathfrak{g}_{n},\mathfrak{h}_{n})$ its Weyl group. Harish-Chandra
has defined an algebra isomorphism from $\mathfrak{Z}_{n}$ to the Weyl group
invariants in the symmetric algebra over $\mathfrak{h}_{n}$ :
$\mathrm{HC}_{n}\,:\,\mathfrak{Z}_{n}\longrightarrow
S(\mathfrak{h}_{n})^{W_{n}}.$
Using this isomorphism, every character of $\mathfrak{Z}_{n}$ (i.e. a morphism
of algebras with unit $\mathfrak{Z}_{n}\rightarrow\mathbb{C}$) is identified
with a character of $S(\mathfrak{h}_{n})^{W_{n}}$. Such characters are given
by orbits of $W_{n}$ in $\mathfrak{h}_{n}^{*}$, by evaluation at a point of
the orbit.
A representation (recall that in the archimedean case, this means a Harish-
Chandra module) admits an infinitesimal character if the center of the
enveloping algebra acts on it by scalars. Irreducible representations admit an
infinitesimal character. For all $\lambda\in\mathfrak{h}_{n}^{*}$, let us
denote by $\mathbf{Irr}_{\lambda}$ the set of $\pi\in\mathbf{Irr}$ whose
infinitesimal character is given by $\lambda$.
We are now going to identify infinitesimal characters with multisets of
complex numbers.
— $A=\mathbb{R}$. In this case, $\mathfrak{g}_{n}=M_{n}(\mathbb{C})$ and we
can choose $\mathfrak{h}_{n}$ to be the space of diagonal matrices, identified
with $\mathbb{C}^{n}$. Its dual space is also identified with $\mathbb{C}^{n}$
by the canonical duality
$\mathbb{C}^{n}\times\mathbb{C}^{n}\rightarrow\mathbb{C},\quad((x_{1},\ldots,x_{n}),(y_{1},\ldots
y_{n}))\mapsto\sum_{i=1}^{n}x_{i}y_{i}.$
The Weyl group $W_{n}$ is then identified with the symmetric group
$\mathfrak{S}_{n}$, acting on $\mathbb{C}^{n}$ by permuting coordinates. Thus,
an infinitesimal character for $G_{n}$ is given by a multiset of $n$ complex
numbers.
— $A=\mathbb{C}$. In this case, $\mathfrak{g}_{n}=M_{n}(\mathbb{C})\oplus
M_{n}(\mathbb{C})$, and we can choose $\mathfrak{h}_{n}$ to be the space of
pairs of diagonal matrices, identified with
$\mathbb{C}^{n}\times\mathbb{C}^{n}$. Its dual space is also identified with
$\mathbb{C}^{n}\times\mathbb{C}^{n}$ as above. The Weyl group is then
identified with $\mathfrak{S}_{n}\times\mathfrak{S}_{n}$, acting on
$\mathfrak{h}_{n}^{*}\simeq\mathbb{C}^{n}\times\mathbb{C}^{n}$ by permuting
coordinates. Thus, an infinitesimal character for $G_{n}$ is given by a pair
of multisets of $n$ complex numbers.
— $A=\mathbb{H}$. The group $G_{n}$ is a real form of
$\mathbf{GL}(2n,\mathbb{C})$, so $\mathfrak{g}_{n}=M_{2n}(\mathbb{C})$. The
discussion is then the same as for $A=\mathbb{R}$, with $2n$ replacing $n$.
By analogy with the non archimedean case, we denote by $M(C)$ the set of
multisets (or pairs of multisets if $A=\mathbb{C}$) described above.
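For instance, for $A=\mathbb{R}$ and $n=1$, the character $x\mapsto|x|^{\alpha}\mathrm{sgn}(x)^{\epsilon}$ of $G_{1}=\mathbb{R}^{\times}$ ($\alpha\in\mathbb{C}$, $\epsilon\in\\{0,1\\}$) has infinitesimal character given by the singleton multiset $(\alpha)$: the Weyl group $W_{1}$ is trivial and the standard generator of $\mathfrak{h}_{1}\simeq\mathbb{C}$ acts by the scalar $\alpha$.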
###### Definition 5.2.
If $\pi\in\mathbf{Irr}_{n}$, we set
$\mathbf{Supp}\,(\pi)=\omega$
where $\omega\in M(C)$ is the multiset (or pair of multisets of the same
cardinality if $A=\mathbb{C}$) of complex numbers defined by the infinitesimal
character of $\pi$. We say that $\omega$ is the support of $\pi$. When $\pi$ is a finite
length representation whose irreducible subquotients all have the same support,
we also denote this common support
by $\mathbf{Supp}\,(\pi)$. If $\pi\in\mathbf{Irr}$, $\pi=\mathbf{Lg}(a)$ for
$a\in M(D)$, we set
$\mathbf{Supp}\,(a):=\mathbf{Supp}\,(\pi).$
We denote by $M(D)_{\omega}$ the set of $a\in M(D)$ with support $\omega$.
###### Proposition 5.3.
The results of Section 5.1 remain valid in the archimedean case.
By this, we mean (5.1), (5.2), (5.3) and Prop. 5.1 above.
## 6\. Bruhat $G$-order
We continue with the notation of the previous section. In the sequel, we will
use a partial order $\leq$ on $M(D)$, called the Bruhat $G$-order, obtained from
partial orders on each $M(D)_{\omega}$, $\omega\in M(C)$, whose main properties
are described in the following proposition:
###### Proposition 6.1.
Let $a\in M(D)$. Then the decomposition of $\lambda(a)$ in the basis
$\\{\mathbf{Lg}(b)\\}_{b\in M(D)}$ of $\mathcal{R}$ is of the form
$\lambda(a)=\sum_{b\leq a}m(b,a)\;\mathbf{Lg}(b),$
where the $m(b,a)$ are non negative integers. The decomposition of
$\mathbf{Lg}(a)$ in the basis $\\{\lambda(b)\\}_{b\in M(D)}$ of $\mathcal{R}$
is of the form
$\mathbf{Lg}(a)=\sum_{b\leq a}M(b,a)\;\lambda(b),$
where the $M(b,a)$ are integers. In particular, all factors $\mathbf{Lg}(b)$
(resp. $\lambda(b)$) appearing in the decomposition of $\lambda(a)$ (resp.
$\mathbf{Lg}(a)$) have same support. Furthermore, $m(a,a)=M(a,a)=1$.
In the non archimedean case, Bruhat $G$-order is described by Zelevinsky [52]
($A=F$) and Tadić [43] in terms of linked segments. On arbitrary real
reductive groups, the Bruhat $G$-order is defined by Vogan on a different set of
parameters, in terms of integral roots (see [50], def. 12.12). In all cases,
Bruhat $G$-order is constructed by defining first elementary operations,
starting from an element $a\in M(D)$ and obtaining another element
$a^{\prime}\in M(D)$. This is written
$a^{\prime}\prec a.$
The Bruhat $G$-order is then generated by $\prec$ (in the case $A=\mathbb{R}$,
the situation is a little more complicated). Another important property of
Bruhat $G$-order is the following. One can define on all $M(D)_{\omega}$ a
length function :
$l:\,M(D)_{\omega}\rightarrow\mathbb{N}$
such that if $b\leq a$, then $l(b)\leq l(a)$, if $b\leq a$ and $l(b)=l(a)$
then $b=a$, and finally if $b\leq a$ and $l(b)=l(a)-1$ then $b\prec a$. In
particular, if $b\prec a$, there is no $c\in M(D)_{\omega}$ such that $b\leq
c<a$ other than $c=b$.
We have then
###### Proposition 6.2.
Let $a,b\in M(D)_{\omega}$ such that $b\prec a$. Then $m(b,a)\neq 0$ and
$M(b,a)\neq 0$.
Proof. The first assertion follows from the recursion formulas for Kazhdan-
Lusztig-Vogan polynomials in the archimedean case [47]. We even have
$m(b,a)=1$ in this case. It is established by Zelevinsky [52] or Tadić [43] in
the non archimedean case, and the second assertion follows using Prop. 6.1. ∎
## 7\. Unitary dual
### 7.1. Representations $u(\delta,n)$ and $\pi(\delta,n;\alpha)$
Let $\delta\in D$. Then $\delta\times\delta$ is irreducible. Indeed, if
$\delta\in D^{u}$, this is Theorem 3.1, and the general case follows by tensoring with
an unramified character. Consider $\delta\times\nu^{\alpha}\delta$, with
$\alpha>0$. There exists a smallest $\alpha_{0}>0$ such that
$\delta\times\nu^{\alpha_{0}}\delta$ is reducible.
###### Definition 7.1.
Let $\delta\in D$. Set $\nu_{\delta}=\nu^{\alpha_{0}}$, where $\alpha_{0}>0$
is the smallest real number $\alpha>0$ such that
$\delta\times\nu^{\alpha}\delta$ is reducible.
For all $\delta\in D$, and for all $n\in\mathbb{N}^{*}$ we set
(7.1)
$a(\delta,n)=(\nu_{\delta}^{\frac{n-1}{2}}\delta,\nu_{\delta}^{\frac{n-1}{2}-1}\delta,\ldots,\nu_{\delta}^{-\frac{n-1}{2}}\delta)\in
M(D),$ (7.2) $u(\delta,n)=\mathbf{Lg}(a(\delta,n)).$
For all $\delta\in D$, for all $n\in\mathbb{N}^{*}$, and for all
$\alpha\in\mathbb{R}$, set
(7.3)
$\pi(\delta,n;\alpha)=\nu_{\delta}^{\alpha}u(\delta,n)\times\nu_{\delta}^{-\alpha}u(\delta,n).$
### 7.2. Tadić hypotheses $U(0),\ldots,U(4)$ and classification of the
unitary dual
We recall Tadić’s classification of the unitary dual of the groups $G_{n}$.
For a fixed division algebra $A$, consider the following hypotheses :
$U(0)$ : if $\sigma,\tau\in\mathbf{Irr}^{u}$, then
$\sigma\times\tau\in\mathbf{Irr}^{u}$.
$U(1)$ : if $\delta\in D^{u}$ and $n\in\mathbb{N}^{*}$, then
$u(\delta,n)\in\mathbf{Irr}^{u}$.
$U(2)$ : if $\delta\in D^{u}$, $n\in\mathbb{N}^{*}$ and $\alpha\in]0,1/2[$,
then $\pi(\delta,n;\alpha)\in\mathbf{Irr}^{u}$.
$U(3)$ : if $\delta\in D$, $u(\delta,n)$ is prime in $\mathcal{R}$.
$U(4)$ : if $a,b\in M(D)$, then $\mathbf{Lg}(a)\times\mathbf{Lg}(b)$ contains $\mathbf{Lg}(a+b)$ as a
subquotient.
Suppose Tadić’s hypotheses are satisfied for $A$. We then have the following :
###### Theorem 7.2.
The set $\mathbf{Irr}^{u}$ is endowed with the structure of a free commutative
monoid, with product $(\sigma,\tau)\mapsto\sigma\times\tau$ and with basis
$\mathcal{B}=\\{u(\delta,n),\,\pi(\delta,n;\alpha)\,|\,\delta\in
D^{u},n\in\mathbb{N}^{*},\alpha\in]0,1/2[\;\\}.$
More explicitly, if $\pi_{1},\ldots,\pi_{k}\in\mathcal{B}$, then
$\pi_{1}\times\ldots\times\pi_{k}\in\mathbf{Irr}^{u}$ and if
$\pi\in\mathbf{Irr}^{u}$, there exists $\pi_{1},\ldots,\pi_{k}\in\mathcal{B}$,
unique up to permutation, such that $\pi=\pi_{1}\times\ldots\times\pi_{k}$.
This is proved in [44], prop 2.1. The proof is formal.
Let us first notice that $U(4)$ is a quite simple consequence of Langlands
classification, established by Tadić for all $A$ in [45] (the proof works also
for archimedean $A$, see [46]). It is also easy to see that $U(2)$ can be
deduced from $U(0)$ and $U(1)$ by the following simple principle : if
$(\pi_{t})_{t\in I}$, is a family of hermitian representations in
$\mathcal{M}(G)$, where $I$ is an open interval containing $0$, continuous in
a sense that we won’t make precise here, and if $\pi_{0}$ is unitary and
irreducible, then $\pi_{t}$ is unitary on the largest interval $J\subset I$
containing $0$ where $\pi_{t}$ is irreducible (the signature of the hermitian
form can change only when crossing reducibility points). Representations
$\pi(\delta,n;\alpha)$, $\alpha\in\mathbb{R}$ are hermitian,
$\pi(\delta,n;0)=u(\delta,n)\times u(\delta,n)$
is unitary and irreducible ($U(0)$ and $U(1)$), and $\pi(\delta,n;\alpha)$ is
irreducible for $\alpha\in]-\frac{1}{2},\frac{1}{2}[$. See [46] and the
references given there for details.
For the remaining $U(0)$, $U(1)$ and $U(3)$, the situation is more
complicated.
— $U(3)$ is proved by Tadić in the non archimedean case in [41], and for
$A=\mathbb{R},\mathbb{C}$ in [46]. We give below the proof for $A=\mathbb{H}$,
following Tadić’s ideas.
— $U(1)$ is proved by Tadić in the non archimedean case in [41] for the field
case $A=F$. The generalization to all division algebras over $F$ is given by
the authors in [9], using unitarity of some distinguished representations
closely related to the $u(\delta,n)$, established by the first named author in
[6] by global methods. For $A=\mathbb{C}$, $u(\delta,n)$ is a unitary
character, so the statement is obvious. For $A=\mathbb{R}$, $U(1)$ was first
proved by Speh in [40] using global methods. It can also be proved using
Vogan’s results on cohomological induction (see details below). Finally, for
$A=\mathbb{H}$, $U(1)$ can be established using again the general results on
cohomological induction, and the argument in [9]. A more detailed discussion
of the archimedean case is given in Section 11.
— $U(0)$ is by far the most delicate point. For $A=F$ non archimedean, it is
established by Bernstein in [12], using reduction to the mirabolic subgroup.
For $A=\mathbb{R}$ or $\mathbb{C}$, the same approach can be used, but some
serious technical difficulties remained unsolved until the paper of Baruch
[11]. For $A$ a general non archimedean division algebra, $U(0)$ is
established by V. Sécherre [38] using his deep results on Bushnell-Kutzko’s
type theory for the groups $\mathbf{GL}(n,A)$, which give Hecke algebra
isomorphisms and allow one to reduce the problem to the field case (the proof
also uses in a crucial way the Barbasch-Moy results on unitarity for Hecke algebra
representations [10]). In the case $A=\mathbb{H}$, there is to our knowledge
no written reference, but it is well known to some experts that this can be
deduced from Vogan’s classification of the unitary dual of $G_{n}$ in the
archimedean case ([51]). Vogan’s classification is conceptually very different
from Tadić’s classification. It has its own merits, but the final result is
quite difficult to state and to understand, since it uses sophisticated
concepts and techniques of the theory of real reductive groups. So, for people
interested mainly in applications, to automorphic forms for instance, Tadić’s
classification is much more convenient. In the literature, before Baruch’s
paper was published, one can often find the statement of Tadić’s
classification, with reference to Vogan’s paper [51] for the proof. It might
not be totally obvious for non experts to derive Tadić’s classification from
Vogan’s. We take this opportunity to explain in this paper (see §12 below)
some aspects of Vogan’s classification’s, how it is related to Tadić’s
classification and how to deduce $U(0)$ from it. Of course, an independent
proof of $U(0)$ would be highly desirable in this case. It would be even
better to have a uniform proof of $U(0)$ for all cases, but for this, new
ideas are clearly needed.
— All these results remain true if the characteristic of $F$ is positive (as
explained in [8]).
## 8\. Classification of generic irreducible unitary representations
From the classification of the unitary dual of $\mathbf{GL}(n,\mathbb{R})$
given above and the classification of irreducible generic representations of
real reductive groups ([48], [28]), we deduce the classification of generic
irreducible unitary representations of $\mathbf{GL}(n,\mathbb{R})$. Let us
first recall that Vogan gives a classification of ’large’ irreducible
representations of a quasi-split real reductive group (i.e. having maximal
Gelfand-Kirillov dimension), that Kostant shows that such a group admits
generic representations if and only if the group is quasi-split, and that
“generic” is equivalent to “large”. Therefore, Vogan’s result can be stated as
follows :
###### Theorem 8.1.
Any generic irreducible representation of any quasisplit real reductive group
is irreducibly induced from a generic limit of discrete series, and
conversely, a representation which is irreducibly induced from a generic limit
of discrete series is generic.
Let us notice that in the above theorem, one can replace “limit of discrete
series” by “essentially tempered”, because according to [27], any tempered
representation is fully induced from a limit of discrete series. In the case
of $\mathbf{GL}(n,\mathbb{R})$ all discrete series are generic, so by Theorem
3.1, all essentially tempered representations are generic.
Let us denote by $\mathbf{Irr}^{u}_{gen}$ the subset of $\mathbf{Irr}^{u}$
consisting of generic representations. We have then the following
specialization of theorem 7.2.
###### Theorem 8.2.
The set $\mathbf{Irr}^{u}_{gen}$ is endowed with the structure of a free
commutative monoid, with product $(\sigma,\tau)\mapsto\sigma\times\tau$ and
with basis
$\mathcal{B}_{gen}=\\{u(\delta,1),\,\pi(\delta,1;\alpha)\,|\,\delta\in
D^{u},\alpha\in]0,1/2[\;\\}.$
More explicitly, if $\pi_{1},\ldots,\pi_{k}\in\mathcal{B}_{gen}$, then
$\pi_{1}\times\ldots\times\pi_{k}\in\mathbf{Irr}^{u}_{gen}$ and if
$\pi\in\mathbf{Irr}^{u}_{gen}$, there exists
$\pi_{1},\ldots,\pi_{k}\in\mathcal{B}_{gen}$, unique up to permutation, such
that $\pi=\pi_{1}\times\ldots\times\pi_{k}$.
## 9\. Classification of discrete series : archimedean case
In this section, we describe explicitly square integrable modulo center
irreducible representations of $G_{n}$ in the archimedean case. In the case
$A=\mathbb{H}$, we also give details about supports, Bruhat $G$-order… Since
the Bruhat $G$-order is defined by Vogan on a set of parameters for
irreducible representations consisting of (conjugacy classes of) characters of
Cartan subgroups, we also describe the bijections between the various sets of
parameters.
### 9.1. $A=\mathbb{C}$
There are square integrable modulo center irreducible representations of
$\mathbf{GL}(n,\mathbb{C})$ only when $n=1$. Thus
$D=D_{1}=\mathbf{Irr}_{1}.$
An element $\delta\in D$ is then a character
$\delta:\mathbf{GL}(1,\mathbb{C})\simeq\mathbb{C}^{\times}\rightarrow\mathbb{C}^{\times}$
Let $\delta\in D$. Then there exists a unique $n\in\mathbb{Z}$ and a unique
$\beta\in\mathbb{C}$ such that
$\delta(z)=|z|^{2\beta}\left(\frac{z}{|z|}\right)^{n}=|z|_{\mathbb{C}}^{\beta}\left(\frac{z}{|z|}\right)^{n}.$
Let $x,y\in\mathbb{C}$ satisfy
$\begin{cases}x+y&=2\beta\\\ x-y&=n.\end{cases}$
We set, with the above notation (and abusively writing a complex power of a
complex number),
$\delta(z)=\gamma(x,y)=z^{x}\bar{z}^{y}.$
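Explicitly, $x=\beta+\frac{n}{2}$ and $y=\beta-\frac{n}{2}$, and one checks directly that
$z^{x}\bar{z}^{y}=|z|^{x+y}\left(\frac{z}{|z|}\right)^{x-y}=|z|^{2\beta}\left(\frac{z}{|z|}\right)^{n}=\delta(z).$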
The following is well-known.
###### Proposition 9.1.
Let $\delta=\gamma(x,y)\in D$ as above. Then $\delta\times\nu^{\alpha}\delta$
is reducible for $\alpha=1$ and irreducible for $0\leq\alpha<1$. Thus
$\nu_{\delta}=\nu$ (cf. 7.1). In the case of reducibility $\alpha=1$, we have
in $\mathcal{R}$:
$\gamma(x,y)\times\gamma(x+1,y+1)=\mathbf{Lg}((\gamma(x,y),\gamma(x+1,y+1)))+\gamma(x,y+1)\times\gamma(x+1,y)$
### 9.2. $A=\mathbb{R}$
There are square integrable modulo center irreducible representations of
$\mathbf{GL}(n,\mathbb{R})$ only when $n=1,2$ :
$D=D_{1}\coprod D_{2}=\mathbf{Irr}_{1}\coprod D_{2}.$
Let us start with the parametrization of $D_{1}$. An element $\delta\in D_{1}$
is a character
$\delta:\mathbf{GL}(1,\mathbb{R})\simeq\mathbb{R}^{\times}\rightarrow\mathbb{C}^{\times}$
Let $\delta\in D_{1}$. Then there exists a unique $\epsilon\in\\{0,1\\}$ and a
unique $\alpha\in\mathbb{C}$ such that
$\delta(x)=|x|^{\alpha}\mathrm{sgn}(x)^{\epsilon},\quad(x\in\mathbb{R}^{\times}).$
We set
$\delta=\delta(\alpha,\epsilon).$
Let us now give a parametrization of $D_{2}$. Let $\delta_{1},\delta_{2}\in
D_{1}$. Then $\delta_{1}\times\delta_{2}$ is reducible if and only if there
exists $p\in\mathbb{Z}^{*}$ such that
$\delta_{1}\delta_{2}^{-1}(x)=x^{p}\mathrm{sgn}(x),\quad(x\in\mathbb{R}^{\times})$
If $\delta_{i}=\delta(\alpha_{i},\epsilon_{i})$, we rewrite these conditions
as
(9.1) $\alpha_{1}-\alpha_{2}=p,\quad\epsilon_{1}-\epsilon_{2}\equiv p+1\pmod 2$
If $\delta_{1}\times\delta_{2}$ is reducible, we have in $\mathcal{R}$,
(9.2)
$\delta_{1}\times\delta_{2}=\mathbf{Lg}((\delta_{1},\delta_{2}))+\eta(\delta_{1},\delta_{2})$
where $\eta(\delta_{1},\delta_{2})\in D_{2}$ and
$\mathbf{Lg}((\delta_{1},\delta_{2}))$ is an irreducible finite dimensional
representation (of dimension $|p|$ with the notation above).
###### Definition 9.2.
If $\alpha_{1},\alpha_{2}\in\mathbb{C}$ satisfy
$\alpha_{1}-\alpha_{2}\in\mathbb{Z}^{*}$, we set
(9.3) $\eta(\alpha_{1},\alpha_{2})=\eta(\delta_{1},\delta_{2})$
where $\delta_{1}(x)=|x|^{\alpha_{1}}$ and
$\delta_{2}(x)=|x|^{\alpha_{2}}\mathrm{sgn}(x)^{\alpha_{1}-\alpha_{2}+1}$.
This defines a surjective map from
$\\{(\alpha_{1},\alpha_{2})\in\mathbb{C}^{2}\,|\,\alpha_{1}-\alpha_{2}\in\mathbb{Z}^{*}\\}$
to $D_{2}$ and
$\eta(\alpha_{1},\alpha_{2})=\eta(\alpha^{\prime}_{1},\alpha^{\prime}_{2})\Leftrightarrow\\{\alpha_{1},\alpha_{2}\\}=\\{\alpha^{\prime}_{1},\alpha^{\prime}_{2}\\}.$
This gives a parametrization of $D_{2}$ by pairs of complex numbers
$\alpha_{1},\alpha_{2}$ satisfying $\alpha_{1}-\alpha_{2}\in\mathbb{Z}^{*}$.
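As an illustration, $\delta_{1}=\delta(2,0)$ and $\delta_{2}=\delta(0,1)$ satisfy (9.1) with $p=2$ : indeed
$\delta_{1}\delta_{2}^{-1}(x)=|x|^{2}\mathrm{sgn}(x)=x^{2}\mathrm{sgn}(x).$
The product $\delta_{1}\times\delta_{2}$ is therefore reducible, $\mathbf{Lg}((\delta_{1},\delta_{2}))$ is the two dimensional constituent, and the essentially square integrable constituent is $\eta(\delta_{1},\delta_{2})=\eta(2,0)$.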
###### Remark 9.3.
The representation $\eta(x,y)\in D_{2}$, $x,y\in\mathbb{C}$,
$x-y\in\mathbb{Z}^{*}$ is obtained from the character $\gamma(x,y)$ of
$\mathbb{C}^{\times}$ by some appropriate functor of cohomological induction.
But, even when $x=y$, the cohomological induction functor maps
$\gamma(x,x)$ to a nonzero irreducible essentially tempered representation of
$\mathbf{GL}(2,\mathbb{R})$, namely the limit of discrete series
$\delta(x,0)\times\delta(x,1)$, which is also an irreducible principal series.
For that reason, we set for $x\in\mathbb{C}$ :
(9.4) $\eta(x,x):=\delta(x,0)\times\delta(x,1)\in\mathbf{Irr}_{2}$
###### Proposition 9.4.
Let $\delta\in D$. Then $\delta\times\nu^{\alpha}\delta$ is reducible for
$\alpha=1$ and irreducible for $0\leq\alpha<1$. Thus $\nu_{\delta}=\nu$ (cf.
7.1).
This is also well-known. Let us be more precise, by giving the composition
series for $\delta\times\nu\delta$. We start with the case
$\delta=\delta(\alpha,\epsilon)\in D_{1}$. Then we get from (9.2) that we have
in $\mathcal{R}$,
(9.5)
$\delta(\alpha,\epsilon)\times\delta(\alpha+1,\epsilon)=\mathbf{Lg}(\delta(\alpha,\epsilon),\delta(\alpha+1,\epsilon))+\eta(\alpha,\alpha+1).$
In the case where $\delta=\eta(x,y)\in D_{2}$, $x-y=r\in\mathbb{N}^{*}$, we
get if $r\neq 1$,
(9.6) $\displaystyle\eta(x,y)\times\eta(x+1,y+1)=$
$\displaystyle\mathbf{Lg}(\eta(x,y),$
$\displaystyle\eta(x+1,y+1))+\eta(x,y+1)\times\eta(x+1,y).$
If $r=1$, the situation degenerates :
$\displaystyle\eta(x,y)\times\eta(x+1,y+1)=$
$\displaystyle\mathbf{Lg}(\eta(x,y),$
$\displaystyle\eta(x+1,y+1))+\eta(x,y+1)\times\eta(x+1,y).$
Recall that our convention is that
$\eta(y+1,y+1)=\delta(y+1,0)\times\delta(y+1,1)$
is a limit of discrete series, thus :
(9.7) $\displaystyle\eta(y+1,y)\times\eta(y+2,y+1)=$
$\displaystyle\mathbf{Lg}(\eta(y+1,y),$
$\displaystyle\eta(y+2,y+1))+\delta(y+1,0)\times\delta(y+1,1)\times\eta(y+2,y).$
### 9.3. $A=\mathbb{H}$
Let us identify quaternions and $2\times 2$ matrices of the form
$\left(\begin{array}[]{cc}\alpha&\beta\\\
-\bar{\beta}&\bar{\alpha}\end{array}\right),\quad\alpha,\beta\in\mathbb{C}.$
The reduced norm is given by
$RN\left(\begin{array}[]{cc}\alpha&\beta\\\
-\bar{\beta}&\bar{\alpha}\end{array}\right)=|\alpha|^{2}+|\beta|^{2}.$
The group of invertible elements $\mathbb{H}^{\times}$ contains
$\mathbf{SU}(2)$, the kernel of the reduced norm. Thus we have an exact
sequence
$1\rightarrow\mathbf{SU}(2)\hookrightarrow\mathbb{H}^{\times}\stackrel{{\scriptstyle
RN}}{{\longrightarrow}}\mathbb{R}^{\times}_{+}\rightarrow 1,$
and we can identify $\mathbb{H}^{\times}$ with the direct product
$\mathbf{SU}(2)\times\mathbb{R}^{\times}_{+}$.
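Concretely, one convenient splitting of this sequence sends $t\in\mathbb{R}^{\times}_{+}$ to the scalar matrix $t^{1/2}\,\mathrm{Id}$, and the resulting identification is
$q\mapsto\left(RN(q)^{-1/2}\,q,\;RN(q)\right)\in\mathbf{SU}(2)\times\mathbb{R}^{\times}_{+},$
since $RN(RN(q)^{-1/2}q)=RN(q)^{-1}RN(q)=1$.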
The group $\mathbf{GL}(n,\mathbb{H})$ is a real form of
$\mathbf{GL}(2n,\mathbb{C})$: its elements are the matrices built out of $2\times
2$ quaternionic blocks as described above. The conjugation of
$\mathbf{GL}(2n,\mathbb{C})$ defining this real form is given on the $2\times 2$
blocks by
$\left(\begin{array}[]{cc}\alpha&\beta\\\
\gamma&\delta\end{array}\right)\mapsto\left(\begin{array}[]{cc}\bar{\delta}&-\bar{\gamma}\\\
-\bar{\beta}&\bar{\alpha}\end{array}\right).$
A maximal compact subgroup of $\mathbf{GL}(n,\mathbb{H})$ is then
$\mathbf{Sp}(n)\simeq\mathbf{U}(2n)\cap\mathbf{GL}(n,\mathbb{H}).$
Its rank is $n$, the rank of $\mathbf{GL}(n,\mathbb{H})$ is $2n$ and the split
rank of the center is $1$. Since essentially square integrable representations exist
exactly when the rank of a maximal compact subgroup equals the rank of the group
minus the split rank of the center (Harish-Chandra’s criterion), there are square
integrable modulo center representations only when $n=1$.
For $n=1$, $D_{1}=\mathbf{Irr}_{1}$, all irreducible representations of
$\mathbb{H}^{\times}$ are essentially square integrable modulo center, since
$\mathbb{H}^{\times}$ is compact modulo center. Harish-Chandra’s
parametrization in this case is as follows : irreducible representations of
$\mathbb{H}^{\times}$ are parametrized by certain characters of a fundamental
Cartan subgroup; here we choose
$\mathbb{C}^{\times}\hookrightarrow\mathbb{H}^{\times},\quad\alpha\mapsto\left(\begin{array}[]{cc}\alpha&0\\\
0&\bar{\alpha}\end{array}\right),$
which is connected. Characters of $\mathbb{C}^{\times}$ were described in
Section 9.1 ($A=\mathbb{C}$). They are of the form $\gamma(x,y)$,
$x-y\in\mathbb{Z}$. An irreducible representation of $\mathbb{H}^{\times}$ is
then parametrized by a pair of complex numbers $(x,y)$ such that
$x-y\in\mathbb{Z}$. The pairs $(x,y)$ and $(x^{\prime},y^{\prime})$
parametrize the same representation if and only if the characters
$\gamma(x,y)$ and $\gamma(x^{\prime},y^{\prime})$ are conjugate under the Weyl
group, i.e. if the multisets $(x,y)$ and $(x^{\prime},y^{\prime})$ are equal.
Furthermore $\gamma(x,y)$ corresponds to an irreducible representation if and
only if $x\neq y$. Let us denote by $\eta^{\prime}(x,y)$ the representation
parametrized by the multiset $(x,y)$, $x-y\in\mathbb{Z}^{*}$. It is obtained
from the character $\gamma(x,y)$ of the Cartan subgroup $\mathbb{C}^{\times}$
by cohomological induction.
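Under the identification $\mathbb{H}^{\times}\simeq\mathbf{SU}(2)\times\mathbb{R}^{\times}_{+}$ above, writing $r=x-y>0$, one expects
$\eta^{\prime}(x,y)\simeq V_{r}\otimes\chi,$
where $V_{r}$ is the irreducible $r$-dimensional representation of $\mathbf{SU}(2)$ (compare the dimension count in Proposition 9.10 below) and $\chi$ is the character of $\mathbb{R}^{\times}_{+}$ determined by the central character $t\mapsto t^{x+y}$; for instance, $\eta^{\prime}(3/2,-1/2)$ should be the two dimensional representation given by the matrix realization of $\mathbb{H}$ above.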
###### Remark 9.5.
As opposed to the case $A=\mathbb{R}$, when we induce cohomologically the
character $\gamma(x,x)$ of the Cartan subgroup $\mathbb{C}^{\times}$ to
$\mathbb{H}^{\times}$, we get $0$ : there are no limits of discrete series for
$\mathbb{H}^{\times}$. Thus we set $\eta^{\prime}(x,x)=0$.
###### Remark 9.6.
Jacquet-Langlands correspondence (see Section 4) between representations of
$\mathbf{GL}(1,\mathbb{H})=\mathbb{H}^{\times}$ and essentially square
integrable modulo center irreducible representations of
$\mathbf{GL}(2,\mathbb{R})$ is given by
${\bf C}(\eta(x,y))=\eta^{\prime}(x,y),\quad
x,y\in\mathbb{C},x-y\in\mathbb{Z}^{*}.$
The representations $\eta(x,y)$ and $\eta^{\prime}(x,y)$ are obtained by
cohomological induction from the same character $\gamma(x,y)$ of the Cartan
subgroup $\mathbb{C}^{\times}$ of $\mathbf{GL}(2,\mathbb{R})$ and
$\mathbb{H}^{\times}$. In the case $x=y$ the construction still respects the
Jacquet-Langlands character relation since both sides are equal to zero.
More generally let us give now the parametrization of irreducible
representations of $\mathbf{GL}(n,\mathbb{H})$ by conjugacy classes of
characters of Cartan subgroups. The group $\mathbf{GL}(n,\mathbb{H})$ has only
one conjugacy class of Cartan subgroups, a representative being $T_{n}$, which
consists of $2\times 2$ block diagonal matrices of the form
$\left(\begin{array}[]{cc}\alpha&0\\\ 0&\bar{\alpha}\end{array}\right)$. Thus
$T_{n}\simeq(\mathbb{C}^{\times})^{n}$ is connected,
$\mathfrak{t}_{n}=\mathrm{Lie}(T_{n})\simeq\mathbb{C}^{n}$ and
$(\mathfrak{t}_{n})_{\mathbb{C}}\simeq(\mathbb{C}\oplus\mathbb{C})^{n}$.
Let $\Lambda$ be a character of $T_{n}$. Its differential
$\lambda=d\Lambda\,:\,\mathfrak{t}_{n}\rightarrow\mathbf{Lie}(\mathbb{C}^{\times})\simeq\mathbb{C},$
is an $\mathbb{R}$-linear map, whose complexification is the $\mathbb{C}$-linear
map
$\lambda=d\Lambda\,:\,\mathfrak{t}_{\mathbb{C}}\simeq(\mathbb{C}\oplus\mathbb{C})^{n}\rightarrow\mathbf{Lie}(\mathbb{C}^{\times})\simeq\mathbb{C}.$
Such a linear form is given by an $n$-tuple of pairs $(\lambda_{i},\mu_{i})$
such that $\lambda_{i}-\mu_{i}\in\mathbb{Z}$.
Since $T_{n}$ is connected, a character $\Lambda$ of $T_{n}$ is determined by
its differential. We write
$\Lambda=\Lambda(\lambda_{1},\mu_{1},\ldots,\lambda_{n},\mu_{n})=\Lambda((\lambda_{i},\mu_{i})_{1\leq
i\leq n})$
if its differential is given by the $n$-tuple of pairs
$(\lambda_{i},\mu_{i})$ such that $\lambda_{i}-\mu_{i}\in\mathbb{Z}$.
Let $\mathcal{P}$ be the set of characters
$\Lambda=\Lambda((\lambda_{i},\mu_{i})_{1\leq i\leq n})$ of the Cartan
subgroup $T_{n}$ such that $\lambda_{i}-\mu_{i}\in\mathbb{Z}^{*}$.
Irreducible representations of $\mathbf{GL}(n,\mathbb{H})$ are parametrized by
$\mathcal{P}$, two characters $\Lambda_{1}$ and $\Lambda_{2}$ giving the same
irreducible representation if and only if they are conjugate under
$W(\mathbf{GL}(2n,\mathbb{C}),T_{n})$. This group is isomorphic to $\\{\pm
1\\}^{n}\times\mathfrak{S}_{n}$. Its action on
$\mathfrak{t}_{\mathbb{C}}\simeq(\mathbb{C}\oplus\mathbb{C})^{n}$ is as
follows : each factor $\\{\pm 1\\}$ acts inside the corresponding factor
$\mathbb{C}\oplus\mathbb{C}$ by permutation, and $\mathfrak{S}_{n}$ acts by
permuting the $n$ factors $\mathbb{C}\oplus\mathbb{C}$. Thus we see that
irreducible representations of $\mathbf{GL}(n,\mathbb{H})$ are parametrized by
multisets of cardinality $n$ of pairs of complex numbers
$(\lambda_{i},\mu_{i})$ such that $\lambda_{i}-\mu_{i}\in\mathbb{Z}^{*}$.
Since such a pair $(\lambda_{i},\mu_{i})$ parametrizes the representation
$\eta^{\prime}(\lambda_{i},\mu_{i})$, we recover the Langlands parametrization
of $\mathbf{Irr}$ by $M(D)$. Let us denote by $\sim$ the equivalence relation
on $\mathcal{P}$ given by the Weyl group action
$W(\mathbf{GL}(2n,\mathbb{C}),T)$. We have described one-to-one
correspondences
$\mathcal{P}/\sim\;\simeq\mathbf{Irr}_{n}\simeq M(D)_{n}$
Recall that a support for $\mathbf{GL}(n,\mathbb{H})$ is a multiset of $2n$
complex numbers, i.e. an element of the quotient of
$\mathfrak{t}_{\mathbb{C}}^{*}\simeq(\mathbb{C}\oplus\mathbb{C})^{n}\simeq\mathbb{C}^{2n}$,
by the action of the Weyl group $W_{\mathbb{C}}\simeq\mathfrak{S}_{2n}$.
###### Definition 9.7.
The support of a character $\Lambda=\Lambda((\lambda_{i},\mu_{i})_{1\leq i\leq
n})\in\mathcal{P}$ is the multiset
$(\lambda_{1},\mu_{1},\ldots,\lambda_{n},\mu_{n}).$
It depends only on the equivalence class of $\Lambda$ for $\sim$. If
$\Lambda\in\mathcal{P}$ parametrizes the irreducible representation $\pi$, we
have $\mathbf{Supp}\,(\Lambda)=\mathbf{Supp}\,(\pi)$.
This describes explicitly the map
$\mathcal{P}\rightarrow M(C),\quad\Lambda\mapsto\mathbf{Supp}\,(\Lambda)$
and its fibers : two parameters
$\Lambda_{1}((\lambda_{i}^{1},\mu_{i}^{1}))\quad\text{ and
}\quad\Lambda_{2}((\lambda_{i}^{2},\mu_{i}^{2})),$
have the same support if and only if the multisets
$(\lambda_{1}^{1},\ldots,\lambda_{n}^{1},\mu_{1}^{1},\ldots,\mu_{n}^{1})\quad\text{
and
}\quad(\lambda_{1}^{2},\ldots,\lambda_{n}^{2},\mu_{1}^{2},\ldots,\mu_{n}^{2})$
are equal. We denote by $\mathcal{P}(\omega)$ the fiber at $\omega$.
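For instance, for $n=2$ and $\omega=(3,2,1,0)$, the fiber $\mathcal{P}(\omega)$ consists, up to $\sim$, of the three parameters
$\Lambda((3,1),(2,0)),\quad\Lambda((3,2),(1,0)),\quad\Lambda((3,0),(2,1)),$
one for each way of grouping the multiset $(3,2,1,0)$ into two pairs with nonzero (automatically integral) difference.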
Let us now describe the Bruhat $G$-order in terms of integral
roots. We have the decomposition of the Lie algebra :
$\mathrm{Lie}(\mathbf{GL}(2n,\mathbb{C}))=(\mathfrak{g}_{2n})_{\mathbb{C}}=(\mathfrak{t}_{n})_{\mathbb{C}}\bigoplus_{\alpha\in
R}\mathfrak{g}_{\mathbb{C}}^{\alpha}$
where $R=\\{\pm(e_{i}-e_{j}),\,1\leq i<j\leq 2n\\}$ is the usual root system
of type $A_{2n-1}$.
The roots $\pm(e_{2j-1}-e_{2j})$, $j=1,\ldots,n$ are imaginary compact : indeed
$\sigma\cdot e_{2j-1}=e_{2j},\quad j=1,\ldots,n.$
The other roots are complex: for all $j,l$ with $1\leq j\neq l\leq n$,
$\sigma\cdot(e_{2j-1}-e_{2l-1})=e_{2j}-e_{2l},\quad\sigma\cdot(e_{2j-1}-e_{2l})=e_{2j}-e_{2l-1}.$
Let us fix a support $\omega$ and let $\Lambda$ be a character of $T_{n}$ such
that $\mathbf{Supp}\,(\Lambda)=\omega$, say
$\Lambda=\Lambda((\lambda_{i},\mu_{i})_{i={1,\ldots,n}})$,
$\lambda_{i}-\mu_{i}\in\mathbb{Z}^{*}$. Notice that
$W_{\mathbb{C}}\simeq\mathfrak{S}_{2n}$ doesn’t act on $\mathcal{P}(\omega)$,
since the condition
$\lambda_{i}-\mu_{i}\in\mathbb{Z}$
might not hold anymore after some permutation of the $\lambda_{i}$.
Denote by $W_{\Lambda}$ the subgroup of $W_{\mathbb{C}}$ consisting of
elements $w$ such that
$w\cdot(\lambda_{i},\mu_{i})_{i}-(\lambda_{i},\mu_{i})_{i}\in(\mathbb{Z}\times\mathbb{Z})^{n}.$
Then $W_{\Lambda}$ is the Weyl group of the root system $R_{\Lambda}$ of
integral roots for $\Lambda$, a root $\alpha=e_{k}-e_{l}$ in $R$ being
integral for $\Lambda$ if, when writing
$\lambda_{1},\mu_{1},\lambda_{2},\mu_{2},\ldots,\lambda_{n},\mu_{n}=\nu_{1},\ldots,\nu_{2n}$
then $\nu_{k}-\nu_{l}\in\mathbb{Z}$.
Suppose that the support $\omega$ is regular, i.e. all the $\nu_{i}$, $1\leq
i\leq 2n$ are distinct. We choose as a positive root system
$R_{\Lambda}^{+}\subset R_{\Lambda}$ the roots $e_{k}-e_{l}$ such that
$\nu_{k}-\nu_{l}>0$. This defines simple roots.
Let us state first a necessary and sufficient condition for reducibility of
standard modules (for regular support).
###### Proposition 9.8.
Let $a=(\eta^{\prime}(\lambda_{i},\mu_{i})_{i=1,\ldots,n})\in M(D)_{\omega}$,
parametrized by the character
$\Lambda=\Lambda((\lambda_{i},\mu_{i})_{i={1,\ldots,n}})$ of $T_{n}$. Suppose
that the support
$\omega=(\lambda_{1},\mu_{1},\ldots,\lambda_{n},\mu_{n})$
is regular. Then $\lambda(a)$ is reducible if and only if there exists a
simple root $e_{k}-e_{l}$ in $R_{\Lambda}^{+}$, complex, such that, if
$e_{k}-e_{l}=e_{2i-1}-e_{2j-1}$, or $e_{k}-e_{l}=e_{2i}-e_{2j}$, $i\neq j$,
then
$\lambda_{i}-\lambda_{j}>0\quad\text{ and }\quad\mu_{i}-\mu_{j}>0,$
and if $e_{k}-e_{l}=e_{2i-1}-e_{2j}$, or $e_{k}-e_{l}=e_{2i}-e_{2j-1}$, $i\neq
j$, then
$\lambda_{i}-\mu_{j}>0\quad\text{ and }\quad\mu_{i}-\lambda_{j}>0.$
When $\omega$ is not regular, we still have a necessary condition for
reducibility: if $\lambda(a)$ is reducible, then there exists a root
$e_{k}-e_{l}$ in $R_{\Lambda}^{+}$, not necessarily simple, but still
satisfying the condition above.
See [50].
###### Definition 9.9.
We still assume $\omega\in M(C)$ to be regular, and suppose that
$\Lambda\in\mathcal{P}(\omega)$ satisfies the reducibility criterion above for
the simple integral complex root $e_{k}-e_{l}$. Write
$\Lambda=\Lambda((\lambda_{1},\mu_{1}),\ldots,(\lambda_{n},\mu_{n}))=\Lambda((\nu_{1},\nu_{2}),\ldots,(\nu_{2n-1},\nu_{2n}))$
Let $\Lambda^{\prime}\in\mathcal{P}(\omega)$ be obtained from $\Lambda$ by
exchanging $\nu_{k}$ and $\nu_{l}$, and let $a^{\prime}\in M(D)_{\omega}$ be the
element corresponding to $\Lambda^{\prime}$. We say that $a^{\prime}$ is obtained from
$a$ by an elementary operation, and we write $a^{\prime}\prec a$. The Bruhat
$G$-order on $M(D)_{\omega}$ is the partial order generated by $\prec$.
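To illustrate the definition on the support $\omega=(3,2,1,0)$ of the previous example, take $\Lambda=\Lambda((3,1),(2,0))$, so that $(\nu_{1},\nu_{2},\nu_{3},\nu_{4})=(3,1,2,0)$ and every root is integral. The simple root $e_{1}-e_{3}$ of $R_{\Lambda}^{+}$ is complex and satisfies the criterion of Proposition 9.8 ($\lambda_{1}-\lambda_{2}=1>0$ and $\mu_{1}-\mu_{2}=1>0$); exchanging $\nu_{1}$ and $\nu_{3}$ gives $\Lambda^{\prime}=\Lambda((2,1),(3,0))$, hence
$(\eta^{\prime}(2,1),\eta^{\prime}(3,0))\prec(\eta^{\prime}(3,1),\eta^{\prime}(2,0)),$
in accordance with formula (9.8) for $y=0$, $r=2$.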
Let us now deduce from the reducibility criterion above the invariant
$\nu_{\delta}$ attached (cf. definition 7.1) to an essentially square
integrable modulo center irreducible representation
$\delta=\eta^{\prime}(x,y)$, $x,y\in\mathbb{C}$, $x-y\in\mathbb{Z}^{*}$. We
may suppose that $x-y=r>0$, since $\eta^{\prime}(x,y)=\eta^{\prime}(y,x)$.
###### Proposition 9.10.
With the previous notation, $\nu_{\delta}=\nu$ if $r>1$ and
$\nu_{\delta}=\nu^{2}$ if $r=1$. Since $r$ is the dimension of $\delta$, we
see that $\nu_{\delta}=\nu$ except when $\delta$ is a one-dimensional
representation of $\mathbf{GL}(1,\mathbb{H})$.
Proof. We want to study the reducibility of
$\pi=\eta^{\prime}(y+r,y)\times\eta^{\prime}(y+r+\alpha,y+\alpha)$
for $\alpha>0$. The support of this representation is regular if and only if
$y+r,y,y+r+\alpha,y+\alpha$ are distinct, but since
$y+r+\alpha>y+\alpha>y,\quad y+r+\alpha>y+r>y,$
the support is regular except when $r=\alpha$. The representation $\pi$ is the
standard representation attached to the character
$\Lambda=\Lambda((y+r+\alpha,y+\alpha),(y+r,y)).$
If $\alpha\notin\mathbb{Z}$, the support is regular, all integral roots are
imaginary compact for $\Lambda$, and then $\pi$ is irreducible.
If $\alpha=1$ and $r\neq 1$, the support is regular, all the roots are
integral for
$\Lambda((y+r+1,y+1),(y+r,y)),$
and $e_{1}-e_{3}$ is a complex root, simple in
$R_{\Lambda}^{+}=\\{e_{1}-e_{3},e_{1}-e_{2},e_{1}-e_{4},e_{3}-e_{2},e_{3}-e_{4},e_{2}-e_{4}\\},$
satisfying the reducibility criterion, since
$(\sigma\cdot(e_{1}-e_{3}))(y+r+1,y+1,y+r,y)=(e_{2}-e_{4})(y+r+1,y+1,y+r,y)=1>0.$
The only element smaller than $\Lambda$ in the Bruhat $G$-order is
$\Lambda^{\prime}=\Lambda((y+r,y+1),(y+r+1,y)),$
and we get
(9.8) $\displaystyle\eta^{\prime}(y+r,y)\times\eta^{\prime}(y+r+1,y+1)=$
$\displaystyle\mathbf{Lg}(\eta^{\prime}(y+r,y),$
$\displaystyle\eta^{\prime}(y+r+1,y+1))+\eta^{\prime}(y+r,y+1)\times\eta^{\prime}(y+r+1,y).$
If $\alpha=1$ and $r=1$, the support is singular. Applying Zuckerman
translation functors (see [26] for instance), we get
$\displaystyle\eta^{\prime}(y+1,y)\times\eta^{\prime}(y+2,y+1)=$
$\displaystyle\mathbf{Lg}(\eta^{\prime}(y+1,y),$
$\displaystyle\eta^{\prime}(y+2,y+1))+\eta^{\prime}(y+1,y+1)\times\eta^{\prime}(y+2,y).$
But, according to our convention, $\eta^{\prime}(y+1,y+1)=0$ (this is really
what we get applying the translation functor to the wall), thus
$\eta^{\prime}(y+1,y)\times\eta^{\prime}(y+2,y+1)=\mathbf{Lg}(\eta^{\prime}(y+1,y),\eta^{\prime}(y+2,y+1))$
is irreducible.
The next possibility of reducibility for $r=1$ is then $\alpha=2$, but then
the support is regular and we see as above that $\pi$ is reducible, more
precisely
(9.9) $\displaystyle\eta^{\prime}(y+3,y+2)\times\eta^{\prime}(y+1,y)=$
$\displaystyle\mathbf{Lg}(\eta^{\prime}(y+3,y+2),$
$\displaystyle\eta^{\prime}(y+1,y))+\eta^{\prime}(y+2,y+1)\times\eta^{\prime}(y+3,y).$
∎
## 10\. $U(3)$ for $A=\mathbb{H}$
We follow Tadić [46], who gives a proof of $U(3)$ for $A=\mathbb{C},\mathbb{R}$,
to deal with the case $A=\mathbb{H}$.
###### Theorem 10.1.
Let $\delta=\eta^{\prime}(y+r,y)\in D$ with $y\in\mathbb{C}$ and
$r\in\mathbb{N}^{*}$, and let $n\in\mathbb{N}^{*}$. Then $u(\delta,n)$ is
prime in the ring $\mathcal{R}$.
Proof. We know that $\delta$ is prime in $\mathcal{R}$, thus we start with
$n\geq 2$. Let us first deal with the case $r=1$. Then $\nu_{\delta}=\nu^{2}$ and
$a_{0}=a(\delta,n)=(\nu_{\delta}^{\frac{n-1}{2}}\delta,\nu_{\delta}^{\frac{n-1}{2}-1}\delta,\ldots,\nu_{\delta}^{-\frac{n-1}{2}}\delta)=$
$(\eta^{\prime}(y+n,y+n-1),\eta^{\prime}(y+n-2,y+n-3),\ldots\eta^{\prime}(y-n+2,y-n+1)).$
Set $a_{0}=a(\delta,n)=(X_{1},\ldots,X_{n})$ with
$X_{i}=\eta^{\prime}\left(y+n+2-2i,y+n+1-2i\right),\quad i=1,\ldots,n.$
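For instance, for $n=3$ this reads
$a_{0}=(\eta^{\prime}(y+3,y+2),\eta^{\prime}(y+1,y),\eta^{\prime}(y-1,y-2)).$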
###### Remark 10.2.
The support of $u(\delta,n)$ is the multiset
$(y+n+2-2i,y+n+1-2i)_{i=1,\ldots n}.$
This support is regular.
Suppose that $u(\delta,n)$ is not prime in $\mathcal{R}$. Then there exist
polynomials $P$ and $Q$ in the variables $d\in D$, non invertible, such that
$u(\delta,n)=PQ$. Since $u(\delta,n)$ is homogeneous in $\mathcal{R}$ for the
natural grading, the same holds for $P$ and $Q$.
Let us write
(10.1) $P=\sum_{c\in M(D)}m(c,P)\lambda(c),\quad Q=\sum_{d\in
M(D)}m(d,Q)\lambda(d).$
Set $S_{P}=\\{a\in M(D)|m(a,P)\neq 0\\}$, $S_{Q}=\\{a\in M(D)|m(a,Q)\neq
0\\}$. We get
$\mathbf{Lg}(a_{0})=X_{1}\times X_{2}\ldots\times X_{n}+\sum_{a\in
M(D),\,a<a_{0}}M(a,a_{0})\;\lambda(a).$
Thus there exists $c_{0}\in S_{P}$ and $d_{0}\in S_{Q}$ such that
$c_{0}+d_{0}=a_{0}=(X_{1},\ldots,X_{n}).$
Since $\deg P>0$, $\deg Q>0$, $c_{0}$ and $d_{0}$ are not empty. Furthermore,
we see that the polynomials $P$ and $Q$ are not constant. Denote by $S_{1}$ the
set of $X_{i}$ such that $X_{i}\in c_{0}$ and by $S_{2}$ the set of $X_{i}$
such that $X_{i}\in d_{0}$. We get a partition of the $X_{i}$’s in two non
empty disjoint sets. Thus we can find $1\leq i\leq n-1$ such that
$\\{X_{i},X_{i+1}\\}\not\subset S_{j},\quad j=1,2,$
and without any loss of generality, we may suppose that $X_{i}\in S_{1}$,
$X_{i+1}\in S_{2}$. Furthermore, we have
$|S_{1}|=\deg P,\quad|S_{2}|=\deg Q,\quad\deg P+\deg Q=n$
We saw above (9.9) that $X_{i}\times X_{i+1}$ is reducible, more precisely
$X_{i}\times X_{i+1}=\mathbf{Lg}(X_{i},X_{i+1})+Y_{i}\times Y_{i+1}$
where $Y_{i}=\eta^{\prime}(y+n+2-2i,y+n-1-2i)$,
$Y_{i+1}=\eta^{\prime}(y+n+1-2i,y+n-2i)$.
We have $a_{1}:=(Y_{i},Y_{i+1})\prec(X_{i},X_{i+1})$. Set
$a_{i,i+1}=a_{1}+(X_{1},\ldots,X_{i-1},X_{i+2},\ldots,X_{n}).$
Then $a_{i,i+1}\prec a_{0}$ and thus $M(a_{i,i+1},a_{0})\neq 0$ by Prop. 6.2.
Therefore, there exists $c_{1}\in S_{P},\,d_{1}\in S_{Q}$, non empty and such
that
$c_{1}+d_{1}=a_{i,i+1}.$
We suppose now that $Y_{i}$ divides $\lambda(c_{1})$ in $\mathcal{R}$, the case
where $Y_{i}$ divides $\lambda(d_{1})$ being similar.
Suppose that $\lambda(Y_{i+1})$ also divides $\lambda(c_{1})$. We get a
partition of the $X_{j}$, $j\neq i,i+1$ in two non empty sets $S^{\prime}_{1}$
and $S^{\prime}_{2}$, such that
$c_{1}=\\{X_{j},j\in S^{\prime}_{1}\\}+Y_{i}+Y_{i+1},\quad d_{1}=\\{X_{j},j\in
S^{\prime}_{2}\\}.$
The polynomials $P$ and $Q$ being homogeneous, we get
$\deg(P)=|S^{\prime}_{1}|+2,\quad\deg(Q)=|S^{\prime}_{2}|.$
We see that $X_{i+1}\notin T:=S_{1}\cup S^{\prime}_{2}$, thus
$\\{X_{1},\ldots,X_{n}\\}\not\subset T$. For $r\in\mathcal{R}$, let us denote
by $\deg_{T}(r)$ the degree of $r$ in the variables $X_{j}\in T$. We get
$\deg_{T}P\geq|S_{1}|=\deg P$, $\deg_{T}Q\geq|S^{\prime}_{2}|=\deg Q$, thus
$\deg_{T}(\mathbf{Lg}(a_{0}))\geq n$. But the total degree of
$\mathbf{Lg}(a_{0})$ being $n$, we get $\deg_{T}(\mathbf{Lg}(a_{0}))=n$. The
expression of $\mathbf{Lg}(a_{0})$ in the basis $\lambda(b)$, $b\leq a_{0}$
shows that we can find $b_{0}\in M(D)$, $M(b_{0},a_{0})\neq 0$,
$\deg(b_{0})=n$ and $\deg_{T}\lambda(b_{0})=n$. Furthermore $\lambda(b_{0})$
can be written
$\lambda(b_{0})=X_{1}^{\alpha_{1}}X_{2}^{\alpha_{2}}\ldots
X_{n}^{\alpha_{n}},\quad\alpha_{j}\in\mathbb{N},\alpha_{1}+\cdots+\alpha_{n}=n.$
Since $T\neq\\{X_{1},\ldots,X_{n}\\}$, there exists $j$ such that
$\alpha_{j}>1$. But then $X_{j}$ appears with multiplicity at least two in
$b_{0}$. Since $\mathbf{Supp}\,(b_{0})=\mathbf{Supp}\,(a_{0})$ is regular, we
get a contradiction.
Suppose now that $\lambda(Y_{i+1})$ doesn’t divide $\lambda(c_{1})$. We get a
partition of the $X_{j}$, $j\neq i,i+1$ in two non empty sets $S^{\prime}_{1}$
and $S^{\prime}_{2}$, such that
$c_{1}=\\{X_{j},j\in S^{\prime}_{1}\\}+Y_{i},\quad d_{1}=\\{X_{j},j\in
S^{\prime}_{2}\\}+Y_{i+1}.$
We set now $T=S^{\prime}_{1}\cup S_{2}$, and we see that $X_{i}$ doesn’t
belong to $T$, thus $\\{X_{1},\ldots,X_{n}\\}\not\subset T$. For
$r\in\mathcal{R}$, denote by $\deg_{T}(r)$ the degree of $r$ in the variables
$X_{j}\in T$ and $Y_{i}$. As above, we get : $\deg_{T}(\mathbf{Lg}(a_{0}))=n$,
there exists $b_{0}\in M(D)$, $M(b_{0},a_{0})\neq 0$, $\deg(b_{0})=n$ and
$\deg_{T}(\lambda(b_{0}))=n$. We can write
$\lambda(b_{0})=X_{1}^{\alpha_{1}}X_{2}^{\alpha_{2}}\ldots
X_{n}^{\alpha_{n}}Y_{i}^{\alpha},\quad\alpha_{j}\in\mathbb{N},\alpha_{1}+\cdots+\alpha_{n}+\alpha=n.$
Since $\\{X_{1},\ldots,X_{n}\\}\not\subset T$, there exists $j$ such that
$\alpha_{j}=0$. If $\alpha=0$, we get a contradiction as above. Thus
$\alpha\geq 1$, but since multiplicities in $\mathbf{Supp}\,(a_{0})$ are at
most $1$, we get $\alpha=1$, $\alpha_{j}=1$ if $j\neq i+1,i$,
$\alpha_{i}=\alpha_{i+1}=0$ and we still get a contradiction. This finishes
the case $r=1$.
Let us deal briefly with the case $r>1$. Then $\nu_{\delta}=\nu$ and
$a(\delta,n)=(\nu^{\frac{n-1}{2}}\delta,\nu^{\frac{n-1}{2}-1}\delta,\ldots,\nu^{-\frac{n-1}{2}}\delta)=$
$(\eta^{\prime}(x+\frac{n-1}{2},y+\frac{n-1}{2}),\eta^{\prime}(x+\frac{n-1}{2}-1,y+\frac{n-1}{2}-1),\ldots\eta^{\prime}(x-\frac{n-1}{2},y-\frac{n-1}{2})).$
Set $a_{0}=a(\delta,n)=(X_{1},\ldots,X_{n})$ with
$X_{i}=\eta^{\prime}\left(y+r+\frac{n-1}{2}+1-i,y+\frac{n-1}{2}+1-i\right),\quad
i=1,\ldots,n.$
We proceed as above, using now formula (9.8) for the reducibility of
$\lambda(X_{i},X_{i+1})$ :
$\lambda(X_{i},X_{i+1})=\mathbf{Lg}(X_{i},X_{i+1})+\lambda(Y_{i},Y_{i+1})$
where $Y_{i}=\eta^{\prime}(y+r+\frac{n-1}{2}+1-i,y+\frac{n-1}{2}-i)$ and
$Y_{i+1}=\eta^{\prime}(y+r+\frac{n-1}{2}-i,y+\frac{n-1}{2}+1-i)$. In all
cases, we get contradictions by inspecting multiplicities in the support. We
leave the details to the reader. ∎
## 11\. $U(1)$ : archimedean case
We recall briefly the arguments for $A=\mathbb{C}$ and $\mathbb{R}$, even though
they are well known and done elsewhere, because we will need the notation anyway.
We give the complete argument when $A=\mathbb{H}$.
### 11.1. $A=\mathbb{C}$
This case is easy because for $\gamma=\gamma(x,y)$, $x,y\in\mathbb{C}$,
$x-y\in\mathbb{Z}$, a character of $\mathbb{C}^{\times}$, we have
$u(\gamma,n)=\gamma\circ\det.$
Representations $u(\gamma,n)$ are thus $1$-dimensional representations of
$\mathbf{GL}(n,\mathbb{C})$. Furthermore, if $\gamma$ is unitary (i.e. $\Re
e(x+y)=0$) then $u(\gamma,n)$ is unitary.
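For instance, for $\gamma=\gamma(1/2,-1/2)$, i.e. $\gamma(z)=z/|z|$, we get the unitary character
$u(\gamma,n)(g)=\frac{\det g}{|\det g|},\quad g\in\mathbf{GL}(n,\mathbb{C}).$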
### 11.2. $A=\mathbb{R}$
There are two cases to consider. The first is $\delta\in D_{1}$,
$\delta=\delta(\alpha,\epsilon)$, $\alpha\in\mathbb{C}$,
$\epsilon\in\\{0,1\\}$. This case is similar to the case $A=\mathbb{C}$ above,
since
$u(\delta,n)=\delta\circ\det.$
Representations $u(\delta,n)$ are $1$-dimensional representations of
$\mathbf{GL}(n,\mathbb{R})$. Furthermore, if $\delta$ is unitary (i.e. $\Re
e(\alpha)=0$) then $u(\delta,n)$ is unitary.
The second case is $\delta=\eta(x,y)\in D_{2}$, $x,y\in\mathbb{C}$,
$x-y=r\in\mathbb{N}^{*}$. We have already mentioned without giving any details
that $\eta(x,y)$ is obtained by cohomological induction from the character
$\gamma(x,y)$ of the Cartan subgroup $\mathbb{C}^{\times}$ of
$\mathbf{GL}(2,\mathbb{R})$. Let us be now more precise. Cohomological
induction functors considered here are normalized as in [26], (11.150b): if
$(\mathfrak{g}_{\mathbb{C}},K)$ is a reductive pair associated to a real
reductive group $G$, if
$\mathfrak{q}_{\mathbb{C}}=\mathfrak{l}_{\mathbb{C}}\oplus\mathfrak{u}_{\mathbb{C}}$
is a $\theta$-stable parabolic subalgebra of $\mathfrak{g}_{\mathbb{C}}$, with
Levi factor $\mathfrak{l}_{\mathbb{C}}$, and if $L$ is the normalizer in $G$
of $\mathfrak{q}_{\mathbb{C}}$, we define the cohomological induction functor
:
$\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}:\mathcal{M}(\mathfrak{l}_{\mathbb{C}},K\cap L)\longrightarrow\mathcal{M}(\mathfrak{g}_{\mathbb{C}},K),\quad X\mapsto\Gamma^{S}\circ\mathrm{pro}(X\otimes\tilde{\tau})$
where $S=\dim(\mathfrak{u}_{\mathbb{C}}\cap\mathfrak{k}_{\mathbb{C}})$,
$\Gamma^{S}$ is the $S$-th Zuckerman derived functor from
$\mathcal{M}(\mathfrak{g}_{\mathbb{C}},K\cap L)$ to
$\mathcal{M}(\mathfrak{g}_{\mathbb{C}},K)$, $\mathrm{pro}$ is the parabolic
induction functor from $\mathcal{M}(\mathfrak{l}_{\mathbb{C}},K\cap L)$ to
$\mathcal{M}(\mathfrak{g}_{\mathbb{C}},K\cap L)$, and $\tilde{\tau}$ is a
character of $L$, square root of the character
$\bigwedge^{top}(\mathfrak{u}_{\mathbb{C}}/\mathfrak{u}_{\mathbb{C}}\cap\mathfrak{k}_{\mathbb{C}})$
(such a square root is usually defined only on a double cover of $L$, but for
the cases we are interested in here, i.e. products of
$G=\mathbf{GL}(n,\mathbb{R})$, $\mathbf{GL}(n,\mathbb{C})$ or
$\mathbf{GL}(n,\mathbb{H})$, we can find such a square root on $L$). This
normalization preserves infinitesimal character.
With this notation, for $G=\mathbf{GL}(2,\mathbb{R})$,
$L\simeq\mathbb{C}^{\times}$ and
$\mathfrak{u}_{\mathbb{C}}=\mathfrak{g}_{\mathbb{C}}^{e_{1}-e_{2}}$, we get
$\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}(\gamma(x,y))=\eta(x,y),\quad
x,y\in\mathbb{C},x-y\in\mathbb{N}.$
Recall the convention $\eta(x,x)=\delta(x,0)\times\delta(x,1)$ for limits of
discrete series (9.4). With this convention, we have
$\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}(\gamma(x,x))=\eta(x,x)$ as well.
Set $a_{0}=a(\eta(x,y),n)\in M(D)$. The standard representation
$\lambda(a_{0})$ is obtained by parabolic induction from the representation
$\eta=\eta(x+\frac{n-1}{2},y+\frac{n-1}{2})\otimes\eta(x+\frac{n-3}{2},y+\frac{n-3}{2})\otimes\ldots\otimes\eta(x-\frac{n-1}{2},y-\frac{n-1}{2})$
of $\mathbf{GL}(2,\mathbb{R})\times\ldots\times\mathbf{GL}(2,\mathbb{R})$, the
representation $\eta$ being, by what has just been said, obtained by
cohomological induction from the character
$\gamma=\gamma(x+\frac{n-1}{2},y+\frac{n-1}{2})\otimes\gamma(x+\frac{n-3}{2},y+\frac{n-3}{2})\otimes\ldots\otimes\gamma(x-\frac{n-1}{2},y-\frac{n-1}{2})$
of $\mathbb{C}^{\times}\times\ldots\times\mathbb{C}^{\times}$. Furthermore
$u(\eta(x,y),n)$ is the unique irreducible quotient of $\lambda(a_{0})$.
Independence of polarization results in [26], chapter 11 show that the
standard representation $\lambda(a_{0})$ can also be obtained from the
character $\gamma$ of $(\mathbb{C}^{\times})^{n}$ in the following way : first
use parabolic induction from $(\mathbb{C}^{\times})^{n}$ to
$\mathbf{GL}(n,\mathbb{C})$ (with respect to the usual upper triangular Borel
subgroup) to get the standard representation
(11.1)
$\gamma(x+\frac{n-1}{2},y+\frac{n-1}{2})\times\gamma(x+\frac{n-3}{2},y+\frac{n-3}{2})\times\ldots\times\gamma(x-\frac{n-1}{2},y-\frac{n-1}{2})$
whose unique irreducible quotient is $u(\gamma(x,y),n)$, and then the
cohomological induction functor $\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}$ from
$\mathbf{GL}(n,\mathbb{C})$ to $\mathbf{GL}(2n,\mathbb{R})$ (the reader can
guess which $\theta$-stable parabolic subalgebra $\mathfrak{q}_{\mathbb{C}}$
we use). This shows also that $u(\delta,n)$ is the unique irreducible quotient
of $\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}(u(\gamma(x,y),n))$. Now,
irreducibility and unitarizability theorems of [26] also imply, the character
$u(\gamma(x,y),n)$ of $\mathbf{GL}(n,\mathbb{C})$ being in the weakly good
range, that $\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}(u(\gamma(x,y),n))$ is
irreducible and unitary if $u(\gamma(x,y),n)$ is unitary. Thus we get
$\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}(u(\gamma(x,y),n))=u(\eta(x,y),n)$
and this representation is unitary if and only if $\Re e(x+y)=0$.
In the degenerate case $x=y$ (see (9.4)), we get
$\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}(u(\gamma(x,y),n))=u(\delta(x,0),n)\times
u(\delta(x,1),n).$
### 11.3. $A=\mathbb{H}$
Let $\delta=\eta^{\prime}(x,y)$, $x,y\in\mathbb{C}$, $x-y\in\mathbb{N}^{*}$,
be an irreducible representation of $\mathbb{H}^{\times}$. Consider the
representation $u(\eta^{\prime}(x,y),n)$, and recall the invariant
$\nu_{\delta}$ of definition 7.1. We have seen that $\nu_{\delta}=\nu$ when
$x-y>1$, $\nu_{\delta}=\nu^{2}$ when $x-y=1$. In the first case, the
discussion for the unitarizability of $u(\eta^{\prime}(x,y),n)$ is exactly the
same as in the case $A=\mathbb{R}$: the standard representation
$\lambda(a_{0})$ whose unique irreducible quotient is
$u(\eta^{\prime}(x,y),n)$ is obtained by cohomological induction from
$\mathbf{GL}(n,\mathbb{C})$ to $\mathbf{GL}(n,\mathbb{H})$ of the
standard representation (11.1). Furthermore
$u(\eta^{\prime}(x,y),n)$ is the unique irreducible quotient of
$\mathcal{R}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}(u(\gamma(x,y),n))$ and is unitary if
and only if $\Re e(x+y)=0$.
When $\nu_{\delta}=\nu^{2}$, i.e. $x-y=1$, we get the same results, not for
$u(\eta^{\prime}(x,y),n)$, but for $\bar{u}(\eta^{\prime}(x,y),n)$, the
Langlands quotient of the standard representation
$\eta^{\prime}(x+\frac{n-1}{2},y+\frac{n-1}{2})\times\eta^{\prime}(x+\frac{n-3}{2},y+\frac{n-3}{2})\times\ldots\times\eta^{\prime}(x-\frac{n-1}{2},y-\frac{n-1}{2})$
$=\nu^{\frac{n-1}{2}}\eta^{\prime}(x,y)\times\nu^{\frac{n-3}{2}}\eta^{\prime}(x,y)\times\ldots\times\nu^{-\frac{n-1}{2}}\eta^{\prime}(x,y).$
Recall that $u(\eta^{\prime}(x,y),n)$ is the Langlands quotient of
$\displaystyle\nu_{\delta}^{\frac{n-1}{2}}\eta^{\prime}(x,y)\times\nu_{\delta}^{\frac{n-3}{2}}\eta^{\prime}(x,y)\times\ldots\times\nu_{\delta}^{-\frac{n-1}{2}}\eta^{\prime}(x,y)$
$\displaystyle=\nu^{n-1}\eta^{\prime}(x,y)\times\nu^{n-3}\eta^{\prime}(x,y)\times\ldots\times\nu^{-(n-1)}\eta^{\prime}(x,y).$
With the two conditions $x-y=1$ and $\Re e(x+y)=0$, we see that, up to a twist
by a unitary character, we only have to study the case $u(\eta^{\prime},n)$
with $\eta^{\prime}=\eta^{\prime}(\frac{1}{2},-\frac{1}{2})$. Unitarity of
$u(\eta^{\prime},n)$ can be deduced from the unitarity of the
$\bar{u}(\eta^{\prime},k)$ as in [9], using the fact that
(11.2) $\displaystyle\bar{u}(\eta^{\prime},2n+1)=u(\eta^{\prime},n+1)\times
u(\eta^{\prime},n)$ (11.3)
$\displaystyle\bar{u}(\eta^{\prime},2n)=\nu^{\frac{1}{2}}u(\eta^{\prime},n)\times\nu^{-\frac{1}{2}}u(\eta^{\prime},n).$
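For $n=1$, formula (11.3) reads
$\bar{u}(\eta^{\prime},2)=\nu^{\frac{1}{2}}\eta^{\prime}\times\nu^{-\frac{1}{2}}\eta^{\prime}=\eta^{\prime}(1,0)\times\eta^{\prime}(0,-1),$
and the irreducibility of this product is the case $r=1$, $\alpha=1$ treated in the proof of Proposition 9.10.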
## 12\. Vogan’s classification and $U(0)$ in the archimedean case
As we have already said, $U(0)$ is established in the case $A=\mathbb{R}$ or
$\mathbb{C}$ by the work of M. Baruch filling the serious technical gap that
remained in Bernstein’s approach. It is also possible to establish $U(0)$ from
Vogan’s classification, and this will work also for $A=\mathbb{H}$. Of course,
this might seem a rather convoluted and unnatural approach, if the final goal
is to prove the classification of the unitary dual in Tadić’s form, since a
direct comparison between the classifications is possible. But let us notice
that :
— One of the main difficulties of Vogan’s paper is to prove some special cases
of $U(0)$ (the other difficult point is the exhaustion of the list of unitary
almost spherical representations). The rest of his paper uses only standard
and general techniques of the representation theory of real reductive groups, mainly
cohomological induction.
— The argument which allows the comparison between the two classifications
(“independence of polarizations”) is also the one leading to $U(0)$ from
Vogan’s classification.
— There is still some hope of finding a uniform proof of $U(0)$ for all $A$.
In this section, we give a brief overview of Vogan’s paper [51], and how it
implies $U(0)$. Here, $A=\mathbb{R},\mathbb{C}$ or $\mathbb{H}$.
Let us fix a unitary character
$\delta:\mathbf{GL}(1,A)\simeq A^{\times}\rightarrow\mathbb{C}^{\times}.$
It extends canonically to a family of unitary characters
$\delta_{n}:\mathbf{GL}(n,A)\rightarrow\mathbb{C}^{\times},$
by composing with the determinant
$\mathbf{GL}(n,A)\rightarrow\mathbf{GL}(1,A)$ (non commutative determinant of
Dieudonné if $A=\mathbb{H}$).
The basic blocks of Vogan’s classification are the representations :
$\nu^{i\beta}\delta_{n},\quad\beta\in\mathbb{R}$
(with Tadić’s notation, $\nu^{i\beta}\delta_{n}=u(\nu^{i\beta}\delta,n)$ : it
is a unitary character of $\mathbf{GL}(n,A)$), and the representations
$\pi(\nu^{i\beta}\delta,n;\alpha)=\nu^{-\alpha}\nu^{i\beta}\delta_{n}\times\nu^{\alpha}\nu^{i\beta}\delta_{n},\quad
0<\alpha<\frac{1}{2}$
of $\mathbf{GL}(2n,A)$. These are Stein’s complementary series.
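For instance, for $A=\mathbb{R}$, $n=1$, $\delta$ the trivial character and $\beta=0$, the representation
$\pi(\delta,1;\alpha)=\nu^{-\alpha}\times\nu^{\alpha},\quad 0<\alpha<\frac{1}{2},$
is the classical complementary series of $\mathbf{GL}(2,\mathbb{R})$.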
Vogan considers first parabolically induced representations of the form
(12.1) $\tau=\tau_{1}\times\tau_{2}\times\ldots\times\tau_{r}$
where each $\tau_{j}$ is either a unitary character
$\tau_{j}=\nu^{\beta_{j}}\delta_{n_{j}},\quad\beta_{j}\in i\mathbb{R},$
or a Stein’s complementary series
$\tau_{j}=\pi(\nu^{\beta_{j}}\delta,n_{j};\alpha),\quad\beta_{j}\in
i\mathbb{R},\;0<\alpha<\frac{1}{2}$
The reason for these conditions is the following : recall our choices of
maximal compact subgroups $K(n,A)$ of $\mathbf{GL}(n,A)$ respectively for
$A=\mathbb{R},\mathbb{C}$ and $\mathbb{H}$ :
$\mathbf{O}(n),\mathbf{U}(n)\text{ and }\mathbf{Sp}(n)$
and denote by $\mu_{n}$ the restriction of $\delta_{n}$ to $K(n,A)$. We say
that $\mu_{n}$ is a special $1$-dimensional representation of $K(n,A)$. If
$A=\mathbb{R}$, since $\mu_{n}$ factorizes through the determinant, there are
two special representations of $\mathbf{O}(n)$ : the trivial representation,
and the sign of the determinant. If $A=\mathbb{C}$, special representations of
$\mathbf{U}(n)$ are obtained by composing the determinant (with values in
$\mathbf{U}(1)$) with a character of $\mathbf{U}(1)$ (given by an integer).
Finally, if $A=\mathbb{H}$ the only special representation of $\mathbf{Sp}(n)$
is the trivial one. A representation of $\mathbf{GL}(n,A)$ is said to be
almost spherical (of type $\mu_{n})$ if it contains the special $K$-type
$\mu_{n}$. This generalizes spherical representations. The characters
$\delta_{n}\nu^{\beta}$ are exactly the ones whose restriction to $K(n,A)$ is
$\mu_{n}$. The $\tau_{i}$’s above are thus either almost spherical unitary
characters of type $\mu$ (the family $\mu=(\mu_{n})_{n}$ is fixed), or almost
spherical Stein’s complementary series of type $\mu$.
Then Vogan shows the following ([51], Theorem 3.8):
###### Theorem 12.1.
The representations $\tau=\tau_{1}\times\tau_{2}\times\ldots\times\tau_{r}$
are
$(i)$ unitary
$(ii)$ irreducible
Furthermore, every irreducible, almost spherical of type $\mu$, unitary
representation is obtained in this way, and two irreducible, almost spherical
of type $\mu$, unitary representations
$\tau=\tau_{1}\times\tau_{2}\times\ldots\times\tau_{r}$
and
$\tau^{\prime}=\tau_{1}^{\prime}\times\tau_{2}^{\prime}\times\ldots\times\tau_{s}^{\prime}$
are equivalent if and only if the multisets $\\{\tau_{i}^{\prime}\\}$ and
$\\{\tau_{j}\\}$ are equal.
Let us notice that this theorem contains a special case of $U(0)$ : this is
the point $(ii)$. It can be proved using Proposition 2.13 in [7] and results
of S. Sahi ([36], Thm 3A).
Furthermore, the classification of irreducible, almost spherical, unitary
representations it gives coincides with Tadić’s classification. (One has to
notice that an irreducible, almost spherical, unitary representation is such
with respect to a unique special $K$-type : special $K$-types are minimal,
and minimal $K$-types for $\mathbf{GL}(n,A)$ are unique, and appear with
multiplicity $1$).
Vogan’s classification of the unitary dual of $\mathbf{GL}(n,A)$ reduces matters
to this particular case of almost spherical representations using
cohomological induction functors preserving irreducibility and unitarity. More
precisely, let us recall some material about Vogan’s classification of the
admissible dual of a real reductive group $G$ by minimal $K$-types ([49]).
To each irreducible representation of $G$ is attached a finite number of
minimal $K$-types. As we said above, for $G=\mathbf{GL}(n,A)$, the minimal
$K$-type is unique, and appears with multiplicity $1$. This gives a partition
(which can be explicitly given in terms of Langlands classification) of the
admissible dual of $\mathbf{GL}(n,A)$.
Vogan’s classification of the unitary dual deals with each term of this
partition separately. To each irreducible representation $\mu$ of the compact
group $K(n,A)$ is attached a subgroup $L$ of $\mathbf{GL}(n,A)$ with maximal
compact subgroup $K_{L}:=K(n,A)\cap L$, and an irreducible representation
$\mu_{L}$ of $K_{L}$. The subgroup $L$ is a product of groups of the form
$\mathbf{GL}(n_{i},A_{i})$,
$K(n,A)\cap L\simeq\prod_{i}K(n_{i},A_{i})\ $
and $\mu_{L}$ is a tensor product of special representations of the
$K(n_{i},A_{i})$.
As opposed to Tadić’s classification which uses only parabolic induction
functors, Vogan’s classification of $\mathbf{GL}(n,\mathbb{R})$ for instance,
will use the classification of the almost spherical unitary dual of the groups
$\mathbf{GL}(k,\mathbb{C})$. More precisely :
\- For $A=\mathbb{R}$, the subgroups $L$ are products of
$\mathbf{GL}(k,\mathbb{R})$ and $\mathbf{GL}(m,\mathbb{C})$.
\- For $A=\mathbb{C}$, the subgroups $L$ are products of
$\mathbf{GL}(k,\mathbb{C})$.
\- For $A=\mathbb{H}$, the subgroups $L$ are products of
$\mathbf{GL}(k,\mathbb{H})$ and $\mathbf{GL}(m,\mathbb{C})$.
A combination of parabolic and cohomological induction functors then defines a
functor
$\mathcal{I}_{L}^{G}$
from $\mathcal{M}(L)$ to $\mathcal{M}(\mathbf{GL}(n,A))$ with the following
properties :
\- $\mathcal{I}_{L}^{G}$ sends an irreducible (resp. unitary) representation
of $L$ with minimal $K_{L}$-type $\mu_{L}$ to an irreducible (resp. unitary)
representation of $\mathbf{GL}(n,A)$ with minimal $K$-type $\mu$.
\- $\mathcal{I}_{L}^{G}$ realizes a bijection between equivalence classes of
irreducible unitary representations of $L$ with minimal $K_{L}$-type $\mu_{L}$
and equivalence classes of irreducible unitary representations of
$\mathbf{GL}(n,A)$ with minimal $K$-type $\mu$.
From this point of view, to establish $U(0)$, the first thing to do is to
check that products of representations of the form (12.1) for different
families of special $K$-types $\mu$ are irreducible. For $A=\mathbb{H}$, there
is nothing to check since there is only one family of special $K$-types
$\mu=(\mu_{n})_{n}$. For $A=\mathbb{R}$, there are two families of special
$K$-types, the trivial and sign characters of the determinant of
$\mathbf{O}(n)$. The relevant result is then lemma 16.1 of [51]. For
$A=\mathbb{R}$, we have now obtained all irreducible unitary representations
which are products of $u(\delta,k)$ and $\pi(\delta,k;\alpha)$ with $\delta$
any unitary character of $\mathbf{GL}(1,\mathbb{R})=\mathbb{R}^{\times}$.
The case $A=\mathbb{C}$ is simpler and dealt with as follows. Let us notice
first that since square integrable modulo center representations of
$\mathbf{GL}(n,\mathbb{C})$ exist only for $n=1$, the above assertion shows
that we get all representations of Tadić’s classification, and establishes
$U(0)$. In that case, the subgroups $L$ from which we use cohomological
induction are of the form
$L=\mathbf{GL}(n_{1},\mathbb{C})\times\ldots\times\mathbf{GL}(n_{r},\mathbb{C}).$
The cohomological induction setting is that
$\mathfrak{l}_{\mathbb{C}}=\mathrm{Lie}(L)_{\mathbb{C}}$ is a Levi factor of a
$\theta$-stable parabolic subalgebra $\mathfrak{q}_{\mathbb{C}}$ of
$\mathfrak{g}_{\mathbb{C}}=\mathrm{Lie}(\mathbf{GL}(n,\mathbb{C}))_{\mathbb{C}}$.
But $L$ is also a Levi factor of a parabolic subgroup of
$\mathbf{GL}(n,\mathbb{C})$. Thus there are two ways of inducing from $L$ to
$\mathbf{GL}(n,\mathbb{C})$: parabolic and cohomological induction. An
‘independence of polarization’ result ([51], Theorem 17.6, see [26], Chapter
11 for a proof), asserts that the two coincide. This finishes the case
$A=\mathbb{C}$.
Let us now finish discussing the cases $A=\mathbb{R}$ and $A=\mathbb{H}$.
Representations from Tadić’s classification which are still missing are the
ones built from $u(\delta,k)$’s and $\pi(\delta,k;\alpha)$’s with $\delta$ a
square integrable modulo center representation of $\mathbf{GL}(2,\mathbb{R})$
or $\mathbb{H}^{\times}$. As we have seen in 11.2 a square integrable modulo
center representation of $\mathbf{GL}(2,\mathbb{R})$ or $\mathbb{H}^{\times}$
is obtained by cohomological induction from the subgroup
$L\simeq\mathbb{C}^{\times}$ of $\mathbf{GL}(2,\mathbb{R})$ or
$\mathbf{GL}(1,\mathbb{H})=\mathbb{H}^{\times}$. This explains somehow why
cohomological induction will produce the missing representations. Let us
explain this :
— case $A=\mathbb{R}$ : we start with representations of the form
$u(\chi_{a},k_{a}),\;\pi(\chi_{b},k_{b};\alpha_{b}),\;u(\chi_{c},k_{c}),\;\pi(\chi_{d},k_{d};\alpha_{d}),\;u(\chi_{e},k_{e}),\;\pi(\chi_{f},k_{f};\alpha_{f})$
where $u(\chi_{a},k_{a})$ are unitary characters of
$\mathbf{GL}(k_{a},\mathbb{C})$, $\pi(\chi_{b},k_{b};\alpha_{b})$ are Stein
complementary series of $\mathbf{GL}(2k_{b},\mathbb{C})$, $u(\chi_{c},k_{c})$
are unitary characters of $\mathbf{GL}(k_{c},\mathbb{R})$ of trivial type
$\mu$, $\pi(\chi_{d},k_{d};\alpha_{d})$ are Stein complementary series of
$\mathbf{GL}(2k_{d},\mathbb{R})$ of trivial type $\mu$, $u(\chi_{e},k_{e})$
are unitary characters of $\mathbf{GL}(k_{e},\mathbb{R})$ of type
$\mu=\mathrm{sgn}$, $\pi(\chi_{f},k_{f};\alpha_{f})$ are Stein complementary
series of $\mathbf{GL}(2k_{f},\mathbb{R})$ of type $\mu=\mathrm{sgn}$.
The tensor product
$\bigotimes_{a}u(\chi_{a},k_{a})\bigotimes_{b}\pi(\chi_{b},k_{b};\alpha_{b})\bigotimes_{c}u(\chi_{c},k_{c})\bigotimes_{d}\pi(\chi_{d},k_{d};\alpha_{d})\bigotimes_{e}u(\chi_{e},k_{e})\bigotimes_{f}\pi(\chi_{f},k_{f};\alpha_{f})$
is a representation of the Levi subgroup
$\prod_{a}\mathbf{GL}(k_{a},\mathbb{C})\prod_{b}\mathbf{GL}(2k_{b},\mathbb{C})\prod_{c}\mathbf{GL}(k_{c},\mathbb{R})\prod_{d}\mathbf{GL}(2k_{d},\mathbb{R})\prod_{e}\mathbf{GL}(k_{e},\mathbb{R})\prod_{f}\mathbf{GL}(2k_{f},\mathbb{R})$
of $\mathbf{GL}(n,\mathbb{R})$, where
$n=\sum_{a}2k_{a}+\sum_{b}4k_{b}+\sum_{c}k_{c}+\sum_{d}2k_{d}+\sum_{e}k_{e}+\sum_{f}2k_{f}$.
As we saw, we first form almost spherical representations of a given type by
parabolic induction. Thus we induce
$\bigotimes_{c}u(\chi_{c},k_{c})\bigotimes_{d}\pi(\chi_{d},k_{d};\alpha_{d})$
from
$\prod_{c}\mathbf{GL}(k_{c},\mathbb{R})\prod_{d}\mathbf{GL}(2k_{d},\mathbb{R})$
to $\mathbf{GL}(q_{0},\mathbb{R})$, where
$q_{0}=\sum_{c}k_{c}+\sum_{d}2k_{d}$, obtaining an irreducible unitary
spherical representation $\pi_{0}$, and similarly
$\bigotimes_{e}u(\chi_{e},k_{e})\bigotimes_{f}\pi(\chi_{f},k_{f};\alpha_{f})$
from
$\prod_{e}\mathbf{GL}(k_{e},\mathbb{R})\prod_{f}\mathbf{GL}(2k_{f},\mathbb{R})$
to $\mathbf{GL}(q_{1},\mathbb{R})$, where
$q_{1}=\sum_{e}k_{e}+\sum_{f}2k_{f}$, obtaining an irreducible unitary almost
spherical representation $\pi_{1}$ of type $\mu=\mathrm{sgn}$.
Then we mix spherical and almost spherical of type $\mu=\mathrm{sgn}$
representations inducing parabolically $\pi_{0}\times\pi_{1}$ from
$\mathbf{GL}(q_{0},\mathbb{R})\times\mathbf{GL}(q_{1},\mathbb{R})$ to
$\mathbf{GL}(q_{0}+q_{1},\mathbb{R})$ : we get an irreducible unitary
representation $\pi$ of $\mathbf{GL}(q_{0}+q_{1},\mathbb{R})$.
The group
$\prod_{a}\mathbf{GL}(k_{a},\mathbb{C})\prod_{b}\mathbf{GL}(2k_{b},\mathbb{C})\times\mathbf{GL}(q_{0}+q_{1},\mathbb{R})$
is denoted by $L_{\theta}$ in [51]. Applying the cohomological induction functor
$\mathcal{I}_{L_{\theta}}^{G}$ to the representation
$\bigotimes_{a}u(\chi_{a},k_{a})\bigotimes_{b}\pi(\chi_{b},k_{b};\alpha_{b})\otimes\pi$
of $L_{\theta}$, we get an irreducible unitary representation $\rho$ of
$\mathbf{GL}(n,\mathbb{R})$.
Independence of polarization theorems [51], Theorem 17.6, Theorem 17.7 and
17.9 (see [26], Chapter 11), allow us to invert the order of the two types of
induction. We could in fact start with cohomological induction, inducing each
$u(\chi_{a},k_{a})$
from $\mathbf{GL}(k_{a},\mathbb{C})$ to $\mathbf{GL}(2k_{a},\mathbb{R})$. In
the non degenerate case, following the terminology of [51], definition 17.3, we
get representations $u(\delta_{a},k_{a})$, where $\delta_{a}$ is a square
integrable modulo center irreducible representation of
$\mathbf{GL}(2,\mathbb{R})$. In the degenerate case, $\delta_{a}$ is a limit
of discrete series (9.4). These are almost spherical representations that we
had before (see [51], prop. 17.10).
In the same way, we induce all
$\pi(\chi_{b},k_{b};\alpha_{b})$
from $\mathbf{GL}(2k_{b},\mathbb{C})$ to $\mathbf{GL}(4k_{b},\mathbb{R})$. In
the non degenerate case, we get representations
$\pi(\delta_{b},k_{b};\alpha_{b})$, where $\delta_{b}$ is as above. In the
degenerate case, we still get almost spherical representations.
The parabolically induced representation from
$\prod_{a}\mathbf{GL}(2k_{a},\mathbb{R})\prod_{b}\mathbf{GL}(4k_{b},\mathbb{R})\times\mathbf{GL}(q_{0}+q_{1},\mathbb{R})$
to $\mathbf{GL}(n,\mathbb{R})$ of
$\bigotimes_{a}u(\delta_{a},k_{a})\bigotimes_{b}\pi(\delta_{b},k_{b};\alpha_{b})\otimes\pi$
is $\rho$ (and thus irreducible), see [51], Theorem 17.6.
This finishes the comparison of the two classifications. The case
$A=\mathbb{H}$ is entirely similar.
We deduce $U(0)$ using again independence of polarization. We want to show
that $\rho=\rho_{1}\times\rho_{2}$ is irreducible if $\rho_{1}$ and $\rho_{2}$
are irreducible and unitary. We write $\rho_{1}$ and $\rho_{2}$ as above using
first cohomological induction and then parabolic induction. Using parabolic
induction in stages, we see that $\rho_{1}\times\rho_{2}$ is also written in
this form. Using again independence of polarization we write $\rho$ as a
parabolically then cohomologically induced representation, and we see that as
such, this is a representation appearing in Vogan’s classification which is
therefore irreducible.
## 13\. Jacquet-Langlands correspondence in the archimedean case
Ideas in this section are taken from [1] which deals with a similar problem
(Kazhdan-Patterson lifting).
### 13.1. Jacquet-Langlands and coherent families
Since we need to consider simultaneously the cases $A=\mathbb{R}$ and
$A=\mathbb{H}$, we add relevant superscripts to the notation when needed as in
Section 4. We have noticed that Jacquet-Langlands correspondence between
essentially square integrable modulo center irreducible representations of
$\mathbf{GL}(2,\mathbb{R})$ and irreducible representations of
$\mathbb{H}^{\times}$ is given at the level of Grothendieck groups by
$\mathbf{LJ}(\eta(x,y))=-\eta^{\prime}(x,y)$
Representations in $D_{1}$ are sent to $0$. We extend this linearly to an
algebra morphism :
$\mathcal{R}^{\mathbb{R}}\rightarrow\mathcal{R}^{\mathbb{H}}.$
###### Lemma 13.1.
Jacquet-Langlands correspondence preserves supports.
Proof. Let $a\in M(D)$, $a=(\eta(x_{1},y_{1}),\ldots,\eta(x_{r},y_{r}))$, with each $\eta(x_{i},y_{i})\in D_{2}$. We have
then
$\mathbf{LJ}(\lambda(a))=(-1)^{r}\lambda(a^{\prime})$
where $a^{\prime}=(\eta^{\prime}(x_{1},y_{1}),\ldots,\eta^{\prime}(x_{r},y_{r}))$. The
support of $a$ is $(x_{1},y_{1},\ldots,x_{r},y_{r})$, and this is also the
support of $a^{\prime}$.
We recall now the definition of a coherent family of Harish-Chandra modules.
###### Definition 13.2.
Let $G$ be a real reductive group, $H$ a Cartan subgroup,
$\mathfrak{g}_{\mathbb{C}}$ and $\mathfrak{h}_{\mathbb{C}}$ the respective
complexifications of their Lie algebras, and $\Lambda$ the lattice of weights of
$H$ in finite dimensional representations of $G$. A coherent family of
(virtual) Harish-Chandra modules based at
$\lambda\in\mathfrak{h}_{\mathbb{C}}^{*}$ is a family
$\\{\pi(\lambda+\mu)\,|\,\mu\in\Lambda\\}$
($\lambda+\mu$ is just a formal symbol, since the two terms are not in the
same group) in the Grothendieck group $\mathcal{R}(G)$ such that
— The infinitesimal character of $\pi(\lambda+\mu)$ is given by
$\lambda+d\mu$.
— For every finite dimensional representation $F$ of $G$, we have, with
$\Delta(F)$ denoting the multiset of weights of $H$ in $F$ (with multiplicities), the following identity
in $\mathcal{R}(G)$ :
$\pi(\lambda+\mu)\otimes F=\sum_{\gamma\in\Delta(F)}\pi(\lambda+\mu+\gamma).$
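A minimal example : take $G=H=\mathbb{C}^{\times}$, with $\Lambda\simeq\mathbb{Z}^{2}$ the lattice of weights $\gamma(a,b)$, $a,b\in\mathbb{Z}$, of the (restrictions of algebraic) finite dimensional representations, and set $\pi(\lambda+\mu)=\gamma(x+a,y+b)$ for $\lambda=(x,y)$ and $\mu=(a,b)$. The infinitesimal character of $\gamma(x+a,y+b)$ is $\lambda+d\mu$, and for $F$ with weights $\Delta(F)=\\{(a_{i},b_{i})\\}$ we have
$\gamma(x+a,y+b)\otimes F=\sum_{i}\gamma(x+a+a_{i},y+b+b_{i})$
in $\mathcal{R}(G)$, so both conditions hold.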
Jacquet-Langlands correspondence preserves coherent families :
###### Lemma 13.3.
Let us identify two Cartan subgroups $H$ and $H^{\prime}$ respectively of
$\mathbf{GL}(2n,\mathbb{R})$ and $\mathbf{GL}(n,\mathbb{H})$ isomorphic to
$(\mathbb{C}^{\times})^{n}$. Let $\pi(\lambda+\mu)$ be a coherent family of
Harish-Chandra modules for $\mathbf{GL}(2n,\mathbb{R})$ based at
$\lambda\in\mathfrak{h}_{\mathbb{C}}^{*}$. Then
$\mathbf{LJ}(\pi(\lambda+\mu))$ is a coherent family for
$\mathbf{GL}(n,\mathbb{H})$.
Proof. The first property of coherent families is satisfied by
$\mathbf{LJ}(\pi(\lambda+\mu))$ because of the previous lemma. For the second
property, let us remark first that $\mathbf{GL}(2n,\mathbb{R})$ and
$\mathbf{GL}(n,\mathbb{H})$ being two real forms of
$\mathbf{GL}(2n,\mathbb{C})$, a finite dimensional representation $F$ of one
of these two groups is in fact the restriction of a finite dimensional
representation of $\mathbf{GL}(2n,\mathbb{C})$. We get for all regular element
$g^{\prime}$ of $\mathbf{GL}(n,\mathbb{H})$ corresponding to an element $g$ in
$\mathbf{GL}(2n,\mathbb{R})$,
$\displaystyle\sum_{\gamma\in\Delta(F)}\Theta_{\mathbf{LJ}(\pi(\lambda+\mu+\gamma))}(g^{\prime})=\sum_{\gamma\in\Delta(F)}\Theta_{\pi(\lambda+\mu+\gamma)}(g)=\Theta_{\pi(\lambda+\mu)\otimes
F}(g)$
$\displaystyle=\Theta_{\pi(\lambda+\mu)}(g)\,\Theta_{F}(g)=\Theta_{\mathbf{LJ}(\pi(\lambda+\mu))}(g^{\prime})\Theta_{F}(g^{\prime})=\Theta_{\mathbf{LJ}(\pi(\lambda+\mu))\otimes
F}(g^{\prime}),$
so
$\sum_{\gamma\in\Delta(F)}\mathbf{LJ}(\pi(\lambda+\mu+\gamma))=\mathbf{LJ}(\pi(\lambda+\mu))\otimes
F$. ∎
### 13.2. Jacquet-Langlands and cohomological induction
The cohomological induction functor $\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}$
introduced in 11.2 preserves irreducibility and unitarity when the
infinitesimal character of the induced module satisfies certain positivity
properties with respect to $\mathfrak{q}_{\mathbb{C}}$ (“weakly good range”).
Furthermore with the same conditions, other derived functors
$\Gamma^{i}(\mathrm{pro}(\bullet\otimes\tilde{\tau}))$, $i\neq S$ vanish. This
is not true in general, and this is the reason why we need to consider Euler-
Poincaré characteristic :
$\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}:=\sum_{i}(-1)^{i}\Gamma^{i}(\mathrm{pro}(\bullet\otimes\tilde{\tau})).$
This is not a functor between $\mathcal{M}(L)$ and $\mathcal{M}(G)$ anymore,
but simply a morphism between the Grothendieck groups $\mathcal{R}(L)$ and
$\mathcal{R}(G)$.
###### Lemma 13.4.
The morphism
$\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}:\;\mathcal{R}(L)\rightarrow\mathcal{R}(G)$
preserves coherent families.
Proof. The functors $\Gamma^{i}(\mathrm{pro}(\bullet\otimes\tilde{\tau}))$ are
normalized in order to preserve infinitesimal character, and thus the first
property of coherent family is preserved.
Let $\pi(\lambda+\mu)$ be a coherent family of Harish-Chandra modules for
$(\mathfrak{l},L\cap K)$. We want to show that for any finite dimensional
representation $F$ of $G$,
(13.1)
$\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}(\pi(\lambda+\mu))\otimes
F=\sum_{\gamma\in\Delta(F)}\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}(\pi(\lambda+\mu+\gamma))$
But
$\displaystyle\sum_{\gamma\in\Delta(F)}\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}(\pi(\lambda+\mu+\gamma))=\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}\left(\sum_{\gamma\in\Delta(F)}\pi(\lambda+\mu+\gamma)\right)$
$\displaystyle=$
$\displaystyle\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}(\pi(\lambda+\mu)\otimes
F)$
It is then enough to show that for any $(\mathfrak{l},L\cap K)$-module $X$,
(13.2) $\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}(X)\otimes
F=\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}(X\otimes F)$
Let $U$ be any $(\mathfrak{g},K)$-module. Let us compute, using adjunction
properties of the functors involved:
$\displaystyle\mathrm{Hom}_{\mathfrak{g},K}(U,\Gamma(\mathrm{pro}((X\otimes F)\otimes\tilde{\tau})))\simeq\mathrm{Hom}_{\mathfrak{l},L\cap K}(U,X\otimes F\otimes\tilde{\tau})$
$\displaystyle\simeq\mathrm{Hom}_{\mathfrak{l},L\cap K}(U,X\otimes(F^{*})^{*}\otimes\tilde{\tau})\simeq\mathrm{Hom}_{\mathfrak{l},L\cap K}(U,\mathrm{Hom}_{\mathbb{C}}(F^{*},X\otimes\tilde{\tau}))$
$\displaystyle\simeq\mathrm{Hom}_{\mathfrak{l},L\cap K}(U\otimes F^{*},X\otimes\tilde{\tau})\simeq\mathrm{Hom}_{\mathfrak{g},K}(U\otimes F^{*},\Gamma(\mathrm{pro}(X\otimes\tilde{\tau})))$
$\displaystyle\simeq\mathrm{Hom}_{\mathfrak{g},K}(U,\Gamma(\mathrm{pro}(X\otimes\tilde{\tau}))\otimes F)$
We deduce from this that $\Gamma(\mathrm{pro}(X\otimes\tilde{\tau}\otimes
F))\simeq\Gamma(\mathrm{pro}(X\otimes\tilde{\tau}))\otimes F$.
The same is true for $\Gamma^{i}$ replacing $\Gamma$ in the computation above.
This can be seen by general arguments, using the exactness of the functor
$\bullet\otimes F$. Thus, for all $i\geq 0$,
$\Gamma^{i}(\mathrm{pro}(X\otimes\tilde{\tau}\otimes
F))\simeq\Gamma^{i}(\mathrm{pro}(X\otimes\tilde{\tau}))\otimes F$, which
implies (13.2).
∎
Let us now denote by
$\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}$ and
$\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}$ the
Euler-Poincaré morphisms of cohomological induction between
$\mathbf{GL}(1,\mathbb{C})$ and respectively $\mathbf{GL}(2,\mathbb{R})$ and
$\mathbf{GL}(1,\mathbb{H})$, where $\mathfrak{q}_{\mathbb{C}}$ and
$\mathfrak{q}_{\mathbb{C}}^{\prime}$ are as in 11.2 and 11.3.
###### Lemma 13.5.
With the notation above, and $x,y\in\mathbb{C}$, $x-y\in\mathbb{Z}$,
$\mathbf{LJ}(\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x,y)))=-\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(\gamma(x,y))$
Proof. When $x-y\geq 0$, we have
$\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x,y))=-\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x,y))=-\eta(x,y)$
and
$\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(\gamma(x,y))=-\mathcal{R}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(\gamma(x,y))=-\eta^{\prime}(x,y).$
The formula is thus true in this case. The case $x-y<0$ follows because
$\mathbf{LJ}(\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x+n,y-n)))$
and
$\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(\gamma(x+n,y-n))$
are two coherent families which coincide whenever $x-y+2n\geq 0$, and are therefore
equal.
###### Theorem 13.6.
Let $\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}$ and
$\mathcal{R}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}$ be the
cohomological induction functors from $\mathbf{GL}(n,\mathbb{C})$ to
respectively $\mathbf{GL}(2n,\mathbb{R})$ and $\mathbf{GL}(n,\mathbb{H})$. We
have then
$\mathbf{LJ}\circ\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}=(-1)^{n}\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}.$
Proof. It is enough to show that the formula holds on the basis $\lambda(a)$,
$a\in M(D)$ of $\mathcal{R}^{\mathbb{C}}$. Let $a\in M(D)$,
$a=(\gamma(x_{1},y_{1}),\ldots,\gamma(x_{r},y_{r}))$. We compute
$\displaystyle\mathbf{LJ}\circ\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\lambda(a))$
$\displaystyle=\mathbf{LJ}\circ\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x_{1},y_{1})\times\ldots\times\gamma(x_{r},y_{r}))$
$\displaystyle=\mathbf{LJ}(i_{\mathbf{GL}(2,\mathbb{R})^{r}}^{\mathbf{GL}(2r,\mathbb{R})}\circ\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x_{1},y_{1})\otimes\ldots\otimes\gamma(x_{r},y_{r})))$
$\displaystyle=i_{\mathbf{GL}(1,\mathbb{H})^{r}}^{\mathbf{GL}(r,\mathbb{H})}\circ\mathbf{LJ}(\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x_{1},y_{1})\otimes\ldots\otimes\gamma(x_{r},y_{r})))$
$\displaystyle=(-1)^{r}i_{\mathbf{GL}(1,\mathbb{H})^{r}}^{\mathbf{GL}(r,\mathbb{H})}\circ\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(\gamma(x_{1},y_{1})\otimes\ldots\otimes\gamma(x_{r},y_{r}))$
$\displaystyle=(-1)^{r}\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(\gamma(x_{1},y_{1})\times\ldots\times\gamma(x_{r},y_{r}))$
$\displaystyle=(-1)^{r}\widehat{\mathcal{R}}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(\lambda(a))$
We have used the independence of polarization theorem of [26] to replace a part
of the cohomological induction by parabolic induction, and the fact that
$\mathbf{LJ}$ commutes with parabolic induction. ∎
###### Corollary 13.7.
Recall the representations $\bar{u}(\eta^{\prime},n)$ introduced in 11.3. We
have
$\mathbf{LJ}(u(\eta(x,y),n))=(-1)^{n}\;\bar{u}(\eta^{\prime}(x,y),n),$
$x,y\in\mathbb{C}$, $x-y\in\mathbb{N}$.
Recall that when $x-y\neq 1$, then
$\bar{u}(\eta^{\prime}(x,y),n)=u(\eta^{\prime}(x,y),n)$ (see 11.3).
Proof. This follows from the theorem and the formulas
$\mathcal{R}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(u(\gamma(x,y),n))=u(\eta(x,y),n)$,
$\mathcal{R}_{\mathfrak{q}^{\prime}_{\mathbb{C}}}^{\mathbb{H}}(u(\gamma(x,y),n))=\bar{u}(\eta^{\prime}(x,y),n)$
obtained in 11.2 and 11.3. ∎
To be able to compute the transfer to $\mathbf{GL}(n,\mathbb{H})$ of any
irreducible unitary representation of $\mathbf{GL}(2n,\mathbb{R})$, we need to
compute the transfer of the $u(\delta,k)$ when $\delta\in D_{1}^{\mathbb{R}}$.
But, in this case, if $\delta=\delta(\alpha,\epsilon)$,
$u(\delta(\alpha,\epsilon),2k)=\delta(\alpha,\epsilon)\circ\det,$
and we know from [16] that the transfer of this character is the character
$\delta(\alpha,\epsilon)\circ RN$
($RN$ is the reduced norm) which is
$u(\eta^{\prime}(\alpha+\frac{1}{2},\alpha-\frac{1}{2}),k).$
From this, we get
###### Theorem 13.8.
Let $u$ be an irreducible unitary representation of
$\mathbf{GL}(2n,\mathbb{R})$. Then $\mathbf{LJ}(u)$ is either $0$, or up to a
sign, an irreducible unitary representation of $\mathbf{GL}(n,\mathbb{H})$.
For representations $u(\delta,k)$, we get:
— if $\delta=\delta(\alpha,\epsilon)\in D_{1}^{\mathbb{R}}$,
$\mathbf{LJ}(u(\delta(\alpha,\epsilon),2k))=u(\eta^{\prime}(\alpha+\frac{1}{2},\alpha-\frac{1}{2}),k)$
— if $\delta=\eta(x,y)\in D_{2}^{\mathbb{R}}$,
$\mathbf{LJ}(u(\eta(x,y),k))=(-1)^{k}\bar{u}(\eta^{\prime}(x,y),k).$
Put simply, a character is sent by $\mathbf{LJ}$ to the corresponding
character, while if $\delta\in D_{2}^{F}$ and
$\delta^{\prime}=\mathbf{C}(\delta)=-\mathbf{LJ}(\delta)$, then
$\mathbf{LJ}(u(\delta,k))=(-1)^{k}\bar{u}(\delta^{\prime},k)$.
In the first case, note that the situation differs slightly from that of
non-archimedean fields, since the reduced norm of $\mathbb{H}$ is not
surjective: its image is $\mathbb{R}_{+}^{*}$. In particular, if $s$ is
the sign character of the determinant on $GL_{2k}(\mathbb{R})$, then
$\mathbf{LJ}(s)$ is the trivial character of $GL_{k}(\mathbb{H})$. In the
non-archimedean case, it is easy to check that $\mathbf{LJ}$ is injective on the
set of representations $u(\delta,k)$.
The above theorem gives a correspondence between irreducible unitary
representations of $\mathbf{GL}(2n,\mathbb{R})$ and of
$\mathbf{GL}(n,\mathbb{H})$, by forgetting the signs. As in the introduction,
we denote this correspondence by $|\mathbf{LJ}|$. Using (11.2) and (11.3), we
easily reformulate the result as in the introduction.
## 14\. Character formulas and ends of complementary series
From Tadić’s classification of the unitary dual, and the character formula for
induced representations, the character of any irreducible unitary
representation of $\mathbf{GL}(n,A)$ can be computed from the characters of
the $u(\delta,n)$, $\delta\in D$, $n\in\mathbb{N}$. It is remarkable that the
characters of the $u(\delta,n)$ can be computed, or more precisely, expressed
in terms of characters of square integrable modulo center representations. We
also give composition series of ends of complementary series. This information
is important for the topology of the unitary dual (see [42]).
### 14.1. $A=\mathbb{C}$
Let $\gamma=\gamma(x,y)$ be a character of $\mathbb{C}^{\times}$,
$x,y\in\mathbb{C}$, $x-y=r\in\mathbb{Z}$. The representation
$u(\gamma(x,y),n)$ is the character
$\gamma\circ\det$
of $\mathbf{GL}(n,\mathbb{C})$. There is a formula, due to Zuckerman, for the
trivial character of any real reductive group, obtained from a finite length
resolution of the trivial representation by standard modules in the category
$\mathcal{M}(G)$.
For $\mathbf{GL}(n,\mathbb{C})$, this formula reads, denoting by
$\mathbf{1}_{\mathbf{GL}(n,\mathbb{C})}$ the trivial representation,
$\mathbf{1}_{\mathbf{GL}(n,\mathbb{C})}=u(\gamma(0,0),n)=\sum_{w\in\mathfrak{S}_{n}}(-1)^{l(w)}\prod_{i=1}^{n}\gamma(\frac{n-1}{2}-i+1,\frac{n-1}{2}-w(i)+1)$
From this, we get by tensoring with $\gamma(x,y)$,
(14.1)
$u(\gamma(x,y),n)=\sum_{w\in\mathfrak{S}_{n}}(-1)^{l(w)}\prod_{i=1}^{n}\gamma(x+\frac{n-1}{2}-i+1,y+\frac{n-1}{2}-w(i)+1)$
Set
$\gamma_{i,j}=\gamma(x+\frac{n-1}{2}-i+1,y+\frac{n-1}{2}-j+1)\in\mathcal{R}$.
The formula above becomes:
(14.2) $u(\gamma(x,y),n)=\det((\gamma_{i,j})_{1\leq i,j\leq n})$
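For instance, for $n=2$, expanding this determinant (the products being taken in $\mathcal{R}$) gives
$u(\gamma(x,y),2)=\gamma(x+\tfrac{1}{2},y+\tfrac{1}{2})\times\gamma(x-\tfrac{1}{2},y-\tfrac{1}{2})-\gamma(x+\tfrac{1}{2},y-\tfrac{1}{2})\times\gamma(x-\tfrac{1}{2},y+\tfrac{1}{2}).$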
From the Lewis Carroll identity ([14]), we easily deduce a formula
for composition series of ends of complementary series. This was obtained
previously by Tadić [44], using partial results of Sahi [37], but the proof
was complicated. To state the formula conveniently, set
$\gamma(x,y)=\delta(\beta,r)$
with $r=x-y$ and $2\beta=x+y$.
###### Proposition 14.1.
With the notation above, and $n\geq 2$
(14.3)
$\displaystyle\nu^{-\frac{1}{2}}u(\delta(\beta,r),n)\times\nu^{\frac{1}{2}}u(\delta(\beta,r),n)$
$\displaystyle\quad\quad=u(\delta(\beta,r),n+1)\times u(\delta(\beta,r),n-1)$
$\displaystyle\quad\quad\quad\quad\quad+u(\delta(\beta,r+1),n)\times
u(\delta(\beta,r-1),n)$
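For the reader's convenience, here is a sketch of how (14.3) follows from (14.2). The Lewis Carroll (Desnanot–Jacobi) identity asserts that, for a square matrix $M$ of size $n+1$,
$\det(M)\,\det(M_{1,n+1}^{1,n+1})=\det(M_{1}^{1})\,\det(M_{n+1}^{n+1})-\det(M_{1}^{n+1})\,\det(M_{n+1}^{1}),$
where $M_{i}^{j}$ denotes $M$ with row $i$ and column $j$ removed, and $M_{1,n+1}^{1,n+1}$ denotes $M$ with rows and columns $1$ and $n+1$ removed. Applied to the matrix $(\gamma_{i,j})_{1\leq i,j\leq n+1}$ attached by (14.2) to $u(\gamma(x,y),n+1)$, the minors $\det(M_{1}^{1})$ and $\det(M_{n+1}^{n+1})$ are $\nu^{-\frac{1}{2}}u(\delta(\beta,r),n)$ and $\nu^{\frac{1}{2}}u(\delta(\beta,r),n)$, the minors $\det(M_{1}^{n+1})$ and $\det(M_{n+1}^{1})$ are $u(\delta(\beta,r-1),n)$ and $u(\delta(\beta,r+1),n)$, and $\det(M_{1,n+1}^{1,n+1})=u(\delta(\beta,r),n-1)$; rearranging gives (14.3).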
### 14.2. $A=\mathbb{R}$
Let $\eta(x,y)$ be an essentially square integrable modulo center
representation of $\mathbf{GL}(2,\mathbb{R})$, $x,y\in\mathbb{C}$,
$x-y=r\in\mathbb{N}^{*}$. Since
$u(\eta(x,y),n)=-\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(u(\gamma(x,y),n)),$
we get from (14.1) that
$u(\eta(x,y),n)=-\sum_{w\in\mathfrak{S}_{n}}(-1)^{l(w)}\prod_{i=1}^{n}\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x+\frac{n-1}{2}-i+1,y+\frac{n-1}{2}-w(i)+1)).$
We have noticed in the proof of Lemma 13.5 that
$-\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x-n,y+n))$
is a coherent family of representations of $\mathbf{GL}(2,\mathbb{R})$ such
that
$-\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x-n,y+n))=\eta(x-n,y+n)$
when $x-n>y+n$. Set
$\tilde{\eta}(x-n,y+n)=-\widehat{\mathcal{R}}_{\mathfrak{q}_{\mathbb{C}}}^{\mathbb{R}}(\gamma(x-n,y+n))$.
Then we get
$u(\eta(x,y),n)=(-1)^{n+1}\sum_{w\in\mathfrak{S}_{n}}(-1)^{l(w)}\prod_{i=1}^{n}\tilde{\eta}(x+\frac{n-1}{2}-i+1,y+\frac{n-1}{2}-w(i)+1).$
Set
$\tilde{\eta}_{i,j}=\tilde{\eta}(x+\frac{n-1}{2}-i+1,y+\frac{n-1}{2}-j+1)$.
The formula above becomes:
(14.4) $u(\eta(x,y),n)=(-1)^{n+1}\det((\tilde{\eta}_{i,j})_{1\leq i,j\leq n})$
Again from the Lewis Carroll identity ([14]), we easily deduce a formula for
composition series of ends of complementary series.
###### Proposition 14.2.
With the notation above, $n\geq 2$, $x-y>1$,
(14.5)
$\displaystyle\nu^{-\frac{1}{2}}u(\eta(x,y),n)\times\nu^{\frac{1}{2}}u(\eta(x,y),n)$
$\displaystyle\quad\quad=u(\eta(x,y),n+1)\times u(\eta(x,y),n-1)$
$\displaystyle\quad\quad\quad\quad\quad+u(\eta(x+\frac{1}{2},y-\frac{1}{2}),n)\times
u(\eta(x-\frac{1}{2},y+\frac{1}{2}),n).$
If $x=y+1$, recall the convention that
$\eta(x-\frac{1}{2},x-\frac{1}{2})=\delta(x-\frac{1}{2},0)\times\delta(x-\frac{1}{2},1).$
We get
(14.6)
$\displaystyle\nu^{-\frac{1}{2}}u(\eta(x,x-1),n)\times\nu^{\frac{1}{2}}u(\eta(x,x-1),n)$
$\displaystyle\quad\quad=u(\eta(x,x-1),n+1)\times u(\eta(x,x-1),n-1)$
$\displaystyle\quad\quad\quad\quad\quad+u(\eta(x+\frac{1}{2},x-\frac{3}{2}),n)\times[u(\delta(x-\frac{1}{2},0),n)\times
u(\delta(x-\frac{1}{2},1),n)]$
###### Remark 14.3.
We cannot deduce by our method the composition series of the ends of
complementary series for $u(\delta,n)$ when $\delta\in D_{1}$. There is still
a formula for the character of $u(\delta,n)$, since
$u(\delta,n)=\delta\circ\det$ is a one-dimensional representation (Zuckerman),
but no interpretation of the right-hand side of this formula as a
determinant, so we cannot apply the Lewis Carroll identity.
### 14.3. $A=\mathbb{H}$
The discussion is similar to the real case for the $u(\eta^{\prime}(x,y),n)$
when $x-y\geq 2$.
###### Proposition 14.4.
With the notation above, $n\geq 2$, $x-y\geq 2$,
(14.7)
$\displaystyle\nu^{-\frac{1}{2}}u(\eta^{\prime}(x,y),n)\times\nu^{\frac{1}{2}}u(\eta^{\prime}(x,y),n)$
$\displaystyle\quad\quad=u(\eta^{\prime}(x,y),n+1)\times
u(\eta^{\prime}(x,y),n-1)$
$\displaystyle\quad\quad\quad\quad\quad+u(\eta^{\prime}(x+\frac{1}{2},y-\frac{1}{2}),n)\times
u(\eta^{\prime}(x-\frac{1}{2},y+\frac{1}{2}),n).$
If $y=x-1$, we get the same kind of character formulas, but for the
$\bar{u}(\eta^{\prime}(x,y),n)$ :
(14.8)
$\bar{u}(\eta^{\prime}(x,x-1),n)=(-1)^{n+1}\det((\tilde{\eta}^{\prime}_{i,j})_{1\leq
i,j\leq n}),$
where
$\tilde{\eta}^{\prime}_{i,j}=\tilde{\eta}^{\prime}(x+\frac{n-1}{2}-i+1,y+\frac{n-1}{2}-j+1)$,
and $\tilde{\eta}^{\prime}$ denotes the coherent family coinciding with $\eta^{\prime}$
when $x-y$ is positive, as in the real case.
Again from the Lewis Carroll identity, we deduce the following (with $2n$ in
place of $n$):
(14.9)
$\displaystyle\nu^{-\frac{1}{2}}\bar{u}(\eta^{\prime}(x,x-1),2n)\times\nu^{\frac{1}{2}}\bar{u}(\eta^{\prime}(x,x-1),2n)$
$\displaystyle\quad\quad=\bar{u}(\eta^{\prime}(x,x-1),2n+1)\times\bar{u}(\eta^{\prime}(x,x-1),2n-1)$
$\displaystyle\quad\quad\quad\quad\quad+\bar{u}(\eta^{\prime}(x+\frac{1}{2},x-\frac{1}{2}),2n)\times\bar{u}(\eta^{\prime}(x-\frac{1}{2},x-\frac{3}{2}),2n).$
The $\bar{u}(\eta^{\prime}(.,.),.)$ can be expressed as products of
$u(\eta^{\prime}(.,.),.)$, explicitly:
$\displaystyle\bar{u}(\eta^{\prime}(x,x-1),2n)$
$\displaystyle=u(\eta^{\prime}(x+\frac{1}{2},x-\frac{1}{2}),n)\times
u(\eta^{\prime}(x-\frac{1}{2},x-\frac{3}{2}),n)$
$\displaystyle\bar{u}(\eta^{\prime}(x,x-1),2n+1)$
$\displaystyle=u(\eta^{\prime}(x,x-1),n+1)\times u(\eta^{\prime}(x,x-1),n)$
Substituting this in (14.9), and using the fact that the ring $\mathcal{R}$
is a domain, we find:
###### Proposition 14.5.
(14.10) $\displaystyle\nu^{-1}u(\eta^{\prime}(x,x-1),n)\times\nu
u(\eta^{\prime}(x,x-1),n)$
$\displaystyle\quad\quad=u(\eta^{\prime}(x,x-1),n+1)\times
u(\eta^{\prime}(x,x-1),n-1)$
$\displaystyle\quad\quad\quad\quad\quad+u(\eta^{\prime}(x+\frac{1}{2},x-\frac{1}{2}),n)\times
u(\eta^{\prime}(x-\frac{1}{2},x-\frac{3}{2}),n).$
## 15\. Compatibility and further comments
Let $F$ be a local field (archimedean or non archimedean of any
characteristic) and $A$ a central division algebra of dimension $d^{2}$ over
$F$ (if $F$ is archimedean, then $d\in\\{1,2\\}$). If $g\in G^{F}_{nd}$ is a
regular semisimple element, we say that $g$ transfers if there exists an
element $g^{\prime}$ of $G_{n}^{A}$ which corresponds to $g$ (see Section 4).
Then $g$ transfers if and only if its characteristic polynomial breaks into a
product of irreducible polynomials of degrees divisible by $d$. We say that
$\pi\in\mathcal{R}(G_{nd}^{F})$ is $d$-compatible if $\mathbf{LJ}(\pi)\neq 0$.
Otherwise stated, $\pi$ is $d$-compatible if and only if its character does
not identically vanish on the set of elements of $G_{nd}^{F}$ which transfer.
This justifies why the definition depends only on $d$ (and not on $D$). We
then have the following results:
###### Proposition 15.1.
Let $\pi_{i}\in\mathbf{Irr}_{n_{i}}^{F}$, $1\leq i\leq k$, with
$\sum_{i}n_{i}=n$. Then $\pi_{1}\times\pi_{2}\times...\times\pi_{k}$ is
$d$-compatible if and only if for all $1\leq i\leq k$, $d$ divides $n_{i}$ and
$\pi_{i}$ is $d$-compatible.
Proof. If an element $g\in G_{n}^{F}$ is conjugate to an element of a Levi
subgroup of $G_{n}^{F}$, say $(g_{1},g_{2},...,g_{k})\in
G_{(n_{1},n_{2},...,n_{k})}$ with $g_{i}\in G_{n_{i}}^{F}$, then the
characteristic polynomial of $g$ is the product of the characteristic
polynomials of $g_{i}$. It follows that, if $g$ is semisimple regular, it
transfers if and only if $d|n_{i}$ for all $i$ and $g_{i}$ transfers.
It is a general fact that for a fully induced representation of a group $G$
from a Levi subgroup $M$, the character is zero on regular semisimple elements
which are not conjugate in $G$ to an element of $M$. Moreover, one has a
precise formula of the character of the fully induced representation in terms
of the character of the inducing representation (see [20] and [15],
Proposition 3 for non-archimedean $F$, [25] Section 13, for archimedean $F$).
The proposition follows.∎
We now define an order $<<$ on $\mathbf{Irr}^{A}_{n}$, finer than the Bruhat
order $<$. If
$\pi=\mathbf{Lg}(\delta_{1},\delta_{2},...,\delta_{k})$ and
$\pi^{\prime}=\mathbf{Lg}(\delta^{\prime}_{1},\delta^{\prime}_{2},...,\delta^{\prime}_{k^{\prime}})$
are in $\mathbf{Irr}^{A}_{n}$, we set $\pi<<\pi^{\prime}$ if
$\mathbf{Lg}({\bf C}^{-1}(\delta_{1}),{\bf C}^{-1}(\delta_{2}),...,{\bf
C}^{-1}(\delta_{k}))<\mathbf{Lg}({\bf C}^{-1}(\delta^{\prime}_{1}),{\bf
C}^{-1}(\delta^{\prime}_{2}),...,{\bf C}^{-1}(\delta^{\prime}_{k^{\prime}}))$
in $\mathbf{Irr}_{nd}^{F}$.
###### Proposition 15.2.
Let $\delta_{i}\in D_{n_{i}}^{F}$, $1\leq i\leq k$. Assume for all $1\leq
i\leq k$ we have $d|n_{i}$, and set $\delta^{\prime}_{i}={\bf
C}(\delta_{i})\in D_{n_{i}}^{A}$. Then
$\mathbf{Lg}(\delta_{1},\delta_{2},...,\delta_{k})$ is $d$-compatible and one has:
$\mathbf{LJ}(\mathbf{Lg}(\delta_{1},\delta_{2},...,\delta_{k}))=(-1)^{nd-n}\mathbf{Lg}(\delta^{\prime}_{1},\delta^{\prime}_{2},...,\delta^{\prime}_{k})+\sum_{j\in
J}m_{j}\pi^{\prime}_{j}$
where $J$ is empty or finite, $m_{j}\in\mathbb{Z}^{*}$,
$\pi^{\prime}_{j}\in\mathbf{Irr}_{\sum n_{i}}^{A}$ and
$\pi^{\prime}_{j}<<\mathbf{Lg}(\delta^{\prime}_{1},\delta^{\prime}_{2},...,\delta^{\prime}_{k})$
for all $j\in J$.
Proof. One applies Theorem 6.1 and an induction on the number of
representations smaller than
$\mathbf{Lg}(\delta_{1},\delta_{2},...,\delta_{k})$. See [6], Proposition
3.10. ∎
###### Proposition 15.3.
If $\delta\in D_{n}^{F}$, set $\deg(\delta)=n$ and let $l(\delta)$ be the
length of ${\bf Supp}(\delta)$ (notice that $l(\delta)|\deg(\delta)$). Then
a) $u(\delta,k)$ is $d$-compatible if and only if either $d|\deg(\delta)$ or
$d|k\frac{\deg(\delta)}{l(\delta)}$.
b) there exists $k_{\delta}\in\mathbb{N}^{*}$ such that $u(\delta,k)$ is
$d$-compatible if and only if $k_{\delta}|k$. Moreover, $k_{\delta}|d$.
Proof. a) is in Section 3.5 of [7] for the non-archimedean case. It follows
from Theorem 13.8 in the archimedean case.
b) follows easily from a). For the archimedean (non-trivial, i.e.
$A=\mathbb{H}$) case, $d=2$ and the transfer theorem 13.8 shows that
— if $\deg(\delta)=2$, then $u(\delta,k)$ is $2$-compatible for all $k$ (hence
$k_{\delta}=1$) and
— if $\deg(\delta)=1$ then $u(\delta,k)$ is $2$-compatible if (and only if,
because of the dimension of $G_{k}^{F}$) $k$ is even (hence $k_{\delta}=2$).∎
Let $\gamma$ be an irreducible generic unitary representation of $G^{F}_{n}$.
As $\gamma$ is generic, it is fully induced from an essentially square
integrable representation ([52] for non-archimedean fields, Section 8 for
archimedean fields). Then as $\gamma$ is unitary, thanks to the classification
of the unitary spectrum ([41], [51] and Section 8 of the present paper), $\gamma$
is an irreducible product
$\sigma_{1}\times\sigma_{2}\times...\times\sigma_{p}\times\pi_{1}\times\pi_{2}\times...\times\pi_{l}$,
where, for $1\leq i\leq p$, $\sigma_{i}\in D^{u,F}$, and, for $1\leq j\leq l$,
$\pi_{j}=\pi(\delta_{j},1;\alpha_{j})$ for some $\delta_{j}\in D^{u,F}$ and
some $\alpha_{j}\in]0,\frac{1}{2}[$.
Using the Langlands classification, it is easy to see that the representation
$\nu^{\frac{k-1}{2}}\gamma\times\nu^{\frac{k-1}{2}-1}\gamma\times...\times\nu^{-\frac{k-1}{2}}\gamma$
has a unique quotient $u(\gamma,k)$, and one has
$u(\gamma,k)=u(\sigma_{1},k)\times u(\sigma_{2},k)\times...\times
u(\sigma_{p},k)\times\pi(\delta_{1},k;\alpha_{1})\times\pi(\delta_{2},k;\alpha_{2})\times...\times\pi(\delta_{l},k;\alpha_{l})$
(see for instance [6] Section 4.1). The local components of cuspidal
automorphic representations of $\mathbf{GL}_{n}$ over adeles of global fields
are unitary generic representations ([39]). According to the classification of
the residual spectrum ([31]), it follows that the local components of residual
automorphic representations of the linear group are of type $u(\gamma,k)$.
###### Proposition 15.4.
Let $\gamma$ be a unitary generic representation of $G_{n}^{F}$ for some
$n\in\mathbb{N}^{*}$. There exists $k_{\gamma}$ such that $u(\gamma,k)$ is
$d$-compatible if and only if $k_{\gamma}|k$. Moreover, $k_{\gamma}|d$.
Proof. The (easy) proof given in [7] Section 3.5 for non-archimedean fields
works also for archimedean fields. If
$u(\gamma,k)=u(\sigma_{1},k)\times u(\sigma_{2},k)\times...\times
u(\sigma_{p},k)\times\pi(\delta_{1},k;\alpha_{1})\times\pi(\delta_{2},k;\alpha_{2})\times...\times\pi(\delta_{l},k;\alpha_{l}),$
then $u(\gamma,k)$ is $d$-compatible if and only if all the $u(\sigma_{i},k)$
and $u(\delta_{j},k)$ are compatible (Proposition 15.1). Then Prop. 15.3
implies Prop. 15.4. If $F=\mathbb{R}$, $k_{\gamma}=1$ if and only if all the
$\sigma_{i}$ and $\delta_{j}$ are in $D_{2}$. If not, $k_{\gamma}=2$.∎
## 16\. Notation for the global case
Let $F$ be a global field of characteristic zero and $D$ a central division
algebra over $F$ of dimension $d^{2}$. Let $n\in\mathbb{N}^{*}$. Set
$A=M_{n}(D)$. For each place $v$ of $F$ let $F_{v}$ be the completion of $F$
at $v$ and set $A_{v}=A\otimes F_{v}$. For every place $v$ of $F$, $A_{v}$ is
isomorphic to $M_{r_{v}}(D_{v})$ for some positive integer $r_{v}$ and some
central division algebra $D_{v}$ of dimension $d_{v}^{2}$ over $F_{v}$ such
that $r_{v}d_{v}=nd$. We fix once and for all an isomorphism $A_{v}\simeq
M_{r_{v}}(D_{v})$ and identify these two algebras. We say that $M_{n}(D)$ is
split at a place $v$ if $d_{v}=1$. The set $V$ of places where $M_{n}(D)$ is
not split is finite. For each $v$, $d_{v}$ divides $d$, and moreover $d$ is
the least common multiple of the $d_{v}$ over all places $v$.
Let $G^{\prime}(F)$ be the group $A^{\times}=\mathbf{GL}_{n}(D)$. For every
place $v$ of $F$, set
$G^{\prime}_{v}=A_{v}^{\times}=\mathbf{GL}_{r_{v}}(D_{v})$. For every finite
place $v$ of $F$, we set $K_{v}=\mathbf{GL}_{r_{v}}(O_{v})$, where $O_{v}$ is
the ring of integers of $D_{v}$. Let ${\mathbb{A}}$ be the ring of adèles of
$F$. We define the group $G^{\prime}({\mathbb{A}})$ of adèles of
$G^{\prime}(F)$ as the restricted product of the $G^{\prime}_{v}$ over all
$v$, with respect to the family of open compact subgroups $K_{v}$, $v$ finite.
Let $G^{\prime}_{\infty}$ be the direct product of $G^{\prime}_{v}$ over the
set of infinite places of $F$, and $G^{\prime}_{f}$ the restricted product
of $G^{\prime}_{v}$ over finite places, with respect to the open compact
subgroups $K_{v}$. The group $G^{\prime}({\mathbb{A}})$ decomposes into the
direct product
$G^{\prime}({\mathbb{A}})=G^{\prime}_{\infty}\times G^{\prime}_{f}.$
Fix maximal compact subgroups $K_{v}$ at archimedean places $v$ as before,
$K_{v}=\mathbf{O}(n),\mathbf{U}(n),\mathbf{Sp}(n)$ according as
$G^{\prime}_{v}$ is $\mathbf{GL}_{n}(\mathbb{R})$,
$\mathbf{GL}_{n}(\mathbb{C})$ or $\mathbf{GL}_{n}(\mathbb{H})$. Let
$K_{\infty}$ (resp. $K_{f}$) be the compact subgroup of $G^{\prime}_{\infty}$ (resp. of
$G^{\prime}_{f}$) which is the direct product of $K_{v}$ over the infinite
places (resp. finite places) $v$. Let $K$ be $K_{\infty}\times K_{f}$ as a
(compact) subgroup of $G^{\prime}({\mathbb{A}})$. Let
${\mathfrak{g}}_{\infty}$ be the Lie algebra of $G^{\prime}_{\infty}$.
An admissible $G^{\prime}({\mathbb{A}})$-module is a linear space $V$ which is
both a $({\mathfrak{g}}_{\infty},K_{\infty})$-module and a $G^{\prime}_{f}$
smooth module such that the actions of $({\mathfrak{g}}_{\infty},K_{\infty})$
and $G^{\prime}_{f}$ commute, and for every equivalence class of irreducible continuous representations $\pi$ of $K$ the $\pi$-isotypic component of $V$ is finite dimensional. It is irreducible if it has no proper sub-$G^{\prime}({\mathbb{A}})$-module, and unitary if it admits a Hermitian product
which is invariant under both actions of
$({\mathfrak{g}}_{\infty},K_{\infty})$ and $G^{\prime}_{f}$.
If $V$ is an irreducible admissible $G^{\prime}({\mathbb{A}})$-module, then $V$ is
isomorphic to a tensor product $V_{\infty}\otimes V_{f}$, where $V_{\infty}$
is an irreducible $({\mathfrak{g}}_{\infty},K_{\infty})$-module and $V_{f}$ is
an irreducible smooth representation of $G^{\prime}_{f}$.
If $(\pi,H)$ is a unitary irreducible admissible $G^{\prime}_{f}$-module, then $\pi$
breaks into a restricted tensor product $\otimes_{v\text{ finite}}\pi_{v}$
where $\pi_{v}$ is a unitary irreducible representation of $G^{\prime}_{v}$
([23], [30], [18] or [17]). For almost all $v$, $\pi_{v}$ has a fixed vector
under the maximal compact subgroup $K_{v}$. Such a representation is called
spherical. The $\pi_{v}$ are determined by $\pi$. Such a $\pi_{v}$ is called
the local component of $\pi$ at the place $v$. The set of local components
$\pi_{v}$ determines $\pi$.
Let $Z(F)$ be the center of $G^{\prime}(F)$ and, for every place $v$, let
$Z_{v}$ be the center of $G^{\prime}_{v}$. Then we identify the center
$Z({\mathbb{A}})$ of $G^{\prime}({\mathbb{A}})$ with the restricted product of
the $Z_{v}$, with respect to the open compact subgroups $Z_{v}\cap K_{v}$ at
finite places. For any finite $v$, we fix a Haar measure $dg_{v}$ on
$G^{\prime}_{v}$ such that the volume of $K_{v}$ is one, and a Haar measure
$dz_{v}$ on $Z_{v}$ such that the volume of $Z_{v}\cap K_{v}$ is one. The set
of measures $\\{dg_{v}\\}_{v\ \text{finite}}$ induces a well defined Haar measure on the locally compact group $G^{\prime}_{f}$, and $\\{dz_{v}\\}_{v\ \text{finite}}$ induces a well defined measure on its center (see for instance
[34] where measures on restricted products are explained).
For the archimedean groups we choose Duflo-Vergne’s normalization, defined as
follows: let $G$ be a reductive group (complex or real), and pick a
$G$-invariant symmetric, non-degenerate bilinear form $\kappa$ on the Lie
algebra $\mathfrak{g}$. Then $\mathfrak{g}$ will be endowed with the Lebesgue
measure $dX$ such that the volume of a parallelotope supported by a basis
$\\{X_{1},\ldots,X_{n}\\}$ of $\mathfrak{g}$ is equal to
$|\det(\kappa(X_{i},X_{j}))|^{\frac{1}{2}}$ and $G$ will be endowed with the
Haar measure tangent to $dX$. If $G^{\prime}$ is a closed subgroup of $G$,
such that $\kappa$ is non-degenerate on its Lie algebra
$\mathfrak{g}^{\prime}$, we endow $G^{\prime}$ with the Haar measure
determined by $\kappa$ as above. This gives measures on $G^{\prime}_{\infty}$
and its center.
We fix now the measure $dg$ on
$G^{\prime}({\mathbb{A}})=G^{\prime}_{\infty}\times G^{\prime}_{f}$ (resp.
$dz$ on $Z({\mathbb{A}})$) which is the product of measures chosen before for
the infinite and the finite part. We fix a measure on
$Z({\mathbb{A}})\backslash G^{\prime}({\mathbb{A}})$ which is the quotient
measure $dz\backslash dg$.
We see $G^{\prime}(F)$ as a subgroup of $G^{\prime}({\mathbb{A}})$ via the
diagonal embedding. As $(G^{\prime}(F)\cap Z({\mathbb{A}}))\backslash
G^{\prime}(F)$ is a discrete subgroup of $Z({\mathbb{A}})\backslash
G^{\prime}({\mathbb{A}})$, $dz\backslash dg$ defines a measure on the quotient
space $Z({\mathbb{A}})G^{\prime}(F)\backslash G^{\prime}({\mathbb{A}})$. The
measure of the space $Z({\mathbb{A}})G^{\prime}(F)\backslash
G^{\prime}({\mathbb{A}})$ is finite.
Fix a unitary smooth character $\omega$ of $Z({\mathbb{A}})$, trivial on
$Z(F)$.
Let $L^{2}(Z({\mathbb{A}})G^{\prime}(F)\backslash
G^{\prime}({\mathbb{A}});\omega)$ be the space of classes of functions $f$
defined on $G^{\prime}({\mathbb{A}})$ with values in ${\mathbb{C}}$ such that
i) $f$ is left invariant under $G^{\prime}(F)$,
ii) $f$ satisfies $f(zg)=\omega(z)f(g)$ for all $z\in Z({\mathbb{A}})$ and
almost all $g\in G^{\prime}({\mathbb{A}})$,
iii) $|f|^{2}$ is integrable over $Z({\mathbb{A}})G^{\prime}(F)\backslash
G^{\prime}({\mathbb{A}})$.
Let $R^{\prime}_{\omega}$ be the representation of $G^{\prime}({\mathbb{A}})$
in $L^{2}(Z({\mathbb{A}})G^{\prime}(F)\backslash
G^{\prime}({\mathbb{A}});\omega)$ by right translations. As explained in [13],
each irreducible subspace of $L^{2}(Z({\mathbb{A}})G^{\prime}(F)\backslash
G^{\prime}({\mathbb{A}});\omega)$ gives rise to a unique unitary irreducible
admissible $G^{\prime}({\mathbb{A}})$-module. We call such a
$G^{\prime}({\mathbb{A}})$-module a discrete series of
$G^{\prime}({\mathbb{A}})$.
Every discrete series of $G^{\prime}({\mathbb{A}})$ with the central character
$\omega$ appears in $R^{\prime}_{\omega}$ with a finite multiplicity ([18]).
Let $R^{\prime}_{\omega,disc}$ be the subrepresentation of
$R^{\prime}_{\omega}$ generated by the discrete series. If $\pi$ is a discrete
series we call the multiplicity of $\pi$ in the discrete spectrum the
multiplicity with which $\pi$ appears in $R^{\prime}_{\omega,disc}$.
Notation. Fix $n$ and $D$ as before. The same constructions obviously work
starting with $A=M_{nd}(F)$ instead of $A=M_{n}(D)$. We denote by
$G({\mathbb{A}})$ the group of adelic points of $A^{\times}$ and modify
all notation accordingly.
## 17\. A second look at some local results
We would like to point out that some of the archimedean results described in
this paper may be proved by global methods and local tricks as in the non-
archimedean case ([7] and [6]), avoiding any reference to cohomological
induction. These are $U(1)$ for $\mathbf{GL}(n,\mathbb{H})$, the fact that
products of representations in $\mathcal{U}_{\mathbb{H}}$ are irreducible and
the Jacquet-Langlands transfer of unitary representations (using $U(0)$ for
$\mathbf{GL}(n,\mathbb{R})$ ([11]), but not on $\mathbf{GL}(n,\mathbb{H})$).
We sketch these proofs here.
### 17.1. $U(1)$ and the transfer of $u(\delta,k)$
Let ${\bf LJ}:\mathcal{R}_{2n}^{\mathbb{R}}\to\mathcal{R}_{n}^{\mathbb{H}}$ be
the morphism between Grothendieck groups extending the classical Jacquet-
Langlands correspondence for square integrable representations (Section 4). We
give here a second proof of the following proposition.
###### Proposition 17.1.
(a) If $\chi\in D_{1}$, then ${\bf LJ}(u(\chi,2n))=\chi^{\prime}_{n}$.
(b) If $\delta\in D_{2}$ and $\delta^{\prime}=\bf{C}(\delta)$, then ${\bf
LJ}(u(\delta,n))=(-1)^{n}\bar{u}(\delta^{\prime},n)$.
(c) The statement $U(1)$, i.e. that the $u(\delta^{\prime},n)$ are unitary, is true for
$\mathbf{GL}(n,\mathbb{H})$.
The first assertion (a) is obvious since $u(\chi,2n)=\chi_{2n}$ and the
equality of characters may be checked directly. To prove (c), recall we have
(17.1) ${\bf
LJ}(u(\delta,n))=(-1)^{n}(\bar{u}(\delta^{\prime},n)+\sum_{i=1}^{k}a_{i}u_{i}),$
where the $u_{i}$ are irreducible non-equivalent representations of
$\mathbf{GL}(n,\mathbb{H})$, non equivalent to $\bar{u}(\delta^{\prime},n)$,
and $a_{i}$ are non-zero integers (Proposition 15.2).
We now claim that all the irreducible representations on the right hand side
of the equality are unitary and the $a_{i}$ are all positive. One may proceed
as in [6]: choose a global field $F$ and a division algebra $D$ over $F$
such that, if $G^{\prime}({\mathbb{A}})$ is the adele group of $D^{\times}$,
we have $G^{\prime}_{v}=GL_{n}(\mathbb{H})$ for some place $v$. As $\delta\in
D_{2}$, there exists a cuspidal representation $\rho$ of
$G({\mathbb{A}})=GL_{2n}({\mathbb{A}})$ such that $\rho_{v}=\delta$. According
to the classification of the residual spectrum for $G({\mathbb{A}})$ ([31])
there exists a residual representation $\pi$ of $G({\mathbb{A}})$ such that
$\pi_{v}=u(\delta,n)$. Comparing then the trace formula from [3] (or the
simple trace formula from [2]) of $G({\mathbb{A}})$ and
$G^{\prime}({\mathbb{A}})$, one gets using standard simplifications and
multiplicity one on the $G({\mathbb{A}})$ side a local formula ${\bf
LJ}(u(\delta,n))=\pm\sum_{j=1}^{k}b_{j}w_{j}$ where the $b_{j}$ are
multiplicities of representations, hence positive, and the $w_{j}$ are local
components of global discrete series, hence unitary. By linear independence
of characters on $\mathbf{GL}(n,\mathbb{H})$, this formula is the same as the
formula (17.1) which implies in particular $\bar{u}(\delta^{\prime},n)$ is
unitary (see [6], Cor. 4.8(a)). This implies the assertion $U(1)$, since when
$\delta^{\prime}$ is not a character one has
$\bar{u}(\delta^{\prime},n)=u(\delta^{\prime},n)$, while when
$\delta^{\prime}$ is a (unitary) character we know $u(\delta^{\prime},k)$ is
the unitary character $\delta^{\prime}\circ RN$. So (c) is proved.
We now prove (b). We want to prove that on the right hand side of the equality
(17.1) there is just one term, $\bar{u}(\delta^{\prime},n)$. If $\pi$ is an
irreducible unitary representation of $\mathbf{GL}(n,\mathbb{R})$ we say $\pi$
is semirigid if it is a product of representations $u(\delta,k)$. We already
showed in the previous paragraph that all these representations $u(\delta,k)$
correspond by ${\bf LJ}$ to zero or a sum of unitary representations. As ${\bf
LJ}$ commutes with products and a product of irreducible unitary
representations is a sum of irreducible unitary representations, it follows
that any sum of semirigid irreducible unitary representations of some
$\mathbf{GL}(2n,\mathbb{R})$ corresponds to zero or a sum of unitary
representations of $\mathbf{GL}(n,\mathbb{H})$. The relation (17.1) shows now
that for all $\alpha\in{\mathbb{R}}$, ${\bf
LJ}(\pi(\delta,n;\alpha))=\nu^{\prime\alpha}(\sum_{i=0}^{k}a_{i}u_{i})\times\nu^{\prime-\alpha}(\sum_{i=0}^{k}a_{i}u_{i})$
where $a_{0}=1$, $u_{0}=\bar{u}(\delta^{\prime},n)$. When $\alpha=\frac{1}{2}$
on the left hand side of the equality we obtain a sum of semirigid unitary
representations (see Proposition 14.5 for precise formula), so on the right
hand side we should have a sum of unitary representations. But this is
impossible as soon as the sum $\sum_{i=1}^{k}a_{i}u_{i}$ contains a
representation $u_{1}$, since then the mixed product
$\nu^{\prime-\frac{1}{2}}u_{0}\times\nu^{\prime\frac{1}{2}}u_{1}$ contains a
non hermitian subquotient (the “bigger” one for the Bruhat order for example).
This shows there is only one $u_{i}$, $i=0$, and so
$\mathbf{LJ}(u(\delta,n))=(-1)^{n}\bar{u}(\delta^{\prime},n)$. ∎
### 17.2. Irreducibility and transfer of all unitary representations
We know now that the representations in $\mathcal{U}_{\mathbb{H}}$ are all
unitary. To show that their products remain irreducible, we may use the
irreducibility trick in [7], Proposition 2.13 which reduces the problem to
showing that $u(\delta^{\prime},k)\times u(\delta^{\prime},k)$ is irreducible for
all discrete series $\delta^{\prime}$ of $\mathbf{GL}(1,\mathbb{H})$ and all
$k\in\mathbb{N}^{*}$. Let $\delta$ be a square integrable representation of
$\mathbf{GL}(2,\mathbb{R})$ such that $\mathbf{LJ}(\delta)=\delta^{\prime}$.
It follows that we have the equality $\mathbf{LJ}(u(\delta,k)\times
u(\delta,k))=\bar{u}(\delta^{\prime},k)\times\bar{u}(\delta^{\prime},k)$. On
the left hand side we have the irreducible representation $M=u(\delta,k)\times
u(\delta,k)$. On the right hand side we have a sum of unitary representations,
the product
$M^{\prime}=\bar{u}(\delta^{\prime},k)\times\bar{u}(\delta^{\prime},k)$ (we
already know $\bar{u}(\delta^{\prime},k)$ is unitary), which we want to show
actually has a single term. Apply the same $\alpha$ trick as before: we
know that $\pi(M,\alpha)$ corresponds to $\pi(M^{\prime},\alpha)$. For
$\alpha=\frac{1}{2}$ the first representation breaks into a sum of semirigid
unitary representations, while the second is a sum containing non unitary
representations unless $M^{\prime}$ contains a single term. Notice that the
Langlands quotient theorem and $U(4)$ guarantee $M^{\prime}$ has a subquotient
which appears with multiplicity one, so either $M^{\prime}$ is a sum
containing two different terms, or is irreducible. So the square of
$\bar{u}(\delta^{\prime},k)$ is irreducible for all $k$. If $\delta^{\prime}$
is not a character, then $u(\delta^{\prime},k)=\bar{u}(\delta^{\prime},k)$ so
the square of $u(\delta^{\prime},k)$ is irreducible. If $\delta^{\prime}$ is a
character then we saw
$\bar{u}(\delta^{\prime},2k+1)=u(\delta^{\prime},k)\times
u(\delta^{\prime},k+1)$ and the result follows again.
This implies now: if $u$ is an irreducible unitary representation of
$\mathbf{GL}(2n,\mathbb{R})$, then $\mathbf{LJ}(u)$ is either zero, or plus or
minus an irreducible unitary representation of $\mathbf{GL}(n,\mathbb{H})$.
The proofs here are based on the trace formula and do not involve
cohomological induction. However, the really difficult result is $U(0)$ on
$\mathbf{GL}(n,\mathbb{H})$, and its proof does involve cohomological induction.
## 18\. Global results
### 18.1. Global Jacquet-Langlands, multiplicity one and strong multiplicity
one for inner forms
For all $v\in V$, denote by ${\bf LJ}_{v}$ (resp. $|{\bf LJ}|_{v}$) the
correspondence ${\bf LJ}$ (resp. $|{\bf LJ}|$), as defined in Sections 4 and
13, applied to $G_{v}$ and $G^{\prime}_{v}$.
If $\pi$ is a discrete series of $G({\mathbb{A}})$, we say $\pi$ is
$D$-compatible if, for all $v\in V$, $\pi_{v}$ is $d_{v}$-compatible. Then
${\bf LJ}_{v}(\pi_{v})\neq 0$ and $|{\bf LJ}|_{v}(\pi_{v})$ is an irreducible
representation of $G^{\prime}_{v}$.
Here are the Jacquet-Langlands correspondence and the multiplicity one
theorems for $G^{\prime}({\mathbb{A}})$ (already known for $G({\mathbb{A}})$:
[39], [32]).
###### Theorem 18.1.
(a) There exists a unique map ${\bf G}$ from the set of discrete series of
$G^{\prime}({\mathbb{A}})$ into the set of discrete series of
$G({\mathbb{A}})$ such that ${\bf G}(\pi^{\prime})=\pi$ implies $|{\bf
LJ}|_{v}(\pi_{v})=\pi^{\prime}_{v}$ for all places $v\in V$, and
$\pi_{v}=\pi^{\prime}_{v}$ for all places $v\notin V$. The map ${\bf G}$ is
injective and onto the set of $D$-compatible discrete series of
$G({\mathbb{A}})$.
(b) The multiplicity of every discrete series of $G^{\prime}({\mathbb{A}})$ in
the discrete spectrum is $1$. If two discrete series of
$G^{\prime}({\mathbb{A}})$ have isomorphic local component at almost every
place, then they are equal.
The proof is the same as the proof of Theorem 5.1 in [7] with the following
minor changes: Lemma 5.2 of [7] is obviously still true when the inner form is
not split at infinite places, using Proposition 15.1 here. For the
finiteness property quoted in [7], p. 417 as [BB], one has to replace this
reference with [5], where the case of inner forms ramified at infinite places
is addressed. We do not need here claim (d) of [7], Theorem 5.1, which is
now a particular case of Tadić's classification of unitary representations for
inner forms. At the bottom of pages 417 and 419 in [7], the linear independence of
characters on a product of connected $p$-adic groups is used. Here the product
also involves real, sometimes non-connected, groups like
$\mathbf{GL}(n,\mathbb{R})$. The linear independence of characters on each of
these $\mathbf{GL}_{n}$ is enough to ensure the linear independence of
characters on the product: at infinite places representations are Harish-Chandra
modules, so for all these groups, real or $p$-adic, irreducible
representations correspond to irreducible modules over a well chosen algebra
with idempotents.
As in [7], the hard core of the proof is the powerful equality 17.8 from [3]
(comparison of trace formulae of $G({\mathbb{A}})$ and
$G^{\prime}({\mathbb{A}})$).
Let us now give the classification of cuspidal representations of
$G^{\prime}({\mathbb{A}})$ in terms of cuspidal representations of
$G({\mathbb{A}})$. Let $\nu$ (resp. $\nu^{\prime}$) be the global character of
$G({\mathbb{A}})$ (resp. $G^{\prime}({\mathbb{A}})$) given by the product of
local characters as before (i.e. the absolute value of the reduced norm). Recall
that, according to Moeglin-Waldspurger classification, every discrete series
$\pi$ of $G({\mathbb{A}})$ is the unique irreducible quotient of an induced
representation
$\nu^{\frac{k-1}{2}}\rho\times\nu^{\frac{k-3}{2}}\rho\times...\times\nu^{-\frac{k-1}{2}}\rho$
where $\rho$ is cuspidal. Then $k$ and $\rho$ are determined by $\pi$, so
$\pi$ is cuspidal if and only if $k=1$. We set $\pi=MW(\rho,k)$.
###### Proposition 18.2.
(a) Let $n\in\mathbb{N}^{*}$ and let $\rho$ be a cuspidal representation of
$G_{n}({\mathbb{A}})$. Then there exists $k_{\rho}$ such that, if
$k\in\mathbb{N}^{\times}$, then $MW(\rho,k)$ is $D$-compatible if and only if
$k_{\rho}|k$. Moreover, $k_{\rho}|d$.
(b) Let $\pi^{\prime}$ be a discrete series of $G^{\prime}({\mathbb{A}})$ and
$\pi={\bf G}(\pi^{\prime})$. Then $\pi^{\prime}$ is cuspidal if and only if
$\pi$ is of the form $MW(\rho,k_{\rho})$.
(c) Let $\rho^{\prime}$ be a cuspidal representation of some
$G^{\prime}_{n}({\mathbb{A}})$. Write ${\bf
G}(\rho^{\prime})=MW(\rho,k_{\rho})$ and then set
$\nu_{\rho^{\prime}}=\nu^{k_{\rho}}$. For every $k\in\mathbb{N}^{*}$, the
induced representation
$\nu_{\rho^{\prime}}^{\frac{k-1}{2}}\rho^{\prime}\times\nu_{\rho^{\prime}}^{\frac{k-3}{2}}\rho^{\prime}\times...\times\nu_{\rho^{\prime}}^{-\frac{k-1}{2}}\rho^{\prime}$
has a unique irreducible quotient which we will denote
$MW^{\prime}(\rho^{\prime},k)$. It is a discrete series and all discrete
series are obtained from some cuspidal $\rho^{\prime}$ like that. If ${\bf
G}(\rho^{\prime})=MW(\rho,k_{\rho})$ we have ${\bf
G}(MW^{\prime}(\rho^{\prime},k))=MW(\rho,kk_{\rho})$.
Proof. (a) This follows from Proposition 15.4 and the fact that $d_{v}|d$ for
all $v\in V$.
(b) This is Proposition 5.5 of [7], with “cuspidal” in place of “basic
cuspidal” thanks to Grbac’s appendix. Both the proof of the claim and the
proof in the appendix work the same way here.
(c) When $G^{\prime}_{n}({\mathbb{A}})$ is split at infinite places, this is
the claim (a) of Proposition 5.7 in [7]. We follow the same idea which reduces
the problem to a local computation. As [7] makes use of the Zelevinsky involution,
we have to give here a proof in the archimedean case (where the involution
doesn’t exist). First, to show that the induced representation
$\nu_{\rho^{\prime}}^{\frac{k-1}{2}}\rho^{\prime}\times\nu_{\rho^{\prime}}^{\frac{k-3}{2}}\rho^{\prime}\times...\times\nu_{\rho^{\prime}}^{-\frac{k-1}{2}}\rho^{\prime}$
has a constituent which is a discrete series, we will directly show that ${\bf
G}^{-1}(MW(\rho,kk_{\rho}))$, which is indeed a discrete series, is a
constituent of
$\nu_{\rho^{\prime}}^{\frac{k-1}{2}}\rho^{\prime}\times\nu_{\rho^{\prime}}^{\frac{k-3}{2}}\rho^{\prime}\times...\times\nu_{\rho^{\prime}}^{-\frac{k-1}{2}}\rho^{\prime}.$
We will show it place by place, local component by local component. Fix a
place $v$ and let $\gamma$ be the local component of $\rho$ at the place $v$.
It is an irreducible unitary generic representation, and we know that
$u(\gamma,k_{\rho})$ transfers. Set $\pi=\mathbf{LJ}(u(\gamma,k_{\rho}))$.
What we want to prove is that $\mathbf{LJ}(u(\gamma,kk_{\rho}))$ is a
subquotient of
$\nu^{k_{\rho}\frac{k-1}{2}}\pi\times\nu^{k_{\rho}\frac{k-3}{2}}\pi\times...\times\nu^{k_{\rho}(-\frac{k-1}{2})}\pi$.
The unitary generic representation $\gamma$ may be written as
$\gamma=(\times_{i}\,\sigma_{i})\times(\times_{j}\,\pi(\tau_{j},1,\alpha_{j}))$,
with $\sigma_{i}$ and $\tau_{j}$ square integrable representations and
$\alpha_{j}\in]0,\frac{1}{2}[$. So it is enough to prove the result when
$\gamma$ is a square integrable representation. Let us suppose $\gamma$ is
square integrable. To prove that $\mathbf{LJ}(u(\gamma,kk_{\rho}))$ is a quotient of
$\nu^{k_{\rho}\frac{k-1}{2}}\pi\times\nu^{k_{\rho}\frac{k-3}{2}}\pi\times...\times\nu^{k_{\rho}(-\frac{k-1}{2})}\pi$
(where $\pi=\mathbf{LJ}(u(\gamma,k_{\rho}))$), we show that the essentially
square integrable support of the representation $\mathbf{LJ}(u(\gamma,kk_{\rho}))$
is the union of the essentially square integrable supports of the representations
$\\{\nu^{k_{\rho}(\frac{k-1}{2}-i)}\pi\\}_{i\in\\{0,1,...,k-1\\}}$. Then, as
the essentially square integrable support of
$\times_{i=0}^{k-1}[\nu^{k_{\rho}(\frac{k-1}{2}-i)}\pi]$ is in standard order,
$\mathbf{LJ}(u(\gamma,kk_{\rho}))$ will be the unique quotient of the product.
If $\gamma$ lives on a group of a size such that it transfers to some ${\bf
C}(\gamma)$, then $\pi=\bar{u}({\bf C}(\gamma),k_{\rho})$,
$\mathbf{LJ}(u(\gamma,kk_{\rho}))=\bar{u}({\bf C}(\gamma),kk_{\rho})$ ([7]
Proposition 3.7 (a) and second case of transfer in Theorem 13.8 of this
paper), and the result is straightforward. If not, then $u(\gamma,k_{\rho})$
verifies the “twisted” case of transfer [7], Proposition 3.7 (b) for non
archimedean field, first case of Theorem 13.8 in this paper for archimedean
field. In the non archimedean case, one may compute more explicit formulas for
the transfer ([7] formula (3.9)) and see that it works. In the archimedean
case $\gamma$ is a character of $\mathbf{GL}_{1}(\mathbb{R})$ and so
$\pi=\gamma\circ RN_{\frac{k_{\rho}}{2}}$ and
$\mathbf{LJ}(u(\gamma,kk_{\rho}))=\gamma\circ RN_{\frac{kk_{\rho}}{2}}$.∎
Let us recall the uniqueness of the cuspidal support for automorphic
representations. According to a result of Langlands [29] particularized to our
case, we know that any automorphic representation of
$G^{\prime}({\mathbb{A}})$ is a constituent of an induced representation of
the form $\nu^{\prime a_{1}}\rho_{1}\times\nu^{\prime
a_{2}}\rho_{2}\times...\times\nu^{\prime a_{k}}\rho_{k}$ where $a_{i}$ are
real numbers and $\rho_{i}$ are cuspidal representations. In [24] the authors
prove that, for $G({\mathbb{A}})$, the couples $(\rho_{i},a_{i})$ are unique,
which in particular settles the question of the existence of CAP
representations. In [7] it is shown that the result holds (more or less by
transfer) in the more general case of $G^{\prime}({\mathbb{A}})$, if the inner
form is split at infinite places. Using the previous results, the same proof
now works with no condition on the infinite places.
## 19\. $L$-functions, $\epsilon$-factors and transfer
The fundamental work of Jacquet, Langlands and Godement on $L$-functions and
$\epsilon$-factors of linear groups over division algebras easily implies the
following theorem. What we call $\epsilon^{\prime}$-factors, following [19], are
sometimes called $\gamma$-factors in the literature. The values of all these
factors depend on the choice of a non-trivial additive character $\psi$ of
$\mathbb{R}$, which is not relevant for the results.
###### Theorem 19.1.
(a) Let $u$ be a $2$-compatible irreducible unitary representation of
$\mathbf{GL}_{2n}(\mathbb{R})$ and $u^{\prime}$ the irreducible unitary
representation of $\mathbf{GL}_{n}(\mathbb{H})$ such that $\mathbf{LJ}(u)=\pm
u^{\prime}$. Then the $\epsilon^{\prime}$ factors of $u$ and $u^{\prime}$ are
equal.
(b) Let $\delta\in D_{2}$ and set $\delta^{\prime}={\bf C}(\delta)$. Then for
all $k\in\mathbb{N}^{\times}$ the $L$-functions of $u(\delta,k)$ and
$\bar{u}(\delta^{\prime},k)$ are equal and the $\epsilon$-factors of
$u(\delta,k)$ and $\bar{u}(\delta^{\prime},k)$ are equal.
(c) If $\chi$ is a character of $\mathbf{GL}(2n,\mathbb{R})$ and
$\chi^{\prime}=\mathbf{LJ}(\chi)$, then the $\epsilon^{\prime}$-factors of
$\chi$ and $\chi^{\prime}$ are equal.
Proof. If we prove (b) and (c), then (a) follows from Corollary 8.9 of
[19] and the classification of unitary representations in Tadić's setting explained
in the present paper.
(b) is proved in [23] for $k=1$. As a particular case of [22] (5.4) page 80,
the $L$-function (resp. $\epsilon$-factor) of a Langlands quotient
$u(\delta,k)$ is the product of the $L$-functions (resp. $\epsilon$-factors)
of representations $\nu^{i-\frac{k-1}{2}}\delta$, $0\leq i\leq k-1$. The same
proof given there for $\mathbf{GL}_{2n}(\mathbb{R})$ works for
$\mathbf{GL}_{n}(\mathbb{H})$ as well, so the case $k=1$ implies the general
case.
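Written out, the multiplicativity just invoked for $\mathbf{GL}_{2n}(\mathbb{R})$ reads, in the usual notation $L(s,\cdot)$ and $\epsilon(s,\cdot,\psi)$,
$L(s,u(\delta,k))=\prod_{i=0}^{k-1}L\big(s,\nu^{i-\frac{k-1}{2}}\delta\big),\qquad\epsilon(s,u(\delta,k),\psi)=\prod_{i=0}^{k-1}\epsilon\big(s,\nu^{i-\frac{k-1}{2}}\delta,\psi\big),$
with the analogous formula on $\mathbf{GL}_{n}(\mathbb{H})$; this is what reduces (b) to the case $k=1$.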
(c) In case $\chi$ is the trivial character, this is Corollary 8.10, page
121, of [19]. The general case follows easily by twisting by $\chi$ (or by
reproducing the same proof).∎
## References
* [1] J. Adams and J.-S. Huang. Kazhdan-Patterson lifting for ${\rm GL}(n,\mathbb{R})$. Duke Math. J., 89(3):423–444, 1997.
* [2] J. Arthur. The invariant trace formula. II. Global theory. J. Amer. Math. Soc., 1(3):501–554, 1988.
* [3] J. Arthur and L. Clozel. Simple algebras, base change, and the advanced theory of the trace formula, volume 120 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 1989.
* [4] A. I. Badulescu. Correspondance de Jacquet-Langlands pour les corps locaux de caractéristique non nulle. Ann. Sci. École Norm. Sup. (4), 35(5):695–747, 2002.
* [5] A. I. Badulescu. Un théorème de finitude dans le spectre automorphe pour les formes intérieures de ${\rm GL}_{n}$ sur un corps global. Bull. London Math. Soc., 37(5):651–657, 2005.
* [6] A. I. Badulescu. Jacquet-Langlands et unitarisabilité. J. Inst. Math. Jussieu, 6(3):349–379, 2007.
* [7] A. I. Badulescu. Global Jacquet-Langlands correspondence, multiplicity one and classification of automorphic representations. Invent. Math., 172(2):383–438, 2008. With an appendix by Neven Grbac.
* [8] A. I. Badulescu, G. Henniart, B. Lemaire, and V. Sécherre. Sur le dual unitaire de ${\rm GL}(r,D)$. American Journal of Math., to be published, 2009.
* [9] A. I. Badulescu and D. A. Renard. Sur une conjecture de Tadić. Glas. Mat. Ser. III, 39(59)(1):49–54, 2004.
* [10] D. Barbasch and A. Moy. A unitarity criterion for $p$-adic groups. Invent. Math., 98(1):19–37, 1989.
* [11] E. M. Baruch. A proof of Kirillov’s conjecture. Ann. of Math. (2), 158(1):207–252, 2003.
* [12] J. N. Bernstein. $P$-invariant distributions on ${\rm GL}(N)$ and the classification of unitary representations of ${\rm GL}(N)$ (non-Archimedean case). In Lie group representations, II (College Park, Md., 1982/1983), volume 1041 of Lecture Notes in Math., pages 50–102. Springer, Berlin, 1984.
* [13] A. Borel and H. Jacquet. Automorphic forms and automorphic representations. In Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, pages 189–207. Amer. Math. Soc., Providence, R.I., 1979. With a supplement “On the notion of an automorphic representation” by R. P. Langlands.
* [14] G. Chenevier and D. Renard. Characters of Speh representations and Lewis Carroll identity. Represent. Theory, 12:447–452, 2008.
* [15] L. Clozel. Théorème d’Atiyah-Bott pour les variétés $p$-adiques et caractères des groupes réductifs. Mém. Soc. Math. France (N.S.), (15):39–64, 1984. Harmonic analysis on Lie groups and symmetric spaces (Kleebach, 1983).
* [16] P. Deligne, D. Kazhdan, and M.-F. Vignéras. Représentations des algèbres centrales simples $p$-adiques. In Representations of reductive groups over a local field, Travaux en Cours, pages 33–117. Hermann, Paris, 1984.
* [17] D. Flath. Decomposition of representations into tensor products. In Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, pages 179–183. Amer. Math. Soc., Providence, R.I., 1979.
* [18] I. M. Gel′fand, M. I. Graev, and I. I. Pyatetskii-Shapiro. Representation theory and automorphic functions, volume 6 of Generalized Functions. Academic Press Inc., Boston, MA, 1990. Translated from the Russian by K. A. Hirsch, Reprint of the 1969 edition.
* [19] R. Godement and H. Jacquet. Zeta functions of simple algebras. Lecture Notes in Mathematics, Vol. 260. Springer-Verlag, Berlin, 1972.
* [20] Harish-Chandra. Harmonic analysis on reductive $p$-adic groups. Lecture Notes in Mathematics, Vol. 162. Springer-Verlag, Berlin, 1970. Notes by G. van Dijk.
* [21] H. Jacquet. Generic representations. In Non-commutative harmonic analysis (Actes Colloq., Marseille-Luminy, 1976), pages 91–101. Lecture Notes in Math., Vol. 587. Springer, Berlin, 1977.
* [22] H. Jacquet. Principal $L$-functions of the linear group. In Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 2, Proc. Sympos. Pure Math., XXXIII, pages 63–86. Amer. Math. Soc., Providence, R.I., 1979.
* [23] H. Jacquet and R. P. Langlands. Automorphic forms on ${\rm GL}(2)$. Lecture Notes in Mathematics, Vol. 114. Springer-Verlag, Berlin, 1970.
* [24] H. Jacquet and J. A. Shalika. On Euler products and the classification of automorphic forms. II. Amer. J. Math., 103(4):777–815, 1981.
* [25] A. W. Knapp. Representation theory of semisimple groups. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 2001. An overview based on examples, Reprint of the 1986 original.
* [26] A. W. Knapp and D. A. Vogan, Jr. Cohomological induction and unitary representations, volume 45 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1995.
* [27] A. W. Knapp and G. J. Zuckerman. Classification of irreducible tempered representations of semisimple groups. Ann. of Math. (2), 116(2):389–455, 1982.
* [28] B. Kostant. On Whittaker vectors and representation theory. Invent. Math., 48(2):101–184, 1978.
* [29] R. P. Langlands. On the notion of automorphic representation. In Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, pages 203–208. Amer. Math. Soc., Providence, R.I., 1979.
* [30] R. P. Langlands. Base change for ${\rm GL}(2)$, volume 96 of Annals of Mathematics Studies. Princeton University Press, Princeton, N.J., 1980.
* [31] C. Mœglin and J.-L. Waldspurger. Le spectre résiduel de ${\rm GL}(n)$. Ann. Sci. École Norm. Sup. (4), 22(4):605–674, 1989.
* [32] I. I. Piatetski-Shapiro. Multiplicity one theorems. In Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, pages 209–212. Amer. Math. Soc., Providence, R.I., 1979.
* [33] R. S. Pierce. Associative algebras, volume 88 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1982. Studies in the History of Modern Science, 9.
* [34] D. Ramakrishnan and R. J. Valenza. Fourier analysis on number fields, volume 186 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999.
* [35] D. Renard. Representations of $p$-adic reductive groups.
* [36] S. Sahi. Jordan algebras and degenerate principal series. J. Reine Angew. Math., 462:1–18, 1995.
* [37] S. Sahi. Jordan algebras and degenerate principal series. J. Reine Angew. Math., 462:1–18, 1995.
* [38] V. Sécherre. Proof of the Tadić conjecture (U0) on the unitary dual of ${\rm GL}(m,D)$. J. Reine Angew. Math., 626:187–204, 2009.
* [39] J. A. Shalika. The multiplicity one theorem for ${\rm GL}_{n}$. Ann. of Math. (2), 100:171–193, 1974.
* [40] B. Speh. Unitary representations of ${\rm Gl}(n,\,{\bf R})$ with nontrivial $({\mathfrak{g}},\,K)$-cohomology. Invent. Math., 71(3):443–465, 1983.
* [41] M. Tadić. Classification of unitary representations in irreducible representations of general linear group (non-Archimedean case). Ann. Sci. École Norm. Sup. (4), 19(3):335–382, 1986.
* [42] M. Tadić. Topology of unitary dual of non-Archimedean ${\rm GL}(n)$. Duke Math. J., 55(2):385–422, 1987.
* [43] M. Tadić. Induced representations of ${\rm GL}(n,A)$ for $p$-adic division algebras $A$. J. Reine Angew. Math., 405:48–77, 1990.
* [44] M. Tadić. On characters of irreducible unitary representations of general linear groups. Abh. Math. Sem. Univ. Hamburg, 65:341–363, 1995.
* [45] M. Tadić. Representation theory of ${\rm GL}(n)$ over a $p$-adic division algebra and unitarity in the Jacquet-Langlands correspondence. Pacific J. Math., 223(1):167–200, 2006.
* [46] M. Tadić. $\mathrm{GL}(n,\mathbb{C})\,\hat{}$ and $\mathrm{GL}(n,\mathbb{R})\,\hat{}$. Contemporary Mathematics. Amer. Math. Soc., Providence, R.I., 2009.
* [47] D. A. Vogan. Irreducible characters of semisimple Lie groups. III. Proof of Kazhdan-Lusztig conjecture in the integral case. Invent. Math., 71(2):381–417, 1983.
* [48] D. A. Vogan, Jr. Gel′fand-Kirillov dimension for Harish-Chandra modules. Invent. Math., 48(1):75–98, 1978.
* [49] D. A. Vogan, Jr. The algebraic structure of the representation of semisimple Lie groups. I. Ann. of Math. (2), 109(1):1–60, 1979.
* [50] D. A. Vogan, Jr. Irreducible characters of semisimple Lie groups. IV. Character-multiplicity duality. Duke Math. J., 49(4):943–1073, 1982.
* [51] D. A. Vogan, Jr. The unitary dual of ${\rm GL}(n)$ over an Archimedean field. Invent. Math., 83(3):449–505, 1986.
* [52] A. V. Zelevinsky. Induced representations of reductive ${\mathfrak{p}}$-adic groups. II. On irreducible representations of ${\rm GL}(n)$. Ann. Sci. École Norm. Sup. (4), 13(2):165–210, 1980.
|
arxiv-papers
| 2009-05-26T09:54:25 |
2024-09-04T02:49:02.903034
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "A.I. Badulescu and D. Renard",
"submitter": "David Renard",
"url": "https://arxiv.org/abs/0905.4143"
}
|
0905.4214
|
# A potential setup for perturbative confinement
David Dudal
Center for Theoretical Physics, Massachusetts Institute of Technology,
77 Massachusetts Avenue, Cambridge, MA 02139, USA
Ghent University, Department of Mathematical Physics and Astronomy,
Krijgslaan 281-S9, 9000 Gent, Belgium david.dudal@ugent.be
###### Abstract
A few years ago, ’t Hooft suggested a way to discuss confinement in a
perturbative fashion. The original idea was put forward in the Coulomb gauge
at tree level. In recent years, the concept of a nonperturbative short
distance linear potential also attracted phenomenological attention. Motivated
by these observations, we discuss how a perturbative framework, leading to a
linear piece in the potential, can be developed in a manifestly gauge and
Lorentz invariant manner, which moreover enjoys the property of being
renormalizable to all orders. We provide an effective action framework to
discuss the dynamical realization of the proposed scenario in Yang-Mills gauge
theory.
MIT-CTP–4043
## 1 Motivation
In [1, 2, 3], ’t Hooft launched the idea that confinement can be looked upon
as a natural renormalization phenomenon in the infrared region of a Yang-Mills
gauge theory. He employed the Coulomb gauge, $\partial_{i}A_{i}=0$, in which
case the kinetic (quadratic) part of the gauge field action becomes
$S_{YM}=-\frac{1}{4}\int\mathrm{d}^{4}xF_{\mu\nu}^{2}\to\int\mathrm{d}^{4}x\left(-\frac{1}{2}(\partial_{i}A_{j})^{2}+\frac{1}{2}(\partial_{0}A_{j})^{2}+\frac{1}{2}(\partial_{j}A_{0})^{2}\right)\,.$
(1.1)
The usual (classical) Coulomb potential is recovered as the solution of the
equation of motion for $A_{0}$ in the presence of static charges with strength
$\alpha_{s}$ (= source terms) separated from each other by a vector
$\mathbf{r}$,
$V_{Q\overline{Q}}(\mathbf{r})=-\frac{\alpha_{s}}{r}\,.$ (1.2)
He then proposed that some (unspecified) infrared quantum effects will alter
the kinetic part into
$\int\mathrm{d}^{4}x\left(-\frac{1}{2}(\partial_{i}A_{j})^{2}+\frac{1}{2}(\partial_{0}A_{j})^{2}+\frac{1}{2}(\partial_{j}A_{0})^{2}\right)+\int\mathrm{d}^{4}x\left(-\frac{1}{2}\partial_{j}A_{0}\frac{2\sigma/\alpha_{s}}{-\partial_{j}^{2}+2\sigma/\alpha_{s}}\partial_{j}A_{0}\right)\,.$
(1.3)
As a consequence, the Coulomb potential in momentum space gets modified into
$V_{Q\overline{Q}}(\mathbf{p})=-\frac{4\pi\alpha_{s}}{\mathbf{p}^{2}}-\frac{8\pi\sigma}{\mathbf{p}^{4}}\,,$
(1.4)
which corresponds to
$V_{Q\overline{Q}}(\mathbf{r})=-\frac{\alpha_{s}}{r}+\sigma r\,,$ (1.5)
which is nothing else than a confining potential of the Cornell type [4]. We
made use of the well-known identity
$\partial_{i}^{2}\frac{1}{r}=-4\pi\delta(\mathbf{r})$, which also allows one
to define a regularized version of the Fourier transform of
$\frac{1}{\mathbf{p}^{4}}$, since $\partial_{i}^{2}\,r=\frac{2}{r}$. Indeed,
calling $f(\mathbf{p})$ the Fourier transform of $r$, we can write
$\partial_{i}^{2}\partial_{i}^{2}\int\frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}f(\mathbf{p})e^{i\mathbf{p}\cdot\mathbf{r}}=-8\pi\int\frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}e^{i\mathbf{p}\cdot\mathbf{r}}\,,$
(1.6)
which leads to
$f(\mathbf{p})=-\frac{8\pi}{\mathbf{p}^{4}}\,.$ (1.7)
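As a quick consistency check (a sketch, keeping only the $A_{0}$-dependent quadratic terms), the modification (1.3) indeed produces (1.4): in momentum space the $A_{0}$ kernel becomes
$\frac{1}{2}A_{0}(-\mathbf{p})\left(\mathbf{p}^{2}-\mathbf{p}^{2}\,\frac{2\sigma/\alpha_{s}}{\mathbf{p}^{2}+2\sigma/\alpha_{s}}\right)A_{0}(\mathbf{p})=\frac{1}{2}A_{0}(-\mathbf{p})\,\frac{\mathbf{p}^{4}}{\mathbf{p}^{2}+2\sigma/\alpha_{s}}\,A_{0}(\mathbf{p})\,,$
so the corresponding instantaneous propagator is
$\frac{\mathbf{p}^{2}+2\sigma/\alpha_{s}}{\mathbf{p}^{4}}=\frac{1}{\mathbf{p}^{2}}+\frac{2\sigma/\alpha_{s}}{\mathbf{p}^{4}}\,,$
which, with the same normalization that turns the first term into the Coulomb part $-\frac{4\pi\alpha_{s}}{\mathbf{p}^{2}}$, reproduces (1.4) and hence, via (1.7), the linear piece $\sigma r$ in (1.5).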
Of course, this is an appealing idea, as it might give a way to handle
confining theories in a relatively “simple” way, modulo the fact that the
origin of the parameter (= string tension) $\sigma$ is still rather unclear.
It was argued that the coefficient $\frac{\sigma}{\alpha_{s}}$ has to be
adjusted in such a way that higher order corrections converge as fast as
possible [1, 2].
In this work, we intend to take a modest step forward in this program. First of all, we would like to avoid the use of a non-Lorentz-covariant gauge fixing such as the Coulomb gauge; in fact, we should rather avoid using any preferred gauge and produce a Lorentz and gauge invariant version of the ’t Hooft mechanism. Secondly, in [1, 2] it was assumed that the infrared effects would not reflect on the ultraviolet sector. Here, we can even explicitly prove the ultraviolet renormalizability of the procedure. We shall also point out how it would be possible to dynamically realize this perturbative confinement scenario, starting from the original Yang-Mills action.
Let us also refer to [5], which gives a second motivation for this work. In
the phenomenological paper [5], the issue of physical $\frac{1}{q^{2}}$ power
corrections was discussed. Such $\frac{1}{q^{2}}$ corrections are in principle
forbidden to appear in the usual Operator Product Expansion (OPE) applied to
physical correlators, since there is no local dimension 2 gauge invariant
condensate to account for the quadratic power correction. This wisdom was
however challenged in [5], by including nonperturbative effects beyond the OPE
level. Next to the motivation based on ultraviolet renormalons and/or
approaches in which the Landau pole is removed from the running coupling,
which lead to $\frac{1}{q^{2}}$ uncertainties when studying the correlators,
it was noticed that a linear piece survives in the heavy quark potential up to
short distances. This means that a Cornell potential (1.5) could also leave
its footprints at distances smaller than might be expected. In the meantime,
the notion of a short distance linear potential has also been discussed by
means of the gauge/gravity duality approach (AdS/QCD), see e.g. [6, 7]. Notice
hereby that the string tension at short distances does not have to coincide with the one at larger distances [6, 7].
## 2 Constructing the starting action and some of its properties
We shall work in Euclidean space. We shall make a small detour before arriving at the actual purpose of this note. We start from the usual Yang-Mills action,
and we couple the nonlocal gauge invariant operator
$\mathcal{O}(x)=F_{\mu\nu}^{a}(x)\left[\frac{1}{D_{\rho}^{2}}\right]^{ab}(x)F_{\mu\nu}^{b}(x)$
(2.1)
to it by means of a global “source” $J^{2}$, i.e. we consider
$\displaystyle S_{YM}+S_{\mathcal{O}}$ $\displaystyle=$
$\displaystyle\frac{1}{4}\int\mathrm{d}^{4}yF_{\mu\nu}^{a}F_{\mu\nu}^{a}-\frac{J^{2}}{4}\int\mathrm{d}^{4}x\mathcal{O}(x)\,.$
(2.2)
This particular operator was first put to use in [8, 9] in the context of a
dynamical mass generation for $3D$ gauge theories.
We introduced the formal notation $\frac{1}{D^{2}}$, which corresponds to the
(nonlocal) inverse operator of $D^{2}$, i.e.
$\frac{1}{D^{2}}(x)f(x)\equiv\int\mathrm{d}^{4}y\left[\frac{1}{D^{2}}\right](x-y)f(y)$
(2.3)
for a generic function $f(x)$, whereby
$D^{2}(x)\left[\frac{1}{D^{2}}\right](x-y)=\delta(x-y)\,.$ (2.4)
Imposing a gauge fixing by adding a gauge fixing term and corresponding ghost
part $S_{gf}$ to the action
$S=S_{YM}+S_{\mathcal{O}}+S_{gf}\;,$ (2.5)
it was shown in [10, 11] that the partition function,
$\int[\mathrm{d}\Phi]e^{-S}\,,$ (2.6)
can be brought in a localized form by introducing a pair of complex bosonic
antisymmetric tensor fields
$\left(B_{\mu\nu}^{a},\overline{B}_{\mu\nu}^{a}\right)$ and of complex
anticommuting antisymmetric tensor fields
$\left(\overline{G}_{\mu\nu}^{a},G_{\mu\nu}^{a}\right)$, both belonging to the
adjoint representation, so that the nonlocal action $S$ gets replaced by its
equivalent local counterpart111Performing the Gaussian path integration over
$(B,\overline{B},G,\overline{G})$ leads back to (2.2).
$\displaystyle S^{\prime}$ $\displaystyle=$
$\displaystyle\int\mathrm{d}^{4}x\left[\frac{1}{4}F_{\mu\nu}^{a}F_{\mu\nu}^{a}+\frac{i}{4}J(B-\overline{B})_{\mu\nu}^{a}F_{\mu\nu}^{a}+\frac{1}{4}\left(\overline{B}_{\mu\nu}^{a}D_{\sigma}^{ab}D_{\sigma}^{bc}B_{\mu\nu}^{c}-\overline{G}_{\mu\nu}^{a}D_{\sigma}^{ab}D_{\sigma}^{bc}G_{\mu\nu}^{c}\right)\right]\;,$
(2.7)
such that
$\int[\mathrm{d}\Phi]e^{-S}=\int[\mathrm{d}\Phi]e^{-S^{\prime}}\,.$ (2.8)
The shorthand notation $\Phi$ represents all the fields present in $S$ or
$S^{\prime}$. The covariant derivative is given by
$D_{\mu}^{ab}=\delta^{ab}\partial_{\mu}-gf^{abc}A_{\mu}^{c}\,.$ (2.9)
From now on, we can forget about the original starting point (2.2), and start
our discussion from the local action (2.7), whereby $J$ can now also be
considered to be a local source $J(x)$, coupled to the operator
$(B-\overline{B})_{\mu\nu}^{a}F_{\mu\nu}^{a}$.
This is however not the end of the story. It was proven in [10, 11] that
$S^{\prime}$ must be extended in order to obtain a renormalizable action. More
precisely, the complete starting action is given by
$\displaystyle\Sigma$ $\displaystyle=$
$\displaystyle\int\mathrm{d}^{4}x\left[\frac{1}{4}F_{\mu\nu}^{a}F_{\mu\nu}^{a}+\frac{iJ}{4}(B-\overline{B})_{\mu\nu}^{a}F_{\mu\nu}^{a}+\frac{1}{4}\left(\overline{B}_{\mu\nu}^{a}D_{\sigma}^{ab}D_{\sigma}^{bc}B_{\mu\nu}^{c}-\overline{G}_{\mu\nu}^{a}D_{\sigma}^{ab}D_{\sigma}^{bc}G_{\mu\nu}^{c}\right)\right.$
(2.10)
$\displaystyle-\left.\frac{3}{8}J^{2}\lambda_{1}\left(\overline{B}_{\mu\nu}^{a}B_{\mu\nu}^{a}-\overline{G}_{\mu\nu}^{a}G_{\mu\nu}^{a}\right)+J^{2}\frac{\lambda_{2}}{32}\left(\overline{B}_{\mu\nu}^{a}-B_{\mu\nu}^{a}\right)^{2}\right.$
$\displaystyle\left.+\frac{\lambda^{abcd}}{16}\left(\overline{B}_{\mu\nu}^{a}B_{\mu\nu}^{b}-\overline{G}_{\mu\nu}^{a}G_{\mu\nu}^{b}\right)\left(\overline{B}_{\rho\sigma}^{c}B_{\rho\sigma}^{d}-\overline{G}_{\rho\sigma}^{c}G_{\rho\sigma}^{d}\right)+\varsigma\,J^{4}\right]+S_{gf}\;,$
We shall clarify the significance of the vacuum term $\varsigma\,J^{4}$, with
$\varsigma$ a dimensionless parameter, after (3.1). $\lambda^{abcd}$ is an
invariant rank 4 tensor coupling, subject to the following symmetry
constraints
$\displaystyle\lambda^{abcd}=\lambda^{cdab}\;,\lambda^{abcd}=\lambda^{bacd}\;,$
(2.11)
which can be read off from the vertex that $\lambda^{abcd}$ multiplies [10,
11].
In general, an invariant tensor $\lambda^{abcd}$ is defined by means of [12]
$\lambda^{abcd}=\mathrm{Tr}(t^{a}t^{b}t^{c}t^{d})\,,$ (2.12)
with $t^{a}$ the $SU(N)$ generators in a certain representation $r$. (2.12) is
left invariant under the transformation
$t^{a}\to U^{+}t^{a}U\,,\qquad U=\mathrm{e}^{i\omega^{b}t^{b}}\,,$ (2.13)
which leads for infinitesimal $\omega^{a}$ to the generalized Jacobi identity
[12]
$f^{man}\lambda^{mbcd}+f^{mbn}\lambda^{amcd}+f^{mcn}\lambda^{abmd}+f^{mdn}\lambda^{abcm}=0\,.$
(2.14)
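As an illustration (a small numerical sketch added here, not contained in the original text), the identity (2.14) can be verified for the simplest example of (2.12), taking $SU(2)$ in the fundamental representation with $t^{a}=\sigma^{a}/2$ and $f^{abc}=\varepsilon^{abc}$:

```python
import numpy as np

# Pauli matrices; fundamental SU(2) generators t^a = sigma^a/2, f^abc = eps^abc
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
t = [s / 2 for s in sig]

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# lambda^{abcd} = Tr(t^a t^b t^c t^d), eq. (2.12)
lam = np.array([[[[np.trace(t[a] @ t[b] @ t[c] @ t[d])
                   for d in range(3)] for c in range(3)]
                 for b in range(3)] for a in range(3)])

# generalized Jacobi identity (2.14): f^{man} lam^{mbcd} + ... = 0
jac = (np.einsum('man,mbcd->abcdn', eps, lam)
       + np.einsum('mbn,amcd->abcdn', eps, lam)
       + np.einsum('mcn,abmd->abcdn', eps, lam)
       + np.einsum('mdn,abcm->abcdn', eps, lam))
print(np.max(np.abs(jac)))   # -> 0 up to rounding errors
```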
It is the radiative corrections that necessitate the introduction of the extra terms $\propto\lambda_{1,2}J^{2}$, as well as of the quartic interaction $\propto\lambda^{abcd}$ [10, 11]. The quantities $\lambda_{1}$ and $\lambda_{2}$ are two a priori independent scalar “couplings”.
It can be easily checked that (2.10) is gauge invariant, $\delta_{\omega}\Sigma=0$, w.r.t. the infinitesimal gauge variations
$\displaystyle\delta_{\omega}A_{\mu}^{a}=-D_{\mu}^{ab}\omega^{b}\;,\delta_{\omega}B_{\mu\nu}^{a}=gf^{abc}\omega^{b}B_{\mu\nu}^{c}\;,\delta_{\omega}\overline{B}_{\mu\nu}^{a}=gf^{abc}\omega^{b}\overline{B}_{\mu\nu}^{c}\;,\delta_{\omega}G_{\mu\nu}^{a}=gf^{abc}\omega^{b}G_{\mu\nu}^{c}\;,\delta_{\omega}\overline{G}_{\mu\nu}^{a}=gf^{abc}\omega^{b}\overline{G}_{\mu\nu}^{c}\;.$
(2.15)
Using a linear covariant gauge,
$\displaystyle S_{gf}$ $\displaystyle=$
$\displaystyle\int\mathrm{d}^{4}x\;\left(\frac{\alpha}{2}b^{a}b^{a}+b^{a}\partial_{\mu}A_{\mu}^{a}+\overline{c}^{a}\partial_{\mu}D_{\mu}^{ab}c^{b}\right)\;,$
(2.16)
it was shown in [10, 11] that the action $\Sigma$, (2.10), is renormalizable
to all orders of perturbation theory, making use of the algebraic formalism
and BRST cohomological techniques [13]. Indeed, the action (2.10) enjoys a
nilpotent BRST symmetry, generated by
$\displaystyle sA_{\mu}^{a}$ $\displaystyle=$ $\displaystyle-
D_{\mu}^{ab}c^{b}\;,sc^{a}=\frac{g}{2}f^{abc}c^{b}c^{c}\;,sB_{\mu\nu}^{a}=gf^{abc}c^{b}B_{\mu\nu}^{c}\;,s\overline{B}_{\mu\nu}^{a}=gf^{abc}c^{b}\overline{B}_{\mu\nu}^{c}\;,$
$\displaystyle sG_{\mu\nu}^{a}$ $\displaystyle=$ $\displaystyle
gf^{abc}c^{b}G_{\mu\nu}^{c}\;,s\overline{G}_{\mu\nu}^{a}=gf^{abc}c^{b}\overline{G}_{\mu\nu}^{c}\;,s\overline{c}^{a}=b^{a}\;,sb^{a}=0\;,s^{2}=0,s\Sigma=0\,.$
(2.17)
Later on, the renormalizability was also confirmed in the more involved
maximal Abelian gauge [14].
If we put the source $J=0$, we expect to recover the usual Yang-Mills theory we started from, see (2.2). However, the action (2.10) with $J=0$,
$\displaystyle S_{YM}^{\prime}$ $\displaystyle=$
$\displaystyle\int\mathrm{d}^{4}x\left[\frac{1}{4}F_{\mu\nu}^{a}F_{\mu\nu}^{a}+\frac{1}{4}\left(\overline{B}_{\mu\nu}^{a}D_{\sigma}^{ab}D_{\sigma}^{bc}B_{\mu\nu}^{c}-\overline{G}_{\mu\nu}^{a}D_{\sigma}^{ab}D_{\sigma}^{bc}G_{\mu\nu}^{c}\right)\right.$
(2.18)
$\displaystyle\left.+\frac{\lambda^{abcd}}{16}\left(\overline{B}_{\mu\nu}^{a}B_{\mu\nu}^{b}-\overline{G}_{\mu\nu}^{a}G_{\mu\nu}^{b}\right)\left(\overline{B}_{\rho\sigma}^{c}B_{\rho\sigma}^{d}-\overline{G}_{\rho\sigma}^{c}G_{\rho\sigma}^{d}\right)\right]\;,$
seems to differ from the ordinary gluodynamics action $S_{YM}$. This is
however only apparent. Following [11, 15], we introduce the nilpotent
“supersymmetry” $\delta^{(2)}$,
$\displaystyle\delta^{(2)}B_{\mu\nu}^{a}$ $\displaystyle=$ $\displaystyle
G_{\mu\nu}^{a}\;,\delta^{(2)}G_{\mu\nu}^{a}=0\;,\delta^{(2)}\overline{G}_{\mu\nu}^{a}=\overline{B}_{\mu\nu}^{a}\;,\delta^{(2)}\overline{B}_{\mu\nu}^{a}=0\;,\delta^{(2)}\delta^{(2)}=0\;,\delta^{(2)}\left(S_{YM}^{\prime}+S_{gf}\right)=0\;.$
(2.19)
based on which it can be shown that the newly introduced tensor fields
$\\{B_{\mu\nu}^{a},{\overline{B}}_{\mu\nu}^{a},G_{\mu\nu}^{a},{\overline{G}}_{\mu\nu}^{a}\\}$
do not belong to the cohomology of $\delta^{(2)}$, as they constitute pairs of
$\delta^{(2)}$-doublets, and as such completely decouple from the physical
spectrum [13]. This means that $S_{YM}$ and $S_{YM}^{\prime}$ share the same
physical degrees of freedom, being 2 transverse gluon polarizations, as can be
proven using the BRST cohomology [15].
In addition, the tensor coupling $\lambda^{abcd}$ cannot enter the Yang-Mills
correlators constructed from the original Yang-Mills fields
$A_{\mu}^{a},b^{a},\overline{c}^{a},c^{a}$ as it is coupled to a
$\delta^{(2)}$-exact term,
$\propto\delta^{(2)}\left[\left(\overline{B}_{\mu\nu}^{a}B_{\mu\nu}^{b}-\overline{G}_{\mu\nu}^{a}G_{\mu\nu}^{b}\right)\left(\overline{G}_{\rho\sigma}^{c}B_{\rho\sigma}^{d}\right)\right]$,
hence $\lambda^{abcd}$ w.r.t. Yang-Mills correlators plays a role akin to that
of a gauge parameter w.r.t. gauge invariant correlators.
The gauge invariant action $S_{YM}^{\prime}$, (2.18), is thus perturbatively completely equivalent to the usual Yang-Mills action: it is renormalizable to all orders of perturbation theory, and the physical spectrum is the same. The advantage of $S_{YM}^{\prime}$ is that it allows one to couple a gauge invariant local composite operator to it, as written down in (2.10). This means that we can probe Yang-Mills gauge theories with this particular operator, and investigate the associated effective action, to find out whether a gauge invariant condensate is dynamically favoured.
## 3 The effective action for the gauge invariant operator
$(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}$
We consider the functional $W(J)$, given by
$e^{-W(J)}=\int[\mathrm{d}\Phi]e^{-S_{YM}^{\prime}-\int\mathrm{d}^{4}x\left(\frac{iJ}{4}(B-\overline{B})_{\mu\nu}^{a}F_{\mu\nu}^{a}-\frac{3}{8}J^{2}\lambda_{1}\left(\overline{B}_{\mu\nu}^{a}B_{\mu\nu}^{a}-\overline{G}_{\mu\nu}^{a}G_{\mu\nu}^{a}\right)+J^{2}\frac{\lambda_{2}}{32}\left(\overline{B}_{\mu\nu}^{a}-B_{\mu\nu}^{a}\right)^{2}+\varsigma\,J^{4}\right)}\,.$
(3.1)
Here, we can appreciate the role of the $\varsigma\,J^{4}$ term. Upon
integrating over the fields, it becomes clear that we need a counterterm
$\delta\varsigma\,J^{4}$ to remove the divergent $J^{4}$-quantum corrections
to $W(J)$. Hence, we need a parameter $\varsigma$ to absorb this counterterm
$\delta\varsigma\,J^{4}$. Although it seems that we are introducing a new free
parameter into the action in this manner, $\varsigma$ can be made a unique
function of the coupling constant(s) by requiring a homogeneous renormalization
group equation for the effective action, see [16] for applications to the
$\lambda\phi^{4}$ and Coleman-Weinberg model.
We now define in the usual way
$\displaystyle\varphi(x)$ $\displaystyle=$ $\displaystyle\frac{\delta
W(J)}{\delta J(x)}\,.$ (3.2)
The original theory (i.e. Yang-Mills) is recovered in the physical limit
$J=0$, in which case we have
$\varphi=\frac{i}{4}\Braket{(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}}\,.$
(3.3)
If we construct the effective action $\Gamma(\varphi)$, we can thus study the
condensation of the gauge invariant operator
$(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}$. The functionals
$\Gamma(\varphi)$ and $W(J)$ are related through a Legendre transformation
$\displaystyle\Gamma(\varphi)$ $\displaystyle=$ $\displaystyle
W(J)-\int\mathrm{d}^{4}x\ J(x)\varphi(x)\;.$ (3.4)
The vacuum corresponds to the solution of
$\displaystyle\frac{\partial}{\partial\varphi}\Gamma(\varphi)$
$\displaystyle=$ $\displaystyle 0~{}(=-J)\;,$ (3.5)
with minimal energy. From now on, we shall restrict ourselves to space-time
independent $\varphi$ and $J$.
In the current situation, we shall have to perform the Legendre transformation
explicitly [17]. Let us give an illustrative example with a “toy functional”
$W(J)$
$W(J)=\frac{a_{0}}{4}J^{4}+\frac{g^{2}}{4}J^{4}\left(a_{1}+a_{2}\ln\frac{J}{\overline{\mu}}\right)+\textrm{higher
order terms}\,,$ (3.6)
where $\overline{\mu}$ is the renormalization scale. Hence
$\varphi=a_{0}J^{3}+g^{2}J^{3}\left(a_{1}+\frac{a_{2}}{4}+a_{2}\ln\frac{J}{\overline{\mu}}\right)+\textrm{higher
order terms}\,,$ (3.7)
which leads to
$J=\left(\frac{\varphi}{a_{0}}\right)^{1/3}\left(1-\frac{g^{2}}{3a_{0}}\left(a_{1}+\frac{a_{2}}{4}+a_{2}\ln\frac{(\varphi/a_{0})^{1/3}}{\overline{\mu}}\right)\right)+\textrm{higher
order terms}\,.$ (3.8)
The trivial vacuum with $\varphi=0$ is of course always recovered, but there
is the possibility for an alternative solution $\varphi\neq 0$, when solving
the equation $0=-J=\frac{\partial\Gamma}{\partial\varphi}$.
In practice, one can determine $W(J)$ up to the lowest orders in perturbation
theory. $\Gamma(\varphi)$ itself is obtained by substituting (3.8) into (3.4)
to reexpress everything in terms of $\varphi$.
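As a consistency check of the inversion (3.7)-(3.8) (a small numerical sketch; the values chosen for $a_{0}$, $a_{1}$, $a_{2}$, $\overline{\mu}$, $\varphi$ and $g^{2}$ below are arbitrary illustrations), substituting (3.8) back into (3.7) should reproduce $\varphi$ up to terms of order $g^{4}$:

```python
import numpy as np

# toy functional (3.6)-(3.8); the parameter values below are illustrative only
a0, a1, a2, mu = 2.0, 0.7, -1.3, 1.5
phi, g2 = 0.9, 1e-3                        # g2 = g^2, chosen small

def phi_of_J(J):                           # eq. (3.7)
    return a0*J**3 + g2*J**3*(a1 + a2/4 + a2*np.log(J/mu))

def J_of_phi(phi):                         # eq. (3.8), first order in g^2
    J0 = (phi/a0)**(1.0/3.0)
    return J0*(1 - g2/(3*a0)*(a1 + a2/4 + a2*np.log(J0/mu)))

residual = phi_of_J(J_of_phi(phi)) - phi
print(residual)   # of order g^4, i.e. far smaller than the O(g^2) shift itself
```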
We are now ready to have a look at the effective action in the _condensed
vacuum_. We shall find that the _tree level_ action gets modified in the
following way
$\displaystyle\Sigma\to\Sigma^{\prime}$ $\displaystyle\equiv$ $\displaystyle
S_{YM}^{\prime}+\int\mathrm{d}^{4}x\left[\frac{im}{4}(B-\overline{B})_{\mu\nu}^{a}F_{\mu\nu}^{a}-\frac{3}{8}m^{2}\lambda_{1}\left(\overline{B}_{\mu\nu}^{a}B_{\mu\nu}^{a}-\overline{G}_{\mu\nu}^{a}G_{\mu\nu}^{a}\right)+m^{2}\frac{\lambda_{2}}{32}\left(\overline{B}_{\mu\nu}^{a}-B_{\mu\nu}^{a}\right)^{2}\right]$
(3.9) $\displaystyle+\textrm{higher order terms}\,,$
with
$\displaystyle m=\left(\frac{\varphi}{a_{0}}\right)^{1/3}\,,$ (3.10)
since at tree level we only need to retain the lowest order term of (3.8).
The actual computation of the effective action for the gauge invariant local
composite operator $(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}$
will be the subject of future work, as this requires a rather large amount of
calculations and the knowledge of yet undetermined renormalization group
functions to two-loop order [16, 18]. Anyhow, we expect that the theory will
experience a gauge invariant dimensional transmutation, leading to
$\Braket{(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}}\sim\Lambda_{QCD}^{3}$.
Further steps towards the effective potential calculation were set in the
recent work [18].
## 4 The link with perturbative confinement
We have not yet substantiated the role of the extra parameters $\lambda_{1}$ and $\lambda_{2}$. We consider the case
$\lambda_{1}=\frac{2}{3}\,,\qquad\lambda_{2}=0\,.$ (4.1)
Returning for a moment to the Coulomb gauge in the static case222Meaning that we formally set “$\partial_{0}=0$”., it is easy to verify at lowest (quadratic) order that the $(A_{0},A_{0})$ sector exactly reduces to that of (1.3), by integrating out the extra fields.
Since we have the freedom to choose the tree level (“classical”) values for
$\lambda_{1}$ and $\lambda_{2}$ as we want, we can always make the confining
scenario work by assigning the values (4.1). The higher order quantum
corrections will consequently induce perturbative corrections in the couplings
$g^{2}$ and $\lambda^{abcd}$ to the leading order Cornell potential333We shall
comment on the role of the tensor coupling $\lambda^{abcd}$ later on in this
note.. At the current time we cannot make more definite statements about this, as the corresponding renormalization group functions of $\lambda_{1}$ and $\lambda_{2}$ have not yet been calculated explicitly, see also [18]. The upshot would of course be to keep the expansion under control, i.e. to have a reasonably small expansion parameter. If the dynamically generated mass scale is sufficiently large, one can readily imagine having an effective coupling constant $g^{2}$ which is relatively small due to asymptotic freedom. It is perhaps noteworthy to recall the possible emergence of a linear piece of the potential at short distance: restricting to short distances, i.e. high momenta, might be useful in combination with asymptotic freedom.
Anyhow, we envisage that the essential nontrivial dynamics would be buried in
the tree level mass parameter (i.e. the nontrivial condensate $\varphi$),
which characterizes an effective action with confining properties. One can
then perform a perturbative weak coupling expansion around this nontrivial
vacuum.
## 5 The static quark potential via the Wilson loop
So far, we have been looking at the Coulomb gauge to get a taste of the inter
quark potential. However, there is a cleaner (gauge invariant) way to define
the static inter quark potential $V_{Q\overline{Q}}(\mathbf{r})$. As it is
well known, $V_{Q\overline{Q}}(\mathbf{r})$ can be related to the expectation
value of a Wilson loop, see e.g. [19, 20]. More precisely,
$V_{Q\overline{Q}}(\mathbf{r}-\mathbf{r}^{\prime})=\lim_{T\to\infty}\frac{1}{T}\ln\frac{\mathrm{Tr}\braket{\mathcal{W}}}{\mathrm{Tr}\braket{\mathbf{1}}}\,,$
(5.1)
with the Wilson loop $\cal W$ defined by
${\cal
W}=\mathcal{P}\mathrm{e}^{g\oint_{\mathcal{C}}A_{\mu}\mathrm{d}x_{\mu}}\,,$
(5.2)
where the symbol $\mathcal{P}$ denotes path ordering, needed in the non-
Abelian case to ensure the gauge invariance of $\mathrm{Tr}\mathcal{W}$. The
symbol $\mathbf{1}$ is the unit matrix corresponding to the representation $R$
of the “quarks”. Let $t^{a}$ be the corresponding generators. We shall
consider a rectangular loop $\mathcal{C}$ connecting 2 charges at respective
positions $\mathbf{r}$ and $\mathbf{r}^{\prime}$, with temporal extension
$T\to\infty$.
To explicitly calculate (5.1), we shall mainly follow [21]. First, we notice
that at $T\to\infty$, $F_{\mu\nu}^{2}\to 0$, i.e. $A_{\mu}$ becomes equivalent
to a pure gauge potential444We discard gauge potentials with nontrivial
topology., $A_{\mu}=0$, meaning that we can rewrite the trace of the Wilson
loop as
$\mathrm{Tr}{\cal W}=\mathrm{Tr}\mathcal{P}\mathrm{e}^{g\int
A_{0}(\mathbf{r},t)\mathrm{d}t-g\int
A_{0}(\mathbf{r}^{\prime},t)\mathrm{d}t}\,.$ (5.3)
We introduce the current,
$J_{\mu}^{a}(\mathbf{x},t)=g\delta_{\mu
0}t^{a}\delta^{(3)}(\mathbf{x}-\mathbf{r})-g\delta_{\mu
0}t^{a}\delta^{(3)}(\mathbf{x}-\mathbf{r}^{\prime})\,,$ (5.4)
to reexpress the expectation value of (5.3) as
$\mathrm{Tr}\braket{{\cal W}}=\frac{\mathcal{P}}{{\cal N}}\int[\mathrm{d}\Phi]\mathrm{e}^{-\Sigma^{\prime}+\int\mathrm{d}^{4}x\,J_{\mu}^{a}A_{\mu}^{a}}\,,$
(5.5)
with ${\cal N}$ the appropriate normalization factor.
We are now ready to determine the potential explicitly. We limit ourselves to
lowest order, in which case the path ordering is irrelevant, and we find
$V_{Q\overline{Q}}(\mathbf{r}-\mathbf{r}^{\prime})=\frac{1}{\mathrm{Tr}\mathbf{1}}\lim_{T\to\infty}\frac{1}{T}\int\frac{\mathrm{d}^{4}p}{(2\pi)^{4}}\frac{1}{2}J_{\mu}^{a}(p)D_{\mu\nu}^{ab}(p)J_{\nu}^{b}(-p)\,,$
(5.6)
with
$J_{\mu}^{a}(p)=2\pi
g\delta(p_{0})(e^{-i\mathbf{p}\cdot\mathbf{r}}-e^{-i\mathbf{p}\cdot\mathbf{r}^{\prime}})\delta_{\mu
0}t^{a}\,,$ (5.7)
and with
$D_{\mu\nu}^{ab}(p)=D_{\mu\nu}(p)\delta^{ab}\,,\qquad
D_{\mu\nu}(p)=\frac{p^{2}+m^{2}}{p^{4}}\left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)+\frac{\alpha}{p^{2}}\frac{p_{\mu}p_{\nu}}{p^{2}}\,,$
(5.8)
the gluon propagator. Proceeding with (5.6), we get
$\displaystyle V_{Q\overline{Q}}(\mathbf{r}-\mathbf{r}^{\prime})$
$\displaystyle=$
$\displaystyle\lim_{T\to\infty}\frac{1}{2T}C_{2}(R)\int\frac{\mathrm{d}^{4}p}{(2\pi)^{4}}g^{2}\delta^{2}(p_{0})(2\pi)^{2}(e^{-i\mathbf{p}\cdot\mathbf{r}}-e^{-i\mathbf{p}\cdot\mathbf{r}^{\prime}})(e^{i\mathbf{p}\cdot\mathbf{r}}-e^{i\mathbf{p}\cdot\mathbf{r}^{\prime}})D_{00}(p)$
(5.9) $\displaystyle=$
$\displaystyle\lim_{T\to\infty}\frac{1}{2T}C_{2}(R)g^{2}2\pi\delta(0)\int\frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}(e^{-i\mathbf{p}\cdot\mathbf{r}}-e^{-i\mathbf{p}\cdot\mathbf{r}^{\prime}})(e^{i\mathbf{p}\cdot\mathbf{r}}-e^{i\mathbf{p}\cdot\mathbf{r}^{\prime}})D_{00}(p)_{p_{0}=0}$
$\displaystyle=$
$\displaystyle-g^{2}C_{2}(R)\int\frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}\frac{\mathbf{p}^{2}+m^{2}}{\mathbf{p}^{4}}-g^{2}C_{2}(R)\int\frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}\frac{\mathbf{p}^{2}+m^{2}}{\mathbf{p}^{4}}\mathrm{e}^{i\mathbf{p}(\mathbf{r}-\mathbf{r}^{\prime})}\,.$
We used that
$\displaystyle\lim_{T\to\infty}T=\displaystyle\lim_{T\to\infty}\int_{-T/2}^{T/2}\mathrm{d}t=2\pi\delta(0)$.
The first term of (5.9) corresponds to the (infinite) self energy of the
external charges [21], so we can neglect this term to identify the interaction
energy, which yields after performing the Fourier integration
$\displaystyle V_{Q\overline{Q}}(\mathbf{r}-\mathbf{r}^{\prime})$
$\displaystyle=$
$\displaystyle\frac{g^{2}C_{2}(R)}{8\pi}m^{2}|\mathbf{r}-\mathbf{r}^{\prime}|-\frac{g^{2}C_{2}(R)}{4\pi}\frac{1}{|\mathbf{r}-\mathbf{r}^{\prime}|}\,.$
(5.10)
We nicely obtain a Cornell potential, with the string tension in
representation $R$ given by $\sigma(R)=\frac{g^{2}}{8\pi}C_{2}(R)m^{2}$.
Notice that the so-called Casimir scaling [22] of $\sigma(R)$ is
straightforwardly fulfilled, at least at the considered order.
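The Fourier integration leading to (5.10) can also be checked numerically. The sketch below is an illustration added here (with the hypothetical choice $g^{2}C_{2}(R)=m=1$ and an infrared regulator $\epsilon$ inserted in $D_{00}$); differences of the regularized potential, from which the divergent self-energy constant drops out, are compared with the Cornell form of (5.10):

```python
import numpy as np
from scipy.integrate import quad

g2C2, m, eps = 1.0, 1.0, 1e-3   # g^2 C_2(R), mass scale, IR regulator (illustrative)

def V_reg(r):
    # -g^2 C_2(R) times the 3D Fourier transform of (p^2+m^2)/p^4,
    # regularized here as (p^2+m^2)/(p^2+eps^2)^2
    f = lambda p: p * (p**2 + m**2) / (p**2 + eps**2)**2
    val, _ = quad(f, 0.0, np.inf, weight='sin', wvar=r)
    return -g2C2 * val / (2.0 * np.pi**2 * r)

def V_cornell(r):               # eq. (5.10)
    return -g2C2 / (4.0*np.pi*r) + g2C2 * m**2 / (8.0*np.pi) * r

r0 = 1.0
for r in (0.5, 2.0, 4.0):
    print(r, V_reg(r) - V_reg(r0), V_cornell(r) - V_cornell(r0))
```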
If we consider our model in a specific gauge, for example the Landau gauge, we see the presence of a $\frac{1}{p^{4}}$ singularity in the (tree level) gluon propagator (5.8). Actually, it was already argued in [23] that such a pole would induce the area law of the Wilson loop, if present in _some_ gauge. In the Landau gauge in particular, lattice data have long since ruled out such a highly singular gluon propagator, see [24] for a recent numerical analysis.
A first observation is that we presented only a lowest order calculation, based on the tree level gluon propagator. We did not consider quantum corrections to either the Wilson loop’s expectation value or the gluon propagator. A more sophisticated treatment would also have to take into account that our naive string tension $\sigma$, related to the condensate $\braket{(B-\overline{B})F}$, will run with the scale. This would call for a renormalization group improved treatment. We already mentioned in the introduction that the string tension at short distance (large energy scale) does not have to coincide with the one at large distances (small energy scale) [6, 7].
We must also recall that most gauges, in particular the Landau gauge, are plagued by the Gribov copy problem, which also influences the infrared dynamics of a gauge theory [25, 26]. The latter problem can be circumvented here, as we are not obliged to work in the Landau gauge, since we have set up a gauge invariant framework. In most other gauges, it is not even known how to tackle e.g. the gauge copy problem in a more or less tractable way, while in certain gauges there are no copies at all555Some of these gauges then suffer from other problems..
As an example of the latter gauges, let us impose the planar gauge [27] via a
gauge fixing term $S_{gf}~{}=~{}\int\mathrm{d}^{4}x\frac{1}{2n^{2}}n\cdot
A\partial^{2}n\cdot A$. The gluon propagator becomes a bit complicated
$\displaystyle D_{\mu\nu}^{ab}(p)$ $\displaystyle=$
$\displaystyle\delta^{ab}\left(\frac{p^{2}+m^{2}}{p^{4}}\delta_{\mu\nu}+m^{2}\frac{p^{2}+m^{2}}{p^{4}}\frac{n^{2}}{(p\cdot
n)^{2}}\frac{p_{\mu}p_{\nu}}{p^{2}}-\frac{p^{2}+m^{2}}{p^{4}}\frac{n_{\mu}p_{\nu}}{p\cdot
n}-\frac{(p^{2}+m^{2})^{2}}{p^{6}}\frac{p_{\mu}n_{\nu}}{p\cdot n}\right)\,,$
(5.11)
nevertheless the result (5.10) is recovered, after some algebra.
## 6 Symmetry breaking pattern
We already mentioned the useful supersymmetry $\delta^{(2)}$, which is however
broken if
$\Braket{(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}}\neq 0$
(i.e. $m\neq 0$). Hence, we should worry about the emergence of an extra
(undesired) massless degree of freedom: the associated Goldstone fermion666Not
boson, as $\delta_{2}$ transforms bosons into fermions and vice versa.. The
situation is however more complicated than this. The starting action
$S_{YM}^{\prime}$ enjoys the following set of (nilpotent) supersymmetries
$\displaystyle\delta^{(1)}$ $\displaystyle=$
$\displaystyle\int\mathrm{d}^{4}x\left(B_{\mu\nu}^{a}\frac{\delta}{\delta
G_{\mu\nu}^{a}}-\overline{G}_{\mu\nu}^{a}\frac{\delta}{\delta\overline{B}_{\mu\nu}^{a}}\right)\,,\qquad\delta^{(3)}~{}=~{}\int\mathrm{d}^{4}x\left(\overline{B}_{\mu\nu}^{a}\frac{\delta}{\delta
G_{\mu\nu}^{a}}-\overline{G}_{\mu\nu}^{a}\frac{\delta}{\delta
B_{\mu\nu}^{a}}\right)\,,$ $\displaystyle\delta^{(2)}$ $\displaystyle=$
$\displaystyle\int\mathrm{d}^{4}x\left(\overline{B}_{\mu\nu}^{a}\frac{\delta}{\delta\overline{G}_{\mu\nu}^{a}}+G_{\mu\nu}^{a}\frac{\delta}{\delta
B_{\mu\nu}^{a}}\right)\,,\qquad\delta^{(4)}~{}=~{}\int\mathrm{d}^{4}x\left(B_{\mu\nu}^{a}\frac{\delta}{\delta\overline{G}_{\mu\nu}^{a}}+G_{\mu\nu}^{a}\frac{\delta}{\delta\overline{B}_{\mu\nu}^{a}}\right)\,,$
(6.1)
in addition to the bosonic symmetries generated by
$\displaystyle\Delta^{(1)}$ $\displaystyle=$
$\displaystyle\int\mathrm{d}^{4}x\left(B_{\mu\nu}^{a}\frac{\delta}{\delta
B_{\mu\nu}^{a}}-\overline{B}_{\mu\nu}^{a}\frac{\delta}{\delta\overline{B}_{\mu\nu}^{a}}\right)\,,\qquad\Delta^{(2)}~{}=~{}\int\mathrm{d}^{4}x\left(G_{\mu\nu}^{a}\frac{\delta}{\delta
G_{\mu\nu}^{a}}-\overline{G}_{\mu\nu}^{a}\frac{\delta}{\delta\overline{G}_{\mu\nu}^{a}}\right)\,.$
(6.2)
It appears that a nonvanishing
$\Braket{(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}}$ results in
the dynamical breakdown of the continuous symmetries
$\delta^{(1),(2),(3),(4)}$ and $\Delta^{(1)}$. However, a little more care is needed. Not all the breakings are independent; one checks that
$\displaystyle\delta^{(1)-(3)}~{}\equiv~{}\delta^{(1)}-\delta^{(3)}\,,\qquad\delta^{(2)-(4)}~{}\equiv~{}\delta^{(2)}-\delta^{(4)}\,,\qquad\Delta^{(1)}\,,$
(6.3)
are clearly dynamically broken for $\braket{(B-\overline{B})F}\neq 0$, since we can write
$\displaystyle\Braket{(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}}=\Braket{\delta^{(1)-(3)}\left[G_{\mu\nu}^{a}F_{\mu\nu}^{a}\right]}=-\Braket{\delta^{(2)-(4)}\left[\overline{G}_{\mu\nu}^{a}F_{\mu\nu}^{a}\right]}=\Braket{\Delta^{(1)}\left[(B_{\mu\nu}^{a}+\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}\right]}\,,$
(6.4)
while
$\displaystyle\delta^{(1)+(3)}~{}\equiv~{}\delta^{(1)}+\delta^{(3)}\,,\qquad\delta^{(2)+(4)}~{}\equiv~{}\delta^{(2)}+\delta^{(4)}\,,\qquad\Delta^{(2)}\,,$
(6.5)
are still conserved.
If a nonzero value of
$\Braket{(B_{\mu\nu}^{a}-\overline{B}_{\mu\nu}^{a})F_{\mu\nu}^{a}}$ is
dynamically favoured, 2 Goldstone fermions and 1 Goldstone boson seem to enter
the physical spectrum. As this would be a serious problem777These extra
particles carry no color, so there is no reason to expect that these would be
confined or so, thereby removing themselves from the physical spectrum., we
need to find a way to remove these from the spectrum. A typical way to kill
unwanted degrees of freedom is by imposing constraints on the allowed
excitations. Consistency is assured when this is done by using symmetry
generators to restrict the physical subspace. First, we have to identify the
suitable operators to create/annihilate the Goldstone particles. As it is well
known, these are provided by the Noether currents corresponding to (6.3),
which can be derived from the action $S_{YM}^{\prime}$. We obtain
$\displaystyle j^{(1)-(3)}_{\mu}$ $\displaystyle=$ $\displaystyle-
B_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{G}_{\alpha\beta}^{b}+\overline{G}_{\alpha\beta}^{a}D_{\mu}^{ab}B_{\alpha\beta}^{b}+\overline{B}_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{G}_{\alpha\beta}^{b}-\overline{G}_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{B}_{\alpha\beta}^{b}\;,$
$\displaystyle j^{(2)-(4)}_{\mu}$ $\displaystyle=$
$\displaystyle\overline{B}_{\alpha\beta}^{a}D_{\mu}^{ab}G_{\alpha\beta}^{b}-G_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{B}_{\alpha\beta}^{b}-B_{\alpha\beta}^{a}D_{\mu}^{ab}G_{\alpha\beta}^{b}+G_{\alpha\beta}^{a}D_{\mu}^{ab}B_{\alpha\beta}^{b}\,,$
(6.6)
after a little algebra. Let us now define what physical operators are. First
of all, they are expected to be gauge invariant888Or more precisely, BRST
closed but not exact, after fixing the gauge.. Secondly, based on
$\Delta^{(2)}$ we can also introduce a $\mathcal{G}$-ghost charge, with
$\mathcal{G}(G_{\mu\nu}^{a})=+1$, $\mathcal{G}(\overline{G}_{\mu\nu}^{a})=-1$,
and demand that physical operators are $\mathcal{G}$-neutral. In addition, we can also request invariance w.r.t. $\delta^{(1)+(3)}$ and $\delta^{(2)+(4)}$.
Let us mention the following useful relations
$\displaystyle\delta^{(1)+(3)}j_{\mu}^{(2)-(4)}$ $\displaystyle=$
$\displaystyle\delta^{(2)+(4)}j_{\mu}^{(1)-(3)}~{}=~{}2(\overline{B}_{\alpha\beta}^{a}D_{\mu}^{ab}B_{\alpha\beta}^{b}-B_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{B}_{\alpha\beta}^{b})\neq
0\;,$ $\displaystyle\delta^{(1)+(3)}j^{(1)-(3)}$ $\displaystyle=$
$\displaystyle\delta^{(2)+(4)}j^{(2)-(4)}~{}=~{}0\,.$ (6.7)
The currents $j_{\mu}^{(2)-(4)}$ and $j_{\mu}^{(1)-(3)}$ are thus not physical operators. Although gauge invariant, (6.7) tells us that they are not $\delta^{(1)+(3)}$ or $\delta^{(2)+(4)}$ invariant. Moreover, since $\mathcal{G}(j_{\mu}^{(2)-(4)})=+1$ and $\mathcal{G}(j_{\mu}^{(1)-(3)})=-1$, the $\mathcal{G}$-neutrality is not met either.
We can assure $\mathcal{G}$-neutrality by e.g. taking the product $j^{(2)-(4)}j^{(1)-(3)}$, but this does not ensure $\delta^{(1)+(3)}$ or $\delta^{(2)+(4)}$ invariance, as can be easily checked using (6.7).
Concerning the current $k_{\mu}$ associated with $\Delta^{(1)}$, we find
$\displaystyle k_{\mu}$ $\displaystyle=$ $\displaystyle-
B_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{B}_{\alpha\beta}^{a}+\overline{B}_{\alpha\beta}^{a}D_{\mu}^{ab}B_{\alpha\beta}^{a}\,,$
(6.8)
hence
$\displaystyle\delta^{(1)+(3)}k_{\mu}$ $\displaystyle=$ $\displaystyle
B_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{G}_{\alpha\beta}^{a}-\overline{G}_{\alpha\beta}^{a}D_{\mu}^{ab}B_{\alpha\beta}^{a}+\overline{G}_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{B}_{\alpha\beta}^{a}-\overline{B}_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{G}_{\alpha\beta}^{a}~{}\neq~{}0\;,$
$\displaystyle\delta^{(2)+(4)}k_{\mu}$ $\displaystyle=$ $\displaystyle-
G_{\alpha\beta}^{a}D_{\mu}^{ab}\overline{B}_{\alpha\beta}^{a}+\overline{B}_{\alpha\beta}^{a}D_{\mu}^{ab}G_{\alpha\beta}^{a}-B_{\alpha\beta}^{a}D_{\mu}^{ab}G_{\alpha\beta}^{a}+G_{\alpha\beta}^{a}D_{\mu}^{ab}B_{\alpha\beta}^{a}~{}\neq~{}0\,.$
(6.9)
Since the symmetries we are using are not unrelated, it is evidently no surprise that $k_{\mu}$, $j_{\mu}^{(2)-(4)}$ and $j_{\mu}^{(1)-(3)}$ are transformed into each other. The question remains, however, whether we can build combinations999These combinations may of course contain other operators too. of these which enjoy all the necessary invariances. Let us try to construct one, starting from $j^{(2)-(4)}$. We shall use a more symbolic notation. It can be checked that e.g.
$\delta^{(2)+(4)}\left(\overline{G}j^{(2)-(4)}+(B+\overline{B})K-Gj^{(1)-(3)}\right)~{}=~{}0\,,$
(6.10)
but
$\delta^{(1)+(3)}\left(\overline{G}j^{(2)-(4)}+(B+\overline{B})K-Gj^{(1)-(3)}\right)=-4\overline{G}k-2(B+\overline{B})j^{(1)-(3)}\,.$
(6.11)
So far, we have been unable to construct suitable invariant operators. We are led to believe that this is generally true, in which case we could state that the Goldstone modes can be expelled from the spectrum. An explicit proof is however still lacking.
## 7 A few words on the tensor coupling $\lambda^{abcd}$
In the massless case, the precise value of the tensor coupling
$\lambda^{abcd}$ is irrelevant, as it cannot influence the dynamics of the
(physical) Yang-Mills sector of the theory as explained above. However, when
studying the effective action for $\varphi=\braket{(B-\overline{B})F}$,
$\lambda^{abcd}$ plays a role. We might see this as a drawback, as then a new
independent coupling would enter the game. As our setup was to deal with
confinement in usual gauge theories with a single gauge coupling $g^{2}$, we
would like to retain solely $g^{2}$ as the relevant parameter. This can be nicely accommodated by invoking the renormalization group equations to reduce the number of couplings. In the presence of multiple couplings, one can always opt to choose a primary coupling and express the others in terms of this one. For consistency, no sacrifices should be made w.r.t. the renormalization group equations; therefore we shall search for a fixed point $\lambda_{*}^{abcd}(g^{2})$, such that $\mu\frac{\partial}{\partial\mu}\lambda_{*}^{abcd}=0$.
We recall the result of [11], where it was calculated, using dimensional
regularization $(d=4-2\epsilon)$ and using the $\overline{\mbox{MS}}$ scheme,
that
$\displaystyle\mu\frac{\partial}{\partial\mu}\lambda^{abcd}$ $\displaystyle=$
$\displaystyle-2\varepsilon\lambda^{abcd}+\left[\frac{1}{4}\left(\lambda^{abpq}\lambda^{cpdq}+\lambda^{apbq}\lambda^{cdpq}+\lambda^{apcq}\lambda^{bpdq}+\lambda^{apdq}\lambda^{bpcq}\right)\right.$
(7.1)
$\displaystyle\left.-~{}12C_{A}\lambda^{abcd}a~{}+~{}8C_{A}f^{abp}f^{cdp}a^{2}~{}+~{}16C_{A}f^{adp}f^{bcp}a^{2}~{}+~{}96d_{A}^{abcd}a^{2}\right]+\ldots\,,$
with $a=\frac{g^{2}}{16\pi^{2}}$, and we also rescaled
$\lambda^{abcd}\to\frac{1}{16\pi^{2}}\lambda^{abcd}$. We clearly notice that
$\lambda^{abcd}=0$ is not a fixed point of this renormalization group
equation. We must thus look out for an alternative fixed point
$\lambda_{*}^{abcd}\neq 0$.
We shall restrict ourselves to the simplest case: we take $SU(2)$ as gauge
group, and only consider gauge fields in the adjoint representation. Doing so,
we can simplify (7.1) a bit by explicitly computing the completely symmetric
rank 4 tensor $d_{A}^{abcd}$ [12], and by looking for tensor structures that
can be used to construct a rank 4 tensor consistent with the constraints
(2.14) and (2.11).
The generators of the adjoint representation of $SU(2)$ are given by $(t^{a})_{bc}=i\varepsilon^{abc}$. We can compute $d_{A}^{abcd}$, which is
defined by means of a symmetrized trace STr as
$\displaystyle\\!\\!\\!\\!\\!\\!d^{abcd}_{A}$ $\displaystyle=$
$\displaystyle\mbox{STr}\left(t^{a}t^{b}t^{c}t^{d}\right)~{}=~{}\left[\delta^{ab}\delta^{cd}+\delta^{ad}\delta^{bc}\right]_{\mathrm{symmetrized\,w.r.t.}\,\\{a,b,c,d\\}}~{}=~{}\frac{2}{3}\left(\delta^{ab}\delta^{cd}+\delta^{ac}\delta^{bd}+\delta^{ad}\delta^{bc}\right)\,.$
(7.2)
Moreover, we can also simplify the other tensor appearing in (7.1), namely
($C_{A}=2$)
$\displaystyle 8C_{A}f^{abp}f^{cdp}a^{2}+16C_{A}f^{adp}f^{bcp}a^{2}$
$\displaystyle=$
$\displaystyle\left(-16\delta^{ac}\delta^{bd}-16\delta^{ad}\delta^{bc}+32\delta^{ab}\delta^{cd}\right)a^{2}\,.$
(7.3)
Using the constraints (2.14) as definition of any building block of our tensor
$\lambda_{\ast}^{abcd}$, one can check that the following rank 4 color tensors
are suitable (linearly independent) candidates
$\displaystyle{\cal O}_{1}^{abcd}$ $\displaystyle=$
$\displaystyle\delta^{ab}\delta^{cd}\,,\qquad{\cal
O}_{2}^{abcd}~{}=~{}\delta^{ac}\delta^{bd}+\delta^{ad}\delta^{bc}\,.$ (7.4)
Clearly, $d_{A}^{abcd}$ and the tensor (7.3) are particular linear
combinations of the tensors in (7.4). We now propose
$\lambda_{f}^{abcd}(a)=y_{1}{\cal O}_{1}^{abcd}a+y_{2}{\cal
O}_{2}^{abcd}a\qquad\qquad y_{i}\in\mathbb{R}\,,$ (7.5)
and we demand that the l.h.s. of (7.1) vanishes when (7.5) is substituted into
it, with $\epsilon=0$. This leads to
$\displaystyle\left\\{\begin{array}[]{ccc}y_{1}&\approx&67.6\\\
y_{2}&\approx&-43.6\end{array}\right.\,,\qquad\qquad\left\\{\begin{array}[]{ccc}y_{1}&\approx&28.4\\\
y_{2}&\approx&-4.4\end{array}\right.\,.$ (7.10)
We conclude that the renormalization group equation
$\mu\frac{\partial}{\partial\mu}\lambda^{abcd}=\beta^{abcd}=0$ possesses a
fixed point in $d=4$, at least at 1-loop for the gauge group $SU(2)$ in the
presence of only gauge fields.
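For completeness, the fixed-point condition can also be solved numerically by constructing the tensors of (7.2)-(7.5) explicitly and demanding that the one-loop bracket of (7.1) vanishes at $\epsilon=0$. The following Python sketch (an illustration added here, not part of the original computation) should reproduce values close to those quoted in (7.10):

```python
import numpy as np
from scipy.optimize import fsolve

n, CA = 3, 2.0                                   # SU(2): adjoint indices a=1..3, C_A = 2
delta = np.eye(n)
eps = np.zeros((n, n, n))                        # f^{abc} = epsilon^{abc}
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

O1 = np.einsum('ab,cd->abcd', delta, delta)                                    # eq. (7.4)
O2 = np.einsum('ac,bd->abcd', delta, delta) + np.einsum('ad,bc->abcd', delta, delta)
dA = (2.0/3.0) * (O1 + O2)                                                     # eq. (7.2)
ff1 = np.einsum('abp,cdp->abcd', eps, eps)
ff2 = np.einsum('adp,bcp->abcd', eps, eps)

def beta_bracket(y, a=1.0):
    y1, y2 = y
    lam = a * (y1*O1 + y2*O2)                                                  # ansatz (7.5)
    quartic = 0.25 * (np.einsum('abpq,cpdq->abcd', lam, lam)
                    + np.einsum('apbq,cdpq->abcd', lam, lam)
                    + np.einsum('apcq,bpdq->abcd', lam, lam)
                    + np.einsum('apdq,bpcq->abcd', lam, lam))
    b = quartic - 12*CA*a*lam + (8*CA*ff1 + 16*CA*ff2 + 96*dA) * a**2          # eq. (7.1), eps = 0
    # the result lies in the span of O1 and O2; read off the two coefficients
    return [b[0, 0, 1, 1], b[0, 1, 0, 1]]

for guess in ([60.0, -40.0], [30.0, -5.0]):
    print(fsolve(beta_bracket, guess))           # roots close to (67.6, -43.6) and (28.4, -4.4)
```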
We end this note by briefly returning to the issue of $\frac{1}{q^{2}}$ power
corrections. In [28, 29], these were related to (part of) the dimension two
condensate $\braket{A^{2}_{\min}}=(VT)^{-1}\braket{\min_{g\in
SU(N)}\int\mathrm{d}^{4}x(A_{\mu}^{g})^{2}}$. The nonlocal operator
$A^{2}_{\min}$ reduces to $A^{2}$ in the Landau gauge, hence the interest in
this gauge [28, 29]. Although the mechanism discussed in this Letter might
seem to be completely different, this is however not the case. The
nonperturbative mass scale, set by the condensation of the gauge invariant
operator (3.3), will also fuel a nonvanishing $A^{2}$ condensate in the Landau
gauge, i.e. $\braket{A^{2}}\propto m^{2}$, already in a perturbative loop
expansion. As such, at least part of the nonperturbative information stored in
$\braket{A^{2}}$ could be attributed to the gauge invariant condensate
introduced in this work.
## Acknowledgments
D. Dudal is grateful to R. Jackiw for useful discussions. D. Dudal is
supported by the Research-Foundation Flanders. This work is supported in part
by funds provided by the US Department of Energy (DOE) under cooperative
research agreement DEFG02-05ER41360.
## References
* [1] G. ’t Hooft, Nucl. Phys. Proc. Suppl. 121 (2003) 333.
* [2] G. ’t Hooft, Nucl. Phys. A 721 (2003) 3.
* [3] G. ’t Hooft, Prog. Theor. Phys. Suppl. 167 (2007) 144.
* [4] E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane and T. M. Yan, Phys. Rev. D 17 (1978) 3090 [Erratum-ibid. D 21 (1980) 313].
* [5] K. G. Chetyrkin, S. Narison and V. I. Zakharov, Nucl. Phys. B 550 (1999) 353.
* [6] O. Andreev and V. I. Zakharov, Phys. Rev. D 74 (2006) 025023.
* [7] V. I. Zakharov, AIP Conf. Proc. 964 (2007) 143.
* [8] R. Jackiw and S. Y. Pi, Phys. Lett. B 368 (1996) 131.
* [9] R. Jackiw and S. Y. Pi, Phys. Lett. B 403 (1997) 297.
* [10] M. A. L. Capri, D. Dudal, J. A. Gracey, V. E. R. Lemes, R. F. Sobreiro, S. P. Sorella and H. Verschelde, Phys. Rev. D 72 (2005) 105016.
* [11] M. A. L. Capri, D. Dudal, J. A. Gracey, V. E. R. Lemes, R. F. Sobreiro, S. P. Sorella and H. Verschelde, Phys. Rev. D 74 (2006) 045008.
* [12] T. van Ritbergen, A. N. Schellekens and J. A. M. Vermaseren, Int. J. Mod. Phys. A 14 (1999) 41.
* [13] O. Piguet and S. P. Sorella, Lect. Notes Phys. M28 (1995) 1.
* [14] M. A. L. Capri, V. E. R. Lemes, R. F. Sobreiro, S. P. Sorella and R. Thibes, J. Phys. A 41 (2008) 155401.
* [15] D. Dudal, N. Vandersickel and H. Verschelde, Phys. Rev. D 76 (2007) 025006.
* [16] K. Knecht and H. Verschelde, Phys. Rev. D 64 (2001) 085006.
* [17] S. Yokojima, Phys. Rev. D 51 (1995) 2996.
* [18] F. R. Ford and J. A. Gracey, Phys. Lett. B 674 (2009) 232.
* [19] G. S. Bali, Phys. Rept. 343 (2001) 1.
* [20] L. S. Brown and W. I. Weisberger, Phys. Rev. D 20 (1979) 3239.
* [21] W. Fischler, Nucl. Phys. B 129 (1977) 157.
* [22] G. S. Bali, Phys. Rev. D 62 (2000) 114503.
* [23] G. B. West, Phys. Lett. B 115 (1982) 468.
* [24] A. Cucchieri and T. Mendes, Phys. Rev. Lett. 100 (2008) 241601.
* [25] V. N. Gribov, Nucl. Phys. B 139 (1978) 1.
* [26] D. Dudal, J. A. Gracey, S. P. Sorella, N. Vandersickel and H. Verschelde, Phys. Rev. D 78 (2008) 065047.
* [27] G. Leibbrandt, Rev. Mod. Phys. 59 (1987) 1067.
* [28] F. V. Gubarev, L. Stodolsky and V. I. Zakharov, Phys. Rev. Lett. 86 (2001) 2220.
* [29] F. V. Gubarev and V. I. Zakharov, Phys. Lett. B 501 (2001) 28.
|
arxiv-papers
| 2009-05-26T14:38:49 |
2024-09-04T02:49:02.926733
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "D. Dudal",
"submitter": "David Dudal",
"url": "https://arxiv.org/abs/0905.4214"
}
|
0905.4280
|
The Quantum Wave Packet of the Schrödinger’s Equation for Continuous Quantum
Measurements
J. M. F. Bassalo1, P. T. S. Alencar2, D. G. da Silva3, A. Nassar4 and M.
Cattani5
1 Fundação Minerva, R. Serzedelo Correa 347, 1601 - CEP 66035-400, Belém,
Pará, Brasil
E-mail: bassalo@amazon.com.br
2 Universidade Federal do Pará - CEP 66075-900, Guamá, Belém, Pará, Brasil
E-mail: tarso@ufpa.br
3 Escola Munguba do Jari, Vitória do Jari - CEP 68924-000, Amapá, Brasil
E-mail: danielgemaque@yahoo.com.br
4 Extension Program-Department of Sciences, University of California, Los
Angeles, California 90024, USA
E-mail: nassar@ucla.edu
5 Instituto de Física da Universidade de São Paulo. C. P. 66318, CEP
05315-970, São Paulo, SP, Brasil
E-mail: mcattani@if.usp.br
Abstract: In this paper we study the quantum wave packet of the Schrödinger’s
equation for continuous quantum measurements.
PACS 03.65 - Quantum Mechanics
1\. Introduction
In this paper we will study the wave packet of the Schrödinger equation for continuous measurements proposed by Nassar [1], using the quantum mechanical formalism of de Broglie-Bohm [2].
2\. The Continuous Measurement Schrödinger’s Equation
According to Nassar [1] a Schrödinger equation assuming continuous
measurements is given by:
$i\ {\hbar}\ {\frac{{\partial}{\Psi}(x,\ t)}{{\partial}t}}\ =\ -\
{\frac{{\hbar}^{2}}{2\ m}}\ {\frac{{\partial}^{2}\ {\Psi}(x,\
t)}{{\partial}x^{2}}}\ +\ {\Big{[}}\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ x^{2}\
+\ {\lambda}\ x\ X(t)\ {\Big{]}}\ {\Psi}(x,\ t)\ -$
$-\ {\frac{i\ {\hbar}}{4\ {\tau}}}\ {\Bigg{(}}\ {\frac{[x\ -\
q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\ {\Bigg{)}}\ {\Psi}(x,\ t)$ , (2.1)
where ${\Psi}(x,\ t)$ is a wave function which describes the given system, $X(t)$ is the position of a classical particle submitted to a time dependent harmonic potential with frequency ${\Omega}(t)$, ${\tau}$ and ${\delta}$ have dimensions of time and space, respectively, and $q(t)$ is the average value $<\ x(t)\ >$.
Writing the wave function ${\Psi}(x,\ t)$ in the polar form defined by the
Madelung-Bohm transformation [3,4] we obtain:
${\Psi}(x,\ t)\ =\ {\phi}(x,\ t)\ e^{i\ S(x,\ t)}$ , (2.2)
where $S(x\ ,t)$ is the classical action and ${\phi}(x,\ t)$ will be defined
in what follows.
Calculating the derivatives, temporal and spatial, of (2.2), we get:
${\frac{{\partial}{\Psi}}{{\partial}t}}\ =\ e^{i\ S}\
{\frac{{\partial}{\phi}}{{\partial}t}}\ +\ i\ {\phi}\ e^{i\ S}\
{\frac{{\partial}S}{{\partial}t}}\ \ \ {\to}$
${\frac{{\partial}{\Psi}}{{\partial}t}}\ =\ e^{i\ S}\
({\frac{{\partial}{\phi}}{{\partial}t}}\ +\ i\ {\phi}\
{\frac{{\partial}S}{{\partial}t}})\ =\ (i\ {\frac{{\partial}S}{{\partial}t}}\
+\ {\frac{1}{{\phi}}}\ {\frac{{\partial}{\phi}}{{\partial}t}})\ {\Psi}$ ,
(2.3a,b)
${\frac{{\partial}{\Psi}}{{\partial}x}}\ =\ e^{i\ S}\
({\frac{{\partial}{\phi}}{{\partial}x}}\ +\ i\ {\phi}\
{\frac{{\partial}S}{{\partial}x}})\ =\ (i\ {\frac{{\partial}S}{{\partial}x}}\
+\ {\frac{1}{{\phi}}}\ {\frac{{\partial}{\phi}}{{\partial}x}})\ {\Psi}$ ,
(2.3c,d)
${\frac{{\partial}^{2}{\Psi}}{{\partial}x^{2}}}\ =\
{\frac{{\partial}}{{\partial}x}}\ [(i\ {\frac{{\partial}S}{{\partial}x}}\ +\
{\frac{1}{{\phi}}}\ {\frac{{\partial}{\phi}}{{\partial}x}})\ {\Psi}]$ =
= ${\Psi}\ [i\ {\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ +\
{\frac{1}{{\phi}}}\ {\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ -\
{\frac{1}{{\phi}^{2}}}\ ({\frac{{\partial}{\phi}}{{\partial}x}})^{2}]\ +\ (i\
{\frac{{\partial}S}{{\partial}x}}\ +\ {\frac{1}{{\phi}}}\
{\frac{{\partial}{\phi}}{{\partial}x}})\
{\frac{{\partial}{\Psi}}{{\partial}x}}$ =
= ${\Psi}\ [i\ {\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ +\
{\frac{1}{{\phi}}}\ {\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ -\
{\frac{1}{{\phi}^{2}}}\ ({\frac{{\partial}{\phi}}{{\partial}x}})^{2}]\ +\ (i\
{\frac{{\partial}S}{{\partial}x}}\ +\ {\frac{1}{{\phi}}}\
{\frac{{\partial}{\phi}}{{\partial}x}})\ (i\
{\frac{{\partial}S}{{\partial}x}}\ +\ {\frac{1}{{\phi}}}\
{\frac{{\partial}{\phi}}{{\partial}x}})\ {\Psi}$ =
= ${\Psi}\ [i\ {\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ +\
{\frac{1}{{\phi}}}\ {\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ -\
{\frac{1}{{\phi}^{2}}}\ ({\frac{{\partial}{\phi}}{{\partial}x}})^{2}\ -\
({\frac{{\partial}S}{{\partial}x}})^{2}\ +\ {\frac{1}{{\phi}^{2}}}\
({\frac{{\partial}{\phi}}{{\partial}x}})^{2}\ +\ 2\ {\frac{i}{{\phi}}}\
{\frac{{\partial}S}{{\partial}x}}\ {\frac{{\partial}{\phi}}{{\partial}x}}]\ \
\ {\to}$
${\frac{{\partial}^{2}{\Psi}}{{\partial}x^{2}}}\ =\ e^{i\ S}\
[{\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ +\ 2\ i\
{\frac{{\partial}S}{{\partial}x}}\ {\frac{{\partial}{\phi}}{{\partial}x}}\ +\
i\ {\phi}\ {\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ -\ {\phi}\
({\frac{{\partial}S}{{\partial}x}})^{2}]\ =$
$=\ {\big{[}}\ i\ {\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ +\
{\frac{1}{{\phi}}}\ {\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ -\
({\frac{{\partial}S}{{\partial}x}})^{2}\ +\ 2\ i\ {\frac{1}{{\phi}}}\
{\frac{{\partial}S}{{\partial}x}}\ {\frac{{\partial}{\phi}}{{\partial}x}}\
{\big{]}}\ {\Psi}$ . (2.3e,f)
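These algebraic steps can be verified symbolically. The short sketch below (an illustrative addition, using generic functions ${\phi}(x,\ t)$ and $S(x,\ t)$) confirms the identity (2.3e,f):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
phi = sp.Function('phi')(x, t)
S = sp.Function('S')(x, t)
Psi = phi * sp.exp(sp.I * S)            # Madelung-Bohm form (2.2)

lhs = sp.diff(Psi, x, 2)                # left-hand side of (2.3e,f)
rhs = (sp.I*sp.diff(S, x, 2) + sp.diff(phi, x, 2)/phi
       - sp.diff(S, x)**2 + 2*sp.I*sp.diff(S, x)*sp.diff(phi, x)/phi) * Psi
print(sp.simplify(sp.expand(lhs - rhs)))   # -> 0
```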
Now, inserting the relations defined by eq. (2.3a,e) into eq. (2.1) we have, remembering that $e^{i\ S}$ is a common factor:
$i\ {\hbar}\ ({\frac{{\partial}{\phi}}{{\partial}t}}\ +\ i\ {\phi}\
{\frac{{\partial}S}{{\partial}t}})\ =\ -\ {\frac{{\hbar}^{2}}{2\ m}}\
[{\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ +$
$\ +\ 2\ i\ {\frac{{\partial}S}{{\partial}x}}\
{\frac{{\partial}{\phi}}{{\partial}x}}\ +\ i\ {\phi}\
{\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ -\ {\phi}\
({\frac{{\partial}S}{{\partial}x}})^{2}]\ +\ {\big{[}}\ {\frac{1}{2}}\ m\
{\Omega}(t)^{2}\ x^{2}\ +\ {\lambda}\ x\ X(t)\ {\big{]}}\ {\phi}\ -$
$-\ {\frac{i\ {\hbar}}{4\ {\tau}}}\ {\big{(}}\ {\frac{[x\ -\
q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\ {\big{)}}\ {\phi}$ , (2.4)
Separating the real and imaginary parts of relation (2.4) results in:
a) imaginary part
${\frac{{\partial}{\phi}}{{\partial}t}}\ =\ -\ {\frac{{\hbar}}{2\ m}}\
{\big{(}}\ 2\ {\frac{{\partial}S}{{\partial}x}}\
{\frac{{\partial}{\phi}}{{\partial}x}}\ +\ {\phi}\
{\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ {\big{)}}\ -\ {\frac{1}{4\
{\tau}}}\ {\big{(}}\ {\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\
{\big{)}}\ {\phi}$ , (2.5)
b) real part
$-\ {\hbar}\ {\phi}\ {\frac{{\partial}S}{{\partial}t}}\ =\ -\
{\frac{{\hbar}^{2}}{2\ m}}\ [{\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\
-\ {\phi}\ ({\frac{{\partial}S}{{\partial}x}})^{2}]\ +\ {\big{[}}\
{\frac{1}{2}}\ m\ {\Omega}(t)^{2}\ x^{2}\ +\ {\lambda}\ x\ X(t)\ {\big{]}}\
{\phi}\ \ \ ({\div}\ m\ {\phi})\ \ \ {\to}$
$-\ {\frac{{\hbar}}{m}}\ {\frac{{\partial}S}{{\partial}t}}\ =\ -\
{\frac{{\hbar}^{2}}{2\ m^{2}}}\ {\frac{1}{{\phi}}}\ {\big{[}}\
{\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ -\ {\phi}\
({\frac{{\partial}S}{{\partial}x}})^{2}\ {\big{]}}\ +\ {\big{[}}\
{\frac{1}{2}}\ {\Omega}(t)^{2}\ x^{2}\ +\ {\frac{{\lambda}}{m}}\ x\ X(t)\
{\big{]}}$ . (2.6)
Dynamics of the Schrödinger’s Equation for Continuous Quantum Measurements
Now, let us see the correspondence between the expressions (2.5-6) and the traditional equations of Ideal Fluid Dynamics [5]: a) the continuity equation, b) Euler’s equation. To do this, let us perform the following correspondences:
Quantum probability density: ${\mid}\ {\Psi}(x,\ t)\ {\mid}^{2}\ =\
{\Psi}^{*}(x,\ t)\ {\Psi}(x,\ t)\ \ \ \ \ \ {\longleftrightarrow}$
Quantum mass density: ${\rho}(x,\ t)\ =\ {\phi}^{2}(x,\ t)\ \ \
{\longleftrightarrow}\ \ \ {\sqrt{{\rho}}}\ =\ {\phi}$ , (2.7a,b)
Gradient of the phase: ${\frac{{\hbar}}{m}}\ {\frac{{\partial}S(x,\
t)}{{\partial}x}}\ \ \ \ \ {\longleftrightarrow}$
Quantum velocity: $v_{qu}(x,\ t)\ {\equiv}\ v_{qu}$ . (2.8)
Putting the relations (2.7b, 2.8) into the equation (2.5) we get:
${\frac{{\partial}{\sqrt{{\rho}}}}{{\partial}t}}\ =\ -\ {\frac{{\hbar}}{2\
m}}\ {\big{(}}\ 2\ {\frac{{\partial}S}{{\partial}x}}\
{\frac{{\partial}{\sqrt{{\rho}}}}{{\partial}x}}\ +\ {\sqrt{{\rho}}}\
{\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ {\big{)}}\ -\ {\frac{1}{4\
{\tau}}}\ {\big{(}}\ {\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\
{\big{)}}\ {\sqrt{{\rho}}}\ \ \ {\to}$
${\frac{1}{2\ {\sqrt{{\rho}}}}}\ {\frac{{\partial}{\rho}}{{\partial}t}}\ =\ -\
{\frac{{\hbar}}{2\ m}}\ {\big{(}}\ 2\ {\frac{{\partial}S}{{\partial}x}}\
{\frac{1}{2\ {\sqrt{{\rho}}}}}\ {\frac{{\partial}{\rho}}{{\partial}x}}\ +\
{\sqrt{{\rho}}}\ {\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ {\big{)}}\ -\
{\frac{1}{4\ {\tau}}}\ {\big{(}}\ {\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\
-\ 1\ {\big{)}}\ {\sqrt{{\rho}}}\ \ \ {\to}$
${\frac{1}{{\rho}}}\ {\frac{{\partial}{\rho}}{{\partial}t}}\ =\ -\
{\frac{{\hbar}}{m}}\ {\big{(}}\ {\frac{{\partial}S}{{\partial}x}}\
{\frac{1}{{\rho}}}\ {\frac{{\partial}{\rho}}{{\partial}x}}\ +\
{\frac{{\partial}^{2}S}{{\partial}x^{2}}}\ {\big{)}}\ -\ {\frac{1}{2\
{\tau}}}\ {\big{(}}\ {\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\
{\big{)}}\ \ \ {\to}$
${\frac{1}{{\rho}}}\ {\frac{{\partial}{\rho}}{{\partial}t}}\ =\ -\
{\frac{{\partial}}{{\partial}x}}\ {\big{(}}\ {\frac{{\hbar}}{m}}\
{\frac{{\partial}S}{{\partial}x}}\ {\big{)}}\ -\ {\frac{1}{{\rho}}}\
{\big{(}}\ {\frac{{\hbar}}{m}}\ {\frac{{\partial}S}{{\partial}x}}\ {\big{)}}\
{\frac{{\partial}{\rho}}{{\partial}x}}\ -\ {\frac{1}{2\ {\tau}}}\ {\big{(}}\
{\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\ {\big{)}}\ \ \ {\to}$
${\frac{1}{{\rho}}}\ {\frac{{\partial}{\rho}}{{\partial}t}}\ =\ -\
{\frac{{\partial}v_{qu}}{{\partial}x}}\ -\ {\frac{v_{qu}}{{\rho}}}\
{\frac{{\partial}{\rho}}{{\partial}x}}\ -\ {\frac{1}{2\ {\tau}}}\ {\big{(}}\
{\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\ {\big{)}}\ \ \ {\to}$
${\frac{{\partial}{\rho}}{{\partial}t}}\ +\ {\rho}\
{\frac{{\partial}v_{qu}}{{\partial}x}}\ +\ v_{qu}\
{\frac{{\partial}{\rho}}{{\partial}x}}\ =\ -\ {\frac{{\rho}}{2\ {\tau}}}\
{\big{(}}\ {\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\ {\big{)}}\ \ \
{\to}$
${\frac{{\partial}{\rho}}{{\partial}t}}\ +\ {\frac{{\partial}({\rho}\
v_{qu})}{{\partial}x}}\ =\ -\ {\frac{{\rho}}{2\ {\tau}}}\ {\big{(}}\
{\frac{[x\ -\ q(t)]^{2}}{{\delta}^{2}(t)}}\ -\ 1\ {\big{)}}$ , (2.9)
which represents the continuity equation of the mass conservation law of Fluid Dynamics. We must note that this expression also indicates decoherence of the physical system described by eq. (2.1). Using eq. (2.7b) we now define the quantum potential $V_{qu}$:
$V_{qu}(x,\ t)\ {\equiv}\ V_{qu}\ =\ -\ ({\frac{{\hbar}^{2}}{2\ m\ {\phi}}})\
{\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ =\ -\ {\frac{{\hbar}^{2}}{2\
m}}\ {\frac{1}{{\sqrt{{\rho}}}}}\
{\frac{{\partial}^{2}{\sqrt{{\rho}}}}{{\partial}x^{2}}}$ , (2.10a,b)
so that the expression (2.6) can be written as:
${\frac{{\hbar}}{m}}\ {\frac{{\partial}S}{{\partial}t}}\ +\
{\frac{{\hbar}^{2}}{2\ m^{2}}}\ ({\frac{{\partial}S}{{\partial}x}})^{2}\ =\ -\
{\frac{1}{m}}\ {\Big{(}}\ {\big{[}}\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ x^{2}\
+\ {\lambda}\ x\ X(t)\ {\big{]}}\ +\ V_{qu}\ {\Big{)}}$ , (2.11)
or, using (2.8):
${\hbar}\ {\frac{{\partial}S}{{\partial}t}}\ +\ {\frac{1}{2}}\ m\ v_{qu}^{2}\
+\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ x^{2}\ +\ {\lambda}\ x\ X(t)\ +\ V_{qu}\
=$
$=\ {\hbar}\ {\frac{{\partial}S}{{\partial}t}}\ +\ {\frac{1}{2}}\ m\
v_{qu}^{2}\ +\ V\ +\ V_{qu}\ =\ 0$ . (2.12a,b)
Differentiating the relation (2.12a) with respect to $x$ and using the relation (2.8) we obtain:
${\frac{{\partial}}{{\partial}x}}\ {\Bigg{[}}\ {\hbar}\
{\frac{{\partial}S}{{\partial}t}}\ +\ {\Big{(}}\ {\frac{1}{2}}\ m\ v_{qu}^{2}\
+\ {\big{[}}\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ x^{2}\ +\ {\lambda}\ x\ X(t)\
{\big{]}}\ +\ V_{qu}\ {\Big{)}}\ {\Bigg{]}}\ =\ 0\ \ \ {\to}$
${\frac{{\partial}}{{\partial}t}}\ {\big{(}}\ {\frac{{\hbar}}{m}}\
{\frac{{\partial}S}{{\partial}x}}\ {\big{)}}\ +\
{\frac{{\partial}}{{\partial}x}}\ {\big{(}}\ {\frac{1}{2}}\ v_{qu}^{2}\
{\big{)}}\ +\ {\Omega}^{2}(t)\ x\ +\ {\frac{{\lambda}}{m}}\ X(t)\ =\ -\
{\frac{1}{m}}\ {\frac{{\partial}}{{\partial}x}}\ V_{qu}\ \ \ {\to}$
${\frac{{\partial}v_{qu}}{{\partial}t}}\ +\ v_{qu}\
{\frac{{\partial}v_{qu}}{{\partial}x}}\ +\ {\Omega}^{2}(t)\ x\ +\
{\frac{{\lambda}}{m}}\ X(t)\ =\ -\ {\frac{1}{m}}\
{\frac{{\partial}}{{\partial}x}}\ V_{qu}$ , (2.13)
which is an equation similar to Euler’s equation, which governs the motion of an ideal fluid.
Taking into account that [6]:
$v_{qu}(x,\ t)\ =\ {\frac{dx_{qu}}{dt}}\ =\ {\big{[}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\ {\frac{1}{2\ {\tau}}}\
{\big{]}}\ [x_{qu}\ -\ q(t)]\ +\ {\dot{q}}(t)$ , (2.14a,b)
the expression (2.13) could be written as:
$m\ {\frac{d^{2}x}{dt^{2}}}\ =\ -\ {\frac{{\partial}}{{\partial}x}}\
{\Big{[}}\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ x^{2}\ +\ {\lambda}\ x\ X(t)\
+V_{qu}\ {\Big{]}}\ {\equiv}$
${\equiv}\ F_{c}(x,\ t)\ {\mid}_{x=x(t)}\ +\ F_{qu}(x,\ t)\ {\mid}_{x=x(t)}$ ,
(2.15)
where:
${\frac{d}{dt}}\ =\ {\frac{{\partial}}{{\partial}t}}\ +\ v_{qu}\
{\frac{{\partial}}{{\partial}x}}$ , (2.16)
is the ”substantive derivative” (local plus convective) or ”hydrodynamical derivative” [5]. We note that eq. (2.15) has the form of Newton’s second law.
In this way, the expressions (2.10a,b;2.13;2.15) represent the dynamics of a
quantum particle which propagates with the quantum velocity (${\vec{v}}_{qu}$)
in a viscous medium, submitted to a time dependent classical harmonic
potential [${\frac{1}{2}}\ m\ {\Omega}(t)^{2}\ x^{2}$], to an external linear
field characterized by the potential [${\lambda}\ x\ X(t)$] and to the quantum
Bohm potential ($V_{qu}$).
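These relations can be made concrete for a normalized Gaussian probability density of width ${\delta}(t)$ centered at $q(t)$ (an illustrative ansatz assumed here): with $v_{qu}$ given by (2.14a,b), the modified continuity equation (2.9) is satisfied identically, as the following symbolic sketch checks:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
tau = sp.symbols('tau', positive=True)
q = sp.Function('q')(t)
delta = sp.Function('delta')(t)

# Gaussian density of width delta(t) centered at q(t) (illustrative ansatz)
rho = sp.exp(-(x - q)**2 / (2*delta**2)) / (delta * sp.sqrt(2*sp.pi))
# quantum velocity, eq. (2.14a,b)
v = (sp.diff(delta, t)/delta + 1/(2*tau)) * (x - q) + sp.diff(q, t)

lhs = sp.diff(rho, t) + sp.diff(rho*v, x)            # eq. (2.9), left-hand side
rhs = -rho/(2*tau) * ((x - q)**2/delta**2 - 1)       # eq. (2.9), right-hand side
print(sp.simplify((lhs - rhs)/rho))                  # -> 0
```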
In what follows we calculate the wave packet of the Schrödinger’s Equation for Continuous Measurements ($SECM$) given by eq. (2.1).
3\. Quantum Wave Packet
3.1. Introduction
In 1909 [7], Einstein studied the black body radiation in thermodynamical
equilibrium with matter. Starting from Planck’s equation of 1900 for the radiation density and using the Fourier expansion technique to calculate its fluctuations, he showed that it exhibits, simultaneously, fluctuations which are characteristic of waves and particles. In 1916 [8], analyzing again the black body Planckian radiation, Einstein proposed that an electromagnetic radiation with wavelength ${\lambda}$ has a linear momentum $p$, given by the relation:
$p\ =\ {\frac{h}{{\lambda}}}$ , (3.1.1)
where $h$ is the Planck constant [9].
In works developed between 1923 and 1925 [10], de Broglie formulated his fundamental idea that the electron of mass $m$, in its atomic orbital motion with velocity $v$ and linear momentum $p\ =\ m\ v$, is guided by a ”matter wave” (pilot-wave) whose wavelength is given by:
${\lambda}\ =\ {\frac{h}{p}}\ .\ \ \ \ \ (3.1.2)$
In 1926 [11], Schrödinger proposed that the de Brogliean ”pilot-wave” ought to obey a differential equation, today known as the famous Schrödinger equation:
$i\ {\hbar}\ {\frac{{\partial}}{{\partial}t}}\ {\Psi}({\vec{r}},\ t)=\
{\hat{H}}\ {\Psi}({\vec{r}},\ t)$ , (3.1.3a)
where ${\hat{H}}$ is the Hamiltonian operator defined by:
${\hat{H}}\ =\ {\frac{{\hat{p}}^{2}}{2\ m}}\ +\ V({\vec{r}},\ t)\ ,\ \ \ \ \
({\hat{p}}\ =\ -\ i\ {\hbar}\ {\nabla})$ , (3.1.3b,c)
where $V$ is the potential energy. In the same year of 1926 [12], Born interpreted the Schrödinger wave function ${\Psi}$ as a probability amplitude.
3.2. Quantum Wave Packet via Schrödinger-Feynman Quantum Mechanics
As is well known [13], when the potential energy $V$ of a physical system, with total energy $E$, depends only on the position [$V({\vec{r}})$], the solution of the Schrödinger’s equation ($SE$) [see relations (3.1.3a-c)] is given by:
${\Psi}({\vec{r}},\ t)\ =\ {\psi}({\vec{r}})\ e^{-\ i\ {\frac{E}{{\hbar}}}\
t}\ =\ {\psi}({\vec{r}})\ e^{-\ i\ {\omega}\ t}$, (3.2.1a,b)
where ${\psi}({\vec{r}}$) satisfies the following equation, known as time
independent $SE$ [14]:
${\Delta}\ {\psi}({\vec{r}})\ +\ {\frac{2\ m}{{\hbar}^{2}}}\ (E\ -\ V)\
{\psi}({\vec{r}})\ =\ 0\ \ \ {\Leftrightarrow}\ \ \ {\hat{H}}\
{\psi}({\vec{r}})\ =\ E\ {\psi}({\vec{r}})$, (3.2.2a,b)
where ${\hat{H}}$ is given by the expressions (3.1.3b,c). In addition, ${\psi}({\vec{r}})$ and its derivative ${\frac{{\partial}{\psi}({\vec{r}})}{{\partial}{\vec{r}}}}$ must be continuous.
It is important to note that the relation (3.2.1b) was obtained considering
the Planckian energy:
$E\ =\ h\ {\nu}\ =\ {\hbar}\ {\omega}\ ,\ \ \ {\hbar}\ =\ {\frac{h}{2\
{\pi}}},\ \ \ {\omega}\ =\ 2\ {\pi}\ {\nu}$ . (3.2.3a-d)
Since expression (3.2.2b) is an eigenvalue equation, its solution is given by
a discrete set of eigenfunctions (”Schrödingerian waves”) of the operator
${\hat{H}}$ [15]. On the other hand, expression (3.1.2) suggests that a
concentrated superposition of ”de Brogliean waves”, with
wavelength ${\lambda}$, could be used to describe particles localized in
space. Such a description requires a mechanism that takes into account
these ”waves” with many wavelengths: Fourier analysis
[15]. According to this technique (in the one-dimensional case) we can
consider ${\psi}$(x) as a superposition of plane monochromatic harmonic
waves, that is:
${\psi}(x)\ =\ {\frac{1}{{\sqrt{2\ {\pi}}}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ {\phi}(k)\ e^{i\ k\ x}\ dk$ , (3.2.4a)
with:
${\phi}(k)\ =\ {\frac{1}{{\sqrt{2\ {\pi}}}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ {\psi}(x^{\prime})\ e^{-\ i\ k\ x^{\prime}}\ dx^{\prime}$ ,
(3.2.4b)
where:
$k\ =\ {\frac{2\ {\pi}}{{\lambda}}}$ , (3.2.4c)
is the wavenumber; it marks the transition from the discrete to the
continuous description. Note that, using the expressions (3.1.2) and (3.2.3c),
relation (3.2.4c) can be written as:
$k\ =\ 2\ {\pi}\ {\frac{p}{h}}\ \ \ {\to}\ \ \ p\ =\ {\hbar}\ k$ . (3.2.4d)
Inserting expression (3.2.4b) into relation (3.2.4a) results in:
${\psi}(x)\ =\ {\frac{1}{2\ {\pi}}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\
{\int}_{-\ {\infty}}^{+\ {\infty}}\ {\psi}(x^{\prime})\ e^{i\ k\ (x-\
x^{\prime})}\ dx^{\prime}\ dk$ . (3.2.5)
Considering that [15]:
${\delta}(z^{\prime}\ -\ z)\ {\equiv}\ {\delta}(z\ -\ z^{\prime})\ =\
{\frac{1}{2\ {\pi}}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ e^{i\ k\ (z-\
z^{\prime})}\ dk$ , (3.2.6a,b)
$f(z)\ =\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ f(z^{\prime})\
{\delta}(z^{\prime}\ -\ z)\ dz^{\prime}$ , (3.2.6c)
we verify (taking $z,\ z^{\prime}\ {\equiv}\ x,\ x^{\prime}$) the
consistency of relation (3.2.5), which expresses the famous completeness
relation.
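As a purely illustrative numerical check of the Fourier pair (3.2.4a,b) (and hence of the completeness relation above), the following sketch evaluates both integrals on a discrete grid for a Gaussian test function; the test function and the grid parameters are arbitrary choices, not taken from the text.
```python
import numpy as np

# Discretized check of the Fourier pair (3.2.4a,b) for a Gaussian test function psi(x).
N = 400
x = np.linspace(-15.0, 15.0, N)
k = np.linspace(-20.0, 20.0, N)
dx, dk = x[1] - x[0], k[1] - k[0]

psi = np.pi**-0.25 * np.exp(-x**2 / 2)                 # normalized Gaussian

# phi(k) = (1/sqrt(2 pi)) * int psi(x') exp(-i k x') dx'        [eq. (3.2.4b)]
phi = (np.exp(-1j * np.outer(k, x)) @ psi) * dx / np.sqrt(2 * np.pi)

# psi_rec(x) = (1/sqrt(2 pi)) * int phi(k) exp(+i k x) dk       [eq. (3.2.4a)]
psi_rec = (np.exp(1j * np.outer(x, k)) @ phi) * dk / np.sqrt(2 * np.pi)

print(np.max(np.abs(psi_rec - psi)))       # essentially zero: the pair is consistent
print(np.sum(np.abs(phi)**2) * dk)         # ~1.0: the norm is preserved (Parseval)
```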
Taking relation (3.2.4a) into account in the one-dimensional representation
of equation (3.2.1b), the following relation is obtained:
${\Psi}(x,\ t)\ =\ {\frac{1}{{\sqrt{2\ {\pi}}}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ {\phi}(k)\ e^{i\ [k\ x\ -\ {\omega}(k)\ t]}\ dk$ , (3.2.7)
which represents the wave packet of amplitude ${\phi}(k)$.
Note that the dependence of ${\omega}$ on $k$, indicated in the above
relation, is due to the fact that the energy $E$ of a physical system depends
on $p$; combining the relations (3.2.3b) and (3.2.4d), this dependence is
verified immediately, as will be shown in what follows.
Now, let us write equation (3.2.7) in terms of the Feynman propagator.
Putting $t\ =\ 0$ in this expression and proceeding in analogy with relation
(3.2.4b), we get:
${\Psi}(x,\ 0)\ =\ {\frac{1}{{\sqrt{2\ {\pi}}}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ {\phi}(k)\ e^{i\ k\ x}\ dk\ \ \ {\to}$
${\phi}(k)\ =\ {\frac{1}{{\sqrt{2\ {\pi}}}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ {\Psi}(x^{\prime},\ 0)\ e^{-\ i\ k\ x^{\prime}}\ dx^{\prime}$ .
(3.2.8)
Inserting this expression into equation (3.2.7) results in:
${\Psi}(x,\ t)\ =\ {\frac{1}{{\sqrt{2\ {\pi}}}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ {\Big{(}}\ {\frac{1}{{\sqrt{2\ {\pi}}}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ {\Psi}(x^{\prime},\ 0)\ e^{-\ i\ k\ x^{\prime}}\ dx^{\prime}\
{\Big{)}}\ {\times}\ e^{i\ [k\ x\ -\ {\omega}(k)\ t]}\ dk\ \ \ {\to}$
${\Psi}(x,\ t)\ =\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ {\Big{(}}\ {\frac{1}{2\
{\pi}}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ e^{i\ k\ [(x\ -\ x^{\prime})\ -\
{\frac{{\omega}(k)}{k}}\ t]}\ dk\ {\Big{)}}\ {\times}\ {\Psi}(x^{\prime},\ 0)\
dx^{\prime}$ . (3.2.9)
At this point it is important to note that, in the formalism of Feynman’s
quantum mechanics [16], the term inside the brackets in equation (3.2.9)
is the Feynman propagator $K(x,\ x^{\prime};\ t)$. So, this expression
can be written as:
${\Psi}(x,\ t)\ =\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ K(x,\ x^{\prime};\ t)\
{\Psi}(x^{\prime},\ 0)\ dx^{\prime}$ , (3.2.10a)
where:
$K(x,\ x^{\prime};\ t)\ =\ {\frac{1}{2\ {\pi}}}\ {\int}_{-\ {\infty}}^{+\
{\infty}}\ e^{i\ k\ [(x\ -\ x^{\prime})\ -\ {\frac{{\omega}(k)}{k}}\ t]}\ dk$
. (3.2.10b)
Equation (3.2.10a) gives the wavefunction ${\Psi}$ at any time $t$
in terms of this function at the time $t\ =\ 0$. Thus, if ${\omega}(k)$ is a
known function of $k$, then ${\Psi}(x,\ t)$ can be obtained explicitly from
${\Psi}(x,\ 0)$.
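As a minimal numerical sketch of how (3.2.7) and (3.2.10a) propagate an initial packet, assuming for illustration the free-particle dispersion ${\omega}(k) = {\hbar}k^{2}/(2m)$ with ${\hbar} = m = 1$ (the text keeps ${\omega}(k)$ general):
```python
import numpy as np

# Evolve Psi(x,0) to Psi(x,t) with eq. (3.2.7): multiply the amplitude phi(k) by
# exp(-i w(k) t) and transform back. The dispersion w(k) = k^2/2 (free particle,
# hbar = m = 1) is an illustrative assumption, not fixed by the text.
N, L, t = 1024, 80.0, 2.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.pi**-0.25 * np.exp(-x**2 / 2) * np.exp(1j * 2.0 * x)   # mean momentum <p> = 2

phi = np.fft.fft(psi0)                                   # amplitude phi(k), up to a constant
psi_t = np.fft.ifft(phi * np.exp(-1j * (k**2 / 2) * t))  # apply exp(-i w(k) t) and invert

print(np.sum(np.abs(psi_t)**2) * dx)           # ~1.0: the evolution is unitary
print(x[np.argmax(np.abs(psi_t)**2)])          # ~ <p> t = 4: the packet moves at the group velocity
```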
In what follows we determine the form of the wave packet given by
equations (3.2.10a,b) according to the $SECM$, defined by eq. (2.1) [17].
3.3. The Quantum Wave Packet of the Schrödinger Equation for Continuous Measurements
Initially, let us calculate the quantum trajectory ($x_{qu}$) of the physical
system represented by eq. (2.1). To do this, let us integrate relation
(2.14b) [remembering that ${\int}\ {\frac{dz}{z}}\ =\ {\ell}n\ z,\ {\ell}n\
({\frac{x}{y}})\ =\ {\ell}n\ x\ -\ {\ell}n\ y$, and ${\ell}n\ x\ y\ =\ {\ell}n\
x\ +\ {\ell}n\ y$]:
$v_{qu}(x,\ t)\ =\ {\frac{dx_{qu}}{dt}}\ =\ {\big{[}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\ {\frac{1}{2\ {\tau}}}\
{\big{]}}\ [x_{qu}\ -\ q(t)]\ +\ {\dot{q}}(t)\ \ \ {\to}$
${\frac{dx_{qu}}{dt}}\ -\ {\frac{dq}{dt}}\ =\ {\big{[}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\ {\frac{1}{2\ {\tau}}}\
{\big{]}}\ [x_{qu}\ -\ q(t)]\ \ \ {\to}\ \ \ {\frac{d[x_{qu}(t)\ -\
q(t)]}{[x_{qu}(t)\ -\ q(t)]}}\ =\ {\big{[}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ dt\ +\ {\frac{dt}{2\ {\tau}}}\
{\big{]}}\ \ \ {\to}$
${\int}_{o}^{t}\ {\frac{d[x_{qu}(t^{\prime})\ -\
q(t^{\prime})]}{[x_{qu}(t^{\prime})\ -\ q(t^{\prime})]}}\ =\ {\int}_{o}^{t}\
{\frac{d{\delta}(t^{\prime})}{{\delta}(t^{\prime})}}\ +\ {\int}_{o}^{t}\
{\frac{dt}{2\ {\tau}}}\ \ \ {\to}$
${\ell}n\ {\Big{(}}\ {\frac{[x_{qu}(t)\ -\ q(t)]}{[x_{qu}(0)\ -\ q(0)]}}\
{\Big{)}}\ =\ {\ell}n\ {\Big{[}}\ {\frac{{\delta}(t)}{{\delta}(0)}}\
{\Big{]}}\ +\ {\frac{t}{2\ {\tau}}}\ =\ {\ell}n\ {\Big{[}}\
{\frac{{\delta}(t)}{{\delta}(0)}}\ {\Big{]}}\ +\ {\ell}n\ {\Big{[}}\ exp\
{\big{(}}\ {\frac{t}{2\ {\tau}}}\ {\big{)}}\ {\Big{]}}\ =$
$\ =\ {\ell}n\ {\Big{(}}\ {\frac{{\delta}(t)}{{\delta}(0)}}\ .\ exp\
{\Big{[}}\ {\frac{t}{2\ {\tau}}}\ {\Big{]}}\ {\Big{)}}\ \ \ {\to}\ \ \
x_{qu}(t)\ =\ q(t)\ +\ e^{t/2\ {\tau}}\ {\frac{{\delta}(t)}{{\delta}(0)}}\
[x_{qu}(0)\ -\ q(0)]$ , (3.3.1)
which represents the sought quantum trajectory.
To obtain the Schrödinger-de Broglie-Bohm wave packet for Continuous
Measurements given by eq. (2.2), let us expand the functions $S(x,\ t)$,
$V(x,\ t)$ and $V_{qu}(x,\ t)$ around $q(t)$ up to second order in a Taylor
series [2.5]. In this way we have:
$S(x,\ t)\ =\ S[q(t),\ t]\ +\ S^{\prime}[q(t),\ t]\ [x\ -\ q(t)]\ +\
{\frac{S^{\prime\prime}[q(t),\ t]}{2}}\ [x\ -\ q(t)]^{2}$ , (3.3.2)
$V(x,\ t)\ =\ V[q(t),\ t]\ +\ V^{\prime}[q(t),\ t]\ [x\ -\ q(t)]\ +\
{\frac{V^{\prime\prime}[q(t),\ t]}{2}}\ [x\ -\ q(t)]^{2}$ , (3.3.3)
$V_{qu}(x,\ t)\ =\ V_{qu}[q(t),\ t]\ +\ V_{qu}^{\prime}[q(t),\ t]\ [x\ -\
q(t)]\ +\ {\frac{V_{qu}^{\prime\prime}[q(t),\ t]}{2}}\ [x\ -\ q(t)]^{2}$ .
(3.3.4)
Differentiating expression (3.3.2) with respect to the variable $x$, multiplying the
result by ${\frac{{\hbar}}{m}}$, using the relations (2.8) and (2.14b), and
taking into account the polynomial identity property, we obtain:
${\frac{{\hbar}}{m}}\ {\frac{{\partial}S(x,\ t)}{{\partial}x}}\ =\
{\frac{{\hbar}}{m}}\ {\Big{(}}\ S^{\prime}[q(t),\ t]\ +\
S^{\prime\prime}[q(t),\ t]\ [x\ -\ q(t)]\ {\Big{)}}\ =$
$=\ v_{qu}(x,\ t)\ =\ {\big{[}}\ {\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\
{\frac{1}{2\ {\tau}}}\ {\big{]}}\ [x_{qu}\ -\ q(t)]\ +\ {\dot{q}}(t)\ \ \
{\to}$
$S^{\prime}[q(t),\ t]\ =\ {\frac{m\ {\dot{q}}(t)}{{\hbar}}}\ ,\ \ \
S^{\prime\prime}[q(t),\ t]\ =\ {\frac{m}{{\hbar}}}\ {\big{[}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\ {\frac{1}{2\ {\tau}}}\
{\big{]}}$ . (3.3.5a,b)
Substituting expressions (3.3.5a,b) into equation (3.3.2) results in:
$S(x,\ t)\ =\ S_{o}(t)\ +\ {\frac{m\ {\dot{q}}(t)}{{\hbar}}}\ [x\ -\ q(t)]\ +\
{\frac{m}{2\ {\hbar}}}\ {\Big{[}}\ {\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\
+\ {\frac{1}{2\ {\tau}}}\ {\Big{]}}\ [x\ -\ q(t)]^{2}$ , (3.3.6)
where:
$S_{o}(t)\ {\equiv}\ S[q(t),\ t]$ , (3.3.7)
is the quantum action.
Differentiating eq. (3.3.6) with respect to the time $t$, we obtain
(remembering that ${\frac{{\partial}x}{{\partial}t}}$ = 0):
${\frac{{\partial}S}{{\partial}t}}\ =\ {\dot{S}}_{o}(t)\ +\
{\frac{{\partial}}{{\partial}t}}\ {\Big{(}}\ {\frac{m\
{\dot{q}}(t)}{{\hbar}}}\ [x\ -\ q(t)]\ {\Big{)}}\ +\
{\frac{{\partial}}{{\partial}t}}\ {\Bigg{(}}\ {\frac{m}{2\ {\hbar}}}\
{\Big{[}}\ {\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\ {\frac{1}{2\
{\tau}}}\ {\Big{]}}\ [x\ -\ q(t)]^{2}\ {\Bigg{)}}\ \ \ {\to}$
${\frac{{\partial}S}{{\partial}t}}\ =\ {\dot{S}}_{o}(t)\ +\ {\frac{m\
{\ddot{q}}(t)}{{\hbar}}}\ [x\ -\ q(t)]\ -\ {\frac{m\
{\dot{q}}(t)^{2}}{{\hbar}}}\ +$
\+ ${\frac{m}{2\ {\hbar}}}\ [{\frac{{\ddot{{\delta}}}(t)}{{\delta}(t)}}\ -\
{\frac{{\dot{{\delta}}}^{2}(t)}{{\delta}^{2}(t)}}]\ [x\ -\ q(t)]^{2}\ -\
{\frac{m\ {\dot{q}}(t)}{{\hbar}}}\ {\Big{(}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\ {\frac{1}{2\ {\tau}}}\
{\Big{)}}\ [x\ -\ q(t)]$ . (3.3.8)
Considering that [6]:
${\rho}(x,\ t)\ =\ [2\ {\pi}\ {\delta}^{2}(t)]^{-\ 1/2}\ e^{-\ {\frac{[x\ -\
{\bar{x}}(t)]^{2}}{2\ {\delta}^{2}(t)}}}$ , (3.3.9)
let us write $V_{qu}$ in terms of $[x\ -\ q(t)]$. Using
eqs. (2.7b) and (3.3.9), we first calculate the following derivatives:
${\frac{{\partial}{\phi}}{{\partial}x}}\ =\ {\frac{{\partial}}{{\partial}x}}\
{\Big{(}}\ [2\ {\pi}\ {\delta}^{2}(t)]^{-\ 1/4}\ e^{-\ {\frac{[x\ -\
q(t)]^{2}}{4\ {\delta}^{2}(t)}}}\ {\Big{)}}\ =\ [2\ {\pi}\
{\delta}^{2}(t)]^{-\ 1/4}\ e^{-\ {\frac{[x\ -\ q(t)]^{2}}{4\
{\delta}^{2}(t)}}}{\frac{{\partial}}{{\partial}x}}\ {\Big{(}}\ -\ {\frac{[x\
-\ q(t)]^{2}}{4\ {\delta}^{2}(t)}}\ {\Big{)}}\ \ \ {\to}$
${\frac{{\partial}{\phi}}{{\partial}x}}\ =\ -\ [2\ {\pi}\ {\delta}^{2}(t)]^{-\
1/4}\ e^{-\ {\frac{[x\ -\ q(t)]^{2}}{4\ {\delta}^{2}(t)}}}\ {\frac{[x\ -\
q(t)]}{2\ {\delta}^{2}(t)}}$ ,
${\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ =\
{\frac{{\partial}}{{\partial}x}}\ {\Big{(}}\ -\ [2\ {\pi}\
{\delta}^{2}(t)]^{-\ 1/4}\ e^{-\ {\frac{[x\ -\ q(t)]^{2}}{4\
{\delta}^{2}(t)}}}\ {\frac{[x\ -\ q(t)]}{2\ {\delta}^{2}(t)}}\ {\Big{)}}$ =
$\ =\ -\ [2\ {\pi}\ {\delta}^{2}(t)]^{-\ 1/4}\ e^{-\ {\frac{[x\ -\
q(t)]^{2}}{4\ {\delta}^{2}(t)}}}\ {\frac{{\partial}}{{\partial}x}}\ {\Big{(}}\
{\frac{[x\ -\ q(t)]}{2\ {\delta}^{2}(t)}}\ {\Big{)}}\ -$
$-\ [2\ {\pi}\ {\delta}^{2}(t)]^{-\ 1/4}\ e^{-\ {\frac{[x\ -\ q(t)]^{2}}{4\
{\delta}^{2}(t)}}}\ {\frac{{\partial}}{{\partial}x}}\ {\Big{(}}\ -\ {\frac{[x\
-\ q(t)]^{2}}{4\ {\delta}^{2}(t)}}\ {\Big{)}}\ {\big{(}}\ {\frac{[x\ -\
q(t)]}{2\ {\delta}^{2}(t)}}\ {\big{)}}\ \ \ {\to}$
${\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ =\ -\ [2\ {\pi}\
{\delta}^{2}(t)]^{-\ 1/4}\ e^{-\ {\frac{[x\ -\ q(t)]^{2}}{4\
{\delta}^{2}(t)}}}\ {\frac{1}{2\ {\delta}^{2}(t)}}\ +\ [2\ {\pi}\
{\delta}^{2}(t)]^{-\ 1/4}\ e^{-\ {\frac{[x\ -\ q(t)]^{2}}{4\
{\delta}^{2}(t)}}}\ {\frac{[x\ -\ q(t)]^{2}}{4\ {\delta}^{4}(t)}}$ =
$=\ -\ {\phi}\ {\frac{1}{2\ {\delta}^{2}(t)}}\ +\ {\phi}\ {\frac{[x\ -\
q(t)]^{2}}{4\ {\delta}^{4}(t)}}\ \ \ {\to}\ \ \ {\frac{1}{{\phi}}}\
{\frac{{\partial}^{2}{\phi}}{{\partial}x^{2}}}\ =\ {\frac{[x\ -\ q(t)]^{2}}{4\
{\delta}^{4}(t)}}\ -\ {\frac{1}{2\ {\delta}^{2}(t)}}$ . (3.3.10)
Substituting relation (3.3.10) into equation (2.10a) and taking into
account expression (3.3.4) results in:
$V_{qu}(x,\ t)\ =\ V_{qu}[q(t),\ t]\ +\ V_{qu}^{\prime}[q(t),\ t]\ [x\ -\
q(t)]\ +\ {\frac{V_{qu}^{\prime\prime}[q(t),\ t]}{2}}\ [x\ -\ q(t)]^{2}\ \ \
{\to}$
$V_{qu}(x,\ t)\ =\ {\frac{{\hbar}^{2}}{4\ m\ {\delta}^{2}(t)}}\ [x\ -\
q(t)]^{0}\ -\ {\frac{{\hbar}^{2}}{8\ m\ {\delta}^{4}(t)}}\ [x\ -\ q(t)]^{2}$ .
(3.3.11)
Besides this, using eq. (2.1), eq. (3.3.3) can be written in the form:
$V(x,\ t)\ =\ V[q(t),\ t]\ +\ V^{\prime}[q(t),\ t]\ [x\ -\ q(t)]\ +\
{\frac{V^{\prime\prime}[q(t),\ t]}{2}}\ [x\ -\ q(t)]^{2}\ \ \ {\to}$
$V(x,\ t)\ =\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ q^{2}(t)\ +\ {\lambda}\ q(t)\
X(t)\ +$
$+\ {\Big{(}}\ m\ {\Omega}^{2}(t)\ q(t)\ +\ {\lambda}\ X(t)\ {\Big{)}}\ [x\ -\
q(t)]\ +\ {\frac{m}{2}}\ {\Omega}^{2}(t)\ [x\ -\ q(t)]^{2}$ . (3.3.12)
Inserting the relations (2.8), (2.14b) and (3.3.2-4; 3.3.8,10,11) into
eq. (2.12b), and remembering that $S_{o}(t)$, ${\delta}(t)$ and $q(t)$ depend
only on the time $t$, we obtain:
${\hbar}\ {\frac{{\partial}S}{{\partial}t}}\ +\ {\frac{1}{2}}\ m\ v_{qu}^{2}\
+\ V\ +\ V_{qu}\ =$
$=\ {\hbar}\ {\Big{[}}\ {\dot{S}}_{o}\ +\ {\frac{m\ {\ddot{q}}}{{\hbar}}}\ (x\
-\ q)\ -\ {\frac{m\ {\dot{q}}^{2}}{{\hbar}}}\ +\ {\frac{m}{2\ {\hbar}}}\
{\big{(}}\ {\frac{{\ddot{{\delta}}}}{{\delta}}}\ -\
{\frac{{\dot{{\delta}}}^{2}}{{\delta}^{2}}}\ {\big{)}}\ (x\ -\ q)^{2}\ -$
$-\ {\frac{m\ {\dot{q}}}{{\hbar}}}\ {\big{(}}\
{\frac{{\dot{{\delta}}}}{{\delta}}}\ +\ {\frac{1}{2\ {\tau}}}\ {\big{)}}\ (x\
-\ q){\Big{]}}\ +\ {\frac{1}{2}}\ m\ {\Big{[}}\ {\big{(}}\
{\frac{{\dot{{\delta}}}}{{\delta}}}\ +\ {\frac{1}{2\ {\tau}}}\ {\big{)}}\ (x\
-\ q)\ +\ {\dot{q}}\ {\Big{]}}^{2}\ +$
$+\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ q^{2}+\ {\lambda}\ q\ X(t)\ +\
{\big{[}}\ m\ {\Omega}^{2}(t)\ q\ +\ {\lambda}\ X(t)\ {\big{]}}\ (x\ -\ q)\ +\
{\frac{m}{2}}\ {\Omega}^{2}(t)\ (x\ -\ q)^{2}\ +$
$+\ {\frac{{\hbar}^{2}}{4\ m\ {\delta}^{2}}}\ -\ {\frac{{\hbar}^{2}}{8\ m\
{\delta}^{4}}}\ (x\ -\ q)^{2}\ =\ 0$ . (3.3.13)
Since $(x\ -\ q)^{0}\ =\ 1$, we can collect the above expression in
powers of $(x\ -\ q)$, obtaining:
${\Big{[}}\ {\hbar}\ {\dot{S}}_{o}\ -\ m\ {\dot{q}}^{2}\ +\ {\frac{1}{2}}\ m\
{\dot{q}}^{2}\ +\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ q^{2}+\ {\lambda}\ q\
X(t)\ +\ {\frac{{\hbar}^{2}}{4\ m\ {\delta}^{2}}}\ {\Big{]}}\ (x\ -\ q)^{0}\
+$
$+\ {\Big{[}}\ m\ {\ddot{q}}\ -\ m\ {\dot{q}}\ {\big{(}}\
{\frac{{\dot{{\delta}}}}{{\delta}}}\ +\ {\frac{1}{2\ {\tau}}}\ {\big{)}}\ +\
m\ {\dot{q}}\ {\big{(}}\ {\frac{{\dot{{\delta}}}}{{\delta}}}\ +\ {\frac{1}{2\
{\tau}}}\ {\big{)}}+\ m\ {\Omega}^{2}(t)\ q\ +\ {\lambda}\ X(t)\ {\Big{]}}\
(x\ -\ q)\ +\ {\Big{[}}\ {\frac{m}{2}}\ {\big{(}}\
{\frac{{\ddot{{\delta}}}}{{\delta}}}\ -\
{\frac{{\dot{{\delta}}}^{2}}{{\delta}^{2}}}\ {\big{)}}\ +$
$+\ {\frac{m}{2}}\ {\big{(}}\ {\frac{{\dot{{\delta}}}^{2}}{{\delta}^{2}}}\ +\
{\frac{{\dot{{\delta}}}}{{\tau}\ {\delta}}}\ +\ {\frac{1}{4\ {\tau}^{2}}}\
{\big{)}}\ +\ {\frac{m}{2}}\ {\Omega}^{2}(t)\ -\ {\frac{{\hbar}^{2}}{8\ m\
{\delta}^{4}}}\ {\Big{]}}\ (x\ -\ q)^{2}\ =\ 0$ . (3.3.14)
Since the above relation is an identically null polynomial, the coefficients of
all the powers must be equal to zero, that is:
${\dot{S}}_{o}(t)\ =\ {\frac{1}{{\hbar}}}\ {\Big{[}}\ {\frac{1}{2}}\ m\
{\dot{q}}^{2}\ -\ {\frac{1}{2}}\ m\ {\Omega}^{2}(t)\ q^{2}-\ {\lambda}\ q\
X(t)\ -\ {\frac{{\hbar}^{2}}{4\ m\ {\delta}^{2}}}\ {\Big{]}}$ , (3.3.15)
${\ddot{q}}\ +\ {\Omega}^{2}(t)\ q\ +\ {\frac{{\lambda}}{m}}\ X(t)\ =\ 0$ ,
(3.3.16)
${\ddot{{\delta}}}\ +\ {\frac{{\dot{{\delta}}}}{{\tau}}}\ +\ {\Big{[}}\
{\Omega}^{2}(t)\ +\ {\frac{1}{4\ {\tau}^{2}}}\ {\Big{]}}\ {\delta}\ =\
{\frac{{\hbar}^{2}}{4\ m^{2}\ {\delta}^{3}(t)}}$ . (3.3.17)
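Equations (3.3.16) and (3.3.17) form a closed system for the packet center $q(t)$ and width ${\delta}(t)$; once they are solved, (3.3.1) gives the quantum trajectory. The sketch below integrates them numerically for an arbitrary illustrative choice of constant ${\Omega}$, drive $X(t)$, relaxation time ${\tau}$ and initial data of the form (3.3.18a-d) given below; none of these numerical values comes from the text.
```python
import numpy as np
from scipy.integrate import solve_ivp

hbar, m, tau, lam, Omega = 1.0, 1.0, 5.0, 0.3, 1.0    # illustrative constants only
X = lambda t: np.cos(2.0 * t)                          # illustrative external drive X(t)

def rhs(t, y):
    q, qdot, d, ddot = y
    # eq. (3.3.16):  q'' + Omega^2 q + (lambda/m) X(t) = 0
    qddot = -Omega**2 * q - (lam / m) * X(t)
    # eq. (3.3.17):  d'' + d'/tau + [Omega^2 + 1/(4 tau^2)] d = hbar^2 / (4 m^2 d^3)
    dddot = hbar**2 / (4 * m**2 * d**3) - ddot / tau - (Omega**2 + 1 / (4 * tau**2)) * d
    return [qdot, qddot, ddot, dddot]

x0, v0, a0, b0 = 1.0, 0.0, 0.7, 0.0                    # initial data as in (3.3.18a-d)
sol = solve_ivp(rhs, (0.0, 10.0), [x0, v0, a0, b0], dense_output=True, rtol=1e-8)

t = 10.0
q, _, d, _ = sol.sol(t)
xqu0 = x0 + 0.2                                        # an initial point displaced from q(0)
xqu = q + np.exp(t / (2 * tau)) * (d / a0) * (xqu0 - x0)   # quantum trajectory, eq. (3.3.1)
print(q, d, xqu)
```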
Assuming that the following initial conditions are obeyed:
$q(0)\ =\ x_{o}\ ,\ \ \ {\dot{q}}(0)\ =\ v_{o}\ ,\ \ \ {\delta}(0)\ =\ a_{o}\
,\ \ \ {\dot{{\delta}}}(0)\ =\ b_{o}$ , (3.3.18a-d)
and that [see eq.(3.3.7)]:
$S_{o}(0)\ =\ {\frac{m\ v_{o}\ x_{o}}{{\hbar}}}$ , (3.3.19)
the integration of expression (3.3.15) gives:
$S_{o}(t)\ =\ {\frac{1}{{\hbar}}}\ {\int}_{o}^{t}\ dt^{\prime}\ {\Big{[}}\
{\frac{1}{2}}\ m\ {\dot{q}}^{2}(t^{\prime})\ -\ {\frac{1}{2}}\ m\
{\Omega}^{2}(t^{\prime})\ q^{2}(t^{\prime})-$
$-\ {\lambda}\ q(t^{\prime})\ X(t^{\prime})\ -\ {\frac{{\hbar}^{2}}{4\ m\
{\delta}^{2}(t^{\prime})}}\ {\Big{]}}\ +\ {\frac{m\ v_{o}\ x_{o}}{{\hbar}}}$ .
(3.3.20)
Taking into account the expressions (3.3.5a,b) and (3.3.20) in equation
(3.3.6) results in:
$S(x,\ t)\ =\ {\frac{1}{{\hbar}}}\ {\int}_{o}^{t}\ dt^{\prime}\ {\Big{[}}\
{\frac{1}{2}}\ m\ {\dot{q}}^{2}(t^{\prime})\ -\ {\frac{1}{2}}\ m\
{\Omega}^{2}(t^{\prime})\ q^{2}(t^{\prime})\ -\ {\lambda}\ q(t^{\prime})\
X(t^{\prime})\ -\ {\frac{{\hbar}^{2}}{4\ m\ {\delta}^{2}(t^{\prime})}}\
{\Big{]}}\ +$
$+\ {\frac{m\ v_{o}\ x_{o}}{{\hbar}}}\ +\ {\frac{m\ {\dot{q}}(t)}{{\hbar}}}\
[x\ -\ q(t)]\ +\ {\frac{m}{2\ {\hbar}}}\ {\Big{[}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta(t)}}}\ +\ {\frac{1}{2\ {\tau}}}\
{\Big{]}}\ [x\ -\ q(t)]^{2}$ . (3.3.21)
The result obtained above finally permits us to obtain the wave packet for
the $SBBMC$ equation. Indeed, considering the relations (2.2; 2.7b), (3.3.9)
and (3.3.21), we get [18]:
${\Psi}(x,\ t)\ =\ [2\ {\pi}\ {\delta}^{2}(t)]^{-\ 1/4}\ exp\ {\Bigg{[}}\
{\Big{(}}\ {\frac{i\ m}{2\ {\hbar}}}\ {\Big{[}}\
{\frac{{\dot{{\delta}}}(t)}{{\delta}(t)}}\ +\ {\frac{1}{2\ {\tau}}}\
{\Big{]}}\ -\ {\frac{1}{4\ {\delta}^{2}(t)}}\ {\Big{)}}\ [x\ -\ q(t)]^{2}\
{\Bigg{]}}\ {\times}$
${\times}\ exp\ {\Big{[}}\ {\frac{i\ m\ {\dot{q}}(t)}{{\hbar}}}\ [x\ -\ q(t)]\
+\ {\frac{i\ m\ v_{o}\ x_{o}}{{\hbar}}}\ {\Big{]}}\ {\times}$
${\times}\ exp\ {\Big{[}}\ {\frac{i}{{\hbar}}}\ {\int}_{o}^{t}\ dt^{\prime}\
{\Big{(}}\ {\frac{1}{2}}\ m\ {\dot{q}}^{2}(t^{\prime})\ -\ {\frac{1}{2}}\ m\
{\Omega}^{2}(t^{\prime})\ q^{2}(t^{\prime})-\ {\lambda}\ q(t^{\prime})\
X(t^{\prime})\ -\ {\frac{{\hbar}^{2}}{4\ m\ {\delta}^{2}(t^{\prime})}}\
{\Big{)}}\ {\Big{]}}$ . (3.3.22)
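A quick consistency check of (3.3.22): since the purely imaginary terms in the exponent do not contribute to the modulus, we have (identifying ${\bar{x}}(t)$ with $q(t)$):
$|{\Psi}(x,\ t)|^{2}\ =\ [2\ {\pi}\ {\delta}^{2}(t)]^{-\ 1/2}\ e^{-\ {\frac{[x\ -\ q(t)]^{2}}{2\ {\delta}^{2}(t)}}}\ =\ {\rho}(x,\ t)$ ,
which reproduces exactly the probability density (3.3.9), as it should.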
NOTES AND REFERENCES
1\. NASSAR, A. B. 2004. Chaotic Behavior of a Wave Packet under Continuous
Quantum Mechanics (mimeo).
2\. For a formal and philosophical study of the $BBQM$ see, for instance:
2.1. HOLLAND, P. R. 1993. The Quantum Theory of Motion: An Account of the de
Broglie-Bohm Causal Interpretation of Quantum Mechanics, Cambridge University
Press.
2.2. JAMMER, M. 1974. The Philosophy of Quantum Mechanics, John Wiley.
2.3. FREIRE JUNIOR, O. 1999. David Bohm e a Controvérsia dos Quanta, Coleção
CLE, Volume 27, Centro de Lógica, Epistemologia e História da Ciência,
UNICAMP.
2.4. AULETTA, G. 2001. Foundations and Interpretation of Quantum Mechanics,
World Scientific.
2.5. BASSALO, J. M. F., ALENCAR, P. T. S., CATTANI, M. S. D. e NASSAR, A. B.
2003. Tópicos da Mecânica Quântica de de Broglie-Bohm, EDUFPA.
3\. MADELUNG, E. 1926. Zeitschrift für Physik 40, p. 322.
4\. BOHM, D. 1952. Physical Review 85, p. 166.
5\. See books on fluid mechanics, for instance:
5.1. STREETER, V. L. and DEBLER, W. R. 1966. Fluid Mechanics, McGraw-Hill Book
Company, Incorporation.
5.2. COIMBRA, A. L. 1967. Mecânica dos Meios Contínuos, Ao Livro Técnico S. A.
5.3. LANDAU, L. et LIFSHITZ, E. 1969. Mécanique des Fluides. Éditions Mir.
5.4. BASSALO, J. M. F. 1973. Introdução à Mecânica dos Meios Contínuos,
EDUFPA.
5.5. CATTANI, M. S. D. 1990/2005. Elementos de Mecânica dos Fluidos, Edgard
Blücher.
6\. BASSALO, J. M. F., ALENCAR, P. T. S., SILVA, D. G. da, NASSAR, A. B. and
CATTANI, M. 2009. arXiv:0902.2988v1 [math-ph], 17 February.
7\. EINSTEIN, A. 1909. Physikalische Zeitschrift 10, p. 185.
8\. EINSTEIN, A. 1916. Verhandlungen der Deutschen Physikalischen Gesellschaft
18, p. 318; —– 1916. Mitteilungen der Physikalischen Gesellschaft zu Zürich
16, p. 47.
9\. This dual character of electromagnetic radiation had just been proposed
by Stark, in 1909, in a paper published in the Physikalische Zeitschrift 10,
p. 902, in which he explained bremsstrahlung.
10\. DE BROGLIE, L. 1923. Comptes Rendus de l’Academie des Sciences de Paris
177, pgs. 507; 548; 630; —– 1924. Comptes Rendus de l’Academie des Sciences de
Paris 179, p. 39; —– 1925. Annales de Physique 3, p. 22.
11\. SCHRÖDINGER, E. 1926. Annalen der Physik (Leipzig) 79, pgs. 361; 489; 734;
747.
12\. BORN, M. 1926. Zeitschrift für Physik 37; 38, pgs. 863; 803.
13\. See, for instance, the following texts, in which the references of the
papers mentioned in the Introduction can also be found:
13.1. POWELL, J. L. and CRASEMANN, B. 1961. Quantum Mechanics. Addison Wesley
Publishing Company, Incorporation.
13.2. HARRIS, L. and LOEB, A. L. 1963. Introduction to Wave Mechanics, McGraw-
Hill Book Company, Inc. and Kogakusha Company, Ltd.
13.3. DAVYDOV, A. S. 1965. Quantum Mechanics. Pergamon Press.
13.4. DICKE, R. H. and WITTKE, J. P. 1966. Introduction to Quantum Mechanics.
Addison Wesley Publishing Company, Incorporation.
13.5. NEWING, R. A. and CUNNINGHAM, J. 1967. Quantum Mechanics, Oliver and
Boyd Ltd.
13.6. SCHIFF, L. I. 1970. Quantum Mechanics. McGraw-Hill Book Company,
Incorporation.
13.7. MERZBACHER, E. 1976. Quantum Mechanics. John Wiley and Sons,
Incorporation.
13.8. MOURA, O. 1984. Mecânica Quântica. EDUFPA.
13.9. SHANKAR, R. 1994. Principles of Quantum Mechanics, Plenum Press.
14\. See the texts cited in Note (13).
15\. BUTKOV, E. 1973. Mathematical Physics, Addison-Wesley Publishing Company.
16\. FEYNMAN, R. P. and HIBBS, A. R. 1965. Quantum Mechanics and Path
Integrals, McGraw-Hill Book Company.
17\. SILVA, D. G. da 2006. Cálculo dos Invariantes de Ermakov-Lewis e do
Pacote de Onda Quântico da Equação de Schrödinger para Medidas Contínuas.
Trabalho de Conclusão de Curso, DFUFPA.
18\. NASSAR, A. B. 1990. Physics Letters A 146, 89; —– 1999\. Wave Function
versus Propagator. (DFUFPA, mimeo); SOUZA, J. F. de 1999. Aproximação de de
Broglie-Bohm para Osciladores Harmônicos Dependentes do Tempo. Tese de
Mestrado, DFUFPA.
|
arxiv-papers
| 2009-05-26T19:49:58 |
2024-09-04T02:49:02.935364
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "J.M.F. Bassalo, P.T.S. Alencar, D.G. da Silva, A. Nassar and M.\n Cattani",
"submitter": "Mauro Cattani",
"url": "https://arxiv.org/abs/0905.4280"
}
|
0905.4293
|
# Complex Lie Algebroids and ACH manifolds
Paolo Antonini
Paolo.Antonini@mathematik.uni-regensburg.de
paolo.anton@gmail.com
###### Abstract
We propose the definition of a manifold with a complex Lie structure at
infinity. The important class of ACH manifolds falls into this class.
###### Contents
1 Manifolds with a complex Lie structure at infinity
2 ACH manifolds
2.1 The square root of a manifold with boundary
2.2 The natural complex Lie algebroid associated to an ACH manifold
## 1 Manifolds with a complex Lie structure at infinity
Manifolds with a Lie structure at infinity are well known in the literature [2, 1,
3]. Let us recall the definition. Let $X$ be a smooth non-compact manifold together
with a compactification $X\hookrightarrow\overline{X}$, where $\overline{X}$ is
a compact manifold with corners (with embedded hyperfaces, i.e. the definition
requires that every boundary hypersurface has a smooth defining function). A
Lie structure at infinity on $X$ is the datum of a Lie subalgebra
$\mathcal{V}$ of the Lie algebra of vector fields on $\overline{X}$ subject
to two restrictions:
1. every vector field in $\mathcal{V}$ must be tangent to each boundary hyperface
of $\overline{X}$;
2. $\mathcal{V}$ must be a finitely generated $C^{\infty}(\overline{X})$–module,
meaning that there exists a fixed number $k$ such that around each point
$x\in\overline{X}$ we have, for every $V\in\mathcal{V}$,
$\varphi(V-\sum_{j=1}^{k}\varphi_{j}V_{j})=0$
where $\varphi$ is a function with $\varphi=1$ in the neighborhood, the vector
fields $V_{1},...,V_{k}$ belong to $\mathcal{V}$ and the coefficients
$\varphi_{j}$ are smooth functions with uniquely determined germs at $x$.
By the Serre–Swan equivalence there must be a Lie algebroid over
$\overline{X}$, i.e. a smooth vector bundle $A\longrightarrow\overline{X}$ with
a Lie structure on the space of sections $\Gamma(A)$ and a vector bundle map
$\rho:A\longrightarrow T\overline{X}$ such that the extended map on sections is
a morphism of Lie algebras and satisfies
1. $\rho(\Gamma(A))=\mathcal{V}$;
2. $[X,fY]=f[X,Y]+(\rho(X)f)Y$ for all $X,Y\in\Gamma(A)$.
In particular, we can define manifolds with a Lie structure at infinity as
manifolds $X$ with a Lie algebroid over a compactification $\overline{X}$ with
the image of $\rho$ contained in the space of boundary vector fields (these
are called boundary Lie algebroids). Notice that the vector bundle $A$ can be
"physically reconstructed"; in fact its fiber $A_{x}$ is naturally the quotient
$\mathcal{V}/\mathcal{V}_{x}$ where
$\mathcal{V}_{x}:=\Big{\\{}V\in\mathcal{V}:V=\sum_{\textrm{finite}}\varphi_{j}V_{j},\,V_{j}\in\mathcal{V},\,\varphi_{j}\in
C^{\infty}(\overline{X}),\,\varphi_{j}(x)=0\Big{\\}}.$
In particular, since over the interior $X$ there are no restrictions,
$\rho:A_{|X}\longrightarrow TX$ is an isomorphism. In most of the
applications this map degenerates over the boundary. One example for all is
the Melrose $b$–geometry [8], where one takes as $\overline{X}$ a manifold with
boundary and $\mathcal{V}$ is the space of all vector fields that are tangent
to the boundary. Here $A={}^{b}T\overline{X}$, the $b$–tangent bundle. In fact
all of these ideas are a formalization of a long program of Melrose.
In this section we aim to take into account complex Lie algebroids, i.e.
complex vector bundles with a structure of a complex Lie algebra on the space
of sections and with the anchor mapping ($\mathbb{C}$–linear, of course) taking
values in the complexified tangent space
$T_{\mathbb{C}}\overline{X}=T\overline{X}\otimes\mathbb{C}.$
Definition 1.1 — A manifold with a complex Lie structure at infinity is a
triple $(X,\overline{X},A)$ where $X\hookrightarrow\overline{X}$ is a
compactification by a manifold with corners and
$A\longrightarrow\overline{X}$ is a complex Lie algebroid whose
$\mathbb{C}$–linear anchor mapping $\rho:A\longrightarrow
T_{\mathbb{C}}\overline{X}$ takes values in the space of complex vector fields
tangent to each boundary hypersurface.
Note that over the interior the algebroid $A$ reduces to the complexified
tangent bundle, so a hermitian metric along the fibers of $A$ restricts to a
hermitian metric on $X$. We shall call the corresponding object a hermitian
manifold with a complex Lie structure at infinity, or a hermitian Lie manifold.
## 2 ACH manifolds
The acronym ACH stands for asymptotically complex hyperbolic manifold. These
form an important class of non–compact Riemannian manifolds, closely related
to some solutions of the Einstein equation [6, 4] and to CR geometry [5].
We recall the definition. Let $\overline{X}$ be a compact
manifold of even dimension $m=2n$ with boundary $Y$. We will denote by $X$
the interior of $\overline{X}$, and choose a defining function $u$ of $Y$, that is a
function on $\overline{X}$, positive on $X$ and vanishing to first order on
$Y=\partial\overline{X}$. The notion of an ACH metric on $X$ is related to the
data of a strictly pseudoconvex CR structure on $Y$, that is an almost complex
structure $J$ on a contact distribution of $Y$, such that
$\gamma(\cdot,\cdot)=d\eta(\cdot,J\cdot)$ is a positive Hermitian metric on
the contact distribution (here we have chosen a contact form $\eta$). Identify
a collar neighborhood of $Y$ in $\overline{X}$ with $[0,T)\times Y$, with coordinate $u$
on the first factor. A Riemannian metric $g$ is defined to be an ACH metric on
$X$ if there exists a CR structure $J$ on $Y$, such that near $Y$
$g\sim\dfrac{du^{2}+\eta^{2}}{u^{2}}+\dfrac{\gamma}{u}.$ (1)
The asymptotic relation $\sim$ should be understood in the sense that the difference
between $g$ and the model metric
$g_{0}=\dfrac{du^{2}+\eta^{2}}{u^{2}}+\dfrac{\gamma}{u}$ is a symmetric
$2$–tensor $\kappa$ with $|\kappa|=O({u}^{\delta/2})$, $0<\delta\leq 1$. One
also requires that each $g_{0}$–covariant derivative of $\kappa$ satisfies
$|\nabla^{m}\kappa|=O({u}^{\delta/2})$. The complex structure on the Levi
distribution $H$ on the boundary is called the conformal infinity of $g$.
Hereafter we shall take the normalization
$\delta=1.$
This choice is motivated by applications to ACH Einstein manifolds, where
well-known normalization results show its naturality [6].
### 2.1 The square root of a manifold with boundary
In order to show that ACH manifolds are complex Lie manifolds we need a
construction of Melrose, Epstein and Mendoza [7]. So let $\overline{X}$ be a
manifold with boundary, with boundary defining function $u$. Let us extend the
ring of smooth functions $C^{\infty}(\overline{X})$ by adjoining the function
$\sqrt{u}$, and denote the new ring by $C^{\infty}(\overline{X}_{1/2})$. In local
coordinates a function belongs to this new structure if it can be expressed as a
$C^{\infty}$ function of $u^{1/2},y_{1},...,y_{n}$, i.e. it is $C^{\infty}$ in
the interior and has an expansion at $\partial{\overline{X}}$ of the form
$f(u,x)\sim\sum_{j=0}^{\infty}u^{j/2}a_{j}(x)$
with coefficients $a_{j}(x)$ smooth in the usual sense. The difference
$f-\sum_{j=0}^{N}u^{j/2}a_{j}(x)$ becomes increasingly smooth with $N$. In
this way $f$ is determined by the asymptotic series up to a function all of
whose derivatives vanish at the boundary. Since the ring is independent
of the choice of the defining function and invariant under diffeomorphisms
of $\overline{X}$, the manifold $\overline{X}$ equipped with
$C^{\infty}(\overline{X}_{1/2})$ is a manifold with boundary globally
diffeomorphic to $\overline{X}$.
Definition 2.2 — The square root of $\overline{X}$ is the manifold
$\overline{X}$ equipped with the ring of functions
$C^{\infty}(\overline{X}_{1/2})$. We denote it $\overline{X}_{1/2}$.
Notice that the natural mapping
$\iota_{1/2}:\overline{X}\longrightarrow\overline{X}_{1/2}$, corresponding to
the inclusion $C^{\infty}(\overline{X})\hookrightarrow
C^{\infty}(\overline{X}_{1/2})$, is not a $C^{\infty}$ isomorphism, since it
cannot be smoothly inverted. Note also the important fact that the interiors
and boundaries of $\overline{X}$ and $\overline{X}_{1/2}$ are canonically
diffeomorphic. What changes is the way the boundary is attached.
### 2.2 The natural complex Lie algebroid associated to an ACH manifold
Let $X$ be an orientable $2n$–dimensional ACH manifold with compactification
$\overline{X}$, define $Y:=\partial\overline{X}$ and remember for further use
that it is canonically diffeomorphic to the boundary of $\overline{X}_{1/2}$. So
$Y$ is a CR $(2n-1)$–manifold with contact form $\eta$ (we keep all the
notations above). Let $H=\operatorname{Ker}\eta$ be the Levi distribution with
chosen complex structure $J:H\longrightarrow H$. Extend $J$ to a complex
linear endomorphism $J:T_{\mathbb{C}}Y\longrightarrow T_{\mathbb{C}}Y$ with
$J^{2}=-1$. Define the complex subbundle $T_{1,0}$ of $T_{\mathbb{C}}Y$ as the
bundle of $i$–eigenvectors. Notice that directly from the definition of
the CR structure it is closed under the complex bracket of vector fields; for
this reason the complex vector space
$\mathcal{V}_{1,0}:=\\{V\in\Gamma(\overline{X}_{1/2},T\overline{X}_{1/2}):V_{|Y}\in\Gamma(T_{1,0})\\}$
is a complex Lie algebra. It is also a finitely generated projective module.
To see this, around a point $x\in Y$ let $U_{1},...,U_{r}$, $r=2(n-1)$, span
$H$ and let $T\in\Gamma(Y,TY)$ be the Reeb vector field, uniquely determined
by the conditions $\eta(T)=1$ and $d\eta(\cdot,T)=0$. Then it is easy to
see that the following is a local basis of $\mathcal{V}_{1,0}$ over
$C^{\infty}(\overline{X}_{1/2},\mathbb{C})$:
$\sqrt{u}\partial_{u},\,\,U_{1}-iJU_{1},\,\,...,\,\,U_{r}-iJU_{r},\,\,\sqrt{u}T$
(2)
where $u$ is a boundary defining function. Now let
$\widetilde{\mathcal{V}}_{\textrm{ACH}}:=\sqrt{u}\mathcal{V}_{1,0}$
be the submodule obtained by multiplying every vector field by the
smooth function $\sqrt{u}$. A local basis corresponding to (2) is
$u\partial_{u},\,\,\sqrt{u}[U_{1}-iJU_{1}],\,\,...,\,\,\sqrt{u}[U_{r}-iJU_{r}],\,\,uT.$
(3)
Let $A\longrightarrow\overline{X}_{1/2}$ be the corresponding Lie algebroid. The
following result is immediate.
Theorem 2.2 — Every ACH metric on $X$ extends to a smooth hermitian metric on
$A$. In particular an ACH manifold is a manifold with a Complex Lie structure
at infinity.
Proof — Just write the matrix of the difference $\kappa$ in a frame of the
form (3). This gives the right asymptotics. $\Box$
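To make the proof concrete, here is the computation for the model metric $g_{0}$ alone, a minimal sketch under the normalizations used above ($\eta(T)=1$, $du(T)=\eta(\partial_{u})=0$ in the collar, $W_{j}:=U_{j}-iJU_{j}$ so that $\eta(W_{j})=du(W_{j})=0$, and $\gamma$ extended by zero off $H$). On the frame (3) one finds
$g_{0}(u\partial_{u},u\partial_{u})=\dfrac{u^{2}}{u^{2}}=1,\qquad g_{0}(uT,uT)=\dfrac{\eta(T)^{2}u^{2}}{u^{2}}=1,\qquad g_{0}(\sqrt{u}\,W_{j},\sqrt{u}\,\overline{W_{k}})=\dfrac{u\,\gamma(W_{j},\overline{W_{k}})}{u}=\gamma(W_{j},\overline{W_{k}}),$
while the mixed components vanish. Every entry is therefore bounded as $u\to 0$, and the decay $|\kappa|=O(u^{1/2})$ gives the same boundedness for $g=g_{0}+\kappa$, which is the asymptotics invoked in the proof.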
## References
* [1] B. Ammann; A. D. Ionescu and V. Nistor Sobolev spaces on Lie manifolds and regularity for polyhedral domains. Doc. Math. 11 (2006), 161–206 (electronic)
* [2] B. Ammann, R. Lauter and V. Nistor Pseudodifferential operators on manifolds with a Lie structure at infinity. Ann. of Math. 165 (2007), no. 3, 717–747.
* [3] B. Ammann; R. Lauter and V. Nistor On the geometry of Riemannian manifolds with a Lie structure at infinity. Int. J. Math. Math. Sci. no 1–4 (2004), 161–193.
* [4] O. Biquard, Métriques d'Einstein asymptotiquement symétriques, Astérisque 265, (2000)
* [5] O. Biquard and M. Herzlich, A Burns-Epstein invariant for ACHE 4-manifolds. Duke Math. J. 126 (2005), no. 1, 53–100.
* [6] O. Biquard and Y. Rollin, Wormholes in ACH Einstein manifolds. Trans. Amer. Math. Soc. 361 (2009), no. 4, 2021–2046
* [7] C. L. Epstein, R. B. Melrose, G. A. Mendoza Resolvent of the Laplacian on strictly pseudoconvex domains. Acta Math. 167 (1991), 1–106
* [8] R. B. Melrose. The Atiyah-Patodi-Singer index theorem, volume 4 of Research Notes in Mathematics. A K Peters Ltd., Wellesley, MA, 1993.
|
arxiv-papers
| 2009-05-26T21:20:32 |
2024-09-04T02:49:02.943785
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Paolo Antonini",
"submitter": "Paolo Antonini",
"url": "https://arxiv.org/abs/0905.4293"
}
|
0905.4300
|
# NGC 7538 IRS 1 - an ionized jet powered by accretion
Göran Sandell SOFIA-USRA, NASA Ames Research Center, MS N211-3, Moffett
Field, CA 94035, U.S.A. Goran.H.Sandell@nasa.gov W. M. Goss National Radio
Astronomy Observatory, P.O. Box O, Socorro, NM 87801, U.S.A. Melvyn Wright
Radio Astronomy Laboratory, University of California, Berkeley 601 Campbell
Hall, Berkeley, CA 94720, U.S.A. Stuartt Corder (Jansky Fellow, NRAO) Joint
ALMA Observatory, Av Apoquindo 3650 Piso 18, Las Condes, Santiago, Chile
###### Abstract
Analysis of high spatial resolution VLA images shows that the free-free
emission from NGC 7538 IRS 1 is dominated by a collimated ionized wind. We
have re-analyzed high angular resolution VLA archive data from 6 cm to 7 mm,
and measured separately the flux density from the compact bipolar core and the
extended (1${}^{\prime\prime}\mskip-7.6mu.\,$5 - 3′′) lobes. We find that the
flux density of the core is $\propto\nu^{\alpha}$, where $\nu$ is the
frequency and $\alpha$ is $\sim$ 0.7. The frequency dependence of the total
flux density is slightly steeper with $\alpha$ = 0.8. A massive optically
thick hypercompact core with a steep density gradient can explain this
frequency dependence, but it cannot explain the extremely broad recombination
line velocities observed in this source. Neither can it explain why the core
is bipolar rather than spherical, nor the observed decrease of 4% in the flux
density in less than 10 years. An ionized wind modulated by accretion is
expected to vary, because the accretion flow from the surrounding cloud will
vary over time. BIMA and CARMA continuum observations at 3 mm show that the
free-free emission still dominates at 3 mm. HCO+ J = $1\to 0$ observations
combined with FCRAO single dish data show a clear inverse P Cygni profile
towards IRS 1. These observations confirm that IRS 1 is heavily accreting with
an accretion rate $\sim 2\times 10^{-4}$ M⊙/yr.
###### Subject headings:
accretion, accretion disks – H II regions – stars: early-type – stars:
formation
## 1\. Introduction
NGC 7538 IRS 1 was first detected in the radio at 5 GHz by Martin (1973), who
found three compact radio sources at the SE edge of the large ($\sim$ 4′) H II
region NGC 7538, which is at a distance of 2.65 kpc (Moscadelli et al., 2008).
The brightest of the three, source B, later became known as IRS 1 (Wynn-
Williams, Becklin, & Neugebauer, 1974). The far-infrared luminosity of the
three sources is $\sim 1.9\times 10^{5}$ L⊙, completely dominated by IRS 1 (Hackwell,
Grasdalen & Gehrz, 1982). IRS 1 was first resolved with the VLA at 14.9 GHz by
Campbell (1984), who showed that it has a compact
($\sim$0${}^{\prime\prime}\mskip-7.6mu.\,$2) bipolar N-S core with faint
extended fan-shaped lobes, suggesting an ionized outflow. This is an extremely
well-studied source with numerous masers, a prominent molecular outflow and
extremely broad hydrogen recombination lines indicating substantial mass
motion of the ionized gas (Gaume et al., 1995; Sewilo et al., 2004; Keto et
al., 2008). Although it has been modeled as an ionized jet (Reynolds, 1986) or
a photo-ionized accretion disk (Lugo, Lizano & Garay, 2004), the source is
usually referred to as an Ultra Compact or Hyper Compact H II region with a
turnover at $\sim$ 100 GHz. Since we have carried out extensive observations of
IRS 1 with BIMA and CARMA, the desire to determine the contribution from dust
emission at mm-wavelengths led us to investigate where the free-free emission
from IRS 1 becomes optically thin. Such a determination is not possible from
published data, because IRS 1 has been observed with a variety of VLA
configurations, some of which resolve out the extended emission and others may
only quote total flux. This prevented us from separately estimating the flux
for the compact bipolar core and the extended lobes. Furthermore the flux
density varies with time (Franco-Hernández & Rodríguez, 2004), which makes it
difficult to derive an accurate spectral index.
## 2\. VLA archive data
Figure 1.— C-band (4.8 GHz) image of the IRS1 - 3 field. The beam size is
plotted in the bottom right corner of the image. The first contour is at
3-$\sigma$ level, the next contour 3 times higher and from there on we plot 7
logarithmic contours. An outline of the jet from IRS 1, plotted in red, shows
the optically thick inner part of the jet and where the jet expands and
becomes optically thin. Fig 2 zooms in on the inner optically thick part of
the jet and illustrates how the jet becomes smaller when we go up in frequency
due to the outer portion of the jet becoming optically thin.
We have retrieved and re-analyzed a few key observations of NGC 7538 IRS 1
from the VLA data archive. These are all long integration observations with
very high angular resolution, good uv-coverage, and high image fidelity. Table
1 gives the observing dates, synthesized beam width, sensitivity, the
integrated flux of the bipolar core, and the observed total flux. We included
a BnA array observation obtained by us at X-band (8.5 GHz) (Sandell, Goss &
Wright, 2005), because it is more contemporary with the high frequency data
sets and provides an additional data point at a frequency where there are no
other high angular resolution VLA data available. Fig 1 shows the IRS 1 - 3
field at 4.8 GHz, while Fig 2 shows the morphology of the compact bipolar core
as a function of frequency.
## 3\. Analysis of VLA data
We determined flux densities of the compact bipolar core with an accuracy of a
few percent. The results are given in Table 1. Table 1 also gives total flux
densities, obtained by integrating over the whole area where we detect
emission from IRS 1. At 4.9 GHz the total linear extent of IRS 1 is $\sim$
7′′, while it is $\sim$ 1${}^{\prime\prime}\mskip-7.6mu.\,$4 at 43.4 GHz. Even
though we can reliably determine the flux density of the bipolar core at 43.4
GHz, some of the faint extended emission is filtered out; therefore the total
flux is underestimated. At 4.9 GHz the bipolar core has a total length of
$\sim$ 1${}^{\prime\prime}\mskip-7.6mu.\,$1 FWHM (Full Width Half Maximum),
with a separation between the two peaks of
0${}^{\prime\prime}\mskip-7.6mu.\,$68, at 14.9 GHz the size is
0${}^{\prime\prime}\mskip-7.6mu.\,$44 with a peak separation of
0${}^{\prime\prime}\mskip-7.6mu.\,$24 (Table 1). At 43.4 GHz we can still fit
the core with a double Gaussian with a peak separation of $\sim$
0${}^{\prime\prime}\mskip-7.6mu.\,$12, while the linear size (length) has a
FWHM of 0${}^{\prime\prime}\mskip-7.6mu.\,$20\. Since one of our data sets
(X-band, 8.4 GHz) has insufficient resolution to resolve the two lobes of the
bipolar core, we plot the linear size (length) as a function of wavelength in
Fig. 3, although fitting the lobe separation gives virtually identical results,
with a larger uncertainty. We find the size to vary with frequency as
$\nu^{-0.8\pm 0.03}$, while the flux density increases with frequency as
$\nu^{0.7\pm 0.05}$. The frequency dependence of the total flux is more
difficult to estimate, since some VLA configurations are not very sensitive to
faint extended emission and will therefore underestimate the total flux. If we
choose data sets which give a reliable flux for IRS 2 (the emission from IRS 2
is already optically thin at 4.8 GHz, C-band, with an integrated flux of 1440
mJy), which is a spherical H II region with a size of $\sim$ 8′′, we find that
the total flux from IRS 1 has a slightly steeper spectral index, 0.8 $\pm$
0.03 determined from data sets from 4.8 GHz to 49 GHz. The fit to the total
flux densities for IRS 1 is shown as a dotted line in Fig. 3. Such a shallow
frequency dependence would require a very steep density gradient in the
ionized gas, if the emission originates from an H II region ionized by a
central O-star, although it is not impossible, see e.g. Keto et al. (2008). A
high-density H II region with a steep density gradient should be spherical,
not bipolar with a dark central lane as we observe in IRS 1. Neither can a
steep density gradient model explain the extremely broad recombination lines
observed in IRS 1 (Gaume et al., 1995; Keto et al., 2008). All these
characteristics can be explained, if the emission originates in an ionized
stellar wind or jet (Reynolds, 1986). The inner part of the jet is optically
thick and further out, where the jet expands, the emission becomes optically
thin. At higher frequencies the outer part of the jet becomes optically thin
and therefore appears shorter and more collimated, exactly what we see in IRS
1, see Fig 2. For a uniformly expanding spherical wind, the size of the source
varies as a function of frequency as $\nu^{-0.7}$, while the flux density
goes as $\nu^{0.6}$ (Panagia & Felli, 1975). Since we resolve the
emission from IRS 1, we know that it is not spherical, but instead it appears
to originate in an initially collimated bipolar jet (opening angle $\lesssim$
30∘) approximately aligned with the molecular outflow from IRS 1, see Section
4. For a collimated ionized jet, Reynolds (1986) showed that it can have a
spectral index anywhere between 2 and $-$0.1, depending on gradients in jet
width, velocity, degree of ionization, and temperature.
Figure 2.— VLA images at 4.86, 14.94, 22.37, & 43.37 GHz showing the bipolar core surrounding IRS 1. Beam sizes are given in Table 1. The C-band image (Fig. 1) shows faint extended lobes extending several arcseconds to the north and south of IRS 1. These images show that the core shrinks in size with increasing frequency.
Table 1 — Observational parameters and results of VLA archive data of IRS 1
Frequency [GHz] | Synthesized beam (′′ $\times$ ′′, pa) | rms [mJy beam$^{-1}$] | Size, bipolar core (′′ $\times$ ′′, pa) | Flux density, bipolar core [mJy] | Flux density, total [mJy] | Observing dates
---|---|---|---|---|---|---
4.86 | 0.43 $\times$ 0.37, $+$5.4∘ | 0.300 | 1.06 $\times$ 0.44, $-$9.6∘ | 94.3 $\pm$ 4.8 | 122.5 | 1984:1123
8.46 | 1.21 $\times$ 0.47, $-$0.5∘ | 0.050 | 0.76 $\times$ 0.30, $-$10.9∘ | 135.7 $\pm$ 13.6 (a) | 148.1 (a) | 2003:1014
14.94 | 0.14 $\times$ 0.12, $+$47.6∘ | 0.017 | 0.44 $\times$ 0.20, $-$0.3∘ | 185.9 $\pm$ 5.6 | 248.7 | 2006:0226, 0512 (b)
14.91 | 0.14 $\times$ 0.10, $+$13.7∘ | 0.081 | 0.50 $\times$ 0.20, $-$1.2∘ | 165.2 $\pm$ 13.3 | 291.0 | 1994:1123
22.37 | 0.08 $\times$ 0.08, $-$43.0∘ | 0.017 | 0.34 $\times$ 0.17, $+$3.1∘ | 286.5 $\pm$ 11.5 | 370.0 | 1994:1123
43.37 | 0.14 $\times$ 0.12, $+$14.5∘ | 0.170 | 0.20 $\times$ 0.14, $-$1.0∘ | 429.7 $\pm$ 12.9 | 473.7 | 2006:0706, 0914 (b)
(a) Insufficient spatial resolution; blends in with IRS 2. Unreliable total flux.
(b) Average of two data sets at different epochs during the same year.
Figure 3.— Top panel: A least squares power-law fit to the length of the
optically thick jet as a function of frequency $\nu$ gives $\theta$(arcsec) =
4.13 $\nu^{-0.8}$ where $\nu$ is in GHz. Bottom panel: Least squares fit to
VLA data (open squares) of the compact bipolar core surrounding IRS 1 plotted
as a solid line, and to total fluxes (open triangles) plotted with a dotted
line. In the figure we also show mm data points obtained with BIMA and CARMA
as well as interferometer data from the literature, but they are not used in
any of the fits. This plot clearly demonstrates that the free-free emission
still dominates the emission in high angular resolution interferometer data
even at 1.3 mm.
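As a simple cross-check of the quoted power laws, the indices can be recovered from the Table 1 entries with an unweighted log-log least-squares fit. The sketch below (values transcribed from Table 1; all six epochs included, no error weighting) is illustrative only and need not match the fit shown in Fig. 3 exactly.
```python
import numpy as np

# Bipolar-core values transcribed from Table 1: frequency [GHz], flux [mJy], length [arcsec]
nu   = np.array([4.86, 8.46, 14.94, 14.91, 22.37, 43.37])
flux = np.array([94.3, 135.7, 185.9, 165.2, 286.5, 429.7])
size = np.array([1.06, 0.76, 0.44, 0.50, 0.34, 0.20])

alpha, _ = np.polyfit(np.log10(nu), np.log10(flux), 1)   # S ~ nu^alpha
beta,  _ = np.polyfit(np.log10(nu), np.log10(size), 1)   # theta ~ nu^beta

print(f"spectral index alpha ~ {alpha:.2f}")   # close to the quoted 0.7
print(f"size exponent  beta  ~ {beta:.2f}")    # close to the quoted -0.8
```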
## 4\. Outflow and accretion
Although the molecular outflow from IRS 1 was found in early studies to be
quite compact and oriented SE to NW (Scoville et al., 1986), our analysis of
large mosaics obtained with CARMA in 12CO and 13CO J = $1\to 0$, and HCO+ J =
$1\to 0$, shows that the molecular outflow is very extended ($>$ 4′) and
approximately aligned with the free-free jet. For HCO+ we filled in missing
zero-spacing information with fully sampled single dish maps obtained with
FCRAO. A thorough discussion of the molecular outflow is beyond the scope of
this paper, and will be presented in a forthcoming paper (Corder et al. 2009, in prep). Here
we therefore summarize some of the main results. The CARMA mosaics show that
the outflow is very large and has a position angle (pa) of $\sim$ $-$20∘,
which is similar to the orientation of the collimated inner part of the
ionized jet driven by IRS 1 on smaller angular scales. The outflow extends
several arcminutes to the north and starts as a wide angle limb brightened
flow. To the south the outflow is difficult to trace, because of several other
outflows in the giant molecular core in which IRS 1 is embedded, but appears
more collimated with a linear extent of $\sim$ 2′. Analysis of Spitzer IRAC
archive data show that the outflow may be even larger to the south. The 5.8
and 8.0 $\mu$m IRAC images show a jet-like feature projecting back towards IRS
1, extending to $\sim$ 3.5′ (2.6 pc) from IRS 1, which is much further out
than what was covered by the CARMA mosaics (Corder, 2008). However, the CARMA
observations also confirm that IRS 1 is still heavily accreting. In HCO+ J =
$1\to 0$ we observe a clear inverse P Cygni profile towards the strong
continuum emission from IRS 1 (Fig 4). The absorption is all red-shifted and
has two velocity components, which could mean that the accretion activity is
varying with time (episodic). From these data Corder (2008) estimates $\sim$
6.8 M⊙ of infalling gas. With the assumption that the accretion time is
similar to the free-fall time, $\sim$ 30,000 yr, Corder finds an accretion
rate of $\sim 2\times 10^{-4}$ M⊙/yr, which will block most of the UV photons from the
central O-star, allowing them to escape only at the polar regions.
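The quoted accretion rate follows from simple arithmetic on the infall mass and the assumed accretion time (both values as given in the text):
```python
M_infall = 6.8        # infalling gas mass [solar masses], from Corder (2008)
t_accretion = 3.0e4   # assumed accretion (free-fall) time [yr]
print(M_infall / t_accretion)   # ~2.3e-4 solar masses per year, i.e. ~2e-4 as quoted
```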
Figure 4.— Inverse P Cygni profile in HCO+ J = $1\to 0$ observed with CARMA
supplemented with FCRAO single dish data (Corder, 2008). Note that the
spectrum is extracted from a continuum subtracted spectral line cube, hence
the flux density appears to go below zero. The angular resolution is
4${}^{\prime\prime}\mskip-7.6mu.\,$5\. The vertical dotted line shows the
systemic velocity of IRS 1.
## 5\. Millimeter data - where is the accretion disk?
In Fig. 3 we also plotted flux densities at 3 and 1 mm from the literature
(Lugo, Lizano & Garay, 2004), supplemented with our own results at 3 mm from
BIMA (Wright et al. 2009, in prep) and CARMA (Corder, 2008). All the observed
flux densities at 3 mm or even at 1.3 mm can be explained by free-free
emission, with at most a marginal excess from dust emission. High angular
resolution CARMA continuum observations at 91.4 and 108.1 GHz confirm what is
predicted from the fit to total flux densities in Fig 3, i.e. that the free-
free emission dominates at 3 mm. These CARMA observations resolve IRS 1 with a
size of 0${}^{\prime\prime}\mskip-7.6mu.\,$5
$\times\leq$0${}^{\prime\prime}\mskip-7.6mu.\,$3 pa $-$1∘ and
0${}^{\prime\prime}\mskip-7.6mu.\,$4
$\times\leq$0${}^{\prime\prime}\mskip-7.6mu.\,$1 pa 20∘ at 94 and 109 GHz,
respectively, i.e. the 3 mm emission is aligned with the free-free emission,
and not with an accretion disk, which is expected to be perpendicular to the
jet. Since we detect a collimated free-free jet and since there is a strong
accretion flow towards IRS 1 (Section 4), IRS 1 must be surrounded by an
accretion disk. The morphology of the free-free emission suggests that the
disk should be almost edge-on, and centered halfway between the northern and
the southern peak of the optically thick inner part of the free-free jet.
However, if such an edge-on accretion disk is thin, i.e. the height of the
disk is small relative to its diameter, it may not provide much surface area,
and is therefore difficult to detect at 3 mm. At 3 mm IRS 1 and IRS 2 are very
strong in the free-free, and even the free-free emission from IRS 3 cannot be
ignored, which makes it very hard to detect dust emission from the accretion
disk. Since the spectral index for dust is $\sim$ 3 - 3.5, the dust emission
may start to dominate at frequencies above 300 GHz, observable only by SMA,
but even at 1.3 mm we may have a much better chance to detect the accretion
disk. There is some evidence for such a disk in 450 and 350 $\mu$m continuum
images obtained at JCMT (Sandell & Sievers, 2004). Until higher spatial
resolution interferometer images are available, it is not clear whether this
emission originates in a disk, from the surrounding in-falling envelope, or
from a superposition of several nearby sources.
## 6\. Discussion and Conclusions
Free-free emission from ionized jets is very common in low mass protostars
(Class 0 and Class I sources), which all appear to drive molecular outflows.
This situation is especially true for young stars, which often have rather
well collimated outflows (Rodríguez, Anglada & Curiel, 1999). Jets may also be
common in young early B-type high mass stars (Gibb & Hoare, 2007), although
they have not been studied as well as low-mass protostars, because they are
short lived, more distant and harder to identify. Although there have not been
any detections of jets in young O stars, i.e. stars with a luminosity of $>$
$10^{5}$ L⊙, such objects are likely to exist. Broad radio recombination line (RRL)
objects, many of which are classified as Ultra Compact or Hyper Compact H II
regions (Jaffe & Martín-Pintado, 1999; Kurtz & Franco, 2002; Sewilo et al.,
2004; Keto et al., 2008) offer a logical starting point, because they show
evidence for substantial mass motions, which would be readily explained if the
recombination line emission originates in a jet. Several of them also show
evidence for accretion. In the sample analyzed by Jaffe & Martín-Pintado
(1999), which partly overlaps with the sources discussed by Gibb & Hoare
(2007), they find four sources (including IRS 1) with bipolar morphology. All
appear to be dominated by wind ionized emission, although not necessarily jet
driven winds. K 3-50 A, however, at a distance of 8.7 kpc (De Pree et al.,
1994), has a luminosity of a mid-O star, drives an ionized bipolar outflow (De
Pree et al., 1994), and has a spectral index of 0.5 in the frequency range
5 - 15 GHz (Jaffe & Martín-Pintado, 1999). K 3-50 A is therefore another
example of an O-star, where the free-free emission appears to be coming from
an ionized jet. Detailed studies of broad RRL objects will undoubtedly
discover more examples of jet-ionized H II regions. Such objects are likely to
drive molecular outflows, excite masers and show evidence for strong
accretion.
To summarize: We have shown that the emission from IRS 1 is completely
dominated by a collimated ionized jet. The jet scenario also readily explains
why the free-free emission is variable, because the accretion rate from the
surrounding clumpy molecular cloud will vary with time. It also explains why
IRS 1 is an extreme broad RRL object. Since most Hyper Compact H II regions
are broad RRL objects, and show rising free-free emission with similar
spectral index to IRS 1, other sources now classified as Hyper Compact H II
regions may also be similar to IRS 1.
The National Radio Astronomy Observatory (NRAO) is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc. The BIMA array was operated by the Universities of
California (Berkeley), Illinois, and Maryland with support from the National
Science Foundation. Support for CARMA construction was derived from the states
of California, Illinois, and Maryland, the Gordon and Betty Moore Foundation,
the Kenneth T. and Eileen L. Norris Foundation, the Associates of the
California Institute of Technology, and the National Science Foundation.
Ongoing CARMA development and operations are supported by the National Science
Foundation under a cooperative agreement, and by the CARMA partner
universities.
## References
* Campbell (1984) Campbell, B. 1984, ApJ, 282, L27
* Corder (2008) Corder, S. 2008, PhD thesis, Caltech
* De Pree et al. (1994) De Pree, C. G., Goss, W. M., Palmer, P., & Rubin, R. H. 1994, ApJ, 428, 670
* Franco-Hernández & Rodríguez (2004) Franco-Hernández, R., & Rodríguez, L. F. 2004, ApJ, 604, L105
* Gaume et al. (1995) Gaume, R. A., Goss, W. M., Dickel, H. R., Wilson, T. L., and Johnston, K. J. 1995, ApJ, 438, 776
* Gibb & Hoare (2007) Gibb, A. G., & Hoare, M. G. 2007, MNRAS, 380, 246
* Hackwell, Grasdalen & Gehrz (1982) Hackwell, J. A., Grasdalen, G. L., and Gehrz, R. D. 1982, ApJ, 252, 250
* Jaffe & Martín-Pintado (1999) Jaffe, D. T., & Martín-Pintado, J. 1999, ApJ, 520, 162
* Keto et al. (2008) Keto, E., Zhang, Q., & Kurtz, S. 2008, ApJ, 672, 423
* Kurtz & Franco (2002) Kurtz, S., & Franco, J. 2002, RevMexAA, 12, 16
* Lugo, Lizano & Garay (2004) Lugo, J., Lizano, S., & Garay, G. 2004, ApJ, 614, 807
* Martin (1973) Martin, A. H. M. 1973, MNRAS, 163, 141
* Moscadelli et al. (2008) Moscadelli, L., Reid, M. J., Menten, K. M., Brunthaler, A., Zheng, X. W., & Xu, Y. 2008, arXiv:0811.0679v1 [astro-ph]
* Panagia & Felli (1975) Panagia, N. & Felli, M. 1975, A&A, 39, 1
* Reynolds (1986) Reynolds, S. P. 1986, ApJ, 304, 713
* Rodríguez, Anglada & Curiel (1999) Rodríguez, L. F., Anglada, L., & Curiel, S. 1999, ApJS, 125, 427
* Sandell, Goss & Wright (2005) Sandell, G., Goss, W. M., & Wright, M. 2005, ApJ, 621, 839
* Sandell & Sievers (2004) Sandell, G., & Sievers, A. 2004, ApJ, 600, 269
* Sewilo et al. (2004) Sewilo, M., Churchwell, E., Kurtz, S., Goss, W. M., & Hofner, P. 2004, ApJ, 605, 285
* Scoville et al. (1986) Scoville, N. Z., Sargent, A. I., Claussen, M. J., Masson, C. R., Lo, K. Y., & Phillips, T. G. 1986, ApJ, 303, 416
* Wynn-Williams, Becklin, & Neugebauer (1974) Wynn-Williams, C. G., Becklin, E. E., and Neugebauer, G. 1974, ApJ, 187, 473
|
arxiv-papers
| 2009-05-26T22:42:40 |
2024-09-04T02:49:02.948434
|
{
"license": "Public Domain",
"authors": "G. Sandell, W. M. Goss, M. Wright, S. Corder",
"submitter": "Goran Sandell",
"url": "https://arxiv.org/abs/0905.4300"
}
|
0905.4913
|
# A simple Havel–Hakimi type algorithm to realize graphical degree sequences of directed graphs
(PLE was partly supported by OTKA (Hungarian NSF), under contract Nos. AT048826 and K 68262. IM was supported by a Bolyai postdoctoral stipend and OTKA (Hungarian NSF) grant F61730. ZT was supported in part by the NSF BCS-0826958, HDTRA 201473-35045 and by Hungarian Bioinformatics MTKD-CT-2006-042794 Marie Curie Host Fellowships for Transfer of Knowledge.)
Péter L. Erdős and István Miklós
A. Rényi Institute of Mathematics, Hungarian Academy of
Sciences, Budapest, PO Box 127, H-1364, Hungary
{elp} {miklosi}@renyi.hu Zoltán Toroczkai
Interdisciplinary Center for Network Science and Applications
and Department of Physics University of Notre Dame
Notre Dame, IN, 46556, USA
toro@nd.edu
###### Abstract
One of the simplest ways to decide whether a given finite sequence of positive
integers can arise as the degree sequence of a simple graph is the greedy
algorithm of Havel and Hakimi. This note extends their approach to directed
graphs. It also studies the case of some simple forbidden edge sets. Finally, it
proves a result which is useful for designing an MCMC algorithm to find random
realizations of prescribed directed degree sequences.
AMS subject classification[2000]. 05C07 05C20 90B10 90C35
Keywords. network modeling; directed graphs; degree sequences; greedy
algorithm
## 1 Introduction
The systematic study of graphs (or, more precisely, of linear graphs, as they
were called at the time) began sometime in the late 1940s, through seminal
works by P. Erdős, P. Turán, W.T. Tutte, and others. One problem which
received considerable attention was the existence of certain subgraphs of a
given graph. For example, such a subgraph could be a perfect matching in a (not
necessarily bipartite) graph, a Hamiltonian cycle through all vertices,
etc. Generally these substructures are called factors. The first
important results of this kind are due to W.T. Tutte, who gave necessary and
sufficient conditions for the existence of 1-factors and $f$-factors.
In the case of complete graphs, the existence problem of such factors is
considerably easier. In particular, the existence problem of undirected graphs
(including simple graphs) with given degree sequences even admits simple
greedy algorithms for its solution.
Subsequently, the theory was extended to factor problems of directed graphs
as well, but the analogous greedy algorithm was, to the best knowledge
of the authors, missing until now.
In this paper we fill this gap: after giving a short and comprehensive (but
definitely not exhaustive) history of the $f$-factor problem (Section 2), we
describe a greedy algorithm to decide the existence of a simple directed graph
with the prescribed degree sequence (Section 3). In Section 4 we prove a
consequence of this existence theorem, which is a necessary ingredient
for the construction of edge-swap based Markov Chain Monte Carlo (MCMC)
methods to sample directed graphs with prescribed degree sequence. Finally, in
Section 5 we discuss a slightly harder existence problem for directed graphs
with prescribed degree sequences, where some vertex pairs are excluded from the
construction. This result can help to efficiently generate all possible
directed graphs with a given degree sequence.
## 2 A brief history (of $f$-factors)
For a given function $f:V(G)\rightarrow\mathbb{N}\cup\\{0\\}$, an $f$-factor
of a given simple graph $G(V,E)$ is a subgraph $H$ such that $d_{H}(v)=f(v)$
for all $v\in V.$ One of the very first key results of modern graph theory is
due to W.T. Tutte: in 1947 he gave a complete characterization of simple
graphs with an $f$-factor in case of $f\equiv 1$ (Tutte’s 1-factor theorem,
[14]). Tutte later solved the problem of the existence of $f$-factors for
general $f$’s (Tutte’s $f$-factor theorem, [15]). In 1954 he also found a
beautiful graph transformation to handle $f$-factor problems via perfect
matchings in bipartite graphs [16]. This also gave a polynomial-time
algorithm for finding $f$-factors.
In cases where $G$ is a complete graph, the $f$-factor problem becomes easier:
then we are simply interested in the existence of a graph with a given degree
sequence (the exact definitions will come in Section 3). In 1955 V. Havel
developed a simple greedy algorithm to solve the degree sequence problem for
simple undirected graphs ([8]). In 1960 P. Erdős and T. Gallai studied the
$f$-factor problem for the case of a complete graph $G$, and proved a simpler
Tutte-type result for the degree sequence problem (see [3]). As they already
pointed out, the result can be derived directly from the original $f$-factor
theorem, taking into consideration the special properties of the complete
graph $G$, but their proof was independent of Tutte’s proof and they referred
to Havel’s theorem.
In 1962 S.L. Hakimi studied the degree sequence problem in undirected graphs
with multiple edges ([6]). He developed an Erdős-Gallai type result for this
much simpler case, and for the case of simple graphs he rediscovered the
greedy algorithm of Havel. Since then, this algorithm has been referred to as
the Havel–Hakimi algorithm.
For directed graphs the analogous question of the recognizability of a
bi-graphical sequence arises naturally. In this case we are given two $n$-element
vectors $\mathbf{d^{+},d^{-}}$ of non-negative integers. The problem is the
existence of a directed graph on $n$ vertices, such that the first vector
represents the out-degrees and the second one the in-degrees of the vertices
in this graph. In 1957 D. Gale and H. J. Ryser independently solved this
problem for simple directed graphs (there are no parallel edges, but loops are
allowed), see [5, 13]. In 1958 C. Berge generalized these results for
$p$-graphs where at most $p$ parallel edges are allowed ([1]). (Berge calls
the out-degree and in-degree together the demi-degrees.) Finally, in 1973, the
revised version of his book Graphs ([2]) gave a solution for the $p$-graph
problem, loops excluded. To illustrate the afterlife of these results: D.
West, in his renowned textbook ([17]), discusses the case of simple
directed graphs with loops allowed.
The analog of the $f$-factor problem for directed graphs has a sparser history.
Øystein Ore started the systematic study of that question in 1956 (see [11,
12]). His method is rather algebraic, and the finite and infinite cases are,
more or less, discussed together. The first part developed the tools and
proved the direct analog of Tutte’s $f$-factor theorem for finite
directed graphs (with loops), while the second part dealt with the infinite
case.
In 1962 L.R. Ford and D.R. Fulkerson studied, generalized and solved the
“original” $f$-factor problem for a directed graph $\vec{G}$ ([4]). Here lower
and upper bounds were given for both demi-degrees of the desired subgraph (no
parallel edges, no loops) with the original question naturally corresponding
to equal lower and upper bounds. The solutions (as in Berge’s case)
are based on network flow theory.
Finally, in a later paper Hakimi also proved results for bi-graphical
sequences, however without presenting a directed version of his original
greedy algorithm (see [7]).
## 3 A greedy algorithm to realize bi-graphical sequences
A sequence $\mathbf{d}=\\{d_{1},d_{2},\ldots,d_{n}\\}$ of nonnegative integers
is called a graphical sequence if a simple graph $G(V,E)$ exists on $n$ nodes,
$V=\\{v_{1},v_{2},\ldots,v_{n}\\}$, whose degree sequence is $\mathbf{d}$. In
this case we say that $G$ realizes the sequence $\mathbf{d}$. For simplicity
of the notation we will consider only sequences of strictly positive integers
($d_{n}>0$) to avoid isolated points. The following well-known result was
proved independently by V. Havel and S.L. Hakimi.
###### Theorem 1 (Havel [8], Hakimi [6])
There exists a simple graph with degree sequence $d_{1}>0,$
$d_{2}\geq\cdots\geq d_{n}>0$ ($n\geq 3$) if and only if there exists one with
degree sequence $d_{2}-1,\ldots,d_{d_{1}+1}-1,d_{d_{1}+2},\ldots,d_{n}$. (Note
that there is no prescribed ordering relation between $d_{1}$ and the other
degrees.)
This can be proved using a recursive procedure which transforms any
realization of the degree sequence into the form described in Theorem 1
by a sequence of two-edge swaps.
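To make the recursion concrete, here is a minimal Python sketch of the classical (undirected) Havel–Hakimi test. It is our own illustration, not code from the paper, and it uses the standard variant that always reduces from a vertex of currently largest degree.

```python
def is_graphical(degrees):
    """Havel-Hakimi test: is `degrees` the degree sequence of a simple graph?"""
    seq = sorted(degrees, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)          # remove a vertex of largest remaining degree
        if d > len(seq):
            return False        # not enough other vertices to connect to
        for i in range(d):      # connect it to the d next-largest degrees
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)  # restore non-increasing order
    return True
```

For example, `is_graphical([3, 3, 2, 2])` returns True (realized by $K_4$ minus an edge), while `is_graphical([3, 3, 3, 1])` returns False.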
A bi-degree-sequence (or BDS for short)
$\mathbf{(d^{+},d^{-})}=(\\{d^{+}_{1},d^{+}_{2},\ldots,d^{+}_{n}\\},\\{d^{-}_{1},d^{-}_{2},$
$\ldots,d^{-}_{n}\\})$ of nonnegative integers is called a bi-graphical
sequence if there exists a simple directed graph (digraph)
$\vec{G}(V,\vec{E})$ on $n$ nodes, $V=\\{v_{1},v_{2},\ldots,v_{n}\\}$, such
that the out-degree and in-degree sequences together form
$\mathbf{(d^{+},d^{-})}$. (That is the out-degree of vertex $v_{j}$ is
$d^{+}_{j}$ and its in-degree is $d^{-}_{j}.$) In this case we say that
$\vec{G}$ realizes our BDS. For simplicity, we will consider only BDS’s in
which each degree is $\geq 0$ and
$d^{+}_{j}+d^{-}_{j}>0$, so as to avoid isolated points.
Our goal is to prove a Havel–Hakimi type algorithm to realize bi-graphical
sequences. To that end we introduce the notion of normal order: we say that
the BDS is in normal order if the entries satisfy the following properties:
for each $i=1,\ldots,n-2$ we either have $d^{-}_{i}>d^{-}_{i+1}$ or
$d^{-}_{i}=d^{-}_{i+1}$ and $d^{+}_{i}\geq d^{+}_{i+1}$. Clearly, all BDS-s
can be arranged into normal order. Note that we made no ordering assumption
about node $v_{n}$ (the pair $d^{+}_{n},d^{-}_{n}$).
###### Theorem 2
Assume that the BDS $\mathbf{(d^{+},d^{-})}$ $($with $d^{+}_{j}+d^{-}_{j}>0$,
$j\in[1,n])$ is in normal order and $d_{n}^{+}>0$ (recall: the out-degree of
the last vertex is positive). Then $\mathbf{(d^{+},d^{-})}$ is bi-graphical if
and only if the BDS
$\displaystyle\Delta^{+}_{k}=\left\\{\begin{array}[]{lll}d^{+}_{k}&\quad\mbox{if
}&k\neq n\\\ 0&\quad\mbox{if}&k=n\;,\end{array}\right.$ (3)
$\displaystyle\Delta^{-}_{k}=\left\\{\begin{array}[]{lll}d^{-}_{k}-1&\quad\mbox{if
}&k\leq d^{+}_{n}\\\ d^{-}_{k}&\quad\mbox{if
}&k>d^{+}_{n}\;\;\;,\end{array}\right.$ (6)
with zero elements removed (those $j$ for which
$\Delta^{+}_{j}=\Delta^{-}_{j}=0$) is bi-graphical.
Before starting the proof, we emphasize the similarity between this result and
the original HH-algorithm. As in the undirected case, using Theorem 2, we can
find in a greedy way a proper realization of graphical bi-degree sequences.
Indeed: choose any vertex $v_{n}$ with non-zero out-degree from the sequence,
arrange the rest in normal order, then make $d_{n}^{+}$ connections from
$v_{n}$ to the nodes with largest in-degrees, thus constructing the out-
neighborhood of $v_{n}$ in the (final) realization. Next, remove the vertices
(if any) from the remaining sequence that have lost both their in- and out-
degrees in the process, pick a node with non-zero out-degree, then arrange the
rest in normal order. Applying Theorem 2 again, we find the final out-
neighborhood of our second chosen vertex. Step by step we find this way the
out-neighborhood of all vertices, while their in-neighborhoods get defined
eventually (being exhausted by incoming edges). Note, that every vertex in
this process is picked at most once, namely, when its out-neighborhood is
determined by the Theorem, and never again after that.
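For illustration, the following Python sketch is our own rendering of this greedy procedure; the function and variable names are ours, not the paper's.

```python
def directed_havel_hakimi(d_out, d_in):
    """Greedy realization of a bi-degree sequence, in the spirit of Theorem 2.

    d_out[i] and d_in[i] are the prescribed out- and in-degrees of vertex i.
    Returns a list of directed edges (i, j) of a simple, loopless digraph
    realizing the BDS, or None if the sequence is not bi-graphical."""
    n = len(d_out)
    out, inn = list(d_out), list(d_in)
    edges = []
    while any(out):
        v = next(i for i in range(n) if out[i] > 0)   # any vertex with positive out-degree
        k, out[v] = out[v], 0
        # the other vertices in normal order: decreasing in-degree,
        # ties broken by decreasing out-degree
        cand = sorted((i for i in range(n) if i != v),
                      key=lambda i: (inn[i], out[i]), reverse=True)
        targets = cand[:k]
        if len(targets) < k or any(inn[t] == 0 for t in targets):
            return None          # the reduction of Theorem 2 is impossible
        for t in targets:        # connect v to the k vertices with largest in-degrees
            inn[t] -= 1
            edges.append((v, t))
    return edges if not any(inn) else None   # every in-degree must also be used up
```

For instance, `directed_havel_hakimi([2, 1, 0, 0], [0, 0, 2, 1])` returns the edges (0, 2), (0, 3) and (1, 2), a simple realization of that BDS.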
Our forthcoming proof is not the simplest possible; however, the more general
setup we use shortens the proofs of later results.
First, we define the partial order $\preceq$ among $k$-element vectors of
increasing positive integers: we say $\mathbf{a}\preceq\mathbf{b}$ iff for
each $j=1,\ldots,k$ we have $a_{j}\leq b_{j}.$
A possible out-neighborhood (or PON for short) of vertex $v_{n}$ is a
$d^{+}_{n}$-element subset of $V\setminus\\{v_{n}\\}$ which is a candidate for
an out-neighborhood of $v_{n}$ in some graphical representation. (In essence,
a PON can be any $d^{+}_{n}$-element subset of $V\setminus\\{v_{n}\\}$ but
later on we may consider some restrictions on it.) Let $A$ be a PON of
$v_{n}.$ Then denote by $\bm{i}(A)$ the vector of the increasingly ordered
subscripts of the elements of $A$. (For example, if
$A=\\{v_{2},v_{4},v_{9}\\}$, then $\bm{i}(A)=(2,4,9)$.) Let $A$ and $B$ be two
PONs of $v_{n}.$ We write:
$B\preceq A\quad\Leftrightarrow\quad\bm{i}(B)\preceq\bm{i}(A)\;.$ (7)
In this case we also say that $B$ is to the left of $A$. (For example,
$B=\\{v_{1},v_{2},v_{6},v_{7}\\}$ is to the left of
$A=\\{v_{2},v_{4},v_{6},v_{9}\\}$.)
###### Definition 3
Consider a bi-graphical BDS sequence $(\mathbf{d^{+},d^{-}})$ and let $A$ be a
PON of $v_{n}$. The $A$-reduced BDS
$\left(\mathbf{d^{+}}\big{|}_{A},\mathbf{d^{-}}\big{|}_{A}\right)$ is defined
as:
$\displaystyle d^{+}_{k}\big{|}_{A}$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{lll}d^{+}_{k}&\quad\mbox{if }&k\neq n\\\
0&\quad\mbox{if}&k=n\;,\end{array}\right.$ (10) $\displaystyle
d^{-}_{k}\big{|}_{A}$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{lll}d^{-}_{k}-1&\quad\mbox{if
}&k\in\bm{i}(A)\\\ d^{-}_{k}&\quad\mbox{if
}&k\not\in\bm{i}(A).\end{array}\right.$ (13)
In other words, if $A$ is a PON in a BDS, then the reduced degree sequence
$\left(\mathbf{d^{+}}\big{|}_{A},\mathbf{d^{-}}\big{|}_{A}\right)$ is obtained
by removing the out-edges of node $v_{n}$ (according to the possible out-
neighborhood $A$). As usual, if for one subscript $k$ in the $A$-reduced BDS
we have $d^{+}_{k}\big{|}_{A}=d^{-}_{k}\big{|}_{A}=0$ then the vertex with
this index is to be removed from the bi-degree sequence.
###### Lemma 4
Let $(\mathbf{d^{+},d^{-}})$ be a BDS, and let $A$ be a possible out-
neighborhood of $v_{n}.$ Furthermore let $B$ be another PON with
$B=A\setminus\\{v_{k}\\}\cup\\{v_{i}\\}$ where $d^{-}_{i}\geq d^{-}_{k}$ and
in case of $d^{-}_{i}=d^{-}_{k}$ we have $d^{+}_{i}\geq d^{+}_{k}.$ Then if
$\mathbf{(D^{+},D^{-})}:=\left(\mathbf{d^{+}}\big{|}_{A},\mathbf{d^{-}}\big{|}_{A}\right)$
is bi-graphical, so is
$\left(\mathbf{d^{+}}\big{|}_{B},\mathbf{d^{-}}\big{|}_{B}\right)$.
Proof. Since our $A$-reduced BDS $\mathbf{(D^{+},D^{-})}$ is bi-graphical,
there exists a directed graph $\vec{G}$ which realizes the bi-degree sequence
$\mathbf{(D^{+},D^{-})}$. We are going to show that in this case there exists
a directed graph $\vec{G}^{\prime}$ which realizes the BDS
$\left(\mathbf{d^{+}}\big{|}_{B},\mathbf{d^{-}}\big{|}_{B}\right)$. In the
following, $v_{a}v_{b}$ will always mean a directed edge from node $v_{a}$ to
node $v_{b}$. Let us now construct the directed graph $\vec{G}_{1}$ by adding
the directed edge $v_{n}v$ for each $v\in A.$ (Since according to (10), in
$\mathbf{(D^{+},D^{-})}$ the out-degree of $v_{n}$ is equal to zero, no
parallel edges are created.) The bi-degree-sequence of $\vec{G}_{1}$ is
$(\mathbf{d^{+},d^{-}}).$ Our goal is to construct another realization
$\vec{G}_{1}^{\prime}$ of $(\mathbf{d^{+},d^{-}})$ such that the deletion of
the out-edges of $v_{n}$ in the latter produces the BDS
$\left(\mathbf{d^{+}}\big{|}_{B},\mathbf{d^{-}}\big{|}_{B}\right)$.
By definition we have $v_{n}v_{k}\in\vec{E}_{1},$ (the edge set of
$\vec{G}_{1}$) but $v_{n}v_{i}\not\in\vec{E}_{1}$. At first assume that there
exists a vertex $v_{\ell}$ ($\ell\neq i,k,n$), such that
$v_{\ell}v_{i}\in\vec{E}_{1}$ but $v_{\ell}v_{k}\not\in\vec{E}_{1}$. (When
$d^{-}_{i}>d^{-}_{k}$ then this happens automatically, however if
$d^{-}_{i}=d^{-}_{k}$ and $v_{k}v_{i}\in\vec{E}_{1}$ then it is possible that
the in-neighborhoods of $v_{i}$ and $v_{k}$ are the same, except of course for
$v_{k}$, $v_{i}$ themselves and $v_{n}$.) This means that now we can swap the
edges $v_{n}v_{k}$ and $v_{\ell}v_{i}$ into $v_{n}v_{i}$ and $v_{\ell}v_{k}$.
(Formally we create the new graph
$\vec{G}_{1}^{\prime}=(V,\vec{E}_{1}^{\prime})$ such that
$\vec{E}_{1}^{\prime}=\vec{E}_{1}\setminus\\{v_{n}v_{k},v_{\ell}v_{i}\\}\cup\\{v_{n}v_{i},v_{\ell}v_{k}\\}.$)
This yields the desired realization.
Our second case is when $d^{-}_{i}=d^{-}_{k},$ $v_{k}v_{i}\in\vec{E}_{1}$, and
furthermore
$\mbox{for each }\ell\neq i,k,n\quad\mbox{we have}\quad
v_{\ell}v_{i}\in\vec{E}_{1}\Leftrightarrow v_{\ell}v_{k}\in\vec{E}_{1}.$ (14)
It is important to observe that in this case $v_{i}v_{k}\not\in\vec{E}_{1}:$
otherwise some $v_{\ell}$ would not satisfy (14) (in order to keep
$d^{-}_{i}=d^{-}_{k}$).
Now, if there exists a subscript $m$ (different from $k,i,n$) such that
$v_{i}v_{m}\in\vec{E}_{1}$ but $v_{k}v_{m}\not\in\vec{E}_{1},$ then we create
the required new graph $\vec{G}_{1}^{\prime}$ by applying the following triple
swap (or three-edge swap): we exchange the directed edges
$v_{n}v_{k},v_{k}v_{i}$ and $v_{i}v_{m}$ into $v_{n}v_{i},v_{i}v_{k}$ and
$v_{k}v_{m}$.
By our assumption we have $d^{+}_{i}\geq d^{+}_{k}$. If
$d^{+}_{i}>d^{+}_{k}$ holds, then due to the properties $v_{k}v_{i}\in\vec{E}$
and $v_{i}v_{k}\not\in\vec{E},$ there exist at least two subscripts
$m_{1},m_{2}\neq i,k$ such that $v_{i}v_{m_{j}}\in\vec{E}$ but
$v_{k}v_{m_{j}}\not\in\vec{E}$ and at least one of them differs from $n$.
Thus, when $d^{+}_{i}>d^{+}_{k}$, we do find such an $m$ for which the triple
swap above can be performed.
The final case is when $d^{-}_{i}=d^{-}_{k}$ and $d^{+}_{i}=d^{+}_{k}$. If no
such vertex $v_{m}$ exists, then we must have $v_{i}v_{n}\in\vec{E}_{1}$
(to keep $d^{+}_{i}=d^{+}_{k}$), and in this case clearly
$v_{k}v_{n}\notin\vec{E}_{1}$. Therefore, in this (final) case the graphical
realization $\vec{G}_{1}$ has the properties
$v_{n}v_{k},v_{k}v_{i},v_{i}v_{n}\in\vec{E}_{1}$ and
$v_{n}v_{i},v_{i}v_{k},v_{k}v_{n}\not\in\vec{E}_{1}$. Then the triple swap
$\vec{E}_{1}^{\prime}:=\vec{E}_{1}\setminus\left\\{v_{n}v_{k},v_{k}v_{i},v_{i}v_{n}\right\\}\cup\left\\{v_{n}v_{i},v_{i}v_{k},v_{k}v_{n}\right\\}$
(15)
will produce the required new graphical realization $\vec{G}_{1}^{\prime}$.
$\Box$
###### Observation 5
For later reference it is important to recognize that in all cases above, the
transformations from one realization to the next one happened with the use of
two-edge or three-edge swaps.
###### Lemma 6
Let $(\mathbf{d^{+},d^{-}})$ be a BDS and let $A$ and $C$ be two possible
out-neighborhoods of $v_{n}.$ Furthermore assume that $C\preceq A$, that is
$C$ is to the left of $A$. Finally assume that vertices in $A\cup C$ are in
normal order. Then if
$\left(\mathbf{d^{+}}\big{|}_{A},\mathbf{d^{-}}\big{|}_{A}\right)$ is bi-
graphical, so is
$\left(\mathbf{d^{+}}\big{|}_{C},\mathbf{d^{-}}\big{|}_{C}\right)$.
Proof. Since $C$ is to the left of $A$, there is a (unique)
bijection $\phi:C\setminus A\rightarrow A\setminus C$ such that $\forall c\in
C\setminus A$ : $\bm{i}(\\{c\\})<\bm{i}(\\{\phi(c)\\})$ (the subscript of
vertex $c$ is smaller than the subscript of vertex $\phi(c)$). (For example,
if $A=\\{v_{4},v_{5},v_{6},v_{7},v_{8},v_{9}\\}$ and
$C=\\{v_{1},v_{2},v_{3},v_{5},v_{7},v_{8}\\}$, then $C\setminus
A=\\{v_{1},v_{2},v_{3}\\}$, $A\setminus C=\\{v_{4},v_{6},v_{9}\\}$, and $\phi$
is the map $\\{v_{1}\leftrightarrow v_{4},v_{2}\leftrightarrow
v_{6},v_{3}\leftrightarrow v_{9}\\}$).
To prove Lemma 6 we apply Lemma 4 recursively for each $c\in C\setminus A$ (in
arbitrary order) to exchange $\phi(c)\in A$ with $c\in C$, preserving the
graphical character at every step. After the last step we find that the
sequence reduced by $C$ is graphical. $\Box$
Proof of Theorem 2: We can now easily obtain the required graphical
realization of $(\mathbf{d^{+},d^{-}})$ by applying Lemma 6 with the current
$A$ and $C=\\{v_{1},\ldots,v_{d^{+}_{n}}\\}.$ We can do so since
$(\mathbf{d^{+},d^{-}})$ is in normal order, and therefore the assumptions of
Lemma 6 hold. $\Box$
## 4 A simple prerequisite for MCMC algorithms to sample directed graphs with
given BDS
In practice it is often useful to choose uniformly a random element from a set
of objects. A frequently used tool for that task is a well-chosen Markov-Chain
Monte-Carlo method (MCMC for short). To that end, a graph is established on
the objects and random walks are generated on it. The edges represent
operations which can transform one object into another. If the Markov chain can
step from an object $x$ to an object $y$ with non-zero probability, then it must
also be able to jump from $y$ to $x$ with non-zero probability (reversibility). If
the graph is connected, then applying the well-known Metropolis-Hastings
algorithm yields a random walk that converges to the uniform distribution,
starting from an arbitrary (even fixed) object.
To be able to apply this technique we have to define our graph (the Markov
chain) $\mathcal{G}(\mathbf{d^{+}},\mathbf{d^{-}})=(\mathcal{V},\mathcal{E})$.
The vertices are the different possible realizations of the bi-graphical
sequence $(\mathbf{d^{+}},\mathbf{d^{-}}).$ An edge represents an operation
consisting of a two-edge or three-edge swap which transforms one realization
into the other. (For simplicity, sometimes we just say swap for any of
them.) We will show:
###### Theorem 7
Let $\vec{G}_{1},\vec{G}_{2}$ be two realizations of the same bi-graphical
sequence $(\mathbf{d^{+}},\mathbf{d^{-}}).$ Then there exists a sequence of
swaps which transforms $\vec{G}_{1}$ into $\vec{G}_{2}$ through different
realizations of the same bi-graphical sequence.
Remark: In the case of undirected graphs the original, analogous observation
(needing only two-edge swaps) was proved by H.J. Ryser ([13]).
Proof. We prove the following stronger statement:
* (✠)
there exists a sequence of at most $2e$ swaps which transforms $\vec{G}_{1}$
into $\vec{G}_{2}$, where $e$ is the total number of out-edges in
$(\mathbf{d^{+}},\mathbf{d^{-}})$
by induction on $e$. Assume that $(\maltese)$ holds for $e^{\prime}<e$. We can
assume that our bi-graphical sequence is in normal order on the first $n-1$
vertices and $d^{+}_{n}>0.$ By Theorem 2 there is a sequence $T_{1}$ ($T_{2}$)
of $d=d^{+}_{n}$ many swaps which transforms $\vec{G}_{1}$ ($\vec{G}_{2}$)
into a $\vec{G}^{\prime}_{1}$ ($\vec{G}^{\prime}_{2}$) such that
$\Gamma^{+}_{\vec{G}_{1}^{\prime}}(v_{n})=\\{v_{1},\dots,v_{d}\\}$
($\Gamma^{+}_{\vec{G}_{2}^{\prime}}(v_{n})=\\{v_{1},\dots,v_{d}\\}$).
We now consider the directed graphs $\vec{G}_{1}^{\prime\prime}$ and
$\vec{G}_{2}^{\prime\prime}$, derived from $\vec{G}_{1}^{\prime}$ and
$\vec{G}_{2}^{\prime}$, respectively, by deleting all out-edges of
$v_{n}.$ Then both directed graphs realize the bi-graphical sequence
$(\Delta^{+},\Delta^{-})$ which, in turn, satisfies relations (3) and (6).
Therefore the total number of edges is $e-d$ in both directed graphs,
and by the inductive assumption there is a sequence $T$ of at most $2(e-d)$ swaps
which transforms $\vec{G}^{\prime\prime}_{1}$ into
$\vec{G}_{2}^{\prime\prime}$.
Now observe that if a swap transforms $\vec{H}$ into $\vec{H}^{\prime}$, then
the “inverse swap” (choosing the same edges and non-edges and swapping them
back) transforms $\vec{H}^{\prime}$ into $\vec{H}$. So the swap sequence $T_{2}$ has
an inverse $T_{2}^{\prime}$ which transforms $\vec{G}_{2}^{\prime}$ into
$\vec{G}_{2}$. Hence the sequence $T_{1}TT_{2}^{\prime}$ is the required swap
sequence: it transforms $\vec{G}_{1}$ into $\vec{G}_{2}$ and its length is at
most $d+2(e-d)+d=2e$. $\Box$
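To illustrate the elementary move of such a chain, here is a minimal Python sketch of a two-edge swap proposal; the names are ours, and by Theorem 7 the analogous three-edge swaps are also needed to connect all realizations.

```python
import random

def propose_two_edge_swap(edges):
    """Propose one two-edge swap on a simple, loopless digraph.

    `edges` is a set of directed edges (u, v).  Two edges (a, b) and (c, d) are
    chosen at random and replaced by (a, d) and (c, b), which preserves every
    in- and out-degree.  Proposals that would create a loop or a parallel edge
    are rejected, i.e. the state is returned unchanged, so the walk stays on
    simple realizations of the same BDS."""
    (a, b), (c, d) = random.sample(list(edges), 2)
    new1, new2 = (a, d), (c, b)
    if a == d or c == b or new1 in edges or new2 in edges:
        return edges
    return (edges - {(a, b), (c, d)}) | {new1, new2}
```

Iterating such moves, together with the analogous three-edge swaps, walks over the different realizations of a fixed bi-graphical sequence.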
## 5 Is a BDS bi-graphical when one vertex’s out-neighborhood is
constrained?
In network modeling of complex systems (for a rather general reference see
[10]) one usually defines a (di)graph with components of the system being
represented by the nodes, and the interactions (usually directed) amongst the
components being represented as the edges of this digraph. Typical cases
include biological networks, such as the metabolic network, signal
transduction networks, gene transcription networks, etc. The graph is usually
inferred from empirical observations of the system and it is uniquely
determined if one can specify all the connections in the graph. Frequently,
however, the data available from the system is incomplete, and one cannot
uniquely determine this graph. In this case there will be a set
${\mathcal{D}}$ of (di)graphs satisfying the existing data, and one can be
faced with:
1. (i)
finding a typical element of the class ${\mathcal{D}},$
2. (ii)
or generating all elements of the class ${\mathcal{D}}$.
(A more complete analysis of this phenomenon can be found in [9].) In Section
4 we already touched upon problem (i) when ${\mathcal{D}}$ is the class of all
directed graphs of a given BDS. The analogous Problem (ii) for undirected
graphs was recently addressed in [9] which provides an economical way of
constructing all elements from ${\mathcal{D}}$. In this Section we give a
prescription based on the method from [9], to solve (ii) for the case of all
directed graphs with prescribed BDS. This is particularly useful from the
point of view of studying the abundance of motifs in real-world networks: one
first needs to know all the (small) subgraphs, or motifs, before studying
their statistics in the data.
Before we give the details, it is perhaps worth making the following remark:
Clearly, one way to solve problem (i) would be to first solve problem (ii),
then choose uniformly from ${\mathcal{D}}$. However, in (those very small)
cases when reasonable answers can be expected for problem (ii), problem (i) is
rather uninteresting. In general, however, (i) cannot be solved efficiently by
the use of (ii).
We start the discussion of problem (ii) by pointing out that our new,
directed Havel–Hakimi type algorithm is unable to generate all realizations of
a prescribed BDS (see Figure 1).
Figure 1: This graph cannot be obtained by the directed Havel–Hakimi
procedure. The integers indicate node degrees.
The situation is very similar to the undirected case, see [9]. The directed
HH-algorithm must start with a vertex with degree-pair $(2,1)$; therefore the
two vertices with degree-pair $(0,3)$ must be out-neighbors of the same vertex,
which is not the case for the graph in the Figure.
One possible way to overcome this shortcoming is to systematically discover all
possible out-connections from a given vertex $v$ across all realizations of the
prescribed graphical BDS.
We do not know a greedy algorithm to achieve this. The next best thing we can
do is to develop a greedy algorithm to decide whether a given (sub)set of
prescribed out-neighbors of $v$ would prevent finding a realization of the BDS
containing those prescribed out-neighbors. In the following, we describe such
a greedy algorithm. (It is perhaps interesting to note that this latter
problem can be considered as a very special directed $f$-factor problem.)
To start, we consider a $\mathbf{(d^{+},d^{-})}$ bi-degree sequence together
with a forbidden vertex set $F$ whose elements are not allowed to be out-
neighbors of vertex $v_{n}.$ (Or, conversely, we can imagine that we have
already decided that those vertices will become out-neighbors of $v_{n}$
and that the BDS has already been updated accordingly. The forbidden vertex set
governs only the out-neighbors, since in the process the in-neighbors arise
“automatically”.) It is clear that $|F|+1+d^{+}_{n}\leq n$ must hold for the
existence of a graphical realization of this $F$-restricted BDS.
Assume that the vertices are enumerated in such a way that subset $F$ consists
of vertices $v_{n-|F|},\ldots,v_{n-1}$ and vertices
$V^{\prime}=\\{v_{1},\ldots,v_{n-|F|-1}\\}$ are in normal order. (We can also
say that we apply a permutation on the subscripts accordingly.) Then we say
that the BDS is in $F$-normal order.
###### Definition 8
Consider a bi-graphical BDS sequence $(\mathbf{d^{+},d^{-}})$ in $F$-normal
order, and let $A$ be a PON. The $A$-reduced BDS
$\left(\mathbf{d^{+}}\big{|}_{A},\mathbf{d^{-}}\big{|}_{A}\right)$ is defined
as in (10) and (13), while keeping in mind the existence of an $F$ set to the
right of $A$.
In other words, if $A$ is a PON in an $F$-restricted BDS, then the reduced
degree sequence
$\left(\mathbf{d^{+}}\big{|}_{A},\mathbf{d^{-}}\big{|}_{A}\right)$ is still
obtained by removing the out-edges of node $v_{n}$ (according to the possible
out-neighborhood $A$).
Finally, one more piece of notation: let $(\mathbf{d^{+},d^{-}})$ be a BDS, $F$ a
forbidden vertex subset of $V$ and denote by $F[k]$ the set of the first $k$
vertices in the $F$-normal order.
###### Theorem 9
Let $A$ be any PON in the $F$-restricted $(\mathbf{d^{+},d^{-}})$ BDS, which
is in $F$-normal order. If the $A$-reduced BDS
$\left(\mathbf{d^{+}}\big{|}_{A},\mathbf{d^{-}}\big{|}_{A}\right)$ is
graphical, then the $F[d^{+}_{n}]$-reduced BDS
$\left(\mathbf{d^{+}}\big{|}_{F[d^{+}_{n}]},\mathbf{d^{-}}\big{|}_{F[d^{+}_{n}]}\right)$
is graphical as well.
Proof. It is immediate: Lemma 6 applies. $\Box$
This statement indeed gives us a greedy way to check whether there exists a
graphical realization of the $F$-restricted bi-degree sequence
$(\mathbf{d^{+},d^{-}})$: all we have to do is check whether the
$F[d^{+}_{n}]$-reduced BDS
$\left(\mathbf{d^{+}}\big{|}_{F[d^{+}_{n}]},\mathbf{d^{-}}\big{|}_{F[d^{+}_{n}]}\right)$
is graphical.
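As a sketch of how this check might look in code (again with our own names, reusing the `directed_havel_hakimi` function from the earlier sketch):

```python
def f_restricted_is_bigraphical(d_out, d_in, v, forbidden):
    """Check, in the spirit of Theorem 9, whether the BDS (d_out, d_in) has a
    realization in which vertex v sends no out-edge into `forbidden`.

    Connect v to its d_out[v] allowed targets with the largest remaining
    in-degrees (ties broken by out-degree), then test the reduced sequence."""
    n = len(d_out)
    out, inn = list(d_out), list(d_in)
    allowed = sorted((i for i in range(n) if i != v and i not in forbidden),
                     key=lambda i: (inn[i], out[i]), reverse=True)
    targets = allowed[:out[v]]
    if len(targets) < out[v] or any(inn[t] == 0 for t in targets):
        return False
    for t in targets:
        inn[t] -= 1
    out[v] = 0
    return directed_havel_hakimi(out, inn) is not None
```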
Finally, we want to remark that, similarly to the undirected case, Theorem 9
is suitable for speeding up the generation of all possible graphical realizations
of a BDS. The details can be found in [9] which is a joint work of these
authors with Hyunju Kim and László A. Székely.
### Acknowledgements
The authors acknowledge useful discussions with Gábor Tusnády, Éva Czabarka,
László A. Székely and Hyunju Kim. ZT would also like to thank the Alfréd Rényi
Institute of Mathematics, where this work was completed, for the kind
hospitality extended to him. Finally, we want to express our gratitude to
Antal Iványi for his editorial help.
## References
* [1] C. Berge: The Theory of Graphs, Methuen & Co Ltd. London (1962), Chapter 9.
* [2] C. Berge: Graphs and Hypergraphs, North Holland Pub. Company, Amsterdam (1973), Chapter 6.
* [3] P. Erdős - T. Gallai: Gráfok előírt fokú pontokkal (Graphs with prescribed degree of vertices), Mat. Lapok, 11 (1960), 264–274. (in Hungarian)
* [4] L.R. Ford - D.R. Fulkerson: Flows in Networks, RAND Corporation R-375-PR (1962) Chapter 2 Section 11.
* [5] D. Gale: A theorem on flows in networks, Pacific J. Math. 7 (2) (1957), 1073–1082.
* [6] S.L. Hakimi: On the realizability of a set of integers as degrees of the vertices of a simple graph. J. SIAM Appl. Math. 10 (1962), 496–506.
* [7] S.L. Hakimi: On the degrees of the vertices of a directed graph, J. Franklin Institute 279 (4) (1965), 290–308.
* [8] V. Havel: A remark on the existence of finite graphs. (Czech), Časopis Pěst. Mat. 80 (1955), 477–480.
* [9] Hyunju Kim - Z. Toroczkai - P.L. Erdős - I. Miklós - L.A. Székely: Degree-based graph construction, submitted to Journal of Physics A (2009), 1–12.
* [10] M.E.J. Newman - A.L. Barabási - D.J. Watts: The Structure and Dynamics of Networks (Princeton Studies in Complexity, Princeton UP) (2006), 624 pp.
* [11] Ø. Ore: Studies on directed graphs, I, Annals of Math. (Second Series) 63 (3) (1956), 383–406.
* [12] Ø. Ore: Studies on directed graphs, II, Annals of Math. (Second Series) 64 (3) (1956), 142–153.
* [13] H.J. Ryser: Combinatorial properties of matrices of zeros and ones, Canad. J. Math. 9 (1957), 371–377.
* [14] W.T. Tutte: The factorization of linear graphs, J. London Math. Soc. 22 (1947), 107–111.
* [15] W.T. Tutte: The factors of graphs, Canad. J. Math. 4 (1952), 314–328.
* [16] W.T. Tutte: A short proof of the factors theorem for finite graphs, Canad. J. Math. 6 (1954), 347–352.
* [17] D.B. West: Introduction to Graph Theory, Prentice Hall, Upper Saddle River, US, (2001), Section 1.4.
|
arxiv-papers
| 2009-05-29T16:03:33 |
2024-09-04T02:49:02.966037
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "P\\'eter L. Erd\\H{o}s, Istv\\'an Mikl\\'os and Zolt\\'an Toroczkai",
"submitter": "P\\'eter L. Erd\\H{o}s",
"url": "https://arxiv.org/abs/0905.4913"
}
|
0906.0052
|
# A Minimum Description Length Approach to Multitask Feature Selection
Brian Tomasik
(May 2009)
###### Abstract
One of the central problems in statistics and machine learning is _regression_
: Given values of input variables, called _features_ , develop a model for an
output variable, called a _response_ or _task_. In many settings, there are
potentially thousands of possible features, so that _feature selection_ is
required to reduce the number of predictors used in the model. Feature
selection can be interpreted in two broad ways. First, it can be viewed as a
means of _reducing prediction error_ on unseen test data by improving model
generalization. This is largely the focus within the machine-learning
community, where the primary goal is to train a highly accurate system. The
second approach to feature selection, often of more interest to scientists, is
as a form of _hypothesis testing_ : Assuming a “true” model that generates the
data from a small number of features, determine which features actually belong
in the model. Here the metrics of interest are precision and recall, more than
test-set error.
Many regression problems involve not one but several response variables. Often
the responses are suspected to share a common underlying structure, in which
case it may be advantageous to share information across the responses; this is
known as _multitask learning_. As a special case, we can use multiple
responses to better identify shared predictive features—a project we might
call _multitask feature selection_.
This thesis is organized as follows. Section 1 introduces feature selection
for regression, focusing on $\ell_{0}$ regularization methods and their
interpretation within a Minimum Description Length (MDL) framework. Section 2
proposes a novel extension of MDL feature selection to the multitask setting.
The approach, called the “Multiple Inclusion Criterion” (MIC), is designed to
borrow information across regression tasks by more easily selecting features
that are associated with multiple responses. We show in experiments on
synthetic and real biological data sets that MIC can reduce prediction error
in settings where features are at least partially shared across responses.
Section 3 surveys hypothesis testing by regression with a single response,
focusing on the parallel between the standard Bonferroni correction and an MDL
approach. Mirroring the ideas in Section 2, Section 4 proposes a novel MIC
approach to hypothesis testing with multiple responses and shows that on
synthetic data with significant sharing of features across responses, MIC
outperforms standard FDR-controlling methods in terms of finding true
positives for a given level of false positives. Section 5 concludes.
## 1 Feature Selection with a Single Response
The standard _linear regression model_ assumes that a response $y$ is
generated as a linear combination of $m$ predictor variables (“features”)
$x_{1}$, $\ldots$, $x_{m}$ with some random noise:
$\begin{split}y=\beta_{1}x_{1}+\beta_{2}x_{2}+\ldots+\beta_{m}x_{m}+\epsilon,\
\ \ \epsilon\sim\mathcal{N}(0,\sigma^{2}),\end{split}$ (1)
where we assume the first feature $x_{1}$ is an intercept whose value is
always 1. Given $n$ observations of the features and responses, we can write
the $y$ values in an $n\times 1$ vector $Y$ and the $x$ values in an $n\times
m$ matrix $X$. Assuming the observations are independent and identically
distributed, (1) can be rewritten as
$\begin{split}Y=X\beta+\epsilon,\ \ \
\epsilon\sim\mathcal{N}_{n}(0,\sigma^{2}I_{n\times n}),\end{split}$ (2)
where $\beta=\begin{bmatrix}\beta_{1}\\\ \ldots\\\ \beta_{m}\end{bmatrix}$,
$\mathcal{N}_{n}$ denotes the $n$-dimensional Gaussian distribution, and
$I_{n\times n}$ is the $n\times n$ identity matrix.
The maximum-likelihood estimate $\widehat{\beta}$ for $\beta$ under this model
can be shown to be the one minimizing the residual sum of squares:
$\begin{split}\text{RSS}:=(Y-X\beta)^{\prime}(Y-X\beta).\end{split}$ (3)
The solution is given by
$\begin{split}\widehat{\beta}=\left(X^{\prime}X\right)^{-1}X^{\prime}Y\end{split}$
(4)
and is called the _ordinary least squares_ (OLS) estimate for $\beta$.
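For concreteness, a short numpy sketch of the OLS estimate (4) on hypothetical data (the variable names and toy data are ours, not from the thesis):

```python
import numpy as np

def ols(X, Y):
    """Ordinary least squares, eq. (4): solve (X'X) beta = X'Y."""
    return np.linalg.solve(X.T @ X, X.T @ Y)

# Hypothetical example: n = 50 observations, m = 3 features,
# the first column being the all-ones intercept.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + 0.1 * rng.normal(size=50)
print(ols(X, Y))    # close to beta_true
```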
In some cases, we may want to restrict the number of features in our model to
a subset of $q$ of them, including the intercept. In this case, we pretend
that our $X$ matrix contains only the relevant $q$ columns when applying (4);
the remaining $m-q$ entries of the $\widehat{\beta}$ matrix are set to 0. I’ll
denote the resulting estimate by $\widehat{\beta}_{q}$, and the RSS for that
model by
$\begin{split}\text{RSS}_{q}:=(Y-X\widehat{\beta}_{q})^{\prime}(Y-X\widehat{\beta}_{q}).\end{split}$
(5)
### 1.1 Penalized Regression
In many cases, regression problems have large numbers of potential features.
For instance, [FS04] predicted credit-card bankruptcy using a model with more
than $m=67{,}000$ potential features. In bioinformatics applications, it is
common to have thousands or tens of thousands of features for, say, the type
of each of a number of genetic markers or the expression levels of each of a
number of gene transcripts. The number of observations, in contrast, is
typically a few hundred at best.
The OLS estimate (4) breaks down when $m>n$, since the $m\times m$ matrix
$X^{\prime}X$ is not invertible due to having rank at most $n$. Moreover, it’s
implausible that a given response is actually linearly related to such a large
number of features; a model $\widehat{\beta}$ with lots of nonzeros is
probably overfitting the training data.
The statistics and machine-learning communities have developed a number of
approaches for addressing this problem. One of the most common is _regularized
regression_ , which aims to minimize not (3) directly, but a penalized version
of the residual sum of squares:
$\begin{split}(Y-X\beta)^{\prime}(Y-X\beta)+\lambda\left\|\beta\right\|_{p},\end{split}$
(6)
where $\left\|\beta\right\|_{p}$ represents the $\ell_{p}$ norm of $\beta$ and
$\lambda$ is a tunable hyperparameter, to be determined by cross-validation or
more sophisticated _regularization path_ approaches.111See, e.g., [Fri08] for
an excellent introduction.
_Ridge regression_ takes the penalty as proportional to the (square of the)
$\ell_{2}$ norm: $\lambda\left\|\beta\right\|_{2}^{2}$. This corresponds to a
Bayesian maximum a posteriori estimate for $\beta$ under a Gaussian prior
$\beta\sim\mathcal{N}(0_{m\times 1},\frac{\sigma^{2}}{\lambda}I_{m\times m})$,
with $\sigma^{2}$ as in (1) [Bis06, p. 153]. Under this formulation, we have
$\begin{split}\widehat{\beta}=(X^{\prime}X+\lambda I_{m\times
m})^{-1}X^{\prime}Y,\end{split}$
which is computationally valid because $X^{\prime}X+\lambda I_{m\times m}$ is
invertible for $\lambda>0$. However, because the square of a number that is
less than one in magnitude is much smaller than the number itself, the $\ell_{2}$
norm offers little incentive to drive entries of $\widehat{\beta}$ to 0; many of
them just become very small.
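A corresponding sketch of the ridge estimate (our own illustration; note that in practice the intercept column is usually left unpenalized, which this sketch ignores):

```python
import numpy as np

def ridge(X, Y, lam):
    """Ridge estimate (X'X + lam I)^{-1} X'Y, well defined for lam > 0."""
    m = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ Y)
```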
Another option is to penalize by the $\ell_{1}$ norm, which is known as _lasso
regression_ and is equivalent to a double-exponential prior on $\beta$
[Tib96]. Unlike $\ell_{2}$, $\ell_{1}$ regularization doesn’t square the
coefficients, and hence the entries of $\widehat{\beta}$ tend to be sparse. In
this way, $\ell_{1}$ regression can be seen as a form of feature
selection—i.e., choosing a subset of the original features to keep in the
model (e.g., [BL97]). Sparsity helps to avoid overfitting to the training set;
as a result, the number of training examples required for successful learning
with $\ell_{1}$ regularization grows only logarithmically with the number of
irrelevant features, whereas this number grows linearly for $\ell_{2}$
regression [Ng04]. Sparse models also have the benefit of being more
interpretable, which is important for scientists who want to know which
particular variables are actually relevant for a given response. Building
regression models for interpretation is further discussed in Sections 3 and 4.
If $\ell_{2}$ regression fails to achieve sparsity because coefficients are
squared, then, say, $\ell_{\frac{1}{2}}$ regression should achieve even more
sparsity than $\ell_{1}$. As $p$ approaches 0, $\left\|\beta\right\|_{p}$
approaches the number of nonzero values in $\beta$. Hence, regularization with
what is called the “$\ell_{0}$ norm” is _subset selection_ : Choosing a small
number of the original features to retain in the regression model. Once a
coefficient is in the model, there’s no incentive to drive it to a small
value; all that counts is the cost of adding it in the first place. The
$\ell_{0}$ norm has a number of advantages [LPFU08], including bounded worst-
case risk with respect to the $\ell_{1}$ norm and better control of a measure
called the “false discovery rate” (FDR), explained more fully in Section 3.
Moreover, as [OTJ09, p. 1] note, “A virtue of the [$\ell_{0}$] approach is
that it focuses on the qualitative decision as to whether a covariate is
relevant to the problem at hand, a decision which is conceptually distinct
from parameter estimation.” However, they add, “A virtue of the [$\ell_{1}$]
approach is its computational tractability.” Indeed, exact $\ell_{0}$
regularization requires subset search, which has been proved NP-hard [Nat95].
In practice, therefore, an approximate greedy algorithm like forward stepwise
selection is necessary [BL05, p. 582].
In a regression model, residual sum of squares is proportional up to an
additive constant to the negative log-likelihood of $\beta$. Therefore,
$\ell_{0}$ regularization can be rephrased as a _penalized likelihood_
criterion (with a different $\lambda$ than in (6)):
$\begin{split}-2\ln P(Y\,\lvert\,\widehat{\beta}_{q})+\lambda q,\end{split}$
(7)
where $q$ is the number of features in the model, and
$P(Y\,\lvert\,\widehat{\beta}_{q})$ is the likelihood of the data given a
model containing $q$ features. As noted in [GF00], statisticians have proposed
a number of choices for $\lambda$, including
* •
$\lambda=2$, corresponding to Mallows’ $C_{p}$ [Mal73], or approximately the
Akaike Information Criterion (AIC) [Aka73],
* •
$\lambda=\ln n$, the Bayesian Information Criterion (BIC) [Sch78], and
* •
$\lambda=2\ln m$, the Risk Inflation Criterion (RIC) [DJ94, FG94].
It turns out that each of these criteria can be derived from an information-
theoretic principle called Minimum Description Length (MDL) under different
“model coding” schemes (e.g., [FS99, HY99]). MDL forms the focus of this
thesis, and its approach to regression is the subject of the next subsection.
### 1.2 Regression by Minimum Description Length
MDL [Ris78, Ris99] is a method of model selection that treats the best model
as the one which maximally compresses a digital representation of the observed
data. We can imagine a “Sender” who wants to transmit some data in an email to
a “Receiver” using as few bits as possible [Sti04]. In the case of linear
regression, we assume that both Sender and Receiver know the $n\times m$
matrix $X$, and Sender wants to convey the values in the $n\times 1$ matrix
$Y$. One way to do this would be to send the raw values for each of the $n$
observations $Y=\begin{bmatrix}y_{1}\\\ \ldots\\\ y_{n}\\\ \end{bmatrix}$.
However, if the response is correlated with some features, it may be more
efficient first to describe a regression model $\widehat{\beta}$ for $Y$ given
$X$ and then to enumerate the residuals $Y-X\widehat{\beta}$, which—having a
narrower distribution—require fewer bits to encode.
To minimize description length, then, Sender should choose $\widehat{\beta}$
to minimize
$\begin{split}\mathcal{D}(Y\,\lvert\,\widehat{\beta})+\mathcal{D}(\widehat{\beta}),\end{split}$
(8)
where the first term is the description length of the residuals about the
model, and the second term is the description length of the model
itself.222This is what [Gru05, p. 11] calls the “crude, two-part version of
MDL.” Beginning with [Ris86], Rissanen introduced a “one-part” version of MDL
based on a concept called “stochastic complexity.” However, it too divides the
description length into terms for model fit and model complexity, so it is in
practice similar to two-part MDL [Sti04]. Exactly what these terms mean will
be elaborated below.
#### 1.2.1 Coding the Data: $\mathcal{D}(Y\,\lvert\,\widehat{\beta})$
The Kraft inequality in information theory [CT06, sec. 5.2] implies that for
any probability distribution $\left\\{p_{i}\right\\}$ over a finite or
countable set, there exists a corresponding code with codeword lengths
$\left\lceil-\lg p_{i}\right\rceil$. Moreover, these code lengths are optimal
in the sense of minimizing the expected code length with respect to
$\left\\{p_{i}\right\\}$. If Sender and Receiver agree on a model for the
data, e.g., (1), then they have a probability distribution over possible
configurations of residuals $\epsilon$, so they will agree to use a code for
the residuals with lengths
$\begin{split}\mathcal{D}(Y\,\lvert\,\widehat{\beta})=-\lg
P(\epsilon\,\lvert\,\widehat{\beta})=-\lg
P(Y\,\lvert\,\widehat{\beta}),\end{split}$ (9)
that is, the negative log-likelihood of the data given the model. This is a
standard statistical measure for poorness of fit.333We use “idealized” code
lengths and so drop the ceiling on $-\lg P(Y\,\lvert\,\widehat{\beta})$ [BY98,
p. 2746]. Also, since $P(Y\,\lvert\,\widehat{\beta})$ is a continuous density,
we would in practice need to discretize it. This could be done to a given
precision with only a constant impact on code length [CT06, sec. 8.3].
Consider a stepwise-regression scenario in which our model currently contains
$q-1$ features (including the intercept term), and we’re deciding whether to
include an extra $q^{\text{th}}$ feature. Let $Y_{i}$ denote the
$i^{\text{th}}$ entry (row) of $Y$ and $\widehat{\beta}_{q}$ a model with all
$q$ features. (2) and (9) give
$\begin{split}\mathcal{D}(Y\,\lvert\,\widehat{\beta}_{q})&=-\lg\prod_{i=1}^{n}P(Y_{i}\,\lvert\,\widehat{\beta}_{q})\\\
&=-\sum_{i=1}^{n}\lg\left[\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{1}{2\sigma^{2}}(Y_{i}-X\widehat{\beta}_{q})^{2}\right)\right]\\\
&=\frac{1}{2\ln
2}\left[n\ln(2\pi\sigma^{2})+\frac{\text{RSS}_{q}}{\sigma^{2}}\right],\end{split}$
(10)
where $\text{RSS}_{q}$ is as in (5). Of course, $\sigma^{2}$ is in practice
unknown, so we estimate it by
$\begin{split}\widehat{\sigma}^{2}=\frac{\text{RSS}_{q}}{n}.\end{split}$ (11)
(This is the maximum-likelihood estimate for $\sigma^{2}$, which Sender uses
because, ignoring model-coding cost, maximizing likelihood is equivalent to
minimizing description length. In practice, of course, many statisticians use
the unbiased estimate
$\begin{split}\widehat{\sigma}^{2}=\frac{\text{RSS}_{q}}{n-q}.\end{split}$
[Sti04, Appendix A] makes an interesting case for this latter value within the
MDL framework. However, we use the maximum-likelihood estimate for its
elegance and convenience.)
Inserting this estimate for $\sigma^{2}$ in (10) gives
$\begin{split}\mathcal{D}(Y\,\lvert\,\widehat{\beta}_{q})&=\frac{1}{2\ln
2}\left[n\ln(2\pi\widehat{\sigma}^{2})+n\right]\\\ &=\frac{n}{2\ln
2}\left[\ln\left(\frac{2\pi\text{RSS}_{q}}{n}\right)+1\right].\end{split}$
(12)
Unfortunately, in practice, (11) seems to cause overfitting, especially when
used for multiple responses as in Section 2.2.2. The overfitting comes from
the fact that $\widehat{\sigma}^{2}$ assumes that the current $q^{\text{th}}$
feature actually comes into the model; if this is not the case, the estimated
variance is spuriously too low, which allows random noise to appear more
unlikely than it really is. To prevent overfitting, we use instead an estimate
of $\widehat{\sigma}^{2}$ based on the model without the current feature:
$\widehat{\beta}_{q-1}$. That is,
$\begin{split}\widehat{\sigma}^{2}=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-X\widehat{\beta}_{q-1})^{2}=\frac{\text{RSS}_{q-1}}{n},\end{split}$
(13)
which means that (10) becomes instead
$\begin{split}\mathcal{D}(Y\,\lvert\,\widehat{\beta}_{q})&=\frac{n}{2\ln
2}\left[\ln\left(\frac{2\pi\text{RSS}_{q-1}}{n}\right)+\frac{\text{RSS}_{q}}{\text{RSS}_{q-1}}\right].\\\
\end{split}$ (14)
#### 1.2.2 Coding the Model: $\mathcal{D}(\widehat{\beta})$
Just as $\mathcal{D}(Y\,\lvert\,\widehat{\beta})$ depends on the model for the
residuals that Sender and Receiver choose, so their coding scheme for
$\widehat{\beta}$ itself will reflect their prior expectations.555Again by the
Kraft inequality, we can interpret $2^{-\mathcal{D}(\beta)}$ as a prior over
possible models $\beta$. In fact, this is done explicitly in the Minimum
Message Length (MML) principle (e.g., [Wal05]), a Bayesian sister to MDL which
chooses the model $\widehat{\beta}$ with maximum $P(\beta\,\lvert\,Y)$, i.e.,
the model that minimizes $\begin{split}-\lg P(\beta\,\lvert\,Y)=-\lg
P(\beta)-\lg P(Y\,\lvert\,\beta)+\text{const}.\end{split}$ When the number of
features $m$ is large (say, $1000$), Sender will likely only want to transmit
a few of them that are most relevant, and hence the $\widehat{\beta}$ matrix
will contain mostly zeros.666For convenience, we assume that Sender always
transmits the intercept coefficient $\widehat{\beta}_{1}$ and that, in fact,
doing so is free of charge. Thus, even if Sender were to encode all other
feature coefficients as 0, Receiver would at least have the average
$\overline{Y}$ of the $Y$ values to use in reconstructing $Y$. So the first
step in coding $\widehat{\beta}$ could be to say where the nonzero entries are
located; if only a few features enter the model, this can be done relatively
efficiently by listing the indices of the features in the set
$\left\\{1,2,\ldots,m\right\\}$. This requires $\left\lceil\lg m\right\rceil$
bits, which we idealize as just $\lg m$.
The second step is to encode the numerical values of those coefficients.
Rissanen [Ris83] suggested the basic approach for doing this: Create a
discrete grid over some possible parameter values, and use a code for integers
to specify which grid element is closest.777Thus, Sender transmits only
rounded coefficient values. Rounding adds to the residual coding cost because
Sender is not specifying the exact MLE, but the increase tends to be less than
a bit [Ris89]. [Sti04] described a simple way to approximate the value of a
particular coefficient $\widehat{\beta}$: Encode an integer version of its
z-score relative to the null-hypothesis value $\beta_{0}$ (which in our case
is 0):
$\begin{split}\left\langle\frac{\widehat{\beta}-\beta_{0}}{\text{SE}(\widehat{\beta})}\right\rangle=\left\langle\frac{\widehat{\beta}}{\text{SE}(\widehat{\beta})}\right\rangle,\end{split}$
(15)
where $\langle x\rangle$ means “the closest integer to $x$.” The z-score can
then be coded with the idealized universal code for the positive integers of
[Eli75] and [Ris83], in which the cost to code
$i\in\left\\{1,2,3,\ldots\right\\}$ is
$\begin{split}\lg^{*}i+c,\end{split}$
where $\lg^{*}i:=\lg i+\lg\lg i+\lg\lg\lg i+\ldots$ so long as the terms
remain positive, and $c\approx\lg 2.865\approx 1.516$ [Ris83, p. 424] is the
constant such that
$\begin{split}\sum_{i=1}^{\infty}2^{-(\lg^{*}i+c)}=1.\end{split}$
We require the $\lg^{*}$ instead of a simple $\lg$ because the number of bits
Sender uses to convey the integer $i$ will vary, and she needs to tell
Receiver how many bits to expect. This number of bits is itself an integer
that can be coded, hence the iteration of logarithms. The middle row of Table
1 shows example costs with this code.
$i$ | 1 | 2 | 3 | 4 | 5 | 10 | 100
---|---|---|---|---|---|---|---
Universal-code cost for $i$ | 1.5 | 2.5 | 3.8 | 4.5 | 5.3 | 7.4 | 12.9
Universal-style cost for $i$, truncated at 1000 | 1.2 | 2.2 | 3.4 | 4.2 | 5.0 | 7.0 | 12.6
Table 1: Example costs of the universal code for the integers.
In fact, in practice it’s unnecessary to allow our integer code to extend to
arbitrarily large integers. We’re interested in features near the limit of
detectability, and we expect our z-scores to be roughly in the range $\sim 2$
to $\sim 4$, since if they were much higher, the true features would be
obvious and wouldn’t require sensitive feature selection. We could thus impose
some maximum possible z-score $Z$ that we might ever want to encode (say,
1000) and assume that all of our z-scores will fall below it. In this case,
the constant $c$ can be reduced to a new value $c_{Z}$, now only being large
enough that
$\begin{split}\sum_{i=1}^{Z}2^{-(\lg^{*}i+c_{Z})}=1.\end{split}$ (16)
In particular, $c_{1000}\approx 1.199$, which is reflected in the reduced
costs shown in the third row of Table 1. As it turns out, $c_{Z}\approx 1$
over a broad range (say, for $Z\in\left\\{5,\ldots,1000\right\\}$).
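As a quick check of Table 1, a small Python sketch of the idealized code length (the helper names are ours):

```python
import math

def log_star(i):
    """lg*(i) = lg i + lg lg i + lg lg lg i + ..., while the terms stay positive."""
    total, term = 0.0, math.log2(i)
    while term > 0:
        total += term
        term = math.log2(term)
    return total

def universal_cost(i, c=1.516):
    """Idealized universal-code cost lg*(i) + c for a positive integer i.

    With c ~ 1.516 this reproduces the middle row of Table 1; using the
    truncated constant c_Z ~ 1.199 (Z = 1000) gives the third row."""
    return log_star(i) + c

print([round(universal_cost(i), 1) for i in (1, 2, 3, 4, 5, 10, 100)])
# ~ [1.5, 2.5, 3.8, 4.5, 5.3, 7.4, 12.9], the middle row of Table 1
```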
In our implementation, we avoid computing the actual values of our z-scores
(though this could be done in principle), instead assuming a constant cost of
2 bits per coefficient. Though MDL theoretically has no tunable parameters
once Sender and Receiver have decided on their models, the number of bits to
specify a coefficient can act as one, having much the same effect as the
significance level $\alpha$ of a hypothesis test (see Section 3.3 for
details). We found 2 bits per coefficient to work well in practice.
#### 1.2.3 Summary
Combining the cost of the residuals with the cost of the model gives the
following formula for the description length as a function of the number of
features $q$ that we include in the model:
$\begin{split}-\lg P(Y\,\lvert\,\widehat{\beta}_{q})+q(\lg m+2).\end{split}$
(17)
Note the similarity between (17) and the RIC penalty for (7).
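Putting the residual cost (14) and the model cost together, the following Python sketch shows one natural reading of the resulting forward stepwise search. It is our own illustration, not code from the thesis: a candidate feature is accepted when the drop in residual description length, computed as in (14) relative to the current model, exceeds the $\lg m+2$ bits needed to code it, and column 0 plays the role of the intercept that is transmitted free of charge.

```python
import numpy as np

def rss(X, Y, cols):
    """Residual sum of squares of the OLS fit restricted to the given columns."""
    Xs = X[:, cols]
    beta, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
    resid = Y - Xs @ beta
    return float(resid @ resid)

def mdl_stepwise(X, Y, bits_per_coef=2.0):
    """Forward stepwise feature selection under the MDL criterion (17), a sketch.

    Adding feature j changes the residual cost (14) by roughly
    n/(2 ln 2) * (RSS_new / RSS_old - 1) bits, while coding the feature costs
    lg(m) + bits_per_coef bits; the feature with the largest net saving is
    added, and the search stops when no feature saves bits."""
    n, m = X.shape
    model = [0]                      # column 0 is the intercept, transmitted for free
    while True:
        rss_old = rss(X, Y, model)
        best_j, best_saving = None, 0.0
        for j in range(1, m):
            if j in model:
                continue
            rss_new = rss(X, Y, model + [j])
            delta_resid = n / (2.0 * np.log(2.0)) * (rss_new / rss_old - 1.0)
            cost = np.log2(m) + bits_per_coef
            saving = -(delta_resid + cost)
            if saving > best_saving:
                best_j, best_saving = j, saving
        if best_j is None:
            return model
        model.append(best_j)
```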
## 2 MIC for Prediction
Standard multivariate linear regression as described in Section 1 assumes many
features but only a single response. However, a number of practical problems
involve both multiple features and multiple responses. For instance, a
biologist may have transcript abundances for thousands of genes that she would
like to predict using genetic markers [KCY+06], or she may have multiple
diagnostic categories of cancers whose presence or absence she would like to
predict using transcript abundances [KWR+01]. [BF97] describe a situation in
which they were asked to use over 100 econometric variables to predict stock
prices in 60 different industries.
More generally, we suppose we have $h$ response variables $y_{1},\ldots,y_{h}$
to be predicted by the same set of features $x_{2},\ldots,x_{m}$ (with $x_{1}$
being an intercept, as before). In parallel to (2), we can write
$\begin{split}Y=X\beta+\epsilon,\end{split}$ (18)
where now $Y$ is an $n\times h$ matrix with each response along a column. $X$
remains $n\times m$, while $\beta$ is $m\times h$ and $\epsilon$ is $n\times
h$.
In general, the noise on the responses may be correlated; for instance, if our
responses consist of temperature measurements at various locations, taken with
the same thermometer, then if our instrument drifted too high at one location,
it will have been too high at the other. A second, practical reason we might
make this assumption comes from our use of stepwise regression to choose our
model: In the early iterations of a stepwise algorithm, the effects of
features not present in the model show up as part of the “noise” error term,
and if two responses share a feature not yet in the model, the portion of the
error term due to that feature will be the same. Thus, we may decide to let
the rows of $\epsilon$ have nonzero covariance:
$\begin{split}\epsilon_{i}\sim\mathcal{N}_{h}(0,\Sigma),\end{split}$ (19)
where $\epsilon_{i}$ is the $i$th row of $\epsilon$, and $\Sigma$ is an
arbitrary (not necessarily diagonal) $h\times h$ covariance matrix. In
practice, however, we end up using a diagonal estimate of $\Sigma$, as
discussed in Section 2.2.2.
### 2.1 Multitask Learning
The naïve approach to regression with the above model is to treat each
response as a separate regression problem and calculate each column of
$\widehat{\beta}$ using just the corresponding column of $Y$. This is
completely valid mathematically; it even works in the case where the
covariance matrix $\Sigma$ is non-diagonal, because the maximum-likelihood
solution for the mean of a multivariate Gaussian is independent of the
covariance [Bis06, p. 147]. However, intuitively, if the regression problems
are related, we ought to be able to do better by “borrowing strength” across
the multiple responses. This concept goes by various names; in the machine-
learning community, it’s often known as “multitask learning” (e.g., [Car97])
or “transfer learning” (e.g., [RNK06]).
Several familiar machine-learning algorithms naturally do multitask learning;
for instance, neural networks can share hidden-layer weights with multiple
responses. In addition, a number of explicit multitask regression methods have
been proposed, such as _curds and whey_ , which takes advantage of correlation
among responses [BF97], and various dimensionality-reduction algorithms (e.g.,
[AM05]).
Some techniques even do explicit feature selection. For instance, [BVF98,
BVF02] present Bayesian Markov Chain Monte Carlo approaches for large numbers
of features. [ST05] present a _Multiresponse Sparse Regression_ (MRSR)
algorithm that selects features in a stepwise manner by choosing at each
iteration the feature most correlated with the residuals of the responses from
the previous model.888Another sparse-regression algorithm is BBLasso,
described in Section 2.1.2 below. We have not explored the above algorithms in
detail, but comparing them against MIC could be a fruitful direction for
future work. Below we elaborate on two multitask-learning approaches that we
have used.
#### 2.1.1 AndoZhang
Ando and Zhang [AZ05] aim to use multiple prediction problems to learn an
underlying shared structural parameter on the input space. Their presentation
is general, but they give the particular example of a linear model, which
we’ll rewrite using the notation of this thesis. Given $h$ tasks and $m$
features per task, the authors assume the existence of an $H\times m$ matrix
$\Theta$, with $H<m$ (the authors use $h$ instead of $H$, but we have already
used that variable to denote the number of tasks), such that for each task
$k=1,\ldots,h$,
$\begin{split}Y^{k}=X\beta^{k}+X\Theta^{\prime}\beta_{\Theta}^{k},\end{split}$
where superscript $k$ denotes the $k^{\text{th}}$ column and $\beta_{\Theta}$
is an $H\times h$ matrix of weights for the “lower dimensional features”
created by the product $X\Theta^{\prime}$ (p. 1824). The transformation
$\Theta$ is common across tasks and is thus the way we share information. The
authors develop an algorithm, referred to as _AndoZhang_ in this thesis, that
chooses (p. 1825)
$\begin{split}\left[\left\\{\widehat{\beta},\widehat{\beta}_{\Theta}\right\\},\widehat{\Theta}\right]=\underset{\left\\{\beta,\beta_{\Theta}\right\\},\Theta}{\operatorname{argmin}}\sum_{k=1}^{h}\left(\frac{1}{n}\left\|Y^{k}-X\left(\beta^{k}+\Theta^{\prime}\beta_{\Theta}^{k}\right)\right\|^{2}+\lambda_{k}\left\|\beta^{k}\right\|^{2}\right)\text{
\ such that \ }\Theta\Theta^{\prime}=I_{H\times H}.\end{split}$
Here $\left\\{\lambda_{k}\right\\}_{k=1}^{h}$ are regularization weights, and
(unlike Ando and Zhang) we’ve assumed a constant number $n$ of observations
for each task, as well as a quadratic loss function.
$\left\|\cdot\right\|^{2}$ denotes the square of the $\ell_{2}$ norm.
Ando and Zhang present their algorithm as a tool for semi-supervised learning:
Given one particular problem to be solved, it’s possible to generate auxiliary
problems from the known data, solve them, and then use the shared structural
parameter $\Theta$ on the original problem. Of course, the approach works just
as well when we start out with all of the tasks as problems to be solved in
their own right.
#### 2.1.2 BBLasso
A few multitask learning methods actually do feature selection, i.e., building
models that contain only a subset of the original features. AndoZhang, with
its $\ell_{2}$ regularization penalty, leaves in most or all of the features,
but $\ell_{1}$ methods, such as _BBLasso_ presented below, do not.
[AEP08] present an algorithm for learning functions of input features across
tasks. This is more general than the approach considered in this thesis of
choosing merely a subset of the original features, but [AEP08, sec. 2.2] note
that their approach reduces to feature selection with an identity function.
Regularizing with an $\ell_{1}$ norm over features, their problem thus becomes
$\begin{split}\widehat{\beta}=\underset{\beta}{\operatorname{argmin}}\left(\text{RSS}_{\beta}+\lambda\sum_{i=1}^{m}\left\|\beta_{i}\right\|\right),\end{split}$
(20)
where
$\text{RSS}_{\beta}=\text{trace}\left[(Y-X\beta)^{\prime}(Y-X\beta)\right]$ is
the residual sum of squares for all $h$ responses, and $\beta_{i}$ denotes the
$i^{\text{th}}$ row of $\beta$. The $i^{\text{th}}$ term in the sum represents
the magnitude of coefficients assigned to feature $i$; the sum over these
values for each $i$ amounts to an $\ell_{1}$ norm over the rows.
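As a minimal sketch, assuming plain numpy arrays for $X$, $Y$, and a candidate coefficient matrix (the function name is ours, not from [AEP08]), the objective in (20) can be evaluated as:

```python
import numpy as np

def l1_l2_objective(X, Y, beta, lam):
    """Evaluate equation (20): RSS plus an l1 norm over the row-wise l2 norms of beta."""
    resid = Y - X @ beta
    rss = np.trace(resid.T @ resid)           # trace[(Y - X beta)'(Y - X beta)]
    row_norms = np.linalg.norm(beta, axis=1)  # ||beta_i|| for each feature (row) i
    return rss + lam * row_norms.sum()

# Toy usage with arbitrary dimensions (m features, h responses).
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 8))
Y = rng.standard_normal((30, 4))
beta = rng.standard_normal((8, 4))
print(l1_l2_objective(X, Y, beta, lam=0.5))
```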
The $\ell_{2}$ norm favors shared coefficients across responses for the same
features. To see this, [AEP08, p. 6] suggest the case in which the entries of
$\beta$ can only be 0 or 1. If $h$ different features each have a single
nonzero response coefficient, the penalty will be
$\lambda\sum_{j=1}^{h}1=\lambda h$, since $\left\|\beta_{i}\right\|=1$ for
each such feature $i$. However, if a single feature shares a coefficient of 1
across all $h$ responses, the penalty is only
$\lambda\sqrt{\sum_{j=1}^{h}1^{2}}=\lambda\sqrt{h}$. The same number and
magnitude of nonzero coefficients thus leads to a much smaller penalty when
the coefficients are shared.
In principle the penalty term can be made even smaller by including nonzero
coefficients for only some of the responses. For instance, in the example
above, the penalty would be $\lambda\sqrt{h/2}$ if just half of the responses had
nonzero coefficients (i.e., coefficients of 1). However, in reality, the
coefficient values are not restricted to 0 and 1, meaning that unhelpful
coefficients, rather than being set to 0, tend to get very small values whose
square is negligible in the $\ell_{2}$-norm penalty. In practice, then, this
algorithm adds entire rows of nonzero coefficients to $\widehat{\beta}$.
Of course, as [OTJ09, p. 4] note, the $\ell_{2}$ norm used here could be
generalized to an $\ell_{p}$ norm for any $p\in[1,\infty]$. $p=1$ corresponds
to “no sharing” across the responses, since (20) would then reduce to the
objective function for $h$ independent $\ell_{1}$-penalized regressions. At
the opposite extreme, $p=\infty$ corresponds to “maximal sharing”; in that
case, only the maximum absolute coefficient value across a row matters.
A number of efficient optimization approaches have been proposed for (20) with
$p=2$ and $p=\infty$; see, e.g., the related-work sections of [TVW05] and
[ST07] for a survey. In particular, [OTJ09] propose a fast approximate
algorithm using a regularization path, that is, a method which implicitly
evaluates (20) at all values of $\lambda$.
### 2.2 Regression by MIC
Following Section 1.2, this section considers ways we might approach
regression with multiple responses from the perspective of information theory.
One strategy, of course, would be to apply the penalized-likelihood criterion
(17) to each response in isolation; we’ll call this the _RIC_ method, because
the dominant penalty in (17) is $\lg m$, the same value prescribed by the Risk
Inflation Criterion for equation (7).
However, following the intuition of BBLasso and other multitask algorithms, we
suspect that features predictive of one task are more likely predictive of
other, related tasks. For example, if two of our responses are “level of HDL
cholesterol” and “level of LDL cholesterol,” we expect that lifestyle
variables correlated with one will tend to be correlated with the other. The
following multitask coding scheme, called the _Multiple Inclusion Criterion_
(MIC), is designed to take advantage of such situations by making it easier to
add to our models features that are shared across responses.
#### 2.2.1 Coding the Model
The way MIC borrows strength across responses is to efficiently specify the
feature-response pairs in the $m\times h$ matrix $\widehat{\beta}$. The naïve
approach would be to put each of the $mh$ coefficients in a linear order and
specify the index of the desired coefficient using $\lg(mh)$ bits. But we can
do better. If we expect nearly all the responses to be correlated with the
predictive features, we could give all of the responses nonzero coefficients
(using $2h$ bits in total, 2 for each of the $h$ response coefficients)
and simply specify the feature that we’re talking about using $\lg m$ bits, as
in Section 1.2.2. We’ll call this the _fully dependent MIC_ coding scheme (or
_Full MIC_ for short). It amounts to adding entire rows of nonzero
coefficients at a time to our $\widehat{\beta}$ matrix, in much the same way
that BBLasso does.
In many cases, however, the assumption that each feature is correlated with
almost all of the responses is unrealistic. A more flexible coding scheme
would allow us to specify only a subset of the responses to which we want to
give nonzero coefficients. For instance, suppose we’re considering feature
number 703; of the $h=20$ responses, we think that only responses
$\left\\{2,5,11,19\right\\}$ should have nonzero coefficients with the current
feature. We can use $\lg m$ bits to specify our feature (number 703) once, and
then we can list the particular responses that have nonzero coefficients with
feature 703, thereby avoiding four different $\lg(mh)$ penalties to specify
each coefficient in isolation.
A standard way to code a subset of a set of size $h$ is to first specify how
many $k\leq h$ elements the subset contains and then which of the
$\binom{h}{k}$ possible subsets with $k$ elements we’re referring to [CT06,
sec. 7.2]. In particular, we choose to code $k$ using
$\begin{split}\lg^{*}k+c_{h}\end{split}$ (21)
bits, with $c_{h}$ as defined in (16).101010We could also assume a uniform
distribution on $\left\\{1,2,\ldots,h\right\\}$ and spend $\lg h$ bits to code
$k$’s index. However, in practice we find that smaller values of $k$ occur
more often, so that the $\lg^{*}$-based code is generally cheaper. We then
need $\lg\binom{h}{k}$ additional bits to specify the particular subset. We
refer to this code as _partially dependent MIC_ (or _Partial MIC_ for short).
#### 2.2.2 Coding the Data
As usual, $\mathcal{D}(Y\,\lvert\,\widehat{\beta}_{q})=-\lg
P(Y\,\lvert\,\widehat{\beta}_{q})$ for the model that Sender and Receiver
choose, which in this case is (18). If we allow rows of $\epsilon$ to have
arbitrary covariance as in (19), then
$\begin{split}\mathcal{D}(Y\,\lvert\,\widehat{\beta}_{q})=\frac{1}{2\ln
2}\left[n\ln{\left((2\pi)^{h}|\Sigma|\right)}+\sum_{i=1}^{n}\left(Y_{i}-X_{i}\widehat{\beta}_{q}\right)^{\prime}\Sigma^{-1}\left(Y_{i}-X_{i}\widehat{\beta}_{q}\right)\right],\end{split}$
(22)
with subscript $i$ denoting the $i^{\text{th}}$ row. Since $\Sigma$ is in fact
unknown, we estimate it using maximum likelihood:
$\begin{split}\widehat{\Sigma}_{F}=\frac{1}{n}\left(Y-X\widehat{\beta}_{q-1}\right)^{\prime}\left(Y-X\widehat{\beta}_{q-1}\right),\end{split}$
(23)
where the subscript $F$ stands for “full covariance,” and we use
$\widehat{\beta}_{q-1}$ instead of $\widehat{\beta}_{q}$ to prevent
overfitting, as in the single-response case of Section 1.2.1.
In practice, we find that estimating all $h^{2}$ entries of $\Sigma$ leads to
overfitting. Therefore, in our experiments, we estimate $\Sigma$ using a
diagonal matrix $\widehat{\Sigma}_{D}$ that’s the same as
$\widehat{\Sigma}_{F}$ except that the off-diagonal entries have been set to
0. In this case, (22) reduces to a sum of $h$ terms like (14) for each
response separately. We also experimented with shrunken estimates of the form
$\widehat{\Sigma}_{\lambda}=\lambda\widehat{\Sigma}_{D}+(1-\lambda)\widehat{\Sigma}_{F}$
for $\lambda\in[0,1]$ and observed informally that for $\lambda>0.5$ or so,
$\widehat{\Sigma}_{\lambda}$ performed comparably with $\widehat{\Sigma}_{D}$.
However, there did not appear to be a performance advantage, so we continued
to use $\widehat{\Sigma}_{D}$, which is faster computationally.111111We also
experimented with two other regularized full covariance-matrix estimators,
proposed in [LW04] and [SORS], respectively. While we never ran systematic
comparisons, our informal assessment was that these methods generally did not
improve performance relative to our current approach and sometimes reduced it.
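As a small illustration of the estimators just described (the inputs and names below are ours), the full, diagonal, and shrunken estimates can be computed from a residual matrix as follows:

```python
import numpy as np

def sigma_estimates(resid, lam=0.7):
    """Full (23), diagonal, and shrunken covariance estimates from residuals Y - X beta."""
    n = resid.shape[0]
    sigma_full = resid.T @ resid / n             # maximum-likelihood estimate, equation (23)
    sigma_diag = np.diag(np.diag(sigma_full))    # off-diagonal entries set to 0
    sigma_shrunk = lam * sigma_diag + (1 - lam) * sigma_full
    return sigma_full, sigma_diag, sigma_shrunk

rng = np.random.default_rng(2)
resid = rng.standard_normal((100, 5))            # pretend residuals for h = 5 responses
sigma_f, sigma_d, sigma_l = sigma_estimates(resid)
```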
#### 2.2.3 Comparing the Coding Schemes
This section has discussed three information-theoretic approaches to multitask
regression: RIC, Full MIC, and Partial MIC. In general, the negative log-
likelihood portion of RIC may differ from that of the other two, because Full
and Partial MIC may use a nondiagonal covariance estimate like
$\widehat{\Sigma}_{F}$, while RIC, only operating on one response at a time,
implicitly uses $\widehat{\Sigma}_{D}$. However, since we also use
$\widehat{\Sigma}_{D}$ for Full and Partial MIC, the real differences come
from the coding penalties.
These are compared in Table 2 for various values of $k$, the number of
responses for which we add the current feature under consideration. Of course,
Full MIC is only allowed to take $k=0$ or $k=h$, so it actually has $h$
nonzero coefficients in all three rows of the table. However, if the extra
$h-k$ coefficients correspond to responses the feature does not actually predict, the extra reduction
in residual-coding cost that Full MIC enjoys over the other methods is likely
to be small.
The numbers beside the coding-cost formulas illustrate the case $m=2{,}000$
features and $h=20$ responses. As expected, each coding scheme is cheapest in
the case for which it was designed; however, we note that the MIC methods are
never excessively expensive, unlike RIC for $k=h$.
Table 2: Costs in bits for each of the three schemes to code the appearance of a feature in $k=1$, $k=\frac{h}{4}$, and $k=h$ response models. In general, we assume $m\gg h\gg 1$. Note that for $h\in\\{5,\ldots,1000\\}$, $c_{h}\approx 1$. Examples of these values for $m=2{,}000$ and $h=20$ appear in brackets; the smallest of the costs appears in bold.

$k$ | Partial MIC | Full MIC | RIC
---|---|---|---
1 | $\lg m+c_{h}+\lg h+2$ [18.4] | $\lg m+2h$ [51.0] | $\lg m+2$ [**13.0**]
$\frac{h}{4}$ | $\lg m+\lg^{*}\left(\frac{h}{4}\right)+c_{h}+\lg{h\choose h/4}+\frac{h}{2}$ [**39.8**] | $\lg m+2h$ [51.0] | $\frac{h}{4}\lg m+\frac{h}{2}$ [64.8]
$h$ | $\lg m+\lg^{*}h+c_{h}+2h$ [59.7] | $\lg m+2h$ [**51.0**] | $h\lg m+2h$ [259.3]
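The bracketed values above can be reproduced, up to the $c_{h}\approx 1$ approximation, with a few lines of Python. Here we take $\lg^{*}$ to be the iterated logarithm keeping only the positive terms, which is an assumption about the exact convention used:

```python
from math import comb, log2

def lg_star(k):
    """Iterated logarithm lg* k = lg k + lg lg k + ..., keeping only the positive terms."""
    total, x = 0.0, float(k)
    while x > 1:
        x = log2(x)
        if x > 0:
            total += x
    return total

def partial_mic_cost(m, h, k, c_h=1.0):
    return log2(m) + lg_star(k) + c_h + log2(comb(h, k)) + 2 * k

def full_mic_cost(m, h):
    return log2(m) + 2 * h

def ric_cost(m, k):
    return k * log2(m) + 2 * k

m, h = 2000, 20
for k in (1, h // 4, h):
    print(k, round(partial_mic_cost(m, h, k), 1),
          round(full_mic_cost(m, h), 1), round(ric_cost(m, k), 1))
```

With $m=2{,}000$ and $h=20$ this prints Partial MIC costs of roughly 18.3, 39.7, and 59.6 bits (a tenth or two below the bracketed values because of the $c_{h}\approx 1$ approximation) and matches the Full MIC and RIC brackets after rounding.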
### 2.3 Implementation
As mentioned in Section 1.1, searching over all possible combinations of zero
and nonzero coefficients for the model that minimizes description length is
computationally intractable. We therefore implement MIC using a forward
stepwise search procedure, detailed in Algorithm 2.1. Beginning with a null
model containing only an intercept feature, we evaluate each feature for
addition to this model and add the one that would most reduce description
length, as computed using Algorithm 2.2. Updating our model, we re-evaluate
each remaining feature and add to the model the next-best feature. We continue
until we find no more features that would reduce description length if added
to the model.
Algorithm 2.1: StepwiseRegressionMIC($X,Y,\text{method}$)

    // "method" is either "partially dependent MIC" or "fully dependent MIC."
    Make sure that the matrix $X$ contains a column of 1's for the intercept.
    Initialize the model $\widehat{\beta}$ to have nonzero coefficients only for the intercept terms. That is, if $\widehat{\beta}$ is an $m\times h$ matrix, exactly the first row has nonzero elements.
    Compute $\widehat{\Sigma}$ based on the residuals $Y-X\widehat{\beta}$ (whether as $\widehat{\Sigma}_{F}$, $\widehat{\Sigma}_{D}$, or something in between).
    while true:
        Find the feature $f$ that, if added, would most reduce description length. (If method is "fully dependent MIC," then "adding a feature" means making nonzero the coefficients for all of the responses with that feature. If method is "partially dependent MIC," then it means making nonzero the optimal subset of responses, where the "optimal subset" is approximated by a stepwise-style search through response subsets.)
        Let $\widehat{\beta}_{f}$ denote the model that would result if this new feature were added.
        if DescrLength($X,Y,\widehat{\beta}_{f},\widehat{\Sigma},\text{method}$) < DescrLength($X,Y,\widehat{\beta},\widehat{\Sigma},\text{method}$):
            $\widehat{\beta}\leftarrow\widehat{\beta}_{f}$
            Update $\widehat{\Sigma}$.
        else:
            break
    return $\widehat{\beta}$
Algorithm 2.2: DescrLength($X,Y,\widehat{\beta},\widehat{\Sigma},\text{method}$)

    Let $m$ be the number of features (columns of $X$) and $h$ the number of responses (columns of $Y$).
    $d\leftarrow 0$    // Initialize $d$, the description length.
    $d\leftarrow d+\mathcal{D}(Y\,\lvert\,\widehat{\beta})$    // Add the likelihood cost using (22).
    for $j\leftarrow 1$ to $m$:    // Add the cost of feature $j$, if any.
        $k\leftarrow$ number of nonzero responses for feature $j$
        if $k>0$:
            $d\leftarrow d+\lg m$    // Cost to specify which feature.
            $d\leftarrow d+2k$    // Cost to specify nonzero coefficients.
            if method == "partially dependent MIC":
                $d\leftarrow d+\lg^{*}k+c_{h}+\lg\binom{h}{k}$    // Specify which subset of responses.
    return $d$
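For readers who prefer executable code to pseudocode, the following Python sketch computes the description length of Algorithm 2.2, specialized to a diagonal $\widehat{\Sigma}$ and again approximating $c_{h}\approx 1$; the function and variable names are ours rather than those of any released implementation:

```python
import numpy as np
from math import comb, log, log2

def lg_star(k):
    """Iterated logarithm, keeping only the positive terms (assumed convention)."""
    total, x = 0.0, float(k)
    while x > 1:
        x = log2(x)
        if x > 0:
            total += x
    return total

def data_cost(X, Y, beta, sigma_diag):
    """Equation (22) in bits, with Sigma restricted to the length-h vector of
    per-response variances sigma_diag (the diagonal of Sigma_D)."""
    n, h = Y.shape
    resid = Y - X @ beta
    quad = np.sum(resid ** 2 / sigma_diag)          # sum_i r_i' Sigma^{-1} r_i
    log_det = np.sum(np.log(sigma_diag))            # ln |Sigma| for a diagonal Sigma
    return (n * (h * log(2 * np.pi) + log_det) + quad) / (2 * log(2))

def descr_length(X, Y, beta, sigma_diag, method="partially dependent MIC", c_h=1.0):
    """Likelihood cost plus coding cost for every feature with nonzero coefficients."""
    m, h = beta.shape
    d = data_cost(X, Y, beta, sigma_diag)
    for j in range(m):
        k = int(np.count_nonzero(beta[j]))
        if k > 0:
            d += log2(m)                            # which feature
            d += 2 * k                              # the nonzero coefficient values
            if method == "partially dependent MIC":
                d += lg_star(k) + c_h + log2(comb(h, k))   # which subset of responses
    return d
```

Note that the intercept row, if nonzero, is charged like any other feature here; since that cost is the same for every candidate model, it does not affect which model the stepwise search prefers.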
The above procedure requires evaluating the quality of features
$\mathcal{O}(mr)$ times, where $r$ is the number of features eventually added
to the model. For Partial MIC, each time we evaluate a feature, we have to
determine the best subset of responses that might be associated with that
feature. As Algorithm 2.1 notes, this is also done using a stepwise-style
search: Start with no responses, add the best response, then re-evaluate the
remaining responses and add the next-best response, and so on.121212A stepwise
search that re-evaluates the quality of each response at each iteration is
necessary because, if we take the covariance matrix $\widehat{\Sigma}$ to be
nondiagonal, the values of the residuals for one response may affect the
likelihood of residuals for other responses. If we take $\widehat{\Sigma}$ to
be diagonal, as we end up doing in practice, then an $\mathcal{O}(h)$ search
through the responses without re-evaluation would suffice. Unlike an ordinary
stepwise algorithm, however, we don’t terminate the search if, at the current
number of responses in the model, the description length fails to be the best
seen yet. Because we’re interested in borrowing strength across responses, we
need to avoid overlooking cases where the correlation of a feature with any
single response is insufficiently strong to warrant addition, yet the
correlations with all of the responses are. Moreover, the $\lg\binom{h}{k}$
term in Partial MIC’s coding cost does not increase monotonically with $k$, so
even if adding the feature to an intermediate number of response models
doesn’t look promising, adding it to all of them might. Thus, we perform a
full $\mathcal{O}(h^{2})$ search in which we eventually add all the responses
to the model. As a result, Partial MIC requires a total of
$\mathcal{O}(mrh^{2})$ evaluations of description length.131313Full MIC, not
requiring the search through subsets of responses, is only $\mathcal{O}(mr)$
in the number of evaluations of description length.
However, a few optimizations can reduce the computational cost of Partial MIC
in practice.
* •
We can quickly filter out most of the irrelevant features at each iteration by
evaluating, for each feature, the decrease in negative log-likelihood that
would result from simply adding it with all of its responses, without doing
any subset search. Then we keep only the top $t$ features according to this
criterion, on which we proceed to do the full $\mathcal{O}(h^{2})$ search over
subsets. We use $t=75$, but we find that as long as $t$ is bigger than, say,
10 or 20, it has essentially no impact on the quality of results. This
reduces the number of model evaluations to $\mathcal{O}(mr+rth^{2})$.
* •
We can often short-circuit the $\mathcal{O}(h^{2})$ search over response
subsets by noting that a model with more nonzero coefficients always has lower
negative log-likelihood than one with fewer nonzero coefficients. This allows
us to get a lower bound on the description length for the current feature for
each number $k\in\left\\{1,\ldots,h\right\\}$ of nonzero responses that we
might choose as
$\begin{split}&(\text{model cost for other features already in model})\\\
&+(\text{negative log-likelihood of $Y$ if all $h$ responses for this feature
were nonzero})\\\ &+(\text{the increase in model cost if $k$ of the responses
were nonzero}).\end{split}$ (24)
We then need only check those values of $k$ for which (24) is smaller than the
best description length for any candidate feature’s best response subset seen
so far. In practice, with $h=20$, we find that evaluating $k$ up to, say, 3 or
6 is usually enough; i.e., we typically only need to add $3$ to $6$ responses
in a stepwise manner before stopping, with a cost of only $3h$ to
$6h$.141414If $\widehat{\Sigma}$ is diagonal and we don’t need to re-evaluate
residual likelihoods at each iteration, the cost is only $3$ to $6$
evaluations of description length.
Although we did not attempt to do so, it may be possible to formulate MIC
using a regularization path, or _homotopy_ , algorithm of the sort that has
become popular for performing $\ell_{1}$ regularization without the need for
cross-validation (e.g., [Fri08]). If possible, this would be significantly
faster than stepwise search.
### 2.4 Experiments
We evaluate MIC on several synthetic and real data sets in which multiple
responses are associated with the same set of features. We focus on the
parameter regimes for which MIC was designed: Namely, $m\gg n$, but with only
a relatively small number of features expected to be predictive. We describe
the details of each data set in its own subsection below.
For comparing MIC against existing multitask methods, we used the Matlab
“Transfer Learning Toolkit” [KR], which provides implementations of seven
algorithms from the literature. Unfortunately, most of them did not apply to
our data sets, since they often required _meta-features_ (features describing
other features), or expected the features to be frequency counts, or were
unsupervised learners. The two methods that did apply were AndoZhang and
BBLasso, both described in Section 2.1.
The AndoZhang implementation was written by the TL Toolkit authors, Wei-Chun
Kao and Alexander Rakhlin. It included several free parameters, including the
regularization coefficients $\lambda_{k}$ for each task (all set to 1, as was
the default), the optimization method (also set to the default), and $H$.
[AZ05, pp. 1838-39] tried values of $H$ between 10 and 100, settling on 50 but
finding that the exact value was generally not important. We performed
informal cross-validation for values between 1 and 250 and found, perhaps
surprisingly, that $H=1$ consistently gave the best results. We used this
value throughout the experiments below.
The BBLasso implementation was written by Guillaume Obozinski, based on a
paper [OTJ06] published earlier than the [OTJ09] cited above; however, the
algorithm is essentially the same. For each parameter, we used the default
package setting.
AndoZhang and BBLasso are both classification methods, so in order to compare
against them, we had to turn MIC into a classification method as well. One way
would have been to update (22) to reflect a logistic model; however, the
resulting computation of $\widehat{\beta}$ would then have been nonlinear and
much slower. Instead, we continue to apply (22) and simply regress on
responses that have the values 0 or 1. Once we’ve chosen which features will
enter which models, we do a final round of logistic regression for each
response separately with the chosen features to get a slightly better
classifier.
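A hedged sketch of this two-stage idea appears below, with a simple stand-in selector (the largest absolute linear-regression coefficients) in place of the MIC stepwise search, and scikit-learn for the per-response logistic refit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, m, h = 100, 50, 4
X = rng.standard_normal((n, m))
Y = (rng.standard_normal((n, h)) > 0).astype(int)    # binary 0/1 responses

# Stage 1 (stand-in for MIC selection): regress on the 0/1 values directly and keep,
# for each response, the three features with the largest absolute coefficients.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
selected = {k: np.argsort(-np.abs(beta[:, k]))[:3] for k in range(h)}

# Stage 2: a separate logistic regression per response on its selected features.
classifiers = {k: LogisticRegression().fit(X[:, cols], Y[:, k])
               for k, cols in selected.items()}
preds = {k: clf.predict(X[:, selected[k]]) for k, clf in classifiers.items()}
```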
#### 2.4.1 Synthetic Data
We created synthetic data according to three separate scenarios, which we’ll
call “Partial,” “Full,” and “Independent.” For each scenario, we generated a
matrix of continuous responses as
$Y_{\text{sim}}=X_{\text{sim}}{\bf\beta}_{\text{sim}}+\epsilon_{\text{sim}},$
with $m=2{,}000$ features, $h=20$ responses, and $n=100$ observations. Then,
to produce binary responses, we set to 1 those response values that were
greater than or equal to the average value for their columns and set to 0 the
rest, yielding a roughly 50-50 split between 1’s and 0’s because of the
normality of the data. Each entry of $X_{\text{sim}}$ was i.i.d.
$\mathcal{N}(0,1)$, each nonzero entry of $\beta_{\text{sim}}$ was i.i.d.
$\mathcal{N}(0,1)$, and each entry of $\epsilon_{\text{sim}}$ was i.i.d.
$\mathcal{N}(0,0.1)$, with no covariance among the $\epsilon_{\text{sim}}$
entries for different responses.151515We mentioned in Section 2.2.2 that MIC
performed best when we used $\widehat{\Sigma}_{D}$, the diagonal covariance-
matrix estimate. One might wonder whether this was due only to the fact that
we created our synthetic test data without covariance among the responses.
However, this was not the case. When we generated synthetic noise using a
correlation matrix in which each off-diagonal entry was a specific nonzero
value (in particular, we tried 0.4 and 0.8), the same trend appeared: MIC with
$\widehat{\Sigma}_{D}$ had slightly lower test error. Each response had $4$
beneficial features, i.e., each column of $\beta_{\text{sim}}$ had 4 nonzero
entries.
The scenarios differed according to the distribution of the beneficial
features in $\beta_{\text{sim}}$.
* •
In the Partial scenario, the first feature was shared across all 20 responses,
the second was shared across the first 15 responses, the third across the
first 10 responses, and the fourth across the first 5 responses. Because each
response had four features, those responses ($6-20$) that didn’t have all of
the first four features had other features randomly distributed among the
remaining features (5, 6, …, 2000).
* •
In the Full scenario, each response shared exactly features $1-4$, with none
of features $5-2000$ being part of the model.
* •
In the Independent scenario, each response had four random features among
$\left\\{1,\ldots,2000\right\\}$.
Figure 1 illustrates these feature distributions, showing the first 40 rows of
random $\beta_{\text{sim}}$ matrices.
Figure 1: Examples of $\beta_{\text{sim}}$ for the three scenarios, with the
nonzero coefficients in black. The figures show all 20 columns of
$\beta_{\text{sim}}$ but only the first 40 rows. (For the Partial and
Independent scenarios, the number of nonzero coefficients appearing in the
first 40 rows is exaggerated above what would be expected by chance for
illustration purposes.)
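As an illustration of the generation process, here is a numpy sketch of the Full scenario, reading $\mathcal{N}(0,0.1)$ as a variance of 0.1 (an assumption on our part):

```python
import numpy as np

rng = np.random.default_rng(4)
m, h, n = 2000, 20, 100

X_sim = rng.standard_normal((n, m))

# Full scenario: every response uses exactly features 1-4 (indices 0-3 here),
# with i.i.d. N(0, 1) coefficients; all other rows of beta_sim stay zero.
beta_sim = np.zeros((m, h))
beta_sim[:4, :] = rng.standard_normal((4, h))

# N(0, 0.1) noise, with no covariance across responses.
eps_sim = rng.normal(0.0, np.sqrt(0.1), size=(n, h))
Y_cont = X_sim @ beta_sim + eps_sim

# Binarize: 1 where a value is at least its column's mean, 0 otherwise.
Y = (Y_cont >= Y_cont.mean(axis=0)).astype(int)
```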
Table 3 shows the performance of each of the methods on five random instances
of these data sets. Test-set errors for the “True Model” are those obtained
with logistic regression for each response separately using a model containing
exactly the features of $\beta_{\text{sim}}$ that had nonzero coefficients. In
addition to test-set error, we show precision and recall metrics.
_Coefficient-level_ precision and recall refer to individual coefficients: For
those coefficients of the data-generating $\beta_{\text{sim}}$ that were
nonzero, did our final $\widehat{\beta}$ show them as nonzero, and vice versa?
_Feature-level_ precision and recall look at the same question for entire
features: If a row of $\beta_{\text{sim}}$ had any nonzero coefficients, did
our corresponding row of $\widehat{\beta}$ have any nonzero coefficients (not
necessarily the same ones), and vice versa?
The best test-set error values are bolded (though not necessarily
statistically significant). As we would expect, each of Partial MIC, Full MIC,
and RIC is the winner in the synthetic-data regime for which it was designed,
although Partial MIC competes with Full MIC even in the Full regime. Though
the results are not shown, we observed informally that AndoZhang and BBLasso
tended to have larger differences between training and testing error than MIC,
implying that MIC may be less likely to overfit. Resistance to overfitting is
a general property of MDL approaches, because the requirement of completely
coding $\widehat{\beta}$ constrains model complexity. This point can also be
seen from the high precision of the information-theoretic methods, which,
surprisingly, show comparable recall to BBLasso. Full MIC behaves most like
BBLasso, because both algorithms select entire rows of coefficients for
addition to $\widehat{\beta}$.
Table 3: Test-set accuracy, precision, and recall of MIC and other methods on 5 instances of the synthetic data sets generated as described in Section 2.4.1. Because synthetic data is cheap and our algorithms, once trained, are fast to evaluate, we used $10{,}000$ test data points for each data-set instance. Standard errors are reported over each task; that is, with 5 data sets and 20 tasks per data set, the standard errors represent the sample standard deviation of 100 values divided by $\sqrt{100}$ (except for feature-level results, which apply only to entire data sets and so are divided by $\sqrt{5}$). Baseline accuracy, corresponding to guessing the majority category, is roughly 0.5. AndoZhang’s NA values are due to the fact that it does not explicitly select features.

Method | True Model | Partial MIC | Full MIC | RIC | AndoZhang | BBLasso
---|---|---|---|---|---|---
Partial Synthetic Data Set
Test error | $0.07\pm 0.00$ | $\mathbf{0.10\pm 0.00}$ | $0.17\pm 0.01$ | $0.12\pm 0.01$ | $0.50\pm 0.02$ | $0.19\pm 0.01$
Coeff. precision | $1.00\pm 0.00$ | $0.84\pm 0.02$ | $0.26\pm 0.01$ | $0.84\pm 0.02$ | NA | $0.04\pm 0.00$
Coeff. recall | $1.00\pm 0.00$ | $0.77\pm 0.02$ | $0.71\pm 0.03$ | $0.56\pm 0.02$ | NA | $0.81\pm 0.02$
Feature precision | $1.00\pm 0.00$ | $0.99\pm 0.01$ | $0.97\pm 0.02$ | $0.72\pm 0.05$ | NA | $0.20\pm 0.03$
Feature recall | $1.00\pm 0.00$ | $0.54\pm 0.05$ | $0.32\pm 0.03$ | $0.62\pm 0.04$ | NA | $0.54\pm 0.01$
Full Synthetic Data Set
Test error | $0.07\pm 0.00$ | $\mathbf{0.08\pm 0.00}$ | $\mathbf{0.08\pm 0.00}$ | $0.11\pm 0.01$ | $0.45\pm 0.02$ | $0.09\pm 0.00$
Coeff. precision | $1.00\pm 0.00$ | $0.98\pm 0.01$ | $0.80\pm 0.00$ | $0.86\pm 0.02$ | NA | $0.33\pm 0.03$
Coeff. recall | $1.00\pm 0.00$ | $1.00\pm 0.00$ | $1.00\pm 0.00$ | $0.63\pm 0.02$ | NA | $1.00\pm 0.00$
Feature precision | $1.00\pm 0.00$ | $0.80\pm 0.00$ | $0.80\pm 0.00$ | $0.36\pm 0.06$ | NA | $0.33\pm 0.17$
Feature recall | $1.00\pm 0.00$ | $1.00\pm 0.00$ | $1.00\pm 0.00$ | $1.00\pm 0.00$ | NA | $1.00\pm 0.00$
Independent Synthetic Data Set
Test error | $0.07\pm 0.00$ | $0.17\pm 0.01$ | $0.36\pm 0.01$ | $\mathbf{0.13\pm 0.01}$ | $0.49\pm 0.00$ | $0.35\pm 0.01$
Coeff. precision | $1.00\pm 0.00$ | $0.95\pm 0.01$ | $0.06\pm 0.01$ | $0.84\pm 0.02$ | NA | $0.02\pm 0.00$
Coeff. recall | $1.00\pm 0.00$ | $0.44\pm 0.02$ | $0.15\pm 0.02$ | $0.58\pm 0.02$ | NA | $0.43\pm 0.02$
Feature precision | $1.00\pm 0.00$ | $1.00\pm 0.00$ | $1.00\pm 0.00$ | $0.83\pm 0.02$ | NA | $0.30\pm 0.05$
Feature recall | $1.00\pm 0.00$ | $0.44\pm 0.02$ | $0.14\pm 0.02$ | $0.58\pm 0.03$ | NA | $0.42\pm 0.06$
#### 2.4.2 Yeast Growth
Our first real data set comes from [LCCP09, pp. 5-6]. It consists of real-
valued growth measurements of 104 strains of yeast ($n=104$ observations)
under 313 drug conditions. In order to make computations faster, we
hierarchically clustered these 313 conditions into 20 groups using single-link
clustering with correlation as the similarity measure. Taking the average of
the values in each cluster produced $h=20$ real-valued responses, which we
then binarized into two categories: values at least as big as the average for
that response (set to 1) and values below the average (set to 0).161616The
split was not exactly 50-50, as the data were sometimes skewed, with the mean
falling above or below the median. Table 4 reflects this fact, showing that a
classifier that simply guesses the majority label achieves an average error
rate of 0.41. The features consisted of 526 markers (binary values indicating
major or minor allele) and 6,189 transcript levels in rich media for a total
of $m=6{,}715$ features.
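One way to carry out the clustering and binarization just described uses SciPy's single-link clustering with a correlation-based distance; the sketch below uses placeholder data and is not necessarily the exact pipeline behind our experiments:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# growth: an n x 313 matrix of real-valued growth measurements (placeholder data).
rng = np.random.default_rng(5)
growth = rng.standard_normal((104, 313))

# Single-link clustering of the 313 conditions, with 1 - correlation as the distance.
dist = pdist(growth.T, metric="correlation")
labels = fcluster(linkage(dist, method="single"), t=20, criterion="maxclust")

# Average the conditions within each cluster (one response per cluster)...
Y_real = np.column_stack([growth[:, labels == c].mean(axis=1) for c in np.unique(labels)])

# ...then binarize each response at its mean.
Y = (Y_real >= Y_real.mean(axis=0)).astype(int)
```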
The “Yeast Growth” section of Table 4 shows test errors from 5-fold CV on this
data set. Though the difference isn’t statistically significant, Partial MIC
appears to outperform BBLasso on test error. In any event, Partial MIC
produces a much sparser model, as can be seen from the numbers of nonzero
coefficients and features: Partial MIC includes nonzero coefficients in an
average of 4 rows of $\widehat{\beta}$, in contrast to 63 for BBLasso. Partial
MIC also appears to outperform Full MIC and RIC, presumably because the
assumptions of complete sharing or no sharing of features across responses
rarely hold for real-world data. Like Partial MIC, AndoZhang did well on this
data set; however, the algorithm scales poorly to large numbers of responses
and so took 39 days to run.171717Much of the reason for this is that
AndoZhang, as currently implemented, only produces predictions for one of the
responses after it trains on all of them. Thus, in order to predict all 20
responses, we had to run it separately 20 times. It is probably possible to
train once and then predict all 20 responses simultaneously, but we wanted to
avoid introducing errors by changing the code.
#### 2.4.3 Yeast Markers
The Yeast Growth data set described above includes as features both markers
and transcripts for 104 yeast strains. We can consider these as variables for
prediction in their own right, without reference to the growth responses at
all. In fact, regression of transcripts against markers is commonly done; it’s
known as “expression quantitative trait loci” (eQTL) mapping [KW06]. Since the
marker variables are already binary (major or minor allele at the given
location), we decided to flip the problem around from the usual eQTL setup:
Using the $m=6{,}189$ transcripts as features, we predicted a subset of 20 of
the 526 markers (numbered 25, 50, 75, …, 500 for a good distribution along the
genome).
The results are shown in the “Yeast Markers” section of Table 4. Unlike the
case of the Yeast Growth data, there is apparently less sharing of feature
information across markers, as RIC appears to outperform Partial MIC on test
error. We did not run AndoZhang on this data set.
Table 4: Accuracy and number of coefficients and features selected on five folds of CV for the Yeast Growth, Yeast Markers, and Breast Cancer data sets. Standard errors are over the five CV folds; i.e., they represent (sample standard deviation) / $\sqrt{5}$. The “Majority Label” column represents a classifier which guesses the more common of the 0 / 1 labels seen in the training set. AndoZhang’s NA values are due to the fact that it does not explicitly select features.

Method | Partial MIC | Full MIC | RIC | AndoZhang | BBLasso | Majority Label
---|---|---|---|---|---|---
Yeast Growth
Test error | $\mathbf{0.38\pm 0.04}$ | $0.39\pm 0.04$ | $0.41\pm 0.05$ | $0.39\pm 0.03$ | $0.43\pm 0.03$ | $0.41\pm 0.04$
Num. coeff. sel. | $22\pm 4$ | $64\pm 4$ | $9\pm 1$ | NA | $1268\pm 279$ | NA
Num. feat. sel. | $4\pm 0$ | $3\pm 0$ | $9\pm 1$ | NA | $63\pm 14$ | NA
Yeast Markers
Test error | $0.34\pm 0.07$ | $0.44\pm 0.04$ | $\mathbf{0.27\pm 0.06}$ | - | $0.45\pm 0.05$ | $0.53\pm 0.03$
Num. coeff. sel. | $20\pm 1$ | $68\pm 26$ | $37\pm 2$ | NA | $1044\pm 355$ | NA
Num. feat. sel. | $16\pm 1$ | $3\pm 1$ | $37\pm 2$ | NA | $52\pm 18$ | NA
Breast Cancer
Test error | $\mathbf{0.33\pm 0.08}$ | $0.37\pm 0.08$ | $0.36\pm 0.08$ | $0.44\pm 0.03$ | $\mathbf{0.33\pm 0.08}$ | $0.39\pm 0.04$
Num. coeff. sel. | $3\pm 0$ | $11\pm 1$ | $2\pm 0$ | NA | $61\pm 19$ | NA
Num. feat. sel. | $2\pm 0$ | $2\pm 0$ | $2\pm 0$ | NA | $12\pm 4$ | NA
#### 2.4.4 Breast Cancer
Our Breast Cancer data set represents a combination of five of the seven data
sets used in [vtVDvdV+02]. It contains $1{,}171$ observations for $22{,}268$
RMA-normalized gene-expression values. We considered five associated
responses; two were binary, prognosis (“good” or “poor”) and ER status
(“positive” or “negative”), and three were not: age (in years), tumor size (in
mm), and grade (1, 2, or 3). We binarized the three non-binary responses into
two categories: Response values at least as high as the average, and values
below the average. Some of the responses were unavailable for some
observations, so we eliminated those observations, leaving 882. Of those, we
kept only the first $n=100$, both to save computational resources and to make
the problem “harder.” To reduce the features to a manageable number, we took
the $m=5{,}000$ that had highest variance.
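A minimal sketch of the variance filter, with placeholder data and names:

```python
import numpy as np

rng = np.random.default_rng(6)
expr = rng.standard_normal((100, 22268))     # placeholder for the n x 22,268 expression matrix

# Keep the m = 5,000 gene-expression features with the highest variance.
top = np.argsort(-expr.var(axis=0))[:5000]
X = expr[:, top]
```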
Table 4 shows the results. In this case, BBLasso did as well as Partial MIC on
test error; however, as usual, Partial MIC produced a much sparser model.
Sparsity is important for biologists who want to interpret the selected
features.
## 3 Hypothesis Testing with a Single Response
As the experimental results in Section 2.4 demonstrate, regression can be an
effective tool for predicting one or more responses from features, and in
cases where the number of features is large, selection can improve
generalization performance. However, as we noted with the Yeast and Breast
Cancer data sets, accuracy is not always the only goal; sometimes model
interpretability is important. Indeed, in science, the entire goal of
regression is often not prediction but rather discovering a “true model,” by
testing hypotheses about whether pairs of variables really have a linear
relationship. For instance, an econometrician regressing housing prices on
air-pollution levels and resident income [HR78] is not trying to predict price
levels but to study consumer demand behavior.
Hypothesis testing with a small number of features is straightforward: Given a
regression coefficient, we can compute a $t$ statistic, and if it exceeds some
threshold, we reject the null hypothesis that the coefficient is 0. The
situation becomes slightly more complicated when we have many features whose
coefficients we want to examine. Section 3.1 reviews some standard approaches
to this multiple-testing problem, and Section 3.2 recasts them in the light of
MDL. This sets the stage for an MIC approach to hypothesis testing in Section
4.
### 3.1 Multiple-Testing Procedures
Consider the case of a single response variable $y$ (say, a gene-transcript
level), and suppose we’re trying to determine which of $m$ features (say,
genetic markers) are linearly correlated with the response. If we test each
feature at a significance level $\alpha$, the overall probability that we’ll
falsely reject a true null hypothesis will be much greater than $\alpha$—this
is the problem of _multiple hypothesis testing_.
#### 3.1.1 Bonferroni Correction
A standard way to control against so-called _alpha inflation_ is called the
_Bonferroni correction_ : Letting $H_{1},\ldots,H_{m}$ denote the null
hypotheses and $p_{1},\ldots,p_{m}$ the associated p-values, we reject $H_{j}$
when $p_{j}\leq\frac{\alpha}{m}$. By Boole’s inequality, this controls what is
known as the _family-wise error rate_ (FWER), the probability of making any
false rejection, at level $\alpha$ under the _complete null hypothesis_ that
all of $H_{1},\ldots,H_{m}$ are true.181818When the test statistics are
independent or _positive orthant dependent_ , one can replace
$\frac{\alpha}{m}$ by $1-(1-\alpha)^{\frac{1}{m}}$, but the resulting
improvement in power is small for small $\alpha$ [Sha95, p. 570]. In fact,
Bonferroni controls FWER in a _strong sense_ as well: For any subset of null
hypotheses, the probability of falsely rejecting any member of that subset
when all members are true is also bounded by $\alpha$ [Hoc88, p. 800].
The Bonferroni correction is a _single-stage testing procedure_ in which,
unfortunately, testing a larger number of hypotheses reduces the power for
rejecting any one of them [Sha95, p. 569]. This is especially problematic when
the number of hypotheses tested is in the thousands or tens of thousands, as
is common with, e.g., microarray or fMRI data. However, statisticians have
developed several _multistage testing procedures_ that help to overcome this
limitation by conditioning rejection thresholds for some test statistics on
the results of rejections by others.
#### 3.1.2 Improvements on Bonferroni
One of the first of these was the _Holm step-down procedure_ [Hol79]. Here, we
let $p_{(1)},\ldots,p_{(m)}$ denote the p-values sorted in increasing order
and $H_{(1)},\ldots,H_{(m)}$ the corresponding null hypotheses. We begin by
looking at $p_{(1)}$: If it fails to be $\leq\frac{\alpha}{m}$, we stop
without rejecting anything. (This is what Bonferroni would do as well, since
no p-value is $\leq\frac{\alpha}{m}$.) However, if we do reject $H_{(1)}$, we
move on to $H_{(2)}$ and reject if and only if
$p_{(2)}\leq\frac{\alpha}{m-1}$. We continue in this manner, rejecting
$H_{(j)}$ when $p_{(j)}\leq\frac{\alpha}{m-j+1}$, until we fail to reject a
hypothesis. Like Bonferroni, the Holm method controls FWER in the strong sense
for independent or dependent test statistics [Hoc88, p. 800].
Simes [Sim86, p. 751] proposed a more lenient approach: Reject the complete
null hypothesis if any of the following inequalities holds, $j=1,2,\ldots,m$:
$\begin{split}p_{(j)}<\frac{j\alpha}{m}.\end{split}$
This controls FWER for independent test statistics at level $\alpha$.
Moreover, simulation studies suggested that it remained true for various
multivariate-normal and multivariate-gamma test statistics [Sim86, p. 752].
Unfortunately, unlike the Bonferroni and Holm procedures, the Simes approach
says nothing about rejecting individual hypotheses once the complete null is
rejected [Sim86, p. 754], although [Hoc88] and [Hom88] subsequently proposed
limited procedures for doing so.
#### 3.1.3 FDR and the BH Procedure
In his closing discussion, Simes [Sim86, p. 754] proposed that if we wanted to
reject individual null hypotheses, we might reject the ordered hypotheses
$H_{(1)},\ldots,H_{(q)}$ such that
$\begin{split}q=\max\left\\{j:p_{(j)}\leq\frac{j\alpha}{m}\right\\}.\end{split}$
(25)
However, he cautioned that there was “no formal basis” for this. Indeed,
[Hom88, p. 384] showed that in certain cases, the FWER of this procedure could
be made arbitrarily close to 1 for large $m$.
However, Benjamini and Hochberg [BH95] pointed out that the Simes procedure
could be said to control a different measure, which they called the _false-
discovery rate_ (FDR). Letting $V$ denote the (unobservable) random variable
for the number of true null hypotheses rejected and $S$ the (unobservable)
random variable for the number of correctly rejected null hypotheses,
$\begin{split}\text{FDR}:=P(V+S>0)\cdot
E\left[\frac{V}{V+S}\,\middle|\,V+S>0\right]\end{split}$
(where $V+S$, the number of null hypotheses rejected in total, is observable).
Thus, FDR is the expected proportion of falsely rejected null hypotheses. This
statistic is more flexible than FWER because it accounts for the total number
of hypotheses considered; if, for instance, we had $m=1{,}000{,}000$
hypotheses, the FWER would be very high, but the _proportion_ of false
rejections could still be low. Because of this flexibility for large $m$, FDR
has become standard in bioinformatics, neuroscience, and many other fields.
[BH95, p. 293] showed that the _Benjamini-Hochberg (BH) step-up procedure_
(25) controls FDR at $\alpha$ for independent test statistics and any
configuration of false null hypotheses. [BY01] extended this result to show
that the procedure also controlled FDR for certain types of positive
dependency. This included positively correlated and normally distributed one-
sided test statistics, which occur often in, for instance, gene-expression
measurements [RYB03, p. 370].
Various extensions to the BH procedure have been proposed. For instance,
[BL99] suggested a step-down approach for independent test statistics that,
while not dominating the BH step-up procedure, tended experimentally to yield
higher power. [BY01, p. 1169] showed that replacing $\alpha$ by
$\alpha/\sum_{j=1}^{m}\frac{1}{j}$ would control FDR at $\alpha$ for any test-
statistic correlation structure, though this is often more conservative than
necessary (p. 1183). [YB99] proposed a resampling approach for estimating FDR
distributions based on the data, in order to gain increased power when the test
statistics were highly correlated. Many further modifications to the BH
procedure have been put forward since. However, in this thesis, we stick with
the original version given by (25).
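For concreteness, a short sketch applying Bonferroni, Holm, and the BH step-up rule (25) to a vector of p-values follows; it is an illustration rather than a reference implementation:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_j whenever p_j <= alpha / m."""
    p = np.asarray(pvals)
    return p <= alpha / p.size

def holm(pvals, alpha=0.05):
    """Step-down: reject the j-th smallest p-value while p_(j) <= alpha / (m - j + 1)."""
    p = np.asarray(pvals)
    reject = np.zeros(p.size, dtype=bool)
    for rank, idx in enumerate(np.argsort(p)):       # rank = 0 corresponds to j = 1
        if p[idx] <= alpha / (p.size - rank):
            reject[idx] = True
        else:
            break
    return reject

def bh(pvals, alpha=0.05):
    """Step-up (25): reject the q smallest p-values, q = max{j : p_(j) <= j*alpha/m}."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    passed = np.nonzero(p[order] <= alpha * np.arange(1, p.size + 1) / p.size)[0]
    reject = np.zeros(p.size, dtype=bool)
    if passed.size:
        reject[order[:passed[-1] + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(bonferroni(pvals), holm(pvals), bh(pvals), sep="\n")
```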
### 3.2 Hypothesis Testing by MDL
MDL compares log-likelihoods of data under various models, choosing a more
complicated model if and only if it has sufficiently lower negative log-
likelihood. Appendix A shows that, in fact, this process is equivalent to a
standard statistical procedure known as a generalized likelihood-ratio test,
at some implied significance level $\alpha$.
Hypothesis testing differs slightly from regression, however. The goal with
regression is to minimize test-set error by choosing a few highly informative
features. Because MIC penalizes for each feature, if we find several, very
similar features that are all correlated with the response, stepwise
regression by MIC will likely include only one of them. For instance, suppose
the true model is $y=f_{1}+2f_{2}+\epsilon$, where $f_{1}$ and $f_{2}$ are
nearly identical features. Not wanting to waste model-coding bits, regression
MIC would probably give the model $y=3f_{1}+\epsilon$ or $y=3f_{2}+\epsilon$.
With hypothesis testing, we aim to find all of the features that are
significantly correlated with the response. In the above example, we would
hope to include both $f_{1}$ and $f_{2}$ in our set of relevant features. To
do this, we regress $Y$ on each feature in isolation; we keep those features
that, according to MDL, deserve nonzero regression coefficients. Conceptually,
we can think of this as a for-loop over features, in a given iteration of
which we call an MIC-stepwise-regression function using the regular $Y$ matrix
but with an $X$ matrix that contains only the current feature (and an
intercept). There’s one catch, though: Since our for-loop searches through $m$
potential features to add, we’re effectively doing $m$ hypothesis tests, and
we need to incorporate some sort of multiple-testing penalty. Below we
describe a way to motivate such a penalty from an MDL perspective.
Section 1.2 described a scenario to motivate ordinary MDL regression: Sender
wanted to transmit data $Y$ to Receiver when both Sender and Receiver knew the
value of an associated matrix $X$ of features. Now suppose instead that there
are $m$ different Receivers, where Receiver $i$ knows the values only of the
$i^{\text{th}}$ feature (i.e., the $i^{\text{th}}$ column of $X$). All the
Receivers want to reconstruct $Y$, so Sender has to transmit messages to each
of them telling them how to do it, possibly using the feature that they
individually know. The messenger Hermes visits Sender and tells her some
ground rules: Sender must transmit directly to each Receiver only information
about how to reconstruct $Y$ given a model. All model information itself
(i.e., all the $\widehat{\beta}$ coefficients) Sender has to transmit to
Hermes, who will then visit each Receiver and tell him his appropriate
coefficient (possibly 0 if Sender hasn’t bothered to encode a model for that
feature). As a bonus, Sender is allowed to include in each model the intercept
coefficient of $Y$ for free, meaning that if Sender doesn’t specify a feature
regression coefficient for Receiver $i$, that Receiver at least gets the
average $\overline{Y}$ to use in reconstructing $Y$. Letting
$\widehat{\beta}_{q}$ denote the model Sender tells Hermes, Sender’s total
description length is
$\begin{split}\mathcal{D}(\widehat{\beta}_{q})+\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q}),\end{split}$
where
$\begin{split}\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q}):=\sum_{i=1}^{m}\mathcal{D}(Y\,\lvert\,\mathcal{M}_{i}).\end{split}$
(26)
Here, we define $\mathcal{M}_{i}$ to be a null model, containing only the free
intercept term, if Sender doesn’t specify a nonzero regression coefficient for
Receiver $i$ in $\widehat{\beta}_{q}$. Otherwise, $\mathcal{M}_{i}$ contains
both a costless intercept term and the regression coefficient for feature $i$.
The result should be something like a series of hypothesis tests, in which the
features in $\widehat{\beta}_{q}$ correspond to the rejected null hypotheses.
The idea is that, because Sender has to tell Hermes which coefficients go to
which Receiver, the coding of $\widehat{\beta}_{q}$ will involve a penalty
that grows with the number of possible features, which is what we need to
protect against alpha inflation from multiple tests.
### 3.3 MDL Multiple-Testing Corrections
We consider two coding schemes that Sender might use to tell Hermes which
features are contained in her model $\widehat{\beta}_{q}$.
#### 3.3.1 Bonferroni-Style Penalty
Suppose $m=5{,}000$ and Sender wants to use nonzero coefficients for features
226, 1,117, and 3,486. One way to tell Hermes this is to transmit the binary
representation of each of these numbers, using an idealized $\lg m$ bits for
each one. Then, as in Section 1.2.2, Sender spends 2 bits to specify each
coefficient. The resulting message to Hermes costs $3\lg m+3\cdot 2$ bits. In
general, Sender’s description length will be
$\begin{split}q\lg
m+2q+\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q}).\end{split}$ (27)
For each feature $j$, let $-\lg\Lambda_{j}$ denote the decrease in residual
coding cost that would result from using a nonzero coefficient for feature
$j$. (See the Appendix for the motivation behind this notation.) Then
minimizing (27) is equivalent to the following decision rule: Looking at the
features in decreasing order of their $-\lg\Lambda_{j}$ values, add feature
$j$ to the model as long as
$\begin{split}-\lg\Lambda_{j}\geq\lg m+2.\end{split}$ (28)
Interestingly, (39) in Appendix A.4 shows that this is equivalent to a
Bonferroni correction at some implied $\alpha$. In fact, according to Appendix
A.5, taking a cost per coefficient around 2 (2.77 to be exact, for $m=1$ and
one degree of freedom, though this changes somewhat with $m$ and the number of
degrees of freedom), as we did for stepwise regression, corresponds to an
$\alpha$ around 0.05.
#### 3.3.2 BH-Style Penalty
If Sender expects to code more than one coefficient, rather than coding an
entire index to specify each feature, she may find it advantageous to use a
scheme similar to (21). There, the scheme was used to convey a subset of
nonzero response coefficients in Partial MIC; here it would describe a subset
of features from $\left\\{1,\ldots,m\right\\}$. Using this code, Sender’s
description length using $\widehat{\beta}_{q}$ would be
$\begin{split}\lg^{*}q+c_{m}+\lg\binom{m}{q}+2q+\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q}).\end{split}$
(29)
Apart from the two (small) terms at the beginning, this can be made to
correspond at some implied $\alpha$ (in fact, the same $\alpha$ as for the
Bonferroni case above) with (44) and thus approximately with the ordinary BH
procedure.
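Given the per-feature savings $-\lg\Lambda_{j}$, however they are computed, the two coding schemes reduce to simple selection rules. The sketch below applies both, again approximating $c_{m}\approx 1$ and treating $\lg^{*}$ as the iterated logarithm with positive terms (assumed conventions):

```python
import numpy as np
from math import comb, log2

def lg_star(k):
    total, x = 0.0, float(k)
    while x > 1:
        x = log2(x)
        if x > 0:
            total += x
    return total

def bonferroni_style(savings, m):
    """Rule (28): keep feature j whenever -lg Lambda_j >= lg m + 2."""
    return np.asarray(savings, dtype=float) >= log2(m) + 2

def bh_style(savings, m, c_m=1.0):
    """Minimize (29): pick how many of the best-saving features to keep."""
    s = np.sort(np.asarray(savings, dtype=float))[::-1]     # largest savings first
    best_q, best_gain = 0, 0.0
    for q in range(1, s.size + 1):
        gain = s[:q].sum() - (lg_star(q) + c_m + log2(comb(m, q)) + 2 * q)
        if gain > best_gain:
            best_q, best_gain = q, gain
    cutoff = s[best_q - 1] if best_q else np.inf
    # Keep every feature whose savings reach the cutoff (ties at the cutoff are all kept).
    return np.asarray(savings, dtype=float) >= cutoff
```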
## 4 MIC for Hypothesis Testing
As was the case with regression, real-world hypothesis-testing problems often
involve multiple responses. [TVW05, sec. 1] discuss the task of identifying a
subset of 770 wavelength features that best predict 14 chemical quantities
(e.g., pH, organic carbon, and total cations). _Expression quantitative trait
loci_ (eQTL) mapping is the process of looking for correlations between
potentially thousands of gene-expression transcripts and genetic markers
(e.g., [SMD+03]). Section 4.1 reviews some existing ways to deal with this
problem, and then Section 4.2 describes the MIC approach. Experimental
comparisons of the various methods are described in Section 4.3.
### 4.1 Existing Approaches
We describe a few examples of ways to perform hypothesis testing with multiple
responses.
#### 4.1.1 Bonferroni and BH
Probably the most straightforward approach is to treat each response
separately and apply, say, the Bonferroni or BH procedures of Section 3.1 one
response at a time. We’ll refer to these approaches as simply “Bonferroni” and
“BH,” respectively.
According to [KCY+06, p. 20], a number of early eQTL studies took this
approach, applying single-transcript mapping methods to each transcript
separately. However, because such methods are designed to control false
positives when analyzing a single transcript, multiple tests across
transcripts may result in inflated FDR [KCY+06, p. 23].
#### 4.1.2 BonferroniMatrix and BHMatrix
To account for the greater number of hypothesis tests being performed with
$h>1$ responses, we might use a cutoff not of $\frac{\alpha}{m}$ but of
$\frac{\alpha}{mh}$. The procedure that consists of applying the latter
Bonferroni threshold to each p-value in our $m\times h$ matrix of p-values is
what we’ll call “BonferroniMatrix.” Effectively, we’re imagining our matrix of
p-values as one long vector and applying the standard Bonferroni correction to
that.
Similarly, we can imagine turning our matrix of p-values into a single vector
and applying the BH method to it: We’ll call this approach “BHMatrix.” In
contrast to BH, it has a harsher starting penalty ($\frac{\alpha}{mh}$ rather
than $\frac{\alpha}{m}$), but it also allows us to pick out the best p-values
from any location in the entire matrix, not just p-values in the particular
column (response) that we’re currently examining. [KCY+06, p. 21] describes a
conceptually similar approach, called “Q-ALL,” that used so-called q-values
[Sto03] to identify significant marker-transcript correlations.
The result of BonferroniMatrix at level $\alpha$ is the same as that of
regular Bonferroni at level $\frac{\alpha}{h}$, so if we’re looking at a range
of significance levels, it suffices to examine just Bonferroni or
BonferroniMatrix; we’ll show results for Bonferroni only in what follows. The
same is not true for BH vs. BHMatrix, so we report on both methods.
#### 4.1.3 Empirical Bayes
The BHMatrix approach does not treat each response separately, since, for
example, if one response has lots of low p-values that pass the harshest
thresholds of the step-up procedure, it can leave easier thresholds for the
other responses. But this doesn’t really “share information” across responses
in a meaningful way.
One approach that does is called _EBarrays_ [KNLG03], which uses an empirical
Bayes hierarchical model to estimate a prior for the underlying means of each
transcript (response), allowing for stable inference across transcripts
despite the small sample size of each individually. The goal is to determine,
at each marker (feature), which transcripts show significant association.
Originally designed for sharing transcript information one marker at a time,
the method has been extended to a _mixture over markers_ (MOM) model that uses
the EM algorithm to assign probabilities that a transcript correlates to each
marker [KCY+06]. As the names suggest, these methods apply specifically to
eQTL problems, though of course, the empirical-Bayes shrinkage framework
applies more generally. We give this as just an example of previous approaches
to sharing strength across responses that have been developed for hypothesis
testing.
### 4.2 The MIC Approach
While the EBarrays approach shares information across responses, it fails to
take advantage of one potentially important source of shared strength: Namely,
that features strongly associated with one response are perhaps more likely to
be associated with other responses. In the case of eQTL, for example, we might
suspect this type of sharing across responses by marker features corresponding
to trans-regulatory elements, which affect expression of many different genes.
On the other hand, the MIC approach of Section 2 does pick up on sharing of
features across responses, so we propose a method for hypothesis testing by
MIC. Just as the single-response version of hypothesis testing by MDL in
Section 3.2 applied single-response regression to each feature individually,
so we can also imagine applying the multiple-response MIC regression approach
of Section 2.2 to each feature individually in order to do multiple-response
hypothesis testing.
This amounts basically to a for-loop of the MIC Stepwise Regression Algorithm
2.1 over each feature, except that the costs of coding the features are
different from usual because of the setup that Hermes imposed (see Section
3.2). Just as in that section, we can consider a Bonferroni-style
(“Bonferroni-MIC”) and a BH-style (“BH-MIC”) coding scheme, using feature
penalties of $q\lg m$ and $\lg^{*}q+c_{m}+\lg\binom{m}{q}$, respectively.
Below we make each of these approaches more explicit.
#### 4.2.1 Implementation
With Bonferroni-MIC, we can evaluate each feature one at a time for inclusion
based on whether StepwiseRegressionMIC (Algorithm 2.1) would have included
nonzero coefficients for that feature, with the following exception. That
algorithm includes a feature penalty of $\lg$ of the number of features
(columns) in the given $X$ matrix. Here, when we give the algorithm only one
column of $X$ at a time, this term would normally be just $\lg 1$. However,
the coding scheme that Hermes imposed requires us instead to use $\lg m$ bits
to specify a feature, where $m$ is the total number of features in $X$.
Pseudocode appears in Algorithm 4.1.
Algorithm 4.1: Bonferroni-MIC($X,Y,\text{method}$)
    $\widehat{\beta}\leftarrow 0$
    for $i=1$ to $m$    // Loop over the features.
        Essentially, we call
            $\widehat{\beta}_{i}\leftarrow$ StepwiseRegressionMIC($X_{i},Y,\text{method}$),
        except that the usual penalty term corresponding to $\lg$ of the number of
        columns of the input $X_{i}$ matrix (just 1 here) is replaced by $\lg m$,
        where $m$ is the number of columns of $X$.
    return $\widehat{\beta}$
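To make the loop concrete, here is a minimal Python sketch (not the thesis code) of the Bonferroni-MIC wrapper. The helper `stepwise_regression_mic` is a hypothetical stand-in for StepwiseRegressionMIC of Algorithm 2.3, assumed to accept an explicit `feature_cost` argument so that the usual $\lg$(number of columns) term can be overridden with $\lg m$.

```python
import numpy as np

def bonferroni_mic(X, Y, stepwise_regression_mic, method="full"):
    """Bonferroni-MIC: run MIC on each feature separately, charging lg(m) bits.

    `stepwise_regression_mic` is a hypothetical stand-in for Algorithm 2.3 with
    the signature (X_i, Y, method, feature_cost) -> length-h coefficient row.
    """
    n, m = X.shape
    h = Y.shape[1]
    beta_hat = np.zeros((m, h))
    for i in range(m):                       # loop over the features
        X_i = X[:, [i]]                      # a single column of X
        # Charge lg(m) bits to name the feature, not lg(1) = 0.
        beta_hat[i, :] = stepwise_regression_mic(
            X_i, Y, method, feature_cost=np.log2(m))
    return beta_hat
```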
BH-MIC is slightly trickier, because the cost of adding a new feature depends
on how many features are already in the model. Our approach, outlined in
Algorithm 4.2, is to loop through features and evaluate whether they would
include nonzero coefficients when no feature-coding cost is imposed (i.e., no
$(\lg m)$-like term is charged). This fills $\widehat{\beta}$ with a possibly
inflated number of nonzero coefficients that we subsequently trim down if
necessary. To do this, we evaluate the cost of keeping the best $q$ of the
currently nonzero features for each $q$ from 0 to the current number of
nonzero features $q_{\text{orig}}$, choose the optimal value $q^{*}$, and zero
out the rest. Note that zeroing out those rows of $\widehat{\beta}$ is
sufficient; we have no need to go back and recompute the optimal subset of
responses for the remaining features, because the cost to specify the included
features is the same regardless of the response subset.
Algorithm 4.2: BH-MIC($X,Y,\text{method}$)
    $\widehat{\beta}\leftarrow 0$
    for $i=1$ to $m$    // Loop over the features.
        Essentially, we call
            $\widehat{\beta}_{i}\leftarrow$ StepwiseRegressionMIC($X_{i},Y,\text{method}$),
        but without any cost to code features (the usual $\lg m$ term). We also record
        $s_{i}$, the (positive) number of bits saved by using the current subset of
        nonzero coefficients $\widehat{\beta}_{i}$ instead of having all the coefficients
        zero. If $\widehat{\beta}_{i}=0_{1\times h}$, i.e., the coefficients are already
        all zeros, $s_{i}\leftarrow 0$.
    Sort the $s_{i}$ in decreasing order and call them $s_{(i)}$; $s_{(1)}$ corresponds to saving the most bits, etc.
    $q_{\text{orig}}\leftarrow$ number of nonzero features currently in $\widehat{\beta}$ (possibly too many)
    $q^{*}\leftarrow\displaystyle\arg\max_{q\in\left\{1,\ldots,q_{\text{orig}}\right\}}\left\{\sum_{i=1}^{q}s_{(i)}-\left(\lg^{*}q+c_{m}+\lg\binom{m}{q}\right)\right\}$ (or $q^{*}\leftarrow 0$ if none of these is positive)
    Zero out the rows of $\widehat{\beta}$ corresponding to features whose inclusion saves fewer than $s_{(q^{*})}$ bits.
    return $\widehat{\beta}$
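The trimming step can likewise be written down directly from the penalty $\lg^{*}q+c_{m}+\lg\binom{m}{q}$. The sketch below assumes the per-feature bit savings $s_{i}$ have already been collected from the no-feature-cost runs; `log2_star` is a simplified stand-in for the universal-integer code length, and `c_m` is the constant from the subset-coding penalty of Section 2.2.1, supplied by the caller.

```python
import math
import numpy as np

def log2_star(q):
    """Iterated-log code length lg*(q); a simplified stand-in."""
    total, x = 0.0, float(q)
    while x > 1.0:
        x = math.log2(x)
        total += x
    return total

def bh_mic_trim(savings, m, c_m):
    """Pick q* and return indices of the features to keep (Algorithm 4.2).

    `savings` holds the per-feature bit savings s_i (0 for all-zero rows).
    """
    order = np.argsort(savings)[::-1]             # s_(1) >= s_(2) >= ...
    s_sorted = np.asarray(savings, dtype=float)[order]
    q_orig = int(np.sum(s_sorted > 0))            # nonzero features so far
    best_q, best_net = 0, 0.0                     # q* = 0 if nothing pays off
    cum = 0.0
    for q in range(1, q_orig + 1):
        cum += s_sorted[q - 1]
        penalty = log2_star(q) + c_m + math.log2(math.comb(m, q))
        if cum - penalty > best_net:
            best_q, best_net = q, cum - penalty
    return order[:best_q]                         # zero out all other rows
```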
In both Bonferroni-MIC and BH-MIC, the algorithmic complexity is dominated by
the for-loop. In Section 2.3, we saw that a call to StepwiseRegressionMIC
required $\mathcal{O}(m_{i}rh^{2})$ evaluations of description length, where
we’ve used $m_{i}$ to denote the number of columns of $X_{i}$. In particular,
$m_{i}=1$ for each $i$ and $r$ is at most 1, so the complexity within a given
for-loop iteration is only $\mathcal{O}(h^{2})$. Overall, then, these
algorithms evaluate description length $\mathcal{O}(mh^{2})$ times.
#### 4.2.2 Hypothesis-Testing Interpretation
Consider a single feature. In Section 3.2, we noted the correspondence between
MDL regression on that feature and a statistical likelihood-ratio test of
whether its regression coefficient was nonzero. With $h>1$ responses, MIC
hypothesis testing searches over subsets of responses to have nonzero
coefficients with the feature. Each evaluation of description length during
this process can be interpreted as a hypothesis test, in the way explained in
the Appendix. In particular, when we evaluate the description length for some
subset $S\subset\left\\{1,2,\ldots,h\right\\}$ of the responses being nonzero,
we’re implicitly doing a likelihood-ratio test with these null and alternative
hypotheses:
$\begin{split}&H_{0}:\beta=0_{1\times h}\ \ \text{ vs. }\\\
&H_{1}:\beta_{i}\neq 0\text{ for $i\in S$,}\end{split}$ (30)
where $\beta$ is the $1\times h$ matrix of true regression coefficients of the
feature with the $h$ responses. In theory, MDL performs exponentially many
such tests, one for each subset $S$ (of course, in practice we do only
$\mathcal{O}(h^{2})$ of them during our stepwise-style search through
responses to include), though they are obviously highly correlated. We can
interpret the $\lg^{*}k+c_{h}+\lg\binom{h}{k}$ penalty to specify the subset
of $k$ of the responses (as described in Section 2.2.1) as something of a
multiple-testing correction for doing all of these implicit hypothesis tests.
Section 3.3 and the corresponding portions of the Appendix pointed out the
approximate correspondence between particular penalties and particular
corrections: For instance, that $\lg m$ in log-likelihood space is essentially
a Bonferroni correction in p-value space, or that a cost per coefficient of
2.77 approximately corresponds to $\alpha=0.05$. With $h>1$ responses, these
approximations are somewhat rougher. The reason is that in (30),
$k=\left|S\right|$, the difference in dimension of the parameter spaces
between $H_{0}$ and $H_{1}$, is often greater than 1. In this case,
$-2\ln\Lambda\sim\chi^{2}_{(k)}$, and for $k>1$, the standard-normal
approximation using $\Phi$ in Appendix A.3 no longer goes through.
Nevertheless, we can expect the Bonferroni- and BH-style penalties from
Section 3.3 to do roughly the right things for multiple responses; we needn’t
make them match the Bonferroni and BH procedures exactly. (If we did, what
would be the point of using information theory?)
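To see how rough the correspondence gets for $k>1$, one can compute the $\alpha$ implied by a fixed coding-cost difference directly from the $\chi^{2}_{(k)}$ tail. A small sketch (scipy assumed; the printed values are approximate):

```python
import numpy as np
from scipy.stats import chi2

def implied_alpha(delta_c_bits, k):
    """alpha implied by rejecting when -lg(Lambda) exceeds delta_c bits,
    assuming -2 ln(Lambda) ~ chi-square with k degrees of freedom."""
    return chi2.sf(2 * np.log(2) * delta_c_bits, df=k)

# A fixed 2.77-bit cost corresponds to alpha = 0.05 only for k = 1:
print([round(implied_alpha(2.77, k), 3) for k in (1, 2, 3)])
# approximately [0.05, 0.147, 0.279]
```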
### 4.3 Experiments
When we measure the performance of our algorithms in terms of test-set error,
we can use cross-validation to run experiments on real data sets, as we did in
Section 2.4. When, instead, we need to judge how well our algorithm identifies
truly nonzero coefficients, we have to fall back to synthetic data. That’s
because the real world doesn’t tell us the true $\beta$ matrix (indeed,
there may not be any true $\beta$ matrix, because the data are not _really_
generated according to our model (18), though the approximation works well
enough); we only know it if we make it ourselves. Thus, in this section, we
rely entirely on synthetic data. However, we can sometimes base our synthetic
data on a real-world data set, and we do this with the Yeast Growth data in
Section 4.3.2.
For each of the scenarios below, we generated 25 random instances of the data
sets, corresponding to 25 random $\beta_{\text{sim}}$ matrices, taking $n=100$
training data points and $h=20$ responses. Because synthetic data is cheap, we
evaluate test-set error on $10{,}000$ test data points. We calculate error for
each response as $\text{SSE}/\sqrt{n}$, where SSE is the sum of squares error
from the ordinary-least-squares regression of the given response on exactly
the features selected by the algorithm (plus an intercept). We report
precision and recall at the coefficient level (whether each particular nonzero
coefficient was selected). We calculate these results separately for each
response in each data set, so that standard errors represent standard
deviations divided by $\sqrt{20\cdot 25}$.
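A minimal sketch of how these per-response metrics could be computed; the text's $\text{SSE}/\sqrt{n}$ formula is used verbatim, and using the test-set size for $n$ is an assumption here.

```python
import numpy as np

def response_error(X_tr, y_tr, X_te, y_te, selected):
    """Refit OLS on the selected features plus an intercept; report SSE/sqrt(n)."""
    def design(X, n):
        return np.column_stack([np.ones(n)] + [X[:, j] for j in selected])
    A_tr, A_te = design(X_tr, len(y_tr)), design(X_te, len(y_te))
    coef, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
    sse = np.sum((y_te - A_te @ coef) ** 2)
    return sse / np.sqrt(len(y_te))      # test-set size used for n (assumption)

def precision_recall(beta_hat, beta_true):
    """Coefficient-level precision/recall: was each nonzero entry selected?"""
    sel, true = (beta_hat != 0), (beta_true != 0)
    tp = np.sum(sel & true)
    return tp / max(np.sum(sel), 1), tp / max(np.sum(true), 1)
```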
We compare the following feature-selection approaches (the per-response Bonferroni and BH baselines are sketched in code after the list):
* •
“Truth”: Use an oracle to select exactly the true features.
* •
“Bonf,$\alpha$=$\alpha_{0}$”: For each response separately, select those
features whose p-values with the response are $\leq\alpha_{0}/m$. We calculate
the p-value for a given feature and response by regressing the response
against only that feature and an intercept, and evaluating the p-value of the
slope regression coefficient.
* •
“Indep,cpc=$c_{0}$”: Apply the RIC algorithm described in Section 2.2 to each
response separately, using $c_{0}$ as the cost to code a coefficient value.
($c_{0}$ does not include the cost to specify which features enter the model,
which is always $\lg m$. It shouldn’t fall below 0, and the lowest value we
tried was 0.1.)
* •
“Bonf-MIC”: Bonferroni-style MIC, as described in Algorithm 4.1. The cost to
code a coefficient is 2 bits.
* •
“BH,$\alpha$=$\alpha_{0}$”: For each response separately, calculate p-values
the same way as with “Bonf,$\alpha$=$\alpha_{0}$,” but apply the BH procedure
(25) at level $\alpha_{0}$ instead of the Bonferroni correction.
* •
“BHMat,$\alpha$=$\alpha_{0}$”: Apply the BH procedure at level $\alpha_{0}$ to
the entire $m\times h$ matrix of p-values all at once.
* •
“BH-MIC”: BH-style MIC, as described in Algorithm 4.2. The cost to code a
coefficient is 2 bits.
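For illustration, a hedged sketch of the per-response Bonferroni and BH baselines described above (p-values come from simple regressions of one response on one feature plus an intercept; scipy is assumed available):

```python
import numpy as np
from scipy import stats

def slope_pvalues(X, y):
    """Two-sided p-value of the slope for each simple regression of y on X[:, j]."""
    return np.array([stats.linregress(X[:, j], y).pvalue
                     for j in range(X.shape[1])])

def bonferroni_select(X, y, alpha):
    """Per-response Bonferroni: keep features with p <= alpha / m."""
    p = slope_pvalues(X, y)
    return np.where(p <= alpha / X.shape[1])[0]

def bh_select(X, y, alpha):
    """Per-response Benjamini-Hochberg step-up at level alpha."""
    p = slope_pvalues(X, y)
    m = len(p)
    order = np.argsort(p)
    passing = np.where(p[order] <= alpha * np.arange(1, m + 1) / m)[0]
    return order[: passing[-1] + 1] if len(passing) else np.array([], dtype=int)
```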
The MIC methods have no free parameters (if we fix the cost per coefficient at
2 bits), so they give point values for precision and recall. To allow for
comparison, then, we ran the other methods at a variety of $\alpha_{0}$ or
$c_{0}$ levels, presenting results for $\alpha_{0}$ or $c_{0}$ levels at which
precision approximately matched that of MIC; relative performance can then be
assessed based on recall. Because we tried only a discrete grid of
$\alpha_{0}$ or $c_{0}$ values, precisions did not always match exactly, so we
took the highest precision not exceeding the MIC value. This puts MIC at a
slight disadvantage in comparing recall, because its precision is often
slightly higher than that of the other methods.
In our tables below, we make bold those results that represent the best
performance among the three Bonferroni-style methods and also among the three
BH-style methods. Here, “best performance” is based on the observed average
value, but the differences are often not statistically significant.
#### 4.3.1 Simulated Synthetic Data
We created three data sets in the same manner as described in Section 2.4.1,
the only difference being that we took $m=1{,}000$ features instead of
$2{,}000$ (for no particular reason). Table 5 shows the results for the
Partial, Full, and Independent scenarios.
MIC appears to have had worse test error than the other methods except in the
Full scenario, where it performed close to the Truth. As the especially poor
Independent performance suggests, this is probably because MIC
has a harder time picking up on single nonzero coefficients in a given row of
$\beta_{\text{sim}}$, owing to its “higher overhead” cost for including them (see
Table 2 for $k=1$).
Of course, for hypothesis-testing algorithms, test error is not the most
important metric. On recall, MIC does outperform its competitors in the
Partial and especially Full scenarios. And as we would expect, it performs
worse in the Independent case, again due to its high overhead for individual
coefficients on a row of $\widehat{\beta}$.
Partial Scenario
---
Method | Truth | Bonf,$\alpha$=1 | Indep,cpc=0.1 | Bonf-MIC | BH,$\alpha$=0.3 | BHMat,$\alpha$=0.3 | BH-MIC
Train Err | $0.31\pm 0.00$ | $0.52\pm 0.01$ | $0.63\pm 0.01$ | $0.56\pm 0.01$ | $0.51\pm 0.01$ | $0.52\pm 0.01$ | $0.53\pm 0.01$
Test Err | $0.32\pm 0.00$ | ${\bf 0.57\pm 0.01}$ | $0.66\pm 0.01$ | $0.59\pm 0.02$ | ${\bf 0.56\pm 0.01}$ | $0.57\pm 0.01$ | $0.57\pm 0.01$
Coeff Prec | $1.00\pm 0.00$ | $0.74\pm 0.01$ | $0.96\pm 0.00$ | $0.76\pm 0.01$ | $0.70\pm 0.01$ | $0.73\pm 0.01$ | $0.74\pm 0.01$
Coeff Rec | $1.00\pm 0.00$ | $0.60\pm 0.01$ | $0.53\pm 0.01$ | ${\bf 0.71\pm 0.01}$ | $0.61\pm 0.01$ | $0.60\pm 0.01$ | ${\bf 0.73\pm 0.01}$
Full Scenario
Method | Truth | Bonf,$\alpha$=2 | Indep,cpc=0.1 | Bonf-MIC | BH,$\alpha$=0.45 | BHMat,$\alpha$=0.5 | BH-MIC
Train Err | $0.31\pm 0.00$ | $0.49\pm 0.01$ | $0.61\pm 0.01$ | $0.30\pm 0.00$ | $0.49\pm 0.01$ | $0.48\pm 0.01$ | $0.30\pm 0.00$
Test Err | $0.32\pm 0.00$ | $0.54\pm 0.01$ | $0.63\pm 0.01$ | ${\bf 0.33\pm 0.00}$ | $0.54\pm 0.01$ | $0.53\pm 0.01$ | ${\bf 0.33\pm 0.00}$
Coeff Prec | $1.00\pm 0.00$ | $0.62\pm 0.01$ | $0.97\pm 0.00$ | $0.62\pm 0.01$ | $0.59\pm 0.01$ | $0.57\pm 0.01$ | $0.61\pm 0.01$
Coeff Rec | $1.00\pm 0.00$ | $0.61\pm 0.01$ | $0.53\pm 0.01$ | ${\bf 0.99\pm 0.00}$ | $0.61\pm 0.01$ | $0.61\pm 0.01$ | ${\bf 0.99\pm 0.00}$
Independent Scenario
Method | Truth | Bonf,$\alpha$=0.1 | Indep,cpc=0.1 | Bonf-MIC | BH,$\alpha$=0.04 | BHMat,$\alpha$=0.05 | BH-MIC
Train Err | $0.31\pm 0.00$ | $0.59\pm 0.01$ | $0.59\pm 0.01$ | $0.82\pm 0.02$ | $0.59\pm 0.01$ | $0.59\pm 0.01$ | $0.70\pm 0.01$
Test Err | $0.32\pm 0.00$ | ${\bf 0.62\pm 0.01}$ | ${\bf 0.62\pm 0.01}$ | $0.86\pm 0.02$ | ${\bf 0.62\pm 0.01}$ | ${\bf 0.62\pm 0.01}$ | $0.73\pm 0.02$
Coeff Prec | $1.00\pm 0.00$ | $0.96\pm 0.01$ | $0.96\pm 0.01$ | $0.96\pm 0.01$ | $0.96\pm 0.01$ | $0.96\pm 0.01$ | $0.96\pm 0.01$
Coeff Rec | $1.00\pm 0.00$ | $0.53\pm 0.01$ | ${\bf 0.54\pm 0.01}$ | $0.40\pm 0.01$ | ${\bf 0.54\pm 0.01}$ | ${\bf 0.54\pm 0.01}$ | $0.47\pm 0.01$
Table 5: Test-set accuracy, precision, and recall of MIC and other methods on
25 instances of the synthetic data sets generated as described in Section
2.4.1, except with $m=1{,}000$.
#### 4.3.2 Simulated Yeast Data
We created a synthetic Yeast Growth data set based on the one described in
Section 2.4.2. As will be explained below, talking about the “true
coefficients” of a synthetic model requires that the features be relatively
uncorrelated. However, the transcript features of the original data set were
highly correlated, so we included only the 526 marker features in the $X$
matrix. As before, we made use of the 20 growth clusters as the $Y$ matrix. We
created four types of data sets with varying degrees of correlation among the
features, but the basic data-generation process was the same for each, and we
describe it below.
We started by trying to approximate what the true distribution of nonzero
coefficients in $\beta$ might look like by running the RIC algorithm described
in Section 2.2 with a cost of 3 bits per coefficient. The resulting $526\times
20$ $\widehat{\beta}$ matrix contained 33 nonzero coefficients, many of them
in blocks of four or five down certain columns due to correlation among
adjacent marker features.
The next step was to construct a true $\beta$ matrix $\beta_{\text{sim}}$
resembling, but not identical to, $\widehat{\beta}$. (If we took
$\beta_{\text{sim}}$ to be equal to $\widehat{\beta}$, then the RIC algorithm
might have artificially high precision and recall. That’s because we’re
defining the “true features” for the synthetic data set as those that the RIC
algorithm returned on the real data set. Since we designed the synthetic data
set to imitate the real one, we might expect RIC to return similar patterns of
coefficients in both cases. Generating a new matrix to serve as
$\beta_{\text{sim}}$ also allows for randomization over multiple instances of
this synthetic data set. Of course, one downside of this simulation approach
is that correlational structure from the original problem is lost.) We did this
based on summary statistics about $\widehat{\beta}$: What fraction $f$ of the
features had any nonzero coefficients? ($f=0.05$ for the actual
$\widehat{\beta}$.) Of those that did, what was the average number $a$ of
nonzero coefficients in each row? ($a=1.3$ here.) And what were the sample
mean $\mu_{\widehat{\beta}}$ and standard deviation $\sigma_{\widehat{\beta}}$
of the nonzero coefficient values? (Actual values were $-0.12$ and $0.20$,
respectively.) We initialized an empty $526\times 20$ matrix
$\beta_{\text{sim}}$ and, to fill it in, walked down the rows, each time
flipping a coin with probability $f$ to decide whether to give that row
nonzero coefficients. We drew the number of nonzero coefficients from a
Poisson distribution with rate $a$ (capped at $20$, the total number of
responses) and distributed those coefficients randomly among the $20$ columns.
The values of each of those coefficients were drawn independently from a
normal distribution with mean $\mu_{\widehat{\beta}}$ and standard deviation
$\sigma_{\widehat{\beta}}$. When we had finished walking down the rows, we
checked to make sure that the total number of nonzero coefficients in
$\beta_{\text{sim}}$ was within 25% of that in the original $\widehat{\beta}$;
if not, we started the process over.
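A sketch of this generation step, written as a small Python routine; the quoted summary statistics ($f=0.05$, $a=1.3$, $\mu_{\widehat{\beta}}=-0.12$, $\sigma_{\widehat{\beta}}=0.20$, 33 nonzero entries) are used as defaults, and forcing at least one nonzero coefficient in a selected row is an assumption the text does not spell out.

```python
import numpy as np

def make_beta_sim(m=526, h=20, f=0.05, a=1.3, mu=-0.12, sigma=0.20,
                  target_nonzeros=33, rng=None):
    """Draw a beta_sim matrix matching the summary statistics of beta_hat."""
    rng = np.random.default_rng() if rng is None else rng
    while True:
        beta = np.zeros((m, h))
        for i in range(m):
            if rng.random() < f:                 # does this row get nonzeros?
                # At least one nonzero per selected row is our assumption;
                # the Poisson(a) count is capped at h as in the text.
                k = min(max(rng.poisson(a), 1), h)
                cols = rng.choice(h, size=k, replace=False)
                beta[i, cols] = rng.normal(mu, sigma, size=k)
        # Accept only if the total count is within 25% of the original's.
        if abs(np.count_nonzero(beta) - target_nonzeros) <= 0.25 * target_nonzeros:
            return beta
```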
We then constructed a simulated $100\times 526$ $X$ matrix $X_{\text{sim}}$ by
drawing each row from a multivariate-normal distribution with some covariance
$\widehat{\Sigma}^{X}$ based on the covariance matrix of the real $X$ matrix.
The first three variants of our synthetic data set consisted of taking
$\widehat{\Sigma}^{X}$ to be one of $\widehat{\Sigma}_{D}^{X}$ (diagonal),
$\widehat{\Sigma}_{\lambda}^{X}$ with $\lambda=0.5$ (half diagonal, half
full), and $\widehat{\Sigma}_{F}^{X}$ (full), with the subscripts as in
Section 2.2.2. Figure 2 plots, respectively, $\widehat{\Sigma}_{D}^{X}$,
$\widehat{\Sigma}_{0.5}^{X}$, and $\widehat{\Sigma}_{F}^{X}$ as correlation
matrices (i.e., each entry is scaled to fall in the range $[0,1]$), with red
indicating “high correlation” and blue, “low correlation.”
All three of the above variants involved normally distributed feature values,
even though $X$ itself consisted only of 0’s and 1’s (minor and major allele,
respectively). Thus, as the fourth variant, we took $X_{\text{sim}}$ to be the
original binary-valued $X$ matrix.
Figure 2: Correlation matrices of the 526 marker features in the Yeast Growth
data set. The figures correspond to, from left to right, diagonal shrinkage,
half-diagonal shrinkage, and no shrinkage. Red = correlation near 1, yellow =
correlation near 0.8, light blue = correlation near 0.2, dark blue =
correlation near 0. Due to the relatively high correlation among features,
measures of precision and recall are slightly misleading, because which
features are “truly correlated” with responses becomes unclear, as the text
explains.
Finally, we estimated $\widehat{\Sigma}_{D}$ in the manner of Section 2.2.2
with only an intercept feature in the model. With these matrices in place, we
computed
$\begin{split}Y_{\text{sim}}=X_{\text{sim}}\beta_{\text{sim}}+\epsilon_{\text{sim}},\end{split}$
with each row of $\epsilon_{\text{sim}}$ being independently distributed as
$\mathcal{N}_{h}(0,\widehat{\Sigma}_{D})$.
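Putting the pieces together, one instance of the synthetic data could be generated roughly as follows (the shrunk covariance estimates $\widehat{\Sigma}^{X}$ and $\widehat{\Sigma}_{D}$ are assumed to be supplied, estimated as in Section 2.2.2):

```python
import numpy as np

def make_instance(beta_sim, sigma_x, sigma_d, n=100, rng=None):
    """Simulate X_sim with rows ~ N_m(0, sigma_x) and Y_sim = X beta + eps,
    where the rows of eps are independent N_h(0, sigma_d) draws."""
    rng = np.random.default_rng() if rng is None else rng
    m, h = beta_sim.shape
    X_sim = rng.multivariate_normal(np.zeros(m), sigma_x, size=n)
    eps = rng.multivariate_normal(np.zeros(h), sigma_d, size=n)
    return X_sim, X_sim @ beta_sim + eps
```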
Table 6 shows the results for each variant of the data set. In general, MIC
performs essentially indistinguishably from the other methods. This is
probably because $a$, the average number of responses per feature, is only 1.3
for this particular data set, implying that there isn’t substantial sharing of
features across responses. There does appear to be enough sharing, however, that
MIC does not fall as far behind the other methods as it did on the
Independent synthetic data set in Table 5.
The precision of each of the methods degrades significantly as the correlation
among the features increases. In fact, these numbers are somewhat misleading,
because the notion of a “true coefficient” becomes fuzzy when features are
correlated. We generated the data by imposing a particular
$\beta_{\text{sim}}$ matrix, with nonzero coefficients in certain locations.
For instance, for response 3, we might have given feature 186 a coefficient of
0.2, causing a correlation between that feature and that response. However,
because feature 186 is correlated with, say, features 185 and 187, features
185 and 187 will also be correlated with response 3, so we end up selecting
them as well. This isn’t necessarily wrong to do, because the data really do
show a correlation; the arbitrariness of our choice of zeros and nonzeros in
$\beta_{\text{sim}}$ is at fault for the large apparent rate of “false
positives.”
Each row of $X$ distributed $\mathcal{N}_{m}(0,\widehat{\Sigma}_{D}^{X})$—no
correlation among features.
---
Method | Truth | Bonf,$\alpha$=0.1 | Indep,cpc=0.1 | Bonf-MIC | BH,$\alpha$=0.06 | BHMat,$\alpha$=0.085 | BH-MIC
Train Err | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$
Test Err | $0.02\pm 0.00$ | $0.03\pm 0.00$ | ${\bf 0.02\pm 0.00}$ | $0.03\pm 0.00$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$
Coeff Prec | $1.00\pm 0.00$ | $0.92\pm 0.01$ | $0.90\pm 0.01$ | $0.92\pm 0.01$ | $0.91\pm 0.01$ | $0.91\pm 0.01$ | $0.92\pm 0.01$
Coeff Rec | $1.00\pm 0.00$ | $0.80\pm 0.01$ | $0.81\pm 0.01$ | $0.79\pm 0.01$ | ${\bf 0.81\pm 0.01}$ | ${\bf 0.81\pm 0.01}$ | $0.80\pm 0.01$
Each row of $X$ distributed
$\mathcal{N}_{m}(0,\widehat{\Sigma}_{0.5}^{X})$—some correlation among
features.
Method | Truth | Bonf,$\alpha$=0.055 | Indep,cpc=1 | Bonf-MIC | BH,$\alpha$=0.03 | BHMat,$\alpha$=0.045 | BH-MIC
Train Err | $0.02\pm 0.00$ | $0.03\pm 0.00$ | $0.03\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$
Test Err | $0.02\pm 0.00$ | ${\bf 0.03\pm 0.00}$ | ${\bf 0.03\pm 0.00}$ | ${\bf 0.03\pm 0.00}$ | $0.03\pm 0.00$ | $0.03\pm 0.00$ | ${\bf 0.02\pm 0.00}$
Coeff Prec | $1.00\pm 0.00$ | $0.78\pm 0.01$ | $0.76\pm 0.01$ | $0.78\pm 0.01$ | $0.75\pm 0.01$ | $0.75\pm 0.01$ | $0.75\pm 0.01$
Coeff Rec | $1.00\pm 0.00$ | $0.79\pm 0.01$ | $0.79\pm 0.01$ | ${\bf 0.80\pm 0.01}$ | $0.79\pm 0.01$ | $0.79\pm 0.01$ | ${\bf 0.80\pm 0.01}$
Each row of $X$ distributed $\mathcal{N}_{m}(0,\widehat{\Sigma}_{F}^{X})$—lots
of correlation among features.
Method | Truth | Bonf,$\alpha$=0.19 | Indep,cpc=0.1 | Bonf-MIC | BH,$\alpha$=0.07 | BHMat,$\alpha$=0.06 | BH-MIC
Train Err | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$
Test Err | $0.02\pm 0.00$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$
Coeff Prec | $1.00\pm 0.00$ | $0.19\pm 0.01$ | $0.20\pm 0.01$ | $0.19\pm 0.01$ | $0.15\pm 0.01$ | $0.15\pm 0.00$ | $0.15\pm 0.01$
Coeff Rec | $1.00\pm 0.00$ | ${\bf 0.84\pm 0.01}$ | ${\bf 0.84\pm 0.01}$ | ${\bf 0.84\pm 0.01}$ | ${\bf 0.86\pm 0.01}$ | $0.85\pm 0.01$ | $0.85\pm 0.01$
Real, original $X$ matrix.
Method | Truth | Bonf,$\alpha$=0.17 | Indep,cpc=0.1 | Bonf-MIC | BH,$\alpha$=0.06 | BHMat,$\alpha$=0.055 | BH-MIC
Train Err | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$ | $0.02\pm 0.00$
Test Err | $0.02\pm 0.00$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$ | ${\bf 0.02\pm 0.00}$
Coeff Prec | $1.00\pm 0.00$ | $0.15\pm 0.00$ | $0.15\pm 0.00$ | $0.15\pm 0.00$ | $0.13\pm 0.00$ | $0.13\pm 0.00$ | $0.13\pm 0.00$
Coeff Rec | $1.00\pm 0.00$ | $0.82\pm 0.01$ | $0.81\pm 0.01$ | ${\bf 0.83\pm 0.01}$ | ${\bf 0.84\pm 0.01}$ | ${\bf 0.84\pm 0.01}$ | ${\bf 0.84\pm 0.01}$
Table 6: Test-set accuracy, precision, and recall of MIC and other methods on
25 instances of the synthetic data sets generated as described in Section
4.3.2, with $m=526$ features.
## 5 Conclusion
The MDL principle provides a natural framework in which to design penalized-
regression criteria for feature selection. In the case of a single response,
one example of this is RIC, which can be characterized by an information-
theoretic penalty of $2\lg m$ bits per feature when selecting from among $m$
total features. We proposed an extension of this criterion, called MIC, for
the case of multiple responses. By efficiently coding the locations of
feature-response pairs for features associated with multiple responses, MIC
allows for sharing of information across responses during feature selection.
The method is competitive with, and sometimes outperforms, existing multitask
learning algorithms in terms of prediction accuracy, while achieving generally
sparser, more interpretable models.
MDL can also be viewed in the domain of hypothesis testing as a way of
correcting against alpha inflation in multiple tests. We explained how MDL
regression applied to each feature separately can be interpreted along the
lines of standard Bonferroni and Benjamini-Hochberg feature-selection
procedures. Again using the MIC approach, we extended this to hypothesis
testing with multiple responses, allowing for greater power in selecting true
coefficients when features are significantly shared across responses.
## Appendix A MDL and Hypothesis Testing
Information theory describes an isomorphism between probabilities and code
lengths in which the idealized code length of some symbol $x$ is $-\lg P(x)$
[Gru05, p. 28]. There is a similar, though only approximate, relationship
between MDL and the process of statistical hypothesis testing: namely, that
MDL will tend to introduce an extra parameter into a model in roughly the same
cases that a hypothesis test would reject the null hypothesis that the
parameter is zero. This is perhaps unsurprising, because MDL is a tool for
model selection, and hypothesis testing is about rejecting models that fit the
data poorly.
### A.1 Generalized Likelihood Ratio Tests
Given two models $\mathcal{M}_{0}$ and $\mathcal{M}_{1}$ and data
$\mathcal{D}$, statisticians define $\Lambda$ to be the ratio of their
likelihoods:
$\begin{split}\Lambda:=\frac{P(\mathcal{D}\,\lvert\,\mathcal{M}_{0})}{P(\mathcal{D}\,\lvert\,\mathcal{M}_{1})}.\end{split}$
(31)
In many cases, these models correspond to the same probability density
function with different parameter settings. For instance, in regression of a
single response $y$ on a single feature $x$ (with no intercept for
simplicity):
$\begin{split}y=\beta x+\epsilon,\ \
\epsilon\sim\mathcal{N}(0,\sigma^{2}),\end{split}$ (32)
$\mathcal{M}_{0}$ might specify that, given $x$, $y$ is distributed
$\mathcal{N}(0,\sigma^{2})$, i.e., that $\beta=0$, while $\mathcal{M}_{1}$
might say that given $x$, $y$ is distributed
$\mathcal{N}(\widehat{\beta}x,\sigma^{2})$, where $\widehat{\beta}$ is the
maximum-likelihood estimate for $\beta$ derived from the training data. Note
that $\mathcal{M}_{1}$ here depends on the observed data $\mathcal{D}$. Some
authors prefer to speak of fixed probability distributions
$\mathcal{M}_{\theta}$ with parameters $\theta$ ranging over different
parameter spaces $\Theta_{0}$ and $\Theta_{1}$ [LM05, p. 463], in which case
$\begin{split}\Lambda=\frac{\displaystyle\sup_{\theta\in\Theta_{0}}P(\mathcal{D}\,\lvert\,\mathcal{M}_{\theta})}{\displaystyle\sup_{\theta\in\Theta_{1}}P(\mathcal{D}\,\lvert\,\mathcal{M}_{\theta})}.\end{split}$
In the regression example, $\Theta_{0}=\left\\{0\right\\}$ while
$\Theta_{1}=\mathbb{R}$, the entire real line.
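For the simple regression (32) with $\sigma^{2}$ treated as known, the log likelihood ratio reduces to a difference of residual sums of squares, so $-\lg\Lambda$ can be computed in a couple of lines (a sketch, not part of the thesis code):

```python
import numpy as np

def neg_lg_lambda(x, y, sigma2=1.0):
    """-lg(Lambda) for H0: beta = 0 vs. H1: beta = beta_hat in model (32).

    With Gaussian errors and known sigma^2, -2 ln(Lambda) = (SSE0 - SSE1) / sigma^2.
    """
    beta_hat = np.dot(x, y) / np.dot(x, x)      # ML (least-squares) estimate
    sse0 = np.sum(y ** 2)                       # residuals under beta = 0
    sse1 = np.sum((y - beta_hat * x) ** 2)      # residuals under beta_hat
    return (sse0 - sse1) / sigma2 / (2 * np.log(2))   # convert -2 ln to -lg
```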
In fact, it will be convenient to consider $-\lg\Lambda$, which is large when
$\Lambda$ is small. In some sense, $-\lg\Lambda$ measures the “badness of fit”
of $\mathcal{M}_{0}$ relative to $\mathcal{M}_{1}$. Over different data sets
$\mathcal{D}$, $-\lg\Lambda$ will take on different values, according to some
probability distribution. If $\mathcal{M}_{0}$ is true, $-\lg\Lambda$ is
unlikely to be very large, so we can define a threshold $T_{\alpha}$ such that
$\begin{split}P(-\lg\Lambda>T_{\alpha}\,\lvert\,\mathcal{M}_{0})=\alpha\end{split}$
(33)
and reject $\mathcal{M}_{0}$ whenever $-\lg\Lambda$ exceeds this threshold.
This is known as a generalized likelihood ratio test (GLRT), of which many
standard statistical hypothesis tests are examples, including the $t$-test on
the significance of a regression coefficient [LM05, p. 685].
### A.2 Model Selection
Consider the approach that MDL would take. It “rejects” $\mathcal{M}_{0}$ in
favor of $\mathcal{M}_{1}$ just in the case that the description length
associated with $\mathcal{M}_{1}$ is shorter:
$\begin{split}\ell(\mathcal{M}_{1})+\ell(\mathcal{D}\,\lvert\,\mathcal{M}_{1})<\ell(\mathcal{M}_{0})+\ell(\mathcal{D}\,\lvert\,\mathcal{M}_{0}),\end{split}$
(34)
where $\ell(\cdot)$ stands for description length. By the discussion in
section 1.2.1, $\ell(\mathcal{D}\,\lvert\,\mathcal{M}_{1})=-\lg
P(\mathcal{D}\,\lvert\,\mathcal{M}_{1})$, so that (34) becomes
$\begin{split}\ell(\mathcal{M}_{1})-\ell(\mathcal{M}_{0})<\lg
P(\mathcal{D}\,\lvert\,\mathcal{M}_{1})-\lg
P(\mathcal{D}\,\lvert\,\mathcal{M}_{0}),\end{split}$
or
$\begin{split}-\lg\Lambda>\ell(\mathcal{M}_{1})-\ell(\mathcal{M}_{0}).\end{split}$
(35)
If $\mathcal{M}_{1}$ is more complicated than $\mathcal{M}_{0}$, then
$\ell(\mathcal{M}_{1})-\ell(\mathcal{M}_{0})$ will be positive, and there will
be some $\alpha$ at which
$\ell(\mathcal{M}_{1})-\ell(\mathcal{M}_{0})=T_{\alpha}$ as defined in (33).
Thus, selecting between $\mathcal{M}_{0}$ and $\mathcal{M}_{1}$ is equivalent
to doing a likelihood ratio test at the implicit significance level $\alpha$
determined by $\ell(\mathcal{M}_{1})-\ell(\mathcal{M}_{0})$.
### A.3 Example: Single Regression Coefficient
While MDL introduces the quantity $-\lg\Lambda$, it is more common in
statistics to deal instead with $-2\ln\Lambda=-(2\ln 2)\lg\Lambda$ because of
the following result: If $\mathcal{M}_{0}$ and $\mathcal{M}_{1}$ both belong
to the same type of probability distribution satisfying certain smoothness
conditions, and if $\mathcal{M}_{0}$ has parameter space $\Theta_{0}$ with
dimensionality $\text{dim}(\Theta_{0})$ while $\mathcal{M}_{1}$ has parameter
space $\Theta_{1}$ with dimensionality $\dim(\Theta_{1})>\dim(\Theta_{0})$,
then under $\mathcal{M}_{1}$, $-(2\ln 2)\lg\Lambda$ is asymptotically chi-
square with degrees of freedom equal to $\dim(\Theta_{1})-\dim(\Theta_{0})$
[Ric95, sec. 9.5]. In particular, if the probability distribution is normal
(as in our regression model), then the chi-square distribution will be not
just asymptotic but exact.
In (32), where we have a single regression coefficient,
$\dim(\Theta_{1})-\dim(\Theta_{0})=1$, so that under $\mathcal{M}_{1}$,
$-(2\ln 2)\lg\Lambda$ has a chi-square distribution with 1 degree of freedom,
which is the same as the distribution of the square of a standard normal
random variable $Z\sim\mathcal{N}(0,1)$. Thus, we can rewrite (33) as
$\begin{split}P\left(-(2\ln 2)\lg\Lambda>(2\ln
2)T_{\alpha}\,\lvert\,\mathcal{M}_{0}\right)=\alpha\ &\Longleftrightarrow\
P(Z^{2}>(2\ln 2)T_{\alpha})=\alpha\\\ \ &\Longleftrightarrow\ P(Z<-\sqrt{(2\ln
2)T_{\alpha}})=\frac{\alpha}{2}\\\ &\Longleftrightarrow\ \Phi(-\sqrt{(2\ln
2)T_{\alpha}})=\frac{\alpha}{2},\end{split}$ (36)
where $\Phi$ is the cumulative distribution function of the standard-normal
distribution. Using the approximation
$\Phi(-x)\approx\frac{1}{4}e^{-\frac{x^{2}}{2}}$ for relatively large
$x>0$ (justified below), (36) becomes
$\begin{split}\frac{1}{4}\exp\left(-\frac{(2\ln
2)T_{\alpha}}{2}\right)=\frac{\alpha}{2}\ \Longleftrightarrow\
T_{\alpha}=-\frac{\ln(2\alpha)}{\ln 2}=-\lg\alpha-1.\end{split}$ (38)
The approximation to $\Phi$ follows from approximating
$\begin{split}\Phi(-x)=\int_{-\infty}^{-x}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}\,\mathrm{d}t\end{split}$
by the value of the integrand at $t=-x$, assuming
$\frac{1}{\sqrt{2\pi}}\approx\frac{1}{4}$. Another way to see the
approximation is as follows. [Pól45, pp. 63-67] proved that for $x\geq 0$,
$\begin{split}0.5-\Phi(-x)\approx\frac{1}{2}\sqrt{1-\exp\left(-\frac{2x^{2}}{\pi}\right)}.\end{split}$ (37)
If $x$ is large enough that $\Phi(-x)^{2}$ is negligible compared with
$\Phi(-x)$, then we can multiply by 2 and square both sides of (37) to give
$\begin{split}1-4\Phi(-x)\approx 1-\exp\left(-\frac{2x^{2}}{\pi}\right)\
\Longleftrightarrow\
\Phi(-x)\approx\frac{1}{4}\exp\left(-\frac{2x^{2}}{\pi}\right).\end{split}$
Assuming $\pi\approx 4$ gives the result. Note that
$\Phi(-x)\approx\frac{1}{x\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}$ tends to give a
tighter approximation, especially for $x$ bigger than $\sim 2$, but it is less
mathematically convenient here. I credit [Sti, p. 17] with inspiring the
approach of using an approximation to $\Phi$ to compare hypothesis testing and
coding costs.
### A.4 Bonferroni Criterion
The Bonferroni criterion in p-value space says “reject $H_{0}$ when the
p-value is less than $\frac{\alpha}{m}$.” By (33), we can express this in log-
likelihood space by saying “reject $H_{0}$ when
$-\lg\Lambda>T_{\frac{\alpha}{m}}$,” where, by (38),
$\begin{split}T_{\frac{\alpha}{m}}=\lg m-\lg\alpha-1.\end{split}$ (39)
Note the similarity to the decision criterion (28).
### A.5 Closeness of the Approximation
It’s worth reflecting on why these correspondences are only approximate.
Indeed, if code length is just negative log probability, and a p-value $p$ is
a probability, then, say, the Bonferroni rule of rejecting when
$p<\frac{\alpha}{m}$ is equivalent to rejecting when $-\lg p>\lg m-\lg\alpha$,
which looks basically the same as (39). However, $\Lambda$, which is used by
MDL, is not exactly equal to $p$; the former is a comparison of a probability
density function at two points, while the latter is an area under the
probability density function in extreme regions. Still, the two are often
quite close in practice.
How close? Suppose we observe some value $\Lambda_{*}$ for $\Lambda$. The
associated p-value is
$\begin{split}p=P(\Lambda<\Lambda_{*}\,\lvert\,\mathcal{M}_{0})=P(-2\ln\Lambda>-2\ln\Lambda_{*}\,\lvert\,\mathcal{M}_{0}).\end{split}$
If $-2\ln\Lambda\sim\chi^{2}_{(k)}$ under $\mathcal{M}_{0}$,
$\begin{split}p=1-F_{\chi^{2}_{(k)}}\left(-2\ln\Lambda_{*}\right),\end{split}$
(40)
where $F$ stands for the cumulative distribution function. Figure 3 compares
$\Lambda_{*}$ to the associated p-value for the case of $k=1$ degree of
freedom; a straight, 45-degree line would be a perfect match, but the
approximation is generally correct to within a factor of 2 or 3.
We can use this curve to compute the $\alpha$ implied by a more complicated
model $\mathcal{M}_{1}$ in (35). Letting $\Delta
c:=\ell(\mathcal{M}_{1})-\ell(\mathcal{M}_{0})$, (35) and (40) give
$\begin{split}\alpha=1-F_{\chi^{2}_{(k)}}\left((2\ln 2)\Delta
c\right).\end{split}$
Table 7 shows examples for $k=1$ degree of freedom. $\alpha=0.05$ is achieved
at $\Delta c=2.77$ bits. ([BL05, p. 582] present a very similar
discussion, only with log base $e$ and with their $\lambda$ equal to
$\frac{\Delta c}{2}$; the numerical values given are equivalent.)
$\Delta c$ (bits) | 1 | 2 | 3 | 4
---|---|---|---|---
Implied $\alpha$ | 0.24 | 0.1 | 0.04 | 0.02
Table 7: Example implied $\alpha$ values against the increase in coding cost
$\Delta c$ of the more complicated model, for $k=1$ degree of freedom.
Figure 3: The actual p-value $P(\Lambda<\Lambda_{*}\,\lvert\,\mathcal{M}_{0})$ vs.
$\Lambda_{*}$ for $k=1$ degree of freedom.
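Table 7 can be reproduced (up to rounding) with a few lines of code, which also confirms the 2.77-bit point (a sketch; scipy assumed):

```python
import numpy as np
from scipy.stats import chi2

ln2 = np.log(2)
for dc in (1, 2, 2.77, 3, 4):
    alpha = chi2.sf(2 * ln2 * dc, df=1)   # alpha implied by a Delta-c-bit penalty
    print(f"Delta c = {dc:>4} bits  ->  implied alpha = {alpha:.3f}")
# Matches Table 7 up to rounding, with alpha = 0.05 at Delta c = 2.77 bits.
```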
### A.6 BH-Style Penalty
Suppose we regress $y$ on each of $m$ features separately. We would obtain $m$
p-values, to which we could apply the BH step-up procedure (25).
Alternatively, we could evaluate, for each feature $j$, a negative log-
likelihood ratio:
$\begin{split}-\lg\Lambda_{j}=-\lg P(Y\,\lvert\,\mathcal{M}_{0})-\left(-\lg
P(Y\,\lvert\,\mathcal{M}_{1j})\right),\end{split}$
where $\mathcal{M}_{1j}$ denotes the alternative hypothesis in which feature
$j$ has a nonzero, maximum-likelihood coefficient.
In the same way that we obtained (39), we can derive a BH-type rejection level
of $\frac{j\alpha}{m}$:
$\begin{split}T_{\frac{j\alpha}{m}}=\lg m-\lg j-\lg\alpha-1.\end{split}$
We can now restate the BH procedure as follows: Put the $-\lg\Lambda_{j}$ in
decreasing order, and let $-\lg\Lambda_{(j)}$ be the $j^{\text{th}}$ biggest.
Reject $H_{(1)},\ldots,H_{(q)}$ such that
$\begin{split}q=\max\left\\{j:-\lg\Lambda_{(j)}\geq\lg m-\lg
j-\lg\alpha-1\right\\}.\end{split}$ (41)
We could rephrase (41) as choosing the $q$ that maximizes
$\begin{split}\sum_{j=1}^{q}\left(-\lg\Lambda_{(j)}-\lg m+\lg
j+\lg\alpha+1\right)\end{split}$
or that minimizes
$\begin{split}-\sum_{j=1}^{q}\left(-\lg\Lambda_{(j)}-\lg m+\lg
j+\lg\alpha+1\right)=\lg\frac{m^{q}}{q!}-q(\lg\alpha+1)+\sum_{j=1}^{q}\left(\lg
P(Y\,\lvert\,\mathcal{M}_{0})-\lg
P(Y\,\lvert\,\mathcal{M}_{1(j)})\right).\end{split}$ (42)
Since $\lg P(Y\,\lvert\,\mathcal{M}_{0})$ is a constant with respect to $q$,
we can subtract it $m$ times from (42) to give
$\begin{split}\lg\frac{m^{q}}{q!}-q(\lg\alpha+1)+\sum_{j=1}^{q}-\lg
P(Y\,\lvert\,\mathcal{M}_{1(j)})+\sum_{j=q+1}^{m}-\lg
P(Y\,\lvert\,\mathcal{M}_{0})=\lg\frac{m^{q}}{q!}-q(\lg\alpha+1)+\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q})\end{split}$
(43)
with $\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q})$ as in (26), in which
$\widehat{\beta}_{q}$ is the model containing the $q$ features with highest
negative log-likelihood ratio values. Now, if $m\gg q$, then
$m^{q}\approx\frac{m!}{(m-q)!}$, and (43) becomes
$\begin{split}\lg\frac{m!}{q!(m-q)!}-q(\lg\alpha+1)+\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q})=\lg\binom{m}{q}-q(\lg\alpha+1)+\mathcal{D}^{m}(Y\,\lvert\,\widehat{\beta}_{q}),\end{split}$
(44)
in very close analogy to (29). (The point that MDL can reproduce BH-style
step-up procedures has been noted elsewhere, e.g., [Sti, p. 19], in slightly
different words.)
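As a small illustration, the step-up rule (41) in log-likelihood space can be implemented directly; the sketch below returns the number $q$ of hypotheses rejected (a hedged restatement, not code from the thesis):

```python
import numpy as np

def bh_stepup_loglik(neg_lg_lambdas, m, alpha):
    """Step-up rule (41): reject the q hypotheses with the largest -lg(Lambda_j),
    where q is the largest j with -lg(Lambda_(j)) >= lg m - lg j - lg alpha - 1."""
    vals = np.sort(np.asarray(neg_lg_lambdas, dtype=float))[::-1]   # decreasing
    j = np.arange(1, len(vals) + 1)
    thresholds = np.log2(m) - np.log2(j) - np.log2(alpha) - 1
    passing = np.where(vals >= thresholds)[0]
    return int(passing[-1]) + 1 if len(passing) else 0              # q rejected
```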
## Acknowledgements
The MIC approach to regression for prediction (i.e., the content of Section 2)
was originated by Jing Zhou, Dean Foster, and Lyle Ungar, who drafted a paper
outlining the theory and some preliminary experiments. Lyle suggested that I
continue this work as a project for summer 2008, focusing on hypothesis
testing (Section 4). With continued guidance from Lyle and Dean, I kept
working on the research into the fall and spring as part of this thesis. In
January 2009, Paramveer Dhillon, Lyle, and I submitted a paper to the
International Conference on Machine Learning (ICML 2009) highlighting the
experimental results in Section 2. The results in Section 4 are original to
this thesis.
In addition to the names above, I thank Robert Stine and Phil Everson for
advice on statistical theory; Qiao Liu for conversations on MIC with Lyle;
Dana Pe’er and her laboratory for providing the Yeast data set; Adam Ertel and
Ted Sandler for making accessible the Breast Cancer data set; Jeff Knerr for
computing assistance; Tia Newhall, Rich Wicentowski, and Doug Turnbull for
guidance on writing this thesis; and Santosh S. Venkatesh, my Honors thesis
examiner, for several helpful comments and corrections.
## References
* [AEP08] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Mach. Learn., 73(3):243–272, 2008.
* [Aka73] H. Akaike. Information theory and an extension of the maximum likelihood principle. In B. N. Petrov and F. Csàki, editors, 2nd International Symposium on Information Theory, pages 261–281, Budapest, 1973. Akad. Kiàdo.
* [AM05] B. Abraham and G. Merola. Dimensionality reduction approach to multivariate prediction. Computational Statistics and Data Analysis, 48(1):5–16, 2005.
* [AZ05] R.K. Ando and T. Zhang. A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. The Journal of Machine Learning Research, 6:1817–1853, 2005.
* [BF97] L. Breiman and J.H. Friedman. Predicting multivariate responses in multiple linear regression. Journal of the Royal Statistical Society. Series B (Methodological), pages 3–54, 1997.
* [BH95] Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57:289–300, 1995.
* [Bis06] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, August 2006.
* [BL97] A.L. Blum and P. Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245–271, 1997.
* [BL99] Y. Benjamini and W. Liu. A step-down multiple hypotheses testing procedure that controls the false discovery rate under independence. Journal of Statistical Planning and Inference, 82(1-2):163–170, 1999.
* [BL05] Yoav Benjamini and Moshe Leshno. Data mining and knowledge discovery handbook, chapter 25. Springer, 2005.
* [BVF98] PJ Brown, M. Vannucci, and T. Fearn. Multivariate Bayesian variable selection and prediction. Journal of the Royal Statistical Society. Series B, Statistical Methodology, pages 627–641, 1998.
* [BVF02] PJ Brown, M. Vannucci, and T. Fearn. Bayes model averaging with selection of regressors. Journal of the Royal Statistical Society. Series B, Statistical Methodology, pages 519–536, 2002.
* [BY98] A.R. Barron and J.B. Yu. The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44(6):2743–2760, 1998.
* [BY01] Y. Benjamini and D. Yekutieli. The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4):1165–1188, 2001.
* [Car97] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
* [CT06] T.M. Cover and J.A. Thomas. Elements of Information Theory. Wiley-Interscience New York, 2006.
* [DJ94] D. L. Donoho and I. M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81:425–455, 1994.
* [Eli75] P. Elias. Universal codeword sets and representations of the integers. Information Theory, IEEE Transactions on, 21(2):194–203, 1975.
* [FG94] D. P. Foster and E. I. George. The risk inflation criterion for multiple regression. Annals of Statistics, 22:1947–1975, 1994.
* [Fri08] J. Friedman. Fast Sparse Regression and Classification. 2008.
* [FS99] D.P. Foster and R.A. Stine. Local asymptotic coding. IEEE Transactions on Information Theory, 45:1289–1293, 1999.
* [FS04] D.P. Foster and R.A. Stine. Variable Selection in Data Mining: Building a Predictive Model for Bankruptcy. Journal of the American Statistical Association, 2004.
* [GF00] E.I. George and D.P. Foster. Calibration and empirical Bayes variable selection. Biometrika, 87(4):731–747, 2000.
* [Gru05] P. Grunwald. A Tutorial Introduction to the Minimum Description Length Principle, chapter 1-2. MIT Press, April 2005.
* [Hoc88] Y. Hochberg. A sharper Bonferroni procedure for multiple tests of significance, 1988.
* [Hol79] S. Holm. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, pages 65–70, 1979.
* [Hom88] G. Hommel. A stagewise rejective multiple test procedure based on a modified Bonferroni test, 1988.
* [HR78] D. Harrison and D.L. Rubinfeld. Hedonic housing prices and the demand for clean air. Journal of Environmental Economics and Management, 5(1):81–102, 1978.
* [HY99] M. Hansen and B. Yu. Bridging AIC and BIC: An MDL model selection criterion. In Proceedings of the IT Workshop on Detection, Estimation, Classification and Imaging, page 63, 1999.
* [KCY+06] C M Kendziorski, M Chen, M Yuan, H Lan, and A D Attie. Statistical methods for expression quantitative trait loci (eqtl) mapping. Biometrics, 62:19–27, March 2006. PMID: 16542225.
* [KNLG03] CM Kendziorski, MA Newton, H. Lan, and MN Gould. On parametric empirical Bayes methods for comparing multiple groups using replicated gene expression profiles. Statistics in medicine, 22(24):3899, 2003.
* [KR] Wei-Chun Kao and Alexander Rakhlin. Transfer learning toolkit. Downloaded Jan. 2009. http://multitask.cs.berkeley.edu.
* [KW06] C. Kendziorski and P. Wang. A review of statistical methods for expression quantitative trait loci mapping. Mammalian Genome, 17(6):509–517, 2006.
* [KWR+01] J. Khan, J.S. Wei, M. Ringnér, L.H. Saal, M. Ladanyi, F. Westermann, F. Berthold, M. Schwab, C.R. Antonescu, C. Peterson, et al. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nature medicine, 7:673–679, 2001.
* [LCCP09] Oren Litvin, Helen C Causton, Bo-Juen Chen, and Dana Pe’er. Special feature: Modularity and interactions in the genetics of gene expression. Proceedings of the National Academy of Sciences of the United States of America, February 2009. PMID: 19223586.
* [LM05] Richard J. Larsen and Morris L. Marx. An Introduction to Mathematical Statistics and Its Applications. Prentice Hall, 4 edition, December 2005.
* [LPFU08] Dongyu Lin, Emily Pitler, Dean P. Foster, and Lyle H. Ungar. In defense of $\ell_{0}$. In Proceedings of the 25${}^{\text{th}}$ Annual International Conference on Machine Learning (ICML 2008), 2008.
* [LW04] O. Ledoit and M. Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411, 2004.
* [Mal73] CL Mallows. Some Comments on $C_{p}$. Technometrics, 15:661–675, 1973.
* [Nat95] BK Natarajan. Sparse approximate solutions to linear systems. SIAM journal on computing, 24:227, 1995.
* [Ng04] A.Y. Ng. Feature selection, $L_{1}$ vs. $L_{2}$ regularization, and rotational invariance. In Proceedings of the 21${}^{\text{st}}$ International Conference on Machine Learning. ACM New York, NY, USA, 2004.
* [OTJ06] Guillaume Obozinski, Ben Taskar, and Michael I. Jordan. Multi-task feature selection. In The Workshop of Structural Knowledge Transfer for Machine Learning in the 23${}^{\text{rd}}$ International Conference on Machine Learning (ICML 2006), 2006.
* [OTJ09] Guillaume Obozinski, Ben Taskar, and Michael I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 2009.
* [Pól45] G. Pólya. Remarks on computing the probability integral in one and two dimensions. In Proceedings of 1st Berkeley Symposium on Mathematics Statistics and Probabilities, pages 63–78, 1945.
* [Ric95] J.A. Rice. Mathematical statistics and data analysis. Duxbury press Belmont, CA, 1995.
* [Ris78] J. Rissanen. Modeling by shortest data description. Automatica, 14(5):465–471, 1978.
* [Ris83] J. Rissanen. A universal prior for integers and estimation by minimum description length. Annals of Statistics, 11(2):416–431, 1983.
* [Ris86] J. Rissanen. Stochastic complexity and modeling. Ann. Statist., 14(3):1080–1100, September 1986.
* [Ris89] J. Rissanen. Stochastic complexity in statistical inquiry theory. World Scientific Publishing Co., Inc. River Edge, NJ, USA, 1989.
* [Ris99] J. Rissanen. Hypothesis selection and testing by the mdl principle. The Computer Journal, 42:260–269, April 1999.
* [RNK06] R. Raina, A.Y. Ng, and D. Koller. Constructing informative priors using transfer learning. In Proceedings of the 23${}^{\text{rd}}$ International Conference on Machine Learning, pages 713–720. ACM New York, NY, USA, 2006.
* [RYB03] A. Reiner, D. Yekutieli, and Y. Benjamini. Identifying differentially expressed genes using false discovery rate controlling procedures, 2003.
* [Sch78] Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464, 1978.
* [Sha95] JP Shaffer. Multiple hypothesis testing. Annual Review of Psychology, 46(1):561–584, 1995.
* [Sim86] RJ Simes. An improved Bonferroni procedure for multiple tests of significance. Biometrika, 73(3):751–754, 1986.
* [SMD+03] Eric E. Schadt, Stephanie A. Monks, Thomas A. Drake, Aldons J. Lusis, Nam Che, Veronica Colinayo, Thomas G. Ruff, Stephen B. Milligan, John R. Lamb, Guy Cavet, Peter S. Linsley, Mao Mao, Roland B. Stoughton, and Stephen H. Friend. Genetics of gene expression surveyed in maize, mouse and man. Nature, 422:297–302, March 2003.
* [SORS] Juliane Schaefer, Rainer Opgen-Rhein, and Korbinian Strimmer. corpcor: Efficient estimation of covariance and (partial) correlation. http://cran.r-project.org/web/packages/corpcor/. Software downloaded July 2008.
* [ST05] T. Simila and J. Tikka. Multiresponse sparse regression with application to multidimensional scaling. Lecture notes in computer science, 3697:97, 2005.
* [ST07] T. Similä and J. Tikka. Input selection and shrinkage in multiresponse linear regression. Computational Statistics and Data Analysis, 52(1):406–422, 2007.
* [Sti] Robert A. Stine. Information theory and model selection. http://www-stat.wharton.upenn.edu/~stine/research/select.info2.pdf.
* [Sti04] Robert A. Stine. Model selection using information theory and the mdl principle. Sociological Methods Research, 33(2):230–260, November 2004. In-text page numbers are based on a preprint at http://www-stat.wharton.upenn.edu/~stine/research/smr.pdf.
* [Sto03] J.D. Storey. The positive false discovery rate: A Bayesian interpretation and the q-value. Annals of Statistics, pages 2013–2035, 2003.
* [Tib96] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
* [TVW05] B.A. Turlach, W.N. Venables, and S.J. Wright. Simultaneous variable selection. Technometrics, 47(3):349–363, 2005.
* [vtVDvdV+02] L. J. van ’t Veer, H. Dai, M. J. van de Vijver, Y. D. He, A. A. Hart, M. Mao, H. L. Peterse, K. van der Kooy, M. J. Marton, A. T. Witteveen, G. J. Schreiber, R. M. Kerkhoven, C. Roberts, P. S. Linsley, R. Bernards, and S. H. Friend. Gene expression profiling predicts clinical outcome of breast cancer. Nature, 415(6871):530–536, January 2002.
* [Wal05] C.S. Wallace. Statistical and inductive inference by minimum message length. Springer, 2005.
* [YB99] D. Yekutieli and Y. Benjamini. Resampling-based false discovery rate controlling multiple test procedures for correlated test statistics. Journal of Statistical Planning and Inference, 82(1-2):171–196, 1999.
|
arxiv-papers
| 2009-05-30T03:41:37 |
2024-09-04T02:49:02.981748
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Brian Tomasik",
"submitter": "Brian Tomasik",
"url": "https://arxiv.org/abs/0906.0052"
}
|
0906.0061
|
11institutetext: Y.-Q. Lou 22institutetext: 1\. Physics Department and
Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing 100084,
China;
2\. Department of Astronomy and Astrophysics, The University of Chicago, 5640
S. Ellis Avenue, Chicago, IL 60637 USA;
3\. National Astronomical Observatories, Chinese Academy of Sciences, A20,
Datun Road, Beijing 100012, China.
22email: louyq@tsinghua.edu.cn; lou@oddjob.uchicago.edu 33institutetext: X.
Zhai 44institutetext: Physics Department and Tsinghua Center for Astrophysics
(THCA), Tsinghua University, Beijing 100084, China
44email: zxzhaixiang@gmail.com
# Dynamic Evolution Model of Isothermal Voids and Shocks
Yu-Qing Lou Xiang Zhai
(Received: date / Accepted: date)
###### Abstract
We explore self-similar hydrodynamic evolution of central voids embedded in an
isothermal gas of spherical symmetry under the self-gravity. More
specifically, we study voids expanding at constant radial speeds in an
isothermal gas and construct all types of possible void solutions without or
with shocks in surrounding envelopes. We examine properties of void boundaries
and outer envelopes. Voids without shocks are all bounded by overdense shells
and either inflows or outflows in the outer envelope may occur. These
solutions, referred to as type $\mathcal{X}$ void solutions, are further
divided into subtypes $\mathcal{X}_{\rm I}$ and $\mathcal{X}_{\rm II}$
according to their characteristic behaviours across the sonic critical line
(SCL). Void solutions with shocks in envelopes are referred to as type
$\mathcal{Z}$ voids and can have both dense and quasi-smooth edges.
Asymptotically, outflows, breezes, inflows, accretions and static outer
envelopes may all surround such type $\mathcal{Z}$ voids. Both cases of
constant and varying temperatures across isothermal shock fronts are analyzed;
they are referred to as types $\mathcal{Z}_{\rm I}$ and $\mathcal{Z}_{\rm II}$
void shock solutions. We apply the ‘phase net matching procedure’ to construct
various self-similar void solutions. We also present analysis on void
generation mechanisms and describe several astrophysical applications. By
including self-gravity, gas pressure and shocks, our isothermal self-similar
void (ISSV) model is adaptable to various astrophysical systems such as
planetary nebulae, hot bubbles and superbubbles in the interstellar medium as
well as supernova remnants.
###### Keywords:
H II regions hydrodynamics ISM: bubbles ISM: clouds planetary nebulae
supernova remnants
###### pacs:
95.30.Qd98.38.Ly95.10.Bt97.10.Me97.60.Bw
††journal: Astrophysics and Space Science
## 1 Introduction
Voids may exist in various astrophysical cloud or nebula systems of completely
different scales, such as planetary nebulae, supernova remnants, interstellar
bubbles and superbubbles and so forth. In order to understand the large-scale
dynamic evolution of such gaseous systems, we formulate and develop in this
paper an isothermal self-similar dynamic model with spherical symmetry
involving central expanding voids under the self-gravity. In our dynamic model
framework, a void is approximately regarded as a massless region with a
negligible gravity on the surrounding gas envelope. With such idealization and
simplification in our hydrodynamic equations, a void is defined as a spherical
space or volume containing nothing inside. In any realistic astrophysical
system, there are always materials inside voids, such as stellar objects and
stellar winds or outflows. In Section 4.2.3, we shall show that our
isothermal self-similar void (ISSV) solutions are generally applicable to
astrophysical gas flow systems such as planetary nebulae, interstellar
bubbles or superbubbles, and so on.
Observationally, early-type stars are reported to blow strong stellar winds
towards the surrounding interstellar medium (ISM). Hydrodynamic studies on the
interaction of stellar winds with surrounding gases have shown that a stellar
wind will sweep up a dense circumsteller shell (e.g. Pikel’ner & Shcheglov
1968; Avedisova 1972; Dyson 1975; Falle 1975). Such swept-up density ‘wall’
surrounding a central star thus form interstellar bubbles of considerably low
densities inside (e.g. Castor, McCray & Weaver 1975). For example, the Rosette
Nebula (NGC 2237, 2238, 2239, 2246) is a vast cloud of dusts and gases
spanning a size of $\sim 100$ light years. It has a spectacular appearance
with a thick spherical shell of ionized gas and a cluster of luminous massive
OB stars in the central region, whose strong stellar winds and intense
radiation have cleared a ‘hole’ or ‘cavity’ around the centre and given rise
to a thick spherical shell of ionized gases (e.g. Mathews 1966; Dorland,
Montmerle & Doom 1986). Weaver et al. (1977) outlined a dynamic theory to
explain interstellar bubbles. They utilized equations of motion and continuity
with spherical symmetry. They gave an adiabatic similarity solution, which is
applicable at early times and also derived a similarity solution including the
effect of thermal conduction between the hotter (e.g. $T\approx 10^{6}$K)
interior and the colder shell of swept-up ISM. Their solution was also
modified to include effects of radiation losses. Weaver et al. (1977) did not
consider the self-gravity of gas shell which can be dynamically important for
such a large nebula, and therefore possible behaviours of their self-similar
solutions were fairly limited. For example, the thickness of the gas shell was
limited to $\sim 0.14$ times the bubble radius. In our model formulation,
we ignore the gravity of the central stellar wind region and of star(s)
embedded therein. Thus, this central region is treated as a void and we
explore the self-similar dynamic behaviours of surrounding gas shell and ISM
involving both the self-gravity and thermal pressure. Our ISSV solutions
reveal that the gas shell of a cloud can have many more types of dynamic
behaviours.
Planetary nebulae (PNe) represent an open realm of astrophysical applications
of our ISSV model, especially for those that appear grossly spherical (e.g.
Abell 39; see also Abell 1966).111Most planetary nebulae appear elliptical or
bipolar in terms of the overall morphology. During the stellar evolution,
planetary nebulae emerge during the transition from the asymptotic giant
branch (AGB) phase where the star has a slow AGB dense wind to a central
compact star (i.e. a hot white dwarf) where it blows a fast wind in the late
stage of stellar evolution (e.g. Kwok, Purton & Fitzgerald 1978; Kwok & Volk
1985; Chevalier 1997a). The high temperature of the compact star can be a
source of photoionizing radiation and may partially or completely photoionize
the dense slower wind. Chevalier (1997a) presented an isothermal dynamical
model for PNe and constructed spherically symmetric global hydrodynamic
solutions to describe the expansion of outer shocked shell with an inner
contact discontinuity of wind moving at a constant speed. In Chevalier
(1997a), gravity is ignored and the gas flow in the outer region can be either
winds or breezes. In this paper, we regard the inner expansion region of fast
wind as an effective void and use ISSV solutions with shocks to describe dense
shocked wind and AGB wind expansion. One essential difference between our ISSV
model and that of Chevalier (1997a) lays in the dynamic behaviour of the ISM.
In Chevalier (1997a), shocked envelope keeps expanding with a vanishing
terminal velocity or a finite terminal velocity at large radii. By including
the self-gravity, our model can describe a planetary nebula expansion
surrounded by an outgoing shock which further interacts with a static,
outgoing or even accreting ISM. In short, the gas self-gravity is important to
regulate dynamic behaviours of a vast gas cloud. Quantitative calculations
also show that the lack of gas self-gravity may lead to a considerable
difference in the void behaviours (see Section 4.1). Likewise, our ISSV model
provides more sensible results than those of Weaver et al. (1977). We also
carefully examine the inner fast wind region and show that a inner reverse
shock must exist and the shocked fast wind has a significant lower expansion
velocity than the unshocked innermost fast wind. It is the shocked wind that
sweeps up the AGB slow wind, not the innermost fast wind itself. This effect
is not considered in Chevalier (1997a). We also compare ISSV model with Hubble
observations on planetary nebula NGC 7662 and show that our ISSV solutions are
capable of fitting gross features of PNe.
Various aspects of self-similar gas dynamics have been investigated
theoretically for a long time (e.g. Sedov 1959; Larson 1969a, 1969b; Penston
1969a, 1969b; Shu 1977; Hunter 1977, 1986; Landau & Lifshitz 1987; Tsai & Hsu
1995; Chevalier 1997a; Shu et al 2002; Lou & Shen 2004; Bian & Lou 2005).
Observations also show that gas motions of this kind of patterns may be
generic. Lou & Cao (2008) illustrated one general polytropic example of
central void in a self-similar expansion as they explored self-similar
dynamics of a relativistically hot gas with a polytropic index $4/3$
(Goldreich & Weber 1980; Fillmore & Goldreich 1984). The conventional
polytropic gas model of Hu & Lou (2008) considered expanding central voids
embedded in “champagne flows” of star forming clouds and provided versatile
solutions to describe dynamic behaviours of “champagne flows” in H II regions
(e.g. Alvarez et al. 2006). In this paper, we systematically explore
isothermal central voids in self-similar expansion and present various forms
of possible ISSV solutions. With gas self-gravity and pressure, our model
represents a fairly general theory to describe the dynamic evolution of
isothermal voids in astrophysical settings on various spatial and temporal
scales.
This paper is structured as follows. Section 1 is an introduction for
background information, motivation and astrophysical voids on various scales.
Section 2 presents the model formulation for isothermal self-similar
hydrodynamics, including the self-similar transformation, analytic asymptotic
solutions and isothermal shock conditions. Section 3 explores all kinds of
spherical ISSV solutions constructed by the phase diagram matching method with
extensions of the so-called “phase net”. In Section 4, we demonstrate the
importance of the gas self-gravity, propose the physics on void edge and then
give several specific examples that the ISSV solutions are applicable,
especially in the contexts of PNe and interstellar bubbles. Conclusions are
summarized in Section 5. Technical details are contained in Appendices A and
B.
## 2 Hydrodynamic Model Formulation
We recount basic nonlinear Euler hydrodynamic equations in spherical polar
coordinates $(r,\theta,\varphi)$ with self-gravity and isothermal pressure
under the spherical symmetry.
### 2.1 Nonlinear Euler Hydrodynamic Equations
The mass conservation equations simply read
$\frac{\partial M}{\partial t}+u\frac{\partial M}{\partial r}=0\ ,\\\
\qquad\frac{\partial M}{\partial r}=4\pi r^{2}\rho\ ,$ (1)
where $u$ is the radial flow speed, $M(r,\ t)$ is the enclosed mass within
radius $r$ at time $t$ and $\rho(r,\ t)$ is the mass density. The differential
form equivalent to continuity equation (1) is
$\frac{\partial\rho}{\partial t}+\frac{1}{r^{2}}\frac{\partial}{\partial
r}(r^{2}\rho u)=0\ .$ (2)
For an isothermal gas, the radial momentum equation is
$\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial
r}=-\frac{a^{2}}{\rho}\frac{\partial\rho}{\partial r}-\frac{GM}{r^{2}}\ ,$ (3)
where $G\equiv 6.67\times 10^{-8}$ dyn cm2 g-2 is the gravitational constant,
$a\equiv(p/\rho)^{1/2}=(k_{\rm B}T/m)^{1/2}$ is the isothermal sound speed and
$p$ is the gas pressure, $k_{\rm B}\equiv 1.38\times 10^{-16}\hbox{ erg
K}^{-1}$ is Boltzmann’s constant, $T$ is the constant gas temperature
throughout and $m$ is the mean particle mass. Meyer (1997) and Chevalier
(1997a) ignored the gas self-gravity in the momentum equation.
A simple dimensional analysis for equations $(\ref{MCE})-(\ref{ME})$ gives an
independent dimensionless similarity variable
$x={r}/{(at)}\ ,$ (4)
involving the isothermal sound speed $a$. The consistent similarity
transformation is then
$\rho(r,\ t)=\alpha(x)/(4\pi Gt^{2})\ ,\\\ M(r,\ t)=a^{3}tm(x)/G\ ,\qquad
u(r,\ t)=av(x)\ ,\\\ $ (5)
where $\alpha(x)$, $m(x)$, $v(x)$ are the dimensionless reduced variables
corresponding to mass density $\rho(r,t)$, enclosed mass $M(r,t)$ and radial
flow speed $u(r,t)$, respectively. These reduced variables depend only on $x$
(Shu 1977; Hunter 1977; Whitworth & Summers 1985; Tsai & Hsu 1995; Shu et al
2002; Shen & Lou 2004; Lou & Shen 2004; Bian & Lou 2005). Meyer (1997) adopts
a different self-similar transformation by writing $\rho=\bar{\rho}(x)/r^{2}$.
By equation (4) above, we know that in Meyer (1997), $\bar{\rho}(x)$ is
exactly equal to $x^{2}\alpha(x)a^{2}/(4\pi G)$ here. So the similarity
transformation of Meyer (1997) is equivalent to similarity transformation (5)
here but without the self-gravity. Further analysis will show that
transformation (5) can satisfy the void boundary expansion requirement
automatically (see Section 4.2.1).
With self-similar transformation (4) and (5), equation (1) yields two ordinary
differential equations (ODEs)
$m+(v-x)\frac{dm}{dx}=0\ ,\qquad\qquad\frac{dm}{dx}=x^{2}\alpha\ .$ (6)
The derivative term $dm/dx$ can be eliminated from these two relations in
equation (6) to give
$m(x)=x^{2}\alpha(x-v)\ .$ (7)
A nonnegative mass $m(x)$ corresponds to $x-v>0$. In figure displays of
$-v(x)$ versus $x$ profiles, the lower left portion to the line $v-x=0$ is
thus unphysical. We refer to the line $v-x=0$ as the “Zero Mass Line” (ZML);
inside of $x=v$, there should be no mass, corresponding to a void.
Relation (7) plus transformation (4) and (5) lead to two coupled ODEs from
equations (2) and (3), namely
$\left[(x-v)^{2}-1\right]\frac{dv}{dx}=\left[\alpha(x-v)-\frac{2}{x}\right](x-v)\
,$ (8)
$\left[(x-v)^{2}-1\right]\frac{1}{\alpha}\frac{d\alpha}{dx}=\left[\alpha-\frac{2}{x}(x-v)\right](x-v)\
$ (9)
(Shu 1977). ODEs (8) and (9) differ from eqs (3) and (4) of Chevalier (1997a)
by including the self-gravity effect.
For ODEs (8) and (9), the singularity at $(x-v)^{2}=1$ corresponds to two
parallel straight lines in the diagram of $-v(x)$ versus $x$, representing the
isothermal sonic critical lines (SCL) (e.g. Shu 1977; Whitworth & Summers
1985; Tsai & Hsu 1995; Shu et al. 2002; Lou & Shen 2004; Bian & Lou 2005). As
$x-v=-1<0$ is unphysical for a negative mass, we have the SCL characterized by
$\qquad v=x-1\ ,\\\ \alpha={2}/{x}\ .$ (10)
Two important global analytic solutions of nonlinear ODEs (8) and (9) are the
static singular isothermal sphere (SIS; e.g. Shu 1977)
$v=0\ ,\qquad\qquad\alpha=\frac{2}{x^{2}}\ ,\qquad\qquad m=2x\ ,$ (11)
and non-relativistic Einstein-de Sitter expansion solution for an isothermal
gas
$v=\frac{2}{3}x\ ,\qquad\alpha=\frac{2}{3}\ ,\qquad m=\frac{2}{9}x^{3}\ $ (12)
(e.g. Whitworth & Summers 1985; Shu et al. 2002).
Let $x_{*}$ denote the value of $x$ at a sonic point on SCL. A Taylor series
expansion of $v(x)$ and $\alpha(x)$ in the vicinity of $x_{*}$ shows that
solutions crossing the SCL smoothly at $x_{*}$ have the form of either
$\displaystyle-v=(1-x_{*})+\bigg{(}{1\over x_{*}}-1\bigg{)}(x-x_{*})+\cdots\
,$ (13) $\displaystyle\alpha={2\over x_{*}}-{2\over x_{*}}\bigg{(}{3\over
x_{*}}-1\bigg{)}(x-x_{*})+\cdots\ ,$
or
$\displaystyle-v=(1-x_{*})-{1\over x_{*}}(x-x_{*})+\cdots\ ,$ (14)
$\displaystyle\alpha={2\over x_{*}}-{2\over x_{*}}(x-x_{*})+\cdots\ ,$
(e.g. Shu 1977; Whitworth & Summers 1985; Tsai & Hsu 1995; Bian & Lou 2005;
see Appendix A of Lou & Shen 2004 for higher-order derivatives). Thus
eigensolutions crossing the SCL smoothly are uniquely determined by the value
of $x_{*}$. Eigensolutions of type 1 and 2 are defined by equations (13) and
(14). Physically, equations (13) and (14) describe how the gas behaves as it
flows from subsonic to supersonic regimes across the SCL in the local comoving
framework.
An important numerical solution to ODEs (8) and (9) is the Larson-Penston (LP)
solution (Larson 1969a; Larson 1969b; Penston 1969). This solution have an
asymptotic behaviour $v\rightarrow 2x/3$ and $\alpha\rightarrow 1.67$ as
$x\rightarrow 0^{+}$. And the LP solution is also an eigensolution of type 2
(equation 14) and passes through the SCL smoothly at $x_{*}=2.33$.
When $x\rightarrow+\infty$, either at large radii or at very early time,
solutions to two nonlinear ODEs (8) and (9) have asymptotic behaviours
$v=V+\frac{2-A}{x}+\frac{V}{x^{2}}+\frac{(A/6-1)(A-2)+2V^{2}/3}{x^{3}}+\cdots\
,\\\
\alpha=\frac{A}{x^{2}}+\frac{A(2-A)}{2x^{4}}+\frac{(4-A)VA}{3x^{5}}+\cdots\
,\\\ m=Ax-AV+\frac{A(2-A)}{2x}+\frac{A(A-4)V}{6x^{2}}+\cdots\ ,\\\ $ (15)
where $V$ and $A$ are two integration constants (e.g. Whitworth & Summers
1985; Lou & Shen 2004), referred to as the reduced velocity and mass
parameters, respectively. The non-relativisitic Einstein-de Sitter expansion
solution does not follow this asymptotic solution (15) at large $x$. Chevalier
(1997a) also presented asymptotic solutions of $v(x)$ and $\alpha(x)$ at large
$x$. Case $A=1$ in our solution (15) should correspond to asymptotic
behaviours of $v(x)$ and $\alpha(x)$ in Chevalier’s model (1997a; see his
equations 5 and 6). However, the coefficient of $x^{-4}$ term for $\alpha(x)$
in his model is $1$ while in our model, it is $1/2$; and coefficient of
$x^{-1}$ term for $v(x)$ in his model is $2$ while in our model, it is $2-A$.
These differences arise by dropping the gravity in Chevalier (1997a).
Physically in our model, the gas has a slower outgoing radial velocity and the
density decreases more rapidly than that of Chevalier (1997a), because when
the self-gravity is included, the gas tends to accumulate around the centre.
Counterpart solutions to equations (11)$-$(15) can also be generalized for
conventional and general polytropic gases (Lou & Wang 2006; Wang & Lou 2007;
Lou & Cao 2008; Wang & Lou 2008; Hu & Lou 2008).
### 2.2 Isothermal Shock Jump Conditions
For an isothermal fluid, the heating and driving of a cloud or a progenitor
star (such as a sudden explosion of a star) at $t=0$ will compress the
surrounding gas and give rise to outgoing shocks (e.g. Tsai & Hsu 1995). In
the isothermal approximation, the mass and momentum should be conserved across
a shock front in the shock comoving framework of reference
$\rho_{d}(u_{d}-u_{s})=\rho_{u}(u_{u}-u_{s})\ ,$ (16)
$a_{d}^{2}\rho_{d}+\rho_{d}u_{d}(u_{d}-u_{s})=a_{u}^{2}\rho_{u}+\rho_{u}u_{u}(u_{u}-u_{s})\
,$ (17)
where subscripts $d$ and $u$ denote the downstream and upstream sides of a
shock, respectively (e.g. Courant & Friedricks 1976; Spitzer 1978; Dyson &
Williams 1997; Shen & Lou 2004; Bian & Lou 2005). Physically, we have
$u_{s}=a_{d}x_{sd}=a_{u}x_{su}=r_{s}/t$ as the outgoing speed of a shock with
$r_{s}$ being the shock radius. Conditions (18) and (19) below in terms of
self-similar variables are derived from conditions (16) and (17) by using the
reduced variables $v(x)$, $x$ and $\alpha(x)$ and the isothermal sound speed
ratio $\tau\equiv a_{d}/a_{u}=x_{su}/x_{sd}$,
$\alpha_{d}/\alpha_{u}=(v_{u}-x_{su})/[\tau(v_{d}-x_{sd})]\ ,$ (18)
$v_{d}-x_{sd}-\tau(v_{u}-x_{su})=(\tau
v_{d}-v_{u})(v_{u}-x_{su})(v_{d}-x_{sd})\ .$ (19)
Consequently, we have $\tau=(T_{d}/T_{u})^{1/2}$ with $T$ being the gas
temperature. Physics requires $T_{d}\geq T_{u}$ leading to $\tau\geq 1$. For
$\tau=1$, conditions (18) and (19) reduce to
$\alpha_{d}/\alpha_{u}=(v_{u}-x_{s})/(v_{d}-x_{s})\ ,$ (20)
$(v_{u}-x_{s})(v_{d}-x_{s})=1\ ,$ (21)
where $x_{s}=x_{u}=x_{d}$ is the reduced shock location or speed (e.g. Tsai &
Hsu 1995; Chevalier 1997a; Shu et al. 2002; Shen & Lou 2004; Bian & Lou 2005).
## 3 Isothermal Self-Similar Voids
Various similarity solutions can be constructed to describe outflows (e.g.
winds and breezes), inflows (e.g. contractions and accretions), static outer
envelope and so forth. These solutions will be presented below in order, as
they are useful to construct ISSV solutions.
### 3.1 Several Relevant Self-Similar Solutions
We first show some valuable similarity solutions in reference to earlier
results of Shu (1977) and Lou & Shen (2004). These solutions behave
differently as $x\rightarrow+\infty$ for various combinations of parameters
$V$ and $A$ in asymptotic solution (15).
#### 3.1.1 CSWCP and EWCS Solutions of Shu (1977)
Shu (1977) presents a class of solutions: collapse solutions without critical
point (CSWCP) and expansion-wave collapse solution (EWCS) (see Fig. 1).
The CSWCP solutions (light dotted curves in Fig. 1) have asymptotic behaviours
of $V=0$ and $A>2$ according to solution (15) at large $x$, and describe the
central free-fall collapse of gas clouds with contracting outer envelopes of
vanishing velocities at large radii.
The EWCS solution (the heavy solid curve in Fig. 1) is obtained with
$A\rightarrow 2^{+}$ and $V=0$ in solution (15). This solution is tangent to
the SCL at $x_{*}=1$ and has an outer static SIS envelope (solution 11) and a
free-fall collapse towards the centre ( Shu 1977; Lou & Shen 2004; Bian & Lou
2005); the central collapsed region expands in a self-similar manner.
Figure 1: Isothermal self-similar solutions of Shu (1977): $-v(x)$ versus $x$
profile of EWCS (heavy solid curve) and a sequence of five CSWCP solutions
(dotted curves) with mass parameter $A$ values marked along these five curves
($A=2.2$ to 3.0). Four solutions smoothly crossing the SCL (dash-dotted line
to the upper right) twice of Lou & Shen (2004): three light solid curves
labelled by numerals ‘1’, ‘2’ and ‘3’ are type 2-type 1 solutions with their
smaller crossing points $x_{*}(1)$ matching the type 2 derivative and the
larger crossing points $x_{*}(2)$ fitting the type 1 derivative. The light
dotted curve labelled by ‘type 2-type 2’ is the unique type 2-type 2 solution
with both crossing points following the type 2 derivative (see Table 1 for
relevant parameters). The dash-dotted line to the lower left is the ZML of
$x-v=0$.
#### 3.1.2 Solutions smoothly crossing the SCL twice
Lou & Shen (2004) studied isothermal similarity solutions and divided Class I
similarity solutions, which follow free-fall behaviours as $x\rightarrow
0^{+}$, into three subclasses according to their behaviours at large $x$.
Class Ia similarity solutions have positive $V$ at large $x$ (see solution
15), which describe a cloud with an envelope expansion. Class Ia solutions are
referred to as ‘envelope expansion with core collapse’ (EECC) solutions by Lou
& Shen (2004). The case of $V=0$ corresponds to Class Ib solutions and $V<0$
renders Class Ic solutions. The CSWCP solutions belong to Class Ic and the
EWCS solution belongs to Class Ib.
Lou & Shen (2004) constructed four discrete solutions crossing the SCL twice
smoothly (i.e. satisfying equations 13 and 14) by applying the phase diagram
matching scheme (Hunter 1977; Whitworth & Summers 1985; see subsection 3.4
below). These are the first four examples among an infinite number of discrete
solutions (see Lou & Wang 2007).
Let $x_{*}(1)$ and $x_{*}(2)$ be the smaller (left) and larger (right) cross
points for each of the four solutions. As they are all Class I solution,
$x_{*}(1)$ is less than $1$ and the behaviours near $x_{*}(1)$ are determined
by type 2 (as defined by equation 14) to assure a negative derivative
$d(-v)/dx$ at $x_{*}(1)$ along the SCL. As in Lou & Shen (2004), we name a
solution that crosses the SCL twice smoothly as ‘type2-type1’ or ‘type2-type2’
solution, which corresponds the derivative types at $x_{*}(1)$ and $x_{*}(2)$.
Three of the four solutions of Lou & Shen (2004) are ‘type2-type1’ solutions
and we further name them as ‘type2-type1-1’, ‘type2-type1-2’ and
‘type2-type1-3’ (see Fig. 1 and Table 1 for more details).
Table 1: Relevant parameters are summarized here for the four solutions
crossing the SCL twice shown in Figure 1. Three of them cross the SCL at
$x_{*}(1)$ with the second kind of derivative and at $x_{*}(2)$ with the first
kind of derivative. In the ‘Solution’ column of this table, they are named as
‘type2-type1-1’, ‘type2-type1-2’ and ‘type2-type1-3’ corresponding to the
curves labelled by ‘1’, ‘2’ and ‘3’, respectively. The fourth solution crosses
the SCL at $x_{*}(1)$ and $x_{*}(2)$ both with the second kind of derivative
and is named as ‘type2-type2’ in the ‘Solution’ column, corresponding to the
curve labelled by ‘type 2-type 2’. The first and third type2-type1 solutions
and type2-type2 solution all belong to EECC solution (or Class Ia solution),
while the type2-type1-2 solution belongs to Class Ic solution. In Lou & Shen
(2004), this type2-type2 EECC solution passes the SCL at $x_{*}(1)\approx
0.632$ and $x_{*}(2)\approx 1.349$. We reproduce these results. However, we
here adopt $x_{*}(1)\approx 0.71$ and $x_{*}(2)\approx 1.287$, which are
calculated with a higher accuracy.
Solution Class $x_{*}(1)$ $x_{*}(2)$ $V$ $A$ type2-type1-1 EECC(Ia) $0.23$
$1.65$ $1.8$ $5.1$ type2-type1-2 Ic $2.5858\times 10^{-4}$ $0.743$ $-0.77$
$1.21$ type2-type1-3 EECC(Ia) $6\times 10^{-6}$ $1.1$ $0.3$ $2.4$ type2-type2
EECC(Ia) $0.71$ $1.287$ $1.5$ $4.7$
### 3.2 Isothermal Self-Similar Void (ISSV) Solutions
Relation (7) indicates that the ZML $v-x=0$ separates the solution space into
the upper-right physical part and the lower-left unphysical part in a $-v(x)$
versus $x$ presentation.
For a solution (with $x-v>0$) touching the ZML at $x_{0}$, then
$v(x_{0})=x_{0}$ holds on and so does $m(x_{0})=0$ there. Given the
definitions of $m(x)\equiv GM(r,\ t)/(a^{3}t)$ and $x\equiv r/(at)$, condition
$m(x_{0})=0$ indicates a spherical isothermal gas whose enclosed mass
$M(atx_{0},t)$ vanishes, and thus a central void expands at a constant radial
speed $ax_{0}$.
The condition $v(x_{0})=x_{0}$ marks a void boundary in expansion. Physically,
$x_{0}$ is the start point of a streamline as $v(x_{0})=x_{0}$ indicates
matters flowing outwards at a velocity of the boundary expansion velocity.
However, a problem would arise when the gas density just outside the void
boundary is not zero. For an isothermal gas, the central vacuum cannot resist
an inward pressure from the outer gas across the void boundary. We propose
several mechanisms/scenarios to provide sufficient energy and pressure to
generate voids and maintain them for a period of time without invalidating our
‘vacuum approximation’ for voids. In Section 4, we present explanations and
quantitative calculations about these mechanisms.
Mathematically, to construct spherical isothermal self-similar voids is to
search for global solutions reaching the ZML at $x_{0}>0$. If a solution
touches the ZML at point $x_{0}$, then $x_{0}$ should be the smallest point of
the solution with gas. Solutions with a negative mass are unphysical. So if
$v=x$ holds on at $x_{0}$, both $v(x)$ and $\alpha(x)$ should be zero in the
range $0<x<x_{0}$. 222 The vacuum solution $v=0$ and $\alpha=0$ satisfies
dimensional equations (1)$-$(3); it is not an apparent solution to reduced ODE
(8) because a common factor $\alpha$ has been taken out before we arrive at
ODE (8). With this physical understanding, we still regard $v=0$ and
$\alpha=0$ as a solution to ODEs (8) and (9).
Before exploring isothermal self-similar void (ISSV) solutions of spherical
symmetry, we note a few properties of such solutions. Nonlinear ODEs (8) and
(9) give the following two first derivatives at $x_{0}$
$\frac{dv}{dx}\bigg{|}_{x_{0}}=0\
,\qquad\qquad\frac{d\alpha}{dx}\bigg{|}_{x_{0}}=0\ .$ (22)
So in the $-v(x)$ versus $x$ and $\alpha(x)$ versus $x$ presentations, the
right tangents to these solution curves at $x_{0}$ are horizontal.
For spherical isothermal self-similar dynamics, we can show that across a void
boundary, $\alpha(x)$ must jump from $0$ to a nonzero value (see Appendix A).
Voids must be bounded by relatively dense shells with a density jump. We
propose that the height of such a density jump may indicate the energy to
generate and maintain such a void. Energetic processes of short timescales
include supernovae involving rebound shock waves or neutrino emissions and
driving processes of long timescales include powerful stellar winds (see
Section 4 for more details). In reality, we do not expect an absolute vacuum
inside a void. Regions of significantly lower density in gas clouds are
usually identified as voids.
For $x\rightarrow+\infty$, the physical requirement of finite mass density and
flow velocity can be met for $\alpha(x)$ and $v(x)$ by asymptotic solution
(15).
ISSV solutions need to cross the SCL in the $-v(x)$ versus $x$ profiles as
they start at the ZML and tend to a horizontal line at large $x$ with a
constant $V$. Given conditions (13) and (14), ISSV solutions can be divided
into two subtypes: crossing the SCL smoothly without shocks, which will be
referred to as type $\mathcal{X}$, and crossing the SCL via shocks, which will
be referred to as type $\mathcal{Z}$ and can be further subdivided into types
$\mathcal{Z}_{\rm I}$ and $\mathcal{Z}_{II}$ as explained presently.
### 3.3 Type $\mathcal{X}$ ISSV Solutions without Shocks
As analyzed in Section 3.2, type $\mathcal{X}$ ISSV solutions cross the SCL
smoothly. Let $x_{*}$ denote the cross point on the SCL. We have
$v(x_{*})=x_{*}-1$ and $\alpha(x_{*})=2/x_{*}$ by equation (10). Conditions
(13) and (14) give the eigen solutions for the first derivatives
$v^{\prime}(x)$ and $\alpha^{\prime}(x)$ at $x_{*}$ as either type 1 (equation
13) or type 2 (equation 14). Given $x_{*}$ and the type of eigen-derivative at
$x_{*}$, all the necessary initial conditions [$v(x_{*})$, $\alpha(x_{*})$,
$v^{\prime}(x_{*})$, $\alpha^{\prime}(x_{*})$] are available for integrating
nonlinear ODEs (8) and (9) in both directions.
We use $x_{0}$, $\alpha_{0}\equiv\alpha(x_{0})$, $x_{*}$ and the types of
eigen-derivative at $x_{*}$ to construct type $\mathcal{X}$ ISSV solutions.
While $x_{0}$ and $\alpha_{0}$ are the key parameters for the void expansion
speed and the density of the shell around the void edge, we use $x_{*}$ to
obtain type $\mathcal{X}$ ISSV solutions as $x_{*}$ parameter can be readily
varied to explore all type $\mathcal{X}$ ISSV solutions.
#### 3.3.1 Type $\mathcal{X}_{\rm I}$ ISSV Solutions: Voids with Sharp Edge,
Smooth Envelope and Type 1 Derivative on the SCL
Type $\mathcal{X}_{\rm I}$ solutions cross the SCL smoothly and follow the
Type 1 derivative at $x_{*}$ on the SCL. By equation (13), the first
derivative $d(-v)/dx$ is positive for $0<x_{*}<1$ and negative for $x_{*}>1$.
This allows the $x_{*}$ of type $\mathcal{X}_{\rm I}$ solutions to run from
$0$ to $+\infty$ when behaviours of these solutions in the inner regions are
ignored temporarily. When the existence of $x_{0}$ is required for
constructing isothermal central voids, the range of $x_{*}$ along the SCL is
then restricted.
Given $x_{*}$ on the SCL and using equations (10) and (13), we obtain the
initial condition $v(x_{*}),\ \alpha(x_{*}),\ v^{\prime}(x_{*})$,
$\alpha^{\prime}(x_{*})$ to integrate ODEs (8) and (9) from $x_{*}$ in both
directions. If an integration towards small $x$ can touch the ZML at a
$x_{0}>0$ and an integration towards $+\infty$ exists, then a type
$\mathcal{X}_{\rm I}$ ISSV solution is constructed.
We now list five important numerals: $x_{1}\approx 0.743$, $x_{2}=1$,
$x_{3}\approx 1.1$, $x_{4}\approx 1.65$ and $x_{5}=3$. They are the cross
points at the SCL of Lou & Shen type2-type1-2, SIS (see equation 11), Lou &
Shen type2-type1-3, Lou & Shen type2-type1-1, Einstein-de Sitter solution,
respectively. All these solutions have type 1 derivatives near their cross
points on the SCL. However, their behaviours differ substantially as
$x\rightarrow 0^{+}$. The three solutions of Lou & Shen approach central free-
fall collapses with a constant reduced core mass $m_{0}$ as the central mass
accretion rate; while SIS and Einstein-de Sitter solution have vanishing
velocities as $x\rightarrow 0^{+}$.
Numerical computations show that type $\mathcal{X}_{\rm I}$ ISSV solutions
exist when $x_{*}$ falls into four intervals out of six intervals along $x>0$
axis divided by the five numerals above. The four intervals are
$0<x_{*}<x_{1}\approx 0.743$, $x_{1}\approx 0.743<x_{*}<x_{2}=1$,
$x_{2}=1<x_{*}<x_{3}\approx 1.1$ and $x_{3}\approx 1.1<x_{*}<x_{4}\approx
1.65$. No type $\mathcal{X}_{\rm I}$ exist with $x_{*}$ in intervals
$x_{4}\approx 1.65<x_{*}<x_{5}=3$ and $x_{*}>x_{5}=3$, because integrations
from $x_{*}$ in these two intervals towards $+0$ or $+\infty$ must halt when
they encounter the SCL again, respectively. The six regions mentioned above
are named as conditions I, II, III, IV, V and VI, respectively. Figure 2
illustrates several typical type $\mathcal{X}_{\rm I}$ ISSV solutions with
their $x_{*}$ in different regions. The relevant solution parameters are
summarized in Table 2.
Figure 2: Typical type $\mathcal{X}_{\rm I}$ ISSV solutions. Panel A in linear
scales shows $-v(x)$ versus $x$ curves of several type $\mathcal{X}_{\rm I}$
solutions. The sonic critical point $x_{*}$ of each curve is noted along the
$x-$axis. The dash-dotted curve which crosses the SCL at $x_{*}=1.7$ smoothly
and encounters the line again at $\sim 0.4$ shows a typical behaviour when
$x_{4}\approx 1.65<x_{*}<x_{5}=3$. The inset shows the enlarged portions of
these ISSV solution curves near $x\rightarrow 0^{+}$. The dash-dotted lines in
both panel A and inset are the ZML. Panel B shows the $\log\alpha$ versus $x$
curves of the same solutions and the curves in panel A, inset and panel B with
the same line type (light and heavy solid, dotted and dash) correspond to the
same type $\mathcal{X}_{\rm I}$ solutions. They are distinguished by their
$x_{*}$ values. Each of these curves jumps from a zero value (left) to a
nonzero value (right) at $x_{0}$, indicating a void in the region of
$x<x_{0}$. Table 2: Parameters of several typical type $\mathcal{X}_{\rm I}$
ISSV solutions. Eight solutions under four different conditions discussed in
Section 3.3.1 are tabulated here.
Condition $x_{*}$ $x_{0}$ $\alpha_{0}$ $V$ $A$ I $0.10$ $0.0132$ $1.11\times
10^{5}$ $-3.51$ $0.070$ I $0.50$ $0.0417$ $1.92\times 10^{3}$ $-1.57$ $0.649$
I $0.70$ $0.0238$ $1.35\times 10^{3}$ $-0.91$ $1.10$ II $0.75$ $7.5\times
10^{-5}$ $1.5\times 10^{10}$ $-0.75$ $1.23$ II $0.85$ $7.5\times 10^{-4}$
$2.5\times 10^{7}$ $-0.46$ $1.50$ II $0.95$ $5.6\times 10^{-4}$ $6.9\times
10^{6}$ $-0.15$ $1.83$ III $1.05$ $9.2\times 10^{-5}$ $1.5\times 10^{9}$
$+0.14$ $2.17$ IV $1.60$ $0.0242$ $1.5\times 10^{4}$ $+1.71$ $4.77$
#### 3.3.2 Type $\mathcal{X}_{\rm II}$ Void Solutions: Voids with Sharp
Edges, Smooth Envelope and Type 2 Derivative at the SCL
Type $\mathcal{X}_{\rm II}$ solutions cross the SCL smoothly and have the type
2 derivative at $x_{*}$. By condition (14), the first derivative $d(-v)/dx$ is
negative for $x_{*}>0$. Because $x_{0}<x_{*}$, $x_{*}$ of type
$\mathcal{X}_{\rm II}$ solutions must be larger than $1$ to assure behaviours
of solution (15) at large $x$ (note that type 2 derivative is used to obtain a
free-fall behaviour around the centre). Similar to the approach to investigate
type $\mathcal{X}_{\rm I}$ ISSV solutions, we now list four important
numerals: $x_{1}^{\prime}=1$, $x_{2}^{\prime}\approx 1.287$,
$x_{3}^{\prime}=1.50$ and $x_{4}^{\prime}\approx 2.33$. Here,
$x_{2}^{\prime}\approx 1.287$ is the right cross point on the SCL of the
type2-type2 solution of Lou & Shen (2004) and $x_{4}^{\prime}\approx 2.33$ is
the cross point of the LP solution on the SCL. These two solutions both follow
behaviours of type 2 derivative near their cross points. However, their
behaviours differ as $x\rightarrow 0^{+}$. Lou & Shen type 2-type 2 EECC
solution has a central free-fall collapse with a constant reduced core mass
$m_{0}$ for the central mass accretion rate; while LP solution has vanishing
velocity and mass as $x\rightarrow 0^{+}$. And $x_{3}^{\prime}=1.5$ is the
critical point where the second-order type 2 derivative diverges (see Appendix
A of Lou & Shen 2004).
The four points given above subdivide the $x\geq 1$ portion of the $x-$axis
into four intervals: $[x_{1}^{\prime}=1,x_{2}^{\prime}\approx 1.287]$,
$[x_{2}^{\prime}\approx 1.287,x_{3}^{\prime}=1.5]$,
$[x_{3}^{\prime}=1.5,x_{4}^{\prime}\approx 2.34]$ and $[x_{4}^{\prime}\approx
2.34,+\infty)$ which are referred to as condition I’, II’, III’, and IV’,
respectively. Similar to subsection 3.3.1, we choose a $x_{*}$ and integrate
from $x_{*}$ in both directions. Numerical calculations indicate that type
$\mathcal{X}_{\rm II}$ void solutions only exist when their $x_{*}$ falls
under conditions II’ or IV’.
We show four typical type $\mathcal{X}_{\rm II}$ ISSV solutions in Figure 3
with relevant parameters summarized in Table 3.
Figure 3: Four typical type $\mathcal{X}_{\rm II}$ ISSV solutions in $-v(x)$
versus $x$ profile. Values of their $x_{*}$ are marked by the grid lines of
$x-$axis in panel A. The inner box presents details of two curves near
$x\rightarrow 0^{+}$ in the same line type as corresponding curves in Panel A.
Panel B and C give the four solutions in $\log\alpha$ versus $x$ profile.
Panel B presents two solution curves of II’ condition marked with
$x_{*}=1.288$ (heavy solid line) and $x_{*}=1.49$ (light solid line)
respectively and panel C shows two solution curves of IV’ condition marked
with $x_{*}=2.35$ (light dash line) and $x_{*}=2.5$ (heavy dash line)
respectively. The density jumps at $x_{0}$ are obviously much sharper in
condition II’ than in condition IV’ (compare the numbers along $y-$axes).
Table 3: Parameters of several typical type $\mathcal{X}_{\rm II}$ ISSV
solutions. Five solutions under two conditions in Section 3.3.1 are listed
below.
Condition $x_{*}$ $x_{0}$ $\alpha_{0}$ $V$ $A$ II’ $1.288$ $2.8\times 10^{-4}$
$4\times 10^{8}$ $1.5$ $4.7$ II’ $1.49$ $0.022$ $9.1\times 10^{3}$ $4.5$ $18$
IV’ $2.35$ $0.304$ $1.61$ $3.30$ $8.54$ IV’ $2.5$ $0.852$ $1.32$ $3.46$ $8.92$
IV’ $2.58$ $1.007$ $1.23$ $9.48$ $3.54$
#### 3.3.3 Interpretations for Type $\mathcal{X}$ ISSV Solutions
We have explored all possible type $\mathcal{X}$ ISSV solutions. Now we offer
interpretations for these solutions. In preceding sections, we used parameter
$x_{*}$ on the SCL as the key parameter to construct type $\mathcal{X}$ ISSV
solutions. This method assures us not to miss any possible ISSV solutions.
However, for ISSV solutions, $x_{0}$ and $\alpha_{0}$ are the most direct
parameters describing properties of central void expansions.
Figure 4: The relation between isothermal void boundary $x_{0}$ and the sonic
point $x_{*}$ at SCL. The ‘solid circle curve’ and the ‘open circle curve’ are
the $\log(x_{0})$ versus $x_{*}$ curves of type $\mathcal{X}_{\rm I}$ and type
$\mathcal{X}_{\rm II}$ void solutions. The key numerals dividing the positive
real $x-$axis into several ranges are indicated by the dash vertical lines
with numerals marked along or at $x-$axis. In regions I, II, III, and IV, we
have four branches for type $\mathcal{X}_{\rm I}$ void solutions. In regions
II’ and IV’, we have two branches for type $\mathcal{X}_{\rm II}$ ISSV
solutions.
The void edge $x_{0}$ is the reduced radius in the self-similar description
and $ax_{0}$ is the expansion speed of the void boundary. We adjust the
$x_{*}$ value at the SCL to search for void solutions and there exists a
certain relationship between $x_{*}$ and $x_{0}$ as expected. Figure 4 shows
these relationships for types $\mathcal{X}_{\rm I}$ and $\mathcal{X}_{\rm II}$
ISSV solutions.
Under conditions I, II, III and given $x_{*}$, the corresponding $x_{0}$ runs
from $0$ to a maximum value and then back to $0$. The maximum values of
conditions I, II and III are $x_{0}=0.042$, $x_{0}=8\times 10^{-4}$ and
$x_{0}=0.9\times 10^{-4}$ which correspond to $x_{*}\approx 0.5$,
$x_{*}\approx 0.89$ and $x_{*}\approx 1.04$, respectivly. For $x_{*}$ under
conditions IV or II’, the corresponding $x_{0}$ increases with $x_{*}$ and the
intervals of $x_{0}$ are $[0^{+},0.027]$ and $[0^{+},0.022]$, respectively.
Under condition IV’, $x_{0}$ increases with $x_{*}$ from $[0^{+},+\infty)$
monotonically. However, when $x_{*}$ is under condition IV’, the corresponding
$x_{0}$ is usually very large (at least 100 times larger) compared to other
type $\mathcal{X}$ ISSV solutions unless $x_{*}$ is near $2.34$. Physically
under condition IV’, isothermal voids usually expand at a relatively high
speed. Based on the value and meaning of $x_{0}$, type $\mathcal{X}$ ISSV
solutions can be divided into two classes: rapid and slow ISSV expansion
solutions. All type $\mathcal{X}_{\rm I}$ ISSV solutions and type
$\mathcal{X}_{\rm II}$ ISSV solutions under condition II’ belong to slow void
expansion solutions. Type $\mathcal{X}_{\rm II}$ ISSV solutions under
condition IV’ belong to rapid void expansion solutions.
Parameter $\alpha_{0}$ represents the reduced density $\alpha$ at the void
boundary. Figures 2 and 3 and Tables 2 and 3 clearly indicate that isothermal
voids, described by type $\mathcal{X}$ ISSV solutions, are all surrounded by
dense mass shells and the gas density around voids attenuates monotonically
with increasing radius. So there are classes of voids that evolve with fairly
sharp edges but without shocks. Nevertheless, sharp edge around voids is not a
general property of all void solutions. Expanding voids with shocks can be
surrounded by quasi-smooth edges (never smooth edges as shown in Appendix A).
In Section 4, we will show that $\alpha_{0}$ is an important parameter which
may reveal the mechanism that generates and sustains a void. Large
$\alpha_{0}$ requires very energetic mechanisms against the high inward
pressure across the boundary. So type $\mathcal{X}$ voids may be difficult to
form because all type $\mathcal{X}$ voids have dense boundaries.
Figures 2 and 3 and Tables 2 and 3 clearly show that all type
$\mathcal{X}_{\rm II}$ void solutions and type $\mathcal{X}_{\rm I}$ ISSV
solutions under conditions III and IV describe isothermal voids surrounded by
gas envelopes in expansion (i.e. velocity parameter $V$ at
$x\rightarrow\infty$ are positive). Astrophysical void phenomena are usually
coupled with outflows (i.e. winds). Our ISSV solutions indicate that rapidly
expanding voids must be surrounded by outflows.
Type $\mathcal{X}_{\rm I}$ ISSV solutions under conditions I and II describe
voids surrounded by contracting envelopes, although under these two conditions
the voids expand very slowly ($\leq 0.042a$, see subsection 3.3.1) and are
surrounded by very dense shells (see Table 2).
Outflows and inflows are possible as indicated by type $\mathcal{X}$ ISSV
solutions, but no static shell is found in type $\mathcal{X}$ ISSV solutions.
In the following section, we show voids with shocks being surrounded by static
envelopes.
A clarification deems appropriate here that division of the $x>0$ axis in
subsection 3.3.1 is actually not precise. In subsection 3.3.1, we divide
$(0,+\infty)$ into six intervals by five points $x_{1}$ to $x_{5}$ and
$x_{1}$, $x_{3}$ and $x_{4}$ are the cross points at the SCL of Lou & Shen
type2-type1 solution. We note that they are only the first three examples of
an infinite number of discrete solutions that cross the SCL smoothly twice via
type 2 derivative first at a smaller $x$ and then type 1 derivative at a
larger $x$ (Lou & Shen 2004). Our numerical computations show that the fourth
type2-type1 solution will pass the SCL smoothly at $x_{*}(1)\approx 4\times
10^{-8}$ and $x_{*}(2)=0.97$, so there should be another regime of
$0.97<x<1=x_{2}$ inside condition II. The first four right cross points of
type2-type1 solutions are $1.65$, $0.743$, $1.1$ and $0.97$. By inference, the
cross points of the fifth and following type2-type1 solutions will be narrowly
located around $x=1$. When the infinite number of type2-type1 solutions are
taken into account, there will be fine structures around $x=1$ in subsection
3.3.1 and Figure 4. However, the solution behaviours of crossing the SCL
smoothly under condition of fine structure are like those under conditions II,
III and IV near $x=1$.
### 3.4 Type $\mathcal{Z}$ Voids: ISSV Solutions with Shocks
Shock phenomena are common in various astrophysical flows, such as planetary
nebulae, supernova remnants, and even galaxy clusters gas (e.g. Castor et al.
1975; McNamara et al. 2005). In this subsection, we present type $\mathcal{Z}$
ISSV solutions, namely, self-similar void solutions with shocks. Equations
(16) and (17) are mass and momentum conservations. The isothermality is a
strong energy requirement. In our isothermal model, an example of polytropic
process, the energy process is simplified. This simplification gives
qualitative or semi-quantitative description of the energy process for a shock
wave. By introducing parameter $\tau$ for the temperature difference after and
before a shock, we can describe more classes of shocks (Bian & Lou 2005).
The basic procedure to construct a spherical ISSV solution with shocks is as
follows. Given $(x_{0},\ v_{0}=x_{0},\ \alpha_{0})$ at the void boundary, we
can integrate ODEs (8) and (9) outwards from $x_{0}$; in general, numerical
solutions cannot pass through the SCL smoothly (if they do, they will be
referred to as type $\mathcal{X}$ ISSV solutions); however, with an outgoing
shock, solutions can readily cross the SCL (e.g. Tsai & Hsu 1995; Shu et al.
2002; Shen & Lou 2004; Bian & Lou 2005); finally global ($x_{0}<x<+\infty$)
solutions can be constructed by a combination of integration from $x_{0}$ to
$x_{ds}$, shock jump and integration from $x_{us}$ to $+\infty$, where
$x_{ds}$ and $x_{us}$ are defined in subsection 2.2 as the radial expanding
velocity of a shock on the downstream and upstream sides, respectively.
A typical ISSV solution with a shock has four degrees of freedom (DOF) within
a sensible parameter regime. For example, we need independent input of
$x_{0}$, $\alpha_{0}$, $x_{ds}$ and $\tau$ to determine an ISSV solution with
a shock, while the degree of freedom for type $\mathcal{X}$ solutions is one
(i.e. $x_{*}$ plus the type of eigen-derivative crossing the SCL are enough to
make a type $\mathcal{X}$ ISSV solution). When we consider the simplest
condition that $\tau=1$, the DOF of a void solution with shocks is three. So
infinite void shock solutions exist. By fixing one or two parameters, we can
enumerate all possible values of the other parameter to obtain all possible
ISSV solutions. For example, by fixing velocity parameter $V$, we can adjust
mass parameter $A$ and $x_{us}$ to explore all void solutions (see equation
15); following this procedure, $x_{0}$, $\alpha_{0}$ and $x_{ds}$ are
determined by $V$, $A$ and $x_{us}$. There is a considerable freedom to set up
an ISSV solution with shocks. In astrophysical flows, we would like to learn
the expansion speed of a void, the density surrounding a void and the radial
speed of gas shell at large radii. We then choose one or two parameters in
$x_{0}$, $\alpha_{0}$ and $V$ as given parameters to search for ISSV solutions
by changing other parameters such as $x_{us}$.
In Section 3.4.1, we will first consider the simple case: equi-temperature
shock void (i.e. $\tau=1$), and refer to as void solutions with equi-
temperature shocks or type $\mathcal{Z}_{\rm I}$ ISSV solutions. Several type
$\mathcal{Z}_{\rm I}$ voids with different behaviours near void boundaries and
outer envelopes (a static envelope, outflows and inflows) will be presented.
Phase diagram matching method will be described and extended to the so-called
‘phase net’, with the visual convenience to search for ISSV solutions with
more DOF. Section 3.4.2 presents type $\mathcal{Z}_{\rm II}$ ISSV solutions:
void solutions with two-soundspeed shocks (i.e. $\tau>1$).
#### 3.4.1 Type $\mathcal{Z}_{\rm I}$ Void Solutions: Voids
with Same-Soundspeed Shocks
For an equi-temperature shock, we have $\tau=1$ and thus $x_{ds}=x_{us}=x_{s}$
(see Section 2.2). As already noted, the DOF of $\mathcal{Z}_{\rm I}$ void
solutions is three. We can use $\\{x_{0},\ \alpha_{0}$, $\ x_{s}\\}$ to
construct type $\mathcal{Z}_{\rm I}$ ISSV solutions.
We have freedom to set the condition $(x_{0},\ \alpha_{0})$ at the void edge
to integrate outwards. Before reaching the SCL, we set an equi-temperature
shock at a fairly arbitrary $x_{s}$ to cross the SCL. We then combine the
integrations from $x_{0}$ to $x_{s}^{-}$ and from $x_{s}^{+}$ to $+\infty$ by
a shock jump to form a type $\mathcal{Z}_{\rm I}$ ISSV solution with a shock.
We emphasize that under type $\mathcal{Z}_{\rm I}$ condition, the insertion of
an equi-temperature shock does assure that void solutions jump across the SCL.
Physics requires that at every point $(x,\ v(x),\ \alpha(x))$ of any void
solution, there must be $x>v(x)$ for a positive mass. Equation (21) indicates
that across the equi-temperature shock front, the product of two negative
$v_{d}-x_{s}$ and $v_{u}-x_{s}$ makes 1. In our model, $x_{s}-v_{d}<1$ so
$x_{s}-v_{u}$ must be larger than $1$. So the downstream and upstream are
separated by the SCL. However, in type $\mathcal{Z}_{\rm II}$ condition with
$\tau>1$, this special property may not always hold on. We shall require such
a separation across the SCL as a necessary physical condition.
Figure 5 shows several type $\mathcal{Z}_{\rm I}$ ISSV shock solutions with
void expansion at half sound speed and $0.03$ times sound speed. From this
figure, we know that even the voids expand at the same speed, with different
density $\alpha_{0}$ near the void edge and the radial velocities of the shock
wave ($x_{s}$), they can have outer envelopes of various dynamic behaviours.
From values of $v(x)$ and $\alpha(x)$ at large $x$ (say 10), we can estimate
$V$ and $A$. Different from type $\mathcal{X}$ ISSV solutions, some type
$\mathcal{Z}_{\rm I}$ ISSV shock solutions have outer envelopes with a
negative $V$ (e.g. curves $1^{\prime}$ and $2^{\prime}$ in Fig. 5), that is,
contracting outer envelopes. Numerical calculations show that voids with
contracting envelopes usually expand very slowly (voids $1^{\prime}$ and
$2^{\prime}$ in Fig. 5 expand at $0.03a$).
Figure 5: We show six type $\mathcal{Z}_{\rm I}$ void shock solutions. Four of
them have same $x_{0}=0.5$ but different values of $\alpha_{0}$ marked by
numbers $1,2,3,4$. The other two have same $x_{0}=0.03$ but different values
of $\alpha_{0}$ marked by number
$1^{\prime},2^{\prime},3^{\prime},4^{\prime}$. Panel A above presents $-v(x)$
versus $x$ profiles and panel B below shows $\log\alpha$ versus $x$ profiles.
Values of the two free parameters $(\alpha_{0},\ x_{s})$ are $(50,\ 0.8)$,
$(5,\ 1.5)$, $(1,\ 2.3)$, $(0.1,\ 2.5)$ for curves $1,2,3,4$, respectively;
and $(100,\ 0.7)$, $(0.01,\ 2)$ for curves $1^{\prime}$ and $2^{\prime}$,
respectively. The dotted curves in both panels are Sonic Critical Line. Figure
6: Several type $\mathcal{Z}_{I}$ ISSV solutions with quasi-smooth edges in
light solid curves with the same values of $x_{0}=0.5$ and
$\alpha_{0}=10^{-3}$ but different shock speed $x_{s}$ from $2.5$ to $5.0$
every $0.5$ in step. Panel A presents $\alpha(x)$ versus $x$ profiles and
panel B shows $-v(x)$ versus $x$ profiles. The dotted and dash lines in panel
B are the SCL and the ZML, respectively.
In addition to shocks and outer envelope dynamics, another important
difference between type $\mathcal{Z}_{I}$ and type $\mathcal{X}$ ISSV
solutions lies in behaviours of shells surrounding the void edge. From Section
3.3 (see Fig. 2), we have known that the reduced mass density of type
$\mathcal{X}$ void solutions must encounter a sharp jump and decrease
monotonically with increasing $x$. However, Fig. 5 (curves $4$ and
$2^{\prime}$ with the y-axis in logarithmic scales) and Fig. 6 indicate that
with shocks involved, the density of shells near void edges can increase with
increasing $x$. Under these conditions, density jumps from a void to gas
materials around voids appear not to be very sharp. Voids described by
solutions like curves in Figure 6 and curves $4$ and $2^{\prime}$ in Figure 5
have such ‘quasi-smooth’ edges. These solutions can approximately describe a
void with a smooth edge, whose outer shell gradually changes from vacuum in
the void to gas materials, without sharp density jump. Or, they describe a
void with a quasi-smooth edge, whose outer shell gradually changes from vacuum
in the void to gas materials, with a small density jump. Figure 6 clearly
shows that the faster a shock moves relative to void edge expansion, the
higher density rises from void edge to shock.
##### Type $\mathcal{Z}_{\rm I}$ Voids with a SIS Envelope
Void phenomena in astrophysics indicate an expanding void in the centre and
static gas medium around it in the outer space. For example, a supernova
explodes and ejects almost all its matter into space. If the shock explosion
approximately starts from the central core of the progenitor star, the remnant
of the supernova is then approximately spherically symmetric and a void may be
generated around the explosion centre (e.g. Lou & Cao 2008). If the gravity of
the central compact object may be ignored, we then describe this phenomenon as
an expanding spherical void surrounded by a static outer envelope. The
analysis of Section 3.3.3 indicates that all type $\mathcal{X}$ ISSV solutions
cannot describe this kind of phenomena. However with rebound shocks, it is
possible to construct a model for an expanding void surrounded by a static SIS
envelope.
Shu (1977) constructed the expansion-wave collapse solution (EWCS) to describe
a static spherical gas with an expanding region collapsing towards the centre.
In fact, EWCS outer envelope with $x>1$ is the outer part of a SIS solution
(see equation 11). We now construct several ISSV solutions with an outer SIS
envelope. An outer SIS envelope has fixed two DOF of a type $\mathcal{Z}_{\rm
I}$ ISSV solution, that is, $V=0$ and $A=2$, so there is only one DOF left. A
simple method is to introduce a shock at a chosen point $x_{s}$ of EWCS
solution except $(x=1,\ v=0,\ \alpha=1)$ (we emphasize that only one point
$(x=1,\ v=0,\ \alpha=1)$ of the EWCS solution is at the SCL and all the other
points lay on the upper right to the SCL in the $-v(x)$ versus $x$ profile)
and make the right part of EWCS solution the upstream of a shock, then we can
obtain $(v_{ds},\ \alpha_{ds})$ on the downstream side of a shock. If the
integration from $(x_{s},\ v_{ds},\ \alpha_{ds})$ leftward touches the ZML at
$x_{0}$, a type $\mathcal{Z}_{\rm I}$ ISSV solution with a static outer
envelope is then constructed.
We introduce the $\alpha-v$ phase diagram to deal with the relationship among
the free parameters and search for eigensolutions of ODEs (8) and (9). Hunter
(1977) introduced this method to search for complete self-similar
eigensolutions of two ODEs. Whitworth & Summer (1985) used this method to
combine free-fall solutions and LP solutions in the centre with certain
asymptotic solutions at large radii. Lou & Shen (2004) applies this method to
search for eigensolutions of ODEs (8) and (9) which can cross the SCL twice
smoothly. In the case of Lou & Shen (2004), the DOF is 0. So there is an
infinite number of discrete eigensolutions.
Figure 7: The phase diagram of $\alpha$ versus $v$ at a chosen meeting point
$x_{F}=0.3$ for match of the SIS solution (e.g., Shu 1977) and type
$\mathcal{Z}$ void shock solutions. Each open circle symbol joint by a heavy
solid curve denotes an integration from a chosen $x_{s}$ (marked besides each
open circle) towards $x_{F}<x_{s}$. We impose the equi-temperature shock
conditions at $x_{s}$ from $0.32$ to $1.36$ for the outer SIS solution and
then integrate from $x_{s}$ back to $x_{F}$ to get $[v(x_{F}),\
\alpha(x_{F})]$ as marked in the phase diagram. The change of $x_{s}$
naturally leads to a phase curve shown here by the heavy solid curve. We
choose the range of $x_{s}$ to be $[0.32,\ 1.36]$ because $x_{s}$ must be
larger than $x_{F}$ and a larger $x_{s}$ than $1.36$ will give rise to
solution curves encountering the SCL at $x\geq x_{F}=0.3$. Meanwhile, two
“phase nets”, made of the light solid curves and dotted curves and connected
by a medium heavy solid curve, are actually generated by integrations from
chosen $(x_{0},\ v_{0}=x_{0},\ \alpha_{0})$. Each solid curve in the two nets
(including the connecting medium heavy solid curve) is an equal$-x_{0}$ curve,
that is, every point in the same solid curve corresponds to the same value of
$x_{0}$ noted besides each equal$-x_{0}$ curve; and each dotted curve in the
two nets is an equal$-\alpha_{0}$ curve with the value of $\alpha_{0}$ in
boldface noted besides each curve. For the lower right net, the points in the
net correspond to an initial condition of $(x_{0},\
\alpha_{0})\in\\{x_{0}|x_{0}\in[0,0.29]\\}\times\\{\alpha_{0}|\alpha_{0}\in[5,10]\\}$;
the value of $x_{0}$ is $0$, $0.15$, $0.2$, $0.25$, $0.29$ respectively from
the left equal-$x_{0}$ solid curve to the right one and the value of
$\alpha_{0}$ is $5$, $7$, $9$, $10$ respectively from the bottom
equal-$\alpha_{0}$ dotted curve to the top one. For the upper left net, the
points in the net correspond to an initial condition of $(x_{0},\
\alpha_{0})\in\\{x_{0}|x_{0}\in[0,0.1]\\}\times\\{\alpha_{0}|\alpha_{0}\in[500,800]\\}$;
the value of $x_{0}$ is $0$, $0.04$, $0.06$, $0.08$, $0.10$ respectively from
the right equal-$x_{0}$ solid curve to the left one and the value of
$\alpha_{0}$ is $500$, $600$, $700$, $800$ respectively from the top
equal-$\alpha_{0}$ dotted curve to the bottom one. The medium solid curve is
an equal-$x_{0}$, too, with $x_{0}=0$ and $\alpha_{0}=10$, $50$, $100$, $200$,
$350$ and $500$ from lower right to left. This medium solid curve shows the
trend of phase curves as we increase $\alpha_{0}$ value in large steps.
For type $\mathcal{Z}_{\rm I}$ ISSV shock solutions with a static SIS outer
envelope, the DOF is one. We insert a shock at $x_{s}$ in the SIS and then
integrate inwards from $x_{s}$ to a fixed meeting point $x_{F}$. Adjusting the
value of $x_{s}$ will lead to a phase curve $[v(x_{F})^{+},\
\alpha(x_{F})^{+}]$ in $\alpha$ versus $v$ phase diagram. Meanwhile, an
outward integration from a chosen void boundary condition $(x_{0},\
v_{0}=x_{0},\ \alpha_{0})$ reaches a phase point $[v(x_{F})^{-},\
\alpha(x_{F})^{-}]$ at $x_{F}$. Varying values of $x_{0}$ or $\alpha_{0}$ will
lead to another phase curve $[v(x_{F})^{-},\ \alpha(x_{F})^{-}]$ in the
$\alpha$ versus $v$ phase diagram. We note that changing both values of
$x_{0}$ and $\alpha_{0}$ will result in a “phase net” (i.e. a two-dimensional
mesh of phase curves) in $\alpha$ versus $v$ diagram (see Fig. 7). If such a
“phase net” of $(x_{0},\ \alpha_{0})$ and the phase curve of $x_{s}$ share
common points (usually, there will be an infinite number of common points
continuously as such “phase net” is two dimensional), type $\mathcal{Z}_{\rm
I}$ ISSV shock solutions with a static SIS envelope can be constructed.
Figure 7 presents the phase diagram at a meeting point $x_{F}=0.3$ to search
for type $\mathcal{Z}_{\rm I}$ ISSV shock solutions with an outer SIS
envelope. Note that part of the phase curve falls into the phase net,
revealing that an infinite number of type $\mathcal{Z}_{\rm I}$ ISSV shock
solutions with outer SIS envelope indeed exist continuously. Shown by Figure
7, numerical results suggest that when shock position $x_{s}>0$ is less than
$0.62$ or larger than $1.335$, there is at least one type $\mathcal{Z}_{\rm
I}$ ISSV that can exist in the downstream side of a shock. However, if a shock
expands at a radial velocity between $0.62a$ and $1.335a$, it is impossible
for a type $\mathcal{Z}_{\rm I}$ ISSV to exist inside a shock with a SIS
envelope. Table 4 contains values of $x_{0}$, $\alpha_{0}$ and $x_{s}$ of some
typical type $\mathcal{Z}_{\rm I}$ ISSV shock solutions with a SIS envelope.
Figure 8 is a phase diagram showing how $x_{0}$ and $\alpha_{0}$ are evaluated
with $x_{s}$ changing to construct a type $\mathcal{Z}_{\rm I}$ ISSV shock
solutions with an outer SIS envelope.
Table 4: Values of $x_{0}$, $\alpha_{0}$ and $x_{s}$ for several type
$\mathcal{Z}_{\rm I}$ void shock solutions with an outer static SIS envelope
are summarized here
$x_{0}$ $\alpha_{0}$ $x_{s}$ $0.018$ $4.3\times 10^{6}$ $0.02$ $0.063$
$1.5\times 10^{4}$ $0.1$ $0.09$ $1.1\times 10^{3}$ $0.26$ $0.077$ $5.9\times
10^{2}$ $0.4$ $0.01$ $9.5\times 10^{2}$ $0.62$ $<0.005$ $7.9$ $1.335$ $0.12$
$7.5$ $1.338$ $0.27$ $6.3$ $1.36$ $0.65$ $4.1$ $1.5$ $2.65$ $2.3$ $3.00$
Figure 8: The phase diagram of $\log\alpha_{0}$ versus $\log x_{0}$ shows the
relationship among $x_{s}$, $x_{0}$ and $\alpha_{0}$ of type $\mathcal{Z}_{\rm
I}$ void shock solutions with a static SIS envelope. The DOF of these three
parameters is only one; i.e. given arbitrary one of the three, the other two
parameters are determined in constructing a type $\mathcal{Z}_{\rm I}$ void
shock solutions with a static SIS envelope. For any point in the two curves,
$x_{0}$ and $\alpha_{0}$ indicated by its $x$ and $y$ coordinates with $x_{s}$
marked, correspond to a type $\mathcal{Z}_{\rm I}$ void shock solution. The
upper left solid curve with its data points in asterisk symbol corresponds to
the condition $x_{s}<0.62$ and the lower dotted curve with its data points in
open circle corresponds to the condition $x_{s}>1.335$. The first condition
referred to as Class I gives its largest $x_{0}=0.09$ when $x_{s}=0.26$.
Although $x_{0}$ for the second condition referred to as Class II ranges along
the entire real axis, it usually takes a relative large value; moreover, in
Class II solutions, the reduced density $\alpha_{0}$ at the void boundary is
in the order of unity.
All of Fig. 7, Fig. 8 and Table 4 clearly indicate type $\mathcal{Z}_{\rm I}$
ISSV shock solutions with a SIS envelope can be generally divided into two
classes according to $x_{s}$. Class I type $\mathcal{Z}_{\rm I}$ void shock
solutions with an outer static SIS envelope have $x_{s}<0.62$ usually with a
smaller value of $x_{0}$ and a higher value of $\alpha_{0}$. Class II
solutions have $x_{s}>1.335$ usually with a larger value of $x_{0}$ and a
medium value of $\alpha_{0}$. By a numerical exploration, the maximum of
$x_{0}$ of Class I solutions is $\sim 0.09$ for $x_{s}=0.26$ and
$\alpha_{0}=1.1\times 10^{3}$; these voids expand at a low speed of $<0.1a$;
and the reduced density $\alpha_{0}$ at the void boundary is usually
$>10^{2}$, indicating a sharp edge density peak. Finally, we note that Class I
voids involve shocks expanding at subsonic speeds. In this situation, the
outer region $x>x_{s}$ is not completely static. The SIS envelope only exists
at $x\geq 1$, and the region between $x_{s}$ and $1$ is a collapse region (see
two class I ISSV shock solutions 1 and 2 in Fig. 9). While in Class II ISSV
shock solutions, shock expands supersonically and with $x_{0}$ usually
relatively large. So the upstream side of a shock in Class II ISSV solutions
is static (see two class II ISSV solutions 3 and 4 shown in Fig. 9). In rare
situations, $x_{0}$ can be small (e.g. $x_{s}<1.4$) and $\alpha_{0}$ neither
large nor small, indicating that Class II voids have moderately sharp edges.
Figure 9: Four typical type $\mathcal{Z}_{\rm I}$ void shock solutions with
static SIS envelope. The two heavy solid curves in both panels are the Class I
type $\mathcal{Z}_{\rm I}$ void solutions with a SIS envelope and a subsonic
shock (i.e., $x_{s}<1$), and the two light solid curves in both panels are the
Class II type $\mathcal{Z}_{\rm I}$ void solutions with an outer SIS envelope
and a supersonic shock (i.e., $x_{s}>1$). Panel A presents $-v(x)$ versus $x$
profiles. Panel B presents $\alpha$ versus $x$ profiles using a logarithmic
scale along the $y-$axis. The key data $(x_{0},\ \alpha_{0},\ x_{s})$ of these
solutions are $(0.09,\ 1.1\times 10^{3},\ 0.26)$, $(0.01,\ 9.5\times 10^{2},\
0.62)$, $(0.12,\ 7.5,\ 1.338)$, and $(0.65,\ 4.1,\ 1.50)$ for curves 1, 2, 3,
and 4, respectively. The dash curves in both panels are part of the EWCS
solution.
##### Type $\mathcal{Z}_{\rm I}$ voids with expanding envelopes: breezes,
winds, and outflows
In our ISSV model, we use parameters $V$ and $A$ to characterize dynamic
behaviours of envelopes. Equation (15) indicates that $V>0$ describes an
expanding envelope at a finite velocity of $Va$, and a larger $A$ corresponds
to a denser envelope. For $V=0$, the expansion velocity vanishes at large
radii, corresponding to a breeze; a smaller $A$ than $2$ is required to make
sure an outer envelope in breeze expansion. For $V>0$, the outer envelope is a
wind with finite velocity at large radii.
We apply the similar method to construct type $\mathcal{Z}_{\rm I}$ voids with
expanding envelopes as we deal with type $\mathcal{Z}_{\rm I}$ voids with
outer SIS envelopes. The difference between the two cases is that $V$ and $A$
are allowed to be different from $V=0$ and $A=2$. In this subsection, we
usually choose a meeting point $x_{F}$ between $x_{0}$ and $x_{s}$. Then by
varying $x_{0}$ and $\alpha_{0}$, we obtain a phase net composed by
$[v(x_{F})^{-},\ \alpha(x_{F})^{-}]$. Given $V$, we adopt $A$ and integrate
ODEs (8) and (9) from large $x$ towards $x_{s}$. After setting a shock at
$x_{s}$, we integrate ODEs towards $x_{F}$. By varying $A$ and $x_{s}$, we
obtain a phase net composed by $[v(x_{F})^{+},\ \alpha(x_{F})^{+}]$. The
overlapped area of two phase nets reveals the existence of type
$\mathcal{Z}_{\rm I}$ ISSV with dynamic envelopes characterized by $V$ and
$A$.
Figure 10: Three typical type $\mathcal{Z}_{\rm I}$ void shock solutions with
different outer envelopes. Panel A presents $-v(x)$ versus $x$ profiles. Panel
B presents $\alpha(x)$ versus $x$ profiles in a logarithmic scale along the
$y-$axis. The heavy solid curve $1$ gives a type $\mathcal{Z}_{\rm I}$ void
shock solution with a quite thin breeze outer envelope whose $(x_{0},\
\alpha_{0},\ x_{s},\ V,\ A)=(1.62,\ 0.046,\ 2.6,\ 0,\ 0.1)$. The heavy dash
curve $2$ gives a type $\mathcal{Z}_{\rm I}$ void shock solution with a wind
outer envelope whose $(x_{0},\ \alpha_{0},\ x_{s},\ V,\ A)=(0.25,\ 28.2,\
0.90,\ 0.2,\ 2.5)$. The heavy dotted curve $3$ gives a type $\mathcal{Z}_{\rm
I}$ void shock solution with an accretion outer envelope whose $(x_{0},\
\alpha_{0},\ x_{s},\ V,\ A)=(0.50,\ 18.8,\ 1.00,\ 0,\ 2.50)$. The monotonic
dotted curves in both panels stand for the SCL.
Figure 10 gives two examples of type $\mathcal{Z}_{\rm I}$ voids with breeze
and wind. We emphasize that $A>2$ is required in such ISSV solutions with an
outer envelope wind and subsonic shock. Actually, the larger the velocity
parameter $V$ is, the smaller the mass parameter $A$ is needed. Large $A$ is
required to guarantee that the upstream region of a global solution is on the
upper right part of the SCL when the shock moves subsonically (i.e.
$x_{s}<1$). Physically, if an ISSV is surrounded by a subsonic shock wave, the
wind outside needs to be dense enough.
##### Type $\mathcal{Z}_{\rm I}$ voids with contracting outer envelopes:
accretions and inflows
We explore ISSV with contracting envelopes, such as accretion envelopes. Type
$\mathcal{X}_{\rm I}$ voids under conditions I and II have contracting
envelopes. These type $\mathcal{X}_{\rm I}$ voids are all surrounded by very
dense shells with density decreasing with increasing radius. With shocks
involved, central voids can have envelopes of various properties. Type
$\mathcal{Z}_{\rm I}$ voids with contracting outer envelopes are also studied.
To have a contracting envelope, the velocity parameter $V$ should be negative
or approach $0^{-}$. A negative $V$ and a positive $A$ describe an outer
envelope inflowing at a velocity of $aV$ from large radii. For $V=0$ and
$A>2$, an outer envelope has an inflow velocity vanishing at large radii (Lou
& Shen 2004; Bian & Lou 2005). Figure 10 gives examples of type
$\mathcal{Z}_{\rm I}$ voids with accreting outer envelopes.
#### 3.4.2 Type $\mathcal{Z}_{\rm II}$ ISSV Solutions: Voids Surrounded by
Two-Soundspeed Shocks in Envelopes
In the previous section, we explored type $\mathcal{Z}_{\rm I}$ ISSV solutions
featuring the equi-temperature shock. There, $\tau=1$ indicates the same sound
speed $a$ across a shock. The isothermal sound speed $a$ can be expressed as
$a=\left(\frac{p}{\rho}\right)^{1/2}=\left[\frac{(Z+1)k_{B}}{\mu}T\right]^{1/2}\
,$ (23)
where $k_{B}$ is Boltzmann’s constant, $\mu$ is the mean atomic mass and $Z$
is the ionization state. $Z=0$ corresponds to a neutral gas and $Z=1$
corresponds to a fully ionized gas. In various astrophysical processes, shock
waves increase the downstream temperature, or change the proportion of gas
particles; moreover, the ionization state may change after a shock passage
(e.g. champagne flows in HII regions, Tsai & Hsu 1995; Shu et al. 2002; Bian &
Lou 2005; Hu & Lou 2008). Such processes lead to two-soundspeed shock waves
with $\tau>1$. In this section, we consider $\tau>1$ for type
$\mathcal{Z}_{\rm II}$ ISSV shock solutions with two-soundspeed shocks. Global
$\mathcal{Z}_{\rm II}$ void solutions have temperature changes across shock
waves while both downstream and upstream sides remain isothermal, separately.
With a range of $\tau>1$, it is possible to fit our model to various
astrophysical flows.
For $\tau>1$, the DOF of type $\mathcal{Z}_{\rm II}$ ISSV shock solutions is
four, i.e. one more than that of type $\mathcal{Z}_{\rm I}$ ISSV shock
solutions with $\tau=1$. We will not present details to construct type
$\mathcal{Z}_{\rm II}$ ISSV shock solutions as they differ from the
corresponding type $\mathcal{Z}_{\rm I}$ ISSV shock solution only in the
quantitative sense. General properties such as the behaviours near the void
boundary and the outer envelope of type $\mathcal{Z}_{\rm II}$ ISSV shock
solutions remain similar to those of type $\mathcal{Z}_{\rm I}$ ISSV shock
solutions. We present examples of typical type $\mathcal{Z}_{\rm II}$ ISSV
shock solutions in Figure 11.
Figure 11: Six type $\mathcal{Z}$ ISSV solutions with shocks. Panel A presents
$-v(x)$ versus $x$ profiles and panel B presents $\alpha(x)$ versus $x$
profiles. The heavy solid curves labelled by their corresponding $\tau$ in
both panels are type $\mathcal{Z}$ ISSV shock solutions with $x_{s}=2$ and
$\tau=1,\ 1.1,\ 1.2$ respectively. The $\tau=1$ case is a type
$\mathcal{Z}_{\rm I}$ ISSV shock solution and the latter two with $\tau>1$ are
type $\mathcal{Z}_{\rm II}$ ISSV shock solutions. The heavy dash curves
labelled by their corresponding $\tau$ in both panels are solutions with
reduced velocity $V=0$ and $A=1,\ 1.5,\ 2.5$ respectively. The one with
$A=2.5$ has an envelope accretion and the other two have envelope breezes. The
relevant data of these six solutions are summarized in Table 5. Shock jumps of
type $\mathcal{Z}_{\rm II}$ ISSV solutions do not appear vertical as those of
type $\mathcal{Z}_{\rm I}$ ISSV shock solutions (e.g. Bian & Lou 2005) because
of different sound speeds across a shock front and thus different scales of
reduced radial $x\equiv r/(at)$. Table 5: Parameters of several type
$\mathcal{Z}_{\rm I}$ and $\mathcal{Z}_{\rm II}$ ISSV shock solutions shown in
Figure 11
$x_{0}$ $\alpha_{0}$ $x_{ds}$ $x_{us}$ $\tau$ $V$ $A$ $1.45$ $2.72$ $2$ $2$
$1$ $0$ $2$ $1.12$ $2.81$ $1.82$ $2$ $1.1$ $0$ $2$ $0.74$ $3.51$ $1.67$ $2$
$1.2$ $0$ $2$ $2.65$ $2.26$ $3$ $3$ $1$ $0$ $2$ $1.21$ $1.87$ $2$ $3$ $1.5$
$0$ $2$ $0.3$ $0.1$ $2.2303$ $4.7528$ $2.131$ $0$ $1$ $0.3$ $0.1$ $2.2064$
$5.8898$ $2.6694$ $0$ $1.5$ $0.3$ $0.1$ $2.1886$ $7.6767$ $3.5076$ $0$ $2.5$
## 4 Astrophysical Applications
### 4.1 The role of self-gravity in gas clouds
Earlier papers attempted to build models for hot bubbles and planetary nebulae
(e.g. Weaver et al. 1977; Chevalier 1997a) without including the gas self-
gravity. In reference to Chevalier (1997a) and without gravity, our nonlinear
ODEs (8) and (9) would then become
$\left[(x-v)^{2}-1\right]\frac{dv}{dx}=-\frac{2}{x}(x-v)\ ,$ (24)
$\left[(x-v)^{2}-1\right]\frac{1}{\alpha}\frac{d\alpha}{dx}=-\frac{2}{x}(x-v)^{2}\
.$ (25)
ODEs (24) and (25) allow both outgoing and inflowing outer envelopes around
expanding voids.333Actually, Chevalier (1997a) did not consider physically
possible situations of contracting outer envelopes. Self-similar solutions of
ODEs (24) and (25) cannot be matched via shocks with a static solution of
uniform mass density $\rho$. For comparison, the inclusion of self-gravity can
lead to a static SIS. In some circumstances, there may be no apparent problem
when ODEs (24) and (25) are applied to describe planetary nebulae because AGB
wind outer envelope may have finite velocity at large radii. However, in the
interstellar bubbles condition, a static ISM should exist outside the
interaction region between stellar wind and ISM. In Weaver et al. (1977), a
static uniformly distributed ISM surrounding the central bubble was
specifically considered. In our ISSV model, a single solution with shock is
able to give a global description for ISM shell and outer region around an
interstellar bubble. Naturally, our dynamic model with self-gravity is more
realistic and can indeed describe expanding voids around which static and
flowing ISM solutions exist outside an expanding shock front (Figs. 9 and 11;
Tables 4 and 5).
For gas dynamics, another problem for the absence of gravity is revealed by
asymptotic solution (15), where the coefficient of $x^{-4}$ term in
$\alpha(x)$ differs by a factor of $1/2$ between models with and without self-
gravity, and the expression for $v(x)$ at large radii differs from the
$x^{-1}$ term. These differences lead to different dynamic evolutions of voids
(see Section 2.1 for details).
Under certain circumstances, the subtle difference between the shell
behaviours with and without gas self-gravity may result in quite different
shell profiles around void regions. We illustrate an example for such
differences in Fig. 12 with relevant parameters for the two solutions therein
being summarized in Table 6. Given the same asymptotic condition of $V=0.2$
and $A=1$ at large radii (Chevalier 1997a shows only $A=1$ case), the
behaviours of such voids differ from each other significantly with and without
self-gravity, although both of them fit asymptotic condition (15) well. With
gravity, the given boundary condition and shock wave lead to a thin, very
dense shell with a sharp void edge while the same condition leads to a quasi-
smooth edge void without gravity. In Section 4.2, we will show that these two
types of voids reveal different processes to generate and maintain them. In
certain situations, void models without gravity might be misleading.
Figure 12: A comparison of ISSV solutions from different ODEs. Panel A
presents $-v(x)$ versus $x$ profiles and panel B presents $\alpha(x)$ versus
$x$ profiles. The light solid curves is a solution of ODEs (8) and (9) with
self-gravity. The heavy dashed curves represent a solution of ODEs (24) and
(25) without self-gravity. An equi-temperature shock with $\tau=1$ is
introduced in both solutions at $x_{s}=2.7025$. The dotted curves in both
panels represent the SCL. Other relevant parameters are contained in Table 6.
Table 6: Parameters for two ISSV solutions shown in Fig. 12
Gravity $x_{0}$ $\alpha_{0}$ $x_{s}$ $V$ $A$ With $2.1$ $0.675$ $2.7025$ $0.2$
$1$ Without $0.37$ $0.191$ $2.7025$ $0.2$ $1$
### 4.2 Formation of ISSV Edge
In our model, the central void region is simply treated as a vacuum with no
materials inside. We need to describe astrophysical mechanisms responsible for
creating such voids and for their local evolution. On the right side of the
void edge $x_{0}^{+}$, the gas density $\alpha(x)>0$ for $x>x_{0}$, while
$\alpha(x)=0$ for $x<x_{0}$. If no materials exist within the void edge, there
would be no mechanism to confine the gas against the inward pressure force
across the void edge. We offer two plausible astrophysical scenarios to
generate and maintain such voids.
#### 4.2.1 Energetics and Pressure Balance
If we allow a tenuous gas to exist within a ‘void’ region to counterbalance
the pressure across the ‘void’ edge for a certain time $t$, transformation (5)
gives the edge density $\rho_{0}$ as
$\rho_{0}(t)=7.16\times 10^{8}\alpha_{0}\left(\frac{10^{3}\hbox{
yr}}{t}\right)^{2}m_{p}\ \ \hbox{cm}^{-3},$ (26)
where $\alpha_{0}\equiv\alpha(x_{0})$ is the reduced mass density at ISSV edge
and $m_{p}$ is the proton mass. For a gas temperature $T$, the isothermal
sound speed $a$ is
$a=2.87\times 10^{7}\bigg{(}\frac{T}{10^{7}\hbox{ K}}\bigg{)}^{1/2}\ \hbox{cm
s}^{-1},$ (27)
where the mean particle mass is that of hydrogen atom. Then the gas pressure
$p_{0}$ just on the outer side of the ISSV edge is
$p_{0}(t)=\rho_{0}a^{2}=0.99\alpha_{0}\left(\frac{10^{3}\hbox{
yr}}{t}\right)^{2}\frac{T}{10^{7}\hbox{ K}}\hbox{ dyne cm}^{-2},$ (28)
where $\rho_{0}$ is the mass density at the ISSV edge. Here, we take the
proton mass as the mean particle mass. Equation (28) gives a pressure scaling
$p_{0}\propto t^{-2}T$ governed by self-similar hydrodynamics.
Within a ‘void’, we may consider that a stellar wind steadily blows gas
outwards with a constant speed. Various astrophysical systems can release
energies at different epochs. For an early evolution, massive stars steadily
blow strong winds into the surrounding ISM (e.g. Mathews 1966; Dyson 1975;
Falle 1975; Castor et al. 1975; Weaver et al. 1977). In the late stage of
evolution, compact stars can also blow fast winds to drive the surrounding gas
(e.g. Chevalier 1997a; Chevalier 1997b). As a model simplification, we assume
that a tenuous gas moving outwards at constant speed $v_{w}$ with temperature
$T_{w}$ (we refer to this as a central wind) may carve out a central ‘void’
and can provide a pressure against the pressure gradient across the ‘void’
boundary; suppose that this central wind begins to blow at time $t=0$. Then
after a time $t$, the radius $r$ of the central wind front is at
$r_{w}=v_{w}t$. By the mass conservation in a spherically symmetric flow, the
mass density of the central wind front at time $t$ is then
$\rho_{w,front}=\frac{\dot{M}}{4\pi v_{w}^{3}t^{2}}\ ,$ (29)
where $\dot{M}$ is the mass loss rate of the central wind. For a contact
discontinuity the ISSV edge between the inner stellar wind front and outer
slower wind, the plasma pressure $p_{w,front}$ of this central fast wind is
$p_{w,front}=\frac{k_{B}T_{w}}{m}\frac{\dot{M}}{4\pi v_{w}^{3}t^{2}}\ ,$ (30)
which can be estimated by
$\frac{4.16\times 10^{-6}\dot{M}}{10^{-6}M_{\odot}\hbox{
yr}^{-1}}\left(\frac{10\hbox{km
s}^{-1}}{v_{w}}\right)^{3}\left(\frac{10^{3}\hbox{yr}}{t}\right)^{2}\frac{T_{w}}{10^{7}\hbox{K}}\hbox{dyne\
cm}^{-2},$ (31)
where $M_{\odot}$ is the solar mass and the mean particle mass is that of
hydrogen atom. We adopt the parameters based on estimates and numerical
calculations (see Section 4.3). By expressions (30) and (31), the plasma
pressure at the central wind front also scales as $p_{w,front}\propto
t^{-2}T_{w}$. By a contact discontinuity between the central wind front and
the ISSV edge with a pressure balance $p_{w,front}=p_{0}$, our self-similar
‘void’ plus the steady central wind can sustain an ISSV evolution as long as
the central stellar wind can be maintained. Across such a contact
discontinuity, the densities and temperatures can be different on both sides.
Equations (31) and (28) also show that, while pressures balance across a
contact discontinuity, the reduced density at ISSV edge $\alpha_{0}$ is
determined by the mass loss rate $\dot{M}$ and central wind radial velocity
$v_{w}$. Back into this steady central stellar wind of a tenuous plasma at a
smaller radius, it is possible to develope a spherical reverse shock. This
would imply an even faster inner wind inside the reverse shock (i.e. closer to
the central star); between the reverse shock and the contact discontinuity is
a reverse shock heated downstream part of the central stellar wind.
Physically, the downstream portion of the central stellar wind enclosed within
the contact discontinuity is expected to be denser, hotter and slower as
compared with the upstream portion of the central stellar wind enclosed within
the reverse shock.
The above scenario may be also adapted to supernova explosions. At the onset
of supernova explosions, the flux of energetic neutrinos generated by the core
collapse of a massive progenitor star could be the main mechanism to drive the
explosion and outflows. This neutrino pressure, while different from the
central wind plasma pressure discussed above, may be able to supply sufficient
energy to drive rebound shocks in a dense medium that trigger supernova
explosions (e.g. Chevalier 1997b; Janka et al. 2007; Lou & Wang 2006, 2007;
Lou & Cao 2007; Arcones et al. 2008; Hu & Lou 2009). Under certain conditions,
such neutrino pressure may even counterbalance the strong inward pressure
force of an extremely dense gas and generate central ‘voids’. We will apply
this scenario and give examples in Section 4.3.
#### 4.2.2 Diffusion Effects
For astrophysical void systems with timescales of sufficient energy supply or
pressure support being shorter than their ages, there would be not enough
outward pressure at void edges to balance the inward pressure of the gas shell
surrounding voids after a certain time. In such situations, the gas shell
surrounding a central void will inevitably diffuse into the void region across
the void edge or boundary. This diffusion effect will affect behaviours of
void evolution especially in the environs of void edge and gradually smear the
‘void boundary’. However, because the gas shell has already gained a steady
outward radial velocity before the central energy supply such as a stellar
wind pressure support fails, the inertia of a dense gas shell will continue to
maintain a shell expansion for some time during which the gas that diffuses
into the void region accounts for a small fraction of the entire shell and
outer envelope. As a result, it is expected that our ISSV solutions remain
almost valid globally to describe void shell behaviours even after a fairly
long time of insufficient central support.
We now estimate the gas diffusion effect quantitatively. We assume that a void
boundary expands at $ax_{0}$ and the void is surrounded by a gas envelope
whose density profile follows a $\rho(r)\propto r^{-2}$ fall-off (asymptotic
solution 15). The gas shell expands at radial velocity $ax$ for $x>x_{0}$. The
central energy supply mechanism has already maintained the void to a radius
$r_{0}$ and then fails to resist the inward pressure across the void edge. We
now estimate how many gas particles diffuse into void region in a time
interval of $\Delta t=r_{0}/(ax_{0})$, during which the void is supposed to
expand to a radius $2r_{0}$. We erect a local Cartesian coordinate system in
an arbitrary volume element in gas shell with the $x-$axis pointing radially
outwards. The Maxwellian velocity distribution of thermal particles gives the
probability density of velocity $\overrightarrow{v}=(v_{x},v_{y},v_{z})$ as
$p_{v}(\overrightarrow{v})\propto\exp{\left[-\frac{(v_{x}-ax_{0})^{2}+v_{y}^{2}+v_{z}^{2}}{2a^{2}}\right]}\
.$ (32)
Define $l$ as the mean free path of particles in the gas shell near void edge.
If a gas particle at radius $r$ can diffuse into radius $\tilde{r}$ without
collisions, its velocity is limited by $(r+v_{x}\Delta t)^{2}+(v_{y}\Delta
t)^{2}+(v_{z}\Delta t)^{2}<\tilde{r}^{2}$ and its position is limited by
$r-\tilde{r}<l$. We simplify the velocity limitation by slightly increasing
the interval as $(r+v_{x}\Delta t)^{2}<\tilde{r}^{2}$ and
$v_{y}^{2}+v_{z}^{2}<(\tilde{r}/\Delta t)^{2}$. We first set $\tilde{r}=r_{0}$
and integrate the ratio of particles that diffuse into radius $r_{0}$ during
$\Delta t$ to total gas shell particles within radius $r_{0}+l$ as
$\\!\\!\\!\\!\frac{\int_{r_{0}}^{r_{0}+l}4\pi
r^{2}\rho(r)dr\int_{\|r+v_{x}\Delta
t\|<r_{0}\&v_{y}^{2}+v_{z}^{2}<(r_{0}/\Delta
t)^{2}}p_{v}d^{3}v}{\int_{r_{0}}^{r_{0}+l}4\pi r^{2}\rho(r)dr\int
p_{v}d^{3}v}\\\ =\frac{1-\exp{\left[-r_{0}^{2}/(2a^{2}\Delta
t^{2})\right]}}{(2\pi)^{1/2}l}\\\
\qquad\qquad\times\int_{r_{0}}^{r_{0}+l}dr\int_{-\frac{r+r_{0}}{a\Delta
t}-x_{0}}^{-\frac{r-r_{0}}{a\Delta
t}-x_{0}}\exp{(-\tilde{v}^{2}/2)}d\tilde{v}\\\
=\frac{1-\exp{(-x_{0}^{2}/2)}}{(2\pi)^{1/2}}\frac{r_{0}}{l}\\\ \qquad\
\times\int_{1}^{1+l/r_{0}}d\tilde{x}\int_{-\tilde{x}-2x_{0}}^{-\tilde{x}}\exp{\left(-\tilde{v}^{2}/2\right)}d\tilde{v}\
,$ (33)
where both $\tilde{x}$ and $\tilde{v}$ are integral elements. We simply set
$x_{0}=1$ and present computational results in Table 7. It is clear that even
the inner energy supply fails to sustain inward pressure for a fairly long
time, there are only very few particles that diffuse into the original void
region, namely, a void remains quite empty.
In the context of PNe, the particle mean free path $l$ may be estimated for
different species under various situations. For an example of PN to be
discussed in Section 4.3.1, $l=1/(n\sigma)=3\times 10^{20}$ cm, where
$n\approx 5000$ cm-3 is the proton (electron) number density in the H II
region and $\sigma=6.65\times 10^{-25}$ cm2 is the electron cross section in
Thomson scattering. One can also estimate cross section of coulomb interaction
between two protons as $\sim 10^{-17}\hbox{ cm}^{2}$ and thus the mean free
path for proton collisions is $\sim 10^{1}3$cm. A PN void radius is $\sim
5\times 10^{17}$ cm. If no inner pressure is acting further as a void expands
to radius $\sim 10^{18}$ cm, gas particles that diffuse into $r\;\lower
4.0pt\hbox{${\buildrel\displaystyle<\over{\sim}}$}\;5\times 10^{17}$ cm only
take up $\sim 6\times 10^{-5}$ of those in the gas shell. However, at the
onset of a supernova explosion, the particle mean free path in the stellar
interior is very small. If the density at a void edge is $\sim 1.2\times
10^{8}$ g cm-3 (see Section 4.3.2), and the scattering cross section is
estimated as the iron atom cross section $4.7\times 10^{-18}$ cm2, the
particle mean free path is only $l\sim 10^{-13}$ cm. Under this condition,
particle that diffuse into a void region can account for $6.2\%$ of those in
the total gas shell.
In short, while diffusion effect inevitably occurs when the inner pressure can
no longer resist the inward gas pressure across a void edge, it usually only
affects the gas behaviour near the void edge but does not alter the global
dynamic evolution of gas shells and outer envelopes over a long time. However,
we note that for a very long-term evolution, there will be more and more
particles re-entering the void region when no sufficient pressure is supplied,
and eventually the diffusion effect will result in significant changes of
global dynamical behaviours of voids and shells. These processes may happen
after supernova explosions. When neutrinos are no longer produced and rebound
shocks are not strong enough to drive outflows, a central void generated in
the explosion will gradually shrink in the long-term evolution of supernova
remnants (SNRs).
Table 7: Ratio of molecules that diffuse into radius $r_{0}$ during $\Delta t$
to total gas shell molecules within radius $r_{0}+l$ under different ratio
$l/r_{0}$. We take $x_{0}=1$.
$l/r_{0}$ $+\infty$ $10$ $1$ $0.1$ $<<1$ ratio $3.6\%r_{0}/l$ $0.34\%$ $2.9\%$
$5.8\%$ $6.2\%$
#### 4.2.3 Applications of ISSV Model Solutions
In formulating the basic model, we ignore the gravity of the central void
region. By exploring the physics around the void boundary, a tenuous gas is
unavoidable inside a ‘void’ region for either an energy supply mechanism
leading to an effective pressure or that diffused across the void boundary. As
long as the gravity associated with such a tenuous gas inside the ‘void’ is
sufficiently weak, our ISSV model should remain valid for describing the
large-scale dynamic evolution of void shells, shocks, and outer envelopes.
In a planetary nebula or a supernova remnant, there is usually a solar mass
compact star at the centre (e.g. Chevalier 1997a). For outgoing shells at a
slow velocity of sound speed $\sim 10$ km s-1, the Parker-Bondi radius of a
central star of $10M_{\odot}$ is $\sim 10^{15}$ cm (i.e. $\sim 10^{-3}$ light
year or $\sim 3\times 10^{-4}$ pc). The typical radius of a planetary nebula
is about $10^{18}$ cm (see Section 4.3.1). Even the youngest known supernova
remnant G1.9+0.3 in our Galaxy, estimated to be born $\sim 140$ yr ago, has a
radius of $\sim 2$ pc (e.g. Reynolds 2008; Green et al. 2008). Thus the
central star only affects its nearby materials and has little impact on
gaseous remnant shells.
For a stellar wind bubble (e.g. Rosette Nebula), there usually are several,
dozens or even thousands early-type stars blowing strong stellar winds in all
directions. For example, the central ‘void’ region of Rosette Nebula contains
the stellar cluster NGC 2244 of $\sim 2000$ stars (e.g. Wang et al. 2008).
Conventional estimates show that the thick nebular shell has a much large mass
of around $10$,$000-16$,$000$ solar masses (e.g. Menon 1962; Krymkin 1978).
For a sound speed of $\sim 10$ km s-1, the Parker-Bondi radius of a central
object of $2000M_{\odot}$ is then $\sim 0.08$ pc, which is again very small
compared to the $\sim 6$ pc central void in Rosette Nebula (e.g. Tsivilev et
al. 2002). For a typical interstellar bubble consdiered in Weaver et al.
(1977), the total mass inside the bubble, say, inside the dense shell, is no
more than $50$ solar masses, which is significantly lower than that of the
$2000$ solar mass dense shell.
We thus see that the dynamical evolution of flow systems on scales of
planetary nebulae, supernova remnants and interstellar bubbles are only
affected very slightly by central stellar mass objects (e.g. early-type stars,
white dwarfs, neutron stars etc). Based on this consideration, we regard the
grossly spherical region inside the outer dense shells in those astrophysical
systems as a void in our model formulation, ignore the void gravity, emphasize
the shell self-gravity and invoke the ISSV solutions to describe their dynamic
evolution.
### 4.3 Astrophysical Applications
Our ISSV model is adaptable to astrophysical flow systems such as planetary
nebulae, supernova explosions, supernova remnants, bubbles and hot bubbles on
different scales.
#### 4.3.1 Planetary Nebulae
In the late phase of stellar evolution, a star with a main-sequence mass
$\;\lower 4.0pt\hbox{${\buildrel\displaystyle<\over{\sim}}$}\;8M_{\odot}$
makes a transition from an extended, cool state where it blows a slow dense
wind to a compact, hot state444A hottest white dwarf (KPD 0005 5106) detected
recently has a temperature of $\sim 2\times 10^{5}$ K (Werner et al. 2008).
where it blows a fast wind. The interaction between the central fast wind with
the outer slow wind results in a dense shell crowding the central region which
appears as a planetary nebula (e.g. Kwok, Purton & Fitzgerald 1978). The hot
compact white dwarf star at the centre is a source of photoionizing radiation
to ionize the dense shell (e.g. Chevalier 1997a). When the fast wind catches
up the slow dense wind, a forward shock and a reverse shock will emerge on
outer and inner sides of a contact discontinuity, respectively. As shown in
Section 4.2.3, the gravity of central white dwarf and its fast steady wind is
negligible for the outer dense wind. Practically, the region inside the
contact discontinuity may be regarded approximately as a void. Meanwhile, the
photoionizing flux is assumed to be capable of ionizing and heating the slow
wind shell to a constant temperature (Chevalier 1997a), and the outer
envelope, the cool AGB slow wind that is little affected by the central wind
and radiation, can be also regarded approximately as isothermal. The constant
temperatures of dense photoionized shell and outer envelope are usually
different from each other which can be well characterized by the isothermal
sound speed ratio $\tau$ (see Section 2.2). Thus the dynamic evolution of
dense shell and outer AGB wind envelope separated by a forward shock is
described by a type $\mathcal{Z}$ ISSV solution.
Within the contact discontinuity spherical surface, there is a steady
downstream wind blowing outside the reverse shock front (e.g. Chevalier &
Imamura 1983). Consistent with Section 4.2.1, we define $r_{w}$ as the radius
where the downstream wind front reaches and $r_{r}$ as the radius of reverse
shock. Within the radial range of $r_{r}<r<r_{w}$, i.e. in the downstream
region of the reverse shock, we have a wind mass density
$\rho_{w}(r,\ t)=\frac{\dot{M}}{4\pi v_{w}r^{2}}\ .$ (34)
We define $a_{w,d(u)}$ and $T_{w,d(u)}$ as the sound speed and gas temperature
on the downstream (upstream) side of the reverse shock and ratio
$\tau_{w}\equiv a_{w,d}/a_{w,u}$ $=(T_{w,d}/T_{w,u})^{1/2}$ to characterize
the reverse shock. For a reverse shock in the laboratory framework of
reference given by shock conditions (16) and (17), we have in dimensional
forms
$\\!\\!\\!\\!\\!\\!\\!u_{u,rs}-u_{rs}=\frac{1}{2}\left(v_{w}-u_{rs}+\frac{a_{w,d}^{2}}{v_{w}-u_{rs}}\right)\\\
\\!\\!\\!+\frac{1}{2}\left\\{\left[\frac{(v_{w}-u_{rs})^{2}-a_{w,d}^{2}}{v_{w}-u_{rs}}\right]^{2}+4a_{w,d}^{2}\frac{\tau_{w}^{2}-1}{\tau_{w}^{2}}\right\\}^{1/2}\\!\\!,\\\
\\!\\!\\!\\!\\!\\!\\!v_{w}-u_{rs}=\frac{1}{2}\left(u_{u,rs}-u_{rs}+\frac{a_{w,u}^{2}}{u_{u,rs}-u_{rs}}\right)\\\
\\!\\!\\!-\frac{1}{2}\left\\{\left[\frac{(u_{u,rs}-u_{rs})^{2}-a_{w,u}^{2}}{u_{u,rs}-u_{rs}}\right]^{2}+4a_{w,u}^{2}(1-\tau_{w}^{2})\right\\}^{1/2}\\!\\!,\\\
\\!\\!\\!\\!\\!\\!\\!\rho_{u,rs}=\frac{v_{w}-u_{rs}}{u_{u,rs}-u_{rs}}\rho_{d,rs}\
,\\\ $ (35)
where $u_{rs}$ is the outgoing speed of the reverse shock, $u_{u,rs}$ is the
upstream wind velocity, $\rho_{u(d),rs}$ is the upstream (downstream) mass
density, respectively. The first and second expressions in equation (35) are
equivalent: the first expresses the upstream flow velocity in terms of the
downstream parameters, while the second expresses the downstream flow velocity
in terms of the upstream parameters. In solving the quadratic equation, we
have chosen the physical solution, while the unphysical one is abandoned.
Normally, in the downstream region $r_{r}<r<r_{w}$, the plasma is shock heated
by the central faster wind with $\tau_{w}>1$. In the regime of an isothermal
shock for effective plasma heating, we take $\tau_{w}=1$ (i.e.
$a_{w,d}=a_{w,u}=a_{w}$) for a stationary reverse shock in the laboratory
framework of reference, shock conditions (35) and (34) reduce to
$\rho_{u,rs}=\frac{\dot{M}}{4\pi u_{u,rs}r_{r}^{2}}\ ,\qquad
u_{u,rs}=\frac{a_{w}^{2}}{v_{w}}\ ,\qquad u_{rs}=0\ .$ (36)
In this situation, the reverse shock remains stationary in space and this may
shed light on the situation that an inner fast wind encounters an outer dense
shell of slow speed. While a reverse shock always moves inwards relative to
both upstream and downstream winds, either outgoing or incoming reverse shocks
are physically allowed in the inner wind zone in the laboratory framework of
reference. In the former situation, the reverse shock surface, contact
discontinuity surface and forward shock surface all expand outwards steadily,
with increasing travelling speeds, respectively. In the latter situation, the
downstream wind zone, namely, the shocked hot fast wind zone expands both
outwards and inwards, and eventually fills almost the entire spherical volume
within the dense shell. The reverse shock here plays the key role to heat the
gas confined within a planetary nebula to a high temperature, which is thus
referred to as a hot bubble. In all situations, between the reverse shock and
the contact discontinuity, the downstream wind has a constant speed, supplying
a wind plasma pressure to counterbalance the inward pressure force across the
contact discontinuity.
Here, type $\mathcal{Z}$ ISSV solutions are utilized to describe the self-
similar dynamic evolution of gas shell outside the outgoing contact
discontinuity. An outgoing forward shock propagates in the gas shell after the
central fast wind hits the outer dense shell. According to properties of type
$\mathcal{Z}$ ISSV solutions, there may be outflows (i.e. winds and breezes),
static ISM or inflows (i.e. accretion flows and contractions) in the region
outside the forward shock. The spatial region between the contact
discontinuity and the forward shock is the downstream side of the forward
shock.
Figure 13: A type $\mathcal{Z}$ ISSV shock model with an inner fast wind to
fit the planetary nebula NGC 7662. Panels A, B and C show our model results
for proton number density $n_{\rm p}$, radial flow velocity and temperature of
NGC 7662 in solid curve, respectively. Numerals $1$, $2$ and $3$ in Panel A
mark reverse shock, contact discontinuity and forward shock surface,
respectively. The dashed curve in Panel A is the estimate of proton number
density by Guerrero et al. (2004). The inner fast wind blows at $1500$ km s-1
inside the reverse shock, which is not shown in Panel B.
We now provide our quantitative model PN estimates for a comparison. Guerrero
et al. (2004) probed the structure and kinematics of the triple-shell
planetary nebula NGC 7662 based on long-slit echelle spectroscopic
observations and Hubble Space Telescope archival narrowband images. They
inferred that the nebula with a spatial size of $\sim 4\times 10^{18}$ cm
consists of a central cavity surrounded by two concentric shells, i.e. the
inner and outer shells, and gave a number density $n_{\rm p}$ distribution
(the dotted curve in panel A of Figure 13). The temperatures of the inner and
outer shells were estimated as $\sim 1.4\times 10^{4}$ K and $\sim 1.1\times
10^{4}$ K, respectively. No information about the inner fast wind is given in
Guerrero et al. (2004). In our model consideration, the planetary nebula NGC
7662 may be described by our type $\mathcal{Z}$ ISSV model with a shocked
inner fast wind. In our scenario for a PN, the central cavity in the model of
Guerrero et al. (2004) should actually involve an inner fast wind region with
a reverse shock. The inner and outer shells correspond to the downstream and
upstream dense wind regions across a forward shock, respectively. Thus the
inner boundary of the inner shell is the contact discontinuity in our model
scenario. Physically, we suppose that the central star stops to blow a dense
slow wind of $\sim 10$ km s-1 about $\sim 1000$ years ago and the inner fast
wind of $10^{5}$ K began to blow outwards from a white dwarf at
$u_{u,rs}=1500$ km s-1 about $\sim 600$ years ago. When the inner fast wind
hits the dense slow wind $\sim 4$ years after its initiation, a reverse shock
and a forward shock are generated on the two sides of the contact
discontinuity. The reverse shock moves inwards at a speed $\sim 10$ km s-1.
The best density fit to the estimate of Guerrero et al. (2004) is shown in
Figure 13. In our model, the inner fast wind has a mass loss rate from the
compact star as $\sim 2\times 10^{-8}$M⊙ yr-1 (consistent with earlier
estimates of Mellema 1994 and Perinotto et al. 2004) and the reverse shock is
able to heat the downstream wind to a temperature of $\sim 6.4\times 10^{6}$
K. The downstream wind of the reverse shock has an outward speed of $\sim
25.9$ km s-1, corresponding to a kinematic age (i.e. the time that a shocked
fast wind at that velocity blows from the central point to its current
position) of $\sim 630$ years. In Guerrero et al. (2004), the kinematic age is
estimated to be $\sim 700$ years. In Guerrero et al. (2004), the inner shell
density is $\sim 5\times 10^{3}$ $m_{\rm p}$ cm-3 and our model shows a
density variation from $\sim 4\times 10^{3}$ $m_{\rm p}$ cm-3 to $\sim 7\times
10^{3}$ $m_{\rm p}$ cm-3 with a comparable mean. The forward shock travels
outwards at a speed $\sim 43.0$ km s-1, consistent with an average outward
velocity of the inner shell at $\sim 44$ km s-1. The total mass of the inner
shell is $\sim 8.5\times 10^{-3}$M⊙, and the mass of the outer shell within a
radius of $2.5\times 10^{18}$ cm is $\sim 0.036$M⊙, which are all consistent
with estimates of Guerrero et al. (2004). However, Guerrero et al. inferred
that the outer shell has an outward velocity of around $50$ km s-1 and a
proton number density of $\sim 3000$ cm-3. Our model estimates indicate that
the outer shell has a proton number density from $3200$ cm-3 at the immediate
upstream side of the forward shock to $\sim 400$ cm-3 at $2.5\times 10^{18}$
cm, and the outward velocity varies from $\sim 26$ km s-1 at the upstream
point of the forward shock to $\sim 20$ km s-1 at $2.5\times 10^{18}$ cm. And
thus the dense slow wind mass loss rate is $0.68\times 10^{-5}$M⊙yr-1, which
is consistent with earlier numerical simulations (e.g. Mellema 1994; Perinotto
et al. 2004), but is lower by one order of magnitude than $\sim 10^{-4}$M⊙yr-1
estimated by Guerrero et al. (2004). In summary, our ISSV model appears
consistent with observations of the NGC 7662, and a combination of
hydrodynamic model with optical and X-ray observations would be valuable to
understand the structure and dynamic evolution of planetary nebulae.
#### 4.3.2 Supernova Explosions and Supernova Remnants
At the onset of a type II supernova (or core-collapse supernova) for a massive
progenitor, extremely energetic neutrinos are emitted by the neutronization
process to form a ‘neutrino sphere’ that is deeply trapped by the nuclear-
density core and may trigger a powerful rebound shock breaking through the
heavy stellar envelope. At that moment, the central iron core density of a
$\sim 15M_{\odot}$ progenitor star can reach as high as $\sim 7.3\times
10^{9}$ g cm-3 and the core temperature could be higher than $\sim 7.1\times
10^{9}$ K. The density of the silicon layer is $\sim 4.8\times 10^{7}$ g cm-3
with a temperature of $\sim 3.3\times 10^{9}$ K. The tremendous pressure
produced by relativistic neutrinos may drive materials of such high density to
explosion (e.g. Woosley & Janka 2005). During the first $\sim 10$ s of the
core collapse, a power of about $\sim 10^{53}$ erg s-1 is released as high-
energy neutrinos within a radius of $\sim 10^{5}$ km (e.g. Woosley & Janka
2005). The neutrino-electron cross section was estimated to be $\sim
10^{-42}$(E/GeV)cm2 with $E$ being the neutrino energy (e.g. Marciano & Parsa
2003). During the gravitational core collapse of a SN explosion, a typical
value of E would be E$\sim$20MeV (e.g. Hirata et al. 1987; Arcones et al.
2008). Therefore, the neutrino-electron cross section is estimated to be $\sim
2\times 10^{-44}\hbox{ cm}^{2}$. If we adopt the neutrino luminosity
$L=10^{53}\hbox{ erg s}^{-1}$, the ratio of neutrino pressure to one electron
and the iron-core gravity on one silicon nucleus is $\sim 10^{-6}$. By these
estimates, neutrino pressure is unable to split the silicon layer from the
iron core. A pure vacuum void is unlikely to appear during the gravitational
core collapse and the rebound process to initiate a SN.
During the subsequent dynamic evolution, diffusion effects would gradually
smooth out any sharp edges and the inner rarefied region will be dispersed
with diffused gaseous materials. Behaviour of this gas may be affected by the
central neutron star. For example, a relativistic pulsar wind is able to power
a synchrotron nebula referred to as the pulsar wind nebula, which is found
within shells of supernova remnants (e.g. Gaensler et al. 1999). Pulsar winds
relate to the magnetic field of pulsars and are not spherically symmetric.
However, if the angle between magnetic and spin axes of a pulsar is
sufficiently large and the pulsar is rapidly spinning, the averaged pressure
caused by a pulsar wind may appear grossly spherical. This pulsar wind
pressure may also counterbalance the inward pressure of outer gas and slow
down the diffusion process. We offer a scenario that a supernova remnant makes
a transition from a sharp-edged ‘void’ to a quasi-smooth one due to combined
effects of diffusion and pulsar wind.
#### 4.3.3 Interstellar Bubbles
Our ISSV model solutions may also describe large-scale nebula evolution around
early-type stars or Wolf-Rayet stars. Here our overall scenario parallels to
the one outlined in Section 4.3.1 for PNe but is different at the ambient
medium surrounding the flow systems. For an interstellar bubble, a central
stellar wind collides with the ISM (i.e. no longer a dense wind) and gives
rise to a reverse shock that heats the downstream stellar wind zone, and a
forward shock that propagates outwards in the ISM. Meanwhile, by inferences of
radio and optical observations for such nebulae (e.g. Carina Nebula and
Rosette Nebula), the central hot stars are capable of ionizing the entire
swept-up shell and thus produce huge H II regions surrounding them (e.g. Menon
1962; Gouguenheim & Bottinelli 1964; Dickel 1974). The temperature of a H II
region is usually regarded as weakly dependent on plasma density, thus an
isothermal H II region should be a fairly good approximation (e.g. Wilson,
Rohlfs & Hüttemeister 2008). As the ISM outside the forward shock is almost
unaffected by the central wind zone, we may approximate the static ISM as
isothermal and the dynamic evolution of gas surrounding interstellar bubbles
can be well characterized by various ISSV solutions. Several prior models that
describe ISM bubble shells in adiabatic expansions without gravity, might
encounter problems. For example, the self-similar solution of Weaver et al.
(1977) predicts a dense shell with a thickness of only $\sim 0.14$ times the
radial distance from the central star to the shell boundary, which is indeed a
very thin shell. However, observations have actually revealed many ISM bubbles
with much thicker shells (e.g. Dorland, Montmerle & Doom 1986). The Rosette
Nebula has a H II shell with a thickness of $\sim 20$ pc while the radius of
central void is only $\sim 6$ pc (e.g. Tsivilev et al. 2002). The shell
thickness thus accounts for $\sim 70\%$ of the radius of shell outer boundary,
which is much larger than the computational result of Weaver et al. (1977). In
our ISSV solutions, there are more diverse dynamic behaviours of gas shells
and outer envelopes. For example in Figure 9, four ISSV solutions with static
outer envelope indicate that the ratio of shell thickness to the radius of the
forward shock covers a wide range. For these four solutions, this ratio is
$0.65$, $0.98$, $0.91$ and $0.57$ respectively.
A type $\mathcal{Z}_{I}$ void shock solution with $x_{0}=0.58$,
$\alpha_{0}=0.014$ and an isothermal shock at $x_{s}=2.2$ may characterize
gross features of Rosette Nebula reasonably well. This ISSV solution indicates
that when the constant shell temperature is $7000$ K (somewhat hotter than
$6400$ K as inferred by Tsivilev et al. 2002) and the entire nebula system has
evolved for $\sim 10^{6}$ years, the central void has a radius of $\sim 6.0$
pc; the forward shock outlines the outer shell radius as $\sim 22.8$ pc; the
electron number density in the HII shell varies from $\sim 8.5$ cm-3 to $\sim
12.9$ cm-3; the contact discontinuity surface expands at a speed of $\sim 6.1$
km s-1 and the forward shock propagates into the ISM at a speed of $\sim 16.4$
km s-1; the surrounding ISM remains static at large radii. In the above model
calculation, an abundance He$/$H ratio of $0.1/0.9$ is adopted. Various
observations lend supports to our ISSV model results. For example, Tsivilev et
al. (2002) estimated a bit higher shell electron number density as $15.3$ cm-3
and an average shell expanding velocity of about $8.5$ km s-1. Dorland et al.
(1986) gave an average shell electron number density as $11.3$ cm-3. Our
calculation also gives a shell mass of $\sim 1.55\times 10^{4}M_{\odot}$,
which falls within $10$,$000M_{\odot}$ and $16$,$000M_{\odot}$ as estimated by
Menon (1962) and Krymkin (1978), respectively.
To study these inner voids embedded in various gas nebulae, diagnostics of
X$-$ray emissions offers a feasible means to probe hot winds. The thermal
bremsstrahlung and line cooling mechanisms can give rise to detectable X$-$ray
radiation from optically thin hot gas (e.g. Sarazin 1986) and the high
temperature interaction fronts of stellar winds with the ISM and inner fast
wind with the void edge can produce X$-$ray photons (e.g. Chevalier 1997b). We
will provide the observational properties of different types of voids and
present diagnostics to distinguish the ISSV types and thus reveal possible
mechanisms to generate and maintain such voids by observational inferences in
a companion paper (Lou & Zhai 2009 in preparation).
## 5 Conclusions
We have explored self-similar hydrodynamics of an isothermal self-gravitating
gas with spherical symmetry and shown various void solutions without or with
shocks.
We first obtain type $\mathcal{X}$ ISSV solutions without shocks outside
central voids in Section 3.3. Based on different behaviours of eigen-
derivatives across the SCL, type $\mathcal{X}$ void solutions are further
divided into two subtypes: types $\mathcal{X}_{\rm I}$ and $\mathcal{X}_{\rm
II}$ ISSV solutions. All type $\mathcal{X}$ ISSV solutions are characterized
by central voids surrounded by very dense shells. Both types $\mathcal{X}_{\rm
I}$ and $\mathcal{X}_{\rm II}$ ISSV solutions allow envelope outflows but only
type $\mathcal{X}_{\rm I}$ ISSV solutions can have outer envelopes in
contraction or accretion flows.
We then consider self-similar outgoing shocks in gas envelopes surrounding
central voids in Section 3.4. Type $\mathcal{Z}_{\rm I}$ void solutions are
referred to as the equi-temperature shock solutions with a constant gas
temperature across a shock front (this is an idealization; see Spitzer 1978).
We also investigate various cases of shocks in type $\mathcal{Z}_{\rm II}$
void solutions (always a higher downstream temperature for the increase of
specific entropy). In Section 3.4, we have developed the ‘phase net’ matching
procedure to search for type $\mathcal{Z}$ ISSV solutions with static,
expanding, contracting and accreting outer envelopes. ISSV solutions with
quasi-smooth edges exist only when gas flows outwards outside the shock; all
other types of voids are surrounded by fairly dense shells or envelopes.
We have systematically examined voids with sharp or quasi-smooth edges for
various ISSV solutions. There must be some energetic processes such as
supernova explosions or powerful stellar winds (including magnetized
relativistic pulsar winds) that account for the appearance of sharp-edge
voids. The denser the nebular shell is, the more difficult the void formation
is. In other words, voids of quasi-smooth edges might be easier to form in the
sense of a less stringent requirement for the initial energy input. In Section
4.1, we point out that the gas self-gravity can influence the void evolution
significantly, especially for the property of regions near void edge as shown
in Fig. 12. With the same boundary condition of outer medium, ISSV models with
and without self-gravity can lead to different types of voids, e.g. quasi-
smooth edge or sharp-edge voids. We suggest that these two types of voids may
have different mechanisms to generate and sustain. Thus the inclusion of gas
self-gravity is both physically realistic and essential. Besides, we show that
all voids with quasi-smooth edges are type $\mathcal{Z}$ voids, that is, with
shocks surrounding voids. This indicates that shock and expanding outer
envelope may well imply a likely presence of a central void. In fact,
observations on hot gas flows in clusters of galaxies might also be relevant
in this regard. For example, McNamara et al. (2006) reported giant cavities
and shock fronts in a distant ($z=0.22$) cluster of galaxies caused by an
interaction between a synchrotron radio source and the hot gas around. Such
giant X-ray cavities were reported to be left behind large-scale shocks in the
galaxy cluster MS0735.6+7421.
Another point to note is that ISSV solutions we have constructed are
physically plausible with special care taken for the expanding void boundary
$x_{0}$. Void boundary $x_{0}$ involves density and velocity jumps not in the
sense of a shock; local diffusion processes should happen to smooth out such
jumps in a non-self-similar manner. Nonlinear ODEs (8) and (9) are valid in
intervals $(0,\ x_{0}^{-})$ and $(x_{0}^{+},\ +\infty)$. We have indicated
this property in Section 3.2 when introducing the concept of ISSV and discuss
this issue in Section 4.2. There, several plausible mechanisms are noted such
as powerful stellar winds and energetic neutrino driven supernova explosion.
More specifically, we apply the ISSV solutions to grossly spherical planetary
nebulae, supernova explosions or supernova remnants and interstellar bubbles.
Our model for planetary nebulae involve three characteristic interfaces:
reverse shock, contact discontinuity surface, and forward shock. Steady inner
stellar winds of different speeds blow on both sides of the inner reverse
shock and a contact discontinuity surface confines the slower downstream wind
zone outside the inner reverse shock. This reverse shock may be stationary or
moving (either inwards or outwards) in the laboratory framework of reference.
The contact discontinuity surface between the steady downstream wind zone (on
the downstream side of the inner reverse shock) and the outer expanding gas
shell moves outwards at a constant radial speed. Behaviours of outer shocked
gas shell outside the contact discontinuity are described by type
$\mathcal{Z}$ ISSV shock solutions with quasi-smooth edges. Stellar core
collapses prior to supernova explosions lead to neutrino bursts during a short
period of time, which might momentarily stand against the inward pressure
force across the ‘void’ edge and give rise to a sharp-edge ‘void’ structure.
In the long-term evolution after the escape of neutrinos, diffusion effect and
outer forward shocks will dominate the behaviours of supernova remnants and
the sharp edge will be smoothed out eventually. In other situations when
central magnetized relativistic pulsar winds begin to resist diffusion
effects, a quasi-smooth void with shocked shell (i.e. type $\mathcal{Z}$ ISSV)
might also form.
Similar to PNe, interstellar bubbles may originate from strong stellar winds
of early-type stars on larger scales. We invoke type $\mathcal{Z}$ ISSV
solutions with quasi-smooth void edge to describe the structure and evolution
of dense shell and outer ISM envelope. In our model, the hot shocked stellar
wind zone, which is located between the reverse shock and the contact
discontinuity surface, is filled with steady shocked wind plasma. In Weaver et
al. (1977), the standard Spitzer conduction was included to study the shocked
stellar wind and shell gas that diffuse into the shocked stellar wind region.
However, the stellar magnetic field is predominantly transverse to the radial
direction at large radii, which will suppress the thermal conduction through
the hot interstellar bubble gas (e.g., Chevalier & Imamura 1983). As a weak
magnetic field can drastically reduce this thermal conduction coefficient
(e.g. Narayan & Medvedev 2001; Malyshkin 2001) and a weak magnetic field has
little effects on behaviours of the gas shells and nebulae (e.g. Avedisova
1972; Falle 1975), we do not include thermal conduction effect in our model
but present a dynamic evolution model for interstellar bubbles in terms of a
self-similar nebular shell sustained by a central steady stellar wind. We
would note that the existence of a random magnetic field may reduce the
density ’wall’ around a void and make the formation of a void easier (Lou & Hu
2009). We do not include magnetic field in our model for simplicity.
Recent observations of ultraluminous X-ray sources (ULXs) show that ULXs may
blow very strong winds or jets into the surrounding ISM and generate hot
bubbles. For instance, ULX Bubble MH9-11 around Holmberg IX X-1 has
experienced an average inflating wind/jet power of $\sim 3\times 10^{39}$ erg
s-1 over an age of $\sim 10^{6}$ years. The shock of the bubble travels
outwards at $\sim 100$ km s-1 at radius $\sim 100$ pc. The particle density
around the shock is $\sim 0.3$ cm-3 (e.g. Pakull & Grisé 2008). Approximately,
this bubble corresponds to a type $\mathcal{Z}$ ISSV solution with
$\alpha_{0}\approx 3.8\times 10^{-4}$, $x_{0}=0.4$ and $x_{s}=1$, and the
temperature of gas shell is $\sim 10^{7}$ K. Our ISSV model predicts a contact
discontinuity surface, or interaction surface of ULX wind with the ISM, at a
radius $\sim 40$ pc.
Finally, to diagnose voids observationally, we would suggest among others to
detect X$-$ray emissions from hot gas. In a companion paper), we shall adapt
our ISSV solutions for hot optically thin X$-$ray gas clouds or nebulae, where
we advance a useful concept of projected self-similar X$-$ray brightness. It
is possible to detect ISSVs and identify diagnostic features of ISSV types
which may in turn to reveal clues of ISSV generation mechanisms. We will also
compare our ISSV model with observational data on more specific terms.
Moreover, projected self-similar X$-$ray brightness is a general concept which
can be useful when we explore other self-similar hydrodynamic or
magnetohydrodynamic processes (e.g. Yu et al. 2006; Wang & Lou 2008).
###### Acknowledgements.
This research was supported in part by the National Natural Science Foundation
of China (NSFC) grants 10373009 and 10533020 at Tsinghua University, the SRFDP
20050003088 and 200800030071, the Yangtze Endowment and the National
Undergraduate Innovation Training Project from the Ministry of Education at
Tsinghua University and Tsinghua Centre for Astrophysics (THCA). The kind
hospitality of Institut für Theoretische Physik und Astrophysik der Christian-
Albrechts-Universität Kiel Germany and of International Center for
Relativistic Astrophysics Network (ICRANet) Pescara, Italy is gratefully
acknowledged.
## Appendix A Jump of $\alpha$ from zero to
nonzero value across the ZML
Let us assume that the reduced mass density $\alpha(x)$ can transit from zero
to nonzero across the ZML and $x_{0}$ is the transition point. For this, we
require $v(x_{0})=x_{0}$, $\alpha(x_{0})=0$ and when $x<x_{0}$, $\alpha(x)$
and $v(x)$ vanish. For an arbitrarily small real number $\varepsilon>0$, we
also require $\alpha(x_{0}+\varepsilon)>0$ for a positive mass such that there
exists a positive integer $n$ for which
$\frac{d^{n}\alpha}{dx^{n}}\bigg{|}_{x_{0}}\neq 0\ .$ (37)
We now cast equation (9) in the form of
$\frac{d\alpha}{dx}=\alpha\frac{[\alpha-2(x-v)/x](x-v)}{(x-v)^{2}-1}\equiv\alpha{\cal
F}(x)\ ,$ (38)
where ${\cal F}(x)\equiv[\alpha-2(x-v)/x](x-v)/[(x-v)^{2}-1]$.
At $x=x_{0}$, the denominator of ${\cal F}(x)$ does not vanish because
$v(x_{0})=x_{0}$. So ${\cal F}(x)$ is a finite continuous analytic function
near $x_{0}$. An arbitrary order derivative of ${\cal F}(x)$ at $x_{0}$ should
be finite as well.
The $k$th-order derivative of equation (A2) by the Leibnitz rule reads
$\frac{d^{k+1}\alpha}{dx^{k+1}}=\sum_{i=0}^{k}C_{k}^{i}\frac{d^{i}\alpha}{dx^{i}}\frac{d^{k-i}{\cal
F}(x)}{dx^{k-i}}\ ,$ (39)
where $C_{k}^{i}$ stands for $k!/[i!(k-i)!]$ and $!$ is the standard factorial
operation.
Because $\alpha(x_{0})=0$, equation (A3) yields $\alpha^{\prime}(x_{0})=0$,
$\alpha^{\prime\prime}(x_{0})=0$ and so forth. That is, $\alpha(x_{0})=0$ and
equation (9) determine that the arbitrary order derivative of $\alpha(x)$ at
$x_{0}$ is zero, which is contrary to presumption (A1) based on the former
assumption that $\alpha(x)$ can transit from zero to nonzero across $x_{0}$ of
the ZML. Therefore $\alpha$ cannot transit from zero to nonzero across the ZML
for an isothermal gas. For properties of void boundary in a polytropic gas,
the interested reader is referred to Hu & Lou (2008) and Lou & Hu (2008).
## Appendix B Proof of inequality $\alpha^{\prime\prime}(x_{0})<0$
Equation (9) can be written in the form of
$\big{[}(x-v)^{2}-1\big{]}\alpha^{\prime}=\alpha\big{[}\alpha-2(1-v/x)\big{]}(x-v)\
,$ (40)
whose first-order derivative $d/dx$ is simply
$\\!\\!\\!\\!\\!\\!\\!\\!\\!2(x-v)(1-v^{\prime})\alpha^{\prime}+\big{[}(x-v)^{2}-1\big{]}\alpha^{\prime\prime}\\\
\\!\\!\\!\\!\\!\\!\\!\\!\\!=\\{\alpha^{\prime}\big{[}\alpha-2(1-v/x)\big{]}+\alpha\big{[}\alpha^{\prime}+2v^{\prime}/x-2v/x^{2}\big{]}\\}(x-v)\\\
+\alpha\big{[}\alpha-2(1-v/x)\big{]}(1-v^{\prime})\ .$ (41)
We have $v(x_{0})=x_{0}$ and thus both $v^{\prime}(x_{0})=0$ and
$\alpha^{\prime}(x_{0})=0$ at $x_{0}$ by equation (22). Therefore at
$x_{0}^{+}$, equation (B2) becomes
$\alpha^{\prime\prime}(x_{0})=-\alpha(x_{0})^{2}<0\ $ (42)
for a positive density $\alpha(x_{0})>0$ as a physical requirement.
## References
* (1) Abell G. O., 1966, ApJ, 144, 259
* (2) Alvarez M. A., Bromm V., Shapiro P. R., 2006, ApJ, 639, 621
* (3) Arcones A., Martínez-Pinedo G., O’Connor E., Schwenk A., Janka H.-Th., Horowitz C. J. Langanke K., 2008, Phys. Rev. C, 78, 015806
* (4) Avedisova V. S., 1972, Soviet Astr.-AJ, 15, 708
* (5) Band D. L., Liang E. P., 1988, ApJ, 334, 266
* (6) Bian F.-Y., Lou Y.-Q., 2005, MNRAS, 363, 1315
* (7) Castor J., McCray R., and Weaver R., 1975, ApJ, 200, L107
* (8) Chevalier R. A., 1982, ApJ, 259, 302
* (9) Chevalier R. A., Imamura J. N., 1983, ApJ, 270, 554
* (10) Chevalier R. A., 1997a, ApJ, 488, 263
* (11) Chevalier R. A., 1997b, Science, 276, 1374
* (12) Dorland H., Montmerle T., Doom C., 1986, A & A, 160, 1
* (13) Courant R., Friedrichs K. O., 1976, Supersonic Flow and Shock Waves. Springer-Verlag, New York
* (14) Dickel H. R., 1974, A&A, 31, 11
* (15) Dyson J. E., 1974, ApSS, 35, 299
* (16) Dyson J. E., Williams D. A., 1997, The Physics of the Interstellar Medium. IOP, Bristol and Philadelphia
* (17) Falle S. A. E. G., 1975, A&A, 43, 323
* (18) Fillmore J. A., Goldreich P., 1984, ApJ, 281, 9
* (19) Gaensler B. M., Gotthelf E. V., Vasisht G., 1999, ApJ, 526, L37
* (20) Goldreich P., Weber S. V., 1980, ApJ, 233, 991
* (21) Gouguenheim L., Bottinelli L., 1964, Ann. Astrophys, 27, 685
* (22) Green D. A., Reynolds S. P., Borkowski K. J., Hwang U., Harrus I., Petre R., 2008, MNRAS, 387, L54
* (23) Guerrero M. A., Jaxon E. G., Chu Y-H, 2004, ApJ, 128, 1705
* (24) Hirata K., et al., 1987, Phys. Rev. Lett, 58, 1490
* (25) Hu R.-Y., Lou Y.-Q., 2008, MNRAS, 390, 1619
* (26) Hu R.-Y., Lou Y.-Q., 2009, MNRAS, in press
* (27) Hunter C., 1977, ApJ, 218, 834
* (28) Hunter C., 1986, MNRAS, 223, 391
* (29) Janka H.-Th., Marek Andreas, Kitaura Francisco-Shu, 2007, Supernova 1987A: 20 Years After: Supernovae and Gamma-Ray Bursters. AIP Conference Proceedings, 937, 144
* (30) Kamper K. W., van den Bergh S., 1978, ApJ, 224, 851
* (31) Krymkin V. V., 1978, ApSS, 54, 187
* (32) Kwok S., Purton C. R., Fitzgerald P. M., 1978, ApJ, 219, L125
* (33) Kwok S., Volk K., 1985, ApJ, 299, 191
* (34) Landau L. D., Lifshitz E. M., 1987, Fluid Mechanics, 2nd ed., Pergamon Press, New York
* (35) Larson R. B., 1969a, MNRAS, 145, 271
* (36) Larson R. B., 1969b, MNRAS, 145, 405
* (37) Lou Y.-Q., Cao Y., 2008, MNRAS, 384, 611
* (38) Lou Y.-Q., Shen Y., 2004, MNRAS, 348, 717
* (39) Lou Y.-Q., Wang W.-G., 2006, MNRAS, 372, 885
* (40) Lou Y.-Q., Wang W.-G., 2007, MNRAS, 378, L54
* (41) Lou Y.-Q., Hu R.-Y., 2009, submitted
* (42) Malyshkin L., 2001, ApJ, 554, 561
* (43) Marciano W. J., Parsa Z., 2003, J. Phys. G: Nucl. Part. Phys. 29, 2629
* (44) Mathews W. G., 1966, ApJ, 144, 206
* (45) McNamara B. R., Nulsen P. E. J., Wise M. W., Rafferty D. A., Carilli C., Sarazin C. L., Blanton E. L., 2005, Nature, 433, 45
* (46) Mellema G., 1994, A&A, 290, 915
* (47) Menon T. K., 1962, ApJ, 135, 394
* (48) Meyer F., 1997, MNRAS, 285, L11
* (49) Narayan R., Medvedev M. V., 2001, ApJ, 562, L129
* (50) Pakull M. W., Grisé F., 2008, A Population Explosion: The Nature and Evolution of X-ray Binaries in Diverse Environments. AIP Conference Proceedings, Vol 1010, pp. 303-307 (arXiv:0803.4345v1 [astro-ph])
* (51) Penston M. V., 1969a, MNRAS, 144, 425
* (52) Penston M. V., 1969b, MNRAS, 145, 457
* (53) Perinotto M., Schönberner D., Steffen M., Calonaci C., 2004, A&A, 414, 993
* (54) Reynolds S. P., Borkowski K. J., Green D. A., Hwang U., Harrus I., Peter R., 2008, ApJ, 680, L41
* (55) Pikel’ner S. B., Shcheglov P. V., 1969, Soviet Astro.-AJ, 12, 757
* (56) Sarazin C. L., 1986, Review of Modern Physics, 58, 1
* (57) Shen Y., Lou Y.Q., 2004, ApJ, 611, L117
* (58) Shu F. H., 1977, ApJ, 214, 488
* (59) Shu F. H., Lizano S., Galli D., Cant$\acute{\rm o}$ J., Laughlin G., 2002, ApJ, 580, 969
* (60) Spitzer L., 1978, Physical Processes in the Interstellar Medium. Wiley, New York
* (61) Tsai J. C., Hsu J. J. L., 1995, ApJ, 448, 774
* (62) Tsivilev A. P., Poppi S., Cortiglioni S., Palumbo G. G. C., Orsini M., Maccaferri G., 2002, New Astronomy, 7, 449
* (63) Wang J., Townsley L. K., Feigelson E. D., Broos P., Getman K., Román-Zúñiga C. G., Lada E., 2008, ApJ, 675, 464
* (64) Wang W.-G., Lou Y.-Q., 2007, ApSS, 311, 363
* (65) Wang W.-G., Lou Y.-Q., 2008, ApSS, 315, 135
* (66) Weaver R., McCray R., Castor J., 1977, ApJ, 218, 377
* (67) Werner K., Rauch T., Kruk J. W., 2008, A&A, 492, L43
* (68) Whitworth A., Summers D., 1985, MNRAS, 214, 1
* (69) Wilson T. L., Rohlfs K., Hüttemeister S., 2008, Tools of Radio Astronomy, 5th ed., Springer Press
* (70) Woosley S., Janka T., 2005, Nature Physics, 1, 147
* (71) Yu C., Lou Y.-Q., 2005, MNRAS, 364, 1168
* (72) Yu C., Lou Y.-Q., Bian F.-Y., Wu Y., 2006, MNRAS, 370, 121
|
arxiv-papers
| 2009-05-30T05:36:38 |
2024-09-04T02:49:02.999766
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yu-Qing Lou, Xiang Zhai",
"submitter": "Xiang Zhai",
"url": "https://arxiv.org/abs/0906.0061"
}
|
0906.0065
|
# Managing Distributed MARF with SNMP
Serguei A. Mokhov
Lee Wei “Lewis” Huynh
Jian “James” Li
Concordia University
Montréal, Québec, Canada
(Tue 2 Jun 2009 05:22:48 EDT )
###### Contents
1. 1 Introduction
1. 1.1 Background
2. 1.2 Scope
3. 1.3 Tools
4. 1.4 Summary
2. 2 Methodology
1. 2.1 Introduction
2. 2.2 MARF-Manager-Agent Architecture
3. 2.3 SMI Structure
4. 2.4 MARF Services
1. 2.4.1 General Service MIB
2. 2.4.2 Storage
3. 2.4.3 Sample Loading
4. 2.4.4 Preprocessing
5. 2.4.5 Feature Extraction
6. 2.4.6 Classification
7. 2.4.7 Applications
3. 3 Conclusion
1. 3.1 Review of Results
1. 3.1.1 MIBs
2. 3.1.2 SNMP Proxy Agents
3. 3.1.3 SNMP MARF Application Managers
4. 3.1.4 Difficulties
5. 3.1.5 Contributions
6. 3.1.6 Open Source
2. 3.2 Future Work
1. 3.2.1 Scenarios
2. 3.2.2 Summary
3. 3.3 Acknowledgments
###### List of Figures
1. 1.1 The Core MARF Pipeline Data Flow
2. 1.2 The Distributed MARF Pipeline
3. 1.3 SpeakerIdenApp Client GUI Prototype (Manager)
4. 1.4 MARF Service Status Monitor GUI Prototype (Agent)
5. 2.1 MARF-Manager-Agent Architecture
6. 2.2 Preliminary MARF Private Enterprises Number.
7. 2.3 Preliminary MARF General Tree.
8. 2.4 General Service MIB 1.
9. 2.5 General Service MIB 2.
10. 2.6 Storage MIB.
11. 2.7 Preliminary MARF Sample Loading Service MIB.
12. 2.8 Preliminary MARF Preprocessing Service MIB.
13. 2.9 Preliminary MARF Feature Extraction Service MIB.
14. 2.10 Preliminary MARF Classification Service MIB.
15. 2.11 SpeakerIdentApp MIB.
16. 2.12 LangIdentApp MIB.
## Chapter 1 Introduction
$Revision:1.1.2.6$
### 1.1 Background
The Modular Audio Recognition Framework (MARF) [The09, MCSN03, Mok06] is an
open-source research platform and a collection of voice, sound, speech, text,
and natural language processing (NLP) algorithms written in Java and arranged
into a modular and extensible framework facilitating addition of new
algorithms. MARF can run in a distributed fashion over the network (using CORBA,
XML-RPC, or Java RMI) and may act as a library in applications or be used as a
source for learning and extension. A few example applications are provided to
show how to use the framework. One of MARF’s applications, SpeakerIdentApp,
maintains a database of speakers and can identify who is speaking regardless of
what they say.
The original MARF [MCSN03] was developed by Serguei Mokhov with a few classmates
and others throughout a variety of courses. A proof-of-concept (PoC)
implementation of Distributed MARF (DMARF) [Mok06] was done by Serguei Mokhov in
the Distributed Systems class. Distributed MARF nodes are hard to manage when
there are many of them, including their configuration, statistics, and status
management.
MARF has several applications. Most revolve around its recognition pipeline –
sample loading, preprocessing, feature extraction, and training or
classification. One such application, for example, is Text-Independent Speaker
Identification. In the classical MARF, the pipeline and the applications as they
stand are purely sequential, with little or no concurrency when processing a
bulk of voice samples. Thus, the purpose of DMARF in [Mok06] was to make the
pipeline distributed and run on a cluster or just a set of distinct computers,
to compare with the traditional version, and to add disaster recovery, service
replication, communication technology independence, and so on.
Figure 1.1: The Core MARF Pipeline Data Flow
The classical MARF pipeline is shown in Figure 1.1. The goal of DMARF was to
distribute the shown stages of the pipeline as services, as well as stages that
are not directly present in the figure – sample loading, the front-end
application service (e.g. a speaker identification service), among other things
in the distributed system. The reason for being able to flexibly distribute
these services is to offload the bulk of multimedia/data crunching and
processing to higher-performance servers that can communicate with one another,
while the data collection may happen at several low-cost computers (e.g.
low-end laptops), PDAs, embedded devices, etc., which may not necessarily have
the local processing power and storage capacity to cope with the amount of
incoming data, so they pass it on to the servers. (A possible infrastructure of
such a setup could, for example, be in place in different law enforcement
agencies spread out across a country, which would nevertheless be able to
identify speakers across all jurisdictions if, say, recorded phone conversations
of a suspect are available. In another scenario, it could be used in
tele-conferencing.)
Figure 1.2: The Distributed MARF Pipeline
Figure 1.2 presents the distributed version of the pipeline. It indicates
different levels of basic front-ends, from higher to lower, which a client
application may invoke; services may likewise invoke other services through
their front-ends while executing in pipeline mode. The back-ends are in charge
of providing the actual servant implementations as well as features like
primary-backup replication, monitoring, and disaster recovery modules.
Status management graphical user interface (GUI) prototypes were designed in
DMARF but not implemented; they are meant to give application managers the
ability to monitor the status of services (or of the whole pipeline) as well as
to change configuration options, as in Figure 1.3 and Figure 1.4.
Figure 1.3: SpeakerIdenApp Client GUI Prototype (Manager) Figure 1.4: MARF
Service Status Monitor GUI Prototype (Agent)
### 1.2 Scope
The scope of this project’s work focuses on the research and prototyping of an
extension to Distributed MARF such that its services can be managed through the
most popular management protocol, namely SNMP. The rationale behind SNMP vs.
MARF’s proprietary management protocols is that MARF can then be integrated
with common network service and device management, so that administrators can
manage MARF nodes via an already familiar protocol, as well as monitor their
performance, gather statistics, set the desired configuration, etc., perhaps
using the same management tools they have been using for other network devices
and application servers.
MARF generally has the following types of services: application, core pipeline,
sample loading, preprocessing, feature extraction, and classification. There
are common data structures, configuration, and storage management attributed to
them. DMARF’s components in general are stand-alone and may listen on RMI,
XML-RPC, CORBA, and TCP connections for their needs, and natively do not
“understand” SNMP. Therefore, each managed service will have to have a proxy
SNMP-aware agent for management tasks and a delegate instrumentation proxy to
communicate with the service’s specifics. Thus, in this work we are designing
and implementing, to some extent, the following:
* •
Defining MIBs for MARF Services
* •
Producing Proxy SNMP Agents
* •
Agent-Adapter Delegate Instrumentation
* •
SNMP MARF Manager Applications
The original proposal has many more provisional tasks, which are outlined in
Section 3.2.
### 1.3 Tools
We will use platform-independent tools like Java, possibly JDMK based on JMX,
and eventually RMI, CORBA, and XML-RPC as provided by AdventNet or others.
SimpleWeb [The07] is used for original MIB cross-validation, while AdventNet’s
SNMP Java API [Adv07b] and Java Agent SDK [Adv07a] tools are in actual active
use.
### 1.4 Summary
The project seems like a viable and useful way to extend MARF and, at the same
time, to contribute its implementation to the open-source community.
## Chapter 2 Methodology
$Revision:1.1.2.10$
### 2.1 Introduction
Distributed MARF [The09] offers a number of service types:
1. 1.
Application Services
2. 2.
General MARF Pipeline Services
3. 3.
Sample Loading Services
4. 4.
Preprocessing Services
5. 5.
Feature Extraction Services
6. 6.
Training and Classification Services
which are backed by the corresponding server implementations in CORBA, Java
RMI, and Web Services XML-RPC. The services can potentially be embedded into
other application or hardware systems for speaker and language identification,
and others.
We are interested in managing such services over a network as a whole,
collecting their processing statistics, performing remote management, etc., so
we need to define them in a MIB using ASN.1, including the types of requests and
responses and their statistics.
We have applied to IANA [(IA07] and registered a private enterprise number
(PEN) under the enterprises node, which is 28218 (under the MARF Research and
Development Group).
### 2.2 MARF-Manager-Agent Architecture
Figure 2.1: MARF-Manager-Agent Architecture
We devised a preliminary management architecture for MARF applications and
services, presented in Figure 2.1. In this figure we capture the relationships
between all the major entities in the system. Applications are the ultimate
managers, whereas the remaining services can be both managers and agents in
some cases. The MARF service, which operates the pipeline on behalf of the main
applications, can manage sample loading, preprocessing, feature extraction, and
classification. Since those services can talk to each other and request data
from each other (e.g. classification can request data from feature extraction),
they may exhibit manager characteristics, not fully explored in this work.
Applications do not need to go through the MARF service manager, but should the
need arise (debugging, development, maintenance), they can connect to the
terminal services of the pipeline directly.
### 2.3 SMI Structure
Figure 2.2: Preliminary MARF Private Enterprises Number. Figure 2.3:
Preliminary MARF General Tree.
We have worked on the MIB tree for the entire DMARF and made some progress in
defining general services and storage types, as well as the Preprocessing,
Feature Extraction, Classification, SpeakerIdentApp, and LangIdentApp MIBs,
which are under the marf/src/mib directory. We started off with WWW-MIB [HKS99]
as an example of a WwwService’s definitions and with some material from
ATM-TC-MIB and related files from the same source. We preliminarily completed
the indicated MIBs and provided default proxy implementations for at least a
few of the services using AdventNet’s API and SDK, which can be found under the
marf/src/marf/net/snmp directory tree. In Figure 2.2 we show where MARF’s MIB
subtree begins under the enterprises node, and in Figure 2.3 we present a
general overview of the main SMI components for MARF services and other
required components. We also used the lecture notes of Dr. Chadi Assi [Ass07].
We are primarily using SNMPv2 along with SMIv2 in this project.
### 2.4 MARF Services
This section provides a sneak peek at the MARF service types we are dealing
with; some of them have their preliminary MIB subtrees drawn. As mentioned
earlier, we provide our MIBs so far in marf/src/mib. Note that some of the
diagrams may not correspond one-to-one to the MIBs, as this is work in progress
and things sometimes get out of sync. The relevant files are (as of this
writing):
* •
MARF-MIB.mib – main file meant to consolidate all when ready
* •
MARF-types.mib – some common textual conventions
* •
MARF-storage.mib – storage-related issues and types so far
* •
MARF-services.mib – general services description
* •
MARF-sample-loading.mib – concrete sample loading service
* •
MARF-preprocessing.mib – concrete preprocessing service
* •
MARF-feature-extraction.mib – concrete feature extraction service
* •
MARF-classification.mib – concrete classification service
* •
MARF-APPS-SPEAKERIDENTAPP.mib – a MIB for SpeakerIdentApp
* •
MARF-APPS-LANGIDENTAPP.mib – a MIB for LangIdentApp
#### 2.4.1 General Service MIB
Most MARF services share some common definitions for indexing the services,
their statistics, and so on, so a general service module was created to capture
most of this common functionality, including statistics; the more specific
modules extend it by augmenting its tables and using its types, as sketched
below. The generic service description is in Figure 2.5.
Figure 2.4: General Service MIB 1. Figure 2.5: General Service MIB 2.
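To make the augmentation pattern concrete, below is a minimal SMIv2-style sketch of such a generic service table. The parent node marfServices and the columns serviceIndex and serviceRequestCount are illustrative placeholders only; the actual definitions live in MARF-services.mib under marf/src/mib.
-- illustrative sketch; assumes the usual IMPORTS of OBJECT-TYPE,
-- Integer32 and Counter32 from SNMPv2-SMI
serviceTable OBJECT-TYPE
    SYNTAX      SEQUENCE OF ServiceEntry
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION "The table of MARF services known to the SNMP agent."
    ::= { marfServices 1 }
serviceEntry OBJECT-TYPE
    SYNTAX      ServiceEntry
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION "Statistics and configuration of one MARF service."
    INDEX       { serviceIndex }
    ::= { serviceTable 1 }
ServiceEntry ::= SEQUENCE {
    serviceIndex        Integer32,
    serviceRequestCount Counter32
}
serviceIndex OBJECT-TYPE
    SYNTAX      Integer32 (1..2147483647)
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION "A unique index of a service instance."
    ::= { serviceEntry 1 }
serviceRequestCount OBJECT-TYPE
    SYNTAX      Counter32
    MAX-ACCESS  read-only
    STATUS      current
    DESCRIPTION "The number of requests this service has processed."
    ::= { serviceEntry 2 }
A concrete service MIB then defines its own entry that AUGMENTS serviceEntry and adds service-specific columns, as in the feature extraction excerpt in Section 3.1.4.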
#### 2.4.2 Storage
A MIB for storage-related activities (e.g. training sets, classification
results, etc.) has to be provided; accordingly, the MIB presented in Figure 2.6
was devised.
Figure 2.6: Storage MIB.
#### 2.4.3 Sample Loading
The Sample Loading Service knows how to load certain file or stream types
(e.g. WAVE) and convert them accordingly for further preprocessing. In our
project, we introduce the sample loading MIB module, which we will use to
manage and keep track of its specific parameters:
1. 1.
iFormat: the sample format, an integer-typed attribute.
2. 2.
adSample: the sample data, a collection of bytes.
All these attributes are located in the sampleLoadingServiceEntry object, which
is in the sampleLoadingServiceTable object. See the example of the SMI tree for
sample loading in Figure 2.7 and the sketch below.
Figure 2.7: Preliminary MARF Sample Loading Service MIB.
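As an illustrative sketch only (the exact syntax, types, and access levels are those defined in MARF-sample-loading.mib), the two attributes could appear as columnar objects along the following lines:
-- illustrative only; see MARF-sample-loading.mib for the actual objects
iFormat OBJECT-TYPE
    SYNTAX      Integer32
    MAX-ACCESS  read-write
    STATUS      current
    DESCRIPTION "The sample format used by the loader."
    ::= { sampleLoadingServiceEntry 1 }
adSample OBJECT-TYPE
    SYNTAX      OCTET STRING
    MAX-ACCESS  read-only
    STATUS      current
    DESCRIPTION "The raw sample data currently loaded, as a collection of bytes."
    ::= { sampleLoadingServiceEntry 2 }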
#### 2.4.4 Preprocessing
The Preprocessing Service accepts incoming voice or text samples and does the
requested preprocessing (all sorts of filters, normalization, etc.). Its
function generally is to normalize the incoming sample by amplitude starting
from a certain index and/or to filter it. It offers several algorithms as
options for filtering the voice frequencies; the algorithms in this module are
FFT-based and CFE-based. In our project, we introduce the preprocessing MIB
module, which we will use to manage and keep track of its specific parameters:
1. 1.
Sample: a collection of doubles and a format.
2. 2.
dSilenceThreshold : a double type for the silence cut off threshold.
3. 3.
bRemoveNoise : a Boolean type to indicate noise removal.
4. 4.
bRemoveSilence : a Boolean type to indicate silence removal.
All these attributes are located in the preprocessingServiceEntry object, which
is in the preprocessingServiceTable tabular object. See the example of the SMI
tree for preprocessing in Figure 2.8.
Figure 2.8: Preliminary MARF Preprocessing Service MIB.
#### 2.4.5 Feature Extraction
The Feature Extraction Service accepts data, presumably preprocessed, and
attempts to extract features out of it using the requested algorithm (out of
those currently implemented, such as FFT, LPC, MinMax, etc.); it may optionally
query the preprocessed data from the Preprocessing Service. See the SMI tree
for feature extraction in Figure 2.9. The parameters used in Feature Extraction
are below:
1. 1.
adFeatures: the feature vector currently being processed.
2. 2.
oFeatureSet: a collection of feature vectors or sets for a subject.
Figure 2.9: Preliminary MARF Feature Extraction Service MIB.
#### 2.4.6 Classification
Classification-type services are responsible either for training on feature
data (read/write) or for classification (read-only) of the incoming data. While
somewhat similar in operation to the service types mentioned earlier,
Classification does a lot more legwork and is actually responsible for saving
the training set database (in files or otherwise) locally, and for providing
concurrency control, persistence, ACID properties, etc. Its MIB tree in Figure
2.10 is very similar to those of the other services at this point.
Figure 2.10: Preliminary MARF Classification Service MIB.
1. 1.
adFeatures: the feature vector currently being processed for
training/classification.
2. 2.
oResultSet: a collection of classification results.
#### 2.4.7 Applications
MARF has many applications; for this project we were considering primarily two
of them for management, SpeakerIdentApp and LangIdentApp, for speaker and
language identification respectively. The MIB trees we designed are in Figure
2.11 and Figure 2.12.
Figure 2.11: SpeakerIdentApp MIB. Figure 2.12: LangIdentApp MIB.
## Chapter 3 Conclusion
$Revision:1.1.2.5$
### 3.1 Review of Results
#### 3.1.1 MIBs
So far, we have successfully finished a set of compilable and loadable MIBs
covering different aspects of DMARF over SNMP:
1. 1.
MARF-MIB
2. 2.
MARF-types
3. 3.
MARF-storage
4. 4.
MARF-services
5. 5.
MARF-sample-loading
6. 6.
MARF-preprocessing
7. 7.
MARF-feature-extraction
8. 8.
MARF-classification
9. 9.
MARF-APPS-SPEAKERIDENTAPP
10. 10.
MARF-APPS-LANGIDENTAPP
While some of the information in the above MIBs is in debug form, it is enough
to compile and generate proxy agents for minimal testing. There are some unused
definitions that will either be removed or become used in follow-up revisions.
We also obtained the PEN SMI number 28218 from IANA for use in MARF.
#### 3.1.2 SNMP Proxy Agents
Based on the MIBs above, we produced proxy agents using AdventNet’s tools
[Adv07a]. There are two kinds of proxies: one is a proxy talking to MARF’s API
and the agent, which we call the API proxy or instrumentation proxy; the other
is a proxy talking to the manager and agents using SNMP. Because of the time
constraints, they are not fully instrumented.
Agents produced with the help of AdventNet’s MIB Compiler also come in two
types: SNMP master agents (a proxy to a group of SNMP sub-agents that the
manager does not see) vs. sub-agents (SNMP), plus the instrumentation
(delegates, application-specific business logic). E.g. in the case of Feature
Extraction and LPC, the former would be the master agent proxy, and LPC would
be a sub-agent (e.g. a MIB subtree), which is a more specific type of feature
extraction.
#### 3.1.3 SNMP MARF Application Managers
Within the timeframe of the course we did not manage to produce our own SNMP
manager application and integrate it into the proposed GUI, so we are leaving
this task to future work. The manager application we used to test our work was
the MIB Browser provided by AdventNet’s tools.
#### 3.1.4 Difficulties
During our design, MIB validation, and compilation, we faced certain
difficulties. One of them was the difference in ASN.1/SMI syntax checks between
tools such as SimpleWeb vs. AdventNet, where for a long time we could not debug
one problem we faced: a double AUGMENTS when deriving the MIBs of concrete MARF
services from the general one (e.g. Feature Extraction to LPC or Classification
to Neural Network, etc.). MIB loading and configuration management in AdventNet
and the corresponding mapping of operations (instrumentation delegates in a
pipeline) were also part of the learning curve for AdventNet’s API.
Here is the example that was holding us back, where SimpleWeb’s validator
[The07] complained but AdventNet’s MIB Browser and Compiler had no problem with
it. Assume:
tableFoo
tableEntryFoo
tableBar
tableEntryBar
AUGMENTS {tableEntryFoo}
SimpleWeb’s validator complained here:
tableBaz
tableEntryBaz
AUGMENTS {tableEntryBar}
To give a more concrete example from one of our MIBs (a bit stripped): in this
case lpcServiceEntry would be managed by a sub-agent of feature extraction.
Please see the more up-to-date MIB in MARF-feature-extraction.mib.
featureextractionServiceTable OBJECT-TYPE
SYNTAX SEQUENCE OF FeatureextractionServiceEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"The table of the Featureextraction services known by the SNMP agent."
AUGMENTS { serviceTable }
::= { featureextractionService 1 }
featureextractionServiceEntry OBJECT-TYPE
SYNTAX FeatureextractionServiceEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"Details about a particular Featureextraction service."
AUGMENTS { serviceEntry }
::= { featureextractionServiceTable 1 }
FeatureextractionServiceEntry ::= SEQUENCE {
oFeatureSet FeatureSet,
adFeatures VectorOfDoubles
}
lpcServiceTable OBJECT-TYPE
SYNTAX SEQUENCE OF LPCServiceEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION " "
AUGMENTS { featureextractionServiceTable }
::={ featureextractionService 2 }
lpcServiceEntry OBJECT-TYPE
SYNTAX LPCServiceEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION " "
AUGMENTS { featureextractionServiceEntry }
::={ lpcServiceTable 1 }
LPCServiceEntry ::=SEQUENCE {
iPoles INTEGER,
iWindowLen INTEGER
}
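A likely reason for the stricter validator’s complaint, as far as we can tell from RFC 2578, is that SMIv2 permits an AUGMENTS clause only in the row (entry) definition, not in the table object itself, and that the row named in AUGMENTS is expected to be a base row defined with its own INDEX clause; chaining AUGMENTS from a row that itself augments another row is therefore rejected by pedantic checkers even though more lenient tools accept it.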
#### 3.1.5 Contributions
Generally, the entire team focused equally on the development of the MIBs,
learning and trying out AdventNet’s API, the project presentation, and the
report. Since MARF consists of multiple components, the workload was subdivided
roughly along those components; some common structures were worked on together
by the team. A more specific breakdown is as follows:
* •
Serguei: overall project design and management, PEN application, MIB
compilation, Classification and MARF server MIBs.
* •
Jian: learning MARF’s API, Feature Extraction, LPC MIBs, AdventNet’s MIB
Compiler and API investigation, MIB diagrams.
* •
Lee Wei: learning MARF’s API, Sample Loading, Preprocessing, Application MIBs,
AdventNet’s MIB Compiler and API investigation.
#### 3.1.6 Open Source
This project, like MARF itself, was developed as an open-source project at
SourceForge.net. To check out the latest version of the source code, the
sources of this report, the MIBs, and the presentation from our CVS repository,
one can execute the following commands:
cvs -d:pserver:anonymous@marf.cvs.sourceforge.net:/cvsroot/marf login
cvs -z3 -d:pserver:anonymous@marf.cvs.sourceforge.net:/cvsroot/marf co -rINSE7120 -P marf
or alternatively browse it on-line:
http://marf.cvs.sourceforge.net/marf/marf/?pathrev=INSE7120
The files of interest related to this project can be found as follows: the
sources of this report are in marf/doc/src/tex/inse7120, the presentation is in
marf/doc/presentations/inse7120, the corresponding graphics are in
marf/doc/src/graphics/distributed/mib, the source code of the entire MARF is in
marf/src, and the generated agent code is in marf/src/marf/net/snmp.
### 3.2 Future Work
This section summarizes future work highlights.
#### 3.2.1 Scenarios
Here we present a few scenarios where DMARF, along with SNMP management, could
be usefully employed. Management of these infrastructures is better conducted
over SNMP, which is well adapted to use on networks under stress, along with
RMON and configuration management by some kind of central authority.
* •
Police agents in various law enforcement agencies spread out across a country
who are nonetheless able to identify speakers across all jurisdictions if, say,
recorded phone conversations of a suspect are available.
* •
Another scenario: conference microphones for different speakers installed in a
large room, in separate rooms, or even on different continents using
teleconferencing; we try to validate/identify who the speakers are at a given
point in time.
* •
Alternatively, say these are recorded voices from conference calls or phone
conversations, etc. We could make the system reliable and distributed, with
recovery of agents/clients and of the server database.
* •
Perhaps, other multimedia, VoIP, Skype, translation and interpretation natural
language services can be used with DMARF over the Internet and other media.
#### 3.2.2 Summary
There is a lot of distributed multimedia traffic and computation involved
between the sample loading, preprocessing, feature extraction, and
classification servers. Efficiency in network management involves avoiding
computations that have already taken place on another server and simply
offloading the already computed data instead.
Since DMARF originally implemented Java RMI, SOAP (Web Services over XML-RPC),
and CORBA, some of that work can be transplanted towards the management needs.
In particular, DMARF’s CORBA IDL definitions can also be used for SNMP agent
generation. Thus, another area of focus in this project will be the role of
CORBA in network management and the extension of Distributed MARF’s [Mok06]
CORBA services implementation and its SpeakerIdentApp to provide efficient
network management, monitoring, multimedia (audio) transfer, fault tolerance
and recovery, availability through replication, security, and configuration
management.
More specifically, in the context of MARF we will look into a few more of these
aspects, based on need, time, and interest. Some research and implementation
details to amend Distributed MARF that we will consider are the following:
* •
Finish proxy agents and instrumentation.
* •
Implement our own managers and the functions to compile new MIBs into the
manager.
* •
Complete prototyped GUI for ease-of-use of our management applications (as-is
MARF is mostly console-based).
* •
Complete full statistics MIB and implement RMON along with some performance
management functions such as collecting statistics and plotting the results.
* •
Propose a possible RFC.
* •
Make a public release and a publication.
* •
Implement some fault management functions such as alarms reporting.
* •
Look into XML in Network Management (possibly for XML-RPC).
* •
Look more in detail at Java and network management, JMX (right now through
AdventNet).
* •
Distributed Management of different DMARF nodes from various locations.
* •
Management of Grid-based Computing in DMARF.
* •
Analysis of CORBA and where it fits in Network Management in DMARF.
* •
Multimedia Management using SNMP.
### 3.3 Acknowledgments
* •
Dr. Chadi Assi
* •
SimpleWeb
* •
Open-Source Community
* •
AdventNet
## Bibliography
* [Adv07a] AdventNet. AdventNet SNMP Agent Toolkit Java Edition 6. adventnet.com, 2007. http://www.adventnet.com/products/javaagent/index.html.
* [Adv07b] AdventNet. AdventNet SNMP API 4. adventnet.com, 2007. http://snmp.adventnet.com/index.html.
* [Ass07] Chadi Assi. INSE7120: Advanced Network Management, Course Notes. CIISE, Concordia University, 2007. http://users.encs.concordia.ca/~assi/courses/inse7120.htm.
* [HKS99] Harrie Hazewinkel, Carl W. Kalbfleisch, and Juergen Schoenwaelder. WWW Service MIB Module, RFC2594. IETF Application MIB Working Group, 1999. http://www.simpleweb.org/ietf/mibs/modules/IETF/txt/WWW-MIB.
* [(IA07] Internet Assigned Numbers Authority (IANA). PRIVATE ENTERPRISE NUMBERS: SMI Network Management Private Enterprise Codes. iana.org, March 2007. http://www.iana.org/assignments/enterprise-numbers.
* [MCSN03] Serguei Mokhov, Ian Clement, Stephen Sinclair, and Dimitrios Nicolacopoulos. Modular Audio Recognition Framework. Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2002–2003. Project report, http://marf.sf.net, last viewed April 2008.
* [Mok06] Serguei A. Mokhov. On design and implementation of distributed modular audio recognition framework: Requirements and specification design document. [online], August 2006. Project report, http://arxiv.org/abs/0905.2459, last viewed May 2009\.
* [The07] The SimpleWeb. MIB module validation. simpleweb.org, 2007. http://www.simpleweb.org/ietf/mibs/validate/.
* [The09] The MARF Research and Development Group. The Modular Audio Recognition Framework and its Applications. SourceForge.net, 2002–2009. http://marf.sf.net, last viewed December 2008.
## Index
* API
* adFeatures item 1, item 1
* adSample item 2
* bRemoveNoise item 3
* bRemoveSilence item 4
* dSilenceThreshold item 2
* enterprises §2.1, §2.3
* iFormat item 1
* lpcServiceEntry §3.1.4
* oFeatureSet item 2
* oResultSet item 2
* preprocessingServiceEntry §2.4.4
* preprocessingServiceTable §2.4.4
* Sample item 1
* sampleLoadingServiceEntry §2.4.3
* sampleLoadingServiceTable §2.4.3
* SpeakerIdentApp §1.1
* WwwService §2.3
* Applications §2.4.7
* Classification §2.4.6
* Files
* ATM-TC-MIB §2.3
* MARF-APPS-LANGIDENTAPP.mib 10th item
* MARF-APPS-SPEAKERIDENTAPP.mib 9th item
* MARF-classification.mib 8th item
* MARF-feature-extraction.mib 7th item, §3.1.4
* MARF-MIB.mib 1st item
* MARF-preprocessing.mib 6th item
* MARF-sample-loading.mib 5th item
* MARF-services.mib 4th item
* MARF-storage.mib 3rd item
* MARF-types.mib 2nd item
* marf/doc/presentations/inse7120 §3.1.6
* marf/doc/src/graphics/distributed/mib §3.1.6
* marf/doc/src/tex/inse7120 §3.1.6
* marf/src §3.1.6
* marf/src/marf/net/snmp §2.3, §3.1.6
* marf/src/mib §2.3, §2.4
* WWW-MIB §2.3
* Introduction Chapter 1
* MARF
* Core Pipeline Figure 1.1
* Distributed Pipeline Figure 1.2
* Methodology Chapter 2
|
arxiv-papers
| 2009-05-30T06:42:55 |
2024-09-04T02:49:03.015269
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Serguei A. Mokhov, Lee Wei Huynh, Jian Li (Concordia University,\n Montreal, Canada)",
"submitter": "Serguei Mokhov",
"url": "https://arxiv.org/abs/0906.0065"
}
|
0906.0123
|
A de Bruijn - Erdős theorem and metric spaces
Ehsan Chiniforooshan and Vašek Chvátal
Department of Computer Science and Software Engineering
Concordia University
Montreal, Quebec H3G 1M8, Canada
Abstract
De Bruijn and Erdős proved that every noncollinear set of $n$ points in the
plane determines at least $n$ distinct lines. Chen and Chvátal suggested a
possible generalization of this theorem in the framework of metric spaces. We
provide partial results in this direction.
## 1 Introduction
Two distinct theorems are referred to as “the de Bruijn - Erdős theorem”. One
of them [8] concerns the chromatic number of infinite graphs; the other [7] is
our starting point:
Every noncollinear set of $n$ points in the plane
determines at least $n$ distinct lines.
This theorem involves neither measurement of distances nor measurement of
angles: the only notion employed here is incidence of points and lines. Such
theorems are a part of ordered geometry [6], which is built around the ternary
relation of betweenness: point $y$ is said to lie between points $x$ and $z$
if $y$ is an interior point of the line segment with endpoints $x$ and $z$. It
is customary to write $[xyz]$ for the statement that $y$ lies between $x$ and
$z$. In this notation, a line $\overline{uv}$ is defined — for any two
distinct points $u$ and $v$ — as
$\\{p:[puv]\\}\;\cup\;\\{u\\}\;\cup\;\\{p:[upv]\\}\;\cup\;\\{v\\}\;\cup\;\\{p:[uvp]\\}.$
(1)
In terms of the Euclidean metric $d$, we have
$[abc]\;\Leftrightarrow\;a,b,c$ are three distinct points and
$d(a,b)+d(b,c)=d(a,c)$. (2)
For an arbitrary metric space, equivalence (2) defines the ternary relation of
metric betweenness introduced in [9] and further studied in [1, 2, 5]; in
turn, (1) defines the line $\overline{uv}$ for any two distinct points $u$ and
$v$ in the metric space. The resulting family of lines may have strange
properties. For instance, a line can be a proper subset of another: in the
metric space with points $u,v,x,y,z$ and
$\displaystyle d(u,v)=d(v,x)=d(x,y)=d(y,z)=d(z,u)=1,$ $\displaystyle
d(u,x)=d(v,y)=d(x,z)=d(y,u)=d(z,v)=2,$
we have
$\overline{vy}=\\{v,x,y\\}\;\;\mbox{ and }\;\;\overline{xy}=\\{v,x,y,z\\}.$
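As a quick check of these two claims: a point $p$ can belong to $\overline{vy}$ only via $[vpy]$, since $d(v,y)=2$ is already the largest distance in the space, and $d(v,p)+d(p,y)=2$ forces $d(v,p)=d(p,y)=1$, which holds only for $p=x$; hence $\overline{vy}=\\{v,x,y\\}$. For the second line, $d(v,x)+d(x,y)=d(v,y)$ and $d(x,y)+d(y,z)=d(x,z)$ give $[vxy]$ and $[xyz]$, so $v,z\in\overline{xy}$, while $u$ satisfies none of $[uxy]$, $[xuy]$, $[xyu]$; hence $\overline{xy}=\\{v,x,y,z\\}$.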
Chen [4] proved that a classic theorem of ordered geometry, the Sylvester-
Gallai theorem, generalizes in the framework of metric spaces (when lines in
these spaces are defined differently than here). Chen and Chvátal [3]
suggested that the de Bruijn - Erdős theorem, too, might generalize in this
framework:
True or false? Every finite metric space $(X,d)$
where no line consists of the entire ground set $X$
determines at least $|X|$ distinct lines.
They proved that
* •
in every metric space on $n$ points, there are at least $\lg n$ distinct lines
or else some line consists of all $n$ points.
We prove that
* •
in every metric space on $n$ points, there are $\Omega((n/\rho)^{2/3})$
distinct lines, where $\rho$ is the ratio between the largest distance and the
smallest nonzero distance (Theorem 1);
* •
in every metric space induced by a connected graph on $n$ vertices, there are
$\Omega(n^{2/7})$ distinct lines or else some line consists of all $n$
vertices (Corollary 1);
* •
in every metric space on $n$ points where each nonzero distance equals $1$ or
$2$, there are $\Omega(n^{4/3})$ distinct lines and this bound is tight
(Theorem 3).
## 2 Lines in hypergraphs
A hypergraph is an ordered pair $(X,H)$ such that $X$ is a set and $H$ is a
family of subsets of $X$; elements of $X$ are the vertices of the hypergraph
and members of $H$ are its edges. A hypergraph is called $k$-uniform if each
of its edges consists of $k$ vertices. The definition of lines in a metric
space $(X,d)$ depends only on the $3$-uniform hypergraph $(X,H(d))$ where
$H(d)=\\{\\{a,b,c\\}:\;d(a,b)+d(b,c)=d(a,c)\mbox{ and }a\neq b,\,b\neq c\\}:$
the line $\overline{uv}$ equals $\\{u,v\\}\cup\\{w:\\{u,v,w\\}\in H(d)\\}.$
This observation suggests extending the notion of lines in metric spaces to a
notion of lines in $3$-uniform hypergraphs: for any two distinct vertices $u$
and $v$ in a $3$-uniform hypergraph $(X,H)$, the line $\overline{uv}$ is
defined as $\\{u,v\\}\cup\\{w:\\{u,v,w\\}\in H\,\\}.$ Now every metric space
$(X,d)$ and its associated hypergraph $(X,H(d))$ define the same family of
lines.
Let $f(n)$ denote the smallest number of lines in a $3$-uniform hypergraph on
$n$ vertices where no line consists of all $n$ vertices and let $g(n)$ denote
the smallest number of lines in a metric space on $n$ points where no line
consists of all $n$ points. In this notation, $f(n)\leq g(n)$ for all $n$;
Chen and Chvátal [3] proved that
$\lg n\leq f(n)<c^{\sqrt{\lg n}}$
for some positive constant $c$. (The proof of the lower bound is based on the
observation that $w\not\in\overline{uv}$ if and only if
$v\not\in\overline{uw}$, and so — unless some line contains all the vertices —
the mapping that assigns to each vertex the set of lines containing it is one-
to-one.) The upper bound on $f(n)$ does not rule out the possibility of
$g(n)=n$: not all $3$-uniform hypergraphs arise from metric spaces $(X,d)$ as
$(X,H(d))$. (It has been proved ([5, 4]) that the hypergraph consisting of the
seven vertices $0,1,2,3,4,5,6$ and the seven edges
$\\{i\bmod{7},\,(i+1)\bmod{7},\,(i+3)\bmod{7}\\}$ with $i=0,1,2,3,4,5,6$ does
not arise from any metric space. This $3$-uniform hypergraph is known as the
Fano plane or the projective plane of order two.)
We let $K^{3}_{4}$ denote the $3$-uniform hypergraph with four vertices and
four edges.
###### Lemma 1.
Let $H$ be a $3$-uniform hypergraph, let $x$ be a vertex of $H$, and let $T$
be a set of vertices of $H$ such that (i) $x\not\in T$ and (ii) there are no
vertices $u,v,w$ in $T$ such that $x,u,v,w$ induce a $K^{3}_{4}$ in $H$. Then
$H$ defines at least $0.25(2|T|)^{2/3}$ distinct lines.
Proof. We may assume that $|T|>4$: otherwise $0.25(2|T|)^{2/3}\leq 1$, which
makes the assertion trivial. Let $S$ denote a largest subset of $T$ such that
all the lines $\overline{xv}$ with $v\in S$ are identical. Now $H$ defines at
least $|T|/|S|$ distinct lines, which gives the desired conclusion when
$|S|\leq(2|T|)^{1/3}$. We will prove that $H$ defines at least $|S|(|S|-1)/2$
distinct lines, which gives the desired conclusion when
$|S|\geq(2|T|)^{1/3}\geq 2$. More precisely, we will prove that all the lines
$\overline{uv}$ with $u,v\in S$ are distinct. For this purpose, consider any
three pairwise distinct vertices $u,v,w$ in $S$. Since
$\overline{xu}=\overline{xv}=\overline{xw}$, all three of $\\{x,u,v\\}$,
$\\{x,u,w\\}$, $\\{x,v,w\\}$ are edges of $H$; since $x,u,v,w$ do not induce a
$K^{3}_{4}$, it follows that $\\{u,v,w\\}$ is not an edge of $H$; since $w$ is
an arbitrary vertex in $S$ distinct from $u$ and $v$, the line $\overline{uv}$
intersects $S$ in $\\{u,v\\}$. $\Box$
## 3 Lines in metric spaces
###### Theorem 1.
In every metric space on $n$ points such that $n\geq 2$, there are at least
$0.25(n/\rho)^{2/3}$ distinct lines, where $\rho$ is the ratio between the
largest distance and the smallest nonzero distance.
Proof. Let the metric space be $(X,d)$, let $\delta$ denote the smallest
nonzero distance and let $x$ be an arbitrary point of $X$. The $n-1$ distances
$d(x,u)$ with $u\neq x$ are distributed into buckets $[i\delta,(i+1)\delta)$
with $i=1,2,\ldots,\lfloor\rho\rfloor$. It follows that there are a subset $T$
of $X-\\{x\\}$ and a positive integer $i$ such that
$u\in T\;\Rightarrow\;i\delta\leq d(x,u)<(i+1)\delta$
and
$|T|\;\geq\;\frac{n-1}{\lfloor\rho\rfloor}\;\geq\;\frac{n-1}{\rho}\;\geq\;\frac{n}{2\rho}\,.$
We will complete the proof by showing that the hypergraph $(X,H(d))$ satisfies
the hypothesis of Lemma 1. For this purpose, consider arbitrary points $u,v,w$
in $T$ such that $\\{x,u,v\\},\\{x,u,w\\},\\{x,v,w\\}\in H(d)$; we will prove
that $\\{u,v,w\\}\not\in H(d)$. Since $\\{x,u,v\\}\in H(d)$ and
$|d(x,u)-d(x,v)|<\delta\leq d(u,v)$, we have $d(u,v)=d(x,u)+d(x,v)$;
similarly, $d(u,w)=d(x,u)+d(x,w)$ and $d(v,w)=d(x,v)+d(x,w)$. Now
$i\delta\leq d(x,u),d(x,v),d(x,w)<2i\delta,$
and so
$2i\delta\leq d(u,v),d(u,w),d(v,w)<4i\delta,$
and so $\\{u,v,w\\}\not\in H(d)$. $\Box$
## 4 Metric spaces induced by graphs
Every finite connected undirected graph induces a metric space, where the
distance between vertices $u$ and $v$ is defined as the smallest number of
edges in a path from $u$ to $v$.
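For example, the five-point space of the Introduction is the metric space induced by the $5$-cycle with edges $uv$, $vx$, $xy$, $yz$, $zu$: adjacent vertices are at graph distance $1$ and non-adjacent vertices at graph distance $2$.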
###### Theorem 2.
If, in a metric space $(X,d)$ induced by a graph of diameter $t$, no line
equals $X$, then there are at least $\sqrt{t/2\;}$ distinct lines.
Proof. There are vertices $v_{0},v_{1},\ldots,v_{t}$ such that
$d(v_{i},v_{j})=j-i$ whenever $0\leq i<j\leq t$. Consider a largest set $S$ of
subscripts $r$ such that all lines $\overline{v_{r}v_{r+1}}$ are equal. There
are at least $t/|S|$ distinct lines; this gives the desired conclusion when
$|S|\leq\sqrt{2t}$. We will complete the argument by proving that there are at
least $|S|/2$ distinct lines, which gives the desired conclusion when
$|S|\geq\sqrt{2t}$.
Let $u$ be any vertex outside the line $\overline{v_{r}v_{r+1}}$ with $r\in
S$. We will prove that at least $|S|/2$ of the lines $\overline{uv_{r}}$ with
$r\in S$ are pairwise distinct: more precisely, for every three subscripts
$i,j,k$ in $S$, at least two of the three lines $\overline{uv_{i}}$,
$\overline{uv_{j}}$, $\overline{uv_{k}}$ are distinct.
By the triangle inequality and since $u\not\in\overline{v_{r}v_{r+1}}$, we
have
$|\,d(u,v_{r})-d(u,v_{r+1})\,|\;<\;d(v_{r},v_{r+1})\;\;\mbox{ for all $r$ in
$S$};$
since $d(v_{r},v_{r+1})=1$, it follows that $d(u,v_{r})=d(u,v_{r+1})$ for all
$r$ in $S$. Now consider any three subscripts $i,j,k$ in $S$ such that
$i<j<k$. We have
$\displaystyle
d(v_{i},\\!v_{j})\\!+\\!d(v_{j},\\!u)>d(v_{i+1},\\!v_{j})\\!+\\!d(v_{j},\\!u)\geq
d(v_{i+1},\\!u)=d(v_{i},\\!u),\\!$ $\displaystyle
d(u,\\!v_{i})\\!+\\!d(v_{i},\\!v_{j})>d(u,\\!v_{i})\\!+\\!d(v_{i+1},\\!v_{j})=d(u,\\!v_{i+1})\\!+\\!d(v_{i+1},\\!v_{j})\geq
d(u,\\!v_{j}),\\!$
and so $v_{j}\in\overline{uv_{i}}$ if and only if
$d(v_{i},\\!u)\\!+\\!d(u,\\!v_{j})=d(v_{i},\\!v_{j})$. Similarly,
$v_{k}\in\overline{uv_{j}}$ if and only if
$d(v_{j},\\!u)\\!+\\!d(u,\\!v_{k})=d(v_{j},\\!v_{k})$. Since
$d(u,v_{i})+d(u,v_{k})\>=\>d(u,v_{i})+d(u,v_{k+1})\>\geq\>d(v_{i},v_{k+1})\>=\>k+1-i,$
we have $d(u,v_{i})>j-i$ or else $d(u,v_{k})>k-j$. If $d(u,v_{i})>j-i$, then
$d(v_{i},\\!u)\\!+\\!d(u,\\!v_{j})>d(v_{i},\\!v_{j})$, and so
$v_{j}\not\in\overline{uv_{i}}$ (and $v_{i}\not\in\overline{uv_{j}}$), which
implies $\overline{uv_{i}}\neq\overline{uv_{j}}$. If $d(u,v_{k+1})>k-j$, then
$d(v_{j},\\!u)\\!+\\!d(u,\\!v_{k})>d(v_{j},\\!v_{k})$, and so
$v_{k}\not\in\overline{uv_{j}}$ (and $v_{j}\not\in\overline{uv_{k}}$), which
implies $\overline{uv_{j}}\neq\overline{uv_{k}}$. $\Box$
###### Corollary 1.
If, in a metric space induced by a connected graph on $n$ vertices, no line
consists of all $n$ vertices, then there are at least $2^{-8/7}n^{2/7}$
distinct lines.
Proof. If the graph has diameter at most $2^{-9/7}n^{4/7}$, then the bound
follows from Theorem 1; else it follows from Theorem 2. $\Box$
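In more detail: if the diameter $t$ satisfies $t\leq 2^{-9/7}n^{4/7}$, then $\rho=t$ (the smallest nonzero distance being $1$) and Theorem 1 yields at least $0.25\,(n/t)^{2/3}\geq 0.25\,(2^{9/7}n^{3/7})^{2/3}=2^{-8/7}n^{2/7}$ lines; otherwise Theorem 2 yields at least $\sqrt{t/2}>\sqrt{2^{-16/7}n^{4/7}}=2^{-8/7}n^{2/7}$ lines.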
## 5 When each nonzero distance equals $1$ or $2$
By a $1$-$2$ metric space, we mean a metric space where each nonzero distance
is $1$ or $2$.
###### Theorem 3.
The smallest number $h(n)$ of lines in a $1$-$2$ metric space on $n$ points
satisfies the inequalities
$(1+o(1))\alpha n^{4/3}\leq h(n)\leq(1+o(1))\beta n^{4/3}$
with $\alpha=2^{-7/3}$ and $\beta=3\cdot 2^{-5/3}$.
We say that points $u,v$ in a $1$-$2$ metric space are twins if, and only if,
$d(u,v)=2$ and $d(u,w)=d(v,w)$ for all $w$ distinct from both $u$ and $v$. Our
proof of Theorem 3 relies on the following lemma, whose proof is routine.
###### Lemma 2.
If $u_{1},u_{2},u_{3},u_{4}$ are four distinct points in a $1$-$2$ metric
space, then:
* (i)
if $d(u_{i},u_{j})=1$ for all choices of distinct $i$ and $j$, then
$\overline{u_{1}u_{2}}\neq\overline{u_{3}u_{4}}$,
* (ii)
if $d(u_{1},u_{2})=1$ and $d(u_{3},u_{4})=2$, then
$\overline{u_{1}u_{2}}\neq\overline{u_{3}u_{4}}$,
* (iii)
if $d(u_{1},u_{2})=d(u_{3},u_{4})=2$ and $u_{4}$ has a twin other than
$u_{3}$,
then $\overline{u_{1}u_{2}}\neq\overline{u_{3}u_{4}}$.
If $u_{1},u_{2},u_{3}$ are three distinct points in a $1$-$2$ metric space,
then:
* (iv)
if $d(u_{1},u_{2})=d(u_{2},u_{3})=1$ and $u_{1},u_{3}$ are not twins, then
$\overline{u_{1}u_{2}}\neq\overline{u_{2}u_{3}}$,
* (v)
if $d(u_{1},u_{2})=1$, $d(u_{2},u_{3})=2$, and $u_{3}$ has a twin other than
$u_{2}$,
then $\overline{u_{1}u_{2}}\neq\overline{u_{2}u_{3}}$,
* (vi)
if $d(u_{1},u_{2})=d(u_{2},u_{3})=2$, then
$\overline{u_{1}u_{2}}\neq\overline{u_{2}u_{3}}$.
$\Box$
Proof of Theorem 3. To see that $h(n)\leq(1+o(1))\beta n^{4/3}$, consider the
metric space where the ground set is split into pairwise disjoint groups of
sizes as nearly equal as possible, every two points that belong to two
different groups have distance $1$, and every two points that belong to one
group have distance $2$. If each group includes at least three points, then
$\overline{uv}=\overline{wx}$ if and only if either $\\{u,v\\}=\\{w,x\\}$ or
else there are two distinct groups such that each of the sets $\\{u,v\\}$,
$\\{w,x\\}$ has one element in each of these two groups. Consequently, when
there are $n$ points altogether and $(1+o(1))2^{-1/3}n^{2/3}$ groups, there
are $(1+o(1))\beta n^{4/3}$ lines.
To prove that $h(n)\geq(1+o(1))\alpha n^{4/3}$, consider an arbitrary $1$-$2$
metric space $(X,d)$ and write $n=|X|$. Let $X_{1}$ be any maximal subset of
$X$ that does not contain a pair of twins.
Case 1: $|X_{1}|\geq n/2$. In this case, consider a largest set of distinct
two-point subsets $\\{u_{i},v_{i}\\}$ ($i=1,2,\ldots,s$) of $X_{1}$ such that
$\overline{u_{1}v_{1}}=\overline{u_{2}v_{2}}=\ldots=\overline{u_{s}v_{s}}.$
Since every two-point subset of $X_{1}$ determines a line, there are at least
$\binom{|X_{1}|}{2}\cdot\frac{1}{s}$
distinct lines; this gives the desired conclusion when $s\leq(n/2)^{2/3}$. We
will complete the argument by proving that there are at least
$\binom{s}{2}-5$
distinct lines, which gives the desired conclusion when $s\geq(n/2)^{2/3}$.
For this purpose, we may assume that $s\geq 5$. Part (iv) of Lemma 2
guarantees that the sets $\\{u_{i},v_{i}\\}$ with $d(u_{i},v_{i})=1$ are
pairwise disjoint; part (vi) of Lemma 2 guarantees that the sets
$\\{u_{i},v_{i}\\}$ with $d(u_{i},v_{i})=2$ are pairwise disjoint; part (ii)
of Lemma 2 guarantees that each of the sets $\\{u_{i},v_{i}\\}$ with
$d(u_{i},v_{i})=1$ meets each of the sets $\\{u_{i},v_{i}\\}$ with
$d(u_{i},v_{i})=2$; now our assumption $s\geq 5$ guarantees that all $s$
distances $d(u_{i},v_{i})$ are equal. We are going to prove that there are at
least $s(s-1)/2$ distinct lines: for every choice of subscripts $i,j$ such
that $1\leq i<j\leq s$, there is a line $L_{ij}$ such that
$\\{u_{k},v_{k}\\}\subseteq L_{ij}\;\;\Leftrightarrow\;\;k\in\\{i,j\\}.$
Subcase 1.1: $d(u_{1},v_{1})=d(u_{2},v_{2})=\ldots=d(u_{s},v_{s})=1$.
Since $\\{u_{j},v_{j}\\}\subseteq\overline{u_{j}v_{j}}=\overline{u_{i}v_{i}}$
and $\\{u_{i},v_{i}\\}\subseteq\overline{u_{i}v_{i}}=\overline{u_{j}v_{j}}$,
we may assume (after switching $u_{j}$ with $v_{j}$ if necessary) that
$d(u_{i},u_{j})=2$ and $d(u_{i},v_{j})=d(u_{j},v_{i})=1$. Now we may set
$L_{ij}=\overline{u_{i}u_{j}}$: if $k\not\in\\{i,j\\}$, then
$u_{j}\in\overline{u_{j}v_{j}}=\overline{u_{k}v_{k}}$ implies that one of
$d(u_{j},u_{k})$ and $d(u_{j},v_{k})$ equals $2$, and so
$\\{u_{k},v_{k}\\}\not\subseteq\overline{u_{i}u_{j}}$.
Subcase 1.2: $d(u_{1},v_{1})=d(u_{2},v_{2})=\ldots=d(u_{s},v_{s})=2$.
Since
$\overline{u_{1}v_{1}}=\overline{u_{2}v_{2}}=\ldots=\overline{u_{s}v_{s}}$,
the distance between any point in one of the sets $\\{u_{1},v_{1}\\}$,
$\\{u_{2},v_{2}\\}$, …, $\\{u_{s},v_{s}\\}$ and any point in another of these
$s$ sets equals $1$; it follows that we may set
$L_{ij}=\overline{u_{i}u_{j}}$.
Case 2: $|X_{1}|<n/2$. Write $X_{2}=X-X_{1}$, consider a largest set $S$ of
points in $X_{2}$ such that
$u,v\in S,\,u\neq v\;\;\Rightarrow\;\;d(u,v)=1,$
and write
$\displaystyle E_{1}$ $\displaystyle=$ $\displaystyle\\{\;\\{u,v\\}\\!:u,v\in
S,\,u\neq v\\},$ $\displaystyle E_{2}$ $\displaystyle=$
$\displaystyle\\{\;\\{u,v\\}\\!:u,v\in X_{2},\,d(u,v)=2\\}.$
Since every vertex of $X_{2}$ has a twin (else it could be added to $X_{1}$),
Lemma 2 guarantees that every two distinct pairs in $E_{1}\cup E_{2}$
determine two distinct lines. We complete the argument by pointing out that
$|E_{1}\cup E_{2}|\geq(1+o(1))\alpha n^{4/3}:$
the famous theorem of Turán [10], applied to the graph with vertex set $X_{2}$ and edge set $E_{2}$ (in which $S$ is a largest independent set), guarantees that
$|S|\;\geq\;\frac{|X_{2}|^{2}}{2|E_{2}|+|X_{2}|},$
and so $|E_{2}|\;<\;\alpha n^{4/3}$ implies $|E_{1}|\;\geq\;(1+o(1))\,\alpha
n^{4/3}$. $\Box$
The lower bound of Theorem 3 can be easily improved through a more careful
analysis of Case 2: a routine exercise in calculus shows that
$x\geq 3,\,y\geq
0\;\;\Rightarrow\;\;\frac{1}{2}\cdot\left(\frac{x^{2}}{2y+x}\right)^{2}+y\;\geq\;\beta
x^{4/3}-\frac{x}{2}\>,$
and so $|E_{1}\cup E_{2}|\geq(1+o(1))\beta|X_{2}|^{4/3}$. Perhaps
$h(n)=(1+o(1))\beta n^{4/3}$.
Acknowledgment
This research was carried out in ConCoCO (Concordia Computational
Combinatorial Optimization Laboratory) and undertaken, in part, thanks to
funding from the Canada Research Chairs Program and from the Natural Sciences
and Engineering Research Council of Canada.
## References
* [1] L.M. Blumenthal, Theory and Applications of Distance Geometry, Oxford University Press, Oxford, 1953.
* [2] H. Busemann, The Geometry of Geodesics, Academic Press, New York, 1955.
* [3] X. Chen and V. Chvátal, “Problems related to a de Bruijn - Erdős theorem”, Discrete Applied Mathematics 156 (2008), 2101 – 2108.
* [4] X. Chen, The Sylvester-Chvátal theorem, Discrete & Computational Geometry 35 (2006) 193–199.
* [5] V. Chvátal, Sylvester-Gallai theorem and metric betweenness, Discrete & Computational Geometry 31 (2004) 175–195.
* [6] H.S.M. Coxeter, Introduction to Geometry, Wiley, New York, 1961.
* [7] N.G. de Bruijn and P. Erdős, On a combinatorial problem, Indagationes Mathematicae 10 (1948) 421–423.
* [8] N.G. de Bruijn and P. Erdős, A colour problem for infinite graphs and a problem in the theory of relations, Indagationes Mathematicae 13 (1951) 369–373.
* [9] K. Menger, Untersuchungen über allgemeine metrik, Mathematische Annalen 100 (1928) 75–163.
* [10] P. Turán, Egy gráfelméleti szélsőértékfeladatról, Mat. Fiz. Lapok 48 (1941), 436–452; see also: On the theory of graphs, Colloq. Math. 3 (1954), 19–30.
|
arxiv-papers
| 2009-05-31T00:01:01 |
2024-09-04T02:49:03.023207
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Ehsan Chiniforooshan and Va\\v{s}ek Chv\\'atal",
"submitter": "Va\\v{s}ek Chv\\'atal",
"url": "https://arxiv.org/abs/0906.0123"
}
|
0906.0147
|
# Hahn Decomposition Theorem
of Signed Lattice Measure
Jun Tanaka University of California, Riverside, USA juntanaka@math.ucr.edu,
yonigeninnin@gmail.com
(Date: January, 15, 2009)
###### Abstract.
In this paper, we will define a signed Lattice measure on $\sigma$-algebras, as
well as give the definition of positive and negative Lattices. Herein, we will
show that the Hahn Decomposition Theorem decomposes any space X into a positive
Lattice A and a negative Lattice B such that $A\vee B$=X and the signed Lattice
measure of $A\wedge B$ is 0.
###### Key words and phrases:
Lattice measure, signed Lattice measure, Hahn decomposition, Lattice
###### 2000 Mathematics Subject Classification:
Primary: 28A12, 28C15
## 1\. Introduction
In this paper, we will show the Hahn Decomposition Theorem for Signed Lattice
Measures when the Lattice does not necessarily satisfy the law of
non-contradiction. We consider a set X and a Lattice $\mathcal{L}$ of subsets
of X, which may not satisfy the law of non-contradiction and which contains
$\emptyset$ and X.
We will define a signed Lattice measure on $\sigma$-algebras, and we will show
that the Lattice Hahn Decomposition Theorem decomposes any set X into a
positive Lattice A and a negative Lattice B such that $A\vee B$=X and the
signed Lattice measure of $A\wedge B$ is 0.
The theory discussed in this paper is a Lattice version of the paper [8], in
which J. Tanaka showed the Hahn Decomposition Theorem for Signed Fuzzy
Measures. As already seen there, a fuzzy measure is a classical measure
provided that fuzzy sets are restricted to classical sets. As in classical
measure theory, he defined a fuzzy signed measure on $\sigma$-algebras and
showed that the Fuzzy Hahn Decomposition Theorem decomposes any space X into a
positive set A and a negative set B such that A+B=X and the signed measure of
$A\wedge B$ is 0.
Unfortunately, monotonicity prohibits the signed Lattice measure from being a
signed measure in the classical sense; note that monotonicity is not required
in the definition of a classical signed measure. In Section 3, we will define a
signed Lattice measure on $\sigma$-algebras, as well as give the definition of
positive and negative Lattices. Furthermore, we will show that any countable
union of positive Lattices is a positive Lattice, as well as the following
proposition: if E is a Lattice such that 0 $<\nu(E)<\infty$, then there is a
positive Lattice A $\leq E$ with $\nu(A)>0$. Finally, we will show the Hahn
Decomposition Theorem for signed Lattice measures in the section titled Main
Result.
## 2\. Preliminaries : Extension of Lattice
In this section, we shall briefly review the well-known facts about lattice
theory (e.g. Birkhoff [1] ), propose an extension lattice, and investigate its
properties. (L,$\wedge$,$\vee$) is called a lattice if it is closed under
operations $\wedge$ and $\vee$ and satisfies, for any elements x,y,z in L:
(L1) the commutative law: x $\wedge$ y = y $\wedge$ x and x $\vee$ y = y
$\vee$ x
(L2) the associative law:
$\displaystyle x\wedge(y\wedge z)=(x\wedge y)\wedge z\ \ \text{and}\ \
x\vee(y\vee z)=(x\vee y)\vee z$
(L3) the absorption law: x $\vee$ ( y $\wedge$ x ) =x and x $\wedge$ ( y
$\vee$ x ) = x.
Hereinafter, the lattice (L,$\wedge$,$\vee$) will often be written as L for
simplicity.
A mapping h from a lattice L to another $L^{\prime}$ is called a lattice-
homomorphism, if it satisfies
$\displaystyle h(x\wedge y)=h(x)\wedge h(y)\ \ \text{and}\ \ h(x\vee
y)=h(x)\vee h(y),\forall x,y\in L.$
If h is a bijection, that is, if h is one-to-one and onto, it is called a
lattice-isomorphism; and in this case, $L^{\prime}$ is said to be lattice-
isomorphic to L.
A lattice (L,$\wedge$,$\vee$) is called distributive if, for any x,y,z in L,
(L4) the distributive law holds:
$\displaystyle x\vee(y\wedge z)=(x\vee y)\wedge(x\vee z)\ \ \text{and}\ \
x\wedge(y\vee z)=(x\wedge y)\vee(x\wedge z)$
A lattice L is called complete if, for any subset A of L, L contains the
supremum $\vee$ A and the infimum $\wedge$ A. If L is complete, then L itself
includes the maximum and minimum elements, which are often denoted by 1 and 0,
or I and O, respectively.
A distributive lattice is called a Boolean lattice if for any element x in L,
there exists a unique complement $x^{C}$ such that
$\displaystyle x\vee x^{C}=1\ \ \ \ \ \ \ \ \text{(L5) the law of excluded
middle}$ $\displaystyle x\wedge x^{C}=0\ \ \ \ \ \ \ \ \text{(L6) the law of
non-contradiction}.$
Let L be a lattice and $\cdot^{c}$: L $\rightarrow$ L be an operator. Then
$\cdot^{c}$ is called a lattice complement in L if the following conditions
are satisfied:
$\displaystyle\text{(L5) and (L6)};$ $\displaystyle\forall x\in L,\ x^{C}\vee
x=I\ \text{and}\ x^{C}\wedge x=0,$ $\displaystyle\text{(L7) the law of
contrapositive};$ $\displaystyle\forall x,y\in L,x\leq y\Rightarrow x^{C}\geq
y^{C},$ $\displaystyle\text{(L8) the law of double negation};$
$\displaystyle\forall x\in L,(x^{C})^{C}=x.$
###### Definition 1.
Complete Heyting algebra (cHa)
A complete lattice is called a complete Heyting algebra (cHa) if
$\displaystyle\vee_{i\in I}\ (x_{i}\wedge y)=(\vee_{i\in I}\ x_{i})\wedge y$
holds for $\forall x_{i},y\in L$ ($i\in I$); where I is an index set of
arbitrary cardinal number.
It is well known that for a set E, the power set P(E) = $2^{E}$, the set of
all subsets of E, is a Boolean algebra.
## 3\. Definitions and Lemmas
Throughout this paper, we will consider Lattices as complete Lattices which
obey (L1)-(L8) except for (L6) the law of non-contradiction.
###### Definition 2.
Unless otherwise stated, X is the entire set and $\mathcal{L}$ is a Lattice of
subsets of X. If a Lattice $\mathcal{L}$ satisfies the following conditions,
then it is called a Lattice $\sigma$-algebra:
(1) $\forall h\in\mathcal{L}$, $h^{C}\in\mathcal{L}$
(2) if $h_{n}\in\mathcal{L}$ for $n=1,2,3,\ldots$, then
$\vee^{\infty}h_{n}\in\mathcal{L}$.
We denote $\sigma(\mathcal{L})$ as the Lattice $\sigma$-Algebra generated by
$\mathcal{L}$.
###### Definition 3.
If m : $\sigma(\mathcal{L})$ $\mapsto$ $\mathbb{R}\cup\\{\infty\\}$ satisfies
the following properties, then m is called a Lattice measure on the Lattice
$\sigma$-Algebra $\sigma(\mathcal{L})$.
(1) m $(\emptyset)$ = m $(0)$ = 0.
(2) $\forall h,g\in\sigma(\mathcal{L})$ s.t. $m(h),m(g)\geq$ 0 : $h\leq g$
$\Rightarrow$ m $(h)\leq$ m $(g)$.
(3) $\forall h,g\in\sigma(\mathcal{L})$ : m$(h\vee g)+$ m$(h\wedge g)=$
m$(h)+$ m$(g)$.
(4) if $h_{n}\in\sigma(\mathcal{L})$, $n\in\mathbb{N}$, such that $h_{1}\leq
h_{2}\leq\cdots\leq h_{n}\leq\cdots$, then $m(\vee^{\infty}h_{n})=\lim
m(h_{n})$.
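As a concrete illustration of Definition 3, the properties can be checked on the simplest example: the Boolean lattice $2^{E}$ of a finite set with $\wedge=\cap$, $\vee=\cup$, and m given by set cardinality (the counting measure). The following sketch is only an illustration of the valuation property (3) and the monotonicity (2); the example set and all function names are our own choices and are not part of the paper.

```python
# Illustrative check of Definition 3 on the power-set lattice of a finite set,
# with meet = intersection, join = union, and m = cardinality (counting measure).
from itertools import combinations

E = {0, 1, 2}

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def m(h):                       # Lattice measure: counting measure
    return len(h)

lattice = powerset(E)
for h in lattice:
    for g in lattice:
        # property (3): m(h v g) + m(h ^ g) = m(h) + m(g)
        assert m(h | g) + m(h & g) == m(h) + m(g)
        # property (2): monotonicity for nonnegative values
        if h <= g:
            assert m(h) <= m(g)
print("properties (2) and (3) hold on 2^E with the counting measure")
```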
Let $m_{1}$ and $m_{2}$ be Lattice measures defined on the same Lattice
$\sigma$-algebra $\sigma(\mathcal{L})$. If one of them is finite, the set
function $m(E)=m_{1}(E)-m_{2}(E)$ , $E\in\sigma(\mathcal{L})$ is well defined
and countably additive on $\sigma(\mathcal{L})$. However, it is not
necessarily nonnegative; it is called a signed Lattice measure.
###### Definition 4.
By a signed Lattice measure on the measurable Lattice (X,
$\sigma(\mathcal{L})$) we mean $\nu$ : $\sigma(\mathcal{L})$ $\mapsto$
$\mathbb{R}\cup\\{\infty\\}$ or $\mathbb{R}\cup\\{-\infty\\}$, satisfying the
following property:
(1) $\nu(\emptyset)$ = $\nu(0)$ = 0.
(2) $\forall h,g\in\sigma(\mathcal{L})$ s.t. $\nu(h),\nu(g)\geq$ 0 : $h\leq g$
$\Rightarrow$ $\nu(h)\leq\nu(g)$.
$\forall h,g\in\sigma(\mathcal{L})$ s.t. $\nu(h),\nu(g)\leq$ 0 : $h\leq g$
$\Rightarrow$ $\nu(g)\leq\nu(h)$.
(3) $\forall h,g\in\sigma(\mathcal{L})$ : $\nu(h\vee g)+\nu(h\wedge
g)=\nu(h)+\nu(g)$.
(4) if $h_{n}\in\sigma(\mathcal{L})$, $n\in\mathbb{N}$, such that $h_{1}\leq
h_{2}\leq\cdots\leq h_{n}\leq\cdots$, then
$\nu(\vee^{\infty}h_{n})=\lim\nu(h_{n})$.
This is meant in the sense that if the left-hand side is finite, the limit on
the right-hand side is convergent, and if the left-hand side is $\pm\infty$,
then the limit on the right-hand side diverges accordingly.
###### Remark 1.
The signed Lattice measure is a Lattice measure when it takes only nonnegative
values. Thus, the signed Lattice measure is a generalization of the Lattice
measure.
###### Definition 5.
A is a positive Lattice if $\nu(E)\geq 0$ for any measurable Lattice $E\leq A$.
Similarly, B is a negative Lattice if $\nu(E)\leq 0$ for any measurable Lattice
$E\leq B$.
###### Lemma 1.
Every sublattice of a positive Lattice is a positive Lattice and any countable
union of positive Lattices is a positive Lattice.
###### Proof.
The first claim is clear. For the second claim, we first show that the union of
two positive Lattices is a positive Lattice. Let A, B be positive Lattices and
E $\leq$ A $\vee$ B be a measurable Lattice. Since $A\wedge B\wedge E\leq
B\wedge E$ and both lie below the positive Lattice B, (2) in Definition 4 gives
0 $\leq\nu(B\wedge E)-\nu(A\wedge B\wedge E)$. By distributivity and $E\leq
A\vee B$, the join of $A\wedge E$ and $B\wedge E$ is E and their meet is
$A\wedge B\wedge E$, so (3) gives $\nu(E)=\nu(A\wedge E)+\nu(B\wedge
E)-\nu(A\wedge B\wedge E)\geq 0$, because $\nu(A\wedge E)\geq 0$. Now by
induction, every finite union of positive Lattices is a positive Lattice. Let
$A_{n}$ be a positive Lattice for all n and E $\leq$ $\vee$ $A_{n}$ be a
measurable Lattice. Then $E_{m}$ := $E\wedge\vee_{n=1}^{m}A_{n}$ =
$\vee_{n=1}^{m}(E\wedge A_{n})$ is a measurable Lattice and a positive Lattice.
In particular, $E_{m}\leq E_{m+1}$ for all m and $E$ =
$\vee_{m=1}^{\infty}E_{m}$. Thus, by (4), 0 $\leq\lim\nu(E_{m})$ = $\nu(E)$.
Therefore $\vee$ $A_{n}$ is a positive Lattice.
∎
###### Lemma 2.
Let E be a measurable Lattice such that 0 $<\nu(E)<\infty$. Then there is a
positive Lattice A $\leq E$ with $\nu(A)>$ 0.
###### Proof.
If E is a positive Lattice, we take A=E. Otherwise, E contains a Lattice of
negative measure. Let $n_{1}$ be the smallest positive integer such that there
is a measurable Lattice $E_{1}\subset$ E with $\nu(E_{1})<-\frac{1}{n_{1}}$.
Proceeding inductively, if E$\wedge\wedge_{j=1}^{k-1}E_{j}^{C}$ is not already
a positive Lattice, let $n_{k}$ be the smallest positive integer for which
there is a measurable Lattice $E_{k}$ such that $E_{k}\leq
E\wedge\wedge_{j=1}^{k-1}E_{j}^{C}$ and $\nu(E_{k})<-\frac{1}{n_{k}}$.
Let A = $(\vee E_{k})^{C}$.
Then $\nu(E)$ = $\nu(E\wedge A)+\nu(E\wedge\vee E_{k})$ = $\nu(E\wedge
A)+\nu(\vee E_{k})$. Since $\nu(E)$ is finite,
$\lim_{n\rightarrow\infty}\nu(\vee^{n}E_{k})$ is finite and $\nu(\vee
E_{k})\leq$ 0. Since $\nu(E)>$ 0 and $\nu(\vee E_{k})\leq$ 0, $\nu(E\wedge
A)>$ 0.
We will show that A is a positive Lattice. Let $\epsilon>$ 0. Since
$\frac{1}{n_{k}}\rightarrow$ 0, we may choose k such that $-\frac{1}{n_{k}-1}\
>\ -\epsilon$. Thus A contains no measurable Lattice of measure less than
$-\epsilon$. Since $\epsilon$ was an arbitrary positive number, it follows
that A can contain no Lattice of negative measure and so must be a positive
Lattice.
∎
## 4\. Main result: Lattice Hahn Decomposition
Without loss of generality, let’s omit + $\infty$ as a value of $\nu$. Let
$\lambda$ = $\sup\\{\nu(A):A$ is a positive Lattice $\\}$.
Then $\lambda\geq$ 0 since $\nu(\emptyset)$ = 0.
Let $A_{i}$ be a sequence of positive Lattices such that $\lambda$ =
$\lim\nu(A_{i})$ and $A$ = $\vee A_{i}$. By Lemma 1, A is a positive Lattice,
so $\lambda\geq\nu(A)$. On the other hand, $\vee^{n}A_{i}\leq A$ and
$\nu(\vee^{n}A_{i})\geq\nu(A_{n})\geq$ 0 for any n, so by (4),
$\nu(A)=\lim_{n}\nu(\vee^{n}A_{i})\geq\lim\nu(A_{n})=\lambda$. Thus
$\lambda=\nu(A)<\infty$.
Let $E\leq A^{C}$ be a positive Lattice. Then $\nu(E)\geq$ 0 and $A\vee E$ is
a positive Lattice. Thus $\lambda\geq$ $\nu(A\vee E)=\nu(A)+\nu(E)-\nu(A\wedge
E)$ = $\lambda+\nu(E)-\nu(A\wedge E)$. Thus $\nu(E)=\nu(A\wedge E)$. We have
that $\nu(E)$ = 0 since $A\wedge E\leq A\wedge A^{C}$ and $\nu(A\wedge A^{C})$
= 0.
Thus, $A^{C}$ contains no positive sublattice of positive measure and hence no
sublattice of positive measure by Lemma 2. Consequently, $A^{C}$ is a negative
Lattice.
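In the classical special case where the Lattice is the power set of a finite set X and $\nu$ is determined by finitely many point masses, the decomposition obtained above can be computed directly: A collects the points of nonnegative mass and $B=A^{C}$ the rest. The sketch below is a brute-force illustration of this special case only, not of the general Lattice argument; the example data and all names are our own choices.

```python
# Hahn decomposition in the classical special case: X finite, nu determined by
# point masses, so nu(S) = sum of masses over S.  A collects the points of
# nonnegative mass; every subset of A then has nu >= 0 and every subset of
# B = X \ A has nu <= 0, i.e. A is positive and B is negative.
from itertools import chain, combinations

X = ["a", "b", "c", "d"]
mass = {"a": 2.0, "b": -1.5, "c": 0.5, "d": -0.25}

def nu(S):
    return sum(mass[x] for x in S)

A = {x for x in X if mass[x] >= 0}
B = set(X) - A

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

assert all(nu(S) >= 0 for S in subsets(A))   # A is a positive set
assert all(nu(S) <= 0 for S in subsets(B))   # B is a negative set
print("A =", A, " B =", B, " nu(A) =", nu(A), " nu(B) =", nu(B))
```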
## 5\. Conclusion
Let X be the entire set. Then, by the previous theorem, we obtain a positive
Lattice A and a negative Lattice B (= $A^{C}$). By the Lattice measurability
of $\nu$, $\nu(A\wedge A^{C})$ = 0, and $A\vee A^{C}$ = X. These
characteristics correspond to X = $A\cup B$ and $A\cap B=\emptyset$ in the
classical set sense.
## References
* [1] G. Birkhoff, Lattice Theory, 3rd ed., AMS Colloquium Publications, Providence, RI, 1967.
* [2] N. Dunford and J.T. Schwartz, Linear Operators Part 1: General Theory, Wiley Interscience Publication, 1988.
* [3] P.R. Halmos, Measure Theory (Springer, New York, 1974).
* [4] Z. Qiang, Further discussion on the Hahn decomposition theorem for signed fuzzy measure, Fuzzy Sets and Systems 70 (1995) 89-95.
* [5] Z. Qiang and L. Ke, Decomposition of Revised Monotone Signed Fuzzy Measure, Tsinghua Science and Technology, Vol. 8, No. 1, February 2003, pp. 60-64.
* [6] M. Sahin, On Caratheodory Extension Theorem on Fuzzy Measure Spaces, Far East J. Math. Sci. 26(2) (2007) pp. 311-317.
* [7] C. Traina, Outer Measures Associated With Lattice Measures and Their Application, Internat. J. Math. and Math. Sci. Vol. 18, No. 4 (1995) 725-734.
* [8] J. Tanaka, Hahn Decomposition Theorem of Signed Fuzzy Measure, Advances in Fuzzy Sets and Systems, Vol. 3, No. 3 (2008) 315-323.
* [9] L. Xuecheng, Hahn Decomposition theorem for infinite signed fuzzy measure, Fuzzy Sets and Systems 57 (1993) 377-380.
|
arxiv-papers
| 2009-05-31T12:03:24 |
2024-09-04T02:49:03.029247
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jun Tanaka",
"submitter": "Jun Tanaka",
"url": "https://arxiv.org/abs/0906.0147"
}
|
0906.0211
|
# Equations of States in Statistical Learning
for a Nonparametrizable and Regular Case
Sumio Watanabe
PI Lab., Tokyo Institute of Technology,
Mailbox R2-5, 4259, Midori-ku, Yokohama, 226-8503 Japan
(E-mail) swatanab(AT)pi.titech.ac.jp
###### Abstract
Many learning machines that have hierarchical structure or hidden variables
are now being used in information science, artificial intelligence, and
bioinformatics. However, several learning machines used in such fields are not
regular but singular statistical models, hence their generalization
performance is still left unknown. To overcome these problems, in the previous
papers, we proved new equations in statistical learning, by which we can
estimate the Bayes generalization loss from the Bayes training loss and the
functional variance, on the condition that the true distribution is a
singularity contained in a learning machine. In this paper, we prove that the
same equations hold even if a true distribution is not contained in a
parametric model. Also, we prove that the proposed equations in a regular case
are asymptotically equivalent to the Takeuchi information criterion.
Therefore, the proposed equations are always applicable without any condition
on the unknown true distribution.
## 1 Introduction
Nowadays, a lot of learning machines are being used in information science,
artificial intelligence, and bioinformatics. However, several learning
machines used in such fields, for example, three-layer neural networks, hidden
Markov models, normal mixtures, binomial mixtures, Boltzmann machines, and
reduced rank regressions have hierarchical structure or hidden variables, with
the result that the mapping from the parameter to the probability distribution
is not one-to-one. In such learning machines, it was pointed out that the
maximum likelihood estimator is not subject to the normal distribution [5, 4,
6, 2], and that the a posteriori distribution cannot be approximated by any
Gaussian distribution [11, 13, 14, 15]. Hence the conventional statistical
methods for model selection, hypothesis test, and hyperparameter optimization
are not applicable to such learning machines. In other words, we have not yet
established the theoretical foundation for learning machines which extract
hidden structures from random samples.
In statistical learning theory, we study the problem of learning and
generalization based on several assumptions. Let $q(x)$ be a true probability
density function and $p(x|w)$ be a learning machine, which is represented by a
probability density function of $x$ for a parameter $w$. In this paper, we
examine the following two assumptions.
(1) The first is the parametrizability condition. A true distribution $q(x)$
is said to be parametrizable by a learning machine $p(x|w)$, if there is a
parameter $w_{0}$ which satisfies $q(x)=p(x|w_{0})$. If otherwise, it is
called nonparametrizable.
(2) The second is the regularity condition. A true distribution $q(x)$ is said
to be regular for a learning machine $p(x|w)$, if the parameter $w_{0}$ that
minimizes the log loss function
$L(w)=-\int q(x)\log p(x|w)dx$ (1)
is unique and if the Hessian matrix $\nabla^{2}L(w_{0})$ is positive definite.
If a true distribution is not regular for a learning machine, then it is said
to be singular.
In the study of layered neural networks and normal mixtures, both conditions are
important. In fact, if a learning machine is redundant compared to a true
distribution, then the true distribution is parametrizable and singular. Or if
a learning machine is too simple to approximate a true distribution, then the
true distribution is nonparametrizable and regular. In practical applications,
we need a method to determine the optimal learning machine; therefore, a
general formula is desirable by which the generalization loss can be estimated
from the training loss without regard to such conditions.
In the previous papers [18, 19, 20, 21, 22], we studied a case when a true
distribution is parametrizable and singular, and proved new formulas which
enable us to estimate the generalization loss from the training loss and the
functional variance. Since the new formulas hold for an arbitrary set of a
true distribution, a learning machine, and an a priori distribution, they are
called equations of states in statistical estimation. However, it has not been
clarified whether they hold or not in a nonparametrizable case.
In this paper, we study the case when a true distribution is nonparametrizable
and regular, and prove that the same equations of states also hold. Moreover,
we show that, in a nonparametrizable and regular case, the equations of states
are asymptotically equivalent to the Takeuchi information criterion (TIC) for
the maximum likelihood method. Here TIC was derived for the model selection
criterion in the case when the true distribution is not contained in a
statistical model [10]. The network information criterion [7] was devised by
generalizing it to an arbitrary loss function in the regular case.
If a true distribution is singular for a learning machine, TIC is ill-defined,
whereas the equations of states are well-defined and equal to the average
generalization losses. Therefore, equations of states can be understood as the
generalized version of TIC from the maximum likelihood method in a regular
case to Bayes method for regular and singular cases.
This paper consists of seven sections. In Section 2, we summarize the framework
of Bayes learning and the results of previous papers. In Section 3, we show
the main results of this paper. In Section 4, some lemmas are prepared which
are used in the proofs of the main results; the proofs of the lemmas are given
in the Appendix. In Section 5, we prove the main theorems. In Sections 6 and 7,
we discuss and conclude this paper.
## 2 Background
In this section, we summarize the background of the paper.
### 2.1 Bayes learning
Firstly we introduce the framework of Bayes and Gibbs estimations, which is
well known in statistics and learning theory.
Let $N$ be a natural number and ${\bf R}^{N}$ be the $N$-dimensional Euclidean
space. Assume that an information source is given by a probability density
function $q(x)$ on ${\bf R}^{N}$ and that random samples
$X_{1},X_{2},...,X_{n}$ are independently subject to the probability
distribution $q(x)dx$. Sometimes $X_{1},X_{2},..,X_{n}$ are said to be
training samples and the information source $q(x)$ is called a true
probability density function. In this paper we use notations for a given
function $g(x)$,
$\displaystyle E_{X}[g(X)]$ $\displaystyle=$ $\displaystyle\int g(x)q(x)dx,$
$\displaystyle E_{j}^{(n)}[g(X_{j})]$ $\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{j=1}^{n}g(X_{j}).$
Note that the expectation $E_{X}[g(X)]$ is given by integration with respect to
the true distribution, whereas the empirical expectation $E_{j}^{(n)}[g(X_{j})]$
can be calculated using random samples.
We study a learning machine $p(x|w)$ of $x\in{\bf R}^{N}$ for a given
parameter $w\in{\bf R}^{d}$. Let $\varphi(w)$ be an a priori probability
density function on ${\bf R}^{d}$. The expectation operator $E_{w}[\;\;]$ by
the a posteriori probability distribution with the inverse temperature
$\beta>0$ for a given function $g(w)$ is defined by
$E_{w}[g(w)]=\frac{1}{Z(\beta)}\int
g(w)\varphi(w)\prod_{i=1}^{n}p(X_{i}|w)^{\beta}\;dw,$
where $Z(\beta)$ is the normalizing constant. The Bayes generalization loss
$B_{g}$, the Bayes training loss $B_{t}$, the Gibbs generalization loss
$G_{g}$, and the Gibbs training loss $G_{t}$ are respectively defined by
$\displaystyle B_{g}$ $\displaystyle=$ $\displaystyle-E_{X}[\;\log
E_{w}[p(X|w)]\;],$ $\displaystyle B_{t}$ $\displaystyle=$ $\displaystyle-
E_{j}^{(n)}[\;\log E_{w}[p(X_{j}|w)]\;],$ $\displaystyle G_{g}$
$\displaystyle=$ $\displaystyle-E_{X}[\;E_{w}[\log p(X|w)]\;],$ $\displaystyle
G_{t}$ $\displaystyle=$ $\displaystyle-E_{j}^{(n)}[\;E_{w}[\log
p(X_{j}|w)]\;].$
The functional variance $V$ is defined by
$V=n\times E_{j}^{(n)}\\{E_{w}[\;(\log p(X_{j}|w))^{2}\;]-E_{w}[\log
p(X_{j}|w)]^{2}\\}.$
The concept of the functional variance was first proposed in the papers [18,
19, 20, 21]. In this paper, we show that the functional variance plays an
important role in learning theory. Note that $B_{g}$, $B_{t}$, $G_{g}$,
$G_{t}$, and $V$ are random variables because $E_{w}[\;\;]$ depends on the
random samples. Let $E[\;\;]$ denote the expectation value over all sets of
training samples. Then $E[B_{g}]$ and $E[B_{t}]$ are respectively called the
average Bayes generalization and training errors, and $E[G_{g}]$ and $E[G_{t}]$
the average Gibbs ones.
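As a concrete illustration, $B_{t}$, $G_{t}$, and the functional variance $V$ can be evaluated directly from a set of posterior samples, since they involve only empirical expectations over the training data and posterior averages; $B_{g}$ and $G_{g}$ additionally require the true distribution $q(x)$. The sketch below is our own toy setup, not part of the paper: a one-dimensional Gaussian location model with $\beta=1$, for which the posterior is available in closed form.

```python
# Toy illustration (our own example, beta = 1): Gaussian location model
# p(x|w) = N(x | w, 1) with prior N(0, 10^2).  The posterior is Gaussian, so we
# can draw exact posterior samples and evaluate B_t, G_t and the functional
# variance V from them; B_g would additionally require the true density q(x).
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(loc=0.3, scale=1.3, size=n)        # true q(x); scale != 1, so nonparametrizable

# exact posterior of w (conjugate Gaussian), beta = 1
prior_var = 10.0 ** 2
post_var = 1.0 / (n + 1.0 / prior_var)
post_mean = post_var * X.sum()
W = rng.normal(post_mean, np.sqrt(post_var), size=5000)   # posterior samples {w_k}

def log_p(x, w):                                   # log N(x | w, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - w) ** 2

L = log_p(X[:, None], W[None, :])                  # n x K matrix of log p(X_j | w_k)

B_t = -np.mean(np.log(np.mean(np.exp(L), axis=1)))                   # Bayes training loss
G_t = -np.mean(np.mean(L, axis=1))                                   # Gibbs training loss
V = n * np.mean(np.mean(L ** 2, axis=1) - np.mean(L, axis=1) ** 2)   # functional variance

print(f"B_t = {B_t:.4f}, G_t = {G_t:.4f}, V = {V:.4f}")
```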
In theoretical analysis, we assume some conditions on a true distribution and
a learning machine. If there exists a parameter $w_{0}$ such that
$q(x)=p(x|w_{0})$, then the true distribution is said to be parametrizable. If
otherwise, nonparametrizable. In both cases, we define $w_{0}$ as the
parameter that minimizes the log loss function $L(w)$ in eq.(1). Note that
$w_{0}$ is equal to the parameter that minimizes the Kullback-Leibler distance
from the true distribution to the parametric model. If $w_{0}$ is unique and
if the Hessian matrix
$\frac{\partial^{2}}{\partial w_{j}\partial w_{k}}L(w_{0})$
is positive definite, then the true distribution is said to be regular for a
learning machine. Remark. Several learning machines such as a layered neural
network or a normal mixture have natural nonidentifiability by the symmetry of
a parameter. For example, in a normal mixture,
$p(x|a,b,c)=\frac{a}{\sqrt{2\pi}}\;e^{-|x-b|^{2}/2}+\frac{1-a}{\sqrt{2\pi}}\;e^{-|x-c|^{2}/2},$
the two parameters $(a,b,c)$ and $(1-a,c,b)$ give the same
probability distribution, hence the parameter $w_{0}$ that minimizes $L(w)$ is
not unique for any true distribution. In a parametrizable and singular case,
such nonidentifiability strongly affects learning [11, 13]. However, in a
nonparametrizable and regular case, the a posteriori distribution in the
neighborhood of each optimal parameter has the same form, so we can assume that
$w_{0}$ is unique without loss of generality.
### 2.2 Notations
Secondly, we explain some notations.
For given scalar functions $f(w)$ and $g(w)$, the vector $\nabla f(w)$ and two
matrices $\nabla f(w)\nabla g(w)$ and $\nabla^{2}f(w)$ are respectively
defined by
$\displaystyle(\nabla f(w))_{j}$ $\displaystyle=$ $\displaystyle\frac{\partial
f(w)}{\partial w_{j}},$ $\displaystyle(\nabla f(w)\nabla g(w))_{jk}$
$\displaystyle=$ $\displaystyle\frac{\partial f(w)}{\partial
w_{j}}\frac{\partial g(w)}{\partial w_{k}},$
$\displaystyle(\nabla^{2}f(w))_{jk}$ $\displaystyle=$
$\displaystyle\frac{\partial^{2}f(w)}{\partial w_{j}\partial w_{k}}.$
Let $n$ be the number of training samples. For a given constant $\alpha$, we
use the following notations.
(1) $Y_{n}=O_{p}(n^{\alpha})$ shows that a random variable $Y_{n}$ satisfies
$|Y_{n}|\leq Cn^{\alpha}$ with some random variable $C\geq 0$.
(2) $Y_{n}=o_{p}(n^{\alpha})$ shows that a random variable $Y_{n}$ satisfies
the convergence in probability $|Y_{n}|/n^{\alpha}\rightarrow 0$.
(3) $y_{n}=O(n^{\alpha})$ shows that a sequence $y_{n}$ satisfies $|y_{n}|\leq
Cn^{\alpha}$ with some constant $C\geq 0$.
(4) $y_{n}=o(n^{\alpha})$ shows that a sequence $y_{n}$ satisfies the
convergence $|y_{n}|/n^{\alpha}\rightarrow 0$. Remark. For a sequence of
random variables, proving convergence in probability or convergence in law
requires technical mathematical procedures. If we adopted a completely rigorous
procedure in the proofs, many readers in information science might not find the
essential points of the theorems; for such a treatment, see [18, 21, 22].
Therefore, in this paper, we adopt a level of mathematical rigor that is
natural and appropriate from the viewpoint of the mathematical sciences. The
notations $O_{p}$ and $o_{p}$ are very useful and understandable for such a
purpose.
### 2.3 Parametrizable and singular case
Thirdly, we introduce the results of the previous researches [18, 19, 20, 21].
We do not prove these results in this paper.
Assume that a true distribution is parametrizable. Even if the true
distribution is singular for a learning machine,
$\displaystyle E[B_{g}]$ $\displaystyle=$ $\displaystyle
S_{0}+\frac{\lambda_{0}-\nu_{0}}{n\beta}+\frac{\nu_{0}}{n}+o(\frac{1}{n}),$
(2) $\displaystyle E[B_{t}]$ $\displaystyle=$ $\displaystyle
S_{0}+\frac{\lambda_{0}-\nu_{0}}{n\beta}-\frac{\nu_{0}}{n}+o(\frac{1}{n}),$
(3) $\displaystyle E[G_{g}]$ $\displaystyle=$ $\displaystyle
S_{0}+\frac{\lambda_{0}}{n\beta}+\frac{\nu_{0}}{n}+o(\frac{1}{n}),$ (4)
$\displaystyle E[G_{t}]$ $\displaystyle=$ $\displaystyle
S_{0}+\frac{\lambda_{0}}{n\beta}-\frac{\nu_{0}}{n}+o(\frac{1}{n}),$ (5)
$\displaystyle E[V]$ $\displaystyle=$
$\displaystyle\frac{2\nu_{0}}{\beta}+o(1),$ (6)
where $S_{0}$ is the entropy of the true probability density function $q(x)$,
$S_{0}=-\int q(x)\log q(x)dx.$
The constants $\lambda_{0}$ and $\nu_{0}$ are respectively the generalized log
canonical threshold and the singular fluctuation, which are birational
invariants. The concrete values of them can be derived by using algebraic
geometrical transformation called resolution of singularities. By eliminating
$\lambda_{0}$ and $\nu_{0}$ from eq.(2)-eq.(6),
$\displaystyle E[B_{g}]$ $\displaystyle=$ $\displaystyle
E[B_{t}]+(\beta/n)E[V]+o(\frac{1}{n}),$ (7) $\displaystyle E[G_{g}]$
$\displaystyle=$ $\displaystyle E[G_{t}]+(\beta/n)E[V]+o(\frac{1}{n}),$ (8)
hold, which are called equations of states in learning, because these
relations hold for an arbitrary set of a true distribution, a learning
machine, and an a priori distribution. By this relation, we can estimate the
generalization loss using the training loss and the functional variance.
However, it has been left unknown whether the equations of states, eq.(7) and
eq.(8), hold or not in nonparametrizable cases.
## 3 Main Results
In this section, we describe the main results of this paper. The proofs of
theorems are given in Section 5.
### 3.1 Equations of states
In this paper, we study the case when a true distribution is nonparametrizable
and regular. Three constants $S$, $\lambda$, and $\nu$ are respectively
defined by the following equations. Let $w_{0}$ be the unique parameter that
minimizes $L(w)$. Three constants are defined by
$\displaystyle S$ $\displaystyle=$ $\displaystyle L(w_{0}),$ (9)
$\displaystyle\lambda$ $\displaystyle=$ $\displaystyle\frac{d}{2},$ (10)
$\displaystyle\nu$ $\displaystyle=$
$\displaystyle\frac{1}{2}\mbox{tr}(IJ^{-1}),$ (11)
where $d$ is the dimension of the parameter, and $I$ and $J$ are $d\times d$
matrices defined by
$\displaystyle I$ $\displaystyle=$ $\displaystyle\int\nabla\log
p(x|w_{0})\nabla\log p(x|w_{0})q(x)dx,$ (12) $\displaystyle J$
$\displaystyle=$ $\displaystyle-\int\nabla^{2}\log p(x|w_{0})q(x)dx.$ (13)
###### Theorem 1
Assume that a true distribution $q(x)$ is nonparametrizable and regular for a
learning machine $p(x|w)$. Then
$\displaystyle E[B_{g}]$ $\displaystyle=$ $\displaystyle
S+\frac{\lambda-\nu}{n\beta}+\frac{\nu}{n}+o(\frac{1}{n}),$ (14)
$\displaystyle E[B_{t}]$ $\displaystyle=$ $\displaystyle
S+\frac{\lambda-\nu}{n\beta}-\frac{\nu}{n}+o(\frac{1}{n}),$ (15)
$\displaystyle E[G_{g}]$ $\displaystyle=$ $\displaystyle
S+\frac{\lambda}{n\beta}+\frac{\nu}{n}+o(\frac{1}{n}),$ (16) $\displaystyle
E[G_{t}]$ $\displaystyle=$ $\displaystyle
S+\frac{\lambda}{n\beta}-\frac{\nu}{n}+o(\frac{1}{n}),$ (17) $\displaystyle
E[V]$ $\displaystyle=$ $\displaystyle\frac{2\nu}{\beta}+o(1).$ (18)
Therefore, equations of states hold,
$\displaystyle E[B_{g}]$ $\displaystyle=$ $\displaystyle
E[B_{t}]+(\beta/n)E[V]+o(\frac{1}{n}),$ (19) $\displaystyle E[G_{g}]$
$\displaystyle=$ $\displaystyle E[G_{t}]+(\beta/n)E[V]+o(\frac{1}{n}).$ (20)
Proof of this theorem is given in Section 5. Note that constants are different
between the parametrizable and nonparametrizable cases, that is to say, $S\neq
S_{0}$, $\lambda\neq\lambda_{0}$, and $\nu\neq\nu_{0}$. However, the same
equations of states still hold. In fact, eq.(19) and eq.(20) are identical in
form to eq.(7) and eq.(8), respectively.
By combining the results of the previous papers with the new result in Theorem
1, it is ensured that the equations of states are applicable to an arbitrary set
of a true distribution, a learning machine, and an a priori distribution,
without regard to the condition on the unknown true distribution. Remark. If a
true distribution is parametrizable and regular, then $I=J$, hence
$\lambda=\nu=d/2$. If otherwise, $I\neq J$ in general. Note that $J$ is
positive definite by the assumption, but that $I$ may not be positive definite
in general.
### 3.2 Comparison of TIC with the equations of states
If the maximum likelihood method is employed, or equivalently if
$\beta=\infty$, then $B_{g}$ and $B_{t}$ are respectively equal to the
generalization and training losses of the maximum likelihood method. It was
proved in [10] that
$E[B_{g}]=E[B_{t}]+\frac{TIC}{n}+o(\frac{1}{n})\;\;\;(\beta=\infty),$ (21)
where
$TIC=\mbox{tr}(I(w_{0})J(w_{0})^{-1}).$
On the other hand, the equations of states, eq.(19) in Theorem 1 show that,
$E[B_{g}]=E[B_{t}]+\frac{E[\beta
V]}{n}+o(\frac{1}{n}).\;\;\;(0<\beta<\infty),$ (22)
Therefore, in this subsection, let us compare $\beta V$ with $TIC$ in the
nonparametrizable and regular case.
Let $L_{n}(w)$ be the empirical log loss function
$L_{n}(w)=-E_{j}^{(n)}[\log p(X_{j}|w)]-\frac{1}{n\beta}\log\varphi(w).$
Three matrices are defined by
$\displaystyle I_{n}(w)$ $\displaystyle=$ $\displaystyle
E_{j}^{(n)}[\nabla\log p(X_{j}|w)\nabla\log p(X_{j}|w)],$ (23) $\displaystyle
J_{n}(w)$ $\displaystyle=$ $\displaystyle-E_{j}^{(n)}[\nabla^{2}\log
p(X_{j}|w)],$ (24) $\displaystyle K_{n}(w)$ $\displaystyle=$
$\displaystyle\nabla^{2}L_{n}(w).$ (25)
In practical applications, instead of $TIC$, the empirical TIC is employed,
$TIC_{n}=\mbox{tr}(I_{n}(w_{MLE})J_{n}(w_{MLE})^{-1}),$
where $w_{MLE}$ is the maximum likelihood estimator. Then by using the
convergence in probability $w_{MLE}\rightarrow w_{0}$,
$E[TIC_{n}]=TIC+o(1).$
On the other hand, we have shown in Theorem 1,
$E[\beta V]=TIC+o(1).$
Hence let us compare $\beta V$ with $TIC_{n}$ as random variables.
###### Theorem 2
Assume that $q(x)$ is nonparametrizable and regular for a learning machine
$p(x|w)$. Then
$\displaystyle TIC_{n}$ $\displaystyle=$ $\displaystyle
TIC+O_{p}(\frac{1}{\sqrt{n}}),$ $\displaystyle\beta V$ $\displaystyle=$
$\displaystyle TIC+O_{p}(\frac{1}{\sqrt{n}}),$ $\displaystyle\beta V$
$\displaystyle=$ $\displaystyle TIC_{n}+O_{p}(\frac{1}{n}).$
Proof of this theorem is given in Section 5. Theorem 2 shows that the
difference between $\beta V$ and $TIC_{n}$ is of smaller order than their
variances. Therefore, if a true distribution is nonparametrizable and
regular for a learning machine, then the equations of states are
asymptotically equivalent to the empirical TIC. If a true distribution is
singular or if the number of training samples is not so large, then the
empirical TIC and the equations of states are not equivalent in general.
Hence the equations of states are applicable more widely than TIC.
Experimental analysis for the equations of states was reported in [18, 19,
20]. The main purpose of this paper is to prove Theorems 1 and 2. Its
application to practical problems is a topic for future study.
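As a rough numerical illustration of Theorem 2, one can compare $\beta V$ with the empirical $TIC_{n}=\mbox{tr}(I_{n}(w_{MLE})J_{n}(w_{MLE})^{-1})$ on a simple misspecified model. The sketch below uses the Gaussian location model that also appears in the example of Section 6; the data-generating distribution, the prior, and all variable names are our own choices, and the sketch is a plausibility check only, not the experimental analysis reported in the papers cited above.

```python
# Rough check of Theorem 2 (our own toy setup): for p(x|w) = N(x | w, 1) and a
# misspecified true distribution, TIC_n = tr(I_n J_n^{-1}) at the MLE should be
# close to beta * V computed from the posterior, the difference being O_p(1/n).
import numpy as np

rng = np.random.default_rng(1)
n, beta = 500, 1.0
X = rng.normal(0.0, 1.5, size=n)                  # true variance != 1: nonparametrizable

w_mle = X.mean()                                   # MLE of the location parameter
I_n = np.mean((X - w_mle) ** 2)                    # empirical Fisher-type matrix (1x1)
J_n = 1.0                                          # -d^2/dw^2 log p = 1 for this model
TIC_n = I_n / J_n

# posterior with a flat-ish prior N(0, 100^2) and inverse temperature beta
post_var = 1.0 / (beta * n + 1.0 / 100.0 ** 2)
post_mean = post_var * beta * X.sum()
W = rng.normal(post_mean, np.sqrt(post_var), size=20000)

logp = -0.5 * np.log(2 * np.pi) - 0.5 * (X[:, None] - W[None, :]) ** 2
V = n * np.mean(np.var(logp, axis=1))
print(f"TIC_n = {TIC_n:.4f}, beta*V = {beta * V:.4f}")  # both close to Var(X) = 2.25
```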
## 4 Preparation of Proof
In this section, we summarize the basic properties which are used in the
proofs of main theorems.
### 4.1 Maximum a posteriori estimator
Firstly, we study the asymptotic behavior of the maximum a posteriori
estimator. By the definition, for each $w$,
$K_{n}(w)=J_{n}(w)+O(\frac{1}{n}).$
By the central limit theorem, for each $w$,
$\displaystyle I_{n}(w)$ $\displaystyle=$ $\displaystyle
I(w)+O_{p}(\frac{1}{\sqrt{n}}),$ (26) $\displaystyle J_{n}(w)$
$\displaystyle=$ $\displaystyle J(w)+O_{p}(\frac{1}{\sqrt{n}}),$ (27)
$\displaystyle K_{n}(w)$ $\displaystyle=$ $\displaystyle
J(w)+O_{p}(\frac{1}{\sqrt{n}}).$ (28)
The parameter that minimizes $L_{n}(w)$ is denoted by $\hat{w}$ and is called
the maximum a posteriori (MAP) estimator. If $\beta=1$, then it is equal to the
conventional maximum a posteriori estimator. If $\beta=\infty$, or equivalently
$1/\beta=0$, then it is the maximum likelihood estimator (MLE), which is
denoted by $w_{MLE}$.
Let us summarize the basic properties of the maximum a posteriori estimator.
Because $w_{0}$ and $\hat{w}$ minimize $L(w)$ and $L_{n}(w)$ respectively,
$\displaystyle\nabla L(w_{0})$ $\displaystyle=$ $\displaystyle 0,$ (29)
$\displaystyle\nabla L_{n}(\hat{w})$ $\displaystyle=$ $\displaystyle 0.$ (30)
By the assumption that $w_{0}$ is unique and the matrix $J$ is positive
definite, the consistency of $\hat{w}$ holds under natural conditions; in other
words, the convergence in probability $\hat{w}\rightarrow w_{0}$
($n\rightarrow\infty$) holds for $0<\beta\leq\infty$. In this paper, we assume
this consistency condition.
From eq.(30), there exists $w_{\beta}^{*}$ which satisfies
$\nabla L_{n}(w_{0})+\nabla^{2}L_{n}(w_{\beta}^{*})(\hat{w}-w_{0})=0$ (31)
and
$|w_{\beta}^{*}-w_{0}|\leq|\hat{w}-w_{0}|,$
where $|\cdot|$ denotes the norm of ${\bf R}^{d}$. By using the definition
$K_{n}(w_{\beta}^{*})=\nabla^{2}L_{n}(w_{\beta}^{*})$,
$\hat{w}-w_{0}=-K_{n}(w_{\beta}^{*})^{-1}\nabla L_{n}(w_{0}).$ (32)
By using the law of large numbers and the central limit theorem,
$K_{n}(w_{\beta}^{*})$ converges to $J$ in probability and $\sqrt{n}\;\nabla
L_{n}(w_{0})$ converges in law to the normal distribution with average 0 and
covariance matrix $I$. Therefore
$\sqrt{n}\;(\hat{w}-w_{0})$
converges in law to the normal distribution with average 0 and covariance
matrix $J^{-1}IJ^{-1}$, with the result that
$E[(\hat{w}-w_{0})(\hat{w}-w_{0})^{T}]=\frac{J^{-1}IJ^{-1}}{n}+o(\frac{1}{n}),$
(33)
for $0<\beta\leq\infty$, where $(\;\;)^{T}$ denotes the transposed vector. In
other words,
$\hat{w}=w_{0}+O_{p}(\frac{1}{\sqrt{n}}).$ (34)
Hence,
$K_{n}(w_{\beta}^{*})=J(w_{0})+O_{p}(\frac{1}{\sqrt{n}}).$
By using eq.(32),
$\hat{w}-w_{MLE}=\Bigl{(}K_{n}(w_{\infty}^{*})^{-1}-K_{n}(w_{\beta}^{*})^{-1}\Bigr{)}\nabla
L_{n}(w_{0}).$
Since $\nabla L_{n}(w_{0})=O_{p}(1/\sqrt{n})$ and $J(w_{0})$ is positive
definite, we have
$w_{MLE}=\hat{w}+O_{p}(\frac{1}{n}).$ (35)
### 4.2 Expectations by a posteriori distribution
Secondly, the behavior of the a posteriori distribution is described as
follows.
For a given function $g(w)$, the average by the a posteriori distribution is
defined by
$E_{w}[g(w)]=\frac{\int g(w)\exp(-n\beta L_{n}(w))dw}{\int\exp(-n\beta
L_{n}(w))dw}.$
Then we can prove the following relations.
###### Lemma 1
$\displaystyle E_{w}[(w-\hat{w})]$ $\displaystyle=$ $\displaystyle
O_{p}(\frac{1}{n}),$ (36) $\displaystyle E_{w}[(w-\hat{w})(w-\hat{w})^{T}]$
$\displaystyle=$
$\displaystyle\frac{K_{n}(\hat{w})^{-1}}{n\beta}+O_{p}(\frac{1}{n^{2}}),$ (37)
$\displaystyle E_{w}[(w-\hat{w})_{i}(w-\hat{w})_{j}(w-\hat{w})_{k}]$
$\displaystyle=$ $\displaystyle O_{p}(\frac{1}{n^{2}}),$ (38) $\displaystyle
E_{w}[|w-\hat{w}|^{m}]$ $\displaystyle=$ $\displaystyle
O_{p}(\frac{1}{n^{m/2}})\;\;\;(m\geq 1).$ (39)
Moreover,
$\displaystyle EE_{w}[(w-w_{0})(w-w_{0})^{T}]$ $\displaystyle=$
$\displaystyle\frac{J^{-1}IJ^{-1}}{n}+\frac{J^{-1}}{n\beta}+o(\frac{1}{n}).$
(40) $\displaystyle EE_{w}[|w-w_{0}|^{3}]$ $\displaystyle=$ $\displaystyle
o(\frac{1}{n}).$ (41)
For the proof of this lemma, see Appendix.
Let us introduce a log density ratio function $f(x,w)$ by
$f(x,w)=\log\frac{p(x|w_{0})}{p(x|w)}.$
Then $f(x,w_{0})\equiv 0$ and
$\displaystyle\nabla f(x,w)$ $\displaystyle=$ $\displaystyle-\nabla\log
p(x|w),$ $\displaystyle\nabla^{2}f(x,w)$ $\displaystyle=$
$\displaystyle-\nabla^{2}\log p(x|w).$
In the proof of Theorems 1, we need the following six expectation values,
$\displaystyle D_{1}$ $\displaystyle=$ $\displaystyle EE_{X}[E_{w}[f(X,w)]],$
$\displaystyle D_{2}$ $\displaystyle=$
$\displaystyle(1/2)EE_{X}[E_{w}[\;f(X,w)^{2}\;]],$ $\displaystyle D_{3}$
$\displaystyle=$ $\displaystyle(1/2)EE_{X}[\;E_{w}[f(X,w)]^{2}\;],$
$\displaystyle D_{4}$ $\displaystyle=$ $\displaystyle
EE_{j}^{(n)}[E_{w}[f(X_{j},w)]],$ $\displaystyle D_{5}$ $\displaystyle=$
$\displaystyle(1/2)EE_{j}^{(n)}[E_{w}[\;f(X_{j},w)^{2}\;]],$ $\displaystyle
D_{6}$ $\displaystyle=$
$\displaystyle(1/2)EE_{j}^{(n)}[\;E_{w}[f(X_{j},w)]^{2}\;].$
The constant $\mu$ is defined by
$\mu=\frac{1}{2}\mbox{tr}(IJ^{-1}IJ^{-1}).$ (42)
Then we can prove the following relations.
###### Lemma 2
Let $\nu$ and $\mu$ be constants which are respectively defined by eq.(11) and
eq.(42). Then
$\displaystyle D_{1}$ $\displaystyle=$
$\displaystyle\frac{d}{2n\beta}+\frac{\nu}{n}+o(\frac{1}{n}),$ $\displaystyle
D_{2}$ $\displaystyle=$
$\displaystyle\frac{\nu}{n\beta}+\frac{\mu}{n}+o(\frac{1}{n}),$ $\displaystyle
D_{3}$ $\displaystyle=$ $\displaystyle\frac{\mu}{n}+o(\frac{1}{n}),$
$\displaystyle D_{4}$ $\displaystyle=$
$\displaystyle\frac{d}{2n\beta}-\frac{\nu}{n}+o(\frac{1}{n}),$ $\displaystyle
D_{5}$ $\displaystyle=$
$\displaystyle\frac{\nu}{n\beta}+\frac{\mu}{n}+o(\frac{1}{n}),$ $\displaystyle
D_{6}$ $\displaystyle=$ $\displaystyle\frac{\mu}{n}+o(\frac{1}{n}).$
For the proof of this lemma, see Appendix.
## 5 Proofs
In this section, we prove theorems.
### 5.1 Proof of Theorem 1
Firstly, by using the definitions
$\displaystyle S$ $\displaystyle=$ $\displaystyle L(w_{0})=-E_{X}[\log
p(X|w_{0})],$ $\displaystyle p(x|w)$ $\displaystyle=$ $\displaystyle
p(x|w_{0})\exp(-f(x,w)),$
the Bayes generalization loss is given by
$\displaystyle E[B_{g}]$ $\displaystyle=$ $\displaystyle-EE_{X}\log
E_{w}[p(X|w)]$ $\displaystyle=$ $\displaystyle S-EE_{X}[\log
E_{w}[\exp(-f(X,w))]]$ $\displaystyle=$ $\displaystyle S-EE_{X}[\log
E_{w}(1-f(X,w)+\frac{f(X,w)^{2}}{2})]+o(\frac{1}{n})$ $\displaystyle=$
$\displaystyle S+EE_{X}E_{w}[f(X,w)]-\frac{1}{2}EE_{X}E_{w}[f(X,w)^{2}]$
$\displaystyle+\frac{1}{2}EE_{X}[\;E_{w}[f(X,w)]^{2}\;]+o(\frac{1}{n})$
$\displaystyle=$ $\displaystyle S+D_{1}-D_{2}+D_{3}+o(\frac{1}{n})$
$\displaystyle=$ $\displaystyle
S+\frac{d}{2n\beta}-\frac{\nu}{n\beta}+\frac{\nu}{n}+o(\frac{1}{n}).$
Secondly, the Bayes training loss is
$\displaystyle E[B_{t}]$ $\displaystyle=$ $\displaystyle-EE_{j}^{(n)}\log
E_{w}[p(X_{j}|w)]$ $\displaystyle=$ $\displaystyle S-EE_{j}^{(n)}[\log
E_{w}[\exp(-f(X_{j},w))]]$ $\displaystyle=$ $\displaystyle S-EE_{j}^{(n)}[\log
E_{w}(1-f(X_{j},w)+\frac{f(X_{j},w)^{2}}{2})]+o(\frac{1}{n})$ $\displaystyle=$
$\displaystyle
S+EE_{j}^{(n)}E_{w}[f(X_{j},w)]-\frac{1}{2}EE_{j}^{(n)}E_{w}[f(X_{j},w)^{2}]$
$\displaystyle+\frac{1}{2}EE_{j}^{(n)}[\;E_{w}[f(X_{j},w)]^{2}\;]+o(\frac{1}{n})$
$\displaystyle=$ $\displaystyle S+D_{4}-D_{5}+D_{6}+o(\frac{1}{n})$
$\displaystyle=$ $\displaystyle
S+\frac{d}{2n\beta}-\frac{\nu}{n\beta}-\frac{\nu}{n}+o(\frac{1}{n}).$
Thirdly, the Gibbs generalization loss is
$\displaystyle E[G_{g}]$ $\displaystyle=$ $\displaystyle-EE_{X}E_{w}[\log
p(X|w)]$ $\displaystyle=$ $\displaystyle S+EE_{X}E_{w}[f(X,w)]$
$\displaystyle=$ $\displaystyle S+D_{1}$ $\displaystyle=$ $\displaystyle
S+\frac{d}{2n\beta}+\frac{\nu}{n}+o(\frac{1}{n}).$
Fourthly, the Gibbs training loss is
$\displaystyle E[G_{t}]$ $\displaystyle=$ $\displaystyle-
EE_{j}^{(n)}E_{w}[\log p(X_{j}|w)]$ $\displaystyle=$ $\displaystyle
S+EE_{j}^{(n)}E_{w}[f(X_{j},w)]$ $\displaystyle=$ $\displaystyle S+D_{4}$
$\displaystyle=$ $\displaystyle
S+\frac{d}{2n\beta}-\frac{\nu}{n}+o(\frac{1}{n}).$
Lastly, the functional variance is given by
$\displaystyle E[V]$ $\displaystyle=$ $\displaystyle 2n(D_{5}-D_{6})$
$\displaystyle=$ $\displaystyle 2n(D_{2}-D_{3})+o(1)$ $\displaystyle=$
$\displaystyle\frac{2\nu}{\beta}+o(1).$
Therefore, we obtained Theorem 1.
### 5.2 Proof of Theorem 2
Let $V_{w}[f(X,w)]$ be the variance of $f(X,w)$ in the a posteriori
distribution,
$V_{w}[f(X,w)]\equiv E_{w}[f(X,w)^{2}]-E_{w}[f(X,w)]^{2}.$
Then
$V_{w}[f(X,w)]=V_{w}[f(X,w)-f(X,\hat{w})]$
holds because $f(X,\hat{w})$ is a constant function of $w$. By the Taylor
expansion at $w=\hat{w}$,
$\displaystyle f(X,w)-f(X,\hat{w})=\nabla f(X,\hat{w})\cdot(w-\hat{w})$
$\displaystyle+\frac{1}{2}(w-\hat{w})\cdot\nabla^{2}f(X,\hat{w})(w-\hat{w})+O(|w-\hat{w}|^{3}).$
Using this expansion, and eq.(36), eq.(37), eq.(38), and eq.(39),
$V_{w}[f(X,w)]=V_{w}[(\nabla
f(X,\hat{w}))\cdot(w-\hat{w})]+O_{p}(\frac{1}{n^{2}}).$
Hence
$\displaystyle\beta V$ $\displaystyle\equiv$ $\displaystyle n\beta
E_{j}^{(n)}[V_{w}[f(X_{j},w)]]$ $\displaystyle=$ $\displaystyle n\beta
E_{j}^{(n)}\\{E_{w}[(\nabla f(X_{j},\hat{w})\cdot(w-\hat{w}))^{2}]$
$\displaystyle-E_{w}[\nabla
f(X_{j},\hat{w})\cdot(w-\hat{w})]^{2}\\}+O_{p}(\frac{1}{n}).$
The second term is $O_{p}(1/n)$ by eq.(36). Therefore, by applying eq.(37) to
the first term,
$\displaystyle\beta V$ $\displaystyle=$ $\displaystyle
n\beta\;\mbox{tr}\Bigl{(}E_{j}^{(n)}[(\nabla f(X_{j},\hat{w}))(\nabla
f(X_{j},\hat{w}))^{T}]$ $\displaystyle\times
E_{w}[(w-\hat{w})(w-\hat{w})^{T}]\Bigr{)}+O_{p}(\frac{1}{n})$ $\displaystyle=$
$\displaystyle\mbox{tr}(I_{n}(\hat{w})K_{n}^{-1}(\hat{w}))+O_{p}(\frac{1}{n}).$
Therefore, by using eq.(35), proof of Theorem 2 is completed.
## 6 Discussion
Let us discuss the results of this paper from the three different points of
view.
Firstly, we discuss how to numerically calculate the equations of
states. The widely applicable information criterion (WAIC) [18, 22] is defined
by
WAIC $\displaystyle=$ $\displaystyle-\sum_{i=1}^{n}\log E_{w}[p(X_{i}|w)]$
$\displaystyle+\beta\sum_{i=1}^{n}\Bigl{\\{}E_{w}[(\log
p(X_{i}|w))^{2}]-E_{w}[\log p(X_{i}|w)]^{2}\Bigr{\\}}.$
Then by Theorem 1,
$E[WAIC]=E[nB_{g}]+o(1)$
holds. Hence by minimization of WAIC, we can optimize the model and the
hyperparameter for the minimum Bayes generalization loss. In Bayes estimation,
a set of parameters $\\{w_{k}\\}$ is prepared so that it approximates the a
posteriori distribution. Sometimes it is done by the Markov chain Monte Carlo
method, and we can approximate the average by the a posteriori distribution by
$E_{w}[f(w)]\cong\frac{1}{K}\sum_{k=1}^{K}f(w_{k}).$
Therefore WAIC can be calculated numerically from such a set $\\{w_{k}\\}$.
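A minimal sketch of this computation, assuming only a generic array of posterior samples and a user-supplied log-likelihood matrix (the function name and interface are our own choices, not the paper's software), might look as follows.

```python
# Minimal WAIC sketch from posterior samples {w_k}: the first term is the Bayes
# training loss summed over the data, the second is beta times the summed
# posterior variances of log p(X_i | w).
import numpy as np
from scipy.special import logsumexp

def waic(log_lik_matrix, beta=1.0):
    """log_lik_matrix[i, k] = log p(X_i | w_k) for data point i and posterior sample k."""
    n, K = log_lik_matrix.shape
    # first term: -sum_i log E_w[p(X_i|w)], evaluated stably via logsumexp
    bayes_train = -np.sum(logsumexp(log_lik_matrix, axis=1) - np.log(K))
    # second term: beta * sum_i { E_w[(log p)^2] - E_w[log p]^2 }
    penalty = beta * np.sum(np.var(log_lik_matrix, axis=1))
    return bayes_train + penalty
```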
Secondly, we study the fluctuation of the Bayes generalization error. In
Theorem 1, we proved that, as the number of training samples tends to
infinity, two expectation values converge to the same value,
$\displaystyle E[n(B_{g}-B_{t})]$ $\displaystyle\rightarrow$
$\displaystyle\mbox{tr}(IJ^{-1}),$ $\displaystyle E[\beta V]$
$\displaystyle\rightarrow$ $\displaystyle\mbox{tr}(IJ^{-1}).$
Moreover, in Theorem 2, we proved the convergence in probability,
$\beta V\rightarrow\mbox{tr}(IJ^{-1}).$
On the other hand, in the same way as for Theorem 1, we can prove
$n(B_{g}-B_{t})=n\times\mbox{tr}(I(\hat{w}-w_{0})(\hat{w}-w_{0})^{T})+o_{p}(1).$
Since $\sqrt{n}(\hat{w}-w_{0})$ converges in law to the Gaussian random
variable whose average is zero and whose variance is $J^{-1}IJ^{-1}$, the
random variable $n(B_{g}-B_{t})$ converges not to a constant in probability but
to a random variable in law. In other words, the relation between expectation
values
$E[B_{g}]=E[B_{t}]+\frac{\beta E[V]}{n}+o(\frac{1}{n})$ (43)
holds, whereas they are not equal to each other as random variables,
$B_{g}\neq B_{t}+\frac{\beta V}{n}+o_{p}(\frac{1}{n}).$ (44)
Note that, even if the true distribution is parametrizable and regular, the
generalization and training losses have the same properties; therefore both AIC
and TIC have the same properties as eq.(43) and eq.(44).
Lastly, let us compare the generalization loss by the Bayes estimation with
that by the maximum likelihood estimation. In a regular and parametrizable
case, they are equal to each other asymptotically. In a parametrizable and
singular case, the Bayes generalization error is smaller than that of the
maximum likelihood method. Let us compare them in a nonparametrizable and
regular case.
$E[B_{g}]=S+\frac{\mbox{tr}(IJ^{-1})}{2n}+\frac{d-\mbox{tr}(IJ^{-1})}{2n\beta}+o(\frac{1}{n}).$
When $\beta=\infty$, this is the generalization error of the maximum
likelihood method. If $d>\mbox{tr}(IJ^{-1})$, then $E[B_{g}]$ is an increasing
function of $1/\beta$. Or if $d<\mbox{tr}(IJ^{-1})$, then $E[B_{g}]$ is a
decreasing function of $1/\beta$. If $I<J$, then $\mbox{tr}(IJ^{-1})<d$. By
the definition of $I$ and $J$,
$I=\int\nabla p(x|w_{0})\nabla p(x|w_{0})\frac{q(x)}{p(x|w_{0})^{2}}dx$
and
$J=I-Q,$
where
$Q=\int(\nabla^{2}p(x|w_{0}))\;\frac{q(x)}{p(x|w_{0})}dx.$
If $Q>0$, then $\mbox{tr}(IJ^{-1})>d$, with the result that the generalization
loss of Bayes estimation is smaller than that by the maximum likelihood method.
Example. For $w\in{\bf R}$,
$p(x|w)=\frac{1}{\sqrt{2\pi}}\exp(-\frac{(x-w)^{2}}{2}),$
Then
$L(w)=\frac{1}{2}\int(x-w)^{2}q(x)dx+\frac{1}{2}\log(2\pi).$
Hence $w_{0}=E_{X}[X]$ and
$L(w_{0})=V(X)+\frac{1}{2}\log(2\pi),$
where $V(X)=E_{X}[X^{2}]-E_{X}[X]^{2}$. The value $Q$ is
$Q=V(X)-1.$
If $V(X)>1$, then the generalization error is a decreasing function of
$1/\beta$; in other words, the Bayes estimation makes the generalization loss
smaller than that by the maximum likelihood method. Hence, in a
nonparametrizable case, which estimation makes the generalization loss smaller
depends on the case.
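A quick numerical check of this example is possible, since $I=V(X)$ and $J=1$ can be estimated from samples of $q$. The following sketch, with our own choice of $q$ and variable names, simply verifies that $\mbox{tr}(IJ^{-1})$ estimated from data is close to $V(X)$, so that the sign of $Q=V(X)-1$ determines which method has the smaller generalization loss.

```python
# Numerical check of the example (our own choice of q): for p(x|w) = N(x|w,1),
# I = E_q[(X - w0)^2] = V(X) and J = 1, so tr(I J^{-1}) = V(X) and Q = V(X) - 1.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0.0, 2.0, size=100000)      # q(x) with V(X) = 4 > 1, hence Q > 0

w0 = X.mean()                               # w0 = E_X[X]
I_hat = np.mean((X - w0) ** 2)              # estimate of I = V(X)
J_hat = 1.0                                 # -E_q[d^2/dw^2 log p(x|w0)] = 1
Q_hat = I_hat - J_hat

print(f"tr(I J^-1) ~ {I_hat / J_hat:.3f}, Q ~ {Q_hat:.3f}")
# Q > 0 here, so by the formula for E[B_g] the Bayes generalization loss is
# smaller than that of the maximum likelihood method for this q.
```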
## 7 Conclusion
In this paper, we theoretically proved that equations of states in statistical
estimation hold even if a true distribution is nonparametrizable and regular
for a learning machine. In the previous papers, we proved that the equations of
states hold even if a true distribution is parametrizable and singular. By
combining these results, the equations of states are applicable without regard
to the conditions on the true distribution and the learning machine. Moreover,
### Acknowledgment
This research was partially supported by the Ministry of Education, Science,
Sports and Culture in Japan, Grant-in-Aid for Scientific Research 18079007.
## 8 Appendix
### 8.1 Proof of Lemma 1
By using eq.(30), $\nabla L_{n}(\hat{w})=0$, in a neighborhood of $\hat{w}$,
$L_{n}(w)=L_{n}(\hat{w})+\frac{1}{2}(w-\hat{w})\cdot
K_{n}(\hat{w})(w-\hat{w})+r(w),$
where $r(w)$ is given by
$r(w)=\frac{1}{6}\sum_{i,j,k=1}^{d}(\nabla^{3}L_{n}(\hat{w}))_{ijk}(w-\hat{w})_{i}(w-\hat{w})_{j}(w-\hat{w})_{k}+O(|w-\hat{w}|^{4}).$
Hence, for a given function $g(w)$, the average by the a posteriori
distribution is given by
$\displaystyle E_{w}[g(w)]$ $\displaystyle=$ $\displaystyle\frac{\int
g(w)\;\exp\Bigl{(}-\frac{n\beta}{2}(w-\hat{w})\cdot
K_{n}(\hat{w})(w-\hat{w})-n\beta
r(w)\Bigr{)}dw}{\int\exp\Bigl{(}-\frac{n\beta}{2}(w-\hat{w})\cdot
K_{n}(\hat{w})(w-\hat{w})-n\beta r(w)\Bigr{)}dw}.$
The main region of the integration is a neighborhood of $\hat{w}$,
$|w-\hat{w}|<\epsilon$, hence by putting $w^{\prime}=\sqrt{n}(w-\hat{w})$,
$E_{w}[g(w)]=\frac{\int
g(\hat{w}+\frac{w^{\prime}}{\sqrt{n}})\;\exp(-\frac{\beta}{2}w^{\prime}\cdot
K_{n}(\hat{w})w^{\prime}-\frac{\beta\delta(w^{\prime})}{\sqrt{n}}+O_{p}(\frac{1}{n}))dw^{\prime}}{\int\exp(-\frac{\beta}{2}w^{\prime}\cdot
K_{n}(\hat{w})w^{\prime}-\frac{\beta\delta(w^{\prime})}{\sqrt{n}}+O_{p}(\frac{1}{n}))dw^{\prime}},$
where $\delta(w^{\prime})$ is the third-order polynomial,
$\delta(w^{\prime})=\frac{1}{6}\sum_{i,j,k=1}^{d}(\nabla^{3}L_{n}(\hat{w}))_{ijk}w^{\prime}_{i}w^{\prime}_{j}w^{\prime}_{k}.$
By using
$\exp\Bigl{(}-\frac{\beta\delta(w^{\prime})}{\sqrt{n}}+O_{p}(\frac{1}{n})\Bigr{)}=1-\frac{\beta\delta(w^{\prime})}{\sqrt{n}}+O_{p}(\frac{1}{n}),$
it follows that
$E_{w}[g(w)]=\frac{\int
g(\hat{w}+\frac{w^{\prime}}{\sqrt{n}})\;(1-\frac{\beta\delta(w^{\prime})}{\sqrt{n}})\exp(-\frac{\beta}{2}w^{\prime}\cdot
K_{n}(\hat{w})w^{\prime})dw^{\prime}}{\int\exp(-\frac{\beta}{2}w^{\prime}\cdot
K_{n}(\hat{w})w^{\prime})dw^{\prime}}+O_{p}(\frac{1}{n}).$
Hence by putting $g(w)=w-\hat{w}$, we obtain eq.(36) and by putting
$g(w)=(w-\hat{w})(w-\hat{w})^{T}$, eq.(37). By the same way, eq.(38) and
eq.(39) are proved. Let us prove eq.(40). By using eq.(37),
$\displaystyle E_{w}[(w-w_{0})(w-w_{0})^{T}]$
$\displaystyle=E_{w}[(\hat{w}-w_{0}+\frac{w^{\prime}}{\sqrt{n}})(\hat{w}-w_{0}+\frac{w^{\prime}}{\sqrt{n}})^{T}]$
$\displaystyle=(\hat{w}-w_{0})(\hat{w}-w_{0})^{T}+\frac{1}{n}E_{w}[w^{\prime}w^{\prime
T}]+O_{p}(\frac{1}{n^{3/2}})$
$\displaystyle=(\hat{w}-w_{0})(\hat{w}-w_{0})^{T}+\frac{K_{n}(\hat{w})^{-1}}{n\beta}+O_{p}(\frac{1}{n^{3/2}}).$
(45)
Then by applying eq.(33), eq.(40) is obtained. Lastly, in general,
$|w-w_{0}|^{3}\leq 3(|w-\hat{w}|^{3}+|\hat{w}-w_{0}|^{3}).$
Then, by eq.(34) and eq.(39), eq.(41) is derived. Therefore we have obtained
Lemma 1.
### 8.2 Proof of Lemma 2
By the Taylor expansion of $f(X,w)$ at $w_{0}$,
$\displaystyle f(X,w)$ $\displaystyle=$ $\displaystyle\nabla
f(X,w_{0})\cdot(w-w_{0})$ (46)
$\displaystyle+\frac{1}{2}(w-w_{0})\cdot\nabla^{2}f(X,w_{0})(w-w_{0})$
$\displaystyle+f_{3}(X,w),$
where $f_{3}(X,w)$ satisfies
$|f_{3}(X,w)|\leq C(X,w)|w-w_{0}|^{3}$
in a neighborhood of $w_{0}$ with a function $C(X,w)\geq 0$. Let us estimate
$D_{1},...,D_{6}$. Firstly, by using eq.(29) and eq.(41),
$\displaystyle D_{1}$ $\displaystyle=$
$\displaystyle\frac{1}{2}EE_{w}E_{X}[(w-w_{0})\cdot\nabla^{2}f(X,w_{0})(w-w_{0})]+o(\frac{1}{n})$
$\displaystyle=$ $\displaystyle\frac{1}{2}EE_{w}[(w-w_{0})\cdot
J(w-w_{0})]+o(\frac{1}{n}).$
Then by using the identity
$(\forall u,v\in{\bf R}^{d},A\in{\bf R}^{d\times d})\;\;\;u\cdot
Av=\mbox{tr}(Avu^{T}),$
and eq.(40),
$\displaystyle D_{1}$ $\displaystyle=$
$\displaystyle\frac{1}{2}EE_{w}[\mbox{tr}((J(w-w_{0}))(w-w_{0})^{T})]+o(\frac{1}{n})$
$\displaystyle=$
$\displaystyle\frac{d}{2n\beta}+\frac{\mbox{tr}(IJ^{-1})}{2n}+o(\frac{1}{n}).$
Secondly, by using the identity
$(\forall u,v\in{\bf R}^{d})\;\;\;(u\cdot v)^{2}=\mbox{tr}((uu^{T})(vv^{T})),$
the definition of $I$, and eq.(40),
$\displaystyle D_{2}$ $\displaystyle=$
$\displaystyle(1/2)EE_{w}[\;E_{X}[(\nabla
f(X,w_{0})\cdot(w-w_{0}))^{2}]\;]+o(\frac{1}{n})$ $\displaystyle=$
$\displaystyle(1/2)EE_{w}[\mbox{tr}(I(w-w_{0})(w-w_{0})^{T})]+o(\frac{1}{n})$
$\displaystyle=$
$\displaystyle\frac{\mbox{tr}(IJ^{-1})}{2n\beta}+\frac{\mbox{tr}(IJ^{-1}IJ^{-1})}{2n}+o(\frac{1}{n}).$
Thirdly, by the definition of $I$, eq.(36), and eq.(33),
$\displaystyle D_{3}$ $\displaystyle=$ $\displaystyle(1/2)EE_{X}[E_{w}[\nabla
f(X,w_{0})\cdot(w-w_{0})]^{2}]+o(\frac{1}{n})$ $\displaystyle=$
$\displaystyle(1/2)EE_{X}[(\nabla
f(X,w_{0})\cdot(\hat{w}-w_{0}))^{2}]+o(\frac{1}{n})$ $\displaystyle=$
$\displaystyle(1/2)E[\mbox{tr}(I(\hat{w}-w_{0})(\hat{w}-w_{0})^{T})]+o(\frac{1}{n})$
$\displaystyle=$
$\displaystyle\frac{\mbox{tr}(IJ^{-1}IJ^{-1})}{2n}+o(\frac{1}{n}).$
Fourthly, by the Taylor expansion eq.(46)
$\displaystyle D_{4}$ $\displaystyle=$ $\displaystyle
EE_{w}[E_{j}^{(n)}[\nabla f(X_{j},w_{0})\cdot(w-w_{0})]]$
$\displaystyle+\frac{1}{2}EE_{w}[E_{j}^{(n)}[(w-w_{0})\cdot\nabla^{2}f(X_{j},w_{0})(w-w_{0})]]+o(\frac{1}{n})$
$\displaystyle=$ $\displaystyle E[\;E_{j}^{(n)}[\nabla f(X_{j},w_{0})]\cdot
E_{w}[w-w_{0}]\;]$ $\displaystyle+\frac{1}{2}EE_{w}[(w-w_{0})\cdot
J_{n}(w_{0})(w-w_{0})]]+o(\frac{1}{n}).$
Then, by using $E[J_{n}(w_{0})]=J$, the second term is equal to $D_{1}$.
Applying eq.(36) and
$E_{j}^{(n)}[\nabla f(X_{j},w_{0})]=\nabla L_{n}(w_{0})+O_{p}(\frac{1}{n})$
to the first term, we obtain
$\displaystyle D_{4}$ $\displaystyle=$ $\displaystyle E[(\nabla
L_{n}(w_{0}))\cdot(\hat{w}-w_{0})]+D_{1}+o(\frac{1}{n}).$
Then applying eq.(31), $K_{n}(w_{0})\rightarrow J$, and
$w_{\beta}^{*}\rightarrow w_{0}$,
$\displaystyle D_{4}$ $\displaystyle=$
$\displaystyle-E[(K_{n}(w_{\beta}^{*})(\hat{w}-w_{0}))\cdot(\hat{w}-w_{0})]+D_{1}+o(\frac{1}{n})$
$\displaystyle=$
$\displaystyle\frac{d}{2n\beta}-\frac{\mbox{tr}(IJ^{-1})}{2n}+o(\frac{1}{n}).$
And lastly, by the definitions,
$\displaystyle D_{5}$ $\displaystyle=$
$\displaystyle(1/2)EE_{w}[\;E_{j}^{(n)}[(\nabla
f(X_{j},w_{0})\cdot(w-w_{0}))^{2}]\;]+o(\frac{1}{n}),$ $\displaystyle D_{6}$
$\displaystyle=$ $\displaystyle(1/2)EE_{j}^{(n)}[E_{w}[\nabla
f(X_{j},w_{0})\cdot(w-w_{0})]^{2}]+o(\frac{1}{n}).$
By using the convergences in probability, $I_{n}(w_{0})\rightarrow I$ and
$J_{n}(w_{0})\rightarrow J$, it follows that
$\displaystyle D_{5}$ $\displaystyle=$ $\displaystyle D_{2}+o(\frac{1}{n}),$
$\displaystyle D_{6}$ $\displaystyle=$ $\displaystyle D_{3}+o(\frac{1}{n}),$
which completes Lemma 2.
## References
* [1] H. Akaike. A new look at the statistical model identification. IEEE Trans. on Automatic Control, Vol.19, pp.716-723, 1974.
* [2] S.-i. Amari, H. Park, and T. Ozeki, “Singularities Affect Dynamics of Learning in Neuromanifolds,” Neural Comput., 18(5),pp.1007 - 1065,2006.
* [3] M.Aoyagi, S.Watanabe,“Stochastic complexities of reduced rank regression in Bayesian estimation,” Neural Networks, Vol.18,No.7, pp.924-933, 2005.
* [4] K. Hagiwara, “On the Problem in Model Selection of Neural Network Regression in Overrealizable Scenario, ” Neural Comput., Vol.14,Vol.8, pp.1979 - 2002, 2002.
* [5] J.A. Hartigan, “A failure of likelihood asymptotics for normal mixture,” Proc. of Berkeley Conf. in honor of Jerzy Neyman and Jack Kiefer, Vol.2, pp.807-810, 1985.
* [6] T. Hayasaka, M. Kitahara, and S. Usui, “On the Asymptotic Distribution of the Least-Squares Estimators in Unidentifiable Models,” Neural Comput., Vol.16 ,No.1, pp.99 - 114, 2004.
* [7] N. Murata, S. Yoshizawa, and S. Amari, “Network information criterion - determining the number of hidden units for an artificial neural network model,” IEEE Transactions on Neural Networks, Vol.5, No.6, pp.865-872, November, 1994.
* [8] K. Nagata, S. Watanabe, ”Asymptotic Behavior of Exchange Ratio in Exchange Monte Carlo Method”, International Journal of Neural Networks, to appear.
* [9] K. Nagata and S. Watanabe,”Exchange Monte Carlo Sampling from Bayesian Posterior for Singular Learning Machines”, IEEE Transactions on Neural Networks, to appear.
* [10] K. Takeuchi, “Distribution of statistic and criterion for optimal model selection,” Surikagaku, Vol. 153, pp. 12-18, 1976.
* [11] S.Watanabe, “Generalized Bayesian framework for neural networks with singular Fisher information matrices,” Proc. of International Symposium on Nonlinear Theory and Its applications, (Las Vegas), pp.207-210, 1995.
* [12] S.Watanabe, “Algebraic analysis for singular statistical estimation,” Proc. of International Journal of Algorithmic Learning Theory, Lecture Notes on Computer Sciences, 1720, pp.39-50, 1999.
* [13] S.Watanabe, ”Algebraic Analysis for Nonidentifiable Learning Machines,” Neural Computation, Vol.13, No.4, pp.899-933, 2001.
* [14] S. Watanabe, ”Algebraic geometrical methods for hierarchical learning machines,” Neural Networks, Vol.14, No.8,pp.1049-1060, 2001.
* [15] S. Watanabe, ”Learning efficiency of redundant neural networks in Bayesian estimation,” IEEE Transactions on Neural Networks, Vol.12, No.6, 1475-1486,2001.
* [16] S.Watanabe, S.-I.Amari,”Learning coefficients of layered models when the true distribution mismatches the singularities”, Neural Computation, Vol.15,No.5,1013-1033, 2003.
* [17] S.Watanabe,“Algebraic geometry of singular learning machines and symmetry of generalization and training errors,” Neurocomputing, Vol.67,pp.198-213,2005.
* [18] S. Watanabe, “Equations of states in statistical estimation,” arXiv:0712.0653, 2007.
* [19] S. Watanabe,“ Generalization and Training Errors of Bayes and Gibbs estimation in Singular Learning Machines,” IEICE Technical report, NC2007-75, Vol.2007-12,pp.25-30,2007.
* [20] S. Watanabe,“A formula of equations of states in singular learning machines,” Proceedings of WCCI, 2008.
* [21] S. Watanabe, “A limit theorem in singular regression problem,” arXiv:0901.2376v1, 2009.
* [22] S. Watanabe, “Algebraic geometry and statistical learning theory,” Cambridge University Press, 2009.
* [23] S. Watanabe, “On a relation between a limit theorem in learning theory and singular fluctuation,” IEICE Technical Report, NC2008-111, pp.45-50, 2009.
* [24] K.Yamazaki, S.Watanabe,“Singularities in mixture models and upper bounds of stochastic complexity.” International Journal of Neural Networks, Vol.16, No.7, pp.1029-1038,2003.
* [25] K.Yamazaki, S.Watanabe,“ Singularities in Complete bipartite graph-type Boltzmann machines and upper bounds of stochastic complexities”, IEEE Trans. on Neural Networks, Vol. 16 (2), pp.312-324, 2005.
* [26] K. Yamazaki and S. Watanabe, ”Algebraic geometry and stochastic complexity of hidden Markov models”, Neurocomputing, Vol.69, pp.62-84, 2005.
|
arxiv-papers
| 2009-06-01T04:47:15 |
2024-09-04T02:49:03.035677
|
{
"license": "Public Domain",
"authors": "Sumio Watanabe",
"submitter": "Sumio Watanabe",
"url": "https://arxiv.org/abs/0906.0211"
}
|
0906.0215
|
# Computational Analysis of Control Systems Using Dynamic Optimization
This work was supported in part by the U.S. Naval Research Laboratory and the Air Force Office of Scientific Research.
Wei Kang and Liang Xu
Wei Kang is with the Faculty of Applied Mathematics, Naval Postgraduate School, Monterey, CA, USA (wkang@nps.edu). Liang Xu is with the Naval Research Laboratory, Monterey, CA, USA (liang.xu@nrlmry.navy.mil).
###### Abstract
Several concepts on the measure of observability, reachability, and robustness
are defined and illustrated for both linear and nonlinear control systems.
Defined by using computational dynamic optimization, these concepts are
applicable to a wide spectrum of problems. Some questions addressed include
the observability based on user-information, the determination of strong
observability vs. weak observability, partial observability of complex
systems, the computation of $L^{2}$-gain for nonlinear control systems, and
the measure of reachability in the presence of state constraints. Examples on
dynamic systems defined by both ordinary and partial differential equations
are shown.
## 1 Introduction
Control systems are analyzed and characterized by using fundamental concepts
such as observability, reachability, and input-to-output gain [1, 2, 3]. These
concepts have a vast volume of literature. For nonlinear systems, the
challenge is to define the concepts so that they are characteristic and
fundamental to control systems and, meanwhile, they are practically
verifiable. In this paper, the goal is to use dynamic optimization to define
quantitative measures of control system properties. Moreover, computational
methods of dynamic optimization provide practical tools to numerically
implement these concepts in applications.
In Section 2, the ambiguity in estimation is defined as a measure of
observability. This quantity can be numerically computed by solving a dynamic
optimization. An example is shown in which some systems that are observable in
the traditional sense are not practically observable because of their poor
value of ambiguity in estimation. In other words, we can quantitatively
distinguish strongly observable systems from weakly observable ones. Another
feature of this concept is
the capability of taking into account non-sensor information or user knowledge
of systems, in addition to the output. For instance, an example is shown in
which the system is unobservable under a traditional definition. It turns out
that the system is strongly observable if we know that the control input has
bounded variation, even without knowing an accurate upper bound. Moreover, this
concept can be used to measure partial observability of complex systems,
including the observability of a function of the states and the observability
of unknown parameters in a model.
In Section 3, computational methods for the $L^{p}$-gain of control systems
are introduced. The assumption is that the space of input has finite
dimension. Then, the $L^{p}$-gain can be computed using dynamic optimization.
In addition, a method of approximating $L^{p}$-gain is also introduced, which
is based on the eigenvalues of covariance matrices. The methods are
exemplified by a nonlinear model of an atomic force microscope.
In Section 4, we define the concepts of ambiguity in control and control cost.
These definitions take into account the control input as well as systems’
constraints. For instance, the concept can be used to quantitatively measure
the reachability of nonlinear systems under the constraint that the states
must stay in a given region of safety. As an example, the heat equation with
boundary control is studied.
## 2 Observability
Consider a general control system
$\begin{array}[]{lllllllll}\dot{x}=f(t,x,u,\mu),&x\in\Re^{n_{x}},&u\in\Re^{n_{u}}&\mu\in\Re^{n_{\mu}}\\\
y=h(t,x,u,\mu),&y\in\Re^{n_{y}}\\\ z=e(t,x,u,\mu),&z\in\Re^{n_{z}}\\\
(x(\cdot),u(\cdot),\mu)\in{\cal C}\end{array}$ (1)
in which $y$ is the output and $z$ is the variable to be estimated, which is either the state $x$ or, in the case of partial observability for large-scale systems, a function of $x$. The system state is $x$, $u$ is the control input, and $\mu$ is the parameter or model uncertainty. In (1), ${\cal C}$ is a general formulation of constraints. Examples of constraints include, but are not limited to,
$\begin{array}[]{ll}E(x(t_{0}),x(t_{f}))\leq 0,&\mbox{end point condition}\\\ s(x,u)\leq 0,&\mbox{state-control constraints}\\\ s(x(t_{1}))=0,&\mbox{known event at time }t_{1}\\\ \mu_{min}\leq\mu\leq\mu_{max},&\mbox{model uncertainties}\\\ s(x,\mu)=0,&\mbox{DAE (differential-algebraic equations, $\mu$ is a variable)}\\\ \mbox{Variation}(u)\leq V_{max},&\mbox{control input with bounded variation (non-sensor information)}\end{array}$
These constraints represent information that is known about the system in addition to the measured output $y$. This general form of constraints makes it possible to take non-sensor information, or user knowledge about the system, into account in the estimation process. For instance, some state variables may be known to be nonnegative; a control input may have bounded variation; or an event may be known to happen at a certain moment. All of this is valuable information that can be used for the estimation of $z$. The goal of this section is to define a measure for the observability of $z$ using the observation data of $y$ together with the constraints and the control system model.
### 2.1 Definition
We assume that variables along trajectories are associated with metrics. For instance, $y=h(t,x(t),u(t),\mu)$, as a function of $t$, has an $L^{2}$ or $L^{\infty}$ norm; $z=e(t,x(t),u(t),\mu)$ can be measured by its function norm, or by the norm of its initial value $e(t_{0},x(t_{0}),u(t_{0}),\mu)$. A metric used for $z$ is denoted by $||\cdot||_{Z}$, and $||\cdot||_{Y}$ represents the metric for $y=h(t,x,u,\mu)$. The following definition is applicable to systems with general metrics, including $L^{p}$ and $L^{\infty}$. Unless otherwise specified, a norm $||a||$ for $a\in\Re^{k}$ is defined by
$(a_{1}^{2}+\cdots+a_{k}^{2})^{1/2}$
For any function $h(t)$, $t\in[t_{0},t_{f}]$, its $L^{p}$-norm is defined by
$||h||_{L^{p}}=\left(\int_{t_{0}}^{t_{f}}|h(t)|^{p}dt\right)^{1/p}$
The infinity norm is defined by
$||h||_{\infty}=\lim_{p\rightarrow\infty}\left(\int_{t_{0}}^{t_{f}}|h(t)|^{p}dt\right)^{1/p}$
which equals its essential supremum value. In this paper, a triple $(x(t),u(t),\mu)$ represents a trajectory of (1) satisfying the differential equations as well as the constraints. Given a positive number $\epsilon>0$ and a nominal, or true, trajectory $(x(t),u(t),\mu)$, define
$\begin{array}[]{rcllllllll}{\cal
E}=\left\\{(\hat{x}(t),\hat{u}(t),\hat{\mu})|\;||h(t,\hat{x}(t),\hat{u}(t),\hat{\mu})-h(t,x(t),u(t),\mu)||_{Y}\leq\epsilon\right\\}\end{array}$
(2)
The number $\epsilon$ is used as an output error bound. If $h(t,\hat{x}(t),\hat{u}(t),\hat{\mu})$ stays in the $\epsilon$-neighborhood of the nominal output $h(t,x(t),u(t),\mu)$, then we consider the trajectory $(\hat{x}(t),\hat{u}(t),\hat{\mu})$ indistinguishable from the nominal one using output measurements. In this case, any trajectory in $\cal E$ can be picked by an estimation algorithm as an approximation of the true trajectory $(x(t),u(t),\mu)$. For this reason, a trajectory in $\cal E$ is called an estimation of $(x(t),u(t),\mu)$. Similarly, $\hat{z}=e(t,\hat{x},\hat{u},\hat{\mu})$ is an estimation of $z=e(t,x,u,\mu)$.
###### Definition 1
Given a trajectory $(x(t),u(t),\mu)$, $t\in[t_{0},t_{1}]$, and an output error bound $\epsilon>0$, the number $\rho_{o}(\epsilon)$ is defined as follows
$\begin{array}[]{lllllllll}\rho_{o}(\epsilon)=\displaystyle\max_{(\hat{x}(t),\hat{u}(t),\hat{\mu})}||e(t,\hat{x}(t),\hat{u}(t),\hat{\mu})-e(t,x(t),u(t),\mu)||_{Z}\\\ \mbox{subject to}\\\ ||h(t,\hat{x}(t),\hat{u}(t),\hat{\mu})-h(t,x(t),u(t),\mu)||_{Y}\leq\epsilon\\\ \dot{\hat{x}}=f(t,\hat{x},\hat{u},\hat{\mu}),\\\ (\hat{x}(\cdot),\hat{u}(\cdot),\hat{\mu})\in{\cal C}\end{array}$ (3)
The number $\rho_{o}(\epsilon)$ is called the ambiguity in the estimation of
$z$ along the trajectory $(x(t),u(t),\mu)$.
Let $U$ be an open set in $(x,u,\mu)$-space and $[t_{0},t_{1}]$ be a time
interval. Then the largest value of ambiguity along all trajectories in $U$ is
called the ambiguity in the estimation of $z$ in the region $U$.
Remarks
1\. The ratio $\rho_{o}(\epsilon)/\epsilon$ measures the sensitivity of
estimation to the noise in $y$. A small sensitivity value implies strong
observability of $z$ in the presence of sensor noise.
2\. The ratio $\rho_{o}(\epsilon)/\epsilon$ is closely related to the
observability gramian. Consider a linear system
$\dot{x}=Ax,\;\;y=Cx$
Suppose $z=x$ and suppose $||\cdot||_{Y}$ is the $L^{2}$-norm. Let $P$ be the
observability gramian [1], [4], then
$\begin{array}[]{lllllllll}||y||_{Y}^{2}=x_{0}^{T}Px_{0}\end{array}$ (4)
Given $||y||_{Y}\leq\epsilon$, $\rho_{o}$ equals the maximum value of $||x_{0}||$ satisfying (4). In this case, $x_{0}$ is an eigenvector of $P$ associated with the smallest eigenvalue $\lambda_{\min}$. We have
$\begin{array}[]{lllllllll}\epsilon^{2}=\lambda_{min}\rho_{o}^{2}\end{array}$
(5)
Therefore, the ratio $\rho_{o}(\epsilon)^{2}/\epsilon^{2}$ equals the reciprocal of the smallest eigenvalue of the observability gramian. For nonlinear systems, one can use an empirical observability gramian to approximate $\rho_{o}(\epsilon)/\epsilon$. An advantage of this approach is that the gramian can be computed empirically without solving the optimization problem (3). Details on empirical computational algorithms for the gramian of nonlinear systems can be found in [4] and [5].
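As a hedged illustration of this gramian-based shortcut (a minimal sketch with an invented system; it is not the empirical algorithm of [4]), the finite-horizon observability gramian of a linear system can be computed by quadrature and its smallest eigenvalue used to approximate the ratio $\rho_{o}(\epsilon)/\epsilon$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative linear system dx/dt = A x, y = C x (the values are made up).
A = np.array([[0.0, 1.0],
              [-2.0, -0.1]])
C = np.array([[1.0, 0.0]])

t0, tf, nt = 0.0, 10.0, 2001
ts = np.linspace(t0, tf, nt)

# Finite-horizon observability gramian P = int_{t0}^{tf} e^{A^T t} C^T C e^{A t} dt,
# approximated with the trapezoidal rule.
integrand = np.array([expm(A.T * t) @ C.T @ C @ expm(A * t) for t in ts])
P = np.trapz(integrand, ts, axis=0)

lam_min = np.linalg.eigvalsh(P).min()

eps = 1e-3                              # assumed output error bound
rho_over_eps = 1.0 / np.sqrt(lam_min)   # relation (5): rho_o^2 / eps^2 = 1 / lambda_min
print("approximate ambiguity ratio rho_o/eps:", rho_over_eps)
print("approximate ambiguity rho_o:", eps * rho_over_eps)
```

The eigenvector of $P$ associated with $\lambda_{\min}$ also gives the least observable direction discussed in the next remark.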
3\. Given a fixed number $\rho_{o}>0$, from (4) the least sensitive initial state defined in (5), i.e. the eigenvector of length $\rho_{o}$ associated with $\lambda_{min}$, can be found by the following optimization
$\displaystyle\arg\min_{||x_{0}||_{X}=\rho_{o}}||y||_{Y}^{2}=\arg\min_{||x_{0}||_{X}=\rho_{o}}x_{0}^{T}Px_{0}$
Extending this idea to nonlinear systems, the least observable direction in
initial states can be defined as follows: given $\rho_{o}>0$, let $\epsilon$
be the minimum value from the following problem of minimization
$\begin{array}[]{lllllllll}\epsilon=\displaystyle\min_{(\hat{x}(t),\hat{u}(t),\hat{\mu})}||h(t,\hat{x}(t),\hat{u}(t),\hat{\mu})-h(t,x(t),u(t),\mu)||_{Y}\\\
\mbox{subject to}\\\ ||\hat{x}_{0}-x_{0}||_{X}=\rho_{o}\\\
\dot{\hat{x}}=f(t,\hat{x},\hat{u},\hat{\mu}),\\\
(\hat{x}(\cdot),\hat{u}(\cdot),\hat{\mu})\in{\cal C}\end{array}$
The resulting $\hat{x}_{0}$ represents the least observable state on the sphere $||\hat{x}_{0}-x_{0}||_{X}=\rho_{o}$, and the ratio $\rho_{o}/\epsilon$ measures the unobservability of the initial states. This definition reverses the process of Definition 1. However, in some cases it is easier to handle the constraint $||\hat{x}_{0}-x_{0}||_{X}=\rho_{o}$ than the inequality on $h(t,x,u,\mu)$ in (3).
4\. The metric for the output in Definition 1 can be vector valued and bounded by a vector $\epsilon$. This flexibility is useful for systems using different types of sensors with different accuracies.
5\. Definition 1 is independent of estimation methods. It characterizes a
fundamental attribute of the system itself, not the accuracy of a specific
estimation method. In the following, we compare Definition 1 to traditional
definitions of observability. It is shown that simple linear systems
observable in the traditional sense might be weakly observable or practically
unobservable under Definition 1; and, on the other hand, some systems not
observable under traditional definitions are practically observable with a
small ambiguity in estimation. $\diamond$
### 2.2 Computational dynamic optimization
The problem defined by (3) is a dynamic optimization. To apply Definition 1, this problem must be solved. Obviously, an analytic solution to (3) is very difficult, if not impossible, to derive, especially in the case of nonlinear systems. However, numerical approaches exist that can be used to find an approximate solution. For instance, various numerical methods are discussed in detail in [6], [7], and [8]. Surveys on numerical methods for solving nonlinear optimal control problems can be found in [9, 10]. The computational algorithm used in this paper is from a family of approaches called direct methods [11, 12, 13, 14, 15]. The essential idea of these methods is to discretize the optimal control problem and then solve the resulting finite-dimensional optimization problem. The simplicity of direct methods makes them an ideal tool for a wide variety of applications of constrained dynamic optimization, including (3) in Definition 1.
More specifically, all simulations in this paper use a pseudospectral optimal control method. In this approach, a set of nodes is selected using either the zeros or the critical points of orthogonal polynomials, in our case the Legendre-Gauss-Lobatto nodes. The problem of dynamic optimization is then discretized at these nodes, resulting in a nonlinear programming problem, which is solved using sequential quadratic programming. Details can be found in [12, 14, 15]. In some of the following examples, the dynamic optimizations are solved using the software package DIDO [16].
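To make the discretization step concrete, the following minimal sketch (our own illustration, independent of the DIDO package) generates the Legendre-Gauss-Lobatto nodes and quadrature weights at which the dynamics and cost integrals are then collocated:

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes_weights(N):
    """Return the N+1 Legendre-Gauss-Lobatto nodes and weights on [-1, 1].

    Interior nodes are the roots of P_N'(x); the weights are
    w_i = 2 / (N (N + 1) P_N(x_i)^2).
    """
    PN = legendre.Legendre.basis(N)                 # Legendre polynomial P_N
    interior = PN.deriv().roots()                   # zeros of P_N'
    nodes = np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
    weights = 2.0 / (N * (N + 1) * legendre.legval(nodes, PN.coef) ** 2)
    return nodes, weights

nodes, weights = lgl_nodes_weights(15)              # 16 LGL nodes, as a small example
print(nodes)
print(weights.sum())                                # close to 2, the length of [-1, 1]
```

The nodes are then mapped affinely to $[t_{0},t_{f}]$, the state and control are represented by their values at the nodes, and the resulting nonlinear program is handed to a sequential quadratic programming solver.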
A frustration in nonlinear programming is the difficulty of finding globally optimal solutions within a given domain. This paper is no exception. In all examples, a variety of initial guesses are used to gain a comfortable level of confidence that the result is not merely a local optimum. However, for all examples of nonlinear systems in this paper, the computation cannot guarantee a global maximum for (3). Nevertheless, when a result is not the global maximum, it still provides a lower bound on the ambiguity value $\rho_{o}$.
### 2.3 Examples
For the rest of this section, we illustrate Definition 1 using several examples. In the first example, it is shown that the traditional concept of observability is ineffective for systems of large dimension. This justifies the need for a quantitative definition of observability, such as Definition 1.
Example. Consider the following linear system
$\begin{array}[]{rcl}&&\dot{x}_{1}=x_{2}\\\ &&\dot{x}_{2}=x_{3}\\\ &&\vdots\\\ &&\dot{x}_{n}=-\displaystyle\sum_{i=1}^{n}\binom{n}{i-1}x_{i}\\\ &&y=x_{1}\end{array}$ (6)
Under a traditional definition of observability, this system is perfectly observable for any choice of $n$: given an output history $y=x_{1}(t)$, it corresponds to a unique initial state $x_{0}$. However, if Definition 1 is applied to measure the observability, the picture changes completely when the dimension is high.
Suppose the goal is to estimate $x_{0}$. We define $z=x(t)$. Definition 1 is applicable with arbitrary metrics. To measure the observability of the initial state, we use the norm of $x(0)$ as the metric for $z$, i.e.
$||z(t)||_{Z}=||x(0)||$
For this example, the output accuracy is measured by $L^{\infty}$-norm,
$||y(t)||_{Y}=\displaystyle\max_{t\in[t_{0},t_{f}]}|y(t)|$
Let us assume that the true initial state is
$x_{0}=\left[\begin{array}[]{ccccccccc}0&0&\cdots&1\end{array}\right]^{T}$
Let the output error bound be small, $\epsilon=10^{-6}$; that is, we assume very accurate observation data. The time interval is $[0,15]$. Problem (3) takes the following form
$\begin{array}[]{lllllllll}\rho_{o}(\epsilon)=\displaystyle\max_{\hat{x}}||\hat{x}(0)-x(0)||\\\
\mbox{subject to}\\\ ||\hat{x}_{1}(t)-x_{1}(t)||_{Y}\leq\epsilon\\\
\dot{\hat{x}}=f(\hat{x})\end{array}$ (7)
It is solved to compute $\rho_{o}(\epsilon)$. Table 1 lists the result for
$n=2,3,\cdots,9$.
n | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
---|---|---|---|---|---|---|---|---
$\rho_{o}(\epsilon)$ | $4.70\times 10^{-6}$ | $2.67\times 10^{-5}$ | $1.53\times 10^{-4}$ | $8.89\times 10^{-4}$ | $5.20\times 10^{-3}$ | $3.01\times 10^{-2}$ | $1.75\times 10^{-1}$ | $1.02$
$\epsilon$ | $10^{-6}$ | $10^{-6}$ | $10^{-6}$ | $10^{-6}$ | $10^{-6}$ | $10^{-6}$ | $10^{-6}$ | $10^{-6}$
Table 1: Observability
From the table, when $n=2$ the ambiguity in the estimation of $x_{0}$ is as small as $4.70\times 10^{-6}$, so the system is strongly observable. Equivalently, if the observation data has absolute error less than $\epsilon$, then the worst possible estimation of $x_{0}$ has an error on the scale of $10^{-6}$. This conclusion agrees with the traditional theory of observability. However, as the dimension increases, the observability ambiguity increases too; thus the system becomes less observable. At $n=8$, the observability ambiguity is as large as $0.175$, i.e. the worst error of estimation is $17.5\%$ relative to the true $x_{0}$. When $n=9$, the observability ambiguity is $1.02$; in this case, the worst relative error in estimation is more than $100\%$! Thus, the system is practically unobservable, although it is perfectly observable under a traditional definition. Figure 1 shows why this system is practically unobservable. The continuous curves represent the true trajectory and its output for $n=9$; the dotted curves are the estimation. The outputs of the two trajectories agree with each other very well (top panel), but the initial states (only $x_{9}$ is plotted) are significantly different.
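To make the computation behind Table 1 concrete, the following sketch (our own simplified reconstruction, not the authors' DIDO setup; horizon, grid, and solver settings are assumptions, so the numbers need not match the table exactly) exploits the linearity of system (6): with $d=\hat{x}(0)-x(0)$, the output deviation is $Ce^{At}d$, so problem (7) reduces to maximizing $||d||$ subject to $|Ce^{At}d|\leq\epsilon$ on the time grid.

```python
import numpy as np
from math import comb
from scipy.linalg import expm
from scipy.optimize import minimize

def chain_system(n):
    """Matrices of system (6): x_i' = x_{i+1}, x_n' = -sum_i C(n, i-1) x_i, y = x_1."""
    A = np.eye(n, k=1)
    A[-1, :] = [-comb(n, i - 1) for i in range(1, n + 1)]
    C = np.zeros((1, n)); C[0, 0] = 1.0
    return A, C

def ambiguity(n, eps=1e-6, tf=15.0, nt=301, restarts=20, seed=0):
    """Lower bound on rho_o(eps) for problem (7), exploiting linearity of (6)."""
    A, C = chain_system(n)
    ts = np.linspace(0.0, tf, nt)
    M = np.vstack([C @ expm(A * t) for t in ts])        # rows map d to y_hat(t) - y(t)

    cons = [{"type": "ineq", "fun": lambda d: eps - M @ d},   # y_hat - y <= eps
            {"type": "ineq", "fun": lambda d: eps + M @ d}]   # y_hat - y >= -eps
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(restarts):                           # the problem is nonconvex: restart
        res = minimize(lambda d: -d @ d, 1e-7 * rng.standard_normal(n),
                       method="SLSQP", constraints=cons,
                       options={"maxiter": 1000, "ftol": 1e-20})
        if res.success:
            best = max(best, float(np.linalg.norm(res.x)))
    return best

for n in (2, 5, 9):
    print(n, ambiguity(n))
```

As noted above, any local solution returned by the optimizer is only a lower bound on the true ambiguity.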
Figure 1: Estimation error ($n=9$)
As shown in Figure 1, while the estimation of the initial state is inaccurate,
the estimation is very close to the true value at the final time $t_{f}$. To
see the observability of the final state, let us use a different metric for
$z$,
$\begin{array}[]{lllllllll}||z(t)||_{Z}=||x(t_{f})||\end{array}$ (8)
If we consider $t=t_{f}$ as the current time, then this metric measures the detectability of the current system state rather than the observability of the initial value $x_{0}$. To compute the ambiguity under the new metric, we solve the problem defined in (7) with the cost function replaced by the metric (8). For the case of $t_{f}=10$ and $\epsilon=10^{-6}$, the ambiguity in the estimation of $x(t_{f})$ equals $2.7328\times 10^{-6}$. Therefore, the system is accurately detectable.
To summarize, this example shows a set of linear systems that are observable under the conventional definition. However, as the dimension increases, the systems become practically unobservable in the sense that an output trajectory cannot accurately determine the state trajectory. Meanwhile, the detectability of the system does not change with the dimension. Definition 1 is used here to treat both observability and detectability quantitatively in the same framework. $\diamondsuit$
A concept is useful only if it is verifiable for a wide spectrum of systems and applications. An advantage of Definition 1 is that the dynamic optimization (3) can be solved numerically for various types of applications. In the following, we illustrate the usefulness of the ambiguity in estimation using two examples: a networked cooperative control system, and parameter identification for a nonlinear system.
Example (Partial observability of cooperative and networked systems). In this example, it is shown that a system that is unobservable under traditional control theory can be practically observable by employing user knowledge about the system, such as an approximate upper bound on the variation of a control input. Consider the networked control system shown in Figure 2. Suppose it consists of an unknown number of vehicles. Due to the large number of subsystems, it could be either impossible or unnecessary to collect and process all information about the entire system. A practical approach is to establish partial observability using local sensor information only. In this example, we assume that the cooperative relationships in the system are unknown except that Vehicle 2 follows Vehicle 1, and Vehicle 3 follows both Vehicles 1 and 2, as shown by the arrows in Figure 2. The dashed lines in the figure represent unknown cooperative relationships. The question to be answered is the observability of Vehicle 1 if the locations of Vehicles 2 and 3 can be measured.
Figure 2: Cooperative networked system
Suppose each vehicle can be treated as a point mass with linear dynamics
$\begin{array}[]{lllllllll}\dot{x}_{i1}=x_{i2}&\dot{y}_{i1}=y_{i2}\\\
\dot{x}_{i2}=u_{i}&\dot{y}_{i2}=v_{i}\end{array}$
Assume that the control inputs of Vehicles 2 and 3 are defined as follows
$\begin{array}[]{lllllllll}u_{2}=a_{1}(x_{21}-x_{11}-d_{1})+a_{2}(x_{22}-x_{12})\\\
u_{3}=b_{1}(x_{31}-\displaystyle\frac{x_{11}+x_{21}}{2}-d_{2})+b_{2}(x_{32}-\displaystyle\frac{x_{12}+x_{22}}{2})\end{array}$
where $d_{i}$ is the separation distance. The control in the $y$-direction is defined in the same way. Thus, Vehicle 2 follows Vehicle 1, and Vehicle 3 follows the average position of Vehicles 1 and 2. Suppose we can measure the positions of Vehicles 2 and 3:
$\begin{array}[]{lllllllll}\mbox{output}=\left[\begin{array}[]{ccccccccc}x_{21}&y_{21}&x_{31}&y_{31}\end{array}\right]^{T}\end{array}$
(9)
The question to be answered is the observability of the location and velocity of Vehicle 1, i.e. $x_{11}$, $y_{11}$, $x_{12}$, $y_{12}$. We emphasize that the control input of Vehicle 1 is unknown, because it is determined by cooperative relationships with other vehicles or agents, which are not given. Therefore, in traditional control theory, Vehicle 1 is unobservable.
To make Vehicle 1 practically observable with limited local measurements, we assume that the input of Vehicle 1 has bounded variation with an upper bound $V_{max}$. That is, the vehicles are not supposed to make high-frequency zigzag movements, i.e. the control does not exhibit chattering. However, discontinuous control, such as bang-bang control, is allowed. In the following, we measure the observability of Vehicle 1 along a trajectory defined by the parameters in Table 2.
$t_{0}$ | $t_{f}$ | $d_{1}$ | $d_{2}$ | $a_{1}$ | $a_{2}$ | $b_{1}$ | $b_{2}$ | $(x_{11}^{0},x_{12}^{0})$ | $(x_{21}^{0},x_{22}^{0})$ | $(x_{31}^{0},x_{32}^{0})$
---|---|---|---|---|---|---|---|---|---|---
$0$ | $20$ | $-2$ | $-2$ | $-1$ | $-2$ | $-3$ | $-7$ | ($0,4$) | ($d_{1},4$) | ($d_{2},4$)
Table 2: Parameters of Nominal Trajectory
The control input of the nominal trajectory is
$u_{1}=\sin\displaystyle\frac{(t_{f}-t_{0})t}{\pi}$
which is unknown to the observer. To measure the ambiguity in the estimation of Vehicle 1, we assume the output error bound of (9) is $\epsilon=10^{-2}$. For the unknown $u_{1}$, we assume a bounded variation of at most $V_{max}=3.0$, which is $50\%$ higher than the true variation. The metric for each output variable is the $L^{\infty}$-norm. The metric for the location and velocity of Vehicle 1 is the $L^{2}$-norm. The ambiguity in the estimation of each state variable is computed by solving a problem of dynamic optimization. Using the estimation of $x_{11}$ as an example, we have
$\begin{array}[]{lllllllll}\rho_{o}(\epsilon)=\displaystyle\max_{(\hat{x},\hat{u}_{1})}||\hat{x}_{11}(t)-x_{11}(t)||_{L^{2}}\\\
\mbox{subject to}\\\
||\hat{x}_{21}(t)-x_{21}(t)||_{L^{\infty}}\leq\epsilon_{1}\\\
||\hat{x}_{31}(t)-x_{31}(t)||_{L^{\infty}}\leq\epsilon_{2}\\\
\dot{\hat{x}}=f(\hat{x})\\\ V(\hat{u}_{1})\leq V_{max}\end{array}$ (10)
where $V(\hat{u}_{1})$ is the total variation. In computation, this constraint is discretized at a set of node points $t_{0}<t_{1}<\cdots<t_{N}=t_{f}$ so that
$\displaystyle\sum_{i=1}^{N}|\hat{u}_{1}(t_{i})-\hat{u}_{1}(t_{i-1})|\leq V_{max}$
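In code, this discretized total-variation bound is a single inequality on the node values of the control; a minimal sketch (the function and variable names are ours) is:

```python
import numpy as np

def total_variation(u_nodes):
    """Discretized total variation sum_i |u(t_i) - u(t_{i-1})| of a control sampled at the nodes."""
    return np.sum(np.abs(np.diff(u_nodes)))

# Used as an inequality constraint V(u) <= V_max in the nonlinear program,
# e.g. with scipy.optimize.minimize:
V_max = 3.0
tv_constraint = {"type": "ineq", "fun": lambda u_nodes: V_max - total_variation(u_nodes)}
```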
An interesting point in the formulation (10) is that the outputs for the two vehicles, i.e. $\hat{x}_{21}$ and $\hat{x}_{31}$, have different error bounds, $\epsilon_{1}$ and $\epsilon_{2}$; the metric for the outputs is a vector-valued function. This flexibility of using different $\epsilon$ values for multiple outputs is advantageous for systems with multiple sensors of different quality that measure various states with different accuracy.
The computed result is shown in Table 3. The small relative ambiguity values show that the location and velocity of Vehicle 1 are practically observable given the measured positions of Vehicles 2 and 3, without using any information about the rest of the networked system and without knowing the input of Vehicle 1. The worst estimation of $x_{11}$ and $x_{12}$, shown in Figure 3, has good accuracy. $\diamondsuit$
$V_{max}$ | $\epsilon$ | $\rho_{x_{11}}$ | $\rho_{x_{11}}/||x_{11}||_{L_{2}}$ | $\rho_{x_{12}}$ | $\rho_{x_{12}}/||x_{12}||_{L_{2}}$
---|---|---|---|---|---
$3$ | $10^{-2}$ | $1.2257$ | $2.8\times 10^{-3}$ | $0.5901$ | $1.16\times 10^{-2}$
Table 3: Observability of Vehicle 1
Figure 3: The worst estimation of the position and velocity of Vehicle 1
In the following example, the concept of ambiguity in estimation is applied to the Laub-Loomis model [17] with unknown parameters, a nonlinear model of an oscillating biochemical network.
Example (Parameter identification). In the study of biochemical networks, it was proposed that interacting proteins could account for the spontaneous oscillations in adenylyl cyclase activity observed in homogeneous populations of Dictyostelium cells. While terminology such as $3^{\prime},5^{\prime}$-cyclic adenosine monophosphate (cAMP) and adenylate cyclase (ACA) is involved in the problem, we focus on the state space, in which a set of seven nonlinear differential equations is used as the model [17].
$\begin{array}[]{rcllllllll}\dot{x}_{1}&=&k_{1}x_{7}-k_{2}x_{1}x_{2}\\\
\dot{x}_{2}&=&k_{3}x_{5}-k_{4}x_{2}\\\
\dot{x}_{3}&=&k_{5}x_{7}-k_{6}x_{2}x_{3}\\\
\dot{x}_{4}&=&k_{7}-k_{8}x_{3}x_{4}\\\
\dot{x}_{5}&=&k_{9}x_{1}-k_{10}x_{4}x_{5}\\\
\dot{x}_{6}&=&k_{11}x_{1}-k_{12}x_{6}\\\
\dot{x}_{7}&=&k_{13}x_{6}-k_{14}x_{7}\\\ \end{array}$ (11)
In a robustness study [18], it was shown that a small variation in the model parameters can effectively destroy the required oscillatory dynamics. As a related question, it is interesting to investigate the possibility of estimating the parameters in the system, $k_{1}$, $k_{2}$, $\cdots$, $k_{14}$. To exemplify the idea, we assume that $x_{1}$, the value of ACA, is measurable, i.e.
$y=x_{1}$
We also assume that the initial state in the experiment is known. Suppose the unknown parameters are $k_{1}$, $k_{6}$, and $k_{10}$, and suppose the other parameters are known. The goal is to use the measured data of $y$ to estimate the unknown parameters, i.e.
$z=\left[\begin{array}[]{ccccccccc}k_{1}&k_{6}&k_{10}\end{array}\right]$
Using Definition 1 we can quantitatively determine the observability of the
unknown parameters. Along a nominal trajectory $(x^{\ast}(t),k^{\ast})$, the
ambiguity can be computed by solving the following special form of (3).
$\begin{array}[]{lllllllll}\rho_{o}^{2}=\displaystyle{\max_{(x,k_{1},k_{6},k_{10})}}(k_{1}-k^{\ast}_{1})^{2}+(k_{6}-k^{\ast}_{6})^{2}+(k_{10}-k^{\ast}_{10})^{2}\\\
\mbox{subject to}\\\
||x_{1}(t)-x^{\ast}_{1}(t)||_{L^{2}}^{2}\leq\epsilon^{2}\\\
\dot{x}=f(t,x,k_{1},k_{6},k_{10}),\mbox{ other parameters equal nominal
value}\\\ x(t_{0})=x^{\ast}(t_{0})\\\ \end{array}$
In the simulation, a nominal trajectory is generated using the following
parameter value and initial condition
$\begin{array}[]{lllllllll}k_{1}=2.0,\;\;k_{2}=0.9,\;\;k_{3}=2.5,\;\;k_{4}=1.5,\;\;k_{5}=0.6,\;\;k_{6}=0.8,\;\;k_{7}=1.0,\\\
k_{8}=1.3,\;\;k_{9}=0.3,\;\;k_{10}=0.8,\;\;k_{11}=0.7,\;\;k_{12}=4.9,\;\;k_{13}=23.0,\;\;k_{14}=4.5,\\\
x(0)=\left[\begin{array}[]{ccccccccc}1.9675&1.2822&0.6594&1.1967&0.6712&0.2711&1.3428\end{array}\right]\end{array}$
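For readers who wish to reproduce the nominal trajectory, a minimal sketch using the parameter values just listed is given below (the integration horizon of 10 time units and the solver tolerances are our assumptions; the text does not state them here):

```python
import numpy as np
from scipy.integrate import solve_ivp

k = np.array([2.0, 0.9, 2.5, 1.5, 0.6, 0.8, 1.0,
              1.3, 0.3, 0.8, 0.7, 4.9, 23.0, 4.5])    # k_1 ... k_14
x0 = np.array([1.9675, 1.2822, 0.6594, 1.1967, 0.6712, 0.2711, 1.3428])

def laub_loomis(t, x, kk):
    """Right-hand side of model (11)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [kk[0]*x7 - kk[1]*x1*x2,
            kk[2]*x5 - kk[3]*x2,
            kk[4]*x7 - kk[5]*x2*x3,
            kk[6]     - kk[7]*x3*x4,
            kk[8]*x1  - kk[9]*x4*x5,
            kk[10]*x1 - kk[11]*x6,
            kk[12]*x6 - kk[13]*x7]

# Nominal trajectory over an assumed horizon of 10 time units.
sol = solve_ivp(laub_loomis, (0.0, 10.0), x0, args=(k,), max_step=0.01, dense_output=True)
y_nominal = sol.y[0]          # the measured output y = x_1
```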
The output error bound is set at $\epsilon=10^{-2}$. The computation
reveals that the ambiguity in the estimation of
$z=\left[\begin{array}[]{ccccccccc}k_{1}&k_{6}&k_{10}\end{array}\right]$ is
$\rho_{o}=2.38\times 10^{-2}$
Given the nominal value of the parameters, the relative ambiguity in
estimation is about $1\%$. So, the parameters are strongly observable. In
fact, the worst estimation of the parameters is
$k_{1}=2.0150,\;\;k_{6}=0.8082,\;\;k_{10}=0.7836.$
The trajectory generated by the worst parameter estimation is shown in Figure
4. $\diamond$
Figure 4: The trajectory of the worst estimation (curve: true trajectory; stars: estimation)
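For completeness, a hedged sketch of the parameter-ambiguity computation is given below (our own reconstruction, not the authors' implementation; the horizon, discretization, solver settings, and starting point are assumptions, so the numbers it returns need not match the value reported above, and any local solution is only a lower bound on the ambiguity).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

k_nom = np.array([2.0, 0.9, 2.5, 1.5, 0.6, 0.8, 1.0,
                  1.3, 0.3, 0.8, 0.7, 4.9, 23.0, 4.5])
x0 = np.array([1.9675, 1.2822, 0.6594, 1.1967, 0.6712, 0.2711, 1.3428])
t_f = 10.0                                        # assumed horizon (not stated in the text)
ts = np.linspace(0.0, t_f, 401)
eps = 1e-2

def laub_loomis(t, x, kk):
    x1, x2, x3, x4, x5, x6, x7 = x
    return [kk[0]*x7 - kk[1]*x1*x2, kk[2]*x5 - kk[3]*x2, kk[4]*x7 - kk[5]*x2*x3,
            kk[6] - kk[7]*x3*x4, kk[8]*x1 - kk[9]*x4*x5, kk[10]*x1 - kk[11]*x6,
            kk[12]*x6 - kk[13]*x7]

def output(p):                                    # p = (k1, k6, k10); returns y = x_1 on ts
    kk = k_nom.copy(); kk[[0, 5, 9]] = p
    sol = solve_ivp(laub_loomis, (0.0, t_f), x0, args=(kk,),
                    t_eval=ts, rtol=1e-8, atol=1e-10)
    return sol.y[0]

p_nom = k_nom[[0, 5, 9]]
y_nom = output(p_nom)

cons = {"type": "ineq",                           # eps^2 - ||x1 - x1*||_{L2}^2 >= 0
        "fun": lambda p: eps**2 - np.trapz((output(p) - y_nom) ** 2, ts)}
res = minimize(lambda p: -np.sum((p - p_nom) ** 2), p_nom * 1.01,
               method="SLSQP", constraints=[cons],
               options={"maxiter": 200, "ftol": 1e-12})
print("worst parameters:", res.x, " lower bound on rho_o:", np.sqrt(-res.fun))
```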
## 3 Input-to-output gain
The $L^{p}$-gain is a tool of analysis widely used by control engineers to quantitatively measure the sensitivity and robustness of systems. Consider
$\begin{array}[]{rcllllllll}\dot{x}=f(t,x,w,\mu)\\\ z=e(t,x,w,\mu)\end{array}$
(12)
where $x\in\Re^{n_{x}}$ is the state variable, $w\in\Re^{n_{w}}$ is the input representing the disturbance, $\mu\in\Re^{n_{\mu}}$ is the system uncertainty or a parameter, and $z\in\Re^{n_{z}}$ is the performance output. In the following, the $L^{p}$-norm of a vector-valued function is denoted by $||\cdot||_{L^{p}}$; for instance
$||z(t)||_{L^{p}}=\displaystyle\left(\int_{t_{0}}^{t_{1}}\sum_{i=1}^{n_{z}}|z_{i}(t)|^{p}dt\right)^{1/p}$
Fix a time interval $[t_{0},t_{1}]$ and $\sigma>0$. Suppose the input $w$ is a function in an $L^{p}$ space for some $1\leq p\leq\infty$ such that $w(t)$ is bounded by $\sigma$, i.e.
$||w(t)||_{L^{p}}\leq\sigma$
Let $x^{\ast}(t)$ be a nominal trajectory with $x^{\ast}(t_{0})=x_{0}$ and $w^{\ast}(t)=0$. Fix the initial value $x(t_{0})=x_{0}$. Suppose the system uncertainty is bounded, $\mu_{min}\leq\mu\leq\mu_{max}$. Then the $L^{p}$-gain from $w$ to $z$ along $x^{\ast}(t)$ is defined as follows
$\begin{array}[]{lllllllll}\gamma(\sigma)=\displaystyle\max_{\begin{array}[]{c}||w||_{L^{p}}\leq\sigma,\\\
\mu_{min}\leq\mu\leq\mu_{max}\end{array}}\displaystyle\frac{||e(t,x,w,\mu)-e(t,x^{\ast},0,\mu)||_{L^{p}}}{\sigma}\end{array}$
(13)
Remark 6. Without the parameter, the maximum value of $||e(t,x,w)-e(t,x^{\ast},0)||_{L^{p}}$ is the ambiguity in the estimation of $z=e(t,x,w)$. More specifically, consider the ambiguity in the estimation of $z$ under the observation of $w$ with an error bound $\sigma$. Then the $L^{p}$-gain $\gamma(\sigma)$ equals the ratio of this ambiguity to $\sigma$.
### 3.1 Computation and example
The input-to-output gain can be computed by solving problem (13). In Section 2, the output function $y$ is smooth and can be numerically approximated in a finite-dimensional space, for example by interpolation at a finite number of nodes. Similarly, one has to work in a finite-dimensional space of $w$ to carry out the computation. So, for the purpose of computation, we only discuss the $L^{p}$-gain over a finite-dimensional space of $w$, denoted by $\cal U$, rather than the infinite-dimensional space of arbitrary integrable functions. The space ${\cal U}$ can be defined by a frequency bandwidth, by the order of polynomials, or by some other space used for the approximation of the input. Then, (13) is reformulated as follows
$\begin{array}[]{lllllllll}\gamma_{\cal
U}(\sigma)=\displaystyle\max_{\begin{array}[]{c}w\in{\cal
U},||w||_{L^{p}}\leq\sigma\\\
\mu_{min}\leq\mu\leq\mu_{max}\end{array}}\displaystyle\frac{||e(t,x,w,\mu)-e(t,x^{\ast},0,\mu)||_{L^{p}}}{\sigma}\end{array}$
(14)
More specifically, given a positive number $\sigma>0$ define
$J(x(\cdot),w(\cdot),\mu)=||e(t,x(\cdot),w(\cdot),\mu)-e(t,x^{\ast}(\cdot),0,\mu)||_{L^{p}}$
Then the following dynamic optimization determines the $L^{p}$-gain over the
space ${\cal U}$.
Dynamic optimization for $L^{p}$-gain
$\begin{array}[]{lllllllll}\rho=\displaystyle{\max_{(x,w,\mu)}}J\\\
\mbox{subject to}\\\ w(t)\in{\cal U},\;||w(t)||_{L^{p}}\leq\sigma\\\
\dot{x}=f(t,x,w,\mu),\\\ x(t_{0})=x_{0}\\\
\mu_{min}\leq\mu\leq\mu_{max}\end{array}$ (15)
The $L^{p}$-gain from $w\in{\cal U}$ to $z$ is $\gamma_{\cal
U}(\sigma)=\displaystyle\frac{\rho}{\sigma}$. $\diamond$
Example ($L^{p}$-gain in the presence of system uncertainty). The atomic force microscope (AFM), invented two decades ago, is used to probe surfaces at the atomic level with good accuracy. This type of equipment is also used as a nano-manipulation tool to handle particles at the nanoscale [19, 20, 21]. As illustrated in Figure 5, the system consists of a microcantilever with a sharp tip at one end. The vibration of the cantilever is measured by an optical sensor. Topographic images of surfaces can be taken by measuring the cantilever’s dynamic behavior, which is determined by the interaction force between the tip and the sample.
Figure 5: Atomic force microscope
The dynamics of the vibrating tip can be modeled as a second order system [21]
$\begin{array}[]{lllllllll}\dot{x}_{1}=x_{2}\\\
\dot{x}_{2}=-\omega^{2}x_{1}-2\xi\omega x_{2}+h(x_{1},\delta)+u(t)+w(t)\\\
z=x_{1}\end{array}$ (16)
where $x_{1}$ is the position of the cantilever tip at the nanometer scale, $x_{2}$ is its velocity, $\omega$ is the natural frequency of the cantilever, and $\xi$ is the damping coefficient. In this system, $u(t)$ is the control input and $w(t)$ is the unknown actuator disturbance. The function $h(x_{1},\delta)$ is the tip-sample interaction force, in which $\delta$ is the separation between the equilibrium of $x_{1}$ and the sample surface; it is a system uncertainty. We adopt the following model for $h(x_{1},\delta)$ [20].
$h(x_{1},\delta)=-\displaystyle\frac{\alpha_{1}}{(\delta+x_{1})^{2}}+\displaystyle\frac{\alpha_{2}}{(\delta+x_{1})^{8}}$
At the nanoscale, system uncertainty and performance robustness are critical issues in control design because a seemingly small noise or uncertainty may have a significant impact on the performance. In the following, we assume that the values of $\delta$ and $w(t)$ are unknown. The goal is to compute the $L^{2}$-gain from the actuator disturbance $w$ to the performance $z=x_{1}$ for $\delta$ anywhere in the interval $[\delta_{min},\delta_{max}]$.
The simulations are based on the following set of parameter values
$\begin{array}[]{lllllllll}\omega=1.0,&\xi=0.02,&\alpha_{1}=0.1481,&\alpha_{2}=3.6\times
10^{-6}\end{array}$
The nominal control input is $u(t)=1$ and the time interval is $[0,7]$, which
is long enough to cover one period of oscillation. The bounds of $\delta$ are
$\begin{array}[]{lllllllll}\delta_{min}=0.8,&\delta_{max}=1.2\end{array}$
We use the $L^{2}$-norm as the metric for the actuator disturbance $w$. Let $w$ be an arbitrary function in the two-frequency space ${\cal W}_{k_{1},k_{2}}$ defined as follows
$\begin{array}[]{lllllllll}w=\displaystyle\sum_{i=1}^{2}\left(A_{i}\cos(\displaystyle\frac{2\pi
k_{i}}{t_{f}-t_{0}}t)+B_{i}\sin(\displaystyle\frac{2\pi
k_{i}}{t_{f}-t_{0}}t)\right)\end{array}$
Let $\sigma=0.03$. Then the $L^{2}$-gain is computed by solving the dynamic
optimization defined in (15). More specifically,
$\begin{array}[]{lllllllll}\rho=\displaystyle{\max_{(x,u,\delta)}}||x_{1}(t)-x_{1}^{\ast}(t)||_{L^{2}}\\\
\mbox{subject to}\\\ ||w(t)||_{L^{2}}\leq\sigma,\;w(t)\in{\cal
W}_{k_{1},k_{2}}\\\ \dot{x}=f(t,x,u,\delta),\\\ \hat{x}(t_{0})=x_{0}\\\
\delta\in[\delta_{min},\delta_{max}]\end{array}$
The $L^{2}$-gain equals $\displaystyle\frac{\rho}{\sigma}$. It is computed for spaces with various frequencies: ${\cal W}_{0,1}$, ${\cal W}_{2,3}$, ${\cal W}_{4,5}$, ${\cal W}_{6,7}$, and ${\cal W}_{8,9}$. The results are shown in Figure 6. The $L^{2}$-gain for frequencies $0$ and $1$ is $2.5707$. As the frequencies increase, the gain decreases. $\diamond$
Figure 6: $L^{2}$-gain
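As a hedged sketch of how (15) can be set up for this example (our own reconstruction, not the authors' code; the initial condition $x(t_{0})=0$, the nominal $\delta=1.0$ used for the reference run, the coefficient bounds, and the solver settings are assumptions, and the optimizer returns only a lower bound on the gain), the decision variables are the Fourier coefficients of $w$ together with $\delta$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

omega, xi, a1, a2 = 1.0, 0.02, 0.1481, 3.6e-6
t0, tf, sigma = 0.0, 7.0, 0.03
k1, k2 = 0, 1                                    # the frequency pair defining W_{k1,k2}
ts = np.linspace(t0, tf, 401)

def h(x1, delta):
    return -a1 / (delta + x1) ** 2 + a2 / (delta + x1) ** 8

def afm(t, x, w_fun, delta):
    x1, x2 = x
    return [x2, -omega**2*x1 - 2*xi*omega*x2 + h(x1, delta) + 1.0 + w_fun(t)]   # u(t) = 1

def simulate(w_fun, delta):
    sol = solve_ivp(afm, (t0, tf), [0.0, 0.0], args=(w_fun, delta), t_eval=ts, rtol=1e-8)
    return sol.y[0]

def w_fun_from(c):                               # c = (A1, B1, A2, B2)
    om1, om2 = 2*np.pi*k1/(tf - t0), 2*np.pi*k2/(tf - t0)
    return lambda t: (c[0]*np.cos(om1*t) + c[1]*np.sin(om1*t)
                      + c[2]*np.cos(om2*t) + c[3]*np.sin(om2*t))

x1_nominal = simulate(lambda t: 0.0, 1.0)        # nominal run: w = 0, delta assumed 1.0

def neg_deviation(p):                            # p = (A1, B1, A2, B2, delta)
    x1 = simulate(w_fun_from(p[:4]), p[4])
    return -np.sqrt(np.trapz((x1 - x1_nominal) ** 2, ts))

def norm_constraint(p):                          # sigma^2 - ||w||_{L^2}^2 >= 0
    w = w_fun_from(p[:4])(ts)
    return sigma**2 - np.trapz(w**2, ts)

res = minimize(neg_deviation, x0=[0.01, 0.01, 0.0, 0.0, 1.0], method="SLSQP",
               bounds=[(-1, 1)]*4 + [(0.8, 1.2)],
               constraints=[{"type": "ineq", "fun": norm_constraint}])
print("approximate L2-gain:", -res.fun / sigma)
```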
### 3.2 An alternative algorithm for $L^{2}$-gain
Solving (15) becomes increasingly difficult for high frequencies. The reason is that, for computational purposes, the problem of dynamic optimization is always discretized at a finite number of nodes in time. For higher frequencies, the number of nodes must be increased, and as a result the dimension of the optimization variables increases as well. Developing efficient computational methods for high-frequency inputs requires further research. However, in the case of the $L^{2}$-gain, there exists an alternative approach that does not require solving a dynamic optimization. Inspired by Remark 6 and the observability gramian in [4], the $L^{2}$-gain can be approximated using the following matrices.
Suppose the space of inputs, ${\cal U}$, is finite dimensional with a basis $w_{1}$, $w_{2}$, $\cdots$, $w_{m}$. Let $\sigma>0$ be a constant. For the input $\pm\sigma w_{i}$, the trajectory of
$\begin{array}[]{lllllllll}\dot{x}=f(t,x,\pm\sigma w_{i})\\\
x(t_{0})=x_{0}\end{array}$
is denoted by $x^{i\pm}(t)$. Define
$\begin{array}[]{lllllllll}2\Delta z^{i}=e(t,x^{i+}(t),\sigma
w_{i}(t))-e(t,x^{i-}(t),-\sigma w_{i}(t))\end{array}$ (17)
Now, define
$\begin{array}[]{lllllllll}G^{w}_{ij}=<w_{i},w_{j}>=\displaystyle\frac{1}{t_{f}-t_{0}}\displaystyle\int_{t_{0}}^{t_{f}}w_{i}(t)^{T}w_{j}(t)dt\\\
G^{z}_{ij}=<\Delta z^{i},\Delta
z^{j}>=\displaystyle\frac{1}{t_{f}-t_{0}}\displaystyle\int_{t_{0}}^{t_{f}}\Delta
z^{i}(t)^{T}\Delta z^{j}(t)dt\end{array}$ (18)
Denote the matrices $G^{w}=(G^{w}_{ij})_{i,j=1}^{m}$ and $G^{z}=(G^{z}_{ij})_{i,j=1}^{m}$. Given any
$w=\displaystyle\sum_{i=1}^{m}a_{i}w_{i}$
satisfying
$<w,w>=\sigma^{2}$
Then $\Delta z$ generated by $\pm w$ is approximately
$2\Delta z=e(t,x^{+}(t),w(t))-e(t,x^{-}(t),-w(t))\approx
2\displaystyle\sum_{i=1}^{m}a_{i}\Delta z^{i}$
Therefore,
$\begin{array}[]{rcllllllll}||w||_{L^{2}}^{2}&=&\left[\begin{array}[]{ccccccccc}a_{1}\cdots
a_{m}\end{array}\right]G^{w}\left[\begin{array}[]{ccccccccc}a_{1}&\cdots&a_{m}\end{array}\right]^{T}\\\
||\Delta z||_{L^{2}}^{2}&\approx&\left[\begin{array}[]{ccccccccc}a_{1}\cdots
a_{m}\end{array}\right]G^{z}\left[\begin{array}[]{ccccccccc}a_{1}&\cdots&a_{m}\end{array}\right]^{T}\end{array}$
Therefore, the square of the $L^{2}$-gain is approximately the solution of the following optimization
$\begin{array}[]{lllllllll}\displaystyle\frac{1}{\sigma^{2}}\max_{a}a^{T}G^{z}a\\\
\mbox{ subject to}\\\ a^{T}G^{w}a=\sigma^{2}\end{array}$
where
$a=\left[\begin{array}[]{ccccccccc}a_{1}&a_{2}&\cdots&a_{m}\end{array}\right]^{T}$.
A necessary condition for the optimal solution is
$G^{z}a=\lambda G^{w}a$
for some scalar $\lambda$. At this point,
$\begin{array}[]{rcllllllll}a^{T}G^{z}a&=&\lambda a^{T}G^{w}a\\\
&=&\lambda\sigma^{2}\end{array}$
Therefore,
$\begin{array}[]{rcl}\gamma_{\cal U}(\sigma)^{2}&=&\displaystyle\max_{||w||_{L^{2}}=\sigma}\displaystyle\frac{||\Delta z||_{L^{2}}^{2}}{||w||_{L^{2}}^{2}}\\\ &\approx&\displaystyle\frac{a^{T}G^{z}a}{\sigma^{2}}\\\ &=&\lambda\end{array}$
On the other hand, $\lambda$ is an eigenvalue of $(G^{w})^{-1}G^{z}$. So, the
$L^{2}$-gain is approximately the square root of the largest eigenvalue.
To summarize, given a system
$\begin{array}[]{lllllllll}\dot{x}=f(t,x,w)\\\ x(t_{0})=x_{0}\end{array}$
and a space of input functions ${\cal U}$ with basis $w_{1}$, $w_{2}$, $\cdots$, $w_{m}$. Given $\sigma>0$, compute $\Delta z^{i}$ in (17) and the matrices $G^{w}$ and $G^{z}$ in (18). Then the $L^{2}$-gain is approximately $\sqrt{\lambda_{max}}$, where $\lambda_{max}$ is the largest eigenvalue of $(G^{w})^{-1}G^{z}$.
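A minimal sketch of this procedure is given below (our own illustration; the helper names, node counts, tolerances, and the small three-element test basis are assumptions, not part of the original formulation):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import eig

def approx_l2_gain(f, e, x0, basis, sigma, t0, tf, nt=401):
    """Approximate the L2-gain via the matrices G^w, G^z of (17)-(18)."""
    ts = np.linspace(t0, tf, nt)
    T = tf - t0

    def run(w):
        sol = solve_ivp(lambda t, x: f(t, x, w(t)), (t0, tf), x0, t_eval=ts, rtol=1e-8)
        return np.array([e(t, x, w(t)) for t, x in zip(ts, sol.y.T)])

    # Central differences Delta z^i from the perturbed runs with +/- sigma * w_i, cf. (17).
    dz = [(run(lambda t, wi=wi: sigma * wi(t)) - run(lambda t, wi=wi: -sigma * wi(t))) / 2.0
          for wi in basis]

    Gw = np.array([[np.trapz(wi(ts) * wj(ts), ts) / T for wj in basis] for wi in basis])
    Gz = np.array([[np.trapz(zi * zj, ts) / T for zj in dz] for zi in dz])

    lam = eig(Gz, Gw, right=False)               # generalized eigenvalues of G^z a = lam G^w a
    return np.sqrt(np.max(lam.real))

# Example usage with the AFM model (16), delta fixed at 1.0 as in the text below:
omega, xi, a1, a2, delta = 1.0, 0.02, 0.1481, 3.6e-6, 1.0
f = lambda t, x, w: np.array([x[1],
                              -omega**2*x[0] - 2*xi*omega*x[1]
                              - a1/(delta + x[0])**2 + a2/(delta + x[0])**8 + 1.0 + w])
e = lambda t, x, w: x[0]
tf = 7.0
basis = [lambda t: 1.0 + 0.0*t,
         lambda t: np.cos(2*np.pi*t/tf),
         lambda t: np.sin(2*np.pi*t/tf)]
print(approx_l2_gain(f, e, np.array([0.0, 0.0]), basis, 0.03, 0.0, tf))
```

As the text notes, the method requires only $2m$ simulations and one small eigenvalue problem, at the price of linearizing the input-to-output map around the nominal trajectory.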
Example. Consider the AFM model defined in (16). Assume that the value $\delta=1.0$ is known. The other parameters are the same as in the previous example. The approximate $L^{2}$-gain is computed using the matrix approach, and the result is shown in Figure 7. This method is computationally straightforward because no optimization is required. However, the approximation does not fully take the nonlinear dynamics into account. Compared to the gain computed using (15), the error of the gain computed using the covariance matrices is around $18$–$20\%$ in ${\cal W}_{0,1}$, ${\cal W}_{2,3}$, and ${\cal W}_{4,5}$.
Figure 7: Approximate $L^{2}$-gain
## 4 Reachability
Dynamic optimization can be applied to quantitatively measure reachability.
Consider a control system
$\begin{array}[]{rcllllllll}\dot{x}=f(t,x,u)\end{array}$ (19)
where $x\in\Re^{n_{x}}$ and $u\in\Re^{n_{u}}$. Suppose $||\cdot||_{X}$ is a norm on $\Re^{n_{x}}$, and suppose that the state and control are subject to the constraint
$(x(\cdot),u(\cdot))\in{\cal C}$
###### Definition 2
Given $x_{0}$ and $x_{1}$ in $\Re^{n_{x}}$. Define
$\begin{array}[]{lllllllll}\rho_{c}(x_{0},x_{1})^{2}=\displaystyle\min_{(x,u)}||x(t_{1})-x_{1}||^{2}_{X}\\\
\mbox{subject to}\\\ \dot{x}=f(x,u)\\\ x(t_{0})=x_{0}\\\
(x(\cdot),u(\cdot))\in\cal C\end{array}$ (20)
The number $\rho_{c}(x_{0},x_{1})$ is called the ambiguity in control.
Let $D_{0},D_{1}\subset\Re^{n_{x}}$ be subsets in state space. The ambiguity
in control over the region $\bar{D}_{0}\times\bar{D}_{1}$ is defined by the
following max-min problem.
$\begin{array}[]{lll}\rho_{c}=\displaystyle{\max_{(x_{0},x_{1})\in\bar{D}_{0}\times\bar{D}_{1}}}\rho_{c}(x_{0},x_{1})\end{array}$
In this definition, $t_{1}$ is either fixed or free in a time interval $[t_{0},T]$. In the following discussion, we assume $t_{1}$ is fixed. If $\rho_{c}(x_{0},x_{1})$ is nonzero, then the state cannot reach $x_{1}$ from $x_{0}$ using admissible controls. The maximum value of $\rho_{c}(x_{0},x_{1})$ over $\bar{D}_{0}\times\bar{D}_{1}$ represents the worst-case scenario of reachability. In some applications, the relative ambiguity
$\displaystyle\frac{\rho_{c}(x_{0},x_{1})}{||x_{1}||_{X}}$
is used to measure the reachability.
The definition of $\rho_{c}$ is consistent with the classic definition of
controllability for linear time-invariant systems. To be more specific,
consider a linear system
$\begin{array}[]{rcllllllll}\dot{x}=Ax+Bu\end{array}$ (21)
let ${\cal C}$ be the space of continuous functions $u(\cdot)$ from $[t_{0},t_{1}]$ to $\Re^{m}$, and let $D_{0}=D_{1}=\Re^{n}$. If $(A,B)$ is controllable, i.e.
$\mbox{rank}\left(\left[\begin{array}[]{ccccc}B&AB&A^{2}B&\cdots&A^{n-1}B\end{array}\right]\right)=n$
then for any $x_{0}$ and $x_{1}$ there always exists a control input such that $x(t)$ with $x(0)=x_{0}$ reaches $x_{1}$ at $t=t_{1}$. Therefore, $\rho_{c}(x_{0},x_{1})$ is zero for arbitrary $(x_{0},x_{1})$, and $\rho_{c}=0$. On the other hand, if $(A,B)$ is uncontrollable, then under a change of coordinates an uncontrollable subsystem can be decoupled from the controllable part of the system. In the uncontrollable subsystem, the states cannot be driven close to each other by control inputs. Therefore, $\rho_{c}(x_{0},x_{1})$ is unbounded for arbitrary states in the uncontrollable subspace, which implies $\rho_{c}=\infty$. To summarize,
$\rho_{c}=\left\\{\begin{array}[]{lll}0&\mbox{if }(A,B)\mbox{ is controllable
}\\\ \infty&\mbox{if }(A,B)\mbox{ is uncontrollable }\end{array}\right.$
Control has a cost. For weakly reachable systems, it takes relatively large control energy to reach a terminal state. The cost of reachability can be measured by the following quantity. Denote by $||u(\cdot)||_{\cal U}$ and $||x||_{X}$ the metrics of the control input and the state, respectively.
###### Definition 3
Given initial and final states, $x_{0}$ and $x_{1}$, define
$\begin{array}[]{lllllllll}W(x_{0},x_{1})=\displaystyle\min_{(x,u)}\displaystyle\lim_{\psi\rightarrow\infty}\left(||u(t)-u^{\ast}(t)||_{\cal
U}+\psi||x(t_{1})-x_{1}||_{X}\right)\\\ \mbox{subject to}\\\ \dot{x}=f(x,u)\\\
x(t_{0})=x_{0},\\\ (x(\cdot),u(\cdot))\in\cal C\end{array}$ (22)
This definition has the following property: if $x_{1}$ can be reached from $x_{0}$, then $W(x_{0},x_{1})$ equals the minimum control effort
$\begin{array}[]{lllllllll}W(x_{0},x_{1})=\displaystyle\min_{(x,u)}||u(t)-u^{\ast}(t)||_{\cal
U}\\\ \mbox{subject to}\\\ \dot{x}=f(x,u)\\\
x(t_{0})=x_{0},\;\;x(t_{1})=x_{1}\\\ (x(\cdot),u(\cdot))\in\cal C\end{array}$
If $x(t)$ cannot reach $x_{1}$ using admissible controls, then $W(x_{0},x_{1})=\infty$. A large value of $W(x_{0},x_{1})$ implies a higher control cost and thus weaker reachability.
Remark 7. Suppose the system is linear. Suppose $D_{0}=\\{0\\}$ and
$D_{1}=\\{x|\;\;||x||<\epsilon\\}$ for some small $\epsilon>0$. Let $W$ be the
maximum cost in reachability under $L^{2}$-norm,
$W=\displaystyle\max_{x_{1}\in\bar{D}_{1}}W(0,x_{1})$
Then $(W/\epsilon)^{2}$ equals the reciprocal of the smallest eigenvalue of
the controllability gramian.
To justify Remark 7, consider
$\dot{x}=Ax+Bu$
If $(A,B)$ is uncontrollable, we know $W=\infty$. We also know that the
smallest eigenvalue of the controllability gramian is zero. Therefore the
claim holds true. Now suppose $(A,B)$ is controllable. Define
$||u||_{\cal U}^{2}=\int_{t_{0}}^{t_{1}}||u(t)||^{2}dt$
The control cost to reach $x_{1}$ is defined by
$\begin{array}[]{lllllllll}W(0,x_{1})=\displaystyle\min_{(x,u)}||u(t)||_{\cal
U}\\\ \mbox{subject to}\\\ \dot{x}=Ax+Bu\\\
x(t_{0})=0,x(t_{1})=x_{1}\end{array}$
Let $P$ be the controllability gramian, then it is known [22] that the optimal
cost satisfies
$\begin{array}[]{rcllllllll}&&\displaystyle\int_{t_{0}}^{t_{1}}||u(t)||^{2}dt\\\
&=&x_{1}^{T}P^{-1}x_{1}\end{array}$
If $\sigma_{min}$ is the smallest eigenvalue of $P$, then
$\begin{array}[]{rcl}(W/\epsilon)^{2}&=&\displaystyle\frac{1}{\epsilon^{2}}\max_{||x_{1}||\leq\epsilon}x_{1}^{T}P^{-1}x_{1}\\\ &=&\displaystyle\frac{1}{\sigma_{\min}}\end{array}$
$\diamond$
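A quick numerical check of this relation (a minimal sketch with an invented controllable pair $(A,B)$; the horizon and grid are arbitrary) computes the finite-horizon controllability gramian by quadrature and compares $(W/\epsilon)^{2}$ with $1/\sigma_{\min}$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative controllable pair (the values are made up).
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

t0, t1, nt = 0.0, 5.0, 2001
ts = np.linspace(t0, t1, nt)

# Controllability gramian P = int_{t0}^{t1} e^{A s} B B^T e^{A^T s} ds (trapezoidal rule).
integrand = np.array([expm(A * s) @ B @ B.T @ expm(A.T * s) for s in ts])
P = np.trapz(integrand, ts, axis=0)

sigma_min = np.linalg.eigvalsh(P).min()

# Minimum-energy cost to reach x1 from the origin: W(0, x1)^2 = x1^T P^{-1} x1.
eps = 1e-2
x1 = eps * np.linalg.eigh(P)[1][:, 0]            # worst-case target: least reachable direction
W_sq = x1 @ np.linalg.solve(P, x1)
print(W_sq / eps**2, 1.0 / sigma_min)            # the two numbers should agree
```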
### 4.1 Example
In the following, we compute the ambiguity in the control of a heat equation
with Neumann boundary control.
$\begin{array}[]{rcllllllll}&&\displaystyle\frac{\partial w(r,t)}{\partial
t}-\kappa\displaystyle\frac{\partial^{2}w(r,t)}{\partial r^{2}}=0\\\
&&w(r,0)=0,\;\;\;0\leq r\leq 2\pi\\\ &&w(0,t)=0,\;\;\;0\leq t\leq t_{f}\\\
&&w_{rt}(r,t)|_{r=2\pi}=u(t)\end{array}$ (23)
where $w(r,t)\in\Re$ is the state of the system, $r\in\Re$ is the spatial variable, and $t$ is time. The control input is $u$. For a thermal problem, $u$ represents the rate of change of the heat flux $w_{r}$ at the boundary. The initial state is assumed to be zero.
For the purpose of computation, we discretize the problem at equally spaced
nodes,
$0=r_{0}<r_{1}<r_{2}<\cdots<r_{N}=2\pi$
Define
$x_{1}(t)=w(r_{1},t),\;x_{2}(t)=w(r_{2},t),\;\cdots,x_{N}(t)=w(r_{N},t)$
Using central differences in space, (23) is approximated by the following control system defined by ODEs.
$\begin{array}[]{rcllllllll}\dot{x}_{1}&=&\kappa\displaystyle\frac{x_{2}-2x_{1}}{\Delta
r^{2}}\\\ \dot{x}_{2}&=&\kappa\displaystyle\frac{x_{1}+x_{3}-2x_{2}}{\Delta
r^{2}},\\\ &\vdots&\\\
\dot{x}_{i}&=&\kappa\displaystyle\frac{x_{i-1}+x_{i+1}-2x_{i}}{\Delta
r^{2}},\\\ &\vdots\\\
\dot{x}_{N-1}&=&\kappa\displaystyle\frac{x_{N-2}+x_{N}-2x_{N-1}}{\Delta
r^{2}}\\\ \dot{x}_{N}&=&v\\\ v&=&\dot{x}_{N-1}+\Delta ru\end{array}$ (24)
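A minimal sketch of this semi-discretization is given below (our own illustration; the values $\kappa=0.14$, $t_{f}=150$, and $N=31$ are taken from the example discussed next, and the constant test input is an arbitrary assumption):

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, N = 0.14, 31
r = np.linspace(0.0, 2*np.pi, N + 1)              # r_0, ..., r_N with w(r_0, t) = 0
dr = r[1] - r[0]

def heat_rhs(t, x, u):
    """Central-difference semi-discretization (24); x = (x_1, ..., x_N)."""
    dx = np.empty(N)
    xp = np.concatenate(([0.0], x))               # prepend the boundary value w(0, t) = 0
    dx[:-1] = kappa * (xp[:-2] - 2*xp[1:-1] + xp[2:]) / dr**2   # interior nodes 1 .. N-1
    dx[-1] = dx[-2] + dr * u(t)                   # boundary node: x_N' = x_{N-1}' + dr * u
    return dx

# Quick test with an assumed constant heat-flux rate u(t) = 0.01.
sol = solve_ivp(heat_rhs, (0.0, 150.0), np.zeros(N), args=(lambda t: 0.01,), max_step=0.5)
```

The control ambiguity problem (25) below is then transcribed on this ODE system with the control parametrized at the LGL nodes, as described in Section 2.2.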
We understand that more sophisticated algorithms for solving the heat equation exist; we adopt this central difference method for simplicity in the illustration of control ambiguity. System (24) is linear and controllable, so it is theoretically a reachable system. However, in reality the maximum temperature cannot exceed a safety margin. Under such a constraint, a controllable linear system may not be reachable due to overshoot. Let $t_{f}=150$ and $\kappa=0.14$. Suppose the target states are the following arches; a few of them are shown in Figure 8.
$w_{f}(r)=w(r,t_{f})=A\sin(r/2),\;\;\;0\leq r\leq 2\pi$
Figure 8: The target state $w_{f}(r)$
The goal is to compute the ambiguity in control from $w(r,0)=0$ to $w_{f}(r)$
with the magnitude of $0\leq A\leq 1.2$ subject to the constraint
$w(r,t)\leq 2$
The norm in the finite dimensional state space is defined by
$||x||^{2}=\Delta r\sum_{i=1}^{N}x_{i}^{2}$
which approximates the $L^{2}$-norm on $C[0,2\pi]$. To compute the control ambiguity over the range $0\leq A\leq 1.2$, we consider the following values of the magnitude of the target arch
$A=0,\;0.2,\;0.4,\;0.6,\;0.8,\;1.0,\;1.2$
The corresponding dynamic optimization problem for the control ambiguity is
defined by
$\begin{array}[]{lllllllll}\rho_{c}(0,x_{f})^{2}=\displaystyle\min_{(x,u)}\Delta
r\sum_{i=1}^{N}(x_{i}-w_{f}(r_{i}))^{2}\\\ \mbox{subject to}\\\
\dot{x}=f(x,v)\\\ x_{i}(t)\leq 2,\;\;\;i=1,2,\cdots,N\\\ x(0)=0\end{array}$
(25)
where $f(x,v)$ is the dynamics defined in (24) with the input $v$, and
$x_{f}=\left[\begin{array}[]{ccccccccc}w_{f}(r_{1})&w_{f}(r_{2})&\cdots&w_{f}(r_{N})\end{array}\right]$
In the simulation, $N$ is selected to be $N=31$. Problem (25) is solved using the pseudospectral method at Legendre-Gauss-Lobatto (LGL) nodes [12, 14, 15]. We use $15$ LGL nodes in this example. The computation shows that the system becomes increasingly unreachable due to the constraint once the value of $A$ exceeds $0.4$. The relative ambiguity in control is shown in Figure 9. When the magnitude of the target state is $1.2$, the relative ambiguity shows that the closest state the system can reach has almost a $40\%$ relative error. Therefore, the system is practically unreachable if the state is required to be bounded.
Figure 9: Magnitude of target state vs. relative control ambiguity
## 5 Conclusion
It is shown through numerous definitions and examples that computational dynamic optimization is a promising tool for quantitatively analyzing control system properties. Using computational approaches, the concepts studied in this paper, including the ambiguity in estimation and control, the input-to-output gain, and the cost of reachability, are applicable to a wide spectrum of applications. In addition, these concepts are defined and applied in a way that allows one to take advantage of user knowledge and to take system constraints into account. As a result, the properties of control systems are not only verified, but also measured quantitatively. While these concepts can be applied to a wide spectrum of problems, specific applications exemplified in this study include: distinguishing strongly observable (detectable) from weakly observable (detectable) systems; improving observability by employing user knowledge; partial observability of networked complex systems; the $L^{2}$-gain of nonlinear control systems; reachability in the presence of state constraints; and boundary control of partial differential equations.
As with many nonlinear optimization problems, a main drawback of the approach is that global optimality is, in general, not guaranteed for nonlinear systems. In addition, the problem of computational accuracy poses many questions that remain to be answered.
## References
* [1] T. Kailath, Linear Systems, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1980.
* [2] A. Isidori, Nonlinear Control Systems, Springer-Verlag, London, 1995.
* [3] K. Zhou, J. Doyle, and K. Glover , Robust and Optimal Control, Prentice Hall, 1995.
* [4] A. J. Krener and K. Ide, Measures of Unobservability, preprint, 2009.
* [5] S. Lall, J. E. Marsden and S. Glavski, A subspace approach to balanced truncation for model reduction of nonlinear control systems, Int. J. Robust Nonlinear Control, vol. 12, 2002, pp. 519-535.
* [6] A. E. Bryson and Y. C. Ho, Applied Optimal Control. Hemisphere, New York, 1975.
* [7] A. E. Bryson, Dynamic Optimization, Addison-Wesley Longman, Inc., 1999.
* [8] E. Polak, Optimization: Algorithms and Consistent Approximations, Springer-Verlag, Heidelberg, 1997.
* [9] J. T. Betts, “Survey of Numerical Methods for Trajectory Optimization,” Journal of Guidance, Control, and Dynamics, Vol. 21, No. 2, 1998, pp. 193-207.
* [10] E. Polak, A historical survey of computational methods in optimal control, SIAM Review, Vol. 15, pp. 553-584, 1973.
* [11] J. T. Betts, Practical Methods for Optimal Control Using Nonlinear Programming, SIAM, Philadelphia, PA, 2001.
* [12] G. Elnagar, M. A. Kazemi and M. Razzaghi, The pseudospectral Legendre method for discretizing optimal control problems, IEEE Trans. Automat. Contr. Vol. 40, pp. 1793-1796, 1995.
* [13] W. W. Hager, Runge-Kutta methods in optimal control and the transformed adjoint system, Numerische Mathematik, Vol. 87, pp. 247-282, 2000.
* [14] F. Fahroo, I. M. Ross (1998) Costate Estimation by a Legendre Pseudospectral Method. Proceedings of the AIAA Guidance, Navigation and Control Conference, 10-12 August 1998, Boston, MA.
* [15] W. Kang, Q. Gong, and I. M. Ross, On the Convergence of Nonlinear Optimal Control using Pseudospectral Methods for Feedback Linearizable Systems, International Journal of Robust and Nonlinear Control, Vol. 17, 1251-1277, 2007.
* [16] DIDO, Elissar LLC, http://www.elissar.biz/.
* [17] M. T. Laub, and W. F. Loomis, A molecular network that produces spontaneous oscillations in excitable cells of Dictyostelium, Mol. Biol. Cell, Vol. 9, 3521-3532, 1998.
* [18] J. Kim, D. G. Bates, I. Postlethwaite, L. Ma, and P. A. Iglesias, Robustness analysis of biochemical network models, IEE Proc. Syst. Biol. Vol. 153, No. 3, May, 2006.
* [19] M. Ashhab, M.V. Salapaka, M. Dahleh, and I. Mezić, Dynamical analysis and control of microcantilevers, Automatica, 1663-1670, 1999.
* [20] M. Basso, L. Giarre, M. Dahleh, and I. Mezić, Complex dynamics in a harmonically excited Lennard-Jones oscillator: Microcantilever-sample interaction in scanning probe microscopes, ASME Journal of Dynamic Systems, Measurement, and Control, Vol. 122, pp. 240-245, 2000.
* [21] A. Delnavaz, N. Jalili, and Hassan Zohoor, Vibration control of AFM tip for nano-manipulation using combined sliding mode techniques, IEEE Proc 7th International Conference on Nanotechnology, Hong Kong, August 2 - 5, 2007.
* [22] J. Zabczyk, Mathematical Control Theory: An Introduction, Birkhäuser, Boston, 1992.
|
arxiv-papers
| 2009-06-01T05:23:09 |
2024-09-04T02:49:03.043217
|
{
"license": "Public Domain",
"authors": "Wei Kang and Liang Xu",
"submitter": "Wei Kang",
"url": "https://arxiv.org/abs/0906.0215"
}
|
0906.0216
|
Keywords: ISM: supernova remnant — X-ray: individual (RX J0852.0–4622 (Vela Jr.))
# Search for Sc-K line emission from RX J0852.0–4622 Supernova remnant with
Suzaku
Junko S. Hiraga,1 Yusuke Kobayashi,2 Toru Tamagawa,1,3 Asami Hayato,1,3 Aya Bamba,2 Yukikatsu Terada,4 Robert Petre,5 Hideaki Katagiri,6 and Hiroshi Tsunemi7
1 RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
2 Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Sagamihara, Kanagawa 229-8510, Japan
3 Department of Physics, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan
4 Department of Physics, Saitama University, Shimo-Okubo 255, Sakura, Saitama 338-8570, Japan
5 Astrophysics Science Division, NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
6 Department of Physical Science, Graduate School of Science, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima 739-8526, Japan
7 Department of Earth and Space Science, Graduate School of Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043, Japan
jhiraga@crab.riken.jp
###### Abstract
We searched for evidence of line emission around 4 keV from the northwestern
rim of the supernova remnant RX J0852.0-4622 using Suzaku XIS data. Several
papers have reported the detection of an emission line around 4.1 keV from
this region of the sky. This line would arise from K-shell fluorescence by
44Sc, the immediate decay product of 44Ti. We performed spectral analysis for
the entire portion of the NW rim of the remnant within the XIS field of view,
as well as various regions corresponding to regions of published claims of
line emission. We found no line emission around 4.1 keV anywhere, and we are able to set a restrictive upper limit on the X-ray flux: 1.1$\times$10-6 s-1 cm-2 for the entire field. For every region, our flux upper limit falls below that of the previously claimed detection. Therefore, we conclude that, to date, no definite X-ray line feature from Sc-K emission has been detected in the NW rim of RX J0852.0–4622. Our non-detection supports the recent claim that RX J0852.0–4622 is neither young nor nearby, with an age of 1700–4000 yr and a distance of $\sim$750 pc.
## 1 Introduction
Supernovae are believed to be an agent for producing heavy elements and
distributing them to the Galaxy. The theory of supernova explosions predicting
nucleosynthesis yields has been developed over the past 50 years. However,
there is no observational evidence which allows us to carry out quantitative
comparison with these models, and the explosions themselves are still not
understood.
44Ti is a short-lived radioisotope with a half-life of about 60 yrs. It is
thought to be produced by explosive nucleosynthesis in SNe and to be the
source of the stable 44Ca in our Galaxy. The abundance of this species
strongly depends on the explosion details, mainly on the so-called “mass-cut”
in core-collapse SNe, the energy of the explosion and explosion asymmetries.
Thus, the decay-chain of 44Ti offers a unique window into the study of the
supernova explosion mechanism (see review Diehl and Timmers (1998)). The
radioactive decay chain 44Ti$\to$44Sc$\to$44Ca produces three gamma-ray lines
at 67.9 keV, 78.4 keV and 1157 keV with similar branching ratios. Several
gamma-ray space missions, including COMPTEL, BeppoSAX and INTEGRAL, have
sought these lines in young supernova remnants. The discovery of the 44Ti 1157
keV line emission from the young famous Galactic supernova remnant Cas A with
COMPTEL(Iyudin et al. (1994)) was the first direct proof that this isotope is
really produced in SNe. Subsequent detection of line emission around 70 keV by
BeppoSAX and INTEGRAL has strengthened this result(Vink et al. (2001); Renaud
et al. (2006)). Today, Cas A remains the only SNR from which 44Ti lines have
been clearly detected. GRO J0852-4642 has also been reported as a 44Ti emitter
with a flux of 3.8$\pm$0.7 $\times$10-5cm-2s-1 based on six years of COMPTEL
data (Iyudin et al. (1998)). Schönfelder et al. (2000) examined the robustness
of the 44Ti line detection, applying different background modeling and event
selection criteria to suppress a large part of the background. They found that the significance of the line detection varied from 2$\sigma$ to 4$\sigma$ depending on the analysis method, whereas the Cas A significance hardly varies (4$\sigma$ or more). The authors therefore conclude that the 44Ti line
detection for GRO J0852–4642 is marginal even though it is the second
brightest feature in COMPTEL’s survey in the 1157 keV band. For the 67.9 keV
and 78.4 keV lines from GRO J0852–4642, von Keinlin et al. (2004) obtained a
flux upper limit of 1.1$\times$ 10-4 cm-2s-1 with the INTEGRAL/SPI. The large
value is mainly due to systematic uncertainty.
44Ti is an unstable, proton-rich isotope among the many new nuclei synthesized
in supernova explosions. Many interesting proton-rich nucleosynthesis products
decay by electron capture, leaving primarily K-shell vacancies in the daughter
atoms. The adjustment of atomic electrons in response to the vacancies causes
the emission of characteristic X-rays (Leising et al. (2001)). For the case of
44Ti decay, it produces Sc-K$\alpha$(4.086, 4.091 keV) line emission.
Currently, and for the near future, X-ray observatories have much larger effective areas, better imaging capability, lower background levels, and better energy resolution than gamma-ray observatories; i.e., line detection is generally more feasible in the X-ray band than in the gamma-ray band. Therefore, searching for the X-ray lines offers an attractive alternative method for directly measuring the mass of 44Ti ejected and, furthermore, the location of the 44Ti within the remnant. Although the most promising object where
substantial 44Ti should exist is Cas A, its large continuum flux prevents us from clearly detecting this X-ray line. On the other hand, the detection of Sc-K emission from the northwestern (NW) region of the SNR RX J0852.0–4622 (Vela Jr.) has been reported in several studies. RX J0852.0–4622 was recently discovered in the southeastern corner of the Vela SNR, with a large apparent radius of about 60 arcmin (Aschenbach (1998)), and is coincident with GRO J0852-4642. Tsunemi et al. (2000) reported the presence of line emission around 4.1 keV in an ASCA SIS observation and concluded that there is substantial Ca produced from 44Ti. On the other hand, Slane et al. (2001), also using ASCA data, found no evidence for a line and gave a 1$\sigma$ upper limit on the Sc-K line flux of 4.4$\times$10-6 cm-2s-1. Iyudin et al. (2005) reported the detection of Sc-K line emission from various regions in the remnant using XMM-Newton data. Bamba et al. (2005) also reported a possible excess around 4.1 keV from a Chandra observation.
In this paper, we take advantage of the large effective area of Suzaku around 4.1 keV to make the most sensitive search to date for Sc-K line emission. Spectra extracted from regions corresponding to those in previously published papers were examined for the existence of an emission line feature. Other X-ray properties of the remnant, such as the nonthermal emission and the thin-thermal plasma, will be discussed in separate papers.
## 2 Observation and Data Reduction
The NW part of RX J0852.0-4622 was observed using the Japanese-US X-ray astronomy satellite Suzaku (Mitsuda et al. (2007)) on 2005 December 19 during the Performance Verification (PV) phase. This is the brightest portion of the remnant, which has a large apparent radius of about 60 arcmin. The field of view (FOV) of our Suzaku observation overlaid on the ASCA image of the whole remnant (Tsunemi et al. (2000)) is shown in figure 1. We also observed a nearby sky region with
the same Galactic latitude for background determination. Suzaku carries four
X-ray Imaging Spectrometers (XIS;Koyama et al. (2007)) and the non-imaging
Hard X-ray Detector (HXD;Takahashi et al. (2007)). Each XIS consists of a CCD
detector at the focal plane of a dedicated thin-foil X-ray telescope
(XRT; Serlemitsos et al. (2007)) with an 18′$\times$18′ field of view. One of the XIS
detectors (XIS 1) is back-illuminated (BI) and the other three (XIS0, XIS2 and
XIS3) are front-illuminated (FI). The advantage of the former is
significantly superior sensitivity in the 0.2–1.0 keV band with moderate
energy resolution, while the latter have high detection efficiency and a low
background level in the energy band above $\sim$5 keV.
The XIS was operated in the normal full-frame clocking mode without spaced-row
charge injection (SCI) during both observations, and the data format was
3$\times$3 or 5$\times$5. The data were processed and cleaned using version
2.0.6.13 of the standard Suzaku pipeline software. The CALDB versions of the
products are hxd-20070710, xis-20070731, and xrt-20070622. We used HEADAS
software 6.3.2 and XSPEC version 11.3.2 for the data reduction and analysis.
The net exposure times are approximately 175 ksec and 59 ksec for the source
and background observations, respectively. The observations are summarized in
table 1. Since the
characteristics of the three FI sensors are almost identical and well
calibrated to each other, we combined the data from them (hereafter XIS 023)
and show the average spectrum. In this paper, we concentrate on searching for
Sc-K line emission. For that purpose, we present an analysis of only the XIS
023 data. XIS 023 has the largest effective area at 4 keV among current X-ray
observatories, $\sim$900 cm$^{2}$.
## 3 Analysis and Results
Figure 2 shows the X-ray image obtained by XIS 023 in the energy range of
2.0–8.0 keV. Two bright rims, an outer and an inner one, are seen as revealed
by previous observations using XMM-Newton and Chandra (Iyudin et al. (2005);
Bamba et al. (2005)). In the XIS 023 image, they are not distinctly separated.
### 3.1 Spectrum from the entire rim region
We extracted a representative XIS 023 spectrum from an elliptical region
including both outer and inner rims as identified by “rim-all” in figure 3.
This region includes the site where several papers reported the detection of
line emission around 4.1 keV (Tsunemi et al. (2000); Slane et al. (2001);
Bamba et al. (2005); Iyudin et al. (2005)).
The responses of the XRT and XIS were calculated using the “xissimarfgen”
version 2006-11-26 ancillary response file (ARF) generator (Ishisaki et al.
(2007)) and the “xisrmfgen” version 2006-11-26 response matrix file (RMF)
generator. The energy resolution of this data set was $\sim$110 eV (FWHM) at 4
keV (Koyama et al. (2007)). A slight degradation of the energy resolution was
included in the RMF. A decrease of the low-energy transmission of the XIS
optical blocking filter (OBF) was included in the ARF, but this effect has no
influence in the 2.0–8.0 keV energy band being used in this paper. The ARF
response was calculated assuming that photons come from only the rim-all
region (shown in figure 3) with a flat surface brightness profile. We checked
that contamination from source flux outside the rim-all region is negligible,
using the region interior to the rim (shown in figure 3), which is about a
factor of 4 dimmer than the rim-all region.
A background spectrum was extracted from the offset data using the same
detector area as the rim-all region. There is no significant signal beyond the
background level above 8.0 keV. Additionally, in order to avoid uncertainty
about the thermal emission, which could have contributions from both the Vela
SNR and Vela Jr. below 2 keV, we used photons in the energy range of 2.0–8.0
keV for the spectral fitting.
The resultant XIS 023 spectrum is clearly dominated by nonthermal emission,
as shown in the left panel of figure 4. It is well represented by a single
power-law spectrum with interstellar absorption. The derived photon index of
$\Gamma$=2.81$\pm$0.05 and absorbing column density of
$N_{\rm{H}}$=(0.67$\pm$0.11)$\times$10$^{22}$ cm$^{-2}$, with
$\chi^{2}$/d.o.f.=331.4/324, are in good agreement with previous results from
ASCA, XMM-Newton and Chandra (Slane et al. (2001); Iyudin et al. (2005);
Bamba et al. (2005)). There appears to be no significant emission feature
around 4 keV. In order to search for evidence of Sc-K line emission, we
introduced into the model an additional gaussian component at the fixed energy
of 4.09 keV; the absolute energy accuracy of the XIS is within $\pm$5 eV
(Koyama et al. (2007)). However, adding this component does not improve the
fit at all, and gives an upper bound on the line flux of
1.2$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ (90% confidence). Here, the channel bin
width is set to $\sim$15 eV around 4.1 keV. If left as a free parameter, the
line centroid energy is unconstrained; we therefore performed the spectral
fitting with a fixed energy.
### 3.2 Spatially Divided Spectral Analysis
For a more detailed study, we divided the rim-all region into the 12 small
square cells indicated in figure 3. The size of one cell is 2′$\times$2′, which
is comparable to the point spread function (PSF) of the XRT. In analyzing the
spectrum from each cell, a corresponding background spectrum was extracted
from the offset data. The responses of the XRT and XIS were calculated
assuming the same region as for the rim-all analysis (see § 3.1) and
normalized by the ratio between the cell size and the assumed source area
(that of “rim-all” in this case). All spectra were well fitted by a single
power law with interstellar absorption. Figure 4 (right) shows the resultant
spectrum of region ID 9 as a representative spectrum. Here, the channel bin
width is set to $\sim$100 eV around 4.1 keV. No significant excess around 4.1
keV is found in any cell, with upper limits of
$F_{{\rm line}}\leq$(0.15–0.54)$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ depending on
the surface brightness of the cells.
### 3.3 Comparison with Previous Results
Several papers report a line emission feature around 4.1 keV from this sky
region. In order to directly compare our results to previous work, we
extracted spectra from regions corresponding to those used in the previous
papers. All regions are shown in figure 3. The thin solid black square
indicates the ASCA region used by Slane et al. (2001); the small solid white
rectangle represents the Chandra detection region of Bamba et al. (2005). The
solid and dashed thin elliptical regions are defined as the outer and inner rims,
respectively. We performed spectral analysis for all regions mentioned above
using the same procedure as for the other regions (see §3.1 and §3.2). There
appears to be no line emission in any spectrum; each is well represented by a
single power law and interstellar absorption model.
For the “ASCA” region, Tsunemi et al. (2000) suggested a significant excess
around 4.1 keV in the spectrum from the ASCA SIS, which has an 11′$\times$11′
FOV. Using thermal fits, they found that including an additional line-like
feature was significant at the $\sim$99% confidence level according to the
$F$-test. On the other hand, Slane et al. (2001) reported that the spectrum
from one of the two detectors, SIS0, contains a feature at $\sim$4 keV, but
the spectrum from the other, SIS1, for the same region does not. They derived
a 1$\sigma$ upper limit of 4.4$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ for Sc-K
emission from this region. Using the Suzaku XIS data, we performed a spectral
analysis for the “ASCA” region. We found no line-like feature and derive a
90% upper limit of $F_{{\rm line}}\leq$1.1$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$.
Using XMM-Newton data, Iyudin et al. (2005) reported the detection of emission
line features in the 4.2–4.5 keV band from both bright filaments of the NW
rim. These two regions are described as “reg1” and “reg2” in the paper by
Iyudin et al. (2005), but there is no information about their precise
locations. We therefore defined two elliptical regions (the outer and inner
rims shown in figure 3) as the regions corresponding to “reg1” and “reg2” in
the Suzaku data. Spectral analysis for these regions was performed. Again, no
line-like feature was found. The 90% confidence upper limit for the outer rim
(reg1) is $F_{{\rm line}}\leq$5.7$\times$10$^{-7}$ cm$^{-2}$ s$^{-1}$ and the
upper limit for the inner rim (reg2) is
$F_{{\rm line}}\leq$1.1$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ at a fixed X-ray
energy of 4.09 keV. Remarkably, our derived upper limits are about 6 and 2
times lower than the values claimed by Iyudin et al. (2005) for the outer and
inner rims, respectively. We also searched for line emission at the higher
X-ray energies of 4.24 keV for the outer rim and 4.2 keV for the inner rim, as
claimed by Iyudin et al. (2005). No line-like features were found, giving 90%
confidence upper limits of 1.2$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ for the outer
rim and 5.0$\times$10$^{-7}$ cm$^{-2}$ s$^{-1}$ for the inner rim.
Bamba et al. (2005) reported that there is an excess around 4 keV from a tiny
region on the inner filament, indicated as the “Chandra” region in figure 3.
They obtained an acceptable fit by adding a narrow gaussian component to a
power-law model, although the reduced $\chi^{2}$ does not improve
significantly ($\chi^{2}$/d.o.f changes from 76.2/81 to 69.5/79). The derived
flux was 7.3$^{+5.1}_{-4.5}$$\times$10$^{-7}$ cm$^{-2}$ s$^{-1}$. In order to
compare the Chandra result with our evaluation, we employed region ID 9 as the
corresponding region because of the broad PSF of Suzaku. We note that our
resultant upper limit on the line flux,
$F_{{\rm line}}\leq$3.8$\times$10$^{-7}$ cm$^{-2}$ s$^{-1}$, is about half the
value of the derived Chandra flux.
We summarize our derived upper limits on the line flux for all regions in
table 2, where previous results are also listed. Figure 5 presents our Suzaku
values together with the corresponding published claims for direct
comparison. Our resultant upper limits are remarkably low compared with the
fluxes obtained in previous works.
## 4 Discussion and Conclusion
RX J0852.0–4622 is a unique object from which the possible detection of Sc-K
line emission (around 4.1 keV) has been claimed (Tsunemi et al. (2000); Iyudin
et al. (2005); Bamba et al. (2005)). In order to search for evidence of line
emission around 4.1 keV, we performed spectral analysis for various regions in
the NW rim of RX J0852.0–4622 with the Suzaku observatory. No line-like
features were found from any region, including the whole rim region contained
in the Suzaku field of view, small subdivisions of the region, and regions
from which detections have been claimed using ASCA, XMM-Newton and Chandra
observations. The Suzaku line flux upper limits are 2–6 times lower than the
published claims for all regions. In the shell of such an SNR, 44Ti could be
ionized enough to shift the 44Sc-K lines to higher energies; however, the
featureless Suzaku spectrum implies that similar results would be obtained
even at higher energies. We therefore conclude that, to date, no credible
Sc-K X-ray line has been detected in the NW of RX J0852.0–4622. In this study,
we do not consider other potentially confusing lines, such as a thermalized
Sc-K line or ionized Ca-K lines from the thermal plasma of RX J0852.0–4622
and/or the Vela SNR, which overlaps along the line of sight.
Our upper limits presented here employ a 90% confidence interval, determined
using $\chi^{2}$ statistics. For a single parameter of interest, this
corresponds to an increase of $\chi^{2}$ by 2.7 from its minimum value. Such a
confidence limit can be compared directly with the detection claims by
XMM-Newton and Chandra, since they employed 90% confidence intervals for their
“detection” values, as is common for statistical error assessment in spectral
fitting. In light of the possibly controversial difference between our result
and the others, we also examined the implications of using a more conservative
confidence interval of 99.7% (an equivalent gaussian width of 3$\sigma$).
According to $\chi^{2}$ statistics, a 3$\sigma$ confidence interval
corresponds to a $\Delta\chi^{2}$ of 9.0. The 3$\sigma$ flux upper limit was
derived to be 2.43$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ for the “rim-all”
region. We also inferred the $\Delta\chi^{2}$ at zero flux for the published
detections, using the best-fit flux (at which $\chi^{2}$ is minimum) and its
90% error described in Iyudin et al. (2005) and Bamba et al. (2005) (see reg1,
reg2 and Chandra in table 2). For this comparison, the $\chi^{2}$ distribution
near the minimum is assumed to be parabolic. The resultant $\Delta\chi^{2}$
values were 5.0, 6.9 and 7.1 for the “reg1”, “reg2” and “Chandra” regions,
respectively. Thus all of the “detection” reports show a low $\Delta\chi^{2}$
at zero flux, indicating less than 3$\sigma$ significance in terms of
$\chi^{2}$ statistics.
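To make the procedure concrete, the following minimal C sketch (all names and the parabolic shape of $\chi^{2}$ versus flux are our own illustrative assumptions, not output of the actual spectral fits) scans the line normalization upward from zero until $\chi^{2}$ exceeds its minimum by the chosen $\Delta\chi^{2}$ threshold (2.7 for 90%, 9.0 for 3$\sigma$); the crossing point is the flux upper limit.

```c
#include <stdio.h>

/* Stand-in for a spectral fit: chi-square as a function of the line
 * normalization (flux).  A parabola around a best-fit flux of zero is
 * assumed here, as in the comparison made in the text. */
static double chi2_of_flux(double flux, double chi2_min, double curvature)
{
    return chi2_min + curvature * flux * flux;
}

/* Increase the flux from 0 until chi-square exceeds chi2_min + delta;
 * the crossing point is the upper limit for that confidence level. */
static double flux_upper_limit(double delta, double chi2_min,
                               double curvature, double step)
{
    double flux = 0.0;
    while (chi2_of_flux(flux, chi2_min, curvature) - chi2_min < delta)
        flux += step;
    return flux;
}

int main(void)
{
    const double chi2_min  = 331.4;    /* e.g. the rim-all best fit          */
    const double curvature = 1.9e12;   /* assumed; sets the scale of the limit */
    const double step      = 1.0e-9;   /* flux step in ph cm^-2 s^-1          */

    printf("90%% (dchi2=2.7) upper limit: %.2e\n",
           flux_upper_limit(2.7, chi2_min, curvature, step));
    printf("3-sigma (dchi2=9.0) upper limit: %.2e\n",
           flux_upper_limit(9.0, chi2_min, curvature, step));
    return 0;
}
```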
We investigate the consistency with the observed gamma-ray flux, keeping in
mind that the gamma-ray detection by COMPTEL is now considered marginal
(Iyudin et al. (1998); Schönfelder et al. (2000)). The expected X-ray line
flux, $F_{X}$, can be estimated from the gamma-ray flux ($F_{\gamma}$) using
the expression:

$F_{X}=\frac{F_{\gamma}g_{X}I_{X}f_{X}}{I_{\gamma}f_{\gamma}}$ (1)

where $g_{X}$ is the K-shell electron capture fraction among the total number
of decays, $I_{X}$ is the fluorescence yield of K$\alpha$ X-ray emission,
$I_{\gamma}$ is the absolute intensity of the flux per decay of the parent
nucleus, and $f_{X}$ and $f_{\gamma}$ are the escape fractions of the X-ray
and $\gamma$-ray photons, respectively, both of which are effectively equal to
1 for RX J0852.0–4622. Applying $g_{X}\sim$8/9, $I_{\gamma}$(1157 keV)=0.999
and $I_{X}$(Sc-K$\alpha$)$=$0.17, we can roughly estimate the expected $F_{X}$
from $F_{\gamma}$. The decay of $F_{\gamma}$ over the roughly 10-year interval
between the gamma-ray and X-ray observations was taken into account. The flux
is estimated to be 5.1$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ in the case of
$F_{\gamma}=$(3.8$\pm$0.7)$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ (Iyudin et al.
(1998)), indicated by the dashed line in figure 5, and
2.4$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ in the case of the lowest significance
of the gamma-ray source, with
$F_{\gamma}=$(1.8$\pm$0.8)$\times$10$^{-6}$ cm$^{-2}$ s$^{-1}$ (see fig. 4 in
Schönfelder et al. (2000)), shown as the dotted-dashed line in figure 5. The
upper limits given by the Suzaku observation are lower than even the lower
value of the predicted X-ray flux. Our Suzaku result of no detection of Sc-K line
emission is based only on the NW rim observation. According to model
calculations of nucleosynthesis in supernova explosions, 44Ti is produced
close to the so-called “mass cut”, the innermost radius of the ejected matter.
It is possible that the line emission could appear from the interior of the
remnant, and further observations and study are required.
Katsuda et al. (2008) recently reported a slow X-ray expansion rate for this
remnant of 0.23% $\pm$ 0.006, based on a proper motion study using XMM-Newton
observations taken over a span of 6.5 yr. They estimated a remnant age of
1700–4000 yr, depending on its evolutionary stage. Furthermore, they estimated
a distance of $\sim$750 pc, assuming a high shock velocity of $\sim$3000 km
s$^{-1}$. This result raises serious doubts about the argument that RX
J0852.0–4622 is a young ($\sim$680 yr) and nearby ($\sim$200 pc) supernova
remnant (Aschenbach (1998)). Slane et al. (2001) also suggested a larger
distance of 1–2 kpc, based on a larger column density for this remnant than
for the Vela SNR in their ASCA analysis. At this larger distance and with an
older remnant age, our non-detection of Sc-K line emission becomes
understandable.
## References
* Aschenbach (1998) Aschenbach, B. 1998, Nature, 396, 141
* Bamba et al. (2005) Bamba, A., Yamazaki, R., & Hiraga, J. S. 2005, ApJ, 632, 294
* Ishisaki et al. (2007) Ishisaki, Y. et al. 2007, PASJ, 59, S113
* Diehl and Timmers (1998) Diehl, R., and Timmers, F. X. 1998, PASP, 110, 637
* Iyudin et al. (1994) Iyudin, A. F., Diehl, R., Bloemen, H., et al. 1994, A&A, 284, L1
* Iyudin et al. (1998) Iyudin, A. F., Schönfelder, V., Bennet, K., et al. 1998, Nature, 396, 142
* Iyudin et al. (2005) Iyudin, A. F., Aschenbach, B., Becker, W., Dennerl, K., & Haberl, F. 2005, A&A, 429, 225
* Katsuda et al. (2008) Katsuda, S., Tsunemi, H., Mori, K., 2008, ApJ, 678, L35
* Koyama et al. (2007) Koyama, K., et al. 2007, PASJ, 59, S23
* Leising et al. (2001) Leising, M. D. 2001, ApJ, 563, 185
* Mitsuda et al. (2007) Mitsuda, K., et al. 2007, PASJ, 59, S1
* Renaud et al. (2006) Renaud, M. et al. 2006, ApJ, 647, L41
* Schönfelder et al. (2000) Schönfelder, V., Bloemen,H., Collmar, W., et al. 2000, AIP Conf. Proc. AIP, 510, 54
* Serlemitsos et al. (2007) Serlemitsos, P.J., et al. 2007, PASJ, 59, S9
* Slane et al. (2001) Slane, P., Hughes, J. P., Tsunemi, H., Miyata, E., Aschenbach, B., et al. 2001, ApJ, 548, 814
* Tsunemi et al. (2000) Tsunemi, H., Miyata, E., Aschenbach, B., Hiraga, J. S., & Akutsu, D. 2000, PASJ, 52, 887
* Vink et al. (2001) Vink, J., Laming, J. M., Kaastra, J. S., Bleeker, J. A. M., Bloemen, H., & Oberlack, U. 2001, ApJ, 560, L79
* Takahashi et al. (2007) Takahashi, T., et al. 2007, PASJ, 59, S35
* von Keinlin et al. (2004) von Keinlin, A., Atte, D., Schanne, S., Coordier, B., Diehl, R., Iyudin, A. F., Lichti, G.G., Roques, H.-P., Schonfelder, V., and Strong, A. 2004, Proc. of the 5th _INTEGRAL_ Science Workshop, ESA SP-552
Figure 1: ASCA observation of RX J0852.0–4622. The Suzaku FOV is overlaid as a
solid black square.
Figure 2: Suzaku XIS0 image of the RX J0852.0–4622 NW rim. 2–8 keV photons are
used, excluding the calibration sources at the two corners.
Figure 3: The various regions from which we extracted spectra, shown on the
gray-scale Suzaku image (the same as figure 2). The labels of the respective
regions are described in the text.
Figure 4: Suzaku spectrum from the rim-all region of RX J0852.0–4622 (left
panel). The right panel shows the spectrum from a representative small square
region (region ID=9). No line emission is detected around 4 keV.
Figure 5: Direct comparison of line fluxes around 4.1 keV between the Suzaku XIS 90% upper limits (black filled squares) and previous claims of line detections (white squares). The X-ray fluxes estimated from the gamma-ray fluxes reported by Iyudin et al. (1998) and Schönfelder et al. (2000) are shown as black filled circles.
Table 1: Suzaku observation log of the NW rim of RX J0852.0–4622 and its background region
Target | Observation ID | Coordinates (RA, Dec) | Observation start (UT) | Exposure (XIS) (ksec)
---|---|---|---|---
RX J0852–4622 NW | 500010010 | 08h48m58s.01, −45°39′02″.9 | 2005-12-19 | 175
RX J0852–4622 NW offset | 500010020 | 09h00m17s.45, −47°56′39″.1 | 2005-12-23 | 59
Table 2: Comparison of detections and upper limits of Sc-K line emission.
Region ID | Area (arcmin$^{2}$) | Observatory | Flux (ph cm$^{-2}$ s$^{-1}$) | Energy (keV) | Reference
---|---|---|---|---|---
rim-all | 91 | Suzaku | $<$1.2$\times$10$^{-6}$ $^{*}$ | 4.09 (fixed) | this work
ASCA | 11′$\times$11′ | ASCA | $<$4.4$\times$10$^{-6}$ $^{\dagger}$ | $\sim$4 | Slane et al. (2001)
 | | Suzaku | $<$1.1$\times$10$^{-6}$ $^{*}$ | 4.09 (fixed) | this work
outer-rim (reg1) | 31 | XMM-Newton | 3.8 (1.0–7.4)$\times$10$^{-6}$ $^{\ddagger}$ | 4.24 (4.1–4.42) | Iyudin et al. (2005)
 | | Suzaku | $<$5.7$\times$10$^{-7}$ $^{*}$ | 4.09 (fixed) | this work
 | | Suzaku | $<$1.0$\times$10$^{-6}$ $^{*}$ | 4.24 (fixed) | this work
inner-rim (reg2) | 17 | XMM-Newton | 2.4 (0.9–5.1)$\times$10$^{-6}$ $^{\ddagger}$ | 4.2 (4.0–4.2) | Iyudin et al. (2005)
 | | Suzaku | $<$1.1$\times$10$^{-6}$ $^{*}$ | 4.09 (fixed) | this work
 | | Suzaku | $<$5.0$\times$10$^{-7}$ $^{*}$ | 4.2 (fixed) | this work
Chandra | 0.34 | Chandra | 7.3 (2.8–12.4)$\times$10$^{-7}$ $^{\ddagger}$ | 4.11 (3.9–4.42) | Bamba et al. (2005)
 | 4 | Suzaku (ID 9) | $<$3.8$\times$10$^{-7}$ $^{*}$ | 4.09 (fixed) | this work
$^{*}$ 90% confidence upper limit. $^{\dagger}$ 1$\sigma$ confidence upper limit. $^{\ddagger}$ 90% confidence error.
# Solving $k$-Nearest Neighbor Problem on Multiple Graphics Processors
Kimikazu Kato and Tikara Hosino Nihon Unisys, Ltd.
###### Abstract
A recommendation system is a software system to predict customers’ unknown
preferences from known preferences. In a recommendation system, customers’
preferences are encoded into vectors, and finding the nearest vectors to each
vector is an essential part. This vector-searching part of the problem is
called a $k$-nearest neighbor problem. We give an effective algorithm to solve
this problem on multiple graphics processor units (GPUs).
Our algorithm consists of two parts: an $N$-body problem and a partial sort.
For the $N$-body part, we applied the idea of a known algorithm for the
$N$-body problem in physics, although an additional trick is needed to
overcome the small size of shared memory. For the partial sort, we give a
novel GPU algorithm which is effective for small $k$. In our partial sort
algorithm, a heap is accessed in parallel by threads with a low cost of
synchronization. Both parts of our algorithm make maximal use of coalesced
memory access, so that full bandwidth is achieved.
Through experiments, we show that when the size of the problem is large, an
implementation of the algorithm on two GPUs runs more than 330 times faster
than a single-core implementation on a recent CPU. We also show that our
algorithm scales well with respect to the number of GPUs.
## I Introduction
A recommendation system is a software system which utilizes known customers'
preferences to predict unknown preferences. It is widely used by Internet-
based retail shops and other service providers, such as Amazon.com [2]. In a
recommendation system, customers' preferences or buying patterns for items are
encoded into vectors, and finding the nearest vectors is an essential part of
its computation. This vector-finding part is called a $k$-nearest neighbor
problem. We give an effective GPU algorithm to solve this problem.
Generally a recommendation system deals with large sample sizes and large
dimensions, for example persons$\times$items. In such a case, dimensionality
reduction methods such as singular value decomposition or latent Dirichlet
allocation have been widely used [4, 5, 6]. As a result of the reduction, the
problem becomes a $k$-nearest neighbor search in a moderate dimension.
However, the cost still grows as $O(n^{2})$ in the sample size $n$, which is a
computational burden, and therefore some approximation has been considered
necessary [7]. This paper shows that exact computation in practical time is
possible. Our target size for $n$ is $\sim{}10^{6}$ to $\sim{}10^{8}$, while
the dimension after reduction is $\sim{}10^{2}$ to $\sim{}10^{3}$.
The $k$-nearest neighbor problem is defined as follows: given a set of vectors
$v_{1}\ldots v_{n}\in\mathbb{R}^{d}$, a distance function $\delta$ and an
integer $k$, find the $k$ nearest vectors to each $v_{i}$. We propose an
effective and scalable algorithm to solve it on multiple Graphics Processor
Units (GPUs). Our algorithm is implemented in CUDA [1], which is an extension
of the C language provided by NVIDIA.
A GPU is a powerful commodity processor. Although GPUs were originally
designed for graphics processing, the GPGPU (General Purpose computing on GPU)
movement has arisen as an expected breakthrough for large-scale numerical
computation. The typical characteristic of GPGPU is massive parallelism. A GPU
has hundreds of cores, and to extract its power it is necessary to run tens of
thousands of threads per unit. Because of this property, a GPU consumes
considerable power as a unit, but it is energy efficient per FLOPS.
The algorithm for the $k$-nearest neighbor problem is fundamentally a
combination of an $N$-body problem and partial sorting. Nyland et al. [8]
showed an effective algorithm for the $N$-body problem on CUDA. Because we
deal with high-dimensional vectors, we add a trick to the known $N$-body
algorithm. For sorting, [9] showed an effective algorithm, but we employ a
different one because we have to sort many arrays at once and we only need the
top $k$ elements, not fully sorted data.
Garcia et al. [garcia08] showed a GPU algorithm to compute the $k$-nearest
neighbor problem with respect to the Kullback-Leibler divergence. Their
algorithm mainly uses texture memory, which in effect works as a cache. Its
performance largely depends on the cache-hit ratio, and for large data it is
likely that cache misses occur frequently. On the other hand, our algorithm
makes maximal use of coalesced memory access, so no such loss from cache
misses occurs. Moreover, our algorithm is effective even for a symmetric
distance function and for multiple GPUs.
The rest of this paper is organized as follows. In Sect. II, the outline of
CUDA's programming model is explained. In Sect. III, we define the problem
formally. We give an overview of the algorithm in Sect. IV. In Sects. V and
VI, we explain the details of each step of the algorithm. In Sect. VII, we
show the results of our experiment. We conclude in Sect. VIII.
## II Programming model of CUDA
In this section, the programming model of CUDA is briefly explained. For more
details of CUDA, refer to [10].
### Thread model.
NVIDIA's recent graphics processors contain hundreds of stream processors
(SPs). An SP is like a core in a CPU; the SPs can compute simultaneously. For
example, the GTX280 has 240 SPs. With so many SPs and a very low cost of
context switching, a GPU performs well with tens of thousands of threads.
Threads are divided into thread blocks. Each thread block can contain at most
1024 threads. A function to synchronize threads within a block is provided,
while there is no such function to synchronize thread blocks. The only way to
synchronize thread blocks is to return control to the CPU.
### Hierarchical memories.
Before running a kernel on the GPU, the CPU must explicitly copy data to the
GPU's memory. The memory on the GPU used to share data with the CPU is called
global memory. A thread block is also a unit of data sharing. Each thread
block has a memory shared only within the thread block, called shared memory.
Access to global memory is relatively slow, and copying the necessary data to
shared memory is usually better for performance. Although global memory has
some gigabytes, shared memory has only 16KB per thread block. Each thread also
has local storage in the form of registers. Access to a register is fast, but
its size is also limited.
### Coalesced memory access.
In CUDA, if, for example, 16 successive threads access 128 successive bytes of
global memory at the same time, the memory access is coalesced. When a memory
access is coalesced, it is done in only one fetch, whereas otherwise the
accesses by the 16 threads take 16 fetches. Hence, effective utilization of
coalesced memory access greatly affects the total performance of an
application. The detailed conditions under which memory accesses can be
coalesced are explained in [10].
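As a minimal sketch of the two access patterns (kernel names and sizes are ours, not taken from the CUDA documentation), the first kernel below lets thread $i$ read word $i$, so a half-warp touches consecutive words and the access can be coalesced into a single fetch, while the second forces scattered reads:

```cuda
#include <cuda_runtime.h>

// Coalesced: thread i reads element i, so a half-warp touches consecutive words.
__global__ void read_coalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided: neighbouring threads touch words far apart, so each access needs its own fetch.
__global__ void read_strided(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[(i * stride) % n];
}

int main(void)
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    read_coalesced<<<(n + 255) / 256, 256>>>(in, out, n);
    read_strided  <<<(n + 255) / 256, 256>>>(in, out, n, 17);
    cudaDeviceSynchronize();
    cudaFree(in); cudaFree(out);
    return 0;
}
```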
## III Description of the problem
The $k$-nearest neighbor problem is described as follows.
> Suppose that a set of vectors $v_{1},\cdots,v_{n}\in\mathbb{R}^{d}$ and a
> distance function $\delta:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}$
> are given. Then output the $k$ nearest vectors to each $v_{i}$.
>
> In other words, for each $i$, find a subset of indices
> $\\{j_{i1},\ldots,j_{ik}\\}\subset\\{1,2,\ldots,n\\}$ such that
>
>
> $\delta(v_{i},v_{j_{i1}})\leq\delta(v_{i},v_{j_{i2}})\leq\cdots\leq\delta(v_{i},v_{j_{ik}})$
>
> and
>
> $\delta(v_{i},v_{j_{ik}})\leq\delta(v_{i},v_{j})\text{ for all
> }j\not\in\left\\{j_{i1},\ldots,j_{ik}\right\\}$
The distance function $\delta$ is arbitrary. Although we use the word
“distance”, it does not necessarily need to satisfy the axioms of a distance.
We assume that $\delta$ is cumulatively computable, meaning that $\delta$ can
be computed step by step by referring to the coordinate values one at a time.
In other words, it is computed with a function
$\bar{\delta}:\mathbb{R}\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ and
some initial value $a_{1}$ by $a_{i+1}=\bar{\delta}(u^{(i)},v^{(i)},a_{i})$
for $i=1,\dots,d$, with $\delta(u,v)=a_{d+1}$.
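For concreteness, here is a minimal C sketch of a cumulatively computable distance, taking the squared Euclidean distance as the example (the function names are ours): the accumulator is updated one coordinate at a time through $\bar{\delta}$, exactly as in the definition above.

```c
#include <stdio.h>

/* One accumulation step: delta_bar(u_coord, v_coord, a). */
static double delta_bar(double u, double v, double a)
{
    double diff = u - v;
    return a + diff * diff;          /* squared Euclidean contribution */
}

/* delta(u, v) obtained by folding delta_bar over the d coordinates. */
static double delta(const double *u, const double *v, int d)
{
    double a = 0.0;                  /* initial value a_1 */
    for (int i = 0; i < d; ++i)
        a = delta_bar(u[i], v[i], a);
    return a;
}

int main(void)
{
    double u[3] = {1.0, 2.0, 3.0}, v[3] = {0.0, 2.0, 5.0};
    printf("delta = %g\n", delta(u, v, 3));   /* 1 + 0 + 4 = 5 */
    return 0;
}
```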
In this paper, we only discuss the case where $\delta$ is symmetric, i.e.
$\delta(u,v)=\delta(v,u)$. In the symmetric case we can omit half of the
distance calculations, and consequently balancing the workload becomes more
difficult. The algorithm explained in this paper is easily modified for a
non-symmetric distance function.
## IV Overview of the algorithm
Since we have assumed that $\delta$ is symmetric, we only compute
$\delta(v_{x},v_{y})$ for $x>y$. For the explanation, we depict the whole
problem as a square where the point $(x,y)$ stands for the computation of
$\delta(v_{x},v_{y})$. The distances to compute are represented by the
upper-right triangle of the square.
Because of the limitation on the number of threads which can be run at once on
a GPU, the problem is divided into first-level blocks. We call each of them a
grid (Fig. 1). Each grid is processed on a GPU at once. A grid can be divided
row-wise into blocks, each of which is computed in a thread block. We denote
the size of each side of a grid by $\mathtt{GSIZE}$. This means the grid
$(X,Y)$ stands for the region $\mathtt{GSIZE}\cdot X\leq
x<\mathtt{GSIZE}\cdot(X+1),\;\mathtt{GSIZE}\cdot Y\leq
y<\mathtt{GSIZE}\cdot(Y+1)$. Similarly, we denote the size of a block (i.e.
the number of rows in a block) by $\mathtt{BSIZE}$ (Fig. 2). $\mathtt{GSIZE}$
is determined depending on $n$ so that the problem can be divided effectively,
while $\mathtt{BSIZE}$ is fixed according to the capabilities of CUDA.
Figure 1: First-level division of the problem
Figure 2: Division of a grid
To balance the workload, we assign GPUs as in Fig. 3. In other words, the
$i$-th row of grids is assigned to the $j$-th GPU when $i\\!\mod
2\cdot\mathtt{nDevices}=j$ or $i\\!\mod
2\cdot\mathtt{nDevices}=2\cdot\mathtt{nDevices}-j-1$, where
$\mathtt{nDevices}$ is the number of GPUs available. Note that although it is
enough to compute the upper-right part of the problem, each GPU virtually
computes the mirror side of its assigned part (see also Fig. 4); a sketch of
this assignment rule is given below.
Figure 3: Assignment of rows of grids to GPUs
Figure 4: Heaps for GPUs
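A minimal C sketch of this assignment rule (identifiers are ours): rows of grids are dealt out to the $\mathtt{nDevices}$ GPUs in a forward-then-backward order, which is what balances the triangular workload.

```c
#include <stdio.h>

/* Returns 1 if the Y-th row of grids is assigned to GPU j,
 * following the zig-zag rule described in the text. */
static int assigned_to(int Y, int j, int nDevices)
{
    int r = Y % (2 * nDevices);
    return r == j || r == 2 * nDevices - j - 1;
}

int main(void)
{
    const int nGrids = 8, nDevices = 2;
    for (int j = 0; j < nDevices; ++j) {
        printf("GPU %d gets rows:", j);
        for (int Y = 0; Y < nGrids; ++Y)
            if (assigned_to(Y, j, nDevices)) printf(" %d", Y);
        printf("\n");
    }
    return 0;
}
```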
To keep the $k$ nearest vectors, we use a heap structure. The heap has at most
$k$ elements and is ordered with the largest element at the root, so that the
current $k$-th smallest element can be found in $O(1)$. Moreover, each GPU
keeps its own heaps to avoid costly synchronization (Fig. 4): each GPU has $n$
heaps, which store the $k$ nearest elements it has computed. In the last
phase, the heaps of the different GPUs are merged on the CPU. A minimal sketch
of such a bounded heap is shown below.
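The following minimal C sketch (names are ours) illustrates such a bounded heap for a single row: it keeps at most $k$ distances with the largest at the root, so the current $k$-th smallest can be inspected in $O(1)$, and a new candidate replaces the root only when it is smaller.

```c
#include <stdio.h>

#define K 4

typedef struct { int size; double d[K]; } Heap;   /* max-heap of at most K distances */

static void sift_down(Heap *h, int i)
{
    for (;;) {
        int l = 2 * i + 1, r = l + 1, m = i;
        if (l < h->size && h->d[l] > h->d[m]) m = l;
        if (r < h->size && h->d[r] > h->d[m]) m = r;
        if (m == i) return;
        double t = h->d[i]; h->d[i] = h->d[m]; h->d[m] = t;
        i = m;
    }
}

/* Keep the K smallest values seen so far. */
static void push(Heap *h, double dist)
{
    if (h->size < K) {                       /* heap not full: insert and sift up */
        int i = h->size++;
        h->d[i] = dist;
        while (i > 0 && h->d[(i - 1) / 2] < h->d[i]) {
            double t = h->d[i]; h->d[i] = h->d[(i - 1) / 2]; h->d[(i - 1) / 2] = t;
            i = (i - 1) / 2;
        }
    } else if (dist < h->d[0]) {             /* beats the current K-th smallest */
        h->d[0] = dist;
        sift_down(h, 0);
    }
}

int main(void)
{
    Heap h = {0};
    double ds[] = {5, 1, 7, 3, 2, 9, 0.5};
    for (int i = 0; i < 7; ++i) push(&h, ds[i]);
    for (int i = 0; i < h.size; ++i) printf("%g ", h.d[i]);  /* the 4 smallest, heap order */
    printf("\n");
    return 0;
}
```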
The outline of the algorithm is shown in Fig. 5. The calculation of the
distances is explained in Sect. V, and how the distances are pushed onto the
heaps is described in Sect. VI.
procedure ThreadMain($n$,$d$,$\left\\{v_{i}\right\\}$)
---
| $\mathtt{nGrids}\leftarrow\lfloor(n-1)/\mathtt{GSIZE}\rfloor+1$
| Prepare the heaps $\left\\{h_{i}\right\\}_{i=0}^{n-1}$
| for $Y:=0$ to $\mathtt{nGrids}-1$ do
| | for $X:=0$ to $\mathtt{nGrids}$-1 do
| | | if $Y\\!\mod 2\cdot\mathtt{nDevices}=tid$
| | | | | or $Y\\!\mod 2\cdot\mathtt{nDevices}$
| | | | | $=2\cdot\mathtt{nDevices}-tid-1$ then
| | | | Calculate the distances for the grid $(X,Y)$
| | | | Push the $i$-th row of distances
| | | | | | to $h_{i}$ for the grids $(X,Y)$ and $(Y,X)$
| | | end if
| | end for
| end for
end procedure
Figure 5: Overall algorithm: each GPU is assigned to a CPU thread, whose
thread id is given by $tid$
## V Phase 1: calculation of distances
Basically, the framework of the process to compute the distances between
vectors is the same as the algorithm for the $N$-body problem described in
[8]. A grid is divided row-wise into blocks, and each block is assigned to a
thread block. Each thread corresponds to a row. A block first copies a fixed
number (which we denote by $\mathtt{C1}$) of columns to the shared memory and
then computes the distances.
However, in our problem, since the dimension $d$ is large, it is not possible
to copy all the coordinate data to the shared memory even for a small
$\mathtt{C1}$. Hence, a thread iteratively reads a fixed number $\mathtt{C2}$
of coordinate values of the corresponding vectors. In other words, if $v_{i}$
is expressed as $(v_{i}^{(0)},\ldots,v_{i}^{(d-1)})$, then
$v_{i}^{(j\cdot\mathtt{C2})},\ldots,v_{i}^{((j+1)\cdot\mathtt{C2}-1)}$ are
read in the $j$-th iteration (Fig. 6). If a vector is expressed by
single-precision numbers, $\mathtt{C2}$ must be a multiple of 32 to utilize
the full power of coalesced memory accesses.
Figure 6: Illustration of the algorithm to compute the distances of
$d$-dimensional vectors
The algorithm to calculate the distances for a given grid is shown in Fig. 7.
Here, the arguments $n_{1}$, $\left\\{v_{1i}\right\\}_{i=0}^{n_{1}-1}$,
$n_{2}$, and $\left\\{v_{2i}\right\\}_{i=0}^{n_{2}-1}$ are given as re-indexed
$\left\\{v_{i}\right\\}$ so that this procedure can calculate the distances
for the assigned grid. The index of the block is expressed by $bid$, and each
block has $\mathtt{BSIZE}\times\mathtt{C1}$ threads. Each thread is indexed by
$(tx,ty)$.
procedure CalcDistances($d$,$n_{1}$,$\left\\{v_{1i}\right\\}_{i=0}^{n_{1}-1}$,
$n_{2}$, $\left\\{v_{2i}\right\\}_{i=0}^{n_{2}-1}$)
---
| $bx\leftarrow 0$
| Prepare the shared memory to store the distances
| while $bx\cdot\mathtt{C1}<n_{1}$ do
| | $l\leftarrow 0$
| | while $l<d$
| | | Copy $v_{1i}^{(k)},v_{2j}^{(k)}$
| | | | ($bx\leq i<bx+\mathtt{C2}$,
| | | | $bid\cdot\mathtt{BSIZE}\leq j<(bid+1)\cdot\mathtt{BSIZE}$,
| | | | $l\leq k<l+\mathtt{C2}$) to the shared memory
| | | Calculate cumulatively all the combinations of
| | | | $v_{1i}$ and $v_{2i}$ which are in the shared memory
| | | | and store it in a local register $dist$
| | | $l\leftarrow l+\mathtt{C2}$
| | end while
| | Store the resulting distance $dist$ in the global memory
| | $bx\leftarrow bx+\mathtt{C1}$
| end while
end procedure
Figure 7: Algorithm for the calculation of distances: for simplicity, it is
assumed that $n_{1}$ is a multiple of $\mathtt{C1}$ and $d$ is a multiple of
$\mathtt{C2}$
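Below is a minimal CUDA sketch of this phase under simplifying assumptions (squared Euclidean distance, one-dimensional thread blocks of $\mathtt{BSIZE}$ threads instead of the $\mathtt{BSIZE}\times\mathtt{C1}$ layout of Fig. 7, $n_{1}$ and $n_{2}$ multiples of $\mathtt{C1}$ and $\mathtt{BSIZE}$, $d$ a multiple of $\mathtt{C2}$, and no tuning of the reads from the row vectors for coalescing; all identifiers are ours). Each block loads $\mathtt{C2}$ coordinates of $\mathtt{C1}$ column vectors into shared memory, every thread accumulates its own row against them, and the tile then advances along the dimension.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

#define BSIZE 64   /* rows (threads) per block    */
#define C1    16   /* column vectors per tile     */
#define C2    32   /* coordinates loaded per step */

/* dist[row * n1 + col] = squared Euclidean distance between v2[row] and v1[col].
 * v[i * d + k] is coordinate k of vector i (row-major layout, an assumption of this sketch). */
__global__ void calc_distances(const float *v1, int n1,
                               const float *v2, int n2,
                               int d, float *dist)
{
    __shared__ float tile[C1][C2];              /* C2 coordinates of C1 column vectors */
    int row = blockIdx.x * BSIZE + threadIdx.x; /* one row vector per thread */

    for (int c0 = 0; c0 < n1; c0 += C1) {       /* loop over column tiles */
        float acc[C1];                          /* local-scope array, cf. Sect. VIII */
        for (int j = 0; j < C1; ++j) acc[j] = 0.0f;

        for (int k0 = 0; k0 < d; k0 += C2) {    /* loop over coordinate chunks */
            /* cooperative load of the chunk into shared memory */
            for (int t = threadIdx.x; t < C1 * C2; t += BSIZE)
                tile[t / C2][t % C2] = v1[(c0 + t / C2) * d + k0 + t % C2];
            __syncthreads();

            if (row < n2)
                for (int j = 0; j < C1; ++j)
                    for (int k = 0; k < C2; ++k) {
                        float diff = v2[row * d + k0 + k] - tile[j][k];
                        acc[j] += diff * diff;  /* cumulative update, cf. delta_bar */
                    }
            __syncthreads();
        }
        if (row < n2)
            for (int j = 0; j < C1; ++j)
                dist[row * n1 + (c0 + j)] = acc[j];
    }
}

int main(void)
{
    const int n1 = 256, n2 = 256, d = 128;
    float *v1, *v2, *dist;
    cudaMalloc(&v1, n1 * d * sizeof(float));
    cudaMalloc(&v2, n2 * d * sizeof(float));
    cudaMalloc(&dist, (size_t)n2 * n1 * sizeof(float));
    cudaMemset(v1, 0, n1 * d * sizeof(float));
    cudaMemset(v2, 0, n2 * d * sizeof(float));
    calc_distances<<<(n2 + BSIZE - 1) / BSIZE, BSIZE>>>(v1, n1, v2, n2, d, dist);
    cudaDeviceSynchronize();
    printf("done\n");
    cudaFree(v1); cudaFree(v2); cudaFree(dist);
    return 0;
}
```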
## VI Phase 2: taking $k$ smallest elements
In the second phase, each thread block is assigned to one row. The smallest
$k$ distances are computed by parallel processing of the threads in the block.
If the number of threads in a block is denoted by $\mathtt{nThreads}$, each
thread reads distances with a stride of $\mathtt{nThreads}$, so that memory
access is coalesced. A thread checks whether an element is smaller than the
current $k$-th smallest element (the top of the heap), and stores it in a
local buffer if so. Because $k$ is much smaller than $n$, it is likely that
only a few elements are stored in the local buffer. Because of this mechanism,
the waiting time is short even though the threads must be synchronized when
pushing to the heap.
The algorithm is shown in Fig. 8. Here, the indices of the block and thread
are denoted by $bid$ and $tid$ respectively, and $\mathtt{buffer}$ is a
thread-local array of size $\mathtt{bufsize}$.
procedure KSmallest ($k$,$n$,$m$,$\\{h_{i}\\}$: heaps,
---
$\\{a_{ij}\\}_{0\leq i<m-1,0\leq j<n-1}$)
| for $i:=tid$ to $n-1$ step $\mathtt{nThreads}\cdot\mathtt{bufsize}$ do
| | for $j:=0$ to $\mathtt{nThreads}\cdot\mathtt{bufsize}$ step $\mathtt{bufsize}$ do
| | | for $l:=0$ to $\mathtt{bufsize}$ do
| | | | $\nu\leftarrow i\cdot\mathtt{nthreads}\cdot\mathtt{bufsize}+j+l$
| | | | if $a_{bid,\nu}$ is smaller than top of the heap $h_{bid}$ then
| | | | | Store $a_{bid,\nu}$ to $\mathtt{buffer}$
| | | | end if
| | | end for
| | | Push elements of $\mathtt{buffer}$
| | | | | to $h_{bid}$ (blocking other threads)
| | end for
| end for
end procedure
Figure 8: Algorithm to get the $k$ smallest numbers from multiple arrays
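A minimal CUDA sketch of this phase follows (identifiers are ours, and the per-thread candidate buffer lives in local memory, as noted in Sect. VIII): each thread reads the row with a stride of $\mathtt{NTHREADS}$, keeps candidates that beat the current $k$-th smallest in its buffer, and the buffered candidates are then pushed into the shared heap one thread at a time, which plays the role of “blocking other threads” in Fig. 8.

```cuda
#include <cuda_runtime.h>

#define NTHREADS 128
#define K        100
#define BUFSIZE  8

/* Shared-memory max-heap of at most K elements (values only; indices omitted). */
__device__ void heap_push(float *heap, int *size, float v)
{
    if (*size < K) {
        int i = (*size)++;
        heap[i] = v;
        while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
            float t = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = t;
            i = (i - 1) / 2;
        }
    } else if (v < heap[0]) {
        heap[0] = v;
        int i = 0;
        for (;;) {
            int l = 2 * i + 1, r = l + 1, m = i;
            if (l < K && heap[l] > heap[m]) m = l;
            if (r < K && heap[r] > heap[m]) m = r;
            if (m == i) break;
            float t = heap[i]; heap[i] = heap[m]; heap[m] = t;
            i = m;
        }
    }
}

/* One block per row: find the K smallest of the n distances in dist[bid*n .. ]. */
__global__ void k_smallest(const float *dist, int n, float *out)
{
    __shared__ float heap[K];
    __shared__ int   heap_size;
    float buffer[BUFSIZE];                      /* thread-local buffer */

    if (threadIdx.x == 0) heap_size = 0;
    __syncthreads();

    const float *row = dist + (size_t)blockIdx.x * n;
    for (int base = 0; base < n; base += NTHREADS * BUFSIZE) {
        int nbuf = 0;
        /* strided (coalesced) reads; keep only candidates below the current k-th smallest */
        for (int l = 0; l < BUFSIZE; ++l) {
            int idx = base + l * NTHREADS + threadIdx.x;
            if (idx < n) {
                float v = row[idx];
                if (heap_size < K || v < heap[0]) buffer[nbuf++] = v;
            }
        }
        /* push buffered candidates one thread at a time ("blocking other threads") */
        for (int t = 0; t < NTHREADS; ++t) {
            if (threadIdx.x == t)
                for (int l = 0; l < nbuf; ++l) heap_push(heap, &heap_size, buffer[l]);
            __syncthreads();
        }
    }
    for (int i = threadIdx.x; i < heap_size; i += NTHREADS)
        out[blockIdx.x * K + i] = heap[i];
}

int main(void)
{
    const int nrows = 4, n = 10000;
    float *dist, *out;
    cudaMalloc(&dist, (size_t)nrows * n * sizeof(float));
    cudaMalloc(&out, nrows * K * sizeof(float));
    cudaMemset(dist, 0, (size_t)nrows * n * sizeof(float));
    k_smallest<<<nrows, NTHREADS>>>(dist, n, out);
    cudaDeviceSynchronize();
    cudaFree(dist); cudaFree(out);
    return 0;
}
```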
## VII Experiment
We ran our algorithm on two GTX280s and on a single GTX280. For comparison,
we also implemented a CPU version and ran it on an Intel i7 920 (2.67 GHz).
The GTX280 is one of NVIDIA's latest graphics chips. The algorithm run on the
CPU is a simple one: it calculates each $\delta(v_{x},v_{y})\ (x>y)$ and
pushes it to the corresponding heaps. Note that although the Intel i7 has four
cores with hyperthreading capability, we only used a serial algorithm, i.e. it
uses only one core.
The distance employed here is the Hellinger distance, which is often used in
the context of statistics. The Hellinger distance for two vectors $u$ and $v$
is defined as:
$\sum_{i}\left(\sqrt{u^{(i)}}-\sqrt{v^{(i)}}\right)^{2}$ (1)
The results of the experiment for various $n$ are shown in Table I. The other
parameters are set to $k=100$ and $d=256$, and the data are generated
randomly. The table shows that for a large problem our algorithm works well
from the viewpoint of GPU parallelism. Moreover, it also shows that the GPUs
substantially outperform the CPU; for a large problem, the two-GPU
implementation is more than 330 times faster than the CPU.
TABLE I: Elapsed time for the $k$-nearest neighbor problem (sec)
$n$ | 10000 | 20000 | 40000 | 80000
---|---|---|---|---
$2\times$ GTX280 (a) | 1.8 | 5.7 | 17.7 | 68.6
$1\times$ GTX280 (b) | 2.7 | 8.6 | 34.1 | 131.8
i7 920 (CPU) (c) | 354.2 | 1419.0 | 5680.7 | 22756.9
(c)/(a) | 196.7 | 248.9 | 320.9 | 331.7
(c)/(b) | 131.1 | 173.3 | 166.5 | 172.6
(b)/(a) | 1.50 | 1.51 | 1.92 | 1.92
## VIII Conclusion
We introduced an effective algorithm for the $k$-nearest neighbor problem
which works on multiple GPUs. Through experiments, we have shown that it runs
more than 330 times faster than an implementation on a single core of an
up-to-date CPU. We have also shown that the algorithm is effective from the
viewpoint of GPU parallelism. That is because 1) there is no synchronization
between GPUs until the very end of the process and 2) the workload is well
balanced.
Our algorithm includes a simultaneous partial sort of multiple arrays. It
minimizes inter-thread synchronization by utilizing the fact that if $k\ll n$,
most of the data are discarded. For this part of the algorithm we have
achieved good performance, but there is still room for improvement because it
uses arrays in a local scope, which are in effect stored in slow global
memory. Improving the performance of the simultaneous partial sort is our
ongoing work, and we believe this problem alone is also important because it
can be applied to other problems.
## Acknowledgment
The authors would like to thank Khan Vo Duc of NVIDIA for giving us helpful
advice on an early version of this paper.
## References
* [1] NVIDIA: CUDA Zone. http://www.nvidia.com/object/cuda_home.html
* [2] Amazon.com. http://www.amazon.com
* [3] Netflix. http://www.netflix.com
* [4] Brand, M.: Fast online SVD revisions for lightweight recommender systems. In: In SIAM International Conference on Data Mining. (2003)
* [5] Blei, D.M., Ng, A.Y., Jordan, M.I., Lafferty, J.: Latent dirichlet allocation. Journal of Machine Learning Research 3 (2003) 2003
* [6] Das, A.S., Datar, M., Garg, A., Rajaram, S.: Google news personalization: scalable online collaborative filtering. In: WWW ’07: Proceedings of the 16th international conference on World Wide Web, New York, NY, USA, ACM (2007) 271–280
* [7] Indyk, P., Motwani, R.: Approximate nearest neighbors: towards removing the curse of dimensionality. In: Proceedings of the 1998 Symposium on Theory of Computing. (1998)
* [8] Nyland, L., Harris, M., Prins, J.: Fast $n$-body simulation with CUDA. In: GPU Gems III. NVIDIA (2007) 677–695
* [9] Cederman, D., Tsigas, P.: A practical quicksort algorithm for graphics processors. In: ESA ’08: Proceedings of the 16th annual European symposium on Algorithms, Berlin, Heidelberg, Springer-Verlag (2008) 246–258
* [10] NVIDIA: CUDA 2.1 programming guide. http://www.nvidia.com/object/cuda_develop.html (2008)
# Real closed $*$ reduced partially ordered Rings
Jose Capco Email: jcapco@yahoo.com
###### Abstract
In this work we attempt to generalize our result in [Capco3] [Capco4] for real
rings (not just von Neumann regular real rings). In other words we attempt to
characterize and construct real closure $*$ of commutative unitary rings that
are real. We also make some very interesting and significant discoveries
regarding maximal partial orderings of rings, Baer rings and essential
extensions of rings. The first Theorem itself gives us a noteworthy bijection
between maximal partial orderings of two rings by which one is a rational
extension of the other. We characterize conditions when a Baer reduced ring
can be integrally closed in its total quotient ring. We prove that Baer hulls
of rings have exactly one automorphism (the identity) and we even prove this
for a general case (Lemma LABEL:deck_transform_Baer). Proposition
LABEL:Prop_essext allows us to study essential extensions of rings and their
relation with minimal prime spectrum of the lower ring. And Theorem
LABEL:adjoin_idemp gives us a construction of the real spectrum of a ring
generated by adjoining idempotents to a reduced commutative subring (for
instance the construction of Baer hull of reduced commutative rings).
From most of the above results we prove that there is a bijection between the
real closures $*$ of real rings (up to isomorphism) and their maximal partial
orderings. We then attempt to develop some topological theory for the set of
real closures $*$ of real rings (up to isomorphism) that will help us give a
topological characterization in terms of the real and prime spectra of these
rings. The topological characterization will be revealed in a later work. It
is noteworthy that we can allow ourselves to consider mostly the minimal prime
spectrum of the real ring in order to develop our topological theories.
Mathematics Subject Classification (2000):
Primary 13J25; Secondary 06E15, 16E50
Keywords:
real closed $*$ rings, regular rings, absolutes of Hausdorff spaces,
irreducible surjections, $f$-ring partial orderings, total quotient ring,
maximal partial orderings of rings, essential extensions of rings.
††Supported by Universität Passau, Passau, Germany and Magna-Steyr, St.
Valentin, Austria
###### Notation.
If $A$ is a commutative unitary ring, then we write $T(A)$ to mean the total
quotient ring of $A$. And if $A$ is partially ordered, with a partial ordering
$A^{+}$, then we automatically assume a default partial ordering of $T(A)$,
and unless otherwise defined we write $T(A)^{+}$ for it, which is the weakest
partial ordering of $T(A)$ that extends $A^{+}$. In other words
$T(A)^{+}:=\\{\sum_{i=1}^{n}a_{i}t_{i}^{2}:n\in\mathbb{N},a_{i}\in
A^{+},t_{i}\in T(A),i=1,\dots,n\\}$
###### Theorem 1.
Let $A$ be a subring of a reduced commutative ring $B$, and suppose also that
$B$ is a rational extension of $A$.
If $B$ has a maximal partial ordering $B^{+}$, then the partial ordering
$B^{+}\cap A$ is also a maximal partial ordering of $A$.
There is a bijection
$\Phi:\mathcal{P}_{B}\to\mathcal{P}_{A}$, where
$\mathcal{P}_{B}$ is the set of all maximal partial orderings of $B$ and
$\mathcal{P}_{A}$ is the set of all maximal partial orderings of $A$. If we
have a fixed partial ordering of $A$, say $A^{+}$, and if $B^{+}$ is the
weakest partial ordering of $B$ extending $A^{+}$ i.e.
$B^{+}:=\\{\sum_{i=0}^{n}a_{i}b_{i}^{2}\,:\,a_{i}\in A^{+},b_{i}\in B\\}$
then we can similarly prove that there is a bijection between the set of
maximal partial orderings of $B$ containing $B^{+}$ and the set of maximal
partial ordering of $A$ containing $A^{+}$.
###### Proof.
Set $A^{+}:=B^{+}\cap A$, if $A^{+}$ were not a maximal partial ordering of
$A$ then there exists an element $a\in A\backslash A^{+}$ that extends $A^{+}$
to another partial ordering of $A$, this is equivalent to (one can easily
prove this or find this in , [Brum] Proposition 1.5.1)
$a_{1}a+a_{2}=0\quad\Leftrightarrow\quad aa_{1},a_{2}=0\qquad(a_{1},a_{2}\in
A^{+})$ (*)
Now $a\not\in B^{+}$ and because $B^{+}$ is a maximal partial ordering of $B$,
$a$ cannot extend $B^{+}$ as a partial ordering of $B$ i.e
$\exists\,b_{1},b_{2}\in B^{+}\backslash{\\{0\\}}$ such that $ab_{1}+b_{2}=0$
By [Brum] §1.4 p.38 and the definition of $A^{+}$, there exists an $a_{2}\in
A^{2}\backslash{\\{0\\}}$ such that $a_{2}b_{2}\in A^{+}\backslash{\\{0\\}}$.
Then we have the following cases
Case 1: $a_{2}ab_{1}=0$. Then
$a_{2}(ab_{1}+b_{2})=a_{2}b_{2}\neq 0$
but $ab_{1}+b_{2}=0$, here we have a contradiction!
Case 2: $a_{2}ab_{1}\neq 0$. Then there is an $a_{1}\in A^{2}$ such that
$a_{1}a_{2}ab_{1},a_{1}a_{2}b_{1}\in A\backslash{\\{0\\}}$
this also shows that $a_{1}a_{2}b_{1}\in A^{+}\backslash{\\{0\\}}$. We thus
have the following
$a_{1}a_{2}(ab_{1}+b_{2})=(a_{1}a_{2}b_{1})a+(a_{1}a_{2}b_{2})=0$
but $a_{1}a_{2}b_{1},a_{1}a_{2}b_{2}\in A^{+}\backslash{\\{0\\}}$ and by ($*$)
$a_{1}a_{2}ab_{1}=0$ which is a contradiction.
For $\tilde{P}\in\mathcal{P}_{B}$ define $\Phi(\tilde{P}):=\tilde{P}\cap A$.
By (ii), $\Phi(\tilde{P})\in\mathcal{P}_{A}$, and we need only show that
$\Phi$ is both surjective and injective. We prove by contradiction, assume
there are $\tilde{P}_{1},\tilde{P}_{2}\in\mathcal{P}_{B}$ such that
$\Phi(\tilde{P}_{1})=\Phi(\tilde{P}_{2})=:P\in\mathcal{P}_{A}\quad\textrm{ for
some }P\in\mathcal{P}_{A}$
If $\tilde{P}_{1}\neq\tilde{P}_{2}$ then there exists a
$b\in\tilde{P}_{2}\backslash\tilde{P}_{1}$. Because $\tilde{P}_{1}$ is a
maximal partial ordering of $B$, we have (see for instance [Brum] Proposition
1.5.1) some $b_{1},b_{2}\in\tilde{P}_{1}\backslash{\\{0\\}}$ such that
$bb_{1}+b_{2}=0$. Without loss of generality we assume also that $b_{1}\in
B^{2}\subset\tilde{P}_{1},\tilde{P}_{2}$, since we can always write
$b_{1}^{2}b+b_{2}b_{1}=0$ knowing that (our rings are reduced)
$b_{1}^{2},b_{2}b_{1}\in\tilde{P}_{1}\backslash{\\{0\\}}$. So there is an
$a_{2}\in A^{2}\subset P$ such that $a_{2}b_{2}\in P\backslash{\\{0\\}}$ (by
the definition of $P$). We have the following cases
Case 1: $a_{2}b_{1}b=0$. Then
$a_{2}(b_{1}b+b_{2})=a_{2}b_{2}=0$
a contradiction.
Case 2: $a_{2}b_{1}b\neq 0$. Then, there is an $a_{1}\in A^{2}\subset P$ such
that $a_{1}a_{2}b_{1}b\in P\backslash{\\{0\\}}$. But we also have
$a_{1}a_{2}(b_{1}b+b_{2})=\underbrace{a_{1}a_{2}b_{1}b}_{\in
P\backslash{\\{0\\}}}+\underbrace{a_{1}a_{2}b_{2}}_{\in
P\backslash{\\{0\\}}}=0$
This is a contradiction, as $P$ is a partial ordering of $A$ (see for instance
[Brum] Proposition 1.2.1(b)).
Thus we have shown that $\Phi$ is injective. Now to show that $\Phi$ is
surjective, consider any $P\in\mathcal{P}_{A}$. Consider $\tilde{P}$ to be a
partial ordering of $B$ that is maximal and contains the partial ordering of
$B$ defined by
$\\{\sum_{i=1}^{n}b_{i}^{2}a_{i}\,:\,a_{i}\in P,b_{i}\in B,i=1,\dots,n\\}$
(for the case $A$ has a given partial ordering $A^{+}$ and $P$ contains this
$A^{+}$. We see that $\tilde{P}$ contains the weakest partial ordering of $B$
extending $A^{+}$). We observe then that
$\Phi(\tilde{P})=\tilde{P}\cap A\supset P$
but $P$ being a maximal partial ordering of $A$ implies then that
$\Phi(\tilde{P})=P$ and so we have shown that $\Phi$ is surjective. ∎
The Theorem above enhances [Capco4] Theorem 20, and so we can write
###### Theorem 2.
Let $A$ be a real, regular ring. Then there exists a bijection between the
following sets:
1. $\\{C\,:\,C$ is a real closure $*$ of $A\\}/\sim$, where for any two real closures $*$ of $A$, $C_{1}$ and $C_{2}$, one defines $C_{1}\sim C_{2}$ iff there is an $A$-poring-isomorphism between $C_{1}$ and $C_{2}$
2. $\\{X\subset\mathrm{Sper}\,A\,:\,X\textrm{ is closed and }\mathrm{supp}_{A}|X:X\rightarrow\mathrm{Spec}\,A\textrm{ is an irreducible surjection}\\}$
3. $\\{P\subset A\,:\,P\supset A^{+}$ and $P$ is a maximal partial ordering of $A\\}$
4. $\\{C\,:\,C$ is a real closure $*$ of $B(A)\\}/\sim$, where for any two real closures $*$ of $B(A)$, $C_{1}$ and $C_{2}$, one defines $C_{1}\sim C_{2}$ iff there is a $B(A)$-poring-isomorphism between $C_{1}$ and $C_{2}$
5. $\\{s:\mathrm{Spec}\,B(A)\rightarrow\mathrm{Sper}\,B(A)\,:\,s$ is a continuous section of $\mathrm{supp}_{B(A)}\\}$
6. $\\{X\subset\mathrm{Sper}\,B(A)\,:\,X$ is closed and $\mathrm{supp}_{B(A)}|X:X\rightarrow\mathrm{Spec}\,B(A)$ is an irreducible surjection$\\}$
7. $\\{P\subset B(A)\,:\,P\supset B(A)^{+}$ and $P$ is an $f$-ring partial ordering of $B(A)\\}$
We state the following Lemma; its proof is quite straightforward and is
therefore omitted.
###### Lemma 3.
Let $A$ be an $f$-ring. Then for any $x,y\in A$ the following identities hold:
$y+(x-y)^{+}=x+(y-x)^{+}=x\vee y$
$y-(x-y)^{-}=x-(y-x)^{-}=x\wedge y$
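For instance, in the totally ordered $f$-ring $A=\mathbb{R}$, where $a^{+}=a\vee 0$ and $a^{-}=(-a)\vee 0$, the first identity reduces to the familiar fact that
$y+(x-y)^{+}=y+\max(x-y,0)=\max(x,y)=x\vee y,$
and likewise $y-(x-y)^{-}=y-\max(y-x,0)=\min(x,y)=x\wedge y$.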
###### Theorem 4.
1. (i)
Let $A$ be a reduced ring integrally closed in a real closed von Neumann
regular ring $B$. Let $f\in A[T]$ and $g\in B[T]$ be monic polynomials of odd
degree (i.e. $\deg(f),\deg(g)\in 2\mathbb{N}+1$). Then $f$ has a zero in $A$
and $g$ has a zero in $B$.
2. (ii)
Assume the rings $A$ and $B$ as above. Then $A$ has the property that
$\mathrm{Quot}(A/(\mathfrak{p}\cap A))$ is algebraically closed in
$B/\mathfrak{p}$ for all $\mathfrak{p}\in\mathrm{Spec}\,B$ (i.e.
$\mathrm{Quot}(A/(\mathfrak{p}\cap A))$ is a real closed field).
3. (iii)
Let $A$ be a reduced commutative unitary ring and $T(A)$ be von Neumann
regular, then
$\mathrm{Quot}(A/(\mathfrak{p}\cap
A))=T(A)/\mathfrak{p}\qquad\forall\mathfrak{p}\in\mathrm{Spec}\,T(A)$
Now if this $A$ is as in (ii) and $T(A)$ an intermediate ring of $A$ and $B$
then $T(A)$ is in fact a real closed ring (not necessarily real closed $*$).
###### Proof.
(i) First we show that $g$ has a zero in $B$. Let
$\mathfrak{p}\in\mathrm{Spec}\,B$ then we know by the very definition of real
closed rings that $B/\mathfrak{p}$ is a real closed field. Thus the canonical
image of $g$ in $B/\mathfrak{p}[T]$, denote by $\widehat{g}$, has a zero in
$B/\mathfrak{p}$ (note that $g$ is monic and thus $\widehat{g}$ will have the
same degree as $g$) say $\widehat{b}_{\mathfrak{p}}$, where
$\widehat{b}_{\mathfrak{p}}$ is the canonical image of some
$b_{\mathfrak{p}}\in B$ in $B/\mathfrak{p}$. We can do this for any prime
ideal $\mathfrak{p}\in\mathrm{Spec}\,B$. Now set
$V_{\mathfrak{p}}:=\\{\mathfrak{q}\in\mathrm{Spec}\,B\,:\,g(b_{\mathfrak{p}})\in\mathfrak{q}\\}$
now because $B$ is von Neumann regular we know that $V_{\mathfrak{p}}$ is a
clopen set in $\mathrm{Spec}\,B$, furthermore we know that $\mathfrak{p}\in
V_{\mathfrak{p}}$. Thus
$\mathrm{Spec}\,B=\bigcup_{\mathfrak{p}\in\mathrm{Spec}\,B}V_{\mathfrak{p}}$
Now because $\mathrm{Spec}\,B$ is compact, there are
$V_{1},\dots,V_{n}\subset\mathrm{Spec}\,B$ that are clopen and together they
cover $\mathrm{Spec}\,B$ and such that for any $i\in\\{1,\dots,n\\}$ we
associate a $b_{i}\in B$ such that
$V_{i}=\\{\mathfrak{p}\in\mathrm{Spec}\,B\,:\,g(b_{i})\in\mathfrak{p}\\}$
We look at the global section ring of the sheaf structure of $B$ and we define
mutually disjoint clopen sets $U_{1},\dots,U_{n}$ by
$U_{1}:=V_{1},\dots,U_{i}:=V_{i}\backslash U_{i-1}\textrm{ for }i=2,\dots,n$
Now define $b$ in the global section ring (which is actually isomorphic to
$B$) by
$b(\mathfrak{p}):=b_{i}(\mathfrak{p})\textrm{ if }\mathfrak{p}\in U_{i}$
Then we observe that for any $\mathfrak{p}\in\mathrm{Spec}\,B$ one has
$g(b)\in\mathfrak{p}$, and because $B$ is reduced we conclude that $g(b)=0$.
Thus $g$ has a zero in $B$.
Now because $f$ is monic and of odd degree, it has a zero in $B$ and because
$A$ is integrally closed in $B$, this zero must actually be in $A$.
Let $\mathfrak{p}$ be in $\mathrm{Spec}\,B$. Set
$K:=\mathrm{Quot}(A/(\mathfrak{p}\cap A))$ and $L:=B/\mathfrak{p}$, also for
any $x\in B$ and $f\in B[T]$ denote $\widehat{x}$ and $\widehat{f}$ to be the
canonical image of $x$ in $L$ and the canonical image of $f$ in $L[T]$
respectively.
Suppose now that $\widehat{f}$ is in $K[T]$, monic and of odd degree for some
$f\in B[T]$. We may write
$\widehat{f}(T)=\sum_{i=0}^{n-1}\frac{\widehat{a_{i}}}{\widehat{a_{n}}}T^{i}+T^{n}\quad
a_{n}\in A\backslash\mathfrak{p},a_{i}\in A,n\in 2\mathbb{N}+1$
Define now $g(T)\in A[T]$ by
$g(T):=T^{n}+\sum_{i=0}^{n-1}a_{i}a_{n}^{n-i-1}T^{i}$
Then $g$ is a monic polynomial of odd degree in $A[T]$ and by (i) we can
conclude that $g$ has a zero, say $a$, in $A$. Thus we may as well conclude
that $\widehat{a}$ is a zero of $\widehat{g}$.
Now we observe that
$\widehat{a_{n}}^{n}\widehat{f}(T)=\widehat{g}(\widehat{a}_{n}T)$
and because $\widehat{a}_{n}$ has an inverse in $K$ we learn that
$\widehat{a}\widehat{a_{n}}^{-1}$ is a zero of $\widehat{f}$. But
$\widehat{a}\widehat{a_{n}}^{-1}$ is in $K$. Thus we have shown that any monic
polynomial of odd degree in $K[T]$ has a zero in $K$.
We do know that $B$ is a real closed ring thus its partial ordering is (see
for instance [SM] Proposition 12.4(c))
$B^{+}=\\{b^{2}\,:\,b\in B\\}$
Since $A$ is integrally closed in $B$, we conclude that the set
$A^{+}:=B^{+}\cap A=\\{a^{2}\,:\,a\in A\\}$
is a partial ordering of $A$. We show that $A$ with this partial ordering is
actually a sub-$f$-ring of $B$. By Lemma 3, we need only show that for any
$a\in A$, $a^{+}\in B$ is in $A$ (and thus so is $a^{-}$). $B$ is a von
Neumann regular ring, so $a^{+}$ has a quasi-inverse we shall denote by
$(a^{+})^{\prime}$. Since $A$ is integrally closed in $B$, we then also know
that the idempotent $a^{+}(a^{+})^{\prime}$ is in $A$, but then
$a(a^{+}(a^{+})^{\prime})=(a^{+}-a^{-})(a^{+}(a^{+})^{\prime})=a^{+}(a^{+}(a^{+})^{\prime})=(a^{+})^{2}(a^{+})^{\prime}=a^{+}\in
A$
We will now show that $K^{+}$ defined by
$K^{+}:=\\{(\widehat{a}/\widehat{b})^{2}\,:\,a\in A,b\in
A\backslash\mathfrak{p}\\}$
is a total ordering of $K$ (and thus by [KS] p.16 Satz 1, $K$ is a real closed
field). Since $A$ is a sub-$f$-ring of $B$ (the partial ordering of both is
their weakest partial ordering) and $\mathfrak{p}\cap A$ is a prime $l$-ideal
of $A$ (we know that $B$ is a real closed regular ring, and by [Capco]
Proposition 7, all of it’s residue fields are real closed ring and so by [BKW]
Corollaire 9.2.5, $\mathfrak{p}$ is an $l$-ideal of $B$. It is then easy to
see that it’s restriction to $A$ is also an $l$-ideal), we know by [BKW]
Corollaire 9.2.5, that $A/(\mathfrak{p}\cap A)$ is totally ordered by
$A^{+}/(\mathfrak{p}\cap A)=\\{\widehat{a}^{2}\,:\,a\in A\\}$
One easily checks that this total ordering of $A/(\mathfrak{p}\cap A)$ induces
a total ordering of $K$ which is non other than $K^{+}$.
Let $\mathfrak{p}$ be in $\mathrm{Spec}\,T(A)$; clearly
$A/(\mathfrak{p}\cap A)\hookrightarrow T(A)/\mathfrak{p}$
We show that every element of $T(A)/\mathfrak{p}$ is actually an element of
$\mathrm{Quot}(A/(\mathfrak{p}\cap A))$. For any $q\in T(A)$ we denote
$\bar{q}$ as the image of $q$ in $T(A)/\mathfrak{p}$.
Let $q\in T(A)\backslash\mathfrak{p}$, then $\bar{q}$ is non-zero in
$T(A)/\mathfrak{p}$. There is a regular element $a\in A$ such that $aq\in A$,
and because $a$ is regular it cannot be contained in $\mathfrak{p}$;
otherwise it would be contained in $\mathfrak{p}\cap A$, which is a minimal
prime ideal of $A$ (see [Mewborn2] Theorem 3.1 and Theorem 4.4; note that
$\mathrm{Spec}\,Q(A)\rightarrow\mathrm{Spec}\,T(A)$ is a surjection because
$T(A)$ is a regular ring, see for instance [raphael] Lemma 1.14), but no
regular element of $A$ lies in any minimal prime ideal of $A$. Thus since
$\mathfrak{p}$ is prime, we learn that $aq\in A\backslash\mathfrak{p}$. Thus
$\bar{a},\bar{a}\bar{q}\in(A/(\mathfrak{p}\cap A))^{*}$, thus $\bar{a}$ has an
inverse $\bar{a}^{-1}$ in $\mathrm{Quot}(A/(\mathfrak{p}\cap A))$ and so
$\bar{a}^{-1}\bar{a}\bar{q}=\bar{q}\in\mathrm{Quot}(A/(\mathfrak{p}\cap A))$
So $T(A)/\mathfrak{p}$ is a subring of $\mathrm{Quot}(A/(\mathfrak{p}\cap
A))$. Now $T(A)$ itself is a regular ring, so $T(A)/\mathfrak{p}$ must be a
field and because $A$ is a subring of $T(A)$ we get
$\mathrm{Quot}(A/(\mathfrak{p}\cap A))$ as a subring of $T(A)/\mathfrak{p}$.
Therefore
$T(A)/\mathfrak{p}=\mathrm{Quot}(A/(\mathfrak{p}\cap A))$
Because $T(A)$ is a subring of $B$ and both are regular rings, we then know
that any prime ideal of $T(A)$ is a restriction of prime ideal of $B$ (see
[raphael] Lemma 1.14). Thus by (ii) we can conclude that $T(A)/\mathfrak{p}$
is a real closed field for any prime ideal $\mathfrak{p}\in T(A)$. By [Capco]
Proposition 7, $T(A)$ must be a real closed ring. ∎
###### Lemma 5.
If $A$ is a Baer reduced commutative unitary ring, then $T(A)$ is von Neumann
regular.
###### Proof.
Let $a,b\in A$ and consider the ideal $I=aA+bB$ then because $A$ is Baer,
there is an idempotent $e\in E(A)$ such that
$\mathrm{Ann}_{A}(I)=eA=\mathrm{Ann}_{A}(1-e)$. By [Mewborn] Proposition 2.3
and then [huckaba] Theorem B, we conclude that $T(A)$ is a regular ring. ∎
###### Theorem 6.
Let $A$ be a Baer ring, then $A$ is integrally closed in $T(A)$ iff for any
$\mathfrak{p}\in\mathrm{Spec}\,T(A)$ we have $A/(\mathfrak{p}\cap A)$ is
integrally closed in $T(A)/\mathfrak{p}$.
###### Proof.
For simplicity let us set $B:=T(A)$.
”$\Rightarrow$” Suppose by contradiction there exists an $f\in A[T]$ monic,
$\mathfrak{p}\in\mathrm{Spec}\,B$ and $b\in B$ such that
* i.
$f(b)\in\mathfrak{p}$
* ii.
$(b+\mathfrak{p})\cap A=\emptyset$
Then we have the following cases $\dots$
Case 1: $b\in\mathfrak{p}$. Then $b\equiv 0\,\mathrm{mod}\,\mathfrak{p}$ and
this is a contradiction to ii.
Case 1: $b\not\in\mathfrak{p}$. Then $f(b)\in\mathfrak{p}\subset B$. Now
because $A$ is Baer we know by Lemma 5 that $B$ is von Neumann regular. Let
$c\in B$ be the quasi-inverse of $f(b)$ . Then $cf(b)$ is an idempotent in $B$
and so it must be in $A$ (because $A$ is integrally closed in $B$). Now
$1-cf(b)$ is also an idempotent, denote $e:=1-cf(b)$ (clearly
$e\not\in\mathfrak{p}$ because $1-e\in\mathfrak{p}$) and observe that
$ef(b)=0$. Now we can write
$f(T)=T^{n}+\sum_{i=0}^{n-1}a_{i}T^{i}$
for some $n\in\mathbb{N}$ and $a_{0},\dots,a_{n}\in A$. Then $e^{n}f(b)=0$ and
so $eb$ is a zero of the monic polynomial $g\in A[T]$ defined by
$g(T)=T^{n}+\sum_{i=0}^{n-1}a_{i}e^{n-i}T^{i}$
but because $A$ is integrally closed in $B$, we then know that $eb\in A$.
But all these implies that $eb\equiv b\,\mathrm{mod}\,\mathfrak{p}$ (since
$1-e\in\mathfrak{p}$) and so $(b+\mathfrak{p})\cap A\neq\emptyset$. Again a
contradiction!
”$\Leftarrow$” Let $f\in A[T]$ be monic and $f(b)=0$ for some $b\in B$. For
any $\mathfrak{p}\in\mathrm{Spec}\,B$, consider $a_{\mathfrak{p}}\in A$ to be
such that $f(a_{\mathfrak{p}})\in\mathfrak{p}$ and $a_{\mathfrak{p}}\equiv
b\,\mathrm{mod}\,\mathfrak{p}$. Consider the clopen sets (because $A$ is Baer
and so $B$ is regular):
$V_{\mathfrak{p}}:=\\{\mathfrak{q}\in\mathrm{Spec}\,B:a_{\mathfrak{p}}\equiv
b\,\mathrm{mod}\,\mathfrak{q}\\}$
then $\mathfrak{p}\in V_{\mathfrak{p}}$ for all
$\mathfrak{p}\in\mathrm{Spec}\,B$
$\bigcup_{\mathfrak{p}\in\mathrm{Spec}\,B}V_{\mathfrak{p}}=\mathrm{Spec}\,B$
Thus there are finitely many $a_{1},...,a_{n}\in A$ such that
$\bigcup_{i=1}^{n}V_{i}=\mathrm{Spec}\,B$
where
$V_{i}=\\{\mathfrak{q}\in\mathrm{Spec}\,B:a_{i}\equiv
b\,\mathrm{mod}\,\mathfrak{q}\\}$
Define now another family of clopen set
$U_{1}:=V_{1},U_{i}=V_{i}\backslash U_{i-1}i\geq 2$
we can also define the idempotents $e_{i}\in B$ by
$e_{i}\,\mathrm{mod}\,\mathfrak{p}=\left\\{\begin{array}[]{ll}1&\mathfrak{p}\in
U_{i}\\\ 0&\mathfrak{p}\not\in U_{i}\\\ \end{array}\right.$
Clearly $b=\sum_{i=1}^{n}a_{i}e_{i}$ and all $a_{i},e_{i}\in A$ for
$i=1,\dots,n$ ($E(A)=E(T(A))$ because $A$ is Baer, see for instance [raphael]
). Thus $b\in A$! ∎
###### Corollary 7.
A reduced Baer poring $B$ is real closed $*$ iff for any minimal prime ideal
$\mathfrak{p}\in\mathrm{MinSpec}\,B$ one has $B/\mathfrak{p}$ is a real closed
$*$ integral domain.
###### Proof.
”$\Rightarrow$” If $B$ is real closed real $*$, then it is Baer and it is
integrally closed in $Q(B)$, and furthermore $Q(B)$ is real closed $*$. Thus
by Theorem 4 $T(B)$ is also real closed $*$ (as it is also Baer and by Theorem
4 iii a real closed regular ring, we can then use [Capco] Theorem 15). Using
[SV] Proposition 2 and by Theorem 6 we know that $B/\mathfrak{p}$ is a real
closed ring $*$ for any minimal prime ideal $\mathfrak{p}$ in
$\mathrm{Spec}\,B$ (this is because the restriction of prime ideals of $T(B)$
to $B$ are exactly the minimal prime ideals of $B$, see for instance
[Mewborn2] Theorem 3.1 and Theorem 4.4),
”$\Leftarrow$” The same reasons as above shows us that $B$ is integrally
closed in $T(B)$ and $T(B)$ is a real closed $*$ ring. Now $Q(B)$ is also the
complete ring of quotients of $T(B)$ so by [srcr] Theorem 3, $Q(B)$ is also a
real closed ring $*$ and in this case $T(B)$ is obviously integrally closed in
$Q(B)$. Our initial hypothesis and results in Theorem 6, [SV] Proposition 2,
[Mewborn2] Theorem 3.1 and Theorem 4.4, does imply that $B$ is integrally
closed in $Q(B)$. All this implies satisfies the condition of [srcr] Theorem 3
for B, making us conclude that $B$ is real closed $*$. ∎
From the proof of the Corollary above we also immediately have the following
###### Lemma 8.
A poring $A$, is real closed $*$ iff it is integrally closed in its total
quotient ring and its total quotient ring is a real closed regular (Baer)
ring.
Now we show one way how a real closure $*$ of a reduced ring can be found.
###### Corollary 9.
If $A$ is a reduced poring and $B$ is a rationally complete real closed ring
(thus also real closed $*$, see [Capco] Theorem 15) such that $A$ is a sub-
poring of it and $B$ is an essential extension of $A$. Then $\mathrm{ic}(A,B)$
is a real closure $*$ of $A$.
###### Proof.
Denote $\bar{A}:=\mathrm{ic}(A,B)$. By Storrer’s Satz one has the following
commutative diagrams
$\\!ifnextchar[{\beginpicture\setcoordinatesystem
units<1pt,1pt>{0}}{\beginpicture\setcoordinatesystem
units<1pt,1pt>}\\!ifnextchar[{\\!ifnextchar[{{50}{A}{A}}{{50}}}$
|
arxiv-papers
| 2009-06-01T13:28:02 |
2024-09-04T02:49:03.065449
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Jose Capco",
"submitter": "Jose Capco",
"url": "https://arxiv.org/abs/0906.0282"
}
|
0906.0290
|
# Exercises in orthodox geometry
Edited by A. Petrunin
orthodox.geometry@gmail.com
###### Abstract
This collection is oriented to graduate students who want to learn fast simple
tricks in geometry. (Solution of each problem requires only one non-trivial
idea.)
For problem solvers. The meaning of signs next to number of the problem:
* $\circ$
— easy problem;
* $*$
— hard problem;
* $+$
— the solution requires knowledge of a theorem;
* $\sharp$
— there are interesting solutions based on different ideas.
To get a hint, send an e-mail to the above address with the number and the
name of the problem.
For problem makers. This collection is under permanent development. If you
have suitable problems or corrections please e-mail it to me.
Many thanks. I want to thank everyone sharing the problems. Also I want to
thank R. Matveyev, P. Petersen, S. Tabachnikov and number of students in my
classes for their interest in this list and for correcting number of mistakes.
I’m also thankfull to everyone who took part in the discussion of this list on
mathoverflow.
## 1 Curves and surfaces
1.1. Geodesic for birds. Let $f\colon\mathbb{R}^{2}\to\mathbb{R}$ be a
$\ell$-Lipschitz function. Consider closed region $W$ cut from
$\mathbb{R}^{3}$ by graph of $f$; i.e.
$W=\\{(x,y,z)\in\mathbb{R}^{3}\,\mid\,z\geqslant f(x,y)\\}.$
Show that any _geodesic_ in $W$ has _variation of turn_ at most $2\ell$.
[D. Berg],[J. Liberman]
It is much easier to give some eatimate of turn in terms of $\ell$, say
$\pi(1+\ell)$. The bound $2\ell$ is optimal; to prove it one has to do some
calculations.
1.2. Kneser’s spiral. Let $\gamma$ be a plane curve with strictly monotonic
curvature function. Prove that $\gamma$ has no self-intersections.
[Ovsienko–Tabachnikov]
1.3. Closed curve. A smooth closed _simple curve_ with curvature at most $1$
bounds a rigion $\Omega$ in a plane. Prove that $\Omega$ contains a disc or
radius $1$. ???
1.4♯. A curve in a sphere. Let $\gamma$ be a closed curve in a unit sphere
which intersects each equator, prove that its length is at least $2\pi$. N.
Nadirashvili
1.5♯. A spring in a tin. Let $\alpha$ be a closed smooth immersed curve inside
a unit disc. Prove that the average absolute curvature of $\alpha$ is at least
$1$, with equality if and only if $\alpha$ is the unit circle possibly
traversed more than once. [S. Tabachnikov]
If instead of a disc we have a region bounded by closed convex curve $\gamma$
then it is still true that the average absolute curvature of $\alpha$ is at
least as big as average absolute curvature of $\gamma$. The proof is not that
simple, see ???.
1.6. A minimal surface.111If $\Sigma$ does not pass through the center and we
only know the distance $r$ from center to $\Sigma$ then optimal bound is
expected to be $\pi(1-r^{2})$. This is known if $\Sigma$ is topological disc,
see [Alexander–Osserman]. An analog result for area-minimizing submanifolds
holds for dimensions and codimensions, see [Alexander–Hoffman–Osserman]. Let
$\Sigma$ be a _minimal surface_ in $\mathbb{R}^{3}$ which has boundary on a
unit sphere. Assume $\Sigma$ passes through the center of the sphere. Show
that area of $\Sigma$ is at least $\pi$. ???
The problem is simpler if you assume that $\Sigma$ is a topological disc.
1.7. Asymptotic line. Consider a smooth surface $\Sigma$ in $\mathbb{R}^{3}$
given as a graph $z=\nobreak f(x,y)$. Let $\gamma$ be a closed smooth
asymptotic line on $\Sigma$. Assume $\Sigma$ is _strictly saddle_ in a
neighborhood of $\gamma$. Prove that projection of $\gamma$ on $xy$-plane is
not star shaped. [D. Panov, 1]
1.8. Closed twisted geodesics. Give an example of a closed Riemannian
2-manifold which has no closed smooth curve with constant geodesic curvature
$=1$.
[V. Ginzburg]
1.9. Non contractable geodesics. Give an example of a non-flat metric on
$2$-torus such that it has no contractible geodesics.
Y. Colin de Verdière, [M. Gromov, 4; 7.8(1/2)+]
1.10. Fat curve. Construct a _simple plane curve_ with non-zero Lebesgue’s
measure. ???
1.11. Stable net. Show that there is no stable net in standard $2$-sphere. By
“net” we understand an embedding $f\colon\Gamma\to\SS^{2}$ of a graph $\Gamma$
and it is stable if small variation of $f$ can not decreasse the total length
of all its edges. Z. Brady
1.12. Oval in oval. Consider two closed smooth strictly convex planar curves,
one inside another. Show that there is a chord of the outer curve, which is
tangent to the inner curve and divided by the point of tangency into equal
parts.
There is a similar open problem: find a point on outer curve (not necessary
convex) which has two tangent segments from this point to the inner curve has
equal size.
## 2 Comparison geometry
For doing most of problems in this section it is enough to know second
variation formula. Knowledge of some basic results such as O’Neil formula,
Gauss formula, Gauss–Bonnet theorem, Toponogov’s comparison theorem, Soul
theorem, Toponogov splitting theorems and Synge’s lemma also might help. To
sove problem 2, it is better know that factor positively curved Riemannian
manifold by an isometry group is a positively curved Alexandrov space. Problem
2 requires Liouville’s theorem for geodesic flow. Problem 2 requires a Bochner
type formula.
2.1. Totally geodesic hypersurface. Prove that if a compact positively curved
$m$-manifold $M$ admits a totally geodesic embedded hypersurface then $M$ or
its double cover is homeomorphic to the $m$-sphere. P. Petersen
2.2. Immersed convex hypersurface I. Let $M$ be a complete simply connected
Riemannian manifold with nonpositive curvature and
$\operatorname{dim}M\geqslant 3$. Prove that any immersed locally convex
hypersurface in $M$ is globally convex, i.e. it is an embedded hypersurface
which bounds a convex set. [S. Alexander]
2.3∗. Immersed convex hypersurface II. Prove that any immersed locally convex
hypersurface in a complete positively curved manifold $M$ of dimension
$m\geqslant 3$, is the boundary of an immersed ball. I.e. there is an
immersion of a closed ball $\bar{B}^{m}\to M$ such that the induced immersion
of its boundary $\partial\bar{B}^{m}\to M$ gives our hypersurface. [M. Gromov,
3], [J. Eschenburg], [B. Andrews]
2.4. Almgren’s inequalities. Let $\Sigma$ be a closed $k$-dimensional _minimal
surface_ in the unit $\SS^{n}$. Prove that
$\operatorname{vol}\Sigma\geqslant\operatorname{vol}\SS^{k}$. [F. Almgren],
[M. Gromov, 1]
2.5. Hypercurve. Let $M^{m}\hookrightarrow\mathbb{R}^{m+2}$ be a closed smooth
$m$-dimensional submanifold and let $g$ be the induced Riemannian metric on
$M^{m}$. Assume that sectional curvature of $g$ is positive. Prove that
curvature operator of $g$ is positively defined. [A. Weinstein, 2]
In particular, it follows from [Micallef–Moore]/[Böhm–Wilking] that the
universal cover of $M$ is homeomorphic/diffeomorphic to a standard sphere.
2.6. Horosphere. Let $M$ be a complete simply connected manifold with
negatively pinched sectional curvature (i.e. $-a^{2}\leqslant
K\leqslant-b^{2}<0$). And let $\Sigma\subset M$ be an horosphere in $M$ (i.e.
$\Sigma$ is a level set of a _Busemann function_ in $M$). Prove that $\Sigma$
with the induced intrinsic metric has _polynomial volume growth_.
V. Kapovitch
2.7. Minimal spheres. Show that a positively curved $4$-manifold can not
contain two distinct _equidistant_ _minimal_ 2-spheres. D. Burago
2.8. Fixed point of conformal mappings. Let $(M,g)$ be an even-dimensional
positively curved closed oriented Riemannian manifold and $f\colon M\to M$ be
a conformal orientation preserving map. Prove that $f$ has a fixed point.
[A. Weinstein, 1]
2.9. Totally geodesic immersion. Let $(M,g)$ be a simply connected positively
curved $m$-manifold and $N\hookrightarrow M$ be a totally geodesic immersion.
Prove that if $\operatorname{dim}N>\tfrac{m}{2}$ then $N$ is embedded. B.
Wilking
2.10. Minimal hypersurfaces. Show that any two compact _minimal hypersurfaces_
in a Riemannian manifold with positive Ricci curvature must intersect.
[T. Frankel]
2.11. Negative curvature vs. symmetry. Let $(M,g)$ be a closed Riemannian
manifold with negative Ricci curvature.
Prove that $(M,g)$ does not admit an isometric $S^{1}$-action. ???
2.12$\displaystyle{}^{{}^{{}_{+}}}$. Positive curvature and symmetry. Let
$(M,g)$ be a positively curved $4$-dimensional closed Riemannian manifold with
an isometric $S^{1}$-action.
Prove that $S^{1}$-action has at most $3$ isolated fixed points.
B. Kleiner???
2.13$\displaystyle{}^{{}^{{}_{+}}}$. Scalar curvature vs. injectivity radius.
Let $(M,g)$ be a closed Riemannian $m$-manifold with scalar curvature
$\mathrm{Sc}_{g}\geqslant m(m-1)/2$ (i.e. bigger then scalar curvature of
$\SS^{m}$). Prove that the injectivity radius of $(M,g)$ is at most $\pi$. ???
2.14. Almost flat manifold. Show that for any $\epsilon>0$ there is
$n=n(\epsilon)$ such that there is a compact $n$-dimensional manifold $M$
which is not a finite factor of a _nil-manifold_ and which admits a Riemannian
metric with diameter $\leqslant 1$ and sectional curvature $|K|<\epsilon$. [G.
Guzhvina]
2.15. Lie group. Show that the space of non-negatively curved left invariant
metrics on a compact Lie group $G$ is contractable. B. Wilking
2.16. Simple geodesic. Let $g$ be a complete Riemannian metric with positive
curvature on $\mathbb{R}^{2}$. Show that there is a two-sided infinite
geodesic in $(\mathbb{R}^{2},g)$ with no self-intersections. [V. Bangert]
2.17♯. Polar points. Let $(M,g)$ be a Riemannian manifold with sectional
curvature $\geqslant 1$. A point $p^{*}\in M$ is called polar to $p\in M$ if
$|px|+|xp^{*}|\leqslant\pi$ for any point $x\in M$. Prove that for any point
in $(M,g)$ there is a polar. [A. Milka]
2.18. Deformation to a product. Let $(M,g)$ be a compact Riemannian manifold
with non-negative sectional curvature. Show that there is a continuous one
parameter family of non-negatively curved metrics $g_{t}$ on $M$, $t\in[0,1]$,
such that a finite Riemannian cover of $(M,g_{1})$ is isometric to a product
of a flat torus and a simply connected manifold. [B. Wilking]
2.19∗. Isometric section. Let $s\colon(M,g)\to(N,h)$ be a Riemannian
submersion. Assume that $g$ is positively curved. Show that $s$ does not admit
an isometric section; i.e. there is no isometry
$\imath\colon(N,h)\hookrightarrow(M,g)$ such that
$s\circ\imath=\operatorname{id}_{N}$.
[G. Perelman]
2.20♯. Minkowski space. Let us denote by $\mathbb{M}^{m}$ the set
$\mathbb{R}^{m}$ equipped with the metric induced by the $\ell^{p}$-norm.
Prove that if $p\not=2$ then $\mathbb{M}^{m}$ can not be a Gromov–Hausdorff
limit of Riemannian $m$-manifolds $(M_{n},g_{n})$ such that
$\operatorname{Ric}_{g_{n}}\geqslant\nobreak C$ for some contant
$C\in\mathbb{R}$.
2.21. An island of scalar curvature. Construct a Riemannian metric $g$ on
$\mathbb{R}^{3}$ which is Euclidean outside of an open bounded set $\Omega$
and scalar curvature of $g$ is negative in $\Omega$. [J. Lohkamp]
2.22$\displaystyle{}^{{}^{{}_{+}}}$. If hemisphere then sphere. Let $M$ is an
$m$-dimensional Riemannian manifold with Ricci curvatur at least $m-1$;
moreover there is a point $p\in M$ such that sectional curvature is exactly
$1$ at all points on distance $\leqslant\tfrac{\pi}{2}$ from $p$. Show that
$M$ has constant sectional curvature. [Hang–Wang]
The problem is still interesting if instead of the first condition one has
that sectional curvature $\geqslant 1$. If instead of first condition one only
has that scalar curvature $\geqslant m(m-1)$, then the question is open, it
was conjectured by Min-Oo in 1995.
## 3 Curvature free
Most of the problems in this section require no special knowledge. Solution of
3 relies on Gromov’s pseudo-holomorphic curves; problem 3 uses Liouville’s
theorem for geodesic flow.
3.1$\displaystyle{}^{{}^{{}_{+}}}$. Minimal foliation. Consider
$\SS^{2}\times\SS^{2}$ equipped with a Riemannian metric $g$ which is
$C^{\infty}$-close to the product metric. Prove that there is a conformally
equivalent metric $\lambda{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}g$ and re-
parametrization of $\SS^{2}\times\SS^{2}$ such that each sphere $\SS^{2}\times
x$ and $y\times\SS^{2}$ forms a _minimal surface_ in
$(\SS^{2}\times\SS^{2},\lambda{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}g)$.
3.2. Smooth doubling. Let $N$ be a Riemannian manifold with boundary which is
isometric to $(M,g)/\SS^{1}$, where $g$ is an $\SS^{1}$-invariant complete
smooth Riemannian metric on $M$. Prove that the _doubling_ of $N$ is a smooth
Riemannian manifold.
[A. Lytchak]
3.3. Loewner’s theorem. Given $\mathbb{R}$Pn equipped with a Riemannian metric
$g$ conformally equivalent to the canonical metric $g_{\text{can}}$ let $\ell$
denote the minimal length of curves in $(\mathbb{R}$P${}^{n},g)$ not homotopic
to zero. Prove that
$vol(\mathbb{R}\text{P}^{n},g)\geqslant
vol(\mathbb{R}\text{P}^{n},g_{\text{can}})(\ell/\pi)^{n}$
and in case of equality $g=c{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}g_{\text{can}}$ for some positive constant $c$. ???
3.4$\displaystyle{}^{{}^{{}_{+}}}$. Convex function vs. finite volume. Let $M$
be a complete Riemannian manifold which admits a non-constant convex function.
Prove that $M$ has infinite volume. [S. Yau]
3.5. Besikovitch inequality. Let $g$ be a Riemannian metric on a
$n$-dimensional cube $Q=(0,1)^{n}$ such that any curve connecting opposite
faces has length $\geqslant 1$. Prove that $\operatorname{vol}(Q,g)\geqslant
1$ and equality holds if and only if $(Q,g)$ is isometric to the interior of
the unit cube. ???
3.6. Mercedes-Benz. Construct a Riemannian metric $g$ on $\SS^{3}$ and
involution $\imath\colon\SS^{3}\to\SS^{3}$ such that
$\operatorname{vol}(\SS^{3},g)$ is arbitrary small and $|x\,\imath(x)|_{g}>1$
for any $x\in\SS^{3}$.
[C. Croke]
Note that for $\SS^{2}$ such thing is not possible.
## 4 Metric geometry.
The necessary definitions can be found in [Burago–Burago–Ivanov]. It is very
hard to do 4 without using Kuratowski embedding. To do problem 4 first do
problem 4; to do this problem you have to know a construction of compact
manifolds of constant negative curvature of given dimension $m$. To do problem
4 you should be familiar with the proof of Nash–Kuiper theorem. Problems 4 and
4 are similar, in both you have to know Rademacher’s theorem on
differentiability of Lipschitz maps.
4.1∘. Noncontracting map. Let $X$ be a compact metric space and $f\colon X\to
X$ be a noncontracting map. Prove that $f$ is an isometry.
4.2$\displaystyle{}^{{}^{{}_{+}}}$. Embedding of a compact. Prove that any
compact metric space is isometric to a subset of a compact _length spaces_.
4.3. Bounded orbit. Let $X$ be a _proper metric space_ and $\imath\colon X\to
X$ is an isometry. Assume that for some $x\in X$, the the orbit
$\imath^{n}(x)$, $n\in\mathbb{Z}$ has a partial limit in $X$. Prove that for
one (and hence for any) $y\in X$, the orbit $\imath^{n}(y)$ is bounded.
[A. Całka]
4.4. Covers of figure eight. Let $(\Phi,d)$ be a “figure eight”; i.e. a metric
space which is obtained by gluing together all four ends of two unit segments.
Prove that any compact _length spaces_ $X$ is a Gromov–Hausdorff limit of a
sequence of metric covers $(\widetilde{\Phi}_{n},\tilde{d}/n)\to(\Phi,d/n)$.
[V. Sahovic]
4.5$\displaystyle{}^{{}^{{}_{+}}}$. Constant curvature is everything. Given
integer $m\geqslant 2$, prove that any compact _length spaces_ $X$ is a
Gromov–Hausdorff limit of a sequence of $m$-dimensional manifolds $M_{n}$ with
curvature $-n^{2}$. [V. Sahovic]
4.6∗. Diameter of m-fold cover. Let $X$ be a _length space_ and $\tilde{X}$ be
a connected $m$-fold cover of $X$ equiped with induced intrinsic metric. Prove
that
$\operatorname{diam}\tilde{X}\leqslant m{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}\operatorname{diam}X.$
A. Nabutovsky, [S. Ivanov]
4.7. 2-sphere is far from a ball. Show that there is no sequence of Riemannian
metrics on $\SS^{2}$ which converge in Gromov–Hausdorff topology to the
standard ball $\bar{B}^{2}\subset\mathbb{R}^{2}$.
4.8. 3-sphere is close to a ball. Construct a sequence of Riemannian metrics
on $\SS^{3}$ which converge in Gromov–Hausdorff topology to the standard ball
$\bar{B}^{3}\subset\mathbb{R}^{3}$.
???
4.9∘. Macrodimension. Let $M$ be a simply connected Riemannian manifold with
the following property: any closed curve can be shrunk to a point in an
$\varepsilon$-neighborhood of itself. Prove that $M$ is 1-dimensional on scale
$10^{10}{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}\varepsilon$; i.e. there is a
graph $\Gamma$ and a continuous map $f\colon M\to\Gamma$, such that for any
$x\in\Gamma$ we have diam$(f^{-1}(x))\leqslant 10^{10}{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}\varepsilon$. N. Zinoviev
4.10. Anti-collapse. Construct a sequence of Riemannian metric $g_{i}$ on a
2-sphere such that $\mathop{\it vol}g_{i}<1$ such that induced distance
functions $d_{i}\colon\SS^{2}\times\SS^{2}\to\mathbb{R}_{+}$ converge to a
metric $d\colon\SS^{2}\times\SS^{2}\to\mathbb{R}_{+}$ with arbitrary large
Hausdorff dimension.
[Burago–Ivanov–Shoenthal]
4.11∗. No short embedding. Construct a length-metric on $\mathbb{R}^{3}$ which
admits no local _short_ embeddings into $\mathbb{R}^{3}$.
[Burago–Ivanov–Shoenthal]
4.12$\displaystyle{}^{{}^{{}_{+}}}$. Sub-Riemannian sphere. Prove that any
_sub-Riemannian metric_ on the $n$-sphere is isometric to the intrinsic metric
of a hypersurface in $\mathbb{R}^{n+1}$.
4.13$\displaystyle{}^{{}^{{}_{+}}}$. Path isometry. Show that there is no
_path isometry_ $\mathbb{R}^{2}\to\mathbb{R}$.
4.14$\displaystyle{}^{{}^{{}_{+}}}$. Minkowski plane. Let $\mathbb{M}^{2}$ be
a _Minkowski plane_ which is not isometric to the Euclidean plane. Show that
$\mathbb{M}^{2}$ does not admit a _path isometry_ to $\mathbb{R}^{3}$.
4.15. Hyperbolic space. Show that the hyperbolic $3$-space is _quasi-
isometric_ to a subset of product of two hyperbolic planes. ???
4.16. Kirszbraun’s theorem. Let $X\subset\mathbb{R}^{2}$ be an arbitrary
subset and $f\colon X\to\mathbb{R}^{2}$ be a _short map_. Show that $f$ can be
extended as a short map to whole $\mathbb{R}^{2}$; i.e. there is a short map
$\bar{f}\colon\mathbb{R}^{2}\to\mathbb{R}^{2}$ such that its restriction to
$X$ coinsides with $f$.
???
4.17. Hilbert problem. Let $F$ be a convex plane figure. Construct a complete
Finsler metric $d$ on the interior of $F$ such that any line segment in $F$
forms a geodesic of $d$. ???
4.18. Straight geodesics. Let $\rho$ be a length-metric on $\mathbb{R}^{n}$,
which is bi-Lipschitz equivalent to the canonical metric. Assume that any
_geodesic_ $\gamma$ in $(\mathbb{R}^{d},\rho)$ is a linear (i.e. it can be
described as $\gamma(t)=v+w{\hskip 0.5pt\cdot\nobreak\hskip 0.5pt}t$ for some
$v,w\in\mathbb{R}^{n}$). Show that $\rho$ is induced by a norm on
$\mathbb{R}^{n}$. ???
## 5 Topology
5.1. Milnor’s disks. Construct two “topologically different” smooth immersions
of the disk into the plane which coincide near the boundary. (Two immersions
$f_{1},f_{2}\colon D\to\mathbb{R}^{2}$ are topologically different if there is
no diffeomorphism $h\colon D\to\nobreak D$ such that $f_{1}=f_{2}\circ h$) [M.
Gromov, 2; ???]
5.2∘. Positive Dehn twist. Let $\Sigma$ be an oriented surface with non empty
boundary. Prove that any composition of _positive Dehn twists_ of $\Sigma$ is
not homotopic to identity _rel_ boundary. R. Matveyev
5.3♯. Function with no critical points. Given $n\geqslant 2$, construct a
smooth function $f$ defined on a neighborhood of closed unit ball $B^{n}$ in
$\mathbb{R}^{n}$ which has no critical points and which can not be presentes
in the form $\ell\circ\varphi$, where $\ell\colon\mathbb{R}^{n}\to\mathbb{R}$
is a linear function and $\varphi\colon B^{n}\to\mathbb{R}^{n}$ is a smooth
embedding. P. Pushkar
5.4. Conic neighborhood. Let $p\in X$ be a point in a topological space $X$.
We say that an open neighborhood $U_{p}$ of $p\in X$ is conic if there is a
homeomorphism from a cone to $U_{p}$ which sends its vertex to $p$. Show that
any two conic neighborhoods of $p$ are homeomorphic to each other. [K. Kwun]
Note that for two cones $\mathop{\rm Cone}(\Sigma_{1})$ and $\mathop{\rm
Cone}(\Sigma_{2})$ might be homeomorphic while $\Sigma_{1}$ and $\Sigma_{2}$
are not.
5.5. Knots in $C^{0}$-topology. Prove that space of $C^{\infty}$-smooth
embeddings $f\colon\SS^{1}\to\nobreak\mathbb{R}^{3}$ is connected in the
$C^{0}$-topology.
5.6∘. Simmetric square. Let $X$ be a connected topological space. Note that
$X{\times}X$ admits natural $\mathbb{Z}_{2}$-action by $(x,y)\mapsto(y,x)$.
Show that fundamental group of $X{\times}X/\mathbb{Z}_{2}$ is commutative. R.
Matveyev
5.7∘. Sierpinski triangle. Find the group of homeomorphisms of Sierpinski
triangle. B. Kliener
5.8. Simple stabilization. Construct two compact subsets
$K_{1},K_{2}\subset\mathbb{R}^{2}$ such that $K_{1}$ is not homeomorphic to
$K_{2}$, but $K_{1}\times[0,1]$ is homeomorphic to $K_{2}\times\nobreak[0,1]$.
???
5.9. Knaster’s circle. Construct a bounded open set in $\mathbb{R}^{2}$ whose
boundary does not contain a _simple curve_. [L. Wayne]
5.10∘. Boundary in $\mathbb{R}$. Construct three disjoined non-empty sets in
$\mathbb{R}$ which have the same boundary.
## 6 Descrete geometry
It is suggested to do Kirszbraun’s theorem (4) before doing problem 6. One of
the solutions of 6 uses mixed volumes. In order to solve problem 6, it is
better to know what is the genus of complex curve of degree $d$. To solve
problem 6 one has to use axiom of choice.
6.1∘. 4-polyhedron. Give an example of a convex $4$-dimensional polyhedron
with $100$ vertices, such that any two vertices are connected by an edge. ???
6.2. Pecewise linear isometry I. Let $P$ be a compact $m$-dimensional
_polyhedral_ _space_. Construct a _pecewise linear isometry_ $f\colon
P\to\mathbb{R}^{m}$. [V. Zalgaller]
6.3$\displaystyle{}^{{}^{{}_{+}}}$. Pecewise linear isometry II. Prove that
any _short map_ to $\mathbb{R}^{2}$ which is defined on a finite subset of
$\mathbb{R}^{2}$ can be extended to a _pecewise linear isometry_
$\mathbb{R}^{2}\to\mathbb{R}^{2}$. [U. Brehm]
6.4. Minimal polyhedron. Consider the class of polyhedral _surfaces_ in
$\mathbb{R}^{7}$ with fixed boundary curve such that each (1) is homeomorphic
to a 2-disc and (2) is glued out $n$ triangles. Let $\Sigma_{n}$ be a surface
of minimal area in this class. Show that $\Sigma_{n}$ is a _saddle surface_.
Note that it is not longer true if $\Sigma$ minimizes area only in the class
of polyhedral surfaces with fixed triangulation.
6.5. Convex triangulation. A triangulation of a convex polygon is called
convex if there is a convex function which is linear on each triangle and
changes the gradient if you come trough any edge of the triangulation.
Find a non-convex triangulation. [Gelfand–Kapranov–Zelevinsky]
6.6. Inscrebed triangulation. Let $Q$ be a unit square and $f\colon
Q\to\mathbb{R}_{>}$ be an arbitrary (not nesessury continuous) function. Prove
that there is a triangulation $\mathcal{T}$ of $Q$ such that each triangle
$\triangle xyz$ in $\mathcal{T}$ is covered by three discs $B_{f(x)}(x)$,
$B_{f(y)}(y)$ and $B_{f(z)}(z)$.
6.7∗. A sphere with one edge. Let $P$ be a finite 3-dimensional simplicial
complex with _spherical polyhedral metric_. Let us denote by $P_{s}$ the
subset of singular222A point is called _regular_ if it has a neighborhood
isometric to an open set of standard sphere; it is called singular point
_otherwise_. points of $P$.
Construct $P$ which is homeomorphic to $\SS^{3}$ and such that $P_{s}$ is
formed by a knotted circle. Show that in such an example the total length of
$P_{s}$ can be arbitrary large and the angle around $P_{s}$ can be made
strictly less than $2\pi$.
[D. Panov, 2]
6.8. Monotonic homotopy. Let $F$ be a finite set and $h_{0},h_{1}\colon
F\to\mathbb{R}^{m}$ be two maps. Consider $\mathbb{R}^{m}$ as a subspace of
$\mathbb{R}^{2m}$. Show that there is a homotopy $h_{t}:F\to\mathbb{R}^{2m}$
from $H_{1}$ to $h_{2}$ such that for any $x,y\in F$ the function
$t\mapsto\nobreak|h_{t}(x)-h_{t}(y)|$ is monotonic. [R. Alexander]
6.9♯. Box in a box. Assume that one rectangular box with sizes $a,b,c$ is
inside another with sizes $A,B,C$. Show that $A+B+C\geqslant a+b+c$. [A. Shen]
6.10∗. Besicovitch’s set. Show that one can cut a unit plane disc by radii
into sectors and then move each sector by a parallel translation on such a way
that its union will have arbitrary small area. [A. Besicovitch]
6.11. Boys and girls in a Lie group. Let $L_{1}$ and $L_{2}$ be two discrete
subgroups of a Lie group $G$, $h$ be a left invariant metric on $G$ and
$\rho_{i}$ be the induced left invariant metric on $L_{i}$. Assume
$L_{i}\backslash G$ are compact and moreover
$\text{vol}(L_{1}\backslash(G,h))=\text{vol}(L_{2}\backslash(G,h)).$
Prove that there is bi-Lipschitz one-to-one mapping (not necessarily a
homomorphism) $f\colon(L_{1},\rho_{1})\to(L_{2},\rho_{2})$. D. Burago
6.12. Universal group. Given a group $\Gamma$, let us denote by $Q(\Gamma)$
its minimal normal subgroup which contains all elements of finite order.
Construct a hyperbolic group, $\Gamma$ such that for any finitely presented
group $G$ there is a subgroup $\Gamma^{\prime}\subset\Gamma$ such that $G$ is
isomorphic to $\Gamma^{\prime}/Q(\Gamma^{\prime})$
6.13∘. Round circles in $\SS^{3}$. Suppose that you have a finite collection
of round circles in round $\SS^{3}$, not necessarily all of the same radius,
such that each pair is linked exactly once (in particular, no two intersect).
Prove that there is an isotopy in the space of such collections of circles so
that afterwards, they are all great circles. Thurston or Conway or Viro or ???
6.14$\displaystyle{}^{{}^{{}_{+}}}$. Harnack’s circles. Prove that a smooth
algebraic curve of degree $d$ in $\mathbb{R}\mathrm{P}^{2}$ consists of at
most $(d^{2}-3d+4)/2$ circles. ???
6.15$\displaystyle{}^{{}^{{}_{+}}}$. Two points on each line. Construct a set
in the Euclidean plane, which intersect each line at exactly 2 points.
## 7 Dictionary
Busemann function. Let $X$ be a metric spaces and $\gamma\colon[0,\infty)\to
X$ is a geodesic ray; i.e. it is a one side infinite _geodesic_ which is
minimizing on each interval. The Busemann function of $\gamma$ is defined by
$b_{\gamma}(p)=\lim_{t\to\infty}(|\gamma(t)\,p|-t).$
From the triangle inequality, it is clear that the limit above is well
defined.
Curvature operator. The Riemannian curvature tensor $R$ can be viewed as an
operator R on bi-vectors defined by
$\langle\mathbf{R}(X\wedge Y),Z\wedge T\rangle=\langle R(X,Y)Z,T\rangle,$
The operator $\mathbf{R}\colon\bigwedge^{2}T\to\bigwedge^{2}T$ is called
_curvature operator_ and it is said to be _positively defined_ if
$\langle\mathbf{R}(\varphi),\varphi\rangle>0$ for all non zero bi-vector
$\varphi\in\bigwedge^{2}T$.
Dehn twist. Let $\Sigma$ be a surface and
$\gamma\colon\mathbb{R}/\mathbb{Z}\to\Sigma$ be noncontractible closed _simple
curve_. Let $U_{\gamma}$ be a neighborhood of $\gamma$ which admits a
homeomorphism $h\colon U_{\gamma}\to\mathbb{R}/\mathbb{Z}\times(0,1)$. Dehn
twist along $\gamma$ is a homeomorphism $f\colon\Sigma\to\Sigma$ which is
identity outside of $U_{\gamma}$ and $h\circ f\circ
h^{-1}\colon(x,y)\mapsto(x+y,y)$.
If $\Sigma$ is orientable, then the Dehn twist described above is called
_positive_ if $h$ is orientation preserving.
Doubling of a manifold $M$ with boundary $\partial M$ is is two copies of
$M_{1},M_{2}$ identified along corresponding points on the boundary $\partial
M_{1},\partial M_{2}$.
Equidistant subsets. Two subsets $A$ and $B$ in a metric space are called
equidistant if $\operatorname{dist}_{A}$ is constant on $B$ and
$\operatorname{dist}_{B}$ is constant on $A$.
Geodesic. Let $X$ be a metric space and $\mathbb{I}$ be a real interval. A
locally isometric immersion $\gamma\colon\mathbb{I}\to X$ is called geodesic.
In other words, $\gamma$ is a geodesic if for any $t_{0}\in\mathbb{I}$ we have
$|\gamma(t)\gamma(t^{\prime})|=|t-t^{\prime}|$ for all
$t,t^{\prime}\in\mathbb{I}$ sufficiently close to $t_{0}$. Note that in our
definition geodesic has unit speed (that is not quite standard).
Length space. A complete metric space $X$ is called length space if the
distance between any pair of points in $X$ is equal to the infimum of lengths
of curves connecting these points.
Minimal surface. Let $\Sigma$ be a $k$-dimensional smooth surface in a
Riemannian manifold $M$ and $T(\Sigma)$ and $N(\Sigma)$ correspondingly
tangent and normal bundle. Let $s\colon T\otimes T\to N$ denotes the second
fundamental form of $\Sigma$. Let $e_{i}$ is an orthonormal basis for $T_{x}$,
set $H_{x}=\sum_{i}s(e_{i},e_{i})\in N_{x}$; it is the mean curvature vector
at $x\in\Sigma$.
We say that $\Sigma$ is _minimal_ if $H\equiv 0$.
Minkowski space — $\mathbb{R}^{m}$ with a metric induced by a norm.
Nil-manifolds form the minimal class of manifolds which includes a point, and
has the following property: the total space of any oriented $\SS^{1}$-bundle
over a nil-manifold is a nil-manifold.
It also can be defined as a factor of a connected nilpotent Lie group by a
lattice.
Path isometry A map $f:X\to Y$ of _length spaces_ $X$ and $Y$ is a path
isometry if for any path $\alpha:[0,1]\to X$, we have
$\operatorname{length}(\alpha)=\operatorname{length}(f\circ\alpha).$
Polyhedral space — a simplicial complex with a metric such that each simplex
is isometric to a simplex in a Euclidean space.
It admits the following generalizations:
spherical (hyperbolic) polyhedral space — a simplicial complex with a metric
such that each simplex is isometric to a simplex in a unit sphere (corresp.
hyperbolic space of constant curvature $-1$).
Polynomial volume growth. A Riemanninan manifold $M$ has polynomial volume
growth if for some (and therefore any) $p\in M$, we have
$\operatorname{vol}B_{r}(p)\leqslant\nobreak C{\hskip 0.5pt\cdot\nobreak\hskip
0.5pt}(r^{k}+1)$, where $B_{r}(p)$ is the ball in $M$ and $C$, $k$ are real
constants.
Proper metric space. A metric space $X$ is called _proper_ if any closed
bounded set in $X$ is compact.
Pecewise linear isometry — a piecewise linear map from a polyhedral space
which is isometric on each simplex. More precisely: Let $P$ and $Q$ be
polyhedral spaces, a map $f\colon P\to Q$ is called piecewise linear isometry
if there is a triangulation $\mathcal{T}$ of $P$ such that at any simplex
$\Delta\in\mathcal{T}$ the restriction $f|_{\Delta}$ is globally isometric.
Quasi-isometry. A map $f\colon X\to Y$ is called a quasi-isometry if there is
a constant $C<\infty$ such that $f(X)$is a $C$-net in $Y$ and
$\frac{|xy|}{C}-C\leqslant|f(x)f(y)|\leqslant C{\hskip
0.5pt\cdot\nobreak\hskip 0.5pt}|xy|+C$
Note that a quasi-isometry is not assumed to be continuous, for example any
map between compact metric spaces is a quasi-isometry.
Saddle surface. A smooth surface $\Sigma$ in $\mathbb{R}^{3}$ is saddle
(correspondingly strictly saddle) if the product of the principle curvatures
at each point is $\leqslant 0$ (correspondingly $<0$).
It admits the following generalization to non-smooth case and arbitrary
dimension of the ambient space: A surface $\Sigma$ in $\mathbb{R}^{n}$ is
saddle if the restriction $\ell|_{\Sigma}$ of any linear function
$\ell\colon\mathbb{R}^{3}\to\mathbb{R}$ has no strict local minima at interior
points of $\Sigma$.
One can generalize it further to an arbitrary ambient space, using convex
functions instead of linear functions in the above definition.
Short map — a distance non increasing map.
Simple curve — an image of a continuous injective map of a real segment or a
circle.
Sub-Riemannian metric.
Variation of turn. Let $\gamma\colon[a,b]\to\mathbb{R}^{n}$ be a curve. The
variation of turn of $\gamma$ is defined as supremum of sum of ??? angles for
broken lines inscribed in $\gamma$. Namely,
$\sup\left\\{\,\left.{\sum_{i=1}^{n-1}\alpha_{i}}\vphantom{a=t_{0}<t_{1}<\dots<t_{n}=b}\,\right|\,{a=t_{0}<t_{1}<\dots<t_{n}=b}\,\right\\},$
where
$\alpha_{i}=\pi-\measuredangle\gamma(t_{i-1})\gamma(t_{i})\gamma(t_{i+1})$.
## References
* [Alexander–Hoffman–Osserman] Alexander, H.; Hoffman, D.; Osserman, R. Area estimates for submanifolds of Euclidean space. Symposia Mathematica, Vol. XIV, pp. 445–455. Academic Press, London, 1974.
* [Alexander–Osserman] Alexander, H.; Osserman, R. Area bounds for various classes of surfaces. Amer. J. Math. 97 (1975), no. 3, 753–769.
* [R. Alexander] Lipschitzian mappings and total mean curvature of polyhedral surfaces, no. I, Trans. Amer. Math. Soc. 288 (1985), no. 2, 661–678.
* [S. Alexander] Proc. Amer. Math. Soc. 64 (1977), no. 2, 321–325;
* [F. Almgren] Optimal isoperimetric inequalities. Indiana Univ. Math. J. 35 (1986), no. 3, 451–547.
* [B. Andrews] Contraction of convex hypersurfaces in Riemannian spaces. J. Differential Geom. 39 (1994), no. 2, 407–431.
* [D. Berg] An estimate on the total curvature of a geodesic in Euclidean 3-space-with-boundary. Geom. Dedicata 13 (1982), no. 1, 1–6.
* [V. Bangert] On the existence of escaping geodesics. Comment. Math. Helv. 56 (1981), no. 1, 59–65.
* [U. Brehm] Extensions of distance reducing mappings to piecewise congruent mappings on $R^{m}$. J. Geom. 16 (1981), no. 2, 187–193.
* [Burago–Burago–Ivanov] Burago, D.; Burago, Yu.; Ivanov, S., A course in metric geometry. Graduate Studies in Mathematics, 33. American Mathematical Society, Providence, RI, 2001. xiv+415 pp. ISBN: 0-8218-2129-6
* [A. Besicovitch] The Kakeya Problem. American Mathematical Monthly 70: 697–706
* [Burago–Ivanov–Shoenthal] Burago, D.; Ivanov, S.; Shoenthal, D. Two counterexamples in low dimensional length geometry, St.Petersburg Math. J. Vol. 19 (2008), No. 1, Pages 33–43
* [Böhm–Wilking] Böhm, C.; Wilking, B., Manifolds with positive curvature operators are space forms. International Congress of Mathematicians. Vol. II, 683–690, Eur. Math. Soc., Zürich, 2006.
* [A. Całka] On conditions under which isometries have bounded orbits. Colloq. Math. 48 (1984), 219–227
* [C. Croke] Small volume on big $n$-spheres. Proc. Amer. Math. Soc. 136 (2008), no. 2, 715–717
* [T. Frankel] On the fundamental group of a compact minimal submanifold. Ann. of Math. (2) 83 1966 68–73.
* [V. Ginzburg] On the existence and non-existence of closed trajectories for some Hamiltonian flows. Math. Z. 223 (1996), no. 3, 397–409.
* [Gelfand–Kapranov–Zelevinsky] Gelfand, I. M.; Kapranov, M. M.; Zelevinsky, A. V., Discriminants, resultants, and multidimensional determinants, Mathematics: Theory & Applications, Birkhäuser Boston Inc., Boston, MA, 1994, x+523, 0-8176-3660-9.
* [M. Gromov]
1. 1.
Gromov’s Appendix in Milman, Vitali D.; Schechtman, Gideon Asymptotic theory
of finite-dimensional normed spaces. Lecture Notes in Mathematics, 1200.
Springer-Verlag, Berlin-New York, 1986. viii+156 pp.
2. 2.
Partial Differential relations
3. 3.
Sign and geometric meaning of curvature. Rend. Sem. Mat. Fis. Milano 61
(1991), 9–123 (1994).
4. 4.
Metric structures for Riemannian and non-Riemannian spaces. Modern Birkhäuser
Classics. Birkhäuser Boston, Inc., Boston, MA, 2007. xx+585 pp. ISBN:
978-0-8176-4582-3; 0-8176-4582-9
* [G. Guzhvina] Gromov’s pinching constant, arXiv:0804.0201v1
* [J. Eschenburg] Local convexity and non-negative curvature — Gromov’s proof of the sphere theorem. Invent. Math. 84 (1986), no. 3, 507–522.
* [Hang–Wang] Hang, F.; Wang, X., A Rigidity Theorem for the Hemisphere arXiv:0711.4595
* [S. Ivanov] an answer at http://mathoverflow.net/questions/7732
* [K. Kwun] Uniqueness of the open cone neighborhood, Proc. Araer. Math. Soc. 15 (1964) 476–473
* [J. Liberman] Geodesic lines on convex surfaces. C. R. (Doklady) Acad. Sci. URSS (N.S.) 32, (1941). 310–313.
* [J. Lohkamp] Lohkamp, J. ???
* [A. Lytchak] ???
* [A. Milka] Multidimensional spaces with polyhedral metric of non-negative curvature. I. (Russian) Ukrain. Geometr. Sb. Vyp. 5–6 1968 103–114.
* [Micallef–Moore] Micallef, M. J.; Moore, J. D., Minimal two-spheres and the topology of manifolds with positive curvature on totally isotropic two-planes. Ann. of Math. 127 pp.199–227 1988
* [Ovsienko–Tabachnikov] Ovsienko, V., Tabachnikov, S., Projective differential geometry, old and new: from Schwarzian derivative to cohomology of diffeomorphism groups, Cambridge Univ. Press, 2005.
* [D. Panov]
1. 1.
Parabolic curves and gradient mappings. Local and global problems of
singularity theory (Russian). Tr. Mat. Inst. Steklova 221 (1998), 271–288.
2. 2.
Polyhedral Käler Manifolds. Geometry and Topology. ???
* [G. Perelman] Proof of the soul conjecture of Cheeger and Gromoll. J. Differential Geom. 40 (1994), no. 1, 209–212.
* [V. Sahovic] Approximations of Riemannian Manifolds with Linear Curvature Constraints, Dissertation 2009.
* [A. Shen] ??? Math. Intelligencer 21 (1999), no. 3.
* [S. Tabachnikov] The tale of a geometric inequality, MASS selecta: teaching and learning advanced undergraduate mathematics, AMS Bookstore, 2003, 257–262
* [L. Wayne] The Pseudo-Arc, Bol. Soc. Mat. Mexicana Volume 5. (1999), 25–77.
* [A. Weinstein]
1. 1.
A fixed point theorem for positively curved manifolds. J. Math. Mech. 18
1968/1969 149–153.
2. 2.
Positively curved $n$-manifolds in $\mathbb{R}^{n+2}$. J. Differential
Geometry 4 1970 1–4.
* [B. Wilking] On fundamental groups of manifolds of nonnegative curvature. Differential Geom. Appl. 13 (2000), no. 2, 129–165.
* [S. Yau] Non-existence of continuous convex functions on certain Riemannian manifolds. Math. Ann. 207 (1974), 269–270.
* [V. Zalgaller] Isometric embedding of polyhedra. (Russian) Dokl. Akad. Nauk SSSR 123 1958 599–601.
|
arxiv-papers
| 2009-06-01T14:04:59 |
2024-09-04T02:49:03.071780
|
{
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"authors": "Anton Petrunin",
"submitter": "Anton Petrunin",
"url": "https://arxiv.org/abs/0906.0290"
}
|
0906.0386
|
# Simulation of Beam-Beam Effects and Tevatron Experience
A. Valishev valishev@fnal.gov Yu. Alexahin V. Lebedev Fermilab, Batavia, IL
60510, USA D. Shatilov BINP SB RAS, Novosibirsk, 630090, Russia
###### Abstract
Effects of electromagnetic interactions of colliding bunches in the Tevatron
have a variety of manifestations in beam dynamics presenting vast
opportunities for development of simulation models and tools. In this paper
the computer code for simulation of weak-strong beam-beam effects in hadron
colliders is described. We report the recent operational experience, explain
major effects limiting the collider performance and compare results of
observations and measurements with simulations.
###### pacs:
29.27.Bd
## I Introduction
Peak luminosity of the Tevatron reached 3.5$\times 10^{32}$ cm-2s-1 which
exceeds the original Run II goal of 2.7$\times 10^{32}$. This achievement
became possible due to numerous upgrades in the antiproton source, injector
chain, and in the Tevatron collider itself. The most notable rise of
luminosity came from the commissioning of electron cooling in the recycler
ring and advances in the antiproton accumulation rate. Starting from 2007, the
intensity and brightness of antiprotons delivered to the collider greatly
enhanced the importance of beam-beam effects. Several configurational and
operational improvements in the Tevatron have been planned and implemented in
order to alleviate these effects and allow stable running at high peak
luminosities.
Since the publication of paper Shiltsev et al. (2005) that gave a detailed
summary of beam dynamics issues related to beam-beam effects, the peak
luminosity of Tevatron experienced almost a tree-fold increase. In the present
article we provide an updated view based on the recent years operation
(Section II).
Development of a comprehensive computer simulation of beam-beam effects in the
Tevatron started in 1999. This simulation proved to be a useful tool for
understanding existing limitations and finding ways to mitigate them. In
Section IV the main features of the code LIFETRAC are described. In Sections
V-VI we summarize our experience with simulations of beam-beam effects in the
Tevatron, and cross-check the simulation results against various experimental
data and analytical models. We also correlate the most notable changes in the
machine performance to changes of configuration and beam conditions, and
support the explanations with simulations.
## II Overview of Beam-Beam Effects
A detailed description of the Tevatron collider Run II is available in other
sources Run . Here only the essential features important for understanding of
beam dynamics are provided.
Tevatron is a superconducting proton-antiproton collider ring in which beams
of the two species collide at the center of mass energy of $2\times 0.98$ TeV
at two experiments. Each beam consists of 36 bunches grouped in 3 trains of 12
with 396 ns bunch spacing and 2.6 $\mu$s abort gaps between the trains. The
beams share a common vacuum chamber with both beams moving along helical
trajectories formed by electrostatic separators. Before the high energy
physics collisions have been initiated, the proton and antiproton beams can be
moved longitudinally with respect to each other, which is referred to as
cogging. This configuration allows for 72 interactions per bunch each turn
with the total number of collision points in the ring equal to 138. The total
number of collision points is determined by the symmetry of bunch filling
pattern.
At the peak performance Tevatron operates with $N_{p}=2.8\cdot 10^{11}$
protons and $N_{a}=0.9\cdot 10^{11}$ antiprotons per bunch. The normalized
transverse 95% beam emittances are $\varepsilon_{p}=18\cdot 10^{-6}$m for
protons and $\varepsilon_{a}=7\cdot 10^{-6}$m for antiprotons. Proton and
antiproton bunch length at the beginning of a high energy physics (HEP) store
is 52 cm and 48 cm, respectively. Parameters of the beams are mostly
determined by the upstream machines.
The value of $\beta$-function at the main collision points ($\beta^{*}$) is
0.28 m. Betatron tunes are $Q_{x}=20.584$, $Q_{y}=20.587$ for protons and
$Q_{x}=20.575$, $Q_{y}=20.569$ for antiprotons.
A typical collider fill cycle is shown in Figure 1. First, proton bunches are
injected one at a time on the central orbit. After that, the helix is opened
and antiproton bunches are injected in batches of four. This process is
accompanied by longitudinal cogging after each 3 transfers. Then the beams are
accelerated to the top energy (85 s) and the machine optics is changed to
collision configuration in 25 steps over 120 seconds (low-beta squeeze). The
last two stages include initiating collisions at the two main interaction
points (IP) and removing halo by moving in the collimators.
Figure 1: Collider fill cycle for store 5989.
It has been shown in machine studies that beam losses up the ramp and through
the low-beta squeeze are mainly caused by beam-beam effects Shiltsev et al.
(2005). In the HEP mode, the beam-beam induced emittance growth and particle
losses contribute to the faster luminosity decay. Figure 2 summarizes the
observed losses of luminosity during different stages of the collider cycle.
Figure 2: Luminosity loss budget over a 3 year period. The labels mark: 1.
Commissioning of electron cooling. 2. Installation of extra separators and new
collision helix. 3\. Antiproton accumulation rate. 4. Correction of second-
order chromaticity. 5. Implementation of antiproton emittance blowup.
### II.1 Beam-beam effects at injection
During injection the long range (also referred to as parasitic) beam-beam
effects cause proton losses (currently 5 to 10%). At the same time the
antiproton life time is very good and only a fraction of a per cent are lost.
Observations show that mainly off momentum particles are lost (Fig. 3) and the
betatron tune chromaticity $C=dQ/d\delta$, where $\delta=\Delta p/p$ is the
relative momentum deviation, has a remarkable effect. Early in Run II the
chromaticity had to be kept higher than 8 units in order to maintain coherent
stability of the intense proton beam, but after several improvements aimed at
reduction of the machine impedance the chromaticity is about 3 units Ivanov et
al. (2003, 2005); Ranjbar and Ivanov (2008). Figure 3 shows an interesting
feature in the behavior of two adjacent proton bunches (no. 20 and 21). Spikes
in the measured values are instrumental effects labeling the time when the
beams are cogged. Before the first cogging the bunches have approximately
equal life time. After the first cogging bunch 20 exhibits faster decay, and
bunch 21 after the second. Analysis of the collision patterns for these
bunches allowed to pinpoint a particular collision point responsible for the
life time degradation. The new injection helix has been implemented late in
2007 which improved the proton life time Moore et al. (2005); Alexahin (2007).
Figure 3: Intensity and length of proton bunches no. 20 and 21 during
injection of antiprotons.
### II.2 Low-beta squeeze
During the low-beta squeeze two significant changes occur - the $\beta^{*}$
value is being gradually decreased from $\sim$1.5 m to 0.28 m (hence the name
squeeze) and the helical orbits change their shape and polarity from injection
to collision configuration. The latter poses a serious limitation since the
beams separation at several long range collision points briefly decreases from
5-6$\sigma$ to $\sim$2$\sigma$. At this moment a sharp spike in losses is
observed.
Another important operational concern was the tight aperture limitation in one
of the two final focus regions (CDF). With dynamically changing orbit and
lattice parameters the local losses were often high enough to cause a quench
of the superconducting magnets even though the total amount of beam loss is
small ($\sim$ 1%). The aperture restriction has been located and fixed in
October of 2008.
Besides orbit stability two other factors were found to be important in
maintaining low losses through the squeeze: antiproton beam brightness and
betatron coupling. Figure 4 shows the dependence of proton losses on the
antiproton beam brightness. Large amount of stores lost in this stage of the
cycle caused by increase of the antiproton beam brightness after the 2007
shutdown demanded the commissioning of the antiproton emittance control system
Tan .
Figure 4: Proton losses in low-beta squeeze vs. antiproton beam brighness
$36\cdot N_{a}/\varepsilon_{a}$.
### II.3 High energy physics
After the beams are brought into collisions at the main IPs, there are two
head-on and 70 long range collision points per bunch. Beam-beam effects caused
by these interactions lead to emittance growth and particle losses in both
beams.
During the running prior to the 2006 shutdown the beam-beam effects at HEP
mostly affected antiprotons. The long range collision points nearest to the
main IPs were determined to be the leading cause for poor life time.
Additional electrostatic separators were installed in order to increase the
separation at these IPs from 5.4 to 6$\sigma$ Alexahin (2007). Also, the
betatron tune chromaticity was decreased from 20 to 10 units. Since then, the
antiproton life time is dominated by losses due to luminosity and no emittance
growth is observed provided that the betatron tune working point is well
controlled.
Electron cooling of antiprotons in the Recycler and increased antiproton
staching rate drastically changed the situation for protons. Figure 5 shows
the evolution of total head-on beam-beam tune shift $\xi$ for protons and
antiprotons. Note that prior to the 2006 shutdown the proton $\xi$ was well
under 0.01 and big boost occurred in 2007 when both beam-beam parameters
became essentially equal. It was then when beam-beam related losses and
emittance blowup started to be observed in protons.
Figure 5: Head-on beam-beam tune shift vs. time.
Our analysis showed that deterioration of the proton life time was caused by a
decrease of the dynamical aperture for off-momentum particles due to head-on
collisions (see Sec. VI). It was discovered that the Tevatron optics had large
chromatic perturbations, e.g. the value of $\beta^{*}$ for off-momentum
particles could differ from that of the reference particle by as much as 20%.
Also, the high value of second order betatron tune chromaticity
$d^{2}Q/d\delta^{2}$ generated a tune spread of $\sim$0.002. A rearrangement
of sextupoles in order to correct the second order chromaticity was planned
and implemented before the 2007 shutdown Valishev et al. (2007). Figure 6
demonstrates the effect of this modification on integrated luminosity. Since
the dependence of luminosity on time is very well fitted by a
$L_{0}/(1+t/\tau)$ function, one can normalize the luminosity integral for a
given store to a fixed length $T_{0}$ by using the expression $L_{0}\tau\cdot
ln(1+T_{0}/\tau)$ Shiltsev and McCrory (2005). Here $L_{0}$ is the initial
luminosity, and $\tau$ is the luminosity life time. One can see that after the
modification the saturation at luminosities above $2.6\times 10^{32}$ was
mitigated and the average luminosity delivered to experiments increased by
$\sim$ 10%.
Figure 6: Luminosity integral normalized by 24 h vs. initial luminosity. Blue
points and curve - before second order chromaticity correction, red - after
correction. Black line represents the ultimate integral for the given beam
parameters in the absence of beam-beam effects (see Sec. III).
Another step in the proton $\xi$ happened after the 2007 shutdown when the
transverse antiproton emittance decreased because of improvements in injection
matching. The total attained head-on beam-beam tune shift for protons exceeded
that of antiprotons and reached 0.028. This led to high sensitivity of the
proton life time to small variations of the betatron tunes, and to severe
background conditions for the experiments. The reason is believed to be the
large betatron tune spread generated by collisions of largely different size
bunches Syphers . Indeed, at times the antiproton emittance was a factor of 5
to 6 smaller than the proton emittance.
To decrease the proton to antiproton emittance ratio a system has been
commissioned which increases the antiproton emittance after the top energy is
reached by applying wide band noise to a directional strip line (line 5 in
Fig. 2) Tan . Currently, the optimal emittance ratio is $\sim$3.
Since the majority of our efforts was targeting beam-beam effects in HEP mode,
we concentrate on this topic in the remaining part of this paper. Discussion
of long range effects at injection and coherent effects Alexahin (2005) is
left out of the scope of this report.
## III Store analysis package
Beam-beam interaction is not the single strongest effect determining evolution
of beam parameters at collisions. There are many sources of diffusion causing
emittance growth and particle losses, including but not limited to intrabeam
scattering, noise of accelerating RF voltage, and scattering on residual gas.
Parameters of these mechanisms were measured in beam studies, and then a model
was built in which the equations of diffusion and other processes are solved
numerically Lebedev (2003). This model is able to predict evolution of the
beam parameters in the case of weak beam-beam effects. When these effects are
not small, it provides a reference for evaluation of their strength. We use
this approach on a store-by-store basis to monitor the machine performance in
real time sto because such calculations are very fast compared to a full
numerical beam-beam simulation. Figure 7 presents an example comparison of
evolution of beam parameters in an actual high luminosity store to
calculations. Note that there is no transverse emittance blow up in both
beams, and the emittance growth is determined by processes other than beam-
beam interaction. The same is true for antiproton intensity and bunch length.
The most pronounced difference between the observation and the model is seen
in the proton intensity. Beam-beam effects cause proton life time degradation
during the initial 2-3 hours of the store until the proton beam-beam tune
shift drops from 0.02 to 0.015. The corresponding loss of luminosity integral
is about 5%.
Figure 7: Observed beam parameters in store 6683 compared to store analysis
calculation (model). $L_{0}=3.5\cdot 10^{32}$ cm-2 s-1. a) Single bunch
Luminosity and Luminosity integral. b) Intensity of proton bunch no. 6 and of
antiproton bunch colliding with it (no. 13). c) Bunch lengths. d) Horizontal
95% normalized bunch emittances.
## IV Weak-strong code LIFETRAC
Initially, the beam-beam code LIFETRAC was developed for simulation of the
equilibrium distribution of the particles in circular electron-positron
colliders Shatilov (1996). In 1999 new features were implemented
that allow simulating non-equilibrium distributions, for example proton
beams. In this case the goal of the simulation is not to obtain the equilibrium
distribution but to observe how the initial distribution changes with
time. The number of simulated particles can vary from $10^{3}$ to
$10^{6}$; usually it is set to $(5\div 10)\cdot 10^{3}$. The tracking time is
divided into “steps”, typically $10^{3}\div 10^{5}$ turns each. The statistics
obtained during the tracking (1D histograms, 2D density in the space of
normalized betatron amplitudes, luminosity, beam sizes and emittances) is
averaged over all particles and all turns for each step. Thus, a sequence of
frames representing evolution of the initial distribution is obtained.
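A minimal sketch of this step-averaged bookkeeping is given below; `one_turn_map` and the choice of recorded moments are placeholders, not the actual LIFETRAC routines:

```python
import numpy as np

def track_in_steps(particles, one_turn_map, n_steps=100, turns_per_step=10_000):
    """Track an (N, 6) array of macroparticles and return one 'frame' of
    statistics per step, averaged over all particles and all turns."""
    frames = []
    for _ in range(n_steps):
        acc = np.zeros(6)
        for _ in range(turns_per_step):
            particles = one_turn_map(particles)
            acc += np.mean(particles**2, axis=0)   # second moments this turn
        frames.append(acc / turns_per_step)        # average over the step
    return frames
```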
Another important quantity characterizing the beam dynamics is the intensity
life time. It is calculated by placing an aperture restriction in the machine
and counting the particles that reach this limit. The initial and final coordinates
of each lost particle are saved. This information is valuable for analysis of
various beam dynamics features.
The initial 6D distribution of macroparticles can be either Gaussian (by
default) or read from a separate text file. In addition, the macroparticles may
have different “weights”, which allows the beam tails to be represented more
reliably with a limited number of particles. Usually we simulate a Gaussian
distribution with weights: particles initially located in the core region carry
a larger weight, while the “tail” particles carry a smaller weight but are more
numerous.
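One possible way to realize such weighted sampling is sketched below; the particular core/tail split, the amplitude criterion, and the weights are our own illustrative choices, not the LIFETRAC defaults:

```python
import numpy as np

def weighted_gaussian_sample(n_core=2000, n_tail=8000, tail_cut=2.5, seed=0):
    """Gaussian 6D macroparticles with statistical weights: few core particles
    with large weight, many tail particles with small weight."""
    rng = np.random.default_rng(seed)

    def draw(keep, n):
        out = []
        while len(out) < n:
            z = rng.normal(size=6)
            if keep(np.hypot(z[0], z[2])):   # transverse amplitude as criterion
                out.append(z)
        return np.array(out)

    core = draw(lambda a: a <= tail_cut, n_core)
    tail = draw(lambda a: a > tail_cut, n_tail)
    p_tail = np.exp(-tail_cut**2 / 2)        # Rayleigh tail probability
    weights = np.concatenate([np.full(n_core, (1.0 - p_tail) / n_core),
                              np.full(n_tail, p_tail / n_tail)])
    return np.vstack([core, tail]), weights
```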
In the present bunch pattern (3 trains of 12 bunches) there are two main IPs
and 70 long range collision points. When performing transformation through a
main IP, the “strong” bunch is divided into slices longitudinally. The higher
the orders of the betatron resonances expected to affect
the distribution, the greater the number of slices must be. In our
simulations 12 slices were used in the main IPs, where the beta-functions are
approximately equal to the bunch length, and only one slice at the long range
collision points, where the beta-functions are much greater and one can neglect the
betatron phase advance over the bunch length.
The transverse density distributions within the “strong” slices are bi-Gaussian,
which allows the well-known formulae Hirata et al. (1992) for the 6D
symplectic beam-beam kick to be applied. However, a simple modification allows simulating
non-Gaussian strong bunches: the strong bunch is represented as a
superposition of a few (up to three) Gaussian distributions with different
betatron emittances, and the kicks from all these “harmonics” are summed.
The calculation time increases somewhat (not very significantly),
but the transformation remains 6D symplectic.
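The full 6D bi-Gaussian kick of Hirata et al. is too lengthy to reproduce here; the fragment below only sketches the superposition idea for a simplified round-beam kick, so the functional form of `round_gaussian_kick` is an assumption and not the map used in LIFETRAC:

```python
import numpy as np

def round_gaussian_kick(x, y, sigma, strength):
    """Transverse kick from a single round Gaussian slice (simplified form):
    angle change ~ strength * (1 - exp(-r^2 / 2 sigma^2)) / r^2 * (x, y)."""
    r2 = x**2 + y**2
    f = strength * (1.0 - np.exp(-r2 / (2.0 * sigma**2))) / np.where(r2 > 0.0, r2, 1.0)
    return f * x, f * y

def kick_from_harmonics(x, y, harmonics):
    """Sum the kicks of up to three Gaussian 'harmonics', given as
    (sigma, strength) pairs with different effective emittances."""
    dxp, dyp = 0.0, 0.0
    for sigma, strength in harmonics:
        kx, ky = round_gaussian_kick(x, y, sigma, strength)
        dxp, dyp = dxp + kx, dyp + ky
    return dxp, dyp
```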
### IV.1 Tevatron optics
The parasitic collisions in the Tevatron play a significant role in the beam
dynamics. In order to account for their contribution correctly, accurate
knowledge of the machine lattice of the whole ring, with all distortions, beta
beatings, coupling, etc., is required. This necessitated the construction of a
realistic model of the machine lattice based on beam measurements. The most
effective method proved to be orbit response matrix analysis Sajaev et al.
(2005); Valishev et al. (2006); Lebedev et al. (2006).
The model lattice is built in the optics code OptiM Lebedev . Both OptiM and
LIFETRAC treat betatron coupling using the same coupled beta-functions
formalism Lebedev and Bogacz . This allows the linear transport matrix between
any two points to be easily derived from the coupled lattice functions and
phase advances.
A set of scripts has been created enabling fast creation of input files for
the beam-beam simulation. These programs automate calculation of azimuthal
positions of interaction points for the chosen bunch and extraction of the
optics parameters. In the end, the machine optics is represented by a set of
6D linear maps between the interaction points.
It was estimated that resonances generated by known Tevatron nonlinearities,
such as the final focus triplets and lattice sextupoles, are much weaker than
those driven by beam-beam collisions at the present betatron tune working
point. Hence, inclusion of nonlinear lattice elements into the simulation was
deemed unnecessary.
### IV.2 Chromaticity
Although linear optics is used for the machine lattice model, there are two
nonlinear lattice effects which are considered significant for the beam-beam
behaviour and were included in the simulations. These are the chromaticities of
the beta-functions excited at the main IPs and the chromaticities of the betatron
tunes. In the Hamiltonian theory the chromaticity of the beta-functions does not
come from the energy-dependent focusing strength of the quadrupoles (as one would
intuitively expect) but from drift spaces where the transverse momentum is
large (low-beta regions). The symplectic transformations for this are:
$$X \rightarrow X-L\cdot X^{\prime}\cdot\frac{\Delta p}{p},\qquad
Y \rightarrow Y-L\cdot Y^{\prime}\cdot\frac{\Delta p}{p},\qquad
Z \rightarrow Z-L\cdot\left(X^{\prime 2}+Y^{\prime 2}\right)/2,$$
where $X$, $Y$, and $Z$ are the particle coordinates, and $L$ is the
“chromatic drift” length. Then, it is necessary to adjust the betatron tune
chromaticities which are also affected by “chromatic drift”. For that, an
artificial element (insertion) is used with the following Hamiltonian:
$H=I_{x}\cdot(2\pi Q_{x}+C_{x}\frac{\Delta p}{p})+I_{y}\cdot(2\pi
Q_{y}+C_{y}\frac{\Delta p}{p}),$
where $I_{x}$ and $I_{y}$ are the action variables, $Q_{x}$ and $Q_{y}$ are
the betatron tunes, $C_{x}$ and $C_{y}$ are the [additions to the]
chromaticities of betatron tunes.
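Both transformations are straightforward to code; the sketch below is a minimal rendering of the two maps quoted above (the variable names and the normalized-coordinate form of the tune rotation are our choices):

```python
import numpy as np

def chromatic_drift(X, Xp, Y, Yp, Z, delta, L):
    """Symplectic 'chromatic drift' of length L: coordinates are shifted,
    momenta Xp, Yp and delta = dp/p are left unchanged."""
    return (X - L * Xp * delta, Xp,
            Y - L * Yp * delta, Yp,
            Z - L * (Xp**2 + Yp**2) / 2.0, delta)

def chromatic_tune_rotation(x, px, Q, C, delta):
    """Insertion H = I*(2*pi*Q + C*dp/p): rotation in normalized phase space
    by a phase advance that depends linearly on the momentum deviation."""
    mu = 2.0 * np.pi * Q + C * delta
    c, s = np.cos(mu), np.sin(mu)
    return c * x + s * px, -s * x + c * px
```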
### IV.3 Diffusion and Noise
Diffusion and noise are simulated by a single random kick applied to the
macroparticles once per turn. The strength of the kick on the different coordinates is
given by a symmetric matrix representing the correlations between Gaussian
noises. In the Tevatron, the diffusion is rather slow in terms of the computer
simulation – the characteristic time for the emittance change is around an
hour, or $\sim 10^{8}$ turns. In order to match the diffusion and the computer
capabilities, the noise was artificially increased by three orders of
magnitude.
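A minimal sketch of such a correlated once-per-turn kick, assuming the symmetric matrix is supplied as a positive-definite covariance matrix (our assumption about its normalization):

```python
import numpy as np

def make_noise_kick(cov, seed=0):
    """Return a function applying one correlated Gaussian kick per turn.
    `cov` is a symmetric positive-definite 6x6 matrix of noise correlations."""
    chol = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)

    def kick(coords):                     # coords: (N, 6) macroparticle array
        return coords + rng.normal(size=coords.shape) @ chol.T

    return kick
```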
We justify this approach below. In contrast to electron-positron colliders,
there is no damping in hadron colliders. As a result, during the store the
effect of the beam-beam interaction on the emittance growth needs to be
minimized and made small relative to other diffusion mechanisms such as
intra-beam scattering (IBS), scattering on the residual gas, and diffusion due
to RF phase noise. We will call these the extrinsic diffusion, to distinguish
from the diffusion excited by beam-beam effects. For the present Tevatron
parameters the extrinsic diffusion sets the luminosity lifetime to be about 10
hours at the beginning of the store. IBS dominates both transverse and
longitudinal diffusions in the case of protons while its relative effect is
significantly smaller for antiprotons because of $\sim$5 times smaller
intensity.
Table 1 summarizes the lifetimes of major beam parameters obtained with the diffusion
model Lebedev and Burov (2004) for a typical Tevatron store with a
luminosity of $9\times 10^{31}$ cm-2 s-1. There are many parameters in
Tevatron which are beyond our control and therefore each store is different.
For good stores, the beam-beam effects make a comparatively small contribution
to the emittance growth, yielding a luminosity lifetime in the range of 7-8 hours
and a 10-15% loss in the luminosity integral. The planned threefold increase in
antiproton intensity will amplify the beam-beam effects. That, if not
addressed, can cause unacceptably large background in detectors and reduce
integrated luminosity. In this paper we discuss the results of numerical
simulations aimed at understanding the major factors contributing to the beam-beam
interaction and possible ways of mitigating them.
Parameter (lifetime, hour) | Protons | Antiprotons
---|---|---
Luminosity | 9.6 | 9.6
Transverse emittance $(d\epsilon/dt)/\epsilon$ [hor./vert.] | -17 / -18 | -52 / -46
Longitudinal emittance | -8 | -26
Intensity | 26 | 155
Table 1: Lifetimes for major beam parameters obtained with diffusion model.
Under the real conditions at the Tevatron the emittance growth rate is small, and
exact simulation of beam-beam effects would require tracking for billions of
turns. That is well beyond the capabilities of present computers. Fortunately, the
extrinsic diffusion is large compared with the beam-beam diffusion and results
in the loss of phase correlation after about 50,000 turns.
The external noise plays an important role in the particle dynamics: it provides
particle transport in the regions of phase space which are free of resonance
islands.
To make this transport faster we can artificially increase the noise level,
assuming that its effect scales as the noise power multiplied by the number of turns.
If we choose it such that the noise alone gives 10% emittance growth in $10^{6}$
turns (we use this level as the reference), then this number of simulated turns
will correspond to $\sim 5$ h of time in the Tevatron.
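In other words, the noise power is scaled so that (noise power) x (number of turns) is preserved; a one-line helper makes the bookkeeping explicit (the specific turn numbers in the comment are only indicative):

```python
def scaled_noise_power(D_real, turns_real, turns_sim):
    """Keep noise_power * turns constant when compressing the store into
    fewer simulated turns."""
    return D_real * turns_real / turns_sim

# e.g. ~1e9 real turns represented by 1e6 simulated turns -> noise x ~1000,
# the "three orders of magnitude" quoted above
```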
To verify this approach we studied the effect of the noise level on luminosity
using reconstructed optics.
Fig. 8 presents the results of a tune scan along the main diagonal with the
reference noise level and without noise. The effect of noise on the luminosity
corresponds to its level, with the exception of the point $Q_{y}=0.575$, where it
was larger due to cooperation with strong 5th order resonances.
Figure 8: Ratio of luminosity after a fixed time ($t=2\cdot 10^{6}$ turns) to
the initial luminosity vs. betatron tune. Circles - $D_{noise}=0$, diamonds -
the emittance change due to extrinsic diffusion after $t$ is 20%.
To study this cooperation in more detail we performed tracking at this working
point with different noise levels. Fig. 9 shows the luminosity reduction in
$2\cdot 10^{6}$ turns (diamonds) and a fit made using just 3 points, with
relative noise levels 0.5, 1, and 2.
The fit works fine for the higher noise levels, but predicts a somewhat faster
luminosity decay in the absence of noise than is actually observed in tracking. This
means that there are regions in phase space which particles cannot cross
(within the tracking time) without assistance from the external noise, so that
the simple rule $D_{total}=D_{res}+D_{noise}$ does not apply. However, such
“blank spaces” may contain isolated resonance islands which would show up on a
longer time scale with the real level of external noise.
Figure 9: Ratio of luminosities vs. noise level.
The applicability of this rule at the reference noise level indicates that
(with the chosen number of turns) no such “blank spaces” were left, so we obtain
more reliable predictions.
### IV.4 Program features
Since the beam-beam code uses the “weak-strong” model, it can be very
efficiently parallelized. Each processor tracks its own set of particles and
the nodes need to communicate very rarely (at the end of each step), just to
gather the obtained statistics. Hence, the productivity grows almost linearly
with the number of nodes.
There are also two auxiliary GUI codes. The first one automates production of
the LIFETRAC input files for different bunches from the OptiM machine lattice
files. The second one is dedicated to browsing the LIFETRAC output files and
presenting the simulation results in a text and graphical (histogram) form.
We have validated the code using available experimental data. As an example,
Figs. 10 and 11 show a good reproduction of the two distinct effects in bunch
to bunch differences caused by beam-beam effects: the variation of the vertical bunch
centroid position due to long range dipole kicks, and the variation of the transverse
emittance blowup caused by differences in tunes and chromaticities.
Figure 10: Bunch by bunch antiproton vertical orbit. Figure 11: Bunch by
bunch antiproton emittance growth. Measured in store 3554 (red) and simulated
with LIFETRAC (blue).
The numerical simulation was used to justify the decrease of the antiproton
betatron tune chromaticity and the reduction of $\beta^{*}$ from 0.35 m to 0.28 m
(both in 2005). The importance of the separation at the long range collision points
nearest to the main IPs was also demonstrated.
Planning for the increase in the number of antiprotons available to the collider,
we identified the large chromaticity of $\beta^{*}$ as a possible source of
the proton life time deterioration. Figure 12 shows the beam-beam induced
proton life time for different values of $\xi$, and demonstrates the positive
effect of corrected chromatic $\beta^{*}$.
Figure 12: Proton intensity evolution for different values of beam-beam
parameter per IP.
Simulations revealed an interesting feature in the behavior of the proton
bunch length at high values of $\xi$: the so-called bunch shaving, when the
bunch length starts to decrease after head-on collisions are initiated instead of
the steady growth predicted by the diffusion model (Fig. 13). This behavior was
observed multiple times during HEP stores in 2007, being especially pronounced
when the vertical proton betatron tune was set too high.
Figure 13: Effect of corrected second order chromaticity on the proton bunch
length evolution.
The significance of the parasitic collisions (PCs) is illustrated in Fig. 14, where the bunch intensity
is plotted vs. time ($2\times 10^{6}$ turns correspond to about 15 hours in
the Tevatron) with the complete set of IPs and PCs, and with PCs turned off.
It is clear that PCs dominate the particle losses.
Figure 14: Normalized intensity of bunch #6 simulated in the presence (solid
line) and in the absence (dashed line) of long range collisions.
### IV.5 Optics Errors
In the simulations we used 3 major Tevatron optics modifications:
* •
“design” optics with ideal parameters of the main Interaction Points (IP),
zero coupling.
* •
“january” optics, which was in effect until March 2004. This optics was
measured in January 2004 and had significant distortions in the main IPs
(unequal betas, beam waists shifted from the IP) and betatron coupling.
* •
“june” optics, introduced in March 2004, in which the distortions were corrected.
A comparison of the three cases is shown in Fig. 15. This plot shows that the
modifications to the optics implemented in March 2004 brought the optics close
to the design. Additional simulations revealed that the main source of
particle losses was the long range collisions (PC) nearest to the main IPs.
Increasing the beam separation at these points and correcting the phase
advances cured the high antiproton losses.
Figure 15: Intensity of bunch #6 vs. time for different types of optics.
$\xi=0.01$, $Q_{x}=0.57$, $Q_{y}=0.56$.
### IV.6 Scallops
Another illustration of the validity of the code is the simulation of scallops. Fig.
11 shows the simulated pattern of the antiproton emittance blow-up during the
first minute of the store. We also demonstrated that the scallops can be reduced
by moving the working point farther from the 5th order resonance.
### IV.7 Chromaticity
Reducing the betatron tune chromaticity can also be a very powerful instrument
in decreasing the particle losses. The results in Fig. 16 demonstrate that
changing the tune chromaticity from the present 15-20 units to 5-10 units may
significantly improve the beam life time. This can give about 10% in the
luminosity integral.
Figure 16: Evolution of the bunch intensity for various values of betatron
tune chromaticity. June optics, $Q_{x}=0.58$, $Q_{y}=0.575$, $\xi=0.01$.
### IV.8 Further $\beta^{*}$ Reduction
An improvement which can be relatively easily implemented is the further
reduction of the beta-function at the main IPs. By decreasing $\beta^{*}$
from 35 cm to 28 cm, one can gain 10% both in peak luminosity and in luminosity
integral.
### IV.9 Beams Separation, 23 Bucket Spacing
Since parasitic collisions contribute strongly to beam-beam effects, and
the nearest parasitic collision points dominate, one could increase the separation
of the beams at these points, thus making their effect weaker. This could be done
by changing the bunch filling pattern from 3 trains of 12 bunches with 21 RF
buckets between bunches to 3 x 11 bunches with a separation of 23 RF buckets,
conserving the overall intensity of antiprotons. In that case the separation
at the nearest PCs increases from 8.2 beam sigmas to 13.6 beam sigmas. Figure
17 shows that such a modification allows the attainable beam-beam
parameter for antiprotons to be increased from 0.01 per IP to 0.0125 without loss of
efficiency.
Figure 17: Vertical emittance of bunch #6 vs. time for the 21 and 23 bucket
spacing configurations and different values of beam-beam parameter.
## V New collision helix
As mentioned, the strong betatron resonances affecting the collider
performance are caused by beam-beam effects. It was shown that the strength of
the 7-th order resonance is determined by the long range collisions Alexahin
(2007). Also, analytical calculations and numerical simulations predicted that
increasing the beam separation at the parasitic collision points nearest to
the main IPs would give the largest benefit. To achieve this, two extra
electrostatic separators were installed during the 2006 shutdown. As a
result of their commissioning, the separation at the long range collision points upstream and
downstream of CDF and D0 increased by 20% (Fig. 18).
Figure 18: Radial beam separation at the collision helix in units of the beam size vs. azimuth starting from the CDF IP. Blue - design helix, red - before installation of the new separators, green - present.
Table 2: Radial separations at the first long range collision points in units of the beam size.
| CDF u.s. | CDF d.s. | D0 u.s. | D0 d.s.
---|---|---|---|---
Before | 5.4 | 5.6 | 5.0 | 5.2
After | 6.4 | 5.8 | 6.2 | 5.6
The increased separation manifested itself in an improved proton lifetime. Figure 19
shows a comparison of the single bunch proton intensity for two HEP stores
before and after commissioning of the new helix. The initial intensities and
emittances of antiprotons in these stores were close, which allows a direct
comparison.
Figure 19: Single bunch proton intensity in two HEP stores. 4581 with the old
helix, 4859 with the new helix.
A noticeable change in the bunch length behavior can be observed in Fig. 20.
Note that on the old helix protons experienced significant bunch shortening.
Figure 20: Proton and antiproton bunch length in two HEP stores. 4581 with
the old helix, 4859 with the new helix.
Single bunch luminosity and luminosity integral for the same two stores are
shown in Fig. 21. As one can see, the luminosity lifetime in the new configuration
has improved substantially. The overall gain can be quantified in terms of the
luminosity integral over a fixed period of time (e.g. 24 hours) normalized by
the initial luminosity. The value of this parameter has increased by 16%.
Figure 21: Single bunch luminosity and luminosity integral for stores 4581
and 4859.
## VI Second order chromaticity
Increasing the beam separation mitigated the long range beam-beam effects.
However, with advances in the antiproton production rate, the initial
antiproton intensity at collisions has been rising continuously. The head-on beam-
beam parameter for protons was pushed up to 0.008 per IP, which made the head-
on beam-beam effects in the proton beam much more pronounced. One of the
possible ways for improvement is a major change of the betatron tune in order
to increase the available tune space. This, however, requires a significant
investment of machine time for optics studies and tuning. A partial
solution may be implemented by decoupling of the transverse and longitudinal
motion at the main IPs, i.e. by reducing the chromatic beta-function.
The value of the chromatic beta-function $(\Delta\beta/\beta)/(\Delta p/p)$ at
both IPs is -600, which leads to a beta-function change of 10% for a particle
with a 1$\sigma$ momentum deviation Valishev et al. (2007). Thus, a large
variation of focusing exists for particles in the bunch, giving rise to beam-
beam driven synchrobetatron resonances.
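As a quick consistency check (our arithmetic, not stated explicitly in the text), a 10% beta-function change at a $1\sigma$ momentum deviation with $(\Delta\beta/\beta)/(\Delta p/p)=-600$ implies an rms relative momentum spread of $\sigma_{\delta}\approx 0.10/600\approx 1.7\times 10^{-4}$.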
Numerical simulations with the weak-strong code predicted that elimination of the
chromatic beta-function at the main IPs would mitigate the deterioration of the
proton lifetime at the present values of antiproton intensity even without
switching to a new betatron tune working point. In Fig. 22 the simulated
proton bunch intensities are plotted for the cases of corrected and
uncorrected chromatic beta-function.
Figure 22: Normalized intensity of one proton bunch vs. time. Green line -
$\xi=0.01$, chromatic beta-function = -600. Blue line - $\xi=0.01$, chromatic
beta-function = 0. Red line - $\xi=0$. Numerical simulation.
In order to achieve the desired smaller beta-function chromaticity, a new
scheme of sextupole correctors in the Tevatron has been developed and
implemented in May 2007. The scheme uses the existing sextupole magnets split
into multiple families instead of just the two original SF and SD circuits. The
effect of introducing the new circuits is illustrated in Fig. 6.
## VII Summary and discussion
Over the past two years the Tevatron has routinely operated at values of the head-on
beam-beam tune shift exceeding 0.02 for both proton and antiproton beams. The
transverse emittance of antiprotons is a factor of 3 to 5 smaller than the
proton emittance. This creates significantly different conditions for the two
beams.
Beam-beam effects in antiprotons are dominated by long range interactions at
four collision points with minimal separation. After the separation at these
points was increased to 6$\sigma$, no adverse effects have been observed in
antiprotons at the present proton intensities.
In contrast, protons experience lifetime degradation due to head-on
collisions with the beam of smaller transverse size. Correction of the chromatic
$\beta$-function in the final focus and reduction of the betatron tune
chromaticity increased the dynamic aperture and improved the proton beam life time.
The simulation of beam-beam effects developed for the Tevatron correctly describes
many observed features of the beam dynamics, has predictive power, and has been
used to support changes in the machine configuration.
Further increase of the beam intensities is limited by the space available on
the tune diagram near the current working point. A change of the tune working
point from 0.58 to near the half integer resonance would allow as much as a 30%
increase in intensities but requires a lengthy commissioning period, which
makes it unlikely that this improvement will be realized during the time
remaining in Run II.
###### Acknowledgements.
This report is the result of hard work and dedication of many people at
Fermilab. The work was supported by the Fermi Research Alliance, under
contract DE-AC02-76CH03000 with the U.S. Dept. of Energy.
## References
* Shiltsev et al. (2005) V. Shiltsev, Y. Alexahin, V. Lebedev, P. Lebrun, R. S. Moore, T. Sen, A. Tollestrup, A. Valishev, and X. L. Zhang, Phys. Rev. ST Accel. Beams 8, 101001 (2005).
* (2) _Tevatron Run II Handbook_, http://www-bd.fnal.gov/runII.
* Ivanov et al. (2003) P. Ivanov, J. Annala, A. Burov, V. Lebedev, E. Lorman, V. Ranjbar, V. Scarpine, and V. Shiltsev, in _Proceedings of the Particle Accelerator Conference, Portland, OR, USA_ (2003), pp. 3062–3064.
* Ivanov et al. (2005) P. Ivanov, Y. Alexahin, J. Annala, V. Lebedev, and V. Shiltsev, in _Proceedings of the Particle Accelerator Conference, Knoxville, TN, USA_ (2005), pp. 2714–2716.
* Ranjbar and Ivanov (2008) V. H. Ranjbar and P. Ivanov, Phys. Rev. ST Accel. Beams 11, 084401 (2008).
* Moore et al. (2005) R. Moore, Y. Alexahin, J. Johnstone, and T. Sen, in _Proceedings of the Particle Accelerator Conference, Knoxville, TN, USA_ (2005), pp. 1931–1933.
* Alexahin (2007) Y. Alexahin, in _Proceedings of the Particle Accelerator Conference, Albuquerque, NM, USA_ (2007), pp. 3874–3876.
* (8) C.-Y. Tan, to be published.
* Valishev et al. (2007) A. Valishev, G. Annala, V. Lebedev, and R. Moore, in _Proceedings of the Particle Accelerator Conference, Albuquerque, NM, USA_ (2007), pp. 3922–3924.
* Shiltsev and McCrory (2005) V. Shiltsev and E. McCrory, in _Proceedings of the Particle Accelerator Conference, Knoxville, TN, USA_ (2005), pp. 2536–2537.
* (11) M. Syphers, _Beam-beam tune distributions with differing beam sizes_, Fermilab Beams Doc. 3031.
* Alexahin (2005) Y. Alexahin, in _Proceedings of the Particle Accelerator Conference, Knoxville, TN, USA_ (2005), pp. 544–548.
* Lebedev (2003) V. Lebedev, in _Proceedings of the Particle Accelerator Conference, Portland, OR, USA_ (2003), pp. 29–33.
* (14) http://www-bd.fnal.gov/SDAViewersServlets/valishev_sa_catalog2.html.
* Shatilov (1996) D. Shatilov, Part. Accel. 52, 65 (1996).
* Hirata et al. (1992) K. Hirata, H. Moshammer, and F. Ruggiero (1992), KEK Report 92-117.
* Sajaev et al. (2005) V. Sajaev, V. Lebedev, V. Nagaslaev, and A. Valishev, in _Proceedings of the Particle Accelerator Conference, Knoxville, TN, USA_ (2005), pp. 3662–3664.
* Valishev et al. (2006) A. Valishev, Y. Alexahin, J. Annala, V. Lebedev, V. Nagaslaev, and V. Sajaev, in _Proceedings of European Accelerator Coference, Edinburgh, Scotland_ (2006), pp. 2053–2055.
* Lebedev et al. (2006) V. Lebedev, V. Nagaslaev, A. Valishev, and V. Sajaev, Nucl. Instrum. Methods Phys. Res., Sect. A 558, 299 (2006).
* (20) V. Lebedev, http://www-bdnew.fnal.gov/pbar/organizationalchart/lebedev/OptiM/optim.htm.
* (21) V. Lebedev and S. Bogacz, http://www.cebaf.gov/~lebedev/AccPhys/.
* Lebedev and Burov (2004) V. Lebedev and A. Burov, in _Proceedings of the 33rd ICFA Advanced Beam Dynamics Workshop on High Intensity and High Brightness Hadron Beams, ICFA-HB2004, Bensheim, Germany_ (2004), pp. 350–354.
|
arxiv-papers
| 2009-06-01T21:54:50 |
2024-09-04T02:49:03.081645
|
{
"license": "Public Domain",
"authors": "A. Valishev, Yu. Alexahin, V. Lebedev, D. Shatilov",
"submitter": "Alexander Valishev",
"url": "https://arxiv.org/abs/0906.0386"
}
|
0906.0405
|
# Coupling of light from an optical fiber taper into silver nanowires
Chun-Hua Dong Xi-Feng Ren (renxf@ustc.edu.cn) Rui Yang Key Laboratory of
Quantum Information, University of Science and Technology of China, Hefei
230026, People’s Republic of China Jun-Yuan Duan Jian-Guo Guan State Key
Laboratory of Advanced Technology for Materials Synthesis and Processing,
Wuhan University of Technology, 122 Luoshi Road, Hubei, Wuhan 430070, People’s
Republic of China Guang-Can Guo Guo-Ping Guo (gpguo@ustc.edu.cn) Key
Laboratory of Quantum Information, University of Science and Technology of
China, Hefei 230026, People’s Republic of China
###### Abstract
We report the coupling of photons from an optical fiber taper to surface
plasmon modes of silver nanowires. The launch of propagating plasmons can be
realized not only at ends of the nanowires, but also at the midsection. The
degree of the coupling can be controlled by adjusting the light polarization.
In addition, we present the coupling of light into multiple nanowires from a
single optical fiber taper simultaneously. Our demonstration offers a novel
method for optimizing plasmon coupling into nanoscale metallic waveguides and
promotes the realization of highly integrated plasmonic devices.
###### pacs:
78.67.Lt, 73.20.Mf, 73.22.Lp
With the increasing attention to and progress of nanotechnology, the dimensions
of ultrafast transistors are now on the order of 50 nm. The pressing problem in
increasing the speed of microprocessors is carrying digital information from one
end of the microprocessor to the other. Optical
interconnects such as fiber optic cables can carry digital data with a
capacity 1000 times greater than that of electronic interconnects, but fiber
optic cables are larger because of the optical diffraction limit. This size-
compatibility problem may be solved if the optical elements can be integrated
on chip and fabricated at the nanoscale. One such proposal is surface plasmons,
which are electromagnetic waves that propagate along the surface of a
conductor ozbay . Plasmonics, surface plasmon-based optics, has been
demonstrated and investigated intensively in nanoscale metallic hole
arrays Ebbesen98 ; Moreno ; Alt , metallic waveguides Pile ; Bozhevo ; Lamp ,
and metallic nanowires Dickson ; Graff ; Ditlbacher ; Sanders ; Knight ; Pyayt
in recent years. Among the different kinds of plasmonic waveguides, silver
nanowires have some unique properties that make them particularly attractive,
such as low propagation loss due to their smooth surface and scattering of
plasmons to photons only at their sharp ends. Since the momenta of the
photons and plasmons are different, it is a challenge to couple light into
plasmon waveguides efficiently. The general methods for plasmon excitation
include prism coupling and focusing of light onto one end of the nanowire with
a microscope objective. A nanoparticle antenna-based approach has also proved to
be an efficient way of optimizing plasmon coupling into nanowires Knight ,
which allows for direct coupling into straight, continuous nanowires by using
a nanoparticle as an antenna. Recently, a single polymer waveguide has been used to
couple light into multiple nanowires simultaneously Pyayt as well, aiming to
provide light to a number of nanoscale devices in future integrated
photonic circuits. However, due to the random distribution of nanowires and
nanoparticles, it is hard to achieve optimum coupling efficiency for these two
methods with present technology.
Here we report a new experimental method to couple light to plasmons in
silver nanowires by using an optical single-mode fiber taper contacting one or
several nanowires. It is found that plasmons can be excited from the
midsection of a continuous, smooth nanowire. Using a fiber taper, we can
couple light into a nanowire at any position along it. Moreover, the fiber
taper can be used to arrange the positions of the nanowires, and several
nanowires can be excited simultaneously by one fiber taper. This structure
bridges classical optical fibers and nanoscale plasmonic nanowires and
might be useful for coupling light to nanophotonic devices in integrated
circuits.
Figure 1: (A) Scanning electron micrograph of a 14 $\mu m$ long silver
nanowire. Its diameter is about 300 nm. (B) Sketch of our experimental
setup. A laser beam with 780 nm wavelength is coupled into an optical fiber
taper which is in contact with the nanowires. The fiber taper is mounted in a
U-shaped configuration and moved by a piezo-electric stage. Scattered light
is recorded by a CCD camera after a microscope objective.
There are many methods for the controlled synthesis of silver nanowires sun
; xia ; huang . Here a solvothermal process is used to fabricate silver
nanowires. In a typical synthesis procedure, 2 mmol of PVP and 1.4 mmol of
AgNO3 were successively dissolved in 36 mL of ethylene glycol. Then 2 mL of
NaCl ethylene glycol solution (1.2 mmol/L) and 2 mL of ferric nitrate ethylene
glycol solution (15 mmol/L) were added under magnetic stirring. The mixture
was sealed in a 50 mL autoclave and heated in an oven at 180 ${}^{\circ}$C for 12
hours. Finally, the Teflon-lined autoclave was cooled naturally to room
temperature, and the final products were obtained after centrifugation of the
straw yellow suspension and washing with deionized water and ethanol
several times (centrifugal speed 6000 r/min). The products were preserved
in ethanol. The as-synthesized products were characterized by field emission
scanning electron microscopy (FE-SEM; Hitachi, S-4800) at an acceleration
voltage of 5.0 kV. The wires obtained here have diameters of about 300 nm and
lengths of about 10 $\mu m$ (Fig. 1A).
Samples used in our experiment were prepared by drop-casting a dilute
nanowire suspension on a cover glass and then letting it dry in the open air.
A tapered fiber was prepared from a single-mode fiber for a wavelength of 780
nm (Newport), which was heated by a hydrogen microtorch and stretched in
opposite directions with two translators cai ; min . The curvature of the taper
profile was kept small to realize adiabatic propagation of light through the
tapered region. In our experiment, the fiber taper reached a minimum diameter
of only about 1 $\mu m$, which has evanescent fields outside Lou . A laser beam
with 780 nm wavelength was coupled into the optical fiber, and the laser
polarization was controlled by a polarization beam splitter (PBS) followed by
a half wave plate (HWP) Konishi . Rotating the HWP allowed us to investigate
the relationship between the coupling efficiency and the polarization of the
light. The optical fiber taper was placed above and parallel to the substrate
where the nanowires were deposited. It was mounted in a U-shaped configuration and
moved by a three dimensional piezo-electric stage (Physik Instrumente Co.,
Ltd. NanoCube XYZ Piezo Stage), as sketched in Fig. 1B. Scattered light from the
nanowire was recorded by a CCD camera after a microscope objective.
Figure 2: Polarization dependence of coupling efficiency at nanowire end. (A)
Micrograph of a nanowire contacted with a fiber taper at one end. (B) Emission
can be observed from the other end. (C) Far-field emission intensities as a
function of laser polarization angle. Here a is the emission intensity determined
by averaging the four brightest pixels at site “a”, and b is the intensity at
site “b”, which is used as a reference for background scattering since there is
a dust particle adhering to the fiber taper. The ratio a/b gives the relationship between the
coupling strength and the polarization of light.
To eliminate the influence of the glass surface, we put a nanowire on the edge
as shown in Figure 2A. The length of the nanowire was about 11 $\mu m$. The
fiber taper contacted the nanowire at one end and the emission was observed
clearly from the other end, which verified that an optical fiber taper can also
excite surface plasmons in metallic nanowires and couple optical information
into nanoscale devices. The coupling strength was measured by changing the
polarization of the input light. For each polarization, the emission intensity
was determined by averaging the four brightest pixels at site “a” (see inset of
Fig. 2). The intensity at site “b” was used as a reference for background
scattering since there was a dust particle adhering to the fiber taper. The emission
intensity changed with the polarization of the input light because of the different
coupling efficiencies. Fig. 2C shows the relationship between the coupling strength and the
polarization of light. The far-field emission curve as a function of
polarization angle was approximately in accord with the theoretical
prediction (cosine or sine function) Sanders ; Knight , and the deviation here might
come from the strong background scattering. This phenomenon was similar to
the case in which we excited surface plasmons with a focused laser spot in free
space using a 100x microscope objective.
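A least-squares fit of this polarization dependence is straightforward; the sketch below assumes a Malus-type $\cos^{2}$ form (the text specifies only a "cosine or sine" dependence) and uses hypothetical data points, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def malus_like(theta_deg, A, theta0_deg, B):
    """Assumed fit form I(theta) = A*cos^2(theta - theta0) + B."""
    t = np.deg2rad(theta_deg - theta0_deg)
    return A * np.cos(t)**2 + B

# hypothetical angles (deg) and background-corrected intensity ratios a/b
theta = np.array([0, 20, 40, 60, 80, 100, 120, 140, 160, 180], dtype=float)
ratio = np.array([1.0, 0.88, 0.62, 0.32, 0.10, 0.09, 0.30, 0.60, 0.90, 1.0])

popt, _ = curve_fit(malus_like, theta, ratio, p0=[1.0, 0.0, 0.0])
print("amplitude, offset angle, background:", popt)
```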
As we know, the momentum of the propagating plasmon ($k_{sp}$) is larger than
that of the incoming photon ($k_{ph}$), so an additional
wavevector ($\Delta k$) is needed to satisfy the momentum conservation condition. Surface
plasmons in nanowires can be excited where the symmetry is broken, for
example at the ends and sharp bends Dickson ; Graff ; Ditlbacher ; Sanders ,
because an extra wavevector ($\Delta k_{scatter}$) is supplied by
the scattering mechanism in this situation. Surface plasmons cannot be
excited in the midsection directly, as a result of the smooth surface of the
nanowire. Since plasmonic waveguides may be very long in practice, like an
optical fiber, it would be more convenient if we could couple light into them
from the midsection. One scheme to couple light directly into straight,
continuous nanowires is to use a nanoparticle as an antenna Knight . However,
this method may not be convenient if we want to couple light at arbitrary sections
of a nanowire, since the distributions of nanowires and nanoparticles cannot
be controlled easily.
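For orientation, the size of the wavevector mismatch can be estimated numerically; the effective plasmon index used below is a hypothetical value chosen only for illustration, not a number given in the text:

```python
import numpy as np

wavelength = 780e-9      # excitation wavelength used in this work, in meters
n_sp = 1.4               # hypothetical effective index of the wire plasmon mode

k_ph = 2.0 * np.pi / wavelength     # free-space photon wavevector
k_sp = n_sp * k_ph                  # plasmon wavevector for the assumed index
delta_k = k_sp - k_ph               # extra wavevector that scattering or the
print(f"required Delta k ~ {delta_k:.2e} 1/m")  # taper evanescent field must supply
```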
Figure 3: Plasmons are observed from both ends when the fiber taper contacts
a nanowire in its midsection. (A) Micrograph of a nanowire in contact with
a fiber taper. (B),(C),(D) The fiber taper contacts the nanowire at different
sections, and scattered light from both ends is detected.
From Fig. 3, we can see that plasmons were observed from both ends when the
fiber taper contacted a nanowire in its midsection. The fiber taper
was also moved slowly from one end to the other end of this nanowire, and
scattered light was observed as a periodic glint during this process. To
verify that this was not the result of an unexpected discontinuity in the
nanowire, we focused the laser light on the midsection using a 100X microscope
objective, and no plasmon was launched. Several other nanowires were tested
subsequently as well and gave similar results. The reason for direct
coupling in the midsection may be that the symmetry of the nanowire is disrupted
when the fiber taper and the nanowire are in contact with each other. The
additional momentum is supplied partly by scattering on the nanowire surface
and partly by the evanescent optical field of the fiber taper. Moving the
optical fiber taper with the stage, we can launch plasmons from any
section of a straight nanowire. Similar to the case of exciting plasmons from
the ends, the coupling strength can be modulated by the light polarization. To
check whether the continuity of the nanowires was damaged after contact
with the fiber taper, we used the free-space coupling method and confirmed that the
whole process of coupling light from the fiber taper to surface plasmons is safe
for the nanowires and practicable. It should be noted that the
intensity of the output light from the two ends changed with the coupling position. A
potential explanation is that the silver nanowire can work as an efficient
Fabry-Perot resonator, in which the scattered light intensity is modulated as
a function of coupling position by the distinct Fabry-Perot resonator modes.
Further investigation is necessary to give a numerical analysis, which is
beyond the scope of this work. Owing to the position-free coupling property of this method, it
is especially useful for coupling light into nanodevices which have no sharp
end, such as nanorings Mclell ; wang .
Figure 4: Coupling of light into multiple nanowires from a single optical
fiber taper. (A) A micrograph with white light illumination shows a fiber
taper contacted with three nanowires simultaneously. (B) Light scattered from
the ends of the three nanowires can be observed. (C) A fiber taper in contact
with one nanowire at its end and another nanowire at its midsection. (D) Dark field
image of (C), indicating that we can excite surface plasmons selectively from the
end of a nanowire or from its midsection.
Besides the benefit of coupling light from any section of a nanowire, another
advantage of the fiber taper coupling method is that we can excite surface
plasmons in many nanowires simultaneously using a single fiber taper. In
future plasmonic circuits, we may need to integrate many nano-waveguides to
increase data transmission rates and capacity. Obviously, the previous methods
of prism coupling and focusing with a microscope objective are not convenient
and cannot be applied on chips. Pyayt and coworkers proposed to excite
plasmons in many nanowires by putting them perpendicular to a polymer
waveguide with one end located close to the light inside the waveguide Pyayt .
In their structure, the silver nanowires were oriented randomly on the
substrate and a series of SU-8 stripes was deposited on top of them as polymer
waveguides. They observed that the light coupled into the waveguide could
propagate along several nanowires simultaneously. However, due to the random
distribution of the nanowires, many of them did not couple light out of the
waveguide. Precise control of the nanowire orientation was essential to achieve
optimum coupling efficiency. Here, we used the fiber taper as a substitute for
the waveguide and observed a similar phenomenon, while the whole process
can be controlled more precisely.
We utilized a broken fiber taper to pick up a nanowire, then moved it carefully to the
appropriate place with a nanoscale piezo stage and put it down on the
substrate. Repeating this process several times, we obtained a well organized
distribution of nanowires. Though some of the nanowires might be damaged
during this operation, we can remove the bad ones and keep the good ones. In
this work, five nanowires were placed parallel to each other on the substrate,
as shown in Fig. 4A. A fiber taper contacted three of them simultaneously at
their ends. We could see that light scattered from the other ends of these
three nanowires at the same time while the two un-contacted nanowires remained
dark, as shown in Fig. 4B. Likewise, we can excite surface plasmons
selectively from the end of a nanowire or from its midsection. This proved that our
method can be used to couple laser light into multiple nanowires simultaneously.
In summary, we have demonstrated an original technique to couple light into
silver nanowires. The new method has two remarkable advantages: one is that
plasmons can be launched from any part of a nanowire, and the other is that
one optical fiber taper can be applied to couple light into many nanowires
simultaneously. This method can directly combine the classical optical
elements with the nanoscale plasmonic devices, and thus may be practical for
optical input of nanoscale photonic devices in highly integrated circuits.
Acknowledgments
The authors thank Prof. Younan Xia for useful discussion. This work was funded
by the National Basic Research Programme of China (Grants No.2009CB929600 and
No. 2006CB921900), the Innovation funds from Chinese Academy of Sciences, and
the National Natural Science Foundation of China (Grants No. 10604052 and
No.10874163).
## References
* (1) E. Ozbay, ”Plasmonics: Merging Photonics and Electronics at Nanoscale Dimensions,” Science 311, 189-193 (2006).
* (2) T.W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio, and P. A. Wolff, ”Extraordinary optical transmission through sub-wavelength hole arrays,” Nature(London) 391, 667-669 (1998).
* (3) L. Martin-Moreno, F. J. Garcia-Vidal, H. J. Lezec, K. M. Pellerin, T. Thio, J. B. Pendry, and T. W. Ebbesen, ”Theory of Extraordinary Optical Transmission through Subwavelength Hole Arrays,” Phys. Rev. Lett. 86, 1114-1117 (2001).
* (4) E. Altewischer, M. P. Van Exter, and J. P. Woerdman, ”Plasmon-assisted transmission of entangled photons,” Nature(London) 418 304-306 (2002).
* (5) D. F. P Pile and D. K. Gramotnev, ”Channel plasmon-polariton in a triangular groove on a metal surface,” Opt. Lett. 29, 1069-1071 (2004).
* (6) S. I. Bozhevolnyi, V. S. Volkov, E. Devaux, J. Y. Laluet, and T. W. Ebbesen ”Channel plasmon subwavelength waveguide components including interferometers and ring resonators,” Nature 440, 508-511 (2006).
* (7) B. Lamprecht, J. R. Krenn, G. Schider, H. Ditlbacher, M. Salerno, N. Felidj, A. Leitner, F. R. Aussenegg, and J. C. Weeber, ”Surface plasmon propagation in microscale metal stripes,” Appl. Phys. Lett. 79, 51-53 (2001).
* (8) R. M. Dickson and L. A. Lyon, ”Unidirectional Plasmon Propagation in Metallic Nanowires,” J. Phys. Chem. B 104, 6095- 6098 (2000).
* (9) A. Graff, D. Wagner, H. Ditlbacher, and U. Kreibig, ”Silver nanowires,” Eur. Phys. J. D 34, 263-269 (2005).
* (10) H. Ditlbacher, A. Hohenau, D. Wagner, U. Kreibig, M. Rogers, F. Hofer, F. R. Aussenegg, and J. R. Krenn, ”Silver Nanowires as Surface Plasmon Resonators,” Phys. Rev. Lett. 95, 257403 (2005).
* (11) A. W. Sanders, D. A. Routenberg, B. J. Wiley, Y. N. Xia, E. R. Dufresne, and M. A. Reed, ”Observation of Plasmon Propagation, Redirection, and Fan-Out in Silver Nanowires,” Nano Lett. 6, 1822-1826 (2006).
* (12) M. W. Knight, N. K. Grady, R. Bardhan, F. Hao, P. Nordlander, and N. J. Halas, ”Nanoparticle-Mediated Coupling of Light into a Nanowire,” Nano Lett. 7, 2346-2350 (2007).
* (13) A. L. Pyayt, B. J. Wiley, Y. N. Xia, A. T. Chen, and L. Dalton, ”Integration of photonic and silver nanowire plasmonic waveguides ,” Nature nanotechnology 3, 660-665 (2008).
* (14) Y. G. Sun and Y. N. Xia, ”Large-Scale Synthesis of Uniform Silver Nanowires Through a Soft, Self-Seeding, Polyol Process,” Advanced Materials 14, 833-837 (2002).
* (15) Y. N. Xia, P. D. Yang, Y. G. Sun, Y. Y. Wu, B. Mayers, B. Gates, Y. D. Yin, F. Kim, and H. Q. Yan ”One-Dimensional Nanostructures: Synthesis, Characterization, and Applications,” Advanced Materials 15, 353-389 (2003).
* (16) M. H. Huang, A. Choudrey, and P. D. Yang, ”Ag nanowire formation within mesoporous silica,” Chemical Communications 1063-1064 (2000).
* (17) M. Cai and K. Vahala, ”Highly efficient optical power transfer to whispering-gallery modes by use of a symmetrical dual-coupling configuration,” Opt. Lett. 25, 260-262 (2000).
* (18) B. Min, E. Ostby, V. Sorger, E. Ulin-Avila, L. Yang, X. Zhang, and K. Vahala, ”High-$Q$ surface-plasmon-polariton whispering-gallery microcavity,” Nature (London) 457, 455-458 (2009).
* (19) J. Y. Lou, L. M. Tong, and Z. Z. Ye, ”Modeling of silica nanowires for optical sensing,” Optics Express 13, 2135-2140, (2005).
* (20) H. Konishi, H. Fujiwara, S. Takeuchi, and K. Sasaki, ”Polarization-discriminated spectra of a fiber-microsphere system,” Appl. Phys. Lett. 89, 121107-121109 (2006).
* (21) J. M. McLellan, M. Geissler, and Y. N. Xia ”Edge Spreading Lithography and Its Application to the Fabrication of Mesoscopic Gold and Silver Rings,” J. Am. Chem. Soc. 126, 10830-10831 (2004).
* (22) H. M. Gong, L. Zhou, X. R. Su, S. Xiao, S. D. Liu, and Q. Q. Wang, ”Illuminating Dark Plasmons of Silver Nanoantenna Rings to Enhance Exciton-Plasmon Interactions,” Adv. Funct. Mater. 19, 298-303 (2009).
|
arxiv-papers
| 2009-06-02T01:51:48 |
2024-09-04T02:49:03.090437
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Chun-Hua Dong, Xi-Feng Ren, Rui Yang, Jun-Yuan Duan, Jian-Guo Guan,\n Guang-Can Guo, and Guo-Ping Guo",
"submitter": "Xifeng Ren",
"url": "https://arxiv.org/abs/0906.0405"
}
|
0906.0406
|
# Coupling of higher-mode light into a single silver nanowire
Guo-Ping Guo Rui Yang Xi-Feng Ren (renxf@ustc.edu.cn) Lu-Lu Wang Key
Laboratory of Quantum Information, University of Science and Technology of
China, Hefei 230026, People’s Republic of China Hong-Yan Shi Bo Hu Shu-Hong
Yu (shyu@ustc.edu.cn) University of Science and Technology of China, Hefei
230026, People’s Republic of China Guang-Can Guo Key Laboratory of Quantum
Information, University of Science and Technology of China, Hefei 230026,
People’s Republic of China
###### Abstract
We demonstrate the coupling of higher-order-mode light into a single silver
nanowire; the degree of the coupling can be controlled by adjusting the light
polarization, showing that the nanowire waveguide places no requirement on the
spatial mode of the input light. Photons with different orbital angular momenta
(OAM) are used to excite surface plasmons in silver nanowires. The experiment
indicates that the propagating modes of surface plasmons in nanowires are not
OAM eigenstates.
###### pacs:
78.66.Bz,73.20.MF, 71.36.+c
Today, the major problem in increasing the speed of microprocessors is how to
carry digital information from one end to the other. Optical interconnects
can carry much more digital data than electronic interconnects,
but fiber optic cables cannot be shrunk to the nanoscale due to the
optical diffraction limit. To solve this size-incompatibility problem, we may
need to integrate the optical elements on chip and fabricate them at the
nanoscale. One such proposal is surface plasmons, which are electromagnetic
waves that propagate along the surface of a conductor ozbay . Plasmonics,
surface plasmon-based optics, has been demonstrated and investigated
intensively in nanoscale metallic hole arrays Ebbesen98 ; Moreno ; Alt ,
metallic waveguides Pile ; Bozhevo ; Lamp , and metallic nanowires Dickson ;
Graff ; Ditlbacher ; Sanders ; Knight ; Pyayt in recent years.
Among the various kinds of plasmonic waveguides, silver nanowires have some
unique properties that make them particularly attractive, such as low
propagation loss due to their smooth surface and scattering of plasmons to
photons only at their sharp ends. Since the momenta of the photons and
plasmons are different, it is a challenge to couple free-space light into
plasmon waveguides efficiently. The typical methods for plasmon excitation
include grating coupling, prism coupling, and focusing of light onto one end of
the nanowire with a microscope objective. A nanoparticle antenna-based approach
has also proved to be an effective way of optimizing plasmon coupling into
nanowires Knight , which realizes direct coupling into straight, continuous
nanowires by using a nanoparticle as an antenna. Recently, polymer waveguides
have been used to couple light into several nanowires simultaneously Pyayt as well,
aiming to provide light to a number of nanoscale devices in future
integrated photonic circuits.
Figure 1: (color online) (a) SEM image of silver nanowires. (b) TEM image taken
of the end of an individual silver nanowire. (c) HRTEM image of the single
nanowire shown in (b). (d) SAED pattern taken from an individual nanowire.
Because former research on nanowires has concentrated on using
Gaussian-mode light to excite surface plasmons, here we discuss whether
surface plasmons can be launched by higher-order-mode light. We focus a
laser beam carrying different orbital angular momenta (OAM) on one end of a
nanowire and observe scattered light from the other end. Surface plasmons are
launched not only by Gaussian-mode light but also by higher-order-mode light.
The dependence of the coupling strength on the light polarization is also studied for higher-
order-mode light and gives results similar to those for the Gaussian mode.
The output intensity increases linearly with the input intensity and
is independent of the spatial mode of the input light.
Ag nanowires were synthesized through a polyol process in a mixture of
ethylene glycol (EG) and poly (vinyl pyrrolidone) (PVP) at a certain
temperature, very similar to previous reports tao ; jiang ; korte
. The scanning electron micrograph (SEM) image in Fig. 1a shows that all the
nanowires are straight and have uniform diameters that vary from 60 to 100 nm
and lengths from 10 to 40 $\mu m$. A typical nanowire with a diameter of 60 nm is
shown in Fig. 1b. The high resolution TEM image in Fig. 1c shows a lattice spacing
of 0.23 nm, corresponding to the $(\bar{1}\bar{1}\bar{1})$ and
$(\bar{1}11)$ planes, respectively. The electron diffraction pattern taken from the individual
nanowire can be indexed with two parallel zone axes, i.e., $[01\bar{1}]$ and
$[1\bar{1}\bar{1}]$ (Fig. 1d). Based on this analysis, the nanowire axis is
along $[100]$.
The mode of the input light is determined by its OAM. It is known that photons
have both spin angular momentum and OAM. The light fields of photons with OAM
can be described by means of Laguerre-Gaussian ($LG_{p}^{l}$) modes with two
indices $p$ and $l$ Allen92 . The $p$ index identifies the number of radial
nodes observed in the transverse plane, and the $l$ index describes the number
of $2\pi$ phase shifts along a closed path around the beam center. If the
mode function is a pure LG mode with winding number $l$, then every photon of
this beam carries an OAM of $l\hbar$. This corresponds to an eigenstate of the
OAM operator with eigenvalue $l\hbar$ Allen92 . For simplicity,
here we consider only the cases with $p=0$. When $l=0$, the light is in the
ordinary Gaussian mode, while when $l\neq 0$, the energy distribution of the light
resembles a doughnut because of its helical wavefront (see inset of Fig. 2). We
usually use computer generated holograms (CGHs) ArltJMO ; VaziriJOB to change
the winding number of LG mode light. These are transmission holograms.
The inset of Fig. 2 shows part of a typical CGH ($n=+1$) with a fork in the
center. For the diffraction order $m$, the $n$-fork hologram
changes the winding number of the input beam by $\Delta l_{m}=m\cdot n$. In our
experiment, we use the first-order diffracted light ($m=+1$), and the
efficiencies of our CGHs are all about $40\%$. Gaussian-mode light can be
identified using mono-mode fibers in combination with avalanche detectors. Light in all
other modes has a larger spatial extension and therefore cannot be
coupled into the single-mode fiber efficiently.
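For reference, the doughnut profile and the winding-number rule are easy to reproduce numerically; the sketch below uses the standard $p=0$ Laguerre-Gaussian intensity (normalization omitted) with an assumed beam waist:

```python
import numpy as np

def lg0_intensity(r, l, w0=1.0):
    """Transverse intensity of an LG_{p=0}^{l} mode, up to normalization:
    |LG_0^l|^2 ~ (r/w0)^(2|l|) * exp(-2 r^2 / w0^2); zero on axis for l != 0."""
    rho = r / w0
    return rho**(2 * abs(l)) * np.exp(-2.0 * rho**2)

def delta_l(m, n):
    """Winding-number change produced by the m-th diffraction order of an
    n-fork hologram: Delta l_m = m * n."""
    return m * n
```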
Figure 2: (color online)Sketch illustration of the experimental setup. The OAM
of the laser (wavelength 632.8nm) was controlled by a CGH, while the
polarization was controlled by a PBS followed by a HWP. The polarized laser
beam was focused on one end of a nanowire using a 100X objective lens (Zeiss,
NA=0.75). The sample was moved by a three dimensional piezo-electric stage.
Scattering light was recorded by a CCD camera after a microscope objective.
The insets show a typical CGH ($n=1$) and the energy distribution of the
produced light.
The experimental setup is shown in Fig. 2. The wavelength of the laser beam
was 632.8 nm, which is much larger than the diameter of the nanowires (about
100 nm). The OAM of the laser was controlled by a CGH, while the polarization
was controlled by a polarization beam splitter (PBS, working wavelength 632.8
nm) followed by a half wave plate (HWP, working wavelength 632.8 nm). Rotating
the HWP allowed us to investigate the relation between the coupling efficiency
and the polarization of light. The polarized laser beam was directed into the
microscope and focused on one end of a nanowire with a spot diameter of about
$5.5\mu m$ using a 100X objective lens (Zeiss, NA=0.75). The sample was moved
by a three dimensional piezo-electric stage (Physik Instrumente Co., Ltd.
NanoCube XYZ Piezo Stage). Scattering light from the nanowire was reflected by
a beam splitter (BS, 50/50) and recorded by a CCD camera.
Figure 3: (color online) Higher-order-mode light ($l=2$) was focused on one end
of a nanowire (length $9.3\mu m$) and the emission was observed clearly from the other
end, which verified that higher-mode light can also be transmitted
by the silver nanowire.
The momentum of the propagating plasmon ($k_{sp}$) is larger than that of the
incoming photon ($k_{ph}$), so an additional wavevector ($\Delta k$) is needed to
satisfy the momentum conservation condition. Surface plasmons in nanowires can
be excited where the symmetry is broken, for example at the ends and sharp
bends Dickson ; Graff ; Ditlbacher ; Sanders , because an extra wavevector
($\Delta k_{scatter}$) is provided by the scattering mechanism in
this situation. It has been shown that surface plasmons can propagate along
the length of nanowires when they are excited by Gaussian-mode light, even
when the diameter of the nanowires is much smaller than the wavelength of the light. In
our experiment, higher-mode light ($l=1$ and $2$) was focused on one end of
a nanowire (length $9.3\mu m$) and the emission was observed clearly from the other
end, which verified that higher-mode light can also be transmitted
by the silver nanowire (Fig. 3). It is noted that the energy distribution of
the output light was not the same as that of the input light, which has a null in the
center. This phenomenon is different from the case of extraordinary optical
transmission through nano-hole structures, where the OAM eigenstates can be
preserved ren06 ; wang . A potential explanation is that the propagation modes
of surface plasmons in nanowires are not eigenmodes of OAM, similar to
the case of a multi-mode optical fiber.
Figure 4: (color online) Polarization dependence of the coupling efficiency at the
nanowire end. (a) Gaussian-mode light focused on the end of the nanowire. (b)
Higher-order-mode light ($l=2$) focused on the end of the nanowire. The two cases give
similar curves.
The coupling strength was measured as a function of the polarization of the input light.
For each polarization, the emission intensity was determined by averaging the four
brightest pixels at the other end of the nanowire in the CCD images. The emission
intensity varied with the polarization of the input light because of the different
coupling efficiencies. Fig. 4 shows the relationship between the coupling strength and
the polarization of the input light. The far-field emission as a function of
polarization angle was approximately in accord with the theoretical prediction (a cosine
or sine function) Sanders ; Knight . For comparison, the case of Gaussian-mode light was
also measured and gave a similar curve.
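As a concrete illustration of how such a polarization dependence can be quantified, the following minimal sketch fits emission intensity versus polarization angle to a shifted cosine-squared law. The function name, the sample data, and the fitted parameters are all hypothetical; only the functional form (a cosine/sine dependence) is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def emission_model(theta_deg, amplitude, phase_deg, offset):
    """Assumed cosine-squared dependence of the coupling on the polarization angle."""
    theta = np.radians(theta_deg - phase_deg)
    return amplitude * np.cos(theta) ** 2 + offset

# Hypothetical measurements: polarization angle (degrees) vs. averaged CCD counts.
angles = np.arange(0, 181, 15)
counts = emission_model(angles, 1200.0, 20.0, 150.0) + np.random.normal(0, 30, angles.size)

popt, pcov = curve_fit(emission_model, angles, counts, p0=[1000.0, 0.0, 100.0])
print("amplitude, phase (deg), offset:", popt)
```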
The end of the nanowire was also moved from one edge of the laser spot (diameter about
$5.5\mu m$) to the other in order to obtain the relationship between the input laser
intensity and the emission intensity from the end of the nanowire. The measurements were
performed for Gaussian-mode light and for higher-order-mode light ($l=2$), as shown in
Fig. 5. The emission intensity increased linearly with the pump intensity and was almost
independent of the spatial mode of the input light.
Figure 5: (color online) Relationship between the input laser intensity and the emission
intensity from the nanowire. The dots are experimental results and the lines are
theoretical calculations. Gaussian-mode light (blue round dots) and higher-order-mode
light (red square dots) were focused on the end of a nanowire. In both cases the
emission intensity varied linearly with the input intensity.
In conclusion, we have experimentally demonstrated that higher-order-mode light can also
excite surface plasmons in silver nanowires. The surface plasmons propagate along the
nanowire and scatter back into photons at the other end. The coupling strength is
correlated with the polarization of the input light, just as in the case of
Gaussian-mode light. The OAM eigenstates are not the propagating modes of surface
plasmons in nanowires. These results may provide further insight into the waveguide
properties of silver nanowires.
This work was supported by the National Fundamental Research Program (Nos.
2006CB921900, 2005CB623601), the National Natural Science Foundation of China (Nos.
10604052, 50732006, 20621061, 20671085), the Chinese Academy of Sciences International
Partnership Project, the Partner Group of the CAS-MPG, and the Natural Science
Foundation of Anhui Province (Grant No. 090412053).
## References
* (1) E. Ozbay, Science 311, 189-193 (2006).
* (2) T. W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio, and P. A. Wolff, Nature(London) 391, 667 (1998).
* (3) L. Martin-Moreno, F. J. Garcia-Vidal, H. J. Lezec, K. M. Pellerin, T. Thio, J. B. Pendry, and T. W. Ebbesen, Phys. Rev. Lett. 86, 1114 (2001).
* (4) E. Altewischer, M. P. van Exter and J. P. Woerdman, Nature(London) 418 304 (2002).
* (5) D. F. P. Pile, D. K. Gramotnev, Opt. Lett. 29, 1069 (2004).
* (6) S. I. Bozhevolnyi, V. S. Volkov, E. Devaux, J. Y. Laluet, T. W. Ebbesen, Nature(London) 440, 508 (2006).
* (7) B. Lamprecht, J. R. Krenn, G. Schider, H. Ditlbacher, M. Salerno, N. Felidj, A. Leitner, F. R. Aussenegg, J. C. Weeber, Appl. Phys. Lett. 79, 51 (2001).
* (8) R. M. Dickson, L. A. Lyon, J. Phys. Chem. B 104, 6095 (2000).
* (9) A. Graff, D. Wagner, H. Ditlbacher, U. Kreibig, Eur. Phys. J. D. 34, 263 (2005).
* (10) H. Ditlbacher, A. Hohenau, D. Wagner, U. Kreibig, M. Rogers, F. Hofer, F. R. Aussenegg, J. R. Krenn, Phys. Rev. Lett. 95, 257403 (2005).
* (11) A. W. Sanders, D. A. Routenberg, B. J. Wiley, Y. N. Xia, E. R. Dufresne, M. A. Reed, Nano Lett. 6, 1822 (2006).
* (12) M. W. Knight, N. K. Grady, R. Bardhan, F. Hao, P. Nordlander, N. J. Halas, Nano Lett. 7, 2346 (2007).
* (13) A. L. Pyayt, B. J. Wiley, Y. N. Xia, A. T. Chen, L. Dalton, Nature nanotechnology 3, 660 (2008).
* (14) A. Tao, F. Kim, _et al._, Nano Lett. 3, 1229 (2003).
* (15) P. Jiang, S. Y. Li, _et al._, Chem. Eur. J. 10, 4817 (2004).
* (16) K. E. Korte, S. E. Skrabalak, _et al._, J. Mater. Chem. 18, 437 (2008).
* (17) L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, Phys. Rev. A 45, 8185 (1992).
* (18) J. Arlt, K. Dholakia, L. Allen, and M. Padgett, J. Mod. Opt. 45, 1231 (1998).
* (19) A. Vaziri, G. Weihs, and A. Zeilinger, J. Opt. B: Quantum Semiclass. Opt. 4, S47 (2002).
* (20) X. F. Ren, G. P. Guo, Y. F. Huang, Z. W. Wang, and G. C. Guo, Opt. Lett. 31, 2792, (2006).
* (21) Lu-Lu Wang, Xi-Feng Ren, Rui Yang, Guang-Can Guo and Guo-Ping Guo, arXiv:0904.4349, (2009).
|
arxiv-papers
| 2009-06-02T01:59:25 |
2024-09-04T02:49:03.096970
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Guo-Ping Guo, Rui Yang, Xi-Feng Ren, Lu-Lu Wang, Hong-Yan Shi, Bo Hu,\n Shu-Hong Yu, and Guang-Can Guo",
"submitter": "Xifeng Ren",
"url": "https://arxiv.org/abs/0906.0406"
}
|
0906.0415
|
# Classical Processing Requirements for a Topological Quantum Computing
System.
Simon J. Devitt devitt@nii.ac.jp National Institute of Informatics, 2-1-2
Hitotsubashi, Chiyoda-ku, Tokyo-to 101-8430, Japan Austin G. Fowler Center
for Quantum Computing Technology, University of Melbourne, Victoria 3010,
Australia Todd Tilma National Institute of Informatics, 2-1-2 Hitotsubashi,
Chiyoda-ku, Tokyo-to 101-8430, Japan W. J. Munro Hewlett Packard
Laboratories, Bristol BS34 8HZ, United Kingdom National Institute of
Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo-to 101-8430, Japan Kae
Nemoto National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku,
Tokyo-to 101-8430, Japan
###### Abstract
Dedicated research into the design and construction of a large scale Quantum
Information Processing (QIP) system is a complicated task. The design of an
experimentally feasible quantum processor must draw upon results in multiple
fields; from experimental efforts in system control and fabrication through to
far more abstract areas such as quantum algorithms and error correction.
Recently, the adaptation of topological coding models to physical systems in
optics has illustrated a possible long term pathway to truly large scale QIP.
As the topological model has well defined protocols for Quantum Error
Correction (QEC) built in as part of its construction, a more grounded
analysis of the classical processing requirements is possible. In this paper
we analyze the requirements for a classical processing system, designed
specifically for the topological cluster state model. We demonstrate that via
extensive parallelization, the construction of a classical “front-end” system
capable of processing error correction data for a large topological computer
is possible today.
###### pacs:
03.67.Lx, 07.05.Wr
## I Introduction
The design and construction of a feasible large scale quantum computing system
has been a highly sought after and long term goal of quantum information
science ever since the first physical system proposals were made in the mid
1990’s Cirac and Zoller (1995); Cory et al. (1997); Gershenfeld and Chuang
(1997); Kane (1998); Loss and DiVincenzo (1998); Knill et al. (2001); Mooij et
al. (1999); Nakamura et al. (1999); ARDA (2004); Kok et al. (2007). While
experimental advances in quantum computing have been pronounced Chiorescu et
al. (2003); Yamamoto et al. (2003); Chiaverini et al. (2004); Häffner et al.
(2005); Gorman et al. (2005); Gaebel et al. (2006); Hanson et al. (2006); Dutt
et al. (2007); O’Brien et al. (2003) we are not yet at the stage where we can
faithfully claim a multi-million qubit device is just around the corner.
Nevertheless, in order for experimental progress to be made, the fundamental
theoretical building blocks for a large scale computer need to be firmly in
place. This theoretical development is not restricted to the discovery of new
protocols for computation, algorithms or error correction but it also includes
the architectural engineering of future computers.
While there has been steady progress over the past 15 years on designing novel
and (more importantly) experimentally feasible large scale processor
architectures, the complication of implementing appropriate and efficient
error correction procedures and designing systems which can trivially be
scaled to the level of a “programmable”, multi-task computer is still a
daunting and often neglected area of research.
Recently, the introduction of theoretical ideas such as topological cluster
state quantum computing (TCQC) Raussendorf et al. (2007); Raussendorf and
Harrington (2007); Fowler and Goyal (2009) and the single photon optical
architecture Devitt et al. (2009); Ionicioiu and Munro (2009) gives us an idea
of what a truly large scale device may possibly look like. The modular design
of the cluster preparation network, and the measurement based nature of the
computational model, gives this design something that other architectures
arguably lack, a strictly modular scaling of the entire computer.
While the design introduced in Refs. Devitt et al. (2009, 2008) is not
necessarily the optimal way to construct a large scale quantum computer, it
does contain several key elements, easing the conceptual design of a large
scale computer. For example:
1. 1.
Utilizing a computational model that is fundamentally constructed from error
correction, rather than the implementation of codes on top of an otherwise
independent computational model.
2. 2.
Having a modular construction to the computer. The fundamental quantum
component (the photonic chip Stephens et al. (2008); Devitt et al. (2009)), is
a comparatively simple quantum device. Scaling the computer arbitrarily
requires the addition of more chips in a regular and known manner.
3. 3.
Employing a computational model exhibiting high fault-tolerance thresholds
Raussendorf et al. (2007), which relieves the pressure on experimental
fabrication and control.
4. 4.
Utilizing a measurement based model for computation Raussendorf and Briegel
(2001). By employing a measurement based computational model, the quantum
component of the computer is a simple state preparation network. Therefore,
programming such a device is a problem of classical software, not of hardware.
These properties, as well as others, allowed us to consider the structure of
an extremely large mainframe-type device. In Ref. Devitt et al. (2008), the
quantum analogue of high performance computing was examined. The conceptual
scalability of the optical architecture allowed us to examine the operating
conditions, physical structure and resource costs of a computer employing
extensive topological error correction to the level of 2.5 million logical
qubits.
In addition to examining the implementation of a large scale TCQC, the nature
of the topological model also allows for a more concrete discussion on an
often neglected, but important aspect of quantum information processing,
namely what are the classical computational requirements of a large scale
device? In this paper we attempt to answer this question.
There have been several broad investigations into the classical structure,
design and operation of a large scale quantum computer Meter (2006); Steane
(2007), but investigation into this topic is difficult. The primary obstacle
in analyzing classical requirements is that the quantum architecture generally
has to be specified. Since all classical processing is ultimately dependent on
both the computational model employed at the quantum level and more
importantly the error correction protocols utilized, a detailed analysis of
the classical front end must wait for the design of the quantum processor.
In this paper we specifically analyze the classical front end requirements to
perform active error correction on a 3D topological cluster lattice prepared
by the photonic chip network. This analysis will be restricted to the
classical system required to implement the underlying error correction
procedures in the topological model, without the execution of an active
quantum algorithm.
Although we present this analysis in the context of the optical network
presented in Ref. Devitt et al. (2009), it should be stressed that this
analysis is still highly relevant for any physical architecture employing the
2D or 3D topological model Bravyi and Kitaev (2001); Dennis et al. (2002);
Raussendorf and Harrington (2007); Raussendorf et al. (2007); Fowler et al.
(2008). Our analysis demonstrates that with several optimizations of the
classical processing and the ability to significantly parallelize classical
error correction processing, the classical computational requirements for
large scale TCQC are indeed within the capabilities of today’s processing
technology.
Section II very briefly reviews the nature of the topological cluster model in
order to fully specify what is required of the classical processing. Section
III reviews the flowing nature of the preparation network, and how this
relates to the optical measurement layer, and the first level of classical
processing. In Section IV we overview the basic requirements of the classical
network and how target processing rates are related to the clock cycle of the
quantum network. In Section V we introduce classical benchmarking data for the
minimum weight matching algorithm utilized for error correction and
illustrate, given this data, how error correction processing can be
parallelized over the entire computer. We conclude by illustrating how the
parallelization of the classical processing allows, in principle, for the
construction of a classical error correcting front end for large scale TCQC
with classical processing technology available today.
## II Topological Error Correction in the Optical Architecture
TCQC was first introduced by Raussendorf, Harrington and Goyal in 2007
Raussendorf and Harrington (2007); Raussendorf et al. (2007). This model
incorporates the ideas stemming from topological quantum computing introduced
by Kitaev Kitaev (1997) and cluster state computation Raussendorf and Briegel
(2001), leading to a very attractive computational model incorporating error
correction by construction and exhibiting a high threshold error rate.
As with any measurement based computational model, computation proceeds via
the initial construction of a highly entangled multi-qubit state. Fig. 1
illustrates the structure of the cluster. Each node in the cluster represents
a physical qubit, initially prepared in the
${|{+}\rangle}=({|{0}\rangle}+{|{1}\rangle})/\sqrt{2}$ state, and each edge
represents a controlled-$\sigma_{z}$ entangling gate between qubits. This is
the fundamental unit cell of the cluster, which repeats in all three
dimensions.
Computation under this model is achieved via the consumption of the cluster
along one of the three spatial dimensions Fowler and Goyal (2009) (simulated
time). Logical qubits are defined via the creation of “holes” or “defects”
within the global lattice and multi-qubit operations are achieved via braiding
(movement of these defects around one another) as the cluster is consumed
along the direction of simulated time. The specific details for computation
under this model are not important for this discussion and we encourage the
reader to refer to Refs. Raussendorf et al. (2007); Fowler and Goyal (2009)
for further details. For this analysis, the effect of errors on a topological
lattice is the important factor.
### II.1 Error effects
Quantum errors in this model manifest in a very specific way. In Fig. 1,
illustrating the unit cell of the cluster, we have illustrated six face qubits
shown in red. If no errors are present in the system, measuring each of these
six qubits in the ${|{+}\rangle}$ or ${|{-}\rangle}$ states ($\sigma_{x}$
basis) will return an even parity result. If we denote the classical result of
these six measurements as $s_{i}\in\\{0,1\\}$, $i=1,..,6$, then
$(s_{1}+s_{2}+s_{3}+s_{4}+s_{5}+s_{6})\text{ mod }2=0.$ (1)
Figure 1: Unit cell of the 3D cluster, which extends arbitrarily in all three
dimensions. In the absence of errors, the entanglement generated within the
lattice sets up correlation conditions when certain qubits are measured. For
each unit cell of the cluster, quantum correlations guarantee that the
measurement results of the six face qubits (shown above) return an even parity
result [Eq. 1].
This result is a consequence of the quantum correlations established in the
preparation of the cluster. If the cluster is prepared perfectly, these six
qubits are placed into a state that is a $+1$ eigenstate of the operator
$K=\sigma_{x}^{1}\otimes\sigma_{x}^{2}\otimes\sigma_{x}^{3}\otimes\sigma_{x}^{4}\otimes\sigma_{x}^{5}\otimes\sigma_{x}^{6},$
(2)
where $\sigma_{x}$ is the Pauli bit-flip operator. The measurement of each of
these six qubits in the $\sigma_{x}$ basis will produce random results for
each individual qubit. However the eigenvalue condition of this correlation
operator guarantees the classical measurement results satisfy Eq. 1 in the
absence of errors.
The remaining qubits in each unit cell are also measured in the $\sigma_{x}$
basis, but their results are associated with the parity of cells within the
dual lattice [Fig. 2]. This property of the cluster is not important for this
discussion. What is important is that when no quantum algorithm is being
implemented, every qubit is measured in the $\sigma_{x}$ basis and is used to
calculate the parity of their respective cells.
Figure 2: (From Ref. Fowler and Goyal (2009)) The regular structure of the 3D
cluster results in a primal and dual lattice structure. A set of eight unit
cells arranged in a cube results in a complete unit cell present at the
intersection of these eight cells. Hence there are two self-similar lattices
(offset by half a unit cell diagonally) known as the primal and dual lattice.
These two structures are extremely important for computation under the
topological model Raussendorf et al. (2007). However, in the context of this
discussion, the measurement results of the additional nine qubits in Fig. 1 are
associated with the parities of bordering dual cells.
Due to the structure of this 3D cluster state, all error channels can
effectively be mapped into phase errors (Pauli-$\sigma_{z}$ operations applied
to physical qubits in the lattice which takes a state,
$\alpha{|{0}\rangle}+\beta{|{1}\rangle}\rightarrow\alpha{|{0}\rangle}-\beta{|{1}\rangle}$)
or physical qubit loss Fowler and Goyal (2009). These two distinct channels
are processed slightly differently.
### II.2 Error channel one: Phase errors
We first consider the effect of phase errors, which act to flip the parity of
a cell of the cluster. As the Pauli operators $\sigma_{x}$ and $\sigma_{z}$
anti-commute, $\sigma_{z}\sigma_{x}=-\sigma_{x}\sigma_{z}$, if a phase error
occurs to one of these six qubits the correlation condition of a cell will
flip from a +1 eigenstate of the operator $K$ to a -1 eigenstate. If the
correlation condition flips to a $-1$ eigenstate of $K$, the classical result
of the six individual measurements also flips to an odd parity condition, i.e.
$(s_{1}+s_{2}+s_{3}+s_{4}+s_{5}+s_{6})\text{ mod }2=1.$ (3)
Errors on qubits on the boundary of each unit cell therefore flip the parity
of the measurement result from even to odd. Note that the change of parity of
any individual cell gives us absolutely no information regarding which one of
the six qubits experienced the physical error.
The second important aspect of this structure is that any given qubit lies on
the boundary of two cells in the cluster. If a given qubit experiences a phase
error it will flip the parity result of the two adjacent cells. This allows us
to detect which of the six qubits of a given cell experienced an error. If a
single cell flips parity, we then examine the parity result of the six
adjacent cells. Assuming that only one error has occurred, only one of these
six adjacent cells will have also flipped parity, allowing us to uniquely
identify the erred qubit.
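As a small illustration of this inference, the sketch below returns the qubit shared by two face-adjacent cells that have both flipped parity. The half-integer coordinate convention for face qubits is an assumption made purely for illustration and is not specified in the text.

```python
def shared_face_qubit(cell_a, cell_b):
    """Return the coordinate of the face qubit shared by two face-adjacent unit cells,
    assuming cells sit at integer coordinates and face qubits at the midpoints of
    adjacent cell centres; return None if the cells are not face-adjacent."""
    diff = [b - a for a, b in zip(cell_a, cell_b)]
    if sorted(map(abs, diff)) != [0, 0, 1]:
        return None  # not face-adjacent, so no single qubit explains both parity flips
    return tuple((a + b) / 2 for a, b in zip(cell_a, cell_b))

# Two cells that both flipped parity: a single phase error on the shared face qubit
# is the most likely explanation.
print(shared_face_qubit((3, 4, 7), (3, 5, 7)))   # (3, 4.5, 7)
print(shared_face_qubit((3, 4, 7), (4, 5, 7)))   # None (not adjacent)
```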
Figure 3: (Taken from Ref. Fowler and Goyal (2009)) Illustration of error
effects in the 3D cluster. Here we show a volume $V=4^{3}$ of cluster cells
and the effect of three error chains. As the parity conditions of Eq. 1 are
cyclical (mod 2), the calculation of cell parities only reveals information
regarding the endpoints of error chains. Shown above are three examples, one
error flips the parity of the two cells adjacent to the erred qubit while
longer chains only flip the parity of the end point cells. The goal of error
correction is to faithfully identify these chains given the end point parity
data.
If we now consider more than one error within the lattice, we no longer
identify the location of individual errors but instead identify error chains.
Fig. 3, from Ref. Fowler and Goyal (2009), illustrates this. Here we have a 3D
cluster consisting of a cube of $4^{3}$ cells with three error chains. The
first chain is a single error which flips the parity of the two adjacent
cells, the other two chains illustrate the effect of multiple errors. As the
parity conditions are cyclical (mod 2), if two errors occur on the boundaries
of a given cell the parity result will not change. Instead, the two cells at
the endpoints of these error chains are the cells which flip parity.
Hence, in the TCQC model, it is not the locations of individual errors which
are important but the endpoints of error chains. In fact, the symmetries of
the cluster do not require us to identify the physical error chain
corresponding to the detected endpoints. Once the endpoints of the chain are
correctly identified, all paths of correction operators (Pauli operators which
are applied to reverse detected errors) which connect the two endpoints are
equivalent Raussendorf et al. (2007). Hence, the goal of error correction in
this model is to correctly “pair up” a set of odd parity cells such that the
appropriate correction operators can be applied.
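To make the pairing task concrete, the following sketch performs a brute-force minimum weight perfect matching on a small set of flipped-cell coordinates, using lattice (Manhattan) separation as the edge weight. Production systems use polynomial-time matching algorithms such as Blossom V (discussed in Sec. V); this exhaustive version only illustrates the objective being minimized and is not the method used in the paper.

```python
from itertools import permutations

def manhattan(a, b):
    """Lattice separation between two cell coordinates (i, j, T)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def min_weight_pairing(cells):
    """Brute-force minimum weight perfect matching (illustration only, O(n!));
    assumes an even number of flipped cells."""
    best_pairs, best_weight = None, float("inf")
    for order in permutations(cells):
        pairs = [(order[i], order[i + 1]) for i in range(0, len(order), 2)]
        weight = sum(manhattan(a, b) for a, b in pairs)
        if weight < best_weight:
            best_pairs, best_weight = pairs, weight
    return best_pairs, best_weight

# Four cells with flipped parity: the matching identifies the most likely chain endpoints.
flipped = [(1, 1, 1), (1, 2, 1), (7, 7, 3), (7, 7, 6)]
print(min_weight_pairing(flipped))
```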
Undetectable errors in this model occur when chains become so long that they
actually connect two boundaries of the lattice. If a physical error chain
completely spans the lattice from one boundary to another then each individual
cell experiences two physical errors and every cell remains in an even parity
state. If the 3D lattice is not used for computation, these error chains are
actually invariants of the cluster and hence have no effect. Once computation
begins, information is stored by deliberately creating holes (or defects) in
this lattice. These defects act as artificial boundaries and consequently
error chains connecting defects to any other boundary (either other defects or
the boundary of the lattice) are undetectable and cause logical errors on
stored information.
From the standpoint of this investigation we are only concerned with
performing active error correction on a defect free lattice. We will not be
introducing information qubits into the cluster. Instead we will be examining
the classical resources required to detect and correctly identify error chains
in an otherwise perfectly prepared lattice.
This type of analysis is justified as information qubits are essentially
regions of the 3D cluster that have simply been removed from the global
lattice. Analyzing the classical requirements for the complete, defect free
lattice therefore represents the maximum amount of classical data that needs
to be processed for correction.
### II.3 Error channel two: Qubit loss
The second major error channel is qubit loss. As we are motivated by the
optical architecture introduced in Ref. Devitt et al. (2009), loss represents
a significant error channel. Unlike other computation models for quantum
information, the TCQC model can correct loss without additional machinery.
Without going into the details, loss events can be modeled by tracing out the
qubit that is lost from the system and replacing the qubit with the vacuum.
Tracing out the lost qubit is equivalent to measuring the qubit in the
${|{0}\rangle}$ or ${|{1}\rangle}$ state (a $\sigma_{z}$ basis measurement)
with the result unknown. In principle, this type of error can be modeled as a
standard channel, causing the parity of the respective cells to flip with a
probability of 50%. However, since the qubit is no longer present, we can
utilize the vacuum measurement to uniquely identify these error events.
Illustrated in Fig. 4 is the structure of a unit cell when one qubit is
essentially measured out via loss. In this case, the boundary of a cell
increases around the lost qubit. Instead of the parity conditions being
associated with the six face qubits of a given cell, it extends to be the
combined parity of the ten measurements indicated. As the loss event is
detected via no “clicks” from the detector array, this result is corrected by
now taking the parity of this larger structure and proceeding as before.
Provided no other errors have occurred, the parity of this larger boundary will
be even, and any additional qubit errors will link this extended cell to a
second end point with odd parity. Recent results, obtained in the context of
the surface code Dennis et al. (2002); Fowler et al. (2008), have demonstrated
a high tolerance to heralded loss events Stace et al. (2009).
Figure 4: The effect of qubit loss on the parity conditions of the cluster.
When a qubit is lost, it is essentially removed from the cluster. The parity
condition of Eq. 1 is extended to the boundary surrounding the loss event. As
a lost qubit is a heralded error (i.e. can be detected separately), the parity
calculation can be modified to encapsulate this larger volume if a loss event
is detected.
## III The optical computer, a “flowing” network
As introduced in Ref. Devitt et al. (2009), optical TCQC can be performed by
making use of a preparation network of photonic chips Stephens et al. (2008).
This network receives a stream of single photons (from a variety of
appropriate sources) and ejects a fully connected topological cluster. As
computation proceeds via the successive consumption of lattice faces along the
direction of simulated time, the preparation network is designed to
continuously prepare the lattice along this third dimension at a rate equal to
the rate of consumption by the detector layer. Fig. 5 illustrates the basic
design.
Figure 5: General architectural model for a “flowing” optical computer. Single
photon sources inject photonic qubits into a preparation network that
deterministically links up a 3D photonic cluster, layer by layer. Immediately
after the preparation network, an array of single photon detectors measures
each photon to perform computation. As photons are continually linked to the
rear of the 3D cluster as the front is consumed, an arbitrarily deep 3D
cluster can be prepared and consumed with finite space.
Photons are continuously injected into the rear of the preparation network,
ideally from appropriate on-demand sources. Each photon passes through a
network of four photonic chips, which act to link them together into the
appropriate 3D array. Each photonic chip operates on a fundamental clock
cycle, $T$, and each chip in the network operates in a well-defined manner,
independent of the total size of the network Devitt et al. (2009). In total, a
single photon entering the network at $t=0$ will exit at $t=4T$, after which
it can be measured by the detector banks.
Each photonic chip acts to entangle a group of five photons into an
appropriate state such that the parity condition for each cell is satisfied.
After each group of five photons passes through an individual chip, a single
atomic system contained within each chip is measured and reinitialized,
projecting the relevant group of five photons into an entangled state. The result
of this atomic measurement is fed forward to the classical processing layer in
order to define a set of initial correlation conditions. The cluster is
defined such that Eq.1 is satisfied for all cells. However, the preparation
network does not automatically produce these correlation conditions. Depending
on the measurement results of the atomic systems contained within each
photonic chip, approximately 50% of cells within the lattice will be prepared
with an initial parity condition that is odd. This can, in principle, be
corrected to be even for all cells by applying selective single qubit
rotations dependent on the atomic readout results, but this is unnecessary.
The initial parity results from the preparation network are simply recorded,
endpoints of error chains are then identified with cells that have changed
parity from this initial state.
As one dimension of the topological lattice is identified as simulated time,
the total 2D cross section defines the actual size of the quantum computer.
Defects, regions of the cluster measured in the $\sigma_{z}$ basis, are used
to define logical qubits and are kept well separated to ensure fault
tolerance. The 2D cross section is then continually teleported, via
measurement, to the next successive layer along the direction of simulated
time allowing an algorithm to be implemented (in a similar manner to standard
cluster state computation Raussendorf and Briegel (2001)).
Figure 6: Basic structure of the detection layer in the optical network. A
given unit cell of the cluster can be associated with 9 optical waveguides
containing temporally separated photonic qubits. For each set of 9 detectors
(consisting of a polarizing beam splitter and two single photon detectors for
polarization encoding), the central detector is associated with the cross-
sectional co-ordinate address, $(i,j)$, for the cell. The temporal co-
ordinate, $T$, is associated with the current clock cycle of the quantum
preparation network. For each unit cell, in the absence of photon loss, the
measurement results for the 6 face qubits are sent to the first classical
processing layer that calculates Eq. 4 for each 3D co-ordinate, $(i,j,T)$. If
the parity result differs between co-ordinates $(i,j,T-2)$ and $(i,j,T)$, this
information is sent to the next classical processing layer.
In Fig. 6 we illustrate the structure of the detection system. For the sake of
simplicity, we are assuming that the basis states for the qubits are photon
polarization, ${|{H}\rangle}\equiv{|{0}\rangle}$ and
${|{V}\rangle}\equiv{|{1}\rangle}$, hence the detection system consists of a
polarizing beam splitter (PBS) and two single photon detectors. A given unit
cell flows through a set of nine optical lines such that the relevant parity
is given by,
$P(i,j,T)=(s_{(i,j)}^{T-1}+s_{(i-1,j)}^{T}+s_{(i,j-1)}^{T}+s_{(i,j+1)}^{T}+s_{(i+1,j)}^{T}+s_{(i,j)}^{T+1})\text{
mod }2$ (4)
where $s_{i,j}^{T}\in\\{0,1\\}$ is the detection result of detector $(i,j)$ at
time $T$. This result defines the parity of the cell $(i,j,T)$ in the lattice.
Loss events would result in neither detector firing, at which point the
calculation of Eq. 4 would be redefined based on the loss event to calculate
the parity of the boundary around this lost qubit.
The results of all the detection events are fed directly from the detectors
into the first classical processing layer. This layer calculates Eq. 4 and
passes the result forward to the subsequent processing layer if it differs
from the initial parity. This general structure extends across the entire 2D
cross section of the lattice with parities repeatedly calculated for each unit
cell that flows into the detection system.
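As a concrete illustration of this first processing layer, the sketch below evaluates Eq. 4 over one time slice of detector data and emits the co-ordinate tuples of cells whose parity has changed. The array layout, function names, and boundary handling are assumptions made for illustration; loss events (no detector click) are not handled here.

```python
import numpy as np

def cell_parity(s, i, j, T):
    """Parity of unit cell (i, j, T) from the 0/1 detector outcomes, per Eq. (4).
    The array layout s[i, j, T] is an assumed convention for this sketch."""
    return (s[i, j, T - 1] + s[i - 1, j, T] + s[i, j - 1, T]
            + s[i, j + 1, T] + s[i + 1, j, T] + s[i, j, T + 1]) % 2

def changed_cells(s, initial_parity, T):
    """Cells in time slice T whose parity differs from the value recorded when the
    preparation network first produced them; these tuples are sent to the next layer."""
    ni, nj = s.shape[0], s.shape[1]
    return [(i, j, T)
            for i in range(1, ni - 1)
            for j in range(1, nj - 1)
            if cell_parity(s, i, j, T) != initial_parity[i, j, T]]
```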
## IV Classical Processing requirements
Illustrated in Fig. 7 is the layer structure of the classical processing for
topological error correction. In total there are four stages, parity
calculation, tree creation, minimum weight matching, and the quantum
controller.
Figure 7: General cross-sectional processing structure for the error
correction procedures in the optical TCQC. The quantum network consists of a
bank of single photon sources, photonic chips and detectors. The preparation
network continually prepares the cluster along the dimension of simulated
time. The detector array will be outputting classical results on the same time
frame as the quantum clock cycle. The classical processing layer is required
to determine likely physical chains of errors producing the classical parity
results detected. The classical layer consists of 4 stages, parity check, co-
ordinate tuple $\rightarrow$ tree creation, minimum weight matching, and the
quantum controller.
As this analysis is only considering the classical requirements for base level
error correction processing, we will not discuss the structure for the quantum
controller. This top-level processor will be responsible for the application
of active quantum algorithms on the topological lattice, given inputs from
both a quantum algorithm compiler, and error data outputted from the
correction processors. In the future, we will be examining the requirements
for this top level processor, but for now we omit details regarding its
structure and operation.
The error correction in this model requires identifying all cells which have
changed parity and reliably identifying pairs of parity changes associated
with a physical error chain. In order for error correction to be effective, we
assume a standard, stochastic, qubit error model. As all standard error
channels (all errors except qubit loss) can be mapped under the topological
model to phase errors Raussendorf et al. (2007); Fowler and Goyal (2009), we
can, without loss of generality, assume that each qubit experiences a phase
error, $\sigma_{z}\equiv Z$, with probability $p$. Therefore, a stochastic
hierarchy exists with respect to longer error chains.
As the probability of a single phase error is given by $p<1$, the probability
of a length-$d$ error chain is $O(p^{d})\ll 1$. Therefore, for a given set of parity
results, the most likely set of physical errors producing the classical
measurement pattern is the set of end point pairings where the total length of
all connections is minimized.
Classical results stemming from the detection layer are used to calculate the
parity for all unit cells for some total volume. The co-ordinates of all cells
which have experienced a parity flip are stored in a classical data structure.
Minimum weight matching algorithms Chu and Liu (1965); Edmonds (1967); Cook
and Rohe (1999); Kolmogorov (2009) are then used to calculate likely error
chains corresponding to the detected endpoints. Once chains are calculated,
the Pauli frame (the current eigenvalue condition for all cells relative to
their initial state) of the computation, within the quantum controller, is
updated with the new error information.
The frequency of error correction in the lattice is dictated by the
application of the non-Clifford gate
$T=\begin{pmatrix}1&0\\\ 0&e^{\frac{i\pi}{4}}\end{pmatrix}.$ (5)
This gate is required to generate a universal set of operations Nielsen and
Chuang (2000), and in the topological model is achieved via the injection of
multiple low-fidelity ancilla states, magic-state distillation Bravyi and
Kitaev (2005); Reichardt (2005), and the application of teleportation protocols
with information qubits. In order to successfully apply logical $T$ gates,
error information must be obtained for the qubit undergoing the teleported $T$
gate prior to application of the correction. Fig. 8 illustrates the
teleportation protocol to implement an $R_{z}(\theta)$ rotation. If a logical
$X$ error exists on the state prior to teleportation, the condition
$R_{z}(\theta)X{|{\phi}\rangle}=XR_{z}(-\theta){|{\phi}\rangle}$ (6)
implies that this error must be known before teleporting the rotation
$R_{z}(\theta)$. If the error is detected after teleportation, the conjugate
rotation $R_{z}(-\theta)$ will actually be applied. Therefore, the classical
processing of the minimum weight matching algorithm will have to occur at a
comparable rate to the logical gate rate of the preparation network to ensure
up-to-date error information for all logical qubits is available when
teleported gates are applied.
Figure 8: Standard teleportation circuit required to perform the rotation
$R_{z}(\theta)$ on an arbitrary quantum state via the preparation of an
appropriate ancilla state. The presence of a bit-flip ($\equiv X$) error on the
qubit affects the gate. Error information during quantum processing must be up
to date upon applying these types of gates in order to ensure rotations are
applied in the correct direction.
As detailed in Ref. Devitt et al. (2007), the clock cycle of the preparation
network can vary from nanoseconds to microseconds, depending on the system
utilized to construct the photonic chip. Hence our goal in this investigation
is to determine, for a given failure probability of the quantum component of
the computer, how quickly the network can be operated such that all classical
processing can be performed utilizing today’s technology.
## V Layers two and three: Minimum weight matching
Calculating the minimum weight matching of the classical parity data stemming
from the first layer of the classical processing network is the essential
requirement for error correction in the topological model. The parity
processing layer is designed to simply output co-ordinate tuples for all
parity changes in the lattice to this next layer. The relevant question is,
can the minimum weight processing of this data be performed over a large
volume of the cluster in a comparable time frame to the quantum preparation
network?
### V.1 Minimum Weight Matching benchmarking
Classical algorithms for determining the minimum weight matching of a
connected graph are well known with a run-time polynomial in the total number
of nodes Cook and Rohe (1999); Kolmogorov (2009). Such algorithms are derived
from the original Edmonds’ Blossom algorithm Edmonds (1967) and for our
benchmarking tests we have used Blossom V Kolmogorov (2009). However, due to
the nature of our problem, there are several adaptations that can be made to
optimize the algorithm further.
Typical minimum weight matching algorithms accept a list of $N$ co-ordinates
($N$ even) and a list of weighted edges (in this case, lattice separations between
nodes) such that the corresponding graph is completely connected. The output
is then a list of edges such that every node is touched by exactly one edge
and the total weight of the edges is a minimum. For the purposes of TCQC we
can optimize by considering the specifics of the problem.
Due to the stochastic hierarchy of errors in the qubit model, and the
assumption that the operational error rate of the computer is low, $p\ll 1$,
the most likely patterns of errors are simply sets of sparse single errors
causing two adjacent cells to flip parity. Longer error chains become
increasingly unlikely.
Figure 9: Likely structure of the error data for a large volume of cluster
cells. As we are assuming the operational error rate of the quantum computer
is low, $p\ll 1$, long error chains become increasingly unlikely. Hence, cell
parity flips will most likely be sparse and, generally, pairs of parity flips
will be isolated. This property of the computational model allows a certain
amount of optimization of the classical requirements for minimum weight
matching.
Therefore, for a given volume of classical parity results, erred cells will
tend to be clustered into small groups, as illustrated in Fig. 9.
Additionally, we can consider the computational lattice structure. Fig. 10,
taken from Ref. Devitt et al. (2009), illustrates the 2D cross-section of the
cell structure once logical qubits are defined.
Figure 10: (Taken from Ref. Devitt et al. (2009)). Cross section of a large
topological lattice. Qubits are defined within a cluster region of
approximately $40\times 20$ cells. The actual qubit information is stored in
pairs of defect regions (artificially created holes in the lattice). As
undetectable error chains occur when errors connect two boundary regions,
topological protection is achieved by keeping defect qubits well separated
from boundaries and from each other.
In this example, defects are separated from each other (and from the edge of
the lattice) by a total of 16 cells. Logical errors occur when error chains
connect two boundaries in the cluster. If an error chain spans more than 8
cells, the correction inferred from the endpoints is likely to be incorrect,
resulting in defects connected by a chain of errors - a logical error.
This allows us to set a maximum edge length that is allowed between
connections in the minimum weight matching algorithm. Instead of creating a
completely connected graph structure from the classical data, we instead
create multiple smaller subgraphs, with each subgraph having no connections of
weight greater than a maximum edge parameter $m_{e}$. As the separation and
circumference of defects within the lattice determines the effective distance
of the quantum code, $m_{e}$ can safely be chosen such that $m_{e}=\lfloor
d/2\rfloor$. This approximation ensures that all error chains throughout the
lattice with a weight $\leq\lfloor d/2\rfloor$ are connected within the
classical processing layer. This classical approximation is defined to fail if
a single length $>\lfloor d/2\rfloor$ error chain occurs during computation.
This is much less likely than failure of the code itself which can occur as
the result of numerous, not necessarily connected, arrangements of
$\lfloor(d-1)/2\rfloor$ errors.
It should be noted that this approximation does not neglect all error chains
longer than $m_{e}$. However, the classical error correction data has no
knowledge regarding the actual path an error chain has taken through the
lattice. The $m_{e}$ approximation neglects all error chains with endpoints
separated by $>m_{e}$ cells.
By making this approximation, we speed up the classical processing for minimum
weight as, on average, the algorithm will be run on a very sparse graph
structure.
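A minimal sketch of this construction is given below: edges are only created between flipped cells separated by at most $m_{e}$, and the resulting sparse graph is split into connected components that can then be matched independently. The distance metric and data layout are illustrative assumptions, not the implementation used for the benchmarks.

```python
from itertools import combinations

def build_sparse_graph(flips, m_e):
    """Edges between flipped cells no further apart than m_e (lattice Manhattan distance).
    Sketch of the m_e approximation: longer connections are simply never created,
    so the matching layer only ever sees small, sparse subgraphs."""
    dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    return [(a, b, dist(a, b)) for a, b in combinations(flips, 2) if dist(a, b) <= m_e]

def connected_components(flips, edges):
    """Group flipped cells into connected components of the sparse graph (union-find)."""
    parent = {c: c for c in flips}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for a, b, _ in edges:
        parent[find(a)] = find(b)
    groups = {}
    for c in flips:
        groups.setdefault(find(c), []).append(c)
    return list(groups.values())
```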
### V.2 Classical simulations as a function of total cluster volume and code-
distance.
Given the above classical approximation, we have benchmarked the Blossom V
algorithm Kolmogorov (2009) as a function of the total volume of the cluster
for various distances of the quantum code, $d$ and hence $m_{e}$.
In order to investigate the classical processing requirements of the system,
we will assume a physical error rate of the quantum computer. As with previous
investigations into this system we will be assuming, throughout this
discussion, that the quantum computer operates at a physical error rate of
$p=10^{-4}$ Devitt et al. (2009, 2008) and the fault-tolerant threshold is
$p_{th}\approx 0.61\%$ Raussendorf et al. (2007).
Figure 11: Benchmarking data for the tuple $\rightarrow$ tree creation
processing layer. Taken with $10^{4}$ statistical runs, an operational error
rate of $p=10^{-4}$ for various sized codes, $d=(8,10,12,14,16,18,20)$,
$m_{e}=[4,..,10]$. Notice that $m_{e}$ does not significantly alter the
processing time.
Figure 12: Benchmarking data for the minimum weight matching processing layer.
The simulation conditions for this data set are identical to the tree creation
layer shown in Fig. 11.
Given this base assumption, Figs. 11 and 12 examine the processing time of the
modified Blossom V algorithm run on a MacBook Pro (technical details of this
computer are summarized in Appendix. A). For various volume sizes, $V$, from
$35^{3}$ to $295^{3}$ cells, random single qubit $Z$ errors were generated
with a probability of $10^{-4}$. The processing time for each value of $V$ was
examined for different distance quantum codes and hence different values of
the approximation parameter, $m_{e}=d/2$. A list of cell co-ordinate tuples,
$(i,j,T)$, corresponding to the endpoints of error chains (cells with changed
parity) was constructed. This list is the input to the classical processor as,
in practice, it would be provided directly by the hardware.
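The sketch below shows one way such input data can be generated for benchmarking: each face qubit in an $L^{3}$ block suffers a $Z$ error with probability $p$ and flips the parity of the two cells it borders. The lattice bookkeeping is deliberately simplified (boundary faces are ignored), so this is only an illustrative stand-in for the simulation described above, not its actual code.

```python
import random

def sample_flipped_cells(L, p, seed=0):
    """Generate flipped-cell coordinate tuples for an L^3 block of unit cells.
    Each face qubit (shared by two adjacent cells) suffers a Z error with probability p,
    flipping the parity of both cells it borders; cells outside the block are ignored."""
    rng = random.Random(seed)
    parity = {}
    axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    for i in range(L):
        for j in range(L):
            for k in range(L):
                for dx, dy, dz in axes:  # one face per axis per cell avoids double counting
                    if rng.random() < p:
                        for cell in ((i, j, k), (i + dx, j + dy, k + dz)):
                            if all(0 <= c < L for c in cell):
                                parity[cell] = parity.get(cell, 0) ^ 1
    return [cell for cell, flipped in parity.items() if flipped]

flips = sample_flipped_cells(L=50, p=1e-4)
print(len(flips), "cells changed parity")
```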
The tuple data was then processed in two stages. In the first stage, which we
denote as the tree creation layer, an 8-way sort/search tree was created from
the tuple information and then used to generate a list of connections (edges)
between cells and their distances (weights) [Fig. 11]. The second stage of the
simulation applied the Blossom V minimum weight matching algorithm to the
generated sparse graph structures [Fig. 12]. The benchmarking data was taken
using $10^{4}$ runs per data point.
For each cluster volume, code distances $d=[8,10,12,14,16,18,20]$,
corresponding to $m_{e}=[4,..,10]$ were simulated. For both the tuple
$\rightarrow$ tree creation and the minimum weight matching, Figs. 11 and 12
illustrate that the approximation parameter, $m_{e}$, does not significantly
alter the total simulation time and that for a given ($V,m_{e}$), tree
creation and minimum weight matching take a similar amount of time.
### V.3 Parallelizing the algorithm
The numerical simulations shown in Figs. 11 and 12 clearly illustrate that the
minimum weight matching subroutine cannot be run over the entire lattice used
for TCQC. As a rough estimate, a mainframe device such as the one introduced
in Ref. Devitt et al. (2008) consists of a lattice cross section measuring
$(5\times 10^{5})\times(4\times 10^{3})$ unit cells. Clearly in order to
achieve classical processing speeds of the order of microseconds (for any
distance topological code), either the classical fabrication of the processing
equipment must allow for a speed up of at least 10–15 orders of magnitude over a
standard laptop, or the application of the tree creation and minimum weight
subroutines must be highly parallelized.
Due to the approximation made to the Blossom V algorithm, parallelizing the
classical processing is possible. The $m_{e}$ approximation to the subroutine
prohibits the establishment of graph connections between two cluster cell co-
ordinates separated by a distance $>m_{e}=d/2$. In Fig. 13 we illustrate the
relative frequency of different sized connected components within the lattice
at an error probability of $p=10^{-4}$, for $m_{e}=[4,..,10]$. These
simulations were performed using the Floyd-Warshall algorithm Floyd (1962);
Warshall (1962) obtained for a volume region of $V=50^{3}$, with $2\times
10^{6}$ statistical runs (resulting in approximately $3\times 10^{7}$
connected components in total). In these simulations, the size of each
connected component in the lattice, $n(m_{e})$, does not represent the longest
path through the graph. Instead it represents the physical edge length through
the cluster of a cube of sufficient size to fully contain each connected
component. In Appendix B we provide additional simulations showing the
distribution of connected components within the cluster to assist in the
explanation of Fig. 13.
Figure 13: Volume independent distribution of connected component sizes for
cluster error data. Shown above is the distribution of the maximum linear size
of each connected component for error data, simulated with $p=10^{-4}$,
$m_{e}=[4,..,10]$ and performed with $2\times 10^{6}$ statistical runs (giving
a total number of connected components $\approx 3\times 10^{7}$). Simulations
were performed using the Floyd-Warshall algorithm Floyd (1962); Warshall
(1962), but instead of calculating the maximum distance between any two nodes
in a connected graph we instead calculate the maximum physical distance
through the cluster (in a single spatial dimension) between any two connected
nodes. These results allow us to estimate the edge length of a cube of
sufficient size to encapsulate all connected components at a given value of
$m_{e}$. Performing an approximate exponential fit to the decay of these
curves allows for the estimation of the probability of obtaining very large
connected components. Appendix B presents further simulation results
explaining the general properties of this curve.
Parallelizing the minimum-weight matching procedure requires subdividing a
large volume of the cluster into smaller regions such that each instance of
the tuple $\rightarrow$ tree creation and the minimum weight algorithms
faithfully return the same results as processing the entire volume (up to the
failure probability of the computer). In Fig. 13 we provide an approximate
scaling of the decay of each curve, representing the volume independent
relative frequency of connected components with a linear size, $n(m_{e})$, in
the cluster. In order to parallelize classical processing, two regions are
defined, as illustrated in Fig. 14a.
The inner volume defines the minimum weight processing region while the outer
volume, with an edge length of $3\times$ the inner volume, defines the tree
creation processing region. During tuple $\rightarrow$ tree creation, if any
connected component contains at least one vertex within the inner volume it is
sent to independent instances of minimum weight matching. Provided that the
edge lengths of these regions are large enough, any connected component
with at least one vertex in the inner volume will be fully contained within
the outer volume with high probability.
To determine the size of these processing regions we utilize the decay of the
curves in Fig. 13. The probability of failure when parallelizing classical
processing should be approximately the same as the failure probability of the
quantum computer itself. In order to determine these failure probabilities, we
consider the volume of the cluster required to perform a logical CNOT
operation as a function of $m_{e}$. Fig. 10 illustrates the logical structure
of the lattice. Each logical qubit cell in the cluster consists of a cluster
cross-section measuring $(2d+d/2)(d+d/4)=25m_{e}^{2}/2$ cells. A CNOT gate
requires 4 logical cells and the depth through the cluster required to perform
the gate is $(2d+d/2)=5m_{e}$ cells Raussendorf et al. (2007). Hence the total
cluster volume for a CNOT operation is $V=250m_{e}^{3}$ cells.
The failure probability of the quantum computer during a logical CNOT is
approximately,
$p_{L}(m_{e})\approx 1-(1-{\Omega(m_{e})})^{\lambda(m_{e})}$ (7)
where,
$\Omega(m_{e})\approx\left(\frac{p}{p_{th}}\right)^{d/2}=10^{-2m_{e}}$ (8)
is the probability of failure for a single logical qubit a single layer thick,
with a fault-tolerant threshold of $p_{th}\approx 0.61\%$, $p=10^{-4}$ and
$\lambda(m_{e})=4(2d+d/2)=20m_{e}$ is the number of such layers of the cluster
that need to be consumed to perform a logical CNOT operation.
Given the failure rate of the quantum computer, we utilize the data from Fig.
13 to determine the edge length of a volume large enough to encapsulate all
connected components with a probability approximately equal to Eq. 7. As Fig.
13 represents relative frequencies (the probability of a connected component of
size $n(m_{e})$, relative to a connected component of size one), we scale
$P(n,m_{e})$ by the number of isolated errors expected in a cluster volume
required for a CNOT. Hence,
$6p\times 250m_{e}^{3}P(n,m_{e})=1-(1-{\Omega(m_{e})})^{\lambda(m_{e})}$ (9)
where the factor of 6 comes from the 6 independent qubits per unit cell of the
cluster. Eq. 9 is then solved for $n(m_{e})$ giving,
$n(m_{e})=-\frac{1}{\beta(m_{e})}\ln\left(\frac{1-(1-10^{-2m_{e}})^{20m_{e}}}{0.15\alpha(m_{e})}\right)$
(10)
where $[\alpha(m_{e}),\beta(m_{e})]$ are the scaling parameters shown in Fig.
13.
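The following sketch evaluates Eqs. 7-10 numerically. The threshold and physical error rates are those quoted above; the exponential fit parameters $\alpha(m_{e})$ and $\beta(m_{e})$ come from Fig. 13 and are not reproduced in the text, so the values used in the example line are placeholders, not the paper's values.

```python
import math

def cnot_failure(m_e):
    """Logical CNOT failure probability, Eqs. (7)-(8), using the approximation
    Omega = 10^(-2 m_e) quoted in Eq. (8) and lambda = 20 m_e layers."""
    omega = 10.0 ** (-2 * m_e)
    lam = 20 * m_e
    # 1 - (1 - omega)^lambda, written with expm1/log1p to stay accurate for tiny omega.
    return -math.expm1(lam * math.log1p(-omega))

def region_edge_length(m_e, alpha, beta):
    """Edge length n(m_e) of the bounding cube, Eq. (10); alpha and beta are the
    exponential fit parameters of Fig. 13 (placeholder values must be supplied)."""
    rhs = cnot_failure(m_e)
    return -math.log(rhs / (0.15 * alpha)) / beta

# Example with hypothetical fit parameters (not taken from the paper):
for m_e in range(4, 11):
    print(m_e, f"p_L = {cnot_failure(m_e):.1e}",
          f"n = {region_edge_length(m_e, alpha=1.0, beta=0.5):.1f}")
```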
The values of $n(m_{e})$ for $m_{e}=[4,..10]$ and the probabilities of the
logical CNOT failure and equally the probability of a connected component of
size greater than $n$ are shown in Tab. 1.
$m_{e}$ | CNOT failure | $n(m_{e})$ | $P(\text{connected component }>n)$
---|---|---|---
4 | $O(10^{-7})$ | 10 | $O(10^{-7})$
5 | $O(10^{-8})$ | 15 | $O(10^{-8})$
6 | $O(10^{-10})$ | 23 | $O(10^{-10})$
7 | $O(10^{-12})$ | 32 | $O(10^{-12})$
8 | $O(10^{-14})$ | 44 | $O(10^{-14})$
9 | $O(10^{-16})$ | 62 | $O(10^{-16})$
10 | $O(10^{-18})$ | 81 | $O(10^{-18})$
Table 1: Maximum edge length, $n(m_{e})$, of a cube of sufficient size in the
lattice to encapsulate all connected components of the tuple $\rightarrow$
tree creation graph structure. The last column is the probability that a
connected component of the graph is unbounded by a cube of volume $n^{3}$ and
Eq. 9 ensures that this occurs with approximately the same probability as the
CNOT failure rate of the topological computer.
In Tab. 1 we give the size of the processing regions for parallelizing both
the tuple $\rightarrow$ tree creation and minimum weight matching processes
such that the probability of any connected component within the inner region
unbounded by the boundary of the outer region is approximately the same as the
failure rate of the quantum computer. The value $n(m_{e})$ therefore sets the
edge length of the inner and outer volume regions.
Using this estimate, the tree creation layer becomes an interlaced network,
with each individual instance of tree creation operating over a volume of
$V\approx 27n(m_{e})^{3}$.
Figure 14: Tree creation structure and minimum weight matching processing
structure for parallel application of minimum weight matching. a) illustrates
the volume regions for the two classical processes. For a given approximation
parameter, $m_{e}$, a volume of $27n(m_{e})^{3}$ of tuple data is sent to an
independent tree creation process. Once processed, any subgraph with at least
one vertex in the inner volume region of $n(m_{e})^{3}$ is sent to a separate
instance of minimum weight processing for this region. Parallelization is
achieved by interlacing the outer volumes such that the inner volumes touch.
b) illustrates the temporal processing of this data, where a given tree
creation volume is processed through time. For each instance of the tree
creation process, 67% of the tuple data is taken from the previous instance of
tree creation (as the previous volume overlaps the new volume by two-thirds).
The remaining 33% of the data is collected directly from the measurement of
photons. Therefore both the tree creation process and minimum weight process
must be completed within the time for collection of the new data in order to
keep up with the quantum clock speed.
As we can neglect larger connected components (which occur with probability
roughly equal to the probability of quantum failure), any connected component
that contains at least one vertex within a central volume $V\approx
n(m_{e})^{3}$ will be fully contained within the outer volume of $V\approx
27n(m_{e})^{3}$. The central volume represents the region that is sent to
separate instances of the minimum weight matching algorithm. The tree creation
layer is interlaced such that each of the central volumes touch but do not
overlap.
This processing structure ensures that multiple parallel instances of minimum
weight matching will produce identical results to an individual instance of
minimum weight matching run over the entire volume. As the tree creation
layers are interlaced, tree structures that cross the boundaries of two inner
volume regions will be sent to two independent instances of minimum weight
matching. After processing, these duplicate results will simply be removed
from the final error list.
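A minimal sketch of this bookkeeping is shown below: flipped-cell tuples are routed to every region whose outer cube contains them, and an inner-cube test is used afterwards to keep a single copy of each matched pair. The region indexing convention is an assumption made for illustration.

```python
def assign_to_regions(flips, n):
    """Assign flipped-cell tuples to interlaced processing regions.
    Inner cubes of edge n tile the volume without overlap; each region's outer cube
    (edge 3n, centred on its inner cube) collects the cells handed to one independent
    tree creation / matching instance."""
    regions = {}
    for cell in flips:
        home = tuple(c // n for c in cell)   # inner cube containing this cell
        # The cell also lies inside the outer cubes of the 26 neighbouring regions.
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    rid = (home[0] + di, home[1] + dj, home[2] + dk)
                    regions.setdefault(rid, []).append(cell)
    return regions

def in_inner_cube(cell, rid, n):
    """True if the cell lies inside region rid's inner cube; used to keep one copy of
    each matched pair and drop the duplicates produced by the overlapping outer cubes."""
    return all(r * n <= c < (r + 1) * n for c, r in zip(cell, rid))
```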
We can now combine the results from Tab. 1 with the simulation data of Figs.
11 and 12 to determine the maximum size and speed of a quantum computer such
that error correction data can be processed sufficiently quickly. Fig. 14b.
illustrates how error correction data is collected as qubits along the third
axis of the cluster are sequentially measured. As the tuple $\rightarrow$ tree
creation processing layer consists of an interlaced set of $V=27n(m_{e})^{3}$
cubes, two thirds of the data for any given volume is taken from the
previously collected results while the final one third consists of newly
collected data. Therefore the processing “window” available for each instance
of the tuple $\rightarrow$ tree creation (and hence each instance of minimum
weight matching associated with each tree creation volume) is the time
required to collect this new data.
The optical network illustrated in Sec. III has each parity calculation from
the detector banks, for each unit cell, occurring over three successive
cluster faces and, in the absence of loss, occurring every 2$T$, where $T$ is the
separation period of photons (the quantum clock rate). Tree creation from the
parity tuples for a given volume element utilizes the last $9n(m_{e})^{2}\times 2n(m_{e})$ tuples of information from the previous instance
of tree creation and must store the same amount for use in the next tree
creation subroutine. Parity tuple $\rightarrow$ tree creation processing is
repeated for each $27n(m_{e})^{3}$ volume every $2Tn(m_{e})$ seconds.
Taking the simulation data from Fig. 11 (as tuple $\rightarrow$ tree-creation
is the slower of the two processes), the fastest clock cycle,
$T_{\text{min}},$ can be calculated for each value of $m_{e}$ as,
$T_{\text{min}}(m_{e})=\frac{t(3n(m_{e}))}{2n(m_{e})},$ (11)
where $t(3n(m_{e}))$ is the processing time as a function of the edge length
of the processing volume, $3n(m_{e})$, shown in Fig. 12. The results are shown
in Tab. 2.
$m_{e}$ | CNOT failure | $n(m_{e})$ | $T_{\text{min}}(m_{e})(\mu s)$ | “window”, $2n(m_{e})(\mu s)$ | CNOT operating Freq. | Processing instances / Logical qubit, (I)
---|---|---|---|---|---|---
4 | $O(10^{-7})$ | 10 | 0.06 | 19 | 105 kHz | 8.7
5 | $O(10^{-8})$ | 15 | 0.28 | 31 | 18 kHz | 5.3
6 | $O(10^{-10})$ | 23 | 1.1 | 46 | 4 kHz | 3.4
7 | $O(10^{-12})$ | 32 | 3.2 | 64 | 1 kHz | 2.4
8 | $O(10^{-14})$ | 44 | 9.3 | 87 | 0.3 kHz | 1.7
9 | $O(10^{-16})$ | 62 | 28 | 124 | 0.1 kHz | 1.0
10 | $O(10^{-18})$ | 81 | 68 | 162 | 37 Hz | 0.77
Table 2: Maximum size and speeds for topological quantum computers when
classical processing is performed utilizing the benchmarking data of Figs. 11
and 12. The failure rate of logical CNOT gates defines the size of the computer, $\approx 1/KQ$, where $Q$ is the number of logical qubits in the system and $K$ is the total number of logical time steps available for an algorithm. $T_{\text{min}}(m_{e})$ defines the maximum speed at which the quantum network can be operated such that error correction data can be processed sufficiently
quickly. The processing “window”, independent of the benchmarking data, is
related to the parallelization of classical processing. Processing instances /
Logical qubit defines how many classical processes are required for a lattice
cross section housing one logical qubit.
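As a concrete illustration of Eq. (11), the following minimal Python sketch computes $T_{\text{min}}$ from a processing-time function; the form of $t(L)$ below is a placeholder standing in for the benchmark data of Figs. 11 and 12, not the measured values.

```python
# Minimal sketch of Eq. (11): T_min(m_e) = t(3 n(m_e)) / (2 n(m_e)).
# t_of_edge is a placeholder for the benchmark data, not the real timing.
def t_min(m_e, n_of_me, t_of_edge):
    """n_of_me: dict m_e -> n(m_e); t_of_edge: processing time [s] vs. cube edge."""
    n = n_of_me[m_e]
    return t_of_edge(3 * n) / (2 * n)

n_of_me = {4: 10, 5: 15, 6: 23}        # n(m_e) values quoted in Tab. 2
t_of_edge = lambda L: 5e-9 * L**3      # assumed scaling, for illustration only
print(t_min(4, n_of_me, t_of_edge))    # seconds per photon period (illustrative)
```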
The last column in Tab. 2 gives the total number of processing instances required per logical qubit in the computer. This is calculated as the ratio of
the cross-sectional area of a logical qubit to the cross-sectional area of the
minimum weight matching processing volume.
$I=4\times\frac{(2d+d/2)(d+d/4)}{n(m_{e})^{2}}=4\times\frac{25m_{e}^{2}}{4n(m_{e})^{2}}.$ (12)
The factor of 4 is introduced because each instance of minimum weight matching has an associated tuple $\rightarrow$ tree creation process; primal and dual lattice error correction is performed independently, giving another factor of two. As the size of the quantum computer increases (increasing $m_{e}$) this ratio decreases, as the scaling of $n(m_{e})$ is approximately $n(m_{e})\approx O(m_{e}^{2})$.
While the total size and speed of a topological quantum computer will
ultimately be governed by the experimental accuracy in constructing each
quantum component, the results shown in Tab. 2 are promising. Assuming that
quantum fabrication can reach an accuracy of $p=10^{-4}$, current classical
technology is quite sufficient to process error correction data for a large
range of computer sizes. The logical failure rate of the CNOT gate
approximately defines the size of the computer, $p_{L}(\text{CNOT})\approx
O(1/KQ)$, where $Q$ is the number of logical qubits in the computer and $K$ is
the number of logical time steps in a desired quantum algorithm (note that the
application of non-Clifford gates will lower this effective size further).
Even for a small topological computer ($m_{e}=4$ provides sufficient protection for 1000 logical qubits running an algorithm requiring approximately $10^{4}$ time steps), fewer than ten classical processing instances are required per logical qubit, with the quantum network run at $\approx 17$ MHz and a logical CNOT operating frequency of $\approx 100$ kHz.
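As a sanity check on this counting, a trivial sketch assuming only the order-of-magnitude failure rate quoted in Tab. 2:

```python
# Minimal sketch: a computer with Q logical qubits and K logical time steps is
# viable roughly when K * Q is within 1 / p_L(CNOT).  p_L is the m_e = 4 order
# of magnitude from Tab. 2; this is an illustration, not a full resource count.
p_l_cnot = 1e-7                     # logical CNOT failure rate, m_e = 4 (Tab. 2)
Q, K = 1000, 10**4                  # logical qubits, logical time steps
print(K * Q, 1.0 / p_l_cnot)        # both ~1e7: K*Q sits right at the protection budget
```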
The classical processing power utilized in this investigation is clearly not
specially designed for the task of operating a topological computer. Not only
can we safely assume that classical processing power will increase before the
advent of a large topological computer, but the design and implementation of
both specialized hardware and more optimal coding should also result in
significant increases in the achievable operational frequency of the quantum
network and logical gates. More recent analysis has suggested that the operational frequency of the quantum network could reach the $100$ MHz level Su et al. (2008). In this case, if moving to optimized hardware and software yields a 2-3 order of magnitude speed-up in classical processing, current classical technology would be sufficient for a quantum computer capable of a logical error rate $\approx O(10^{-14})$ and a logical CNOT frequency of $\approx 0.3$ MHz.
## VI Observations and Conclusions
In this work we have focused exclusively on the classical requirements to
perform the underlying error correction processing for the network. As the
error correction procedures can be thought of as the base-level processing,
the development of an appropriate quantum controller system is an obvious next
step. This higher-level classical controller will essentially be responsible for the following:
1. 1.
The compilation of a user-designed quantum circuit into an appropriate
measurement sequence on a topological cluster.
2. 2.
Direct control of quantum components within the measurement system of the
topological cluster in order to change the measurement basis for the photon
stream.
3. 3.
The dynamic allocation of cluster resources dependent on operational error
rate. Specifically, the fundamental partitioning of the lattice into
appropriately separated defect regions for logical qubit storage.
4. 4.
Accepting the data from the error correction processing layer to faithfully
ensure accurate error correction is performed during computation.
5. 5.
Dynamical restructuring of the topological lattice partitioning to allow for
ancilla preparation for non-Clifford quantum gates, and optimization of
logical qubit/qubit interactions for specific quantum subroutines.
This last point is one of the more interesting questions that can be addressed
in this model. As we noted in the introduction, once the cluster lattice is
prepared, data processing is performed via software. The structure of the
topological lattice essentially allows for qubit/qubit interactions in a 2D
arrangement [Fig. 10]. However, provided we have access to a large cluster
lattice, we can envisage the dynamical creation of data pathways and “flying
defects” in order to speed up specific quantum subroutines. This could lead to
some extremely interesting avenues of investigation in software control and
optimization of a TCQC architecture.
This analysis has demonstrated that the classical error correction requirements necessary to construct an optical quantum computer based on topological cluster states can certainly be met with today’s technology. We have illustrated how minimum weight matching, required to process error data from the topological mainframe, can be optimized in such a way as to allow for a
massively parallelized processing network that can process information for
large topological clusters.
These results are very encouraging. As with the quantum preparation network,
the classical front end can also be constructed in a modular manner. As the
quantum preparation network is increased in size via the addition of more
photonic chips, the classical processing network is also expanded in a similar
way. Parity check processors, tuple $\rightarrow$ tree creation processors and
minimum weight matching processors are also linked into a pre-existing network
as its size expands. The results of this investigation give us a very
optimistic outlook on the viability of the topological model as a possible
avenue to achieve truly large scale quantum computation.
## VII Acknowledgments
We would like to thank Rod Van Meter, Ashley Stephens, and Zac Evans for
helpful discussions. We acknowledge the support of MEXT, JST, HP and the EU
project HIP and the support of the Australian Research Council, the Australian
Government, and the US National Security Agency (NSA) and the Army Research
Office (ARO) under contract number W911NF-08-1-0527.
## Appendix A Technical Specifications for simulations.
The technical specifications of the computer used in the benchmarking simulations are summarized below.
Process | Benchmark
---|---
Floating Point Basic | 3.1 Gflop/s
vecLib FFT | 3.5 Gflop/s
Memory Fill | 6.2 GB/s
Table 3: Benchmarks of system processes for the MacBook Pro 2.2 GHz, 3 GB RAM. Benchmarking data was taken using the program XBench, version 1.3.
## Appendix B Simulations of graph sizes for cluster errors.
The simulations shown in Fig. 13 illustrate the distribution of the largest
physical distance through the cluster (in a single spatial dimension) between
any two nodes for each connected component for graph structures established
using the $m_{e}$ approximation detailed in the main text. The following
results illustrate the structure of this distribution in more detail.
Figure 15: The maximum graph diameter over $10^{5}$ statistical runs utilizing
$p=10^{-4}$, $m_{e}=6$ and a total cluster volume of $V=100^{3}$. The Floyd-
Warshall algorithm is designed to find the shortest pathway between any two
connected nodes in the cluster and in these simulations we maximize over all
possible connections. Unlike Fig. 13 we are examining the actual graph
diameter, $D$, and not the linear separation of nodes in the physical cluster.
Here we are simply finding the maximum graph diameter in the complete data
set, unlike Fig. 13 which calculates the diameter of all connected components
(hence these results exhibit volume dependence). At $p=10^{-4}$ and $m_{e}=6$,
the graph structure for $D=8$ is the most probable. Changing $p$ shifts which
diameter graph is the most probable, while changing $m_{e}$ changes the values
of $D$ where peaks occur.
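For reference, a minimal Python sketch of such a diameter computation, under the simplifying assumptions of a dense adjacency matrix and unit edge weights (this is not the simulation code itself):

```python
# Minimal sketch: Floyd-Warshall all-pairs shortest paths on the error graph,
# followed by taking the maximum finite distance as the graph diameter D.
import numpy as np

def max_graph_diameter(adj):
    """adj: (N, N) boolean adjacency matrix of the error graph."""
    N = adj.shape[0]
    dist = np.where(adj, 1.0, np.inf)          # unit edge weights assumed
    np.fill_diagonal(dist, 0.0)
    for k in range(N):                         # Floyd-Warshall relaxation step
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    finite = dist[np.isfinite(dist)]
    return int(finite.max())
```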
Figure 16: Volume independence of distribution of all connected components
within a cluster volume. The upper four curves are simulations calculating the
largest connected component in cluster volumes of
$V=(50^{3},75^{3},100^{3},125^{3})$ while the lower four curves examine the
distribution for all connected components at $p=10^{-4}$ and $m_{e}=6$ (the total number of simulations varies between $O(10^{5})$ and $O(10^{6})$). As can be seen, the distribution of all connected components is essentially volume independent.
Additionally, we are now calculating the maximum physical separation of
endpoints within each connected component. This has the effect of smoothing
out the curves in Fig. 15 and slightly shifting the main peak to the left (as
maximum graph diameter, in general, is larger than the maximum node separation
in the physical cluster). In the simulations calculating the largest connected
component, the main peaks shift to the right as volume increases. This is
again due to the fact that the largest connected component will scale with
volume.
## References
* Cirac and Zoller (1995) J. Cirac and P. Zoller, Phys. Rev. Lett. 74, 4091 (1995).
* Cory et al. (1997) D. Cory, A. Fahmy, and T. Havel, Proc. National Academy of Science 94, 1634 (1997).
* Gershenfeld and Chuang (1997) N. Gershenfeld and I. Chuang, Science 275, 350 (1997).
* Kane (1998) B. Kane, Nature (London) 393, 133 (1998).
* Loss and DiVincenzo (1998) D. Loss and D. DiVincenzo, Phys. Rev. A. 57, 120 (1998).
* Knill et al. (2001) E. Knill, R. Laflamme, and G. Milburn, Nature (London) 409, 46 (2001).
* Mooij et al. (1999) J. Mooij, T. Orlando, L. Levitov, L. Tian, C. van der Wal, and S. Lloyd, Science 285, 1096 (1999).
* Nakamura et al. (1999) Y. Nakamura, Y. A. Pashkin, and J. Tsai, Nature (London) 398, 786 (1999).
* ARDA (2004) ARDA, _Quantum information science and technology roadmap project, http://qist.lanl.gov_ (2004).
* Kok et al. (2007) P. Kok, W. Munro, K. Nemoto, T. Ralph, J. Dowling, and G. Milburn, Rev. Mod. Phys. 79, 135 (2007).
* Chiorescu et al. (2003) I. Chiorescu, Y. Nakamura, C. Harmans, and J. Mooij, Science 299, 1869 (2003).
* Yamamoto et al. (2003) T. Yamamoto, Y. A. Pashkin, O. Astafiev, Y. Nakamura, and J. Tsai, Nature (London) 425, 941 (2003).
* Chiaverini et al. (2004) J. Chiaverini, D. Leibfried, T. Schaetz, M. Barrett, R. Blakestad, J. Britton, W. Itano, J. Jost, E. Knill, C. Langer, et al., Nature (London) 432, 602 (2004).
* Häffner et al. (2005) H. Häffner, W. Hänsel, C. Roos, J. Benhelm, D. C. al kar, M. Chwalla, T. Körber, U. Rapol, M. Riebe, P. Schmidt, et al., Nature (London) 438, 643 (2005).
* Gorman et al. (2005) J. Gorman, D. Hasko, and D. Williams, Phys. Rev. Lett. 95, 090502 (2005).
* Gaebel et al. (2006) T. Gaebel, M. Domhan, I. Popa, C. Wittmann, P. Neumann, F. Jelezko, J. Rabeau, N. Stavrias, A. Greentree, S. Prawer, et al., Nature Physics (London) 2, 408 (2006).
* Hanson et al. (2006) R. Hanson, F. Mendoza, R. Epstein, and D. Awschalom, Phys. Rev. Lett. 97, 087601 (2006).
* Dutt et al. (2007) M. G. Dutt, L. Childress, L. Jiang, E. Togan, J. Maze, F. Jelezko, A. Zibrov, P. Hemmer, and M. Lukin, Science 316, 1312 (2007).
* O’Brien et al. (2003) J. O’Brien, G. Pryde, A. White, T. Ralph, and D. Branning, Nature (London) 426, 264 (2003).
* Raussendorf et al. (2007) R. Raussendorf, J. Harrington, and K. Goyal, New J. Phys. 9, 199 (2007).
* Raussendorf and Harrington (2007) R. Raussendorf and J. Harrington, Phys. Rev. Lett. 98, 190504 (2007).
* Fowler and Goyal (2009) A. Fowler and K. Goyal, Quant. Inf. Comp. 9, 721 (2009).
* Devitt et al. (2009) S. Devitt, A. Fowler, A. Stephens, A. Greentree, L. Hollenberg, W. Munro, and K. Nemoto, New. J. Phys. 11, 083032 (2009).
* Ionicioiu and Munro (2009) R. Ionicioiu and W. Munro, arxiv:0906.1727 (2009).
* Devitt et al. (2008) S. Devitt, W. Munro, and K. Nemoto, arxiv:0810.2444 (2008).
* Stephens et al. (2008) A. Stephens, Z. Evans, S. Devitt, A. Greentree, A. Fowler, W. Munro, J. O’Brien, and K. Nemoto, Phys. Rev. A. 78, 032318 (2008).
* Raussendorf and Briegel (2001) R. Raussendorf and H.-J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
* Meter (2006) R. V. Meter, quant-ph/0607065 (2006).
* Steane (2007) A. Steane, Quant. Inf. Comp. 7, 171 (2007).
* Bravyi and Kitaev (2001) S. Bravyi and A. Kitaev, Quant. Computers and Computing 2, 43 (2001).
* Dennis et al. (2002) E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, J. Math. Phys. 43, 4452 (2002).
* Fowler et al. (2008) A. Fowler, A. Stephens, and P. Groszkowski, arxiv:0803.0272 (2008).
* Kitaev (1997) A. Kitaev, Russ. Math. Surv. 52, 1191 (1997).
* Stace et al. (2009) T. Stace, S. Barrett, and A. Doherty, Phys. Rev. Lett. 102, 200501 (2009).
* Chu and Liu (1965) Y. Chu and T. Liu, Science Sinica 14, 139 (1965).
* Edmonds (1967) J. Edmonds, J. Res. Nat. Bur. Standards 71B, 233 (1967).
* Cook and Rohe (1999) W. Cook and A. Rohe, INFORMS Journal on Computing, 11, 138 (1999).
* Kolmogorov (2009) V. Kolmogorov, Mathematical Programming Computation (2009), URL http://dx.doi.org/10.1007/s12532-009-0002-8.
* Nielsen and Chuang (2000) M. Nielsen and I. Chuang, _Quantum Computation and Information_ (Cambridge University Press, 2000), 2nd ed.
* Bravyi and Kitaev (2005) S. Bravyi and A. Kitaev, Phys. Rev. A. 71, 022316 (2005).
* Reichardt (2005) B. Reichardt, Quant. Inf. Proc. 4, 251 (2005).
* Devitt et al. (2007) S. Devitt, A. Greentree, R. Ionicioiu, J. O’Brien, W. Munro, and L. Hollenberg, Phys. Rev. A. 76, 052312 (2007).
* Floyd (1962) R. Floyd, Comm. of the ACM 5, 345 (1962).
* Warshall (1962) S. Warshall, Journal of the ACM 9, 11 (1962).
* Su et al. (2008) C.-H. Su, A. Greentree, W. Munro, K. Nemoto, and L. Hollenberg, Phys. Rev. A. 78, 062336 (2008).
|
arxiv-papers
| 2009-06-02T03:50:37 |
2024-09-04T02:49:03.102636
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Simon J. Devitt, Austin G. Fowler, Todd Tilma, W. J. Munro, Kae Nemoto",
"submitter": "Simon Devitt Dr",
"url": "https://arxiv.org/abs/0906.0415"
}
|
0906.0513
|
# Fully 3D Multiple Beam Dynamics Processes Simulation for the Tevatron
E.G. Stern egstern@fnal.gov J.F. Amundson P.G. Spentzouris A.A. Valishev
Fermi National Accelerator Laboratory
###### Abstract
We present validation and results from a simulation of the Fermilab Tevatron
including multiple beam dynamics effects. The essential features of the
simulation include a fully 3D strong-strong beam-beam particle-in-cell Poisson
solver, interactions among multiple bunches and both head-on and long-range
beam-beam collisions, coupled linear optics and helical trajectory consistent
with beam orbit measurements, chromaticity and resistive wall impedance. We
validate individual physical processes against measured data where possible,
and analytic calculations elsewhere. Finally, we present simulations of the
effects of increasing beam intensity with single and multiple bunches, and
study the combined effect of long-range beam-beam interactions and transverse
impedance. The results of the simulations were successfully used in Tevatron
operations to support a change of chromaticity during the transition to
collider mode optics, leading to a factor of two decrease in proton losses,
and thus improved reliability of collider operations.
###### pacs:
29.27.-a
††preprint: FERMILAB-PUB-09-281-AD-CD
## I Motivation
The Fermilab Tevatron Tevatron is a $p$-$\bar{p}$ collider operating at a
center-of-mass energy of $1.96\,\rm{TeV}$ and peak luminosity reaching
$3.53\times 10^{32}\,\rm{cm}^{-2}\,~{}\rm{s}^{-1}$. The colliding beams
consist of 36 bunches moving in a common vacuum pipe. For high-energy physics
operations, the beams collide head-on at two interaction points (IPs) occupied
by particle detectors. In the intervening arcs the beams are separated by
means of electrostatic separators; long-range (also referred to as parasitic)
collisions occur at 136 other locations. Effects arising from both head-on and
long-range beam-beam interactions impose serious limitations on machine
performance, hence constant efforts are being exerted to better understand the
beam dynamics. Due to the extreme complexity of the problem a numerical
simulation appears to be one of the most reliable ways to study the
performance of the system.
Studies of beam-beam interactions in the Tevatron Run II mainly concentrated
on the incoherent effects, which were the major source of particle losses and
emittance growth. This approach was justified by the fact that the available
antiproton intensity was a factor of 5 to 10 less than the proton intensity
with approximately equal transverse emittances. Several simulation codes were
developed and used for the optimization of the collider performance lifetrac ;
BBSim .
With the commissioning of electron cooling in the Recycler, the number of
antiprotons available to the collider substantially increased. During the
2007 and 2008 runs the initial proton and antiproton intensities differed by
only a factor of 3. Moreover, the electron cooling produces much smaller
transverse emittance of the antiproton beam ($\simeq
4\pi\,\textrm{mm}\,\textrm{mrad}$ 95% normalized vs. $\simeq
20\pi\,\textrm{mm}\,\textrm{mrad}$ for protons), leading to the head-on beam-
beam tune shifts of the two beams being essentially equal. The maximum
attained total beam-beam parameter for protons and antiprotons is 0.028.
Under these circumstances coherent beam-beam effects may become an issue. A
number of theoretical works exist predicting the loss of stability of coherent
dipole oscillations when the ratio of beam-beam parameters is greater than
$\simeq 0.6$ due to the suppression of Landau damping Alexahin . Also, the
combined effect of the machine impedance and beam-beam interactions in
extended length bunches couples longitudinal motion to transverse degrees of
freedom and may produce a dipole or quadrupole mode instability combinedbb .
Understanding the interplay between all these effects requires a comprehensive
simulation. This paper presents a macroparticle simulation that includes the
main features essential for studying the coherent motion of bunches in a
collider: a self-consistent 3D Poisson solver for beam-beam force computation,
multiple bunch tracking with the complete account of sequence and location of
long-range and head-on collision points, and a machine model including our
measurement based understanding of the coupled linear optics, chromaticity,
and impedance.
In Sections II–V we describe the simulation subcomponents and their validation
against observed effects and analytic calculations. Section VI shows results
from simulation runs which present studies of increasing the beam intensity.
Finally, in Section VII we study the coherent stability limits for the case of
combined resistive wall impedance and long-range beam-beam interactions.
## II BeamBeam3d code
The Poisson solver in the BeamBeam3d code is described in Ref. Qiang1 . Two
beams are simulated with macroparticles generated with a random distribution
in phase space. The accelerator ring is conceptually divided into arcs with
potential interaction points at the ends of the arcs. The optics of each arc
is modeled with a $6\times 6$ linear map that transforms the phase space
$\{x,x^{\prime},y,y^{\prime},z,\delta\}$ coordinates of each macroparticle
from one end of the arc to the other. There is significant coupling between
the horizontal and vertical transverse coordinates in the Tevatron. For our
Tevatron simulations, the maps were calculated using coupled lattice functions
Optim1 obtained by fitting a model ColOptics of beam element configuration
to beam position measurements. The longitudinal portion of the map produces
synchrotron motion among the longitudinal coordinates with the frequency of
the synchrotron tune. Chromaticity results in an additional momentum-dependent
phase advance $\delta\mu_{x(y)}=\mu_{0}C_{x(y)}\Delta p/p$ where $C_{x(y)}$ is
the normalized chromaticity for $x$ (or $y$) and $\mu_{0}$ is the design phase
advance for the arc. This is a generalization of the definition of
chromaticity to apply to an arc, and reduces to the normalized chromaticity
$({\Delta\nu}/\nu)/({\Delta p/p})$ when the arc encompasses the whole ring.
The additional phase advance is applied to each particle in the decoupled
coordinate basis so that symplecticity is preserved.
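A minimal Python sketch of this chromatic phase advance, assuming the particle coordinates have already been transformed to the decoupled, normalized basis (an illustration, not the BeamBeam3d implementation):

```python
# Minimal sketch: in the decoupled, normalized basis each particle's transverse
# coordinates are rotated by an extra angle delta_mu = mu0 * C * dp/p, which
# keeps the map symplectic because a pure rotation is symplectic.
import numpy as np

def chromatic_kick(x_n, xp_n, delta, mu0, C):
    """x_n, xp_n: normalized (decoupled) coordinates; delta: dp/p per particle."""
    dmu = mu0 * C * delta                      # momentum-dependent phase advance
    c, s = np.cos(dmu), np.sin(dmu)
    x_new = c * x_n + s * xp_n                 # rotation in normalized phase space
    xp_new = -s * x_n + c * xp_n
    return x_new, xp_new
```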
The Tevatron includes electrostatic separators to generate a helical
trajectory for the oppositely charged beams. The mean beam offset at the IP is
included in the Poisson field solver calculation.
Different particle bunches are individually tracked through the accelerator.
They interact with each other with the pattern and locations that they would
have in the actual collider.
The impedance model applies a momentum kick to the particles generated by the
dipole component of resistive wall wakefields Chao1 . Each beam bunch is
divided longitudinally into slices containing approximately equal numbers of
particles. As each bunch is transported through an arc, particles in slice $i$
receive a transverse kick from the wake field induced by the dipole moment of
the particles in forward slice $j$:
$\frac{\Delta\vec{p}_{\perp}}{p}=\frac{2}{\pi b^{3}}\sqrt{\frac{4\pi\epsilon_{0}c}{\sigma}}\,\frac{N_{j}r_{0}\,\langle\vec{r}_{j}\rangle}{\beta\gamma}\,\frac{L}{\sqrt{z_{ij}}}$ (1)
The length of the arc is $L$, $N_{j}$ is the number of particles in slice $j$,
$r_{0}$ is the classical electromagnetic radius of the beam particle
$e^{2}/4\pi\epsilon_{0}m_{0}c^{2}$, $z_{ij}$ is the longitudinal distance
between the particle in slice $i$ that suffers the wakefield kick and slice
$j$ that induces the wake. $\langle\vec{r}_{j}\rangle$ is the mean transverse position of
particles in slice $j$, $b$ is the pipe radius, $c$ is the speed of light,
$\sigma$ is the conductivity of the beam pipe and $\beta\gamma$ are Lorentz
factors of the beam. Quantities with units are specified in the MKSA system.
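For illustration, Eq. (1) can be evaluated directly as in the following Python sketch; the function signature and names are ours, not those of the production code.

```python
# Minimal sketch evaluating Eq. (1): dipole resistive-wall kick of slice j on a
# particle in slice i.  All quantities in MKSA units, as in the text.
import numpy as np

eps0 = 8.8541878128e-12      # vacuum permittivity [F/m]
c    = 299792458.0           # speed of light [m/s]

def wake_kick(N_j, r0, r_mean_j, beta, gamma, L, b, sigma_cond, z_ij):
    """Returns the relative transverse momentum kick Delta p_perp / p (vector)."""
    prefactor = (2.0 / (np.pi * b**3)) * np.sqrt(4.0 * np.pi * eps0 * c / sigma_cond)
    return prefactor * (N_j * r0 * np.asarray(r_mean_j)) / (beta * gamma) * L / np.sqrt(z_ij)
```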
## III Synchrobetatron comparisons
We will assess the validity of the beam-beam calculation by comparing
simulated synchrobetatron mode tunes with a measurement performed at the
VEPP-2M $500\,{\rm MeV}$ $e^{+}e^{-}$ collider and described in Ref.
Nesterenko . These modes are an unambiguous marker of beam-beam interactions
and provide a sensitive tool for evaluating calculational models. These modes
arise in a colliding beam accelerator where the longitudinal bunch length and
the transverse beta function are of comparable size. Particles at different
$z$ positions within a bunch are coupled through the electromagnetic
interaction with the opposing beam leading to the development of coherent
synchrobetatron modes. The tune shifts for different modes have a
characteristic evolution with beam-beam parameter
$\xi=Nr_{0}/4\pi\gamma\epsilon$, in which $N$ is the number of particles,
$r_{0}$ is the classical electromagnetic radius of the beam particle, and
$\epsilon$ is the unnormalized one-sigma beam emittance.
There are two coherent transverse modes in the case of simple beam-beam
collisions between equal intensity beams without synchrotron motion: the
$\sigma$ mode where the two beams oscillate with the same phase, and the $\pi$
mode where the two beams oscillate with opposite phases piwinski . Without
synchrotron motion, the $\sigma$ mode has the same tune as unperturbed betatron motion, while the $\pi$ mode frequency is offset by $K\xi$, where the parameter $K$ is approximately equal to, but greater than, 1 and depends on the transverse shape of the beams yokoya . The presence of synchrotron motion
introduces a more complicated spectrum of modes whose spectroscopy is outlined
in Fig. 1 in Ref. Nesterenko .
We simulated the VEPP-2M collider using Courant-Snyder uncoupled maps. The
horizontal emittance in the VEPP-2M beam is much larger than the vertical
emittance. The bunch length ($4\,{\rm cm}$) is comparable to
$\beta^{*}_{y}=6\,{\rm cm}$ so we expect to see synchrobetatron modes. In
order to excite synchrobetatron modes, we set an initial $y$ offset of one
beam sigma approximately matching the experimental conditions.
Longitudinal effects of the beam-beam interaction were simulated by dividing
the bunch into six slices. At the interaction point, bunches drift through
each other. Particles in overlapping slices are subjected to a transverse
beam-beam kick calculated by solving the 2D Poisson equation for the electric
field with the charge density from particles in the overlapping beam slice.
Figure 1: Simulated mode spectra in the VEPP-2M collider with $\xi=0.008$
showing synchrobetatron modes. The line indicated by A is the base tune, B is
the first synchrobetatron mode, C is the beam-beam $\pi$ mode.
Simulation runs with a range of beam intensities corresponding to beam-beam
parameters of up to 0.015 were performed, in effect mimicking the experimental
procedure described in Ref. Nesterenko . For each simulation run, mode peaks
were extracted from the Fourier transform of the mean bunch vertical position.
An example of the spectrum from such a run is shown in Fig. 1 with three mode
peaks indicated. In Fig. 2, we plot the mode peaks from the BeamBeam3d
simulation as a function of $\xi$ as red diamonds overlaid on experimental
data from Ref. Nesterenko and a model using linearized coupled modes referred
to as the matrix model described in that reference and Ref.
Perevedentsev_Valishev ; Danilov_Perevedentsev . As can be seen, there is good
agreement between the observation and simulation giving us confidence in the
beam-beam calculation.
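A minimal sketch of this peak-extraction step is given below in Python; the simple "strongest bins" heuristic is our assumption and stands in for whatever peak-finding procedure was actually used for Fig. 1.

```python
# Minimal sketch: Fourier transform the turn-by-turn mean vertical bunch
# position and report the strongest spectral peaks as candidate mode tunes.
import numpy as np

def mode_peaks(y_mean, n_peaks=3):
    """y_mean: 1D array of bunch centroid <y>, sampled once per turn."""
    spec = np.abs(np.fft.rfft(y_mean - y_mean.mean()))**2
    tunes = np.fft.rfftfreq(len(y_mean), d=1.0)          # fractional tunes per turn
    idx = np.argsort(spec)[::-1][:n_peaks]               # strongest bins first
    return sorted(zip(tunes[idx], spec[idx]))
```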
Figure 2: Simulated synchrobetatron modes as a function of beam-beam parameter $\xi$ (diamonds) overlaid on the observed modes (points).
## IV Impedance tests
Wakefields or, equivalently, impedance in an accelerator with a conducting vacuum pipe give rise to well known instabilities. Our aim in this section is to demonstrate that the wakefield model in BeamBeam3d quantitatively reproduces these theoretically and experimentally well understood phenomena.
The strong head-tail instability examined by Chao Chao1 arises in extended
length bunches in the presence of wakefields. For any particular accelerator
optical and geometric parameters, there is an intensity threshold above
which the beam becomes unstable.
The resistive wall impedance model applies an impulse kick in addition to the map derived from the beam optics. The tune
spectrum is computed from the Fourier transform of the beam bunch positions
sampled at the end of each arc. In order for the calculation to be a good
approximation of the wakefield effect, the impedance kick should be much
smaller than the $x^{\prime}$ or $y^{\prime}$ change due to regular beam
transport so we divide the ring into multiple arcs. which brings up the
question is how many is sufficient. The difference in calculated impedance
tune shift for a 12 arc division of the ring or a 24 arc division is only
$2\times 10^{-4}$, which is less than 3% of the synchrotron tune (0.007 in
this study), the relevant scale in these simulations. We perform the
calculation with 12 arcs for calculational efficiency.
In the absence of impedance, we would expect to see the tune spectrum peak at
20.574, the betatron tune of the lattice. With a pipe radius of $3\,{\rm cm}$
and a bunch length of $20\,{\rm cm}$, resistive wall impedance produces the
spectrum shown in Fig. 3 for a bunch of $4\times 10^{12}$ protons at $150\,{\rm GeV}$ (we simulate a much larger intensity than would be possible in the actual machine in order to drive the strong head-tail instability for comparison with the analytical model). In this simulation, the
base tune $\nu_{\beta}$ is $20.574$ and the synchrotron tune is $0.007$. Three
mode peaks are clearly evident corresponding to synchrobetatron modes with
frequencies $\nu_{\beta}-\nu_{s}$ shifted up by the wakefield (point $A$),
$\nu_{\beta}$ shifted down (point $B$), and $\nu_{\beta}+\nu_{s}$ shifted
upward (point $C$) as would be expected in Ref. chao_strong_headtail .
Figure 3: Simulated spectrum of a two slice bunch in the presence of
wakefields and synchrotron motion showing three synchrobetatron modes $A$,
$B$, and $C$ induced by wakefields. Figure 4: Evolution of the base tune and
lower synchrobetatron mode frequencies as a function of beam intensity showing
the two modes approaching a common frequency due to impedance. The $y$ scale
is in units of the synchrotron tune. The simulations are shown for two-slice and six-slice wakefield calculations.
In Fig. 4, we show the evolution of the two modes as a function of beam
intensity. With the tune and beam environment parameters of this simulation,
Chao’s two particle model predicts instability development at intensities of
about $9\times 10^{12}$ particles, which is close to where the upper and lower
modes meet. We show two sets of curves for two slice and six slice wakefield
calculations. The difference between the two slice and six slice simulations
is accounted for by the effective slice separation, $\hat{z}$, that enters Eq.
1. With two slices, the effective $\hat{z}$ is larger than the six slice
effective $\hat{z}$, resulting in a smaller $W_{0}$. With the smaller wake
strength, a larger number of protons is necessary to drive the two modes
together as is seen in Fig. 4.
When the instability occurs, the maximum excursion of the bunch dipole moment
grows exponentially as the beam executes turns through the accelerator. The
growth rate can be determined by reading the slope of a graph of the absolute
value of bunch mean position as a function of turn number plotted on a log
scale. The growth rate per turn of dipole motion at the threshold of strong
head-tail instability has a parabolic dependence on beam intensity. The
wakefield calculation reproduces this feature, as shown in Fig. 5. The growth
rate increases slowly up to the instability threshold at $5.42\times 10^{12}$, after which it has an explicitly quadratic dependence on beam intensity ($I$): $\textrm{growth rate}=-0.100+0.0304I-0.00207I^{2}$.
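For illustration, the growth rate per turn can be read off as the slope of $\log|\langle x\rangle|$ versus turn number, as in this minimal Python sketch (the fitting window is an arbitrary choice, not the procedure actually used):

```python
# Minimal sketch: estimate the exponential growth rate per turn from the slope
# of log|<x>| over the last turn_window turns of the tracking data.
import numpy as np

def growth_rate(dipole, turn_window):
    """dipole: array of |<x>| per turn."""
    turns = np.arange(len(dipole))[-turn_window:]
    slope, _ = np.polyfit(turns, np.log(np.abs(dipole[-turn_window:])), 1)
    return slope            # growth rate per turn
```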
Figure 5: The growth rate of dipole motion in the simulated accelerator with
impedance as a function of beam intensity as the strong head-tail threshold is
reached superimposed with a parabolic fit. Figure 6: The normalized growth
rate of dipole motion in the simulated accelerator with impedance and
chromaticity as a function of head-tail phase $\chi$ at three beam intensities
demonstrating their linear relationship close to $0$ and the near-universal
relationship for head-tail phase between $-1$ and $0$.
Chromaticity interacts with impedance to cause a different head-tail
instability. We simulated a range of beam intensities and chromaticity values.
The two particle model and the more general Vlasov equation calculation Chao1
indicate that the growth rate scales by the head-tail phase $\chi=2\pi
C\nu_{\beta}{\hat{z}}/c\eta$, where $\eta$ is the slip factor of the machine
and $\hat{z}$ is roughly the bunch length. The head-tail phase gives the size
of betatron phase variation due to chromatic effects over the length of the
bunch.
Some discussion of the meaning of the slip factor in the context of a
simulation is necessary. In a real accelerator, the slip factor has an
unambiguous meaning: $\eta=(\alpha_{C}-1/\gamma^{2})$. The momentum compaction
parameter $\alpha_{c}$ is determined by the lattice and $\gamma$ is the
Lorentz factor. We simulate longitudinal motion by applying maps to the
particle coordinates $z$ and $\delta$ in discrete steps. The simulation
parameters specifying longitudinal transport are the longitudinal beta
function $\beta_{z}$ and synchrotron tune $\nu_{s}$. Note that these
parameters do not make reference to path length travelled by a particle.
However, path length enters into the impedance calculation because wake forces
are proportional to path length. In addition, analytic calculations of the
effect of wake forces depend on the evolution of the longitudinal particle
position which in turn depend explicitly on the slip factor. For our
comparisons with analytic results to be meaningful, we need to use a slip
factor that is consistent with the longitudinal maps and the path lengths that
enter the wake force calculations. The relationship between the slip factor
$\eta$ and the simulation parameters is $\beta_{z}=\eta
L_{\bigcirc}/2\pi\nu_{s}$, where $L_{\bigcirc}$ is the length of the
accelerator and $\beta_{z}=\sigma_{z}/\sigma_{\delta}$ is the longitudinal
beta function Chao2 ; Syphers , which may be derived by identifying corresponding terms in the solution to the differential equations of longitudinal motion and a one-turn linear map.
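The following minimal Python sketch collects these relations; the head-tail phase is written exactly as defined above, and any numerical inputs would be placeholders rather than the actual Tevatron settings.

```python
# Minimal sketch of the relations above:
#   beta_z = sigma_z / sigma_delta = eta * L_ring / (2*pi*nu_s)
#   chi    = 2*pi * C * nu_beta * z_hat / (c * eta)
import numpy as np

c = 299792458.0                                        # speed of light [m/s]

def slip_factor(sigma_z, sigma_delta, L_ring, nu_s):
    """Slip factor consistent with the longitudinal map parameters."""
    beta_z = sigma_z / sigma_delta                     # longitudinal beta function
    return 2.0 * np.pi * nu_s * beta_z / L_ring

def head_tail_phase(C, nu_beta, z_hat, eta):
    """Head-tail phase chi as defined in the text."""
    return 2.0 * np.pi * C * nu_beta * z_hat / (c * eta)
```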
When the growth rate is normalized by
$Nr_{0}W_{0}/2\pi\beta\gamma\nu_{\beta}$, which includes the beam intensity
and geometric factors, we expect a universal dependence of normalized growth
rate versus head-tail phase that begins linearly with head-tail phase Chao3 and peaks around $-1$ (the simulated machine is above transition, so $\eta$ is positive; the head-tail instability develops when chromaticity is negative, thus the head-tail phase is negative).
Fig. 6 shows the simulated growth rate at three intensities with a range of chromaticities from $-0.001$ to $-0.5$, giving head-tail phases in the $0$ to $-1$ range. The normalized curves are nearly identical and peak close to a head-tail phase of magnitude unity. The deviation from a universal curve is again due to differences between the idealized model and the detailed simulation.
## V Bunch-by-bunch emittance growth at the Tevatron
Figure 7: Schematic of the position of proton and antiproton bunches in the
Tevatron with 36 proton and 36 antiproton bunches. The diagram shows the
positions at a time when the lead bunch of the trains are at the head-on
collision location. Head-on collisions occur at location B0 and D0. The green
shading indicates the part of the ring where beam-beam collisions may occur in
the simulations with six-on-six bunches.
Understanding the effect of unwanted long-range collisions among multiple beam
bunches in the design and operation of hadron colliders has received attention
from other authors pieloni ; lifetrac , which underscores the importance of
this kind of simulation. A schematic of the fill pattern of proton and
antiproton bunches in the Tevatron is shown in Fig. 7. There are three trains
of twelve bunches for each species. A train occupies approximately
$81.5^{\circ}$ separated by a gap of about $38.5^{\circ}$. The bunch train and
gap are replicated three times to fill the ring. Bunches collide head-on at
the B0 and D0 interaction points but undergo long range (electromagnetic)
beam-beam interactions at 136 other locations around the ring (with three-fold symmetry of the bunch trains, train-on-train collisions occur at six locations around the ring; the collision of two trains of 12 bunches each results in bunch-bunch collisions at 23 locations, which when multiplied by six gives 138 collision points; it is a straightforward computer exercise to enumerate these locations; two of these locations are distinguished as head-on while the remainder are parasitic) shiltsev_beambeam .
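As an aside, that enumeration is easily reproduced. The Python sketch below uses an assumed harmonic number, 21-bucket intra-train spacing, and equally spaced train start positions (chosen only to respect the three-fold symmetry, not taken from the actual fill); under these assumptions it returns 138 distinct collision azimuths, consistent with the count above.

```python
# Minimal sketch (hypothetical fill parameters): two counter-rotating bunches at
# buckets p and a meet twice per turn, at azimuths (p+a)/2 and (p+a)/2 + H/2
# (mod H).  Working in half-bucket units avoids fractions.
H = 1113                      # assumed harmonic number (RF buckets around the ring)

def train(start, n_bunches=12, spacing=21):
    return [(start + k * spacing) % H for k in range(n_bunches)]

protons     = [b for s in (0, 371, 742) for b in train(s)]   # 3 trains of 12 bunches
antiprotons = [b for s in (0, 371, 742) for b in train(s)]

meeting_points = set()
for p in protons:
    for a in antiprotons:
        s = (p + a) % (2 * H)             # first meeting azimuth, in half-buckets
        meeting_points.add(s)
        meeting_points.add((s + H) % (2 * H))   # second meeting, half a ring away
print(len(meeting_points))                # -> 138 for these assumed parameters
```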
Running the simulation with all 136 long-range IPs turns out to be very slow
so we only calculated beam-beam forces at the two main IPs and the long-
range IPs immediately upstream and downstream of them. The transverse beta
functions at the long-range collision locations are much larger than the bunch
length, so the beam-beam calculation at those locations can be performed using
only the 2D solver.
One interesting consequence of the fill pattern and the helical trajectory is
that any one of the 12 bunches in a train experiences collisions with the 36
bunches in the other beam at different locations around the ring, and in
different transverse positions. This results in a different tune and emittance
growth for each bunch of a train, but with the three-fold symmetry for the
three trains. In the simulation, emittance growth arises from the effects of
impedance acting on bunches that have been perturbed by beam-beam forces. The
phenomenon of bunch dependent emittance growth is observed
experimentallyshiltsev_beambeam .
Figure 8: The simulated and measured emittance of each Tevatron proton bunch
after running with 36 proton and 36 antiproton bunches. Curves (a) and (b)
which show the emittance after 50000 simulated turns are read with the left
vertical axis. Curve (a) results from a simulation with the nominal beam
spacing at the long-range IPs. Curve (b) results from a simulation with the
hypothetical condition where the beam separation at the long-range IPs is 100
times normal, suppressing the effect of those long-range IPs. Curve (c) is the
measured emittance of bunches after 15 minutes of a particular store (#5792)
of bunches in the Tevatron, and is read with the right vertical axis.
The beam-beam simulation with 36-on-36 bunches shows similar effects. We ran a
simulation of 36 proton on 36 antiproton bunches for 50000 turns with the
nominal helical orbit. The proton bunches had $8.8\times 10^{11}$ particles
(roughly four times the usual to enhance the effect) and the proton emittance
was the typical $20\pi\,\textrm{mm}\,\textrm{mrad}$. The antiproton bunch
intensity and emittance were both half the corresponding proton bunch
parameter. The initial emittance for each proton bunch was the same so changes
during the simulation reflect the beam-beam effect.
Curve (a) in Figure 8 shows the emittance for each of the 36 proton bunches in
a 36-on-36 simulation after 50000 turns of simulation. The three-fold symmetry
is evident. The end bunches of the train (bunch 1, 13, 25) are clearly
different from the interior bunches. For comparison, curve (c) shows the
measured emittance taken during accelerator operations. The observed bunch
emittance variation is similar to the simulation results. Another beam-beam
simulation, with the beam separation at the long-range IPs expanded to 100 times its nominal value, resulted in curve (b) of Figure 8, showing a much
reduced bunch-to-bunch variation. We conclude that the beam-beam effect at the
long-range IPs is the origin of the bunch variation observed in the running
machine and that our simulation of the beam helix is correct.
## VI Tevatron applications
### VI.1 Single bunch features
We looked at the tune spectrum with increasing intensity for equal intensity
$p$ and $\bar{p}$ beams containing one bunch each. As the intensity increases,
the beam-beam parameter $\xi$ increases. Fig. 9 shows the spectrum of the sum
and difference of the two beam centroids for $\xi=0.01,0.02,0.04$,
corresponding to beam bunches containing $2.2\times 10^{11}$, $4.4\times
10^{11}$ and $8.8\times 10^{11}$ protons. The abscissa is shifted so the base
tune is at 0 and normalized in units of the beam-beam parameter at a beam
intensity of $2.2\times 10^{11}$. The coherent $\sigma$ and $\pi$ mode peaks
are expected to be present in the spectra of the sum and difference signals of
the two beam centroids. The coherent $\sigma$ modes are evident at 0, while
the coherent $\pi$ modes should slightly greater than 1, 2, and 4
respectively. Increasing intensity also causes larger induced wake fields
which broaden the mode peaks, especially the $\pi$ mode, as shown in Fig. 9.
Figure 9: Dipole mode spectra of the sum and difference offsets of two beam
centroids at three beam intensities corresponding to beam-beam parameter
values for each beam of 0.01, 0.02 and 0.04. The vertical scale is in
arbitrary units.
The 4D emittances at higher intensities show significant growth over 20000
turns as shown in Fig. 10. The kurtosis excess of the two beams remains
slightly positive for the nominal intensity, but shows a slow increase at higher intensities, indicating that the beam core is becoming more concentrated, as shown in Fig. 11. Concentration of the bunch core while the emittance is growing indicates the development of filamentation and halo.
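For reference, the (reduced) kurtosis used as a diagnostic here can be computed as in this minimal Python sketch (our own helper, not the analysis code):

```python
# Minimal sketch: excess kurtosis of a coordinate distribution.  It is zero
# for a Gaussian and positive when the core is more concentrated.
import numpy as np

def kurtosis_excess(x):
    x = np.asarray(x, dtype=float)
    dx = x - x.mean()
    return np.mean(dx**4) / np.mean(dx**2)**2 - 3.0
```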
Figure 10: The evolution of 4D emittances for beam-beam parameters of 0.01,
0.02, and 0.04 which correspond to intensities of (a) $2.2\times 10^{11}$, (b)
$4.4\times 10^{11}$, (c) $8.8\times 10^{11}$, and (d) $1.1\times 10^{12}$
protons per bunch. Figure 11: The evolution of (reduced) kurtosis of the
particle distribution for intensities of (a) $2.2\times 10^{11}$, (b)
$4.4\times 10^{11}$, (c) $8.8\times 10^{11}$, and (d) $1.1\times 10^{12}$
protons per bunch.
### VI.2 Simulation of bunch length, synchrotron motion and beam-beam
interactions
Synchrotron motion in extended length bunches modifies the effects of the
beam-beam interaction by shifting and suppressing the coherent modes. The
plots in Fig. 12 show simulated spectra for sum and difference signals of the
beam centroid offsets for one-on-one bunch collisions in a ring with Tevatron-
like optics, with both short and long bunches, at three different synchrotron
tunes. The sum signal will contain the $\sigma$ mode while the difference
signal will contain the $\pi$ mode. In this Tevatron simulation, the beam
strength is set so that the beam-beam parameter is 0.01, the base tune in the
vertical plane is 0.576, and $\beta_{y}$ is approximately $30\,{\rm cm}$.
Subplots a and b of Fig. 12 show that with small synchrotron tune both the
$\sigma$ and $\pi$ mode peaks are evident with short and long bunches. The
$\sigma$ mode peak is at the proper place, with the $\pi$ mode peak shifted
upwards by the expected amount, but with longer bunches (subplots c and d) the
incoherent continuum is enhanced and the strength of the coherent peaks is
reduced. When the synchrotron tune is the same as or larger than the beam-beam
splitting (subplots e and f), short bunches still exhibit strong coherent
modes, but with long bunches the coherent modes are significantly diluted. In
the case of long bunches, the $\sigma$ mode has been shifted upwards to 0.580,
and the $\pi$ mode is not clearly distinguishable from the continuum. At
$\nu_{s}$ of $0.01$ and $0.02$, the synchrobetatron side bands are clearly
evident.
Figure 12: Simulated one-on-one bunch $y$ plane $\sigma$ and $\pi$ mode tune
spectra for short bunches (a, c, e) and long bunches (b, d, f), for three
different synchrotron tunes, with a Tevatron-like lattice.
### VI.3 Multi-bunch mode studies
When the Tevatron is running in its usual mode, each circulating beam contains
36 bunches. Every bunch in one beam interacts with every bunch in the opposite
beam, though only two interaction points are useful for high energy physics
running. The other 136 interaction points are unwanted and detrimental to beam
lifetime and luminosity. The beam orbit is deflected in a helical shape by
electrostatic separators to reduce the impact of these unwanted collisions, so
the beams are transversely separated from each other in all but the two high-
energy physics interaction points. Because of the helical orbit, the beam
separation is different at each parasitic collision location. For instance, a bunch near the front of the bunch train will undergo more long-range collisions close to the head-on interaction point than a bunch near the rear of the train. A particular bunch experiences collisions at specific interaction
points with other bunches each of which has its own history of collisions.
This causes bunch-to-bunch variation in disruption and emittance growth as
will be demonstrated below.
We will begin the validation and exploration of the multi-bunch implementation
starting with runs of two-on-two bunches and six-on-six bunches before moving
on to investigate the situation with the full Tevatron bunch fill of 36-on-36
bunches. Two-on-two bunches will demonstrate the bunches coupling amongst each
other, but will not be enough to demonstrate the end bunch versus interior
bunch behavior that characterizes the Tevatron. For that, we will look at six-
on-six bunch runs.
In these studies, we are only filling the ring with at most six bunches in a
beam. Referring to Fig. 7, we see that only the head-on location at B0 is
within the green shaded region where beam-beam collisions may occur with six
bunches in each beam. Because of the beam-beam collisions, each bunch is
weakly coupled to every other bunch which gives rise to multi-bunch collective
modes.
We began the investigation of these effects with a simulation of beams with
two bunches each. The bunches are separated by 21 RF buckets as they are
in normal Tevatron operations. Collisions occur at the head-on location and at
parasitic locations 10.5 RF buckets distant on either side of the head-on
location. To make any excited modes visible, we ran with $2.2\times 10^{11}$
particles, which gives a single bunch beam-beam parameter of $0.01$. There are
four bunches in this problem. We label bunches 1 and 3 in beam 1 (protons) and bunches 2 and 4 in beam 2 (antiprotons), with mean $y$ positions $y_{1},\ldots,y_{4}$. By diagonalizing the covariance matrix of the turn-by-
turn bunch centroid deviations, we determine four modes, shown in Fig. 13.
Fig. 13(a) shows the splitting of the $\sigma$ mode. The coefficients of the
two modes indicate that this mode is primarily composed of the sum of
corresponding beam bunches (1 with 2, 3 with 4) similar to the $\sigma$ mode
in the one-on-one bunch case. The other two modes in Fig. 13(b) have the character and location in tune space of the $\pi$ mode, as seen from their coefficients and their reduced strength compared to the $\sigma$ mode.
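A minimal Python sketch of this mode decomposition follows, with our own naming and an FFT-based tune estimate standing in for whatever spectral estimator was actually used.

```python
# Minimal sketch: diagonalize the covariance matrix of the turn-by-turn bunch
# centroids, then Fourier transform each mode's time series to locate its tune.
import numpy as np

def collective_modes(Y):
    """Y: (n_turns, n_bunches) array of bunch centroids <y> per turn."""
    Y0 = Y - Y.mean(axis=0)
    cov = np.cov(Y0, rowvar=False)                  # (n_bunches, n_bunches)
    vals, vecs = np.linalg.eigh(cov)
    modes = []
    for k in np.argsort(vals)[::-1]:                # strongest modes first
        proj = Y0 @ vecs[:, k]                      # time series of this mode
        spec = np.abs(np.fft.rfft(proj))
        tune = np.fft.rfftfreq(len(proj))[np.argmax(spec[1:]) + 1]   # skip DC bin
        modes.append((vecs[:, k], tune))
    return modes
```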
Figure 13: Mode tune spectrum for a two on two bunch run at $2.2\times
10^{11}$ particles/bunch ($\xi=0.01$). Figure (a) shows the two modes that are
most like $\sigma$ modes. $\sigma$ mode 1 is
$0.53y_{1}+0.53y_{2}+0.59y_{3}-0.31y_{4}$, $\sigma$ mode 2 is
$0.39y_{1}+0.49y_{2}-0.46y_{3}-0.63y_{4}$. Figure (b) shows the two $\pi$-like
modes. $\pi$ mode 1 is $0.74y_{1}-0.66y_{2}-0.08y_{3}$, $\pi$ mode 2 is
$0.12y_{1}+0.20y_{2}-0.66y_{3}+0.31y_{4}$. The absolute scale of the Fourier
power is arbitrary, but the relative scales of plots (a) and (b) are the same.
With six on six bunches, features emerge that are clearly bunch position
specific. Fig. 14(a) shows the turn-by-turn evolution of 4D emittance and (b)
$y$ kurtosis for each of the six proton bunches. It is striking that bunch 1,
the first bunch in the sequence, has a lower emittance growth than all the
other bunches. Emittance growth increases with bunch number from bunch 2 to bunch 5, but bunch 6 has a lower emittance growth than even bunch 4.
The kurtosis of bunch 1 changes much less than that of any of the other
bunches, but bunches 2–5 have a very similar evolution, while bunch 6 is
markedly closer to bunch 1. One difference between the outside bunches (1 and 6) and the inside bunches (2–5) is that the outside bunches have only one beam-beam interaction at a parasitic IP closest to the head-on collision, while the inside bunches have one collision before the head-on IP and one after it. The
two parasitic collision points closest to the head-on collision point have the
smallest separation of any of the parasitics, so interaction there would be
expected to disrupt the beam more than interactions at other parasitic
locations.
|
---|---
Figure 14: A six-on-six bunch Tevatron run with $8.8\times 10^{11}$
particles/bunch: (a) The turn-by-turn evolution of 4D emittance of each of the
six bunches. (b) The turn-by-turn evolution of $y$ kurtosis of the six
bunches.
To test this hypothesis, we did two additional runs. In the first, the beam
separation at the parasitic IP immediately downstream of the head-on IP was
artificially increased in the simulation so as to have essentially no effect.
The effect of this is that the first proton bunch will not have any beam-beam
collisions at an IP close to the head-on IP, while all the other bunches will
have one collision at a near-head-on IP. The corresponding plots of emittance
and kurtosis are shown in Fig. 15. The kurtosis data show that bunches 2–5, which all suffer one beam-beam collision at a close parasitic IP, evolve together, while bunch 1, which does not have a close IP collision, is separated from the others.
|
---|---
Figure 15: In a six-on-six bunch Tevatron run with $8.8\times 10^{11}$
particles/bunch, with the beam spacing at the first parasitic IP downstream of
the head-on location artificially increased: (a) The 4D emittance of each of
the six bunches as a function of turn. (b) the $y$ kurtosis of the six bunches
as a function of turn.
Emittance and kurtosis growth in simulations where the beam separation at the
closest upstream and downstream parasitic IPs was increased is shown in Fig.
16. In this configuration no bunch suffers a strong beam-beam collision at a
parasitic IP close to the head-on location so the kurtosis of all the bunches
evolves similarly.
|
---|---
Figure 16: In a six-on-six bunch Tevatron run with $8.8\times 10^{11}$
particles/bunch, with both the nearest upstream and downstream parasitic IPs
artificially widened: (a) The 4D emittance of each of the six bunches as a
function of turn. (b) the $y$ kurtosis of the six bunches as a function of
turn.
## VII Lower chromaticity threshold
During the Tevatron operation in 2009 the limit for increasing the initial
luminosity was determined by particle losses in the so-called squeeze phase
AVPAC09 . At this stage the beams are separated in the main interaction points
(not colliding head-on), and the machine optics is gradually changed to
decrease the beta-function at these locations from 1.5 m to 0.28 m.
With proton bunch intensities currently approaching $3.2\times 10^{11}$
particles, the chromaticity of the Tevatron has to be managed carefully to
avoid the development of a head-tail instability. It was determined
experimentally that after the head-on collisions are initiated, the Landau
damping introduced by beam-beam interaction is strong enough to maintain beam
stability at chromaticity of +2 units (in Tevatron operations, chromaticity is
$\Delta\nu/({\Delta p}/p)$.) At the earlier stages of the collider cycle, when
beam-beam effects are limited to long-range interactions, the chromaticity was kept as high as 15 units since the concern was that the Landau damping would be insufficient to suppress the instability. At the same time, high chromaticity causes particle losses which are often large enough to quench the superconducting magnets, and hence it is desirable to keep it at a reasonable
minimum.
Table 1: Beam parameters for the Tevatron simulation
Parameter | value
---|---
beam energy | $980\,\mbox{GeV}$
$p$ particles/bunch | $3.0\times 10^{11}$
$\bar{p}$ particles/bunch | $0.9\times 10^{11}$
$p$ tune $(\nu_{x},\nu_{y})$ | (20.585,20.587)
$p$ (normalized) emittance | $20\pi\,\textrm{mm}\,\textrm{mrad}$
$\bar{p}$ tune $(\nu_{x},\nu_{y})$ | (20.577,20.570)
$\bar{p}$ (normalized) emittance | $6\pi\,\textrm{mm}\,\textrm{mrad}$
synchrotron tune $\nu_{s}$ | 0.0007
slip factor | 0.002483
bunch length (rms) | $43\,{\rm cm}$
$\delta p/p$ momentum spread | $1.2\times 10^{-4}$
effective pipe radius | $3\,{\rm cm}$
Figure 17: The x dipole moment in a simulation with $C=-2$ and no beam-beam
effect showing the development of instability. Figure 18: The x dipole moment
of a representative bunch in a 36-on-36 simulation with $C=-2$ with beam-beam
effects and beams separated showing no obvious instability within the limits
of the simulation. Figure 19: The x RMS moment of a representative bunch in a
36-on-36 simulation with $C=-2$ with beam-beam effects and beams separated
showing no obvious instability within the limits of the simulation.
Our multi-physics simulation was used to determine the safe lower limit for
chromaticity. The simulations were performed with starting beam parameters
listed in Table 1. With chromaticity set to -2 units, and no beam-beam effect,
the beams are clearly unstable as seen in Fig. 17. With beams separated,
turning on the beam-beam effect prevents rapid oscillation growth during the
simulation, as shown in Fig. 18. Bursts of increased amplitude are sometimes indicative of the onset of instability, but none is obvious within the limited duration of this run. The RMS size of the beam also does not exhibit
any obvious unstable tendencies as shown in Fig. 19.
Based on these findings the chromaticity in the squeeze was lowered by a
factor of two, and presently is kept at 8-9 units. This resulted in a
significant decrease of the observed particle loss rates (see, e.g., Fig. 5 in
AVPAC09 ).
## VIII Summary
The key features of the developed simulation include fully three-dimensional
strong-strong multi-bunch beam-beam interactions with multiple interaction
points, transverse resistive wall impedance, and chromaticity. The beam-beam
interaction model has been shown to reproduce the location and evolution of
synchrobetatron modes characteristic of the 3D strong-strong beam-beam
interaction observed in experimental data from the VEPP-2M collider. The
impedance calculation with macroparticles excites both the strong and weak
head-tail instabilities with thresholds and growth rates that are consistent
with expectations from a simple two-particle model and Vlasov calculation.
Simulation of the interplay between the helical beam-orbit, long range beam-
beam interactions and the collision pattern qualitatively matches observed
patterns of emittance growth.
The new program is a valuable tool for evaluation of the interplay between the
beam-beam effects and transverse collective instabilities. Simulations have
been successfully used to support the change of chromaticity at the Tevatron,
demonstrating that even the reduced beam-beam effect from long-range
collisions may provide enough Landau damping to prevent the development of
head-tail instability. These results were used in Tevatron operations to
support a change of chromaticity during the transition to collider mode
optics, leading to a factor of two decrease in proton losses, and thus
improved reliability of collider operations.
###### Acknowledgements.
We thank J. Qiang and R. Ryne of LBNL for the use of and assistance with the
BeamBeam3d program. We are indebted to V. Lebedev and Yu. Alexahin for useful
discussions. This work was supported by the United States Department of Energy
under contract DE-AC02-07CH11359 and the ComPASS project funded through the
Scientific Discovery through Advanced Computing program in the DOE Office of
High Energy Physics. This research used resources of the National Energy
Research Scientific Computing Center, which is supported by the Office of
Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
This research used resources of the Argonne Leadership Computing Facility at
Argonne National Laboratory, which is supported by the Office of Science of
the U.S. Department of Energy under contract DE-AC02-06CH11357.
## References
* (1) Run II handbook, http://www-bd.fnal.gov/runII
* (2) A. Valishev et al., “Observations and Modeling of Beam-Beam Effects at the Tevatron Collider”, PAC2007, Albuquerque, NM, 2007
* (3) M. Xiao et al., “Tevatron Beam-Beam Simulation at Injection Energy”, PAC2003, Portland, OR, 2003
* (4) Y. Alexahin, “On the Landau Damping and Decoherence of Transverse Dipole Oscillations in Colliding Beams”, Part. Acc. 59, 43 (1996)
* (5) E.A. Perevedentsev and A.A. Valishev, “Simulation of Head-Tail Instability of Colliding Bunches”, Phys. Rev. ST Accel. Beams 4, 024403, (2001)
* (6) J. Qiang, M.A. Furman, R.D. Ryne, J.Comp.Phys., 198 (2004), pp. 278–294, J. Qiang, M.A. Furman, R.D. Ryne, Phys. Rev. ST Accel. Beams, 104402 (2002).
* (7) V. Lebedev, http://www-bdnew.fnal.gov/pbar/organizationalchart/lebedev/OptiM/optim.htm
* (8) A. Valishev et al., “Progress with Collision Optics of the Fermilab Tevatron Collider”, EPAC06, Edinburgh, Scotland, 2006
* (9) A. Chao, Physics of Collective Beam Instabilities in High Energy Accelerators., pp. 56–60, 178–187, 333–360, John Wiley and Sons, Inc., (1993)
* (10) A. Piwinski, IEEE Trans. Nucl. Sci. NS-26 (3), 4268 (1979).
* (11) K. Yokoya, Phys. Rev. ST-AB 3, 124401 (2000).
* (12) I.N. Nesterenko, E.A. Perevedentsev, A.A. Valishev, Phys.Rev.E, 65, 056502 (2002)
* (13) E.A. Perevedentsev and A.A. Valishev, Phys. Rev. ST Accel. Beams 4, 024403 (2001); in Proceedings of the 7th European Particle Accelerator Conference, Vienna, 2000 (unpublished), p. 1223; http://accelconf.web.cern.ch/AccelConf/e00/index.html
* (14) V.V. Danilov and E.A. Perevedentsev, Nucl. Instrum. Methods Phys. Res. A 391, 77 (1997).
* (15) A. Chao, Physics of Collective Beam Instabilities in High Energy Accelerators., Figure 4.8 on p. 183, John Wiley and Sons, Inc., (1993).
* (16) A. Chao, Physics of Collective Beam Instabilities in High Energy Accelerators., p. 9, John Wiley and Sons, Inc., (1993)
* (17) D.A. Edwards and M.J. Syphers, An Introduction to the Physics of High Energy Accelerators, pp. 30–46, John Wiley and Sons Inc., 1993.
* (18) A. Chao, Physics of Collective Beam Instabilities in High Energy Accelerators., p. 351, John Wiley and Sons, Inc., (1993).
* (19) F.W. Jones, W. Herr, T. Pieloni, Parallel Beam-Beam Simulation Incorporating Multiple Bunches and Multiple Interaction Regions, THPAN007 in Proceedings of PAC07, Albuquerque, NM (2007), T. Pieloni, W. Herr, Models to Study Multi Bunch Coupling Through Head-on and Long-range Beam-Beam Interactions, WEPCH095 in Proceedings of EPAC06, Edinburgh, Scotland, (2006), T. Pieloni and W. Herr, Coherent Beam-beam Modes in the CERN Large Hadron Collider (LHC) for Multiple Bunches, Different Collision Schemes and Machine Symmetries, TPAT078 in Proceedings of PAC05, Knoxville, TN, (2005).
* (20) V. Shiltsev, et al., “Beam-Beam Effects in the Tevatron,” PRSTAB, 8, 101001 (2005).
* (21) A. Valishev et al., “Recent Tevatron Operational Experience”, PAC2009, Vancouver, BC, 2009
|
arxiv-papers
| 2009-06-02T15:37:37 |
2024-09-04T02:49:03.115502
|
{
"license": "Public Domain",
"authors": "E.G. Stern, J.F. Amundson, P.G. Spentzouris, A.A. Valishev",
"submitter": "Alexander Valishev",
"url": "https://arxiv.org/abs/0906.0513"
}
|
0906.0668
|
11institutetext: Institute for Advanced Studies in Basic Sciences (IASBS), P.
O. Box 45195-1159, Zanjan, Iran
# Tully-Fisher relation, key to dark companion of baryonic matter
Yousef Sobouti Akram Hasani Zonoozi sobouti@iasbs.ac.ir a.hasani@iasbs.ac.ir
Hosein Haghi haghi@iasbs.ac.ir
(Received *****; Accepted *****)
Rotation curves of spiral galaxies _i_) fall off much less steeply than the
Keplerian curves do, and _ii_) have asymptotic speeds almost proportional to
the fourth root of the mass of the galaxy, the Tully-Fisher relation. These
features alone are sufficient for assigning a dark companion to the galaxy in
an unambiguous way. In regions outside a spherical system, we design a
spherically symmetric spacetime to accommodate the peculiarities just
mentioned. Gravitation emerges in excess of what the observable matter can
produce. We attribute the excess gravitation to a hypothetical, dark, perfect
fluid companion to the galaxy and resort to the Tully-Fisher relation to
deduce its density and pressure. The dark density turns out to be proportional
to the square root of the mass of the galaxy and to fall off as
$r^{-(2+\alpha)},\leavevmode\nobreak\ \alpha\ll 1$. The dark equation of state
is barotropic. For the interior of the configuration, we require the
continuity of the total force field at the boundary of the system. This
enables us to determine the size and the distribution of the interior dark
density and pressure in terms of the structure of the observable matter. The
formalism is nonlocal and nonlinear, and the density and pressure of the dark
matter at any spacetime point turn out to depend on certain integrals of the
baryonic matter over all or parts of the system in a nonlinear manner.
###### Key Words.:
galaxies: Tully-Fisher– Gravitation: modified gravity– dark matter
## 1 Introduction
Gravitation of the observable matter in galaxies and clusters of galaxies is
not sufficient for explaining their dynamics. Dark matter scenarios and/or
alternative theories of gravitation (see e. g., Milgrom 1983; Behar and
Carmelli 2000; Capozziello et al. 2002, 2003 and 2006; Carroll et al. 2004;
Norjiri et al. 2003 and 2004; Moffat 2005; Sobouti 2007) are called in to
resolve the dilemma. The fact remains, however, that the proponents of dark
matter have always looked for it where observable matter resides. No one has, so
far, reported a case in which there is no baryonic matter present, yet a
dynamical issue remains to be settled. In view of this negative observation, it has
been conjectured (Sobouti 2008 a, b; 2009) that, if there is a dark
companion to any baryonic matter, there must be rules to connect the
properties of the twin entities. On the other hand, the existence of such a
rule will entitle one to interpret the case as an alternative gravity, thus
reducing the difference between the two paradigms to the level of semantics.
This conclusion, however, is true as long as the assumed dark matter does not
interact with the baryonic one in any other way than through its gravitation.
Sobouti assumes a spherically symmetric system, attributes a dark perfect
fluid companion to it, and requires the rotation curve of the system to
display the same asymptotic behavior as those of the actual spirals. The
reason for assuming a dark fluid, instead of the conventionally assumed dark
pressureless dust, is to ensure the satisfaction of the Bianchi identities and
thereby the baryonic conservation laws (see Sect. 8 for further explanation).
In regions outside the baryonic system, he finds the density and pressure
of the dark fluid companion in terms of the mass of the host system. The
Tully-Fisher relation and the slow non-Keplerian decline of the rotation
curves play key roles in determining the relation between the matter and its
dark twin.
In this paper, we follow the same line of argument to find the structure of
the dark matter in the interior of the baryonic system. The continuity of the
total gravitational force at the boundary of the observable matter leads to
the dark matter distribution in the interior. The Tully-Fisher relation is a
nonlocal and nonlinear feature of the dynamics of galaxies: a) The presence of
the total or partial integrals of the baryonic matter in the structure of both
exterior and interior solutions reflects the nonlocality. b) That the excess
gravitation does not increase proportionally upon increasing the mass of the
host galaxy indicates the nonlinearity. To emphasize these two features, we
refer to the formalism developed here as the nonlocal and nonlinear (NN) one.
To check its validity, the formalism is applied to NGC 2903 and NGC 1560, two
examples of high and low surface brightness galaxies, respectively, and the
resulting rotation curves are compared with those obtained through other
approaches.
## 2 Model and formalism
The following is a brief background from Sobouti (2008a, b; 2009). The
physical system is a spherically symmetric baryonic matter of finite extent.
By conjecture there is a dark presence that pervades both the interior and
exterior of the system. The spacetime metric inside and outside of the system
is necessarily spherically symmetric and takes the form
$\displaystyle ds^{2}=-B(r)dt^{2}+A(r)dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta
d\varphi^{2}).$ (1)
Let both the baryonic matter and its dark companion be perfect fluids of
densities $\rho,\leavevmode\nobreak\ \rho_{d}$, of pressures
$p,\leavevmode\nobreak\ p_{d}$, respectively, and be at rest. From the field
equations of general relativity (GR), we find
$\displaystyle\frac{1}{r^{2}}\left[\frac{d}{dr}\left(\frac{r}{A}\right)-1\right]=-(\rho+\rho_{d}),$
(2)
$\displaystyle\frac{1}{rA}\left(\frac{B^{\prime}}{B}+\frac{A^{\prime}}{A}\right)=[(p+p_{d})+(\rho+\rho_{d})],$
(3)
where we have let $8\pi G=c^{2}=1$, and ‘′’$=d/dr$. In the nonrelativistic
regime, we neglect the pressures, eliminate the densities between the two
equations, and arrive at
$\displaystyle\frac{B^{\prime}}{B}=\frac{1}{r}(A-1).$ (4)
In the following two sections we solve Eqs. (2) - (4) inside and outside the
baryonic system.
## 3 Exterior solution
Hereafter, the parameters pertaining to the interior and exterior of the
system will be labeled by the superscripts $(i)$ and $(e)$, respectively. The
unknowns in Eqs. (2-4) are $A,\leavevmode\nobreak\ B,\leavevmode\nobreak\
\rho_{d},\leavevmode\nobreak\ p_{d},$ and the dark equation of state. We begin
with Eq. (4) and assume that in the baryonic vacuum, $\rho=p=0$, the factor
$(A^{(e)}-1)$ is differentiable and has the series expansion
$\displaystyle(A^{(e)}-1)=\left(\frac{r_{0}}{r}\right)^{\alpha}\left(s_{0}+\frac{s_{1}}{r}+\cdots\right),\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ r\geq R,$ (5)
where the indicial exponent $\alpha$ and $s_{0}$ are dimensionless, $s_{1}$
has the dimension of length, $r_{0}$ is an arbitrary length scale of the
system, and $R$ is the radius of the baryonic sphere. Substituting Eq. (5)
into Eq. (4) and integrating the resulting expression, gives
$\displaystyle
B^{(e)}=\exp\left[-\left(\frac{r_{0}}{r}\right)^{\alpha}\left(\frac{s_{0}}{\alpha}+\frac{s_{1}}{(1+\alpha)r}+\cdots\right)\right].$
(6)
We expand the exponential factor, keep its first two terms, and for the weak
field gravitational potential, $\phi=(B-1)/2$, find
$\displaystyle\phi^{(e)}=-\frac{1}{2}\left(\frac{r_{0}}{r}\right)^{\alpha}\left[\frac{s_{0}}{\alpha}+\frac{s_{1}}{(1+\alpha)r}+\cdots\right].$
(7)
The square of the circular speed of a test object orbiting the galaxy is
$\displaystyle
v^{2}=r\frac{d\phi^{(e)}}{dr}=\frac{1}{2}\left(\frac{r_{0}}{r}\right)^{\alpha}\left(s_{0}+\frac{s_{1}}{r}+\cdots\right).$
(8)
Equation (8) is the rotation curve of our hypothetical galaxy in its baryonic
vacuum. It has an asymptotically constant logarithmic slope,
$\Delta=d\ln v^{2}/d\ln r\rightarrow-\alpha\leavevmode\nobreak\
\leavevmode\nobreak\ \textrm{as}\leavevmode\nobreak\ r\rightarrow\infty.$
### 3.1 Determination of $\alpha,\leavevmode\nobreak\
s_{0},\leavevmode\nobreak\ s_{1},\leavevmode\nobreak\
\cdots\leavevmode\nobreak\ $
Rotation curves of actual spiral galaxies have two distinct non-classical
features:
_i_) Their asymptotic slopes are much flatter than that of the Keplerian
curves, $-1$, (Sanders (1996); Bosma (1981); Begmann (1989); Persic and
Salucci, (1995); Begmann et al. (1991); Sanders and Verheijen (1998); Sanders
and McGhaugh (2002)). This implies $\alpha\ll 1$. From Persic et al. (1996),
who study 1100 galaxies with the aim of arriving at a universal rotation
curve, we estimate
$\displaystyle\alpha<0.01.$ (9)
Moreover, $\alpha$ does not seem to be a universal constant. The rotation
curves of more massive galaxies appear to fall off somewhat more steeply than
those of the less massive ones (Persic et al. (1996)). Hereafter, for
simplicity but mainly for pedagogical reasons, we work in the limit of
$\alpha\rightarrow 0$.
_ii_) Their asymptotic speeds follow the Tully-Fisher relation. They are
almost proportional to the fourth root of the mass of the host galaxy (Tully
and Fisher (1977); Begmann (1989); McGaugh et al. (2000); McGhaugh (2005)). In
Eq. (8), letting $\alpha\rightarrow 0$, the dominant term at large distances
is $v^{2}=s_{0}/2$. We identify this $v$ with the Tully-Fisher asymptote and
conclude that
$\displaystyle
s_{0}=\lambda\left({M}/{M_{\odot}}\right)^{1/2},\leavevmode\nobreak\
\leavevmode\nobreak\ \lambda=2.8\times 10^{-12},$ (10)
where M is the galactic mass, and $\lambda$ can be obtained either from a
direct examination of the observed asymptotic speeds (Sobouti (2007)) or from
a comparison of the first term of Eq. (8) with the low acceleration limit of
MOND (Milgrom (1983)):
$v^{2}/r\rightarrow(a_{0}g_{N})^{1/2},\leavevmode\nobreak\ a_{0}=1.2\times
10^{-10}\textrm{m sec}^{-2}$ (Begmann (1989)).
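As a quick consistency check (ours, not part of the original text), matching the asymptotic speed $v^{2}=s_{0}c^{2}/2$ with the deep-MOND value $v^{2}=\sqrt{a_{0}GM}$ at $M=M_{\odot}$ gives $\lambda=2\sqrt{a_{0}GM_{\odot}}/c^{2}$, which reproduces the quoted number:

```python
# Sketch (not from the paper): reproduce lambda of Eq. (10) from the MOND match.
# Matching v^2 = (s0/2) c^2 with the deep-MOND asymptote v^2 = sqrt(a0 G M)
# and using s0 = lambda*(M/M_sun)^(1/2) gives lambda = 2*sqrt(a0 G M_sun)/c^2.
G, c = 6.674e-11, 2.998e8             # SI units
a0 = 1.2e-10                          # m s^-2, Milgrom's acceleration scale
M_sun = 1.989e30                      # kg

lam = 2.0 * (a0 * G * M_sun) ** 0.5 / c**2
print(f"lambda = {lam:.2e}")          # ~2.8e-12, in agreement with Eq. (10)
```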
Again letting $\alpha\rightarrow 0$, the second term in Eq. (8) is the classic
Newtonian or GR term. Therefore, $s_{1}$ should be identified with the
Schwarzschild radius of the host galaxy:
$\displaystyle s_{1}=2GM/c^{2}.$ (11)
Here, for clarity, we have restored the constants $c^{2}$ and $G$ and written
$s_{1}$ in physical units. There is no compelling observational evidence to
indicate the need for other terms in Eqs. (5) - (8). Therefore, at least at
the present state of the extent and accuracy of the observational data, we
truncate the series at the $s_{1}$ term.
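For orientation, the sketch below (not the authors' code) evaluates the truncated exterior curve, Eq. (12), in physical units for a hypothetical galaxy mass, showing the crossover from the inner Newtonian $GM/r$ term to the flat Tully-Fisher asymptote $v_{\infty}=c\sqrt{\lambda/2}\,(M/M_{\odot})^{1/4}$.

```python
# A sketch (not the authors' code) of the exterior rotation curve, Eqs. (8)-(12),
# in physical units and in the alpha -> 0 limit, for a hypothetical galaxy mass.
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
lam = 2.8e-12                       # Eq. (10)
kpc = 3.086e19                      # m

def v_circ(r, M):
    """Circular speed (m/s) at radius r (m) outside a baryonic mass M (kg)."""
    s0_term = 0.5 * lam * np.sqrt(M / M_sun) * c**2   # Tully-Fisher (flat) part
    gr_term = G * M / r                               # Newtonian / GR part
    return np.sqrt(s0_term + gr_term)

M = 1e11 * M_sun                    # hypothetical spiral-galaxy mass
for r_kpc in (5, 20, 50, 100):
    print(f"r = {r_kpc:3d} kpc : v = {v_circ(r_kpc * kpc, M)/1e3:6.1f} km/s")
# the curve flattens at v_inf = c*sqrt(lam/2)*(M/M_sun)**0.25 ~ 200 km/s
```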
## 4 Interior solution
The first and foremost condition to be satisfied is the continuity of the
total force exerted on a test object at the boundary, $R$, of the baryonic
system. Pressure forces are anticipated to be insignificant in the present
problem so are ignored. The gravitational forces remain. From Eqs. (8) - (11),
the exterior force is
$\displaystyle\frac{d\phi^{(e)}}{dr}=\frac{1}{2}\left[\lambda\left(\frac{M}{M_{\odot}}\right)^{1/2}\frac{1}{r}+\frac{2GM}{c^{2}}\frac{1}{r^{2}}\right],\leavevmode\nobreak\
r\geq R.$ (12)
By analogy, for the interior of the system we adopt
$\displaystyle\frac{d\phi^{(i)}}{dr}=\frac{1}{2}\left[\lambda\left(\frac{M(r)}{M_{\odot}}\right)^{1/2}\frac{1}{r}+\frac{2GM(r)}{c^{2}}\frac{1}{r^{2}}\right],\leavevmode\nobreak\
r\leq R,$ (13)
where $M(r)=4\pi\int_{0}^{r}\rho r^{2}dr$ is the variable baryonic mass inside
the radius $r$. The continuity of the exterior and interior forces at the
boundary is evident, QED. Once the baryonic $\rho(r)$ and $M(r)$ are known,
$\phi^{(i)}(r)$, $B^{i}(r)\approx 1+2\phi^{(i)}$ and $A^{(i)}$ can be
integrated. The expression for the latter is much simpler and is given below
for later reference. From Eq. (4) we find
$\displaystyle
A^{(i)}-1=2r\frac{d\phi^{(i)}}{dr}=\left[\lambda\left(\frac{M(r)}{M_{\odot}}\right)^{1/2}+\frac{2GM(r)}{c^{2}}\frac{1}{r}\right].$
(14)
This has the same form as Eq. (5), where $M$ is replaced by $M(r)$.
## 5 Structure of the dark matter
The densities are obtained from Eq. (2) or equivalently from Poisson’s
equation through Eqs. (12) - (13). For the exterior dark density we find
$\displaystyle\rho_{d}^{(e)}(r)=\lambda\left(\frac{M}{M_{\odot}}\right)^{1/2}\frac{1}{r^{2}},\leavevmode\nobreak\
\leavevmode\nobreak\ \leavevmode\nobreak\ r\geq R.$ (15)
Note the square-root dependence on the mass of the galaxy and the fall-off as
$r^{-2}$. For the interior, $A^{(i)}$ is given by Eq. (14), whose first term
gives the interior dark density and the second renders the baryonic density,
$\rho$. Thus,
$\displaystyle\rho_{d}^{(i)}(r)=\lambda\left[\frac{M(r)}{M_{\odot}}\right]^{1/2}\frac{1}{r^{2}}\left[1+2\pi\frac{\rho
r^{3}}{M(r)}\right],\leavevmode\nobreak\ \leavevmode\nobreak\ r\leq R.$ (16)
The dark matter inside the radius $r$ is
$\displaystyle
M_{d}(r)=4\pi\int_{0}^{r}\rho_{d}(r)r^{2}dr=\lambda\left[\frac{M(r)}{M_{\odot}}\right]^{1/2}r.$
(17)
Equation (17) holds for any $r$. For $r\geq R$, however, $M(r)$ attains its
maximum constant value, $M$, and $M_{d}(r>R)$ becomes proportional to $r$.
It is instructive to look at the behavior of Eq. (16) in the neighborhood of
the origin, where $\rho\rightarrow\rho_{c}$ and $M(r)\rightarrow
4\pi\rho_{c}r^{3}/3$. Equation (16) tends toward
$\displaystyle\rho_{d}^{(i)}(r\rightarrow
0)=\frac{5}{2}\lambda\left(\frac{4\pi}{3}\frac{\rho_{c}}{M\odot}\right)^{1/2}r^{-1/2}.$
(18)
Similarly,
$\displaystyle M_{d}^{(i)}(r\rightarrow
0)=\lambda\left(\frac{4\pi}{3}\frac{\rho_{c}}{M\odot}\right)^{1/2}r^{5/2}.$
(19)
While the density becomes singular as $r\rightarrow 0$, no cusp develops,
because the measure $r^{2}dr$ tends to zero as $r\rightarrow 0$.
Pressures of the matter and of its dark companion are obtained from their
hydrostatic equilibrium, a requirement of the Bianchi identities. The general
formula is
$\displaystyle\frac{p^{\prime}}{p+\rho}\approx\frac{p^{\prime}}{\rho}=-\frac{1}{2}\frac{B^{\prime}}{B}\approx-\frac{d\phi}{dr}.$
(20)
For the exterior pressure from Eqs. (20), (15), (12), we find
$\displaystyle
p_{d}^{(e)}(r)=\frac{1}{4}s_{0}\left(\frac{s_{0}}{r^{2}}+\frac{2}{3}\frac{s_{1}}{r^{3}}\right),\leavevmode\nobreak\
\leavevmode\nobreak\ r\geq R.$ (21)
The presence of an extra factor of $s_{0}$ in Eq. (21) makes the pressure an
order of magnitude less than the density and justifies the approximation made
in the derivation of Eq. (4) and thereafter. The equation of state, $p(\rho)$,
in the exterior region is obtained by eliminating $r$ between Eqs. (21) and
(15). It is barotropic. The internal pressure is obtained in a similar way; it
is, however, too involved an expression to give here.
A pedagogical note: Throughout the text, except in Eq.(11), we have chosen
$8\pi G=c^{2}=1$. To write the results in physical units, the rule is to
multiply, everywhere, the potentials, $\phi$, by $c^{2}$, the dark densities,
$\rho_{d}$, dark masses, $M_{d}$, by $c^{2}/8\pi G$, and the dark pressures,
$p_{d}$, by $c^{4}/8\pi G$.
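To make the interior formulas concrete, the following sketch (ours, with an illustrative mass and radius that are not taken from the paper) evaluates Eqs. (15)-(17) for a hypothetical uniform-density baryonic sphere and converts the results to physical units with the rule just stated.

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30      # SI units
kpc = 3.086e19                                   # m
lam = 2.8e-12                                    # Eq. (10)
to_SI = c**2 / (8 * np.pi * G)                   # conversion rule stated above

M, R = 1e11 * M_sun, 15 * kpc                    # hypothetical mass and radius

def M_baryon(r):                                 # enclosed mass, uniform sphere
    return M * np.minimum(r / R, 1.0) ** 3

def rho_baryon(r):
    return np.where(r < R, 3 * M / (4 * np.pi * R**3), 0.0)

def M_dark(r):                                   # Eq. (17) in physical units
    return lam * np.sqrt(M_baryon(r) / M_sun) * r * to_SI

def rho_dark(r):                                 # Eqs. (15)-(16) in physical units
    bracket = np.where(r < R,
                       1.0 + 2 * np.pi * rho_baryon(r) * r**3 / M_baryon(r),
                       1.0)
    return lam * np.sqrt(M_baryon(r) / M_sun) / r**2 * bracket * to_SI

for r_kpc in (1, 5, 15, 50, 150):
    r = r_kpc * kpc
    print(f"r = {r_kpc:3d} kpc:  M_d = {M_dark(r)/M_sun:9.2e} M_sun,"
          f"  rho_d = {rho_dark(r):9.2e} kg/m^3")
# M_d grows as r^(5/2) near the center (Eq. 19) and linearly in r outside R.
```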
## 6 Application to actual spirals
Spiral galaxies are flattened objects. Their approximation as spherical
systems introduces an error on the order of $(R_{gyr}/r)^{2}$, where
$R_{gyr}(r)$ is the gyration radius of the mass enclosed within a radius $r$.
In a flat system that thins out as an exponential or as a Kuzmin disk, say,
this ratio would be a few parts per thousand, which is small enough for our purpose.
This is also the practice of all the authors quoted so far in this paper. To
illustrate the practical applicability of the formalism developed here, we
construct the rotation curves of two standard high- and low- surface
brightness galaxies and compare the results with those obtained through MOND’s
formalism.
NGC 2903 is a textbook example of a high surface brightness spiral. It has a
large stellar component and small HI content. The gas is confined to the
galactic plane and follows circular orbits. It is well observed out to about
40 kpc (Begmann (1989)). In contrast, NGC 1560 is a low surface brightness
spiral with a dominant gas component. Its observed rotation curve extends out
to about 8 kpc and does not seem to have reached its asymptotic regime.
In Fig. 1 we construct the rotation curves of our NN formalism from Eq. (13),
in which $M(r)$ is the total, stellar plus HI, mass interior to $r$. The free
adjustable parameter in matching the theoretical curves to the data points is the
‘stellar’ mass-to-light ratio, $\Upsilon$, assumed to be constant throughout
the galaxy. For comparison we have also included the rotation curves of MOND.
That the NN curves trace the data points more closely than the MOND ones can
be seen pictorially. The $\chi^{2}$ test and $\Upsilon$’s of Table 1, however,
illustrate this in a quantitative way. In both galaxies our $\chi_{NN}^{2}$ is
noticeably small. Significant, however, is the low stellar mass-to-light ratio
of the young and gas-dominated NGC 1560. Our $\Upsilon=0.3$ is far closer to
the 0.4 estimate of McGaugh (2002) than to the 1.1 of MOND.
Figure 1: Points with error bars are observed data. Dotted and dashed lines are the contributions of the gaseous and stellar components to the rotation curves, respectively. The dash-dotted line is the rotation curve constructed through MOND's formalism. The solid line is our rotation curve calculated from Eq. (13). The free parameter in matching the theoretical curves to the data points is the stellar mass-to-light ratio.

Table 1: Minimum $\chi^{2}$ and fitted stellar mass-to-light ratio, $\Upsilon$, of MOND and of our NN formalism.

Galaxy | $\chi^{2}_{NN}$ | $\Upsilon_{NN}$ | $\chi^{2}_{MOND}$ | $\Upsilon_{MOND}$
---|---|---|---|---
NGC 2903 | 4.97 | 1.7 | 6.07 | 3.0
NGC 1560 | 1.52 | 0.3 | 3.35 | 1.1
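A sketch of the fitting procedure described above (an assumed reconstruction, not the authors' code): the enclosed baryonic mass is built from hypothetical enclosed-luminosity and gas-mass profiles, the model curve of Eq. (13) is evaluated, and $\chi^{2}$ is minimized over the single free parameter $\Upsilon$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
lam = 2.8e-12

def v_model(r, L_enc, Mgas_enc, upsilon):
    """Eq. (13) in physical units; L_enc in L_sun, Mgas_enc in M_sun, r in m."""
    M_r = (upsilon * L_enc + Mgas_enc) * M_sun        # enclosed baryonic mass (kg)
    return np.sqrt(0.5 * lam * np.sqrt(M_r / M_sun) * c**2 + G * M_r / r)

def chi2(upsilon, r, v_obs, dv, L_enc, Mgas_enc):
    return np.sum(((v_obs - v_model(r, L_enc, Mgas_enc, upsilon)) / dv) ** 2)

def fit_upsilon(r, v_obs, dv, L_enc, Mgas_enc):
    """Minimize chi^2 over the stellar mass-to-light ratio (hypothetical I/O)."""
    res = minimize_scalar(chi2, bounds=(0.05, 10.0), method='bounded',
                          args=(r, v_obs, dv, L_enc, Mgas_enc))
    return res.x, res.fun
# r, v_obs, dv and the enclosed luminosity/gas profiles would be taken from
# the photometric and HI data for NGC 2903 or NGC 1560.
```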
Our next project is to study pressure-supported systems, globular clusters and
dwarf spheroidal galaxies (dSph). Globular clusters are commonly believed to
be almost Newtonian systems, while dSph’s show significant deviations from
Newtonian regimes. The low baryonic mass and extremely high dynamical-mass-to-
light ratio of dSph’s are inconsistent with population synthesis models
(Hilker (2006); Jordi et al. (2009); Angus (2008)). Our approach is to find a
counterpart of the classical virial theorem for our proposed gravity and to
solve a modified Jeans equation. The aim is to verify whether the velocity
dispersions obtained via Jeans equation fit the observed data. We also hope to
be able to come up with a notion equivalent to the fundamental plane for
galaxies where one arranges the galaxies on a two-parameter-plane in a three-
dimensional space of luminosity, velocity dispersion, and some other global
characteristics of the galaxies.
## 7 Nonlocality and nonlinearity of the formalism
The masses $M$ and $M(r)$ are integrals over all or parts of the system. Their
presence, in the structure of the spacetime metric, in the rotation curve, in
the expressions for the dark densities and pressures, etc., reflects the
nonlocal nature of the theory. That these integrals enter the formalism not in
a linear way indicates the nonlinearity of it. Both features are rooted in the
Tully-Fisher relation, which requires the dynamical variables at one spacetime
point to depend on the integral properties of the whole or parts of the system
through the square root of these integrals. Any attempt to derive the
spacetime metric entertained in this paper through a variational principle
should take these two features into account.
In this respect, Hehl and Mashhoon’s generalization of GR (Hehl and Mashhoon
2009 a, b), constructed within the framework of the translational gauge
theory of gravity, is interesting. In the weak field approximation, the excess
gravitation coming from the nonlocality of their theory can be interpreted as
a dark companion to the baryonic matter. In the case of a point baryonic mass,
$M$, the dark density has the expected $r^{-2}$ distribution, but it does not
obey the Tully-Fisher relation. Instead of $M^{1/2}$, it is proportional to
$M$ itself.
## 8 Concluding remarks
The formalism developed here is a dark matter scenario or, equivalently, a
modified GR paradigm to understand the non-classical behavior of the rotation
curves of spiral galaxies. Following Sobouti (2008 a, b; 2009), we attribute
a hypothetical dark perfect fluid companion to our model galaxy, and find the
size and the distribution of the companion by comparing the rotation curve of
the model with those of the actual galaxies. However, as long as the dark
companion displays no physical characteristics other than its gravitation, one
has the option to interpret the scenario as an alternative theory of
gravitation. Here, for example, one may maintain that the gravitation of a
baryonic sphere is not what Newton or Schwarzschild profess, but rather what
one infers from the spacetime metric detailed above. In fact we wish to
emphasize that any modified gravity is expressible in terms of a dark matter
scenario. And vice versa, any dark matter paradigm, in which the matter and
its dark twin are related by certain rules, is explainable by a modified
gravity. The difference between the two alternatives is semantic.
Dynamics of galaxies is a nonrelativistic issue. Yet, its analysis in a GR
context answers questions that otherwise are left out. In particular, in a
nonrelativistic scenario, there is neither need nor logic to assign a pressure
field to a hypothetical matter about whose nature one knows nothing. In a
GR context, on the other hand, the dark matter has to have a pressure field
and has to be in hydrodynamic equilibrium as a requirement of the Bianchi
identities and thereby of the conservation laws of the baryonic matter, i.e.
the vanishing of the 4-divergence of both sides of the field equations. Let us
also note in passing that all those metric approaches that attempt to explain
the galaxy problems with the aid of a single scalar field are subject to the
same criticism, namely the violation of the Bianchi identities and of the
conservation laws.
Regions outside the baryonic matter are not dark matter vacua. Therefore,
the Ricci scalar does not vanish, and there are excess lensing and excess
periastron precession caused by the dark matter. These are analyzed in Sobouti
(2008a, b; 2009).
The formalism is good for spherical distributions of baryonic matter. An
axiomatic generalization to nonspherical configurations or to many-body
systems requires further deliberation and more accurate observational data to
guide the search for solutions. One might need other postulates not contemplated here.
The difficulty lies in the nonlinearity of the formalism. There is no
superposition principle. One may not add the fields of the dark companions of
two separate baryonic systems because $s_{0}$ of Eq. (10) is not linear in $M$
or $M(r)$.
###### Acknowledgements.
We thank S. S. McGaugh for providing us with observational data on the
rotation of galaxies and for his useful comments on their interpretation.
## References
* Angus (2008) Angus, G. W., 2008, MNRAS, 387, 1481
* Bosma (1981) Bosma, A., 1981, AJ , 86, 1825
* Begmann (1989) Begmann, K. G., 1989, A&A, 233, 47
* Begmann et al. (1991) Begmann, K. G., Broeils, A. H., and Sanders R. H., 1991, MNRAS, 249, 523
* Behar & Carmeli (2000) Behar, S., & Carmeli, M., Int.J.Theor.Phys, 39, 2000, 1397-1404
* Capozziello (2002) Capozziello, S. 2002, Int. J. Mod. Phys. D., 11, 483
* Capozziello et el. (2003) Capozziello, S., Cardone, V., Carloni, S., & Troisi, A. 2003, Int. J. Mod. Phys. D., 12, 1969
* Capozziello et al. (2006) Capozziello, S., Cardone, V. F., & Troisi, A. 2006, JCAP, 8, 1
* Carroll et al. (2004) Carroll, S., Duvvuri, V., Trodden, M., & Turner, M. 2004, Phys. Rev. D, 70, 2839
* Hehl and Mashhoon (2009 a, b) Hehl, F.W., and Mashhoon, B., 2009 a, Phys. Lett. B, 673, 279; 2009 b, Phys. Rev. D, 79, 064028
* Hilker (2006) Hilker M., 2006, A&A, 448, 171.
* Jordi et al. (2009) Jordi K., Grebel, E. K., Hilker, M., Baumgardt, H., Frank, M., Kroupa, P., Haghi, H., Cote, P., Djorgovski, S. G., 2009, AJ, 137, 4586.
* McGaugh et al. (2000) McGhaugh, S. S., Schombert, J.M., Bothun, G.D., and de Blok, W.J.G., 2000, ApJ L 533, L99
* McGhaugh (2005) McGhaugh, S. S., 2005, ApJ, 632, 859-871
* Milgrom (1983) Milgrom, M., 1983, ApJ, 270, 365, 371, 384
* Moffat (2005) Moffat, J. W. 2005 JCAP 2005, 003
* Nojiri & Odintsov (2003) Nojiri, S. & Odintsov, S. D. 2003, Phys. Rev. D, 68, 123512
* Nojiri & Odintsov (2004) Nojiri, S. & Odintsov, S. D. 2004, General Relativity and Gravitation, 36, 1765
* Persic et al. (1996) Persic, M., Salucci, P., and Stel, F., 1996, MNRAS, 281, 27
* Persic and Salucci, (1995) Persic, M., and Salucci, P., ApJ Suppl, 99, 501, 1995
* Sanders (1996) Sanders R. H., 1996, ApJ, 473, 117
* Sanders and Verheijen (1998) Sanders R. H., and Verheijen, M. A. W., 1998, ApJ, 503, 97 (arXiv:astro-ph/9802240).
* Sanders and McGhaugh (2002) Sanders, R. H., and McGhaugh, S. S., 2002, ARAA, 40, 263, (arXiv:astro-ph/0204521)
* Sobouti (2007) Sobouti, Y., 2007, A&A, 464, 921 (arXiv:astro-ph/0603302v4); and Saffari, R., and Sobouti, Y., 2007, A&A, 742, 833
* Sobouti 2008 a, b; (2009) Sobouti, Y., 2008a, arXiv:0810.2198[gr-qc], 2008b, arXiv:0812.4127v2 [gr-qc], 2009, arXiv:0903.5007 [gr-qc]
* Tully and Fisher (1977) Tully, R. B., and Fisher, J. R., 1977, A&A, 54, 661
|
arxiv-papers
| 2009-06-03T09:31:41 |
2024-09-04T02:49:03.128277
|
{
"license": "Public Domain",
"authors": "Y. Sobouti, A. Hasani Zonoozi, H. Haghi",
"submitter": "Hosein Haghi",
"url": "https://arxiv.org/abs/0906.0668"
}
|
0906.0717
|
# Compact polyhedral surfaces of an arbitrary genus and determinants of
Laplacians
Alexey Kokotov 111Department of Mathematics and Statistics, Concordia
University, 1455 de Maisonneuve Blvd. West, Montreal, Quebec, H3G 1M8 Canada,
E-mail: alexey@mathstat.concordia.ca
Abstract. Compact polyhedral surfaces (or, equivalently, compact Riemann
surfaces with conformal flat conical metrics) of an arbitrary genus are
considered. After giving a short self-contained survey of their basic spectral
properties, we study the zeta-regularized determinant of the Laplacian as a
functional on the moduli space of these surfaces. An explicit formula for this
determinant is obtained.
## 1 Introduction
There are several well-known ways to introduce a compact Riemann surface,
e.g., via algebraic equations or by means of some uniformization theorem,
where the surface is introduced as the quotient of the upper half-plane over
the action of a Fuchsian group. In this paper we consider a less popular
approach which is at the same time, perhaps, the most elementary: one can
simply consider the boundary of a connected (but, generally, not simply
connected) polyhedron in three dimensional Euclidean space. This is a
polyhedral surface which carries the structure of a complex manifold (the
corresponding system of holomorphic local parameters is obvious for all points
except the vertices; near a vertex one should introduce the local parameter
$\zeta=z^{2\pi/\alpha}$, where $\alpha$ is the sum of the angles adjacent to
the vertex). In this way the Riemann surface arises together with a conformal
metric; this metric is flat and has conical singularities at the vertices.
Instead of a polyhedron one can also start from some abstract simplicial
complex, thinking of a polyhedral surface as glued from plane triangles.
The present paper is devoted to the spectral theory of the Laplacian on such
surfaces. The main goal is to study the determinant of the Laplacian (acting
in the trivial line bundle over the surface) as a functional on the space of
Riemann surfaces with conformal flat conical metrics (polyhedral surfaces).
The similar question for smooth conformal metrics and arbitrary holomorphic
bundles was very popular in the eighties and early nineties being motivated by
string theory. The determinants of Laplacians in flat singular metrics are
much less studied: among the very few appropriate references we mention
[DP89], where the determinant of the Laplacian in a conical metric was defined
via some special regularization of the diverging Liouville integral and the
question about the relation of such a definition with the spectrum of the
Laplacian remained open, and two papers [K93], [AS94] dealing with flat
conical metrics on the Riemann sphere.
In [KK09] (see also [KK04]) the determinant of the Laplacian was studied as a
functional
${\cal H}_{g}(k_{1},\dots,k_{M})\ni({\cal X},\omega)\mapsto{\rm
det}\,\Delta^{|\omega|^{2}}$
on the space ${\cal H}_{g}(k_{1},\dots,k_{M})$ of equivalence classes of pairs
$({\cal X},\omega)$, where ${\cal X}$ is a compact Riemann surface of genus
$g$ and $\omega$ is a holomorphic one-form (an Abelian differential) with $M$
zeros of multiplicities $k_{1},\dots,k_{M}$. Here ${\rm
det}\,\Delta^{|\omega|^{2}}$ stands for the determinant of the Laplacian in
the flat metric $|\omega|^{2}$ having conical singularities at the zeros of
$\omega$. The flat conical metric $|\omega|^{2}$ considered in [KK09] is very
special: the divisor of the conical points of this metric is not arbitrary (it
should be the canonical one, i. e. coincide with the divisor of a holomorphic
one-form) and the conical angles at the conical points are integer multiples
of $2\pi$. Later in [KK07] this restrictive condition has been eliminated in
the case of polyhedral surfaces of genus one.
In the present paper we generalize the results of [KK09] and [KK07] to the
case of polyhedral surfaces of an arbitrary genus. Moreover, we give a short
and self-contained survey of some basic facts from the spectral theory of the
Laplacian on flat surfaces with conical points. In particular, we discuss the
theory of self-adjoint extensions of this Laplacian and study the asymptotics
of the corresponding heat kernel.
## 2 Flat conical metrics on surfaces
Following [T86] and [KK07], we discuss here flat conical metrics on compact
Riemann surfaces of an arbitrary genus.
### 2.1 Troyanov’s theorem
Let $\sum_{k=1}^{N}b_{k}P_{k}$ be a (generalized, i.e., the coefficients
$b_{k}$ are not necessarily integers) divisor on a compact Riemann surface
${\cal X}$ of genus $g$. Let also $\sum_{k=1}^{N}b_{k}=2g-2$. Then, according
to Troyanov’s theorem (see [T86]), there exists a (unique up to a rescaling)
conformal (i. e. giving rise to a complex structure which coincides with that
of ${\cal X}$) flat metric ${\bf m}$ on ${\cal X}$ which is smooth in ${\cal
X}\setminus\\{P_{1},\dots,P_{N}\\}$ and has simple singularities of order
$b_{k}$ at $P_{k}$. The latter means that in a vicinity of $P_{k}$ the metric
${\bf m}$ can be represented in the form
${\bf m}=e^{u(z,\bar{z})}|z|^{2b_{k}}|dz|^{2},$ (1)
where $z$ is a conformal coordinate and $u$ is a smooth real-valued function.
In particular, if $b_{k}>-1$ the point $P_{k}$ is conical with conical
angle $\beta_{k}=2\pi(b_{k}+1)$. Here we construct the metric ${\bf m}$
explicitly, giving an effective proof of Troyanov’s theorem (cf. [KK07]).
Fix a canonical basis of cycles on ${\cal X}$ (we assume that $g\geq 1$, the
case $g=0$ is trivial) and let $E(P,Q)$ be the prime-form (see [F73]). Then
for any divisor ${\cal D}=r_{1}Q_{1}+\dots r_{M}Q_{M}-s_{1}R_{1}-\dots-
s_{N}R_{N}$ of degree zero on ${\cal X}$ (here the coefficients $r_{k},s_{k}$
are positive integers) the meromorphic differential
$\omega_{{\cal
D}}=d_{z}\log\frac{\prod_{k=1}^{M}E^{r_{k}}(z,Q_{k})}{\prod_{k=1}^{N}E^{s_{k}}(z,R_{k})}$
is holomorphic outside ${\cal D}$ and has first order poles at the points of
${\cal D}$ with residues $r_{k}$ at $Q_{k}$ and $-s_{k}$ at $R_{k}$. Since the
prime-form is single-valued along the ${\bf a}$-cycles, all ${\bf a}$-periods
of the differential $\omega_{\cal D}$ vanish.
Let $\\{v_{\alpha}\\}_{\alpha=1}^{g}$ be the basis of holomorphic normalized
differentials and ${\mathbb{B}}$ the corresponding matrix of ${\bf
b}$-periods. Then all ${\bf a}$\- and ${\bf b}$-periods of the meromorphic
differential
$\Omega_{\cal D}=\omega_{\cal D}-2\pi
i\sum_{\alpha,\beta=1}^{g}((\Im{\mathbb{B}})^{-1})_{\alpha\beta}\Im\left(\int_{s_{1}R_{1}+\dots
s_{N}R_{N}}^{r_{1}Q_{1}+\dots r_{M}Q_{M}}v_{\beta}\right)v_{\alpha}$
are purely imaginary (see [F73], p. 4).
Obviously, the differentials $\omega_{\cal D}$ and $\Omega_{\cal D}$ have the
same structure of poles: their difference is a holomorphic $1$-form.
Choose a base-point $P_{0}$ on ${\cal X}$ and introduce the following quantity
${\cal F}_{\cal D}(P)=\exp\int_{P_{0}}^{P}\Omega_{\cal D}.$
Clearly, ${\cal F}_{\cal D}$ is a meromorphic section of some unitary flat
line bundle over ${\cal X}$, the divisor of this section coincides with ${\cal
D}$.
Now we are ready to construct the metric ${\bf m}$. Choose any holomorphic
differential $w$ on ${\cal X}$ with, say, only simple zeros
$S_{1},\dots,S_{2g-2}$. Then one can set ${\bf m}=|u|^{2}$, where
$u(P)=w(P){\cal F}_{(2g-2)S_{0}-S_{1}-\dots
S_{2g-2}}(P)\prod_{k=1}^{N}\left[{\cal F}_{P_{k}-S_{0}}(P)\right]^{b_{k}}$ (2)
and $S_{0}$ is an arbitrary point.
Notice that in the case $g=1$ the second factor in (2) is absent and the
remaining part is nonsingular at the point $S_{0}$.
### 2.2 Distinguished local parameter
In a vicinity of a conical point the flat metric (1) takes the form
${\bf m}=|g(z)|^{2}|z|^{2b}|dz|^{2}$
with some holomorphic function $g$ such that $g(0)\neq 0$. It is easy to show
(see, e. g., [T86], Proposition 2) that there exists a holomorphic change of
variable $z=z(x)$ such that in the local parameter $x$
${\bf m}=|x|^{2b}|dx|^{2}\,.$
We shall call the parameter $x$ (unique up to a constant factor $c$, $|c|=1$)
distinguished. In case $b>-1$ the existence of the distinguished parameter
means that in a vicinity of a conical point the surface ${\cal X}$ is
isometric to the standard cone with conical angle $\beta=2\pi(b+1)$.
### 2.3 Euclidean polyhedral surfaces
In [T86] it is proved that any compact Riemann surface with flat conformal
conical metric admits a proper triangulation (i. e. each conical point is a
vertex of some triangle of the triangulation). This means that any compact
Riemann surface with a flat conical metric is a Euclidean polyhedral surface
(see [B07]), i.e., it can be glued from Euclidean triangles. On the other hand,
as explained in [B07], any compact Euclidean oriented polyhedral surface
gives rise to a Riemann surface with a flat conical metric. Therefore, from
now on we do not distinguish between compact Euclidean polyhedral surfaces and
Riemann surfaces with flat conical metrics.
## 3 Laplacians on polyhedral surfaces. Basic facts
Without claiming originality we give here a short self-contained survey of
some basic facts from the spectral theory of Laplacian on compact polyhedral
surfaces. We start by recalling the (slightly modified) Carslaw construction
(1909) of the heat kernel on a cone, then we describe the set of self-adjoint
extensions of a conical Laplacian (these results are complementary to
Kondratjev’s study ([K67]) of elliptic equations on conical manifolds and are
well-known, being in the folklore since the sixties of the last century; their
generalization to the case of Laplacians acting on $p$-forms can be found in
[M99]). Finally, we establish the precise heat asymptotics for the Friedrichs
extension of the Laplacian on a compact polyhedral surface. It should be noted
that more general results on the heat asymptotics for Laplacians acting on
$p$-forms on piecewise flat pseudomanifolds can be found in [C83].
### 3.1 The heat kernel on the infinite cone
We start from the standard heat kernel
$H_{2\pi}(x,y;t)=\frac{1}{4\pi t}\exp\\{-(x-y)\cdot(x-y)/4t\\}$ (3)
in the space ${\mathbb{R}}^{2}$ which we consider as the cone with conical
angle $2\pi$. Introducing the polar coordinates $(r,\theta)$ and $(\rho,\psi)$
in the $x$ and $y$-planes, one can rewrite (3) as the contour integral
$H_{2\pi}(x,y;t)=$
$\frac{1}{16\pi^{2}it}\exp\\{-(r^{2}+\rho^{2})/4t\\}\int_{C_{\theta,\psi}}\exp\\{r\rho\cos(\alpha-\theta)/2t\\}\cot\frac{\alpha-\psi}{2}\,d\alpha,$
(4)
where $C_{\theta,\psi}$ denotes the union of a small positively oriented
circle centered at $\alpha=\psi$ and the two vertical lines,
$l_{1}=(\theta-\pi-i\infty,\theta-\pi+i\infty)$ and
$l_{2}=(\theta+\pi+i\infty,\theta+\pi-i\infty)$, having mutually opposite
orientations.
To prove (4) one has to notice that
1) $\Re\cos(\alpha-\theta)<0$ in vicinities of the lines $l_{1}$ and $l_{2}$
and, therefore, the integrals over these lines converge.
2) The integrals over the lines cancel due to the $2\pi$-periodicity of the
integrand and the remaining integral over the circle coincides with (3) due to
the Cauchy Theorem.
Observe that one can deform the contour $C_{\theta,\psi}$ into the union,
$A_{\theta}$, of two contours lying in the open domains
$\\{\theta-\pi<\Re\alpha<\theta+\pi\,,\,\Im\alpha>0\\}$ and
$\\{\theta-\pi<\Re\alpha<\theta+\pi\,,\,\Im\alpha<0\\}$ respectively, the
first contour goes from $\theta+\pi+i\infty$ to $\theta-\pi+i\infty$, the
second one goes from $\theta-\pi-i\infty$ to $\theta+\pi-i\infty$. This leads
to the following representation for the heat kernel $H_{2\pi}$:
$H_{2\pi}(x,y;t)=$
$\frac{1}{16\pi^{2}it}\exp\\{-(r^{2}+\rho^{2})/4t\\}\int_{A_{\theta}}\exp\\{r\rho\cos(\alpha-\theta)/2t\\}\cot\frac{\alpha-\psi}{2}\,d\alpha.$
(5)
The latter representation admits a natural generalization to the case of the
cone $C_{\beta}$ with conical angle $\beta$, $0<\beta<+\infty$. Notice here
that in case $0<\beta\leq 2\pi$ the cone $C_{\beta}$ is isometric to the
surface $z_{3}=\sqrt{(\frac{4\pi^{2}}{\beta^{2}}-1)(z_{1}^{2}+z_{2}^{2})}$.
Namely, introducing the polar coordinates on $C_{\beta}$, we see that the
following expression represents the heat kernel on $C_{\beta}$:
$H_{\beta}(r,\theta,\rho,\psi;t)=$ $\frac{1}{8\pi\beta
it}\exp\\{-(r^{2}+\rho^{2})/4t\\}\int_{A_{\theta}}\exp\\{r\rho\cos(\alpha-\theta)/2t\\}\cot\frac{\pi(\alpha-\psi)}{\beta}\,d\alpha\,.$
(6)
Clearly, expression (6) is symmetric with respect to $(r,\theta)$ and
$(\rho,\psi)$ and is $\beta$-periodic with respect to the angle variables
$\theta,\psi$. Moreover, it satisfies the heat equation on $C_{\beta}$.
Therefore, to verify that $H_{\beta}$ is in fact the heat kernel on
$C_{\beta}$ it remains to show that
$H_{\beta}(\cdot,y,t)\longrightarrow\delta(\cdot-y)$ as $t\to 0+$. To this end
deform the contour $A_{\psi}$ into the union of the lines $l_{1}$ and $l_{2}$
and (possibly many) small circles centered at the poles of
$\cot\frac{\pi(\cdot-\psi)}{\beta}$ in the strip
$\theta-\pi<\Re\alpha<\theta+\pi$. The integrals over all the components of
this union except the circle centered at $\alpha=\psi$ vanish in the limit as
$t\to 0+$, whereas the integral over the latter circle coincides with
$H_{2\pi}$.
#### 3.1.1 The heat asymptotics near the vertex
###### Proposition 1
Let $R>0$ and $C_{\beta}(R)=\\{x\in C_{\beta}:{\rm dist}(x,{\cal O})<R\\}$.
Let also $dx$ denote the area element on $C_{\beta}$. Then for some
$\epsilon>0$
$\int_{C_{\beta}(R)}H_{\beta}(x,x;t)\,dx=\frac{1}{4\pi t}{\rm
Area}(C_{\beta}(R))+\frac{1}{12}\left(\frac{2\pi}{\beta}-\frac{\beta}{2\pi}\right)+O(e^{-\epsilon/t})$
(7)
as $t\to 0+$.
Proof (cf. [F94], p. 1433). Make in (6) the change of variable
$\gamma=\alpha-\psi$ and deform the contour $A_{\theta-\psi}$ into the contour
$\Gamma^{-}_{\theta-\psi}\cup\Gamma^{+}_{\theta-\psi}\cup\\{|\gamma|=\delta\\}$,
where the oriented curve $\Gamma^{-}_{\theta-\psi}$ goes from
$\theta-\psi-\pi-i\infty$ to $\theta-\psi-\pi+i\infty$ and intersects the real
axis at $\gamma=-\delta$, the oriented curve $\Gamma^{+}_{\theta-\psi}$ goes
from $\theta-\psi+\pi+i\infty$ to $\theta-\psi+\pi-i\infty$ and intersects the
real axis at $\gamma=\delta$, the circle $\\{|\gamma|=\delta\\}$ is positively
oriented and $\delta$ is a small positive number. Calculating the integral
over the circle $\\{|\gamma|=\delta\\}$ via the Cauchy Theorem, we get
$H_{\beta}(x,y;t)-H_{2\pi}(x,y;t)=$ $\frac{1}{8\pi\beta
it}\exp\\{-(r^{2}+\rho^{2})/4t\\}\int_{\Gamma^{-}_{\theta-\psi}\cup\Gamma^{+}_{\theta-\psi}}\exp\\{r\rho\cos(\gamma+\psi-\theta)/2t\\}\cot\frac{\pi\gamma}{\beta}\,d\gamma$
(8)
and
$\int_{C_{\beta}(R)}\left(H_{\beta}(x,x;t)-\frac{1}{4\pi t}\right)dx=$
$\frac{1}{8\pi
it}\int_{0}^{R}\,dr\,r\int_{\Gamma_{0}^{-}\cup\Gamma_{0}^{+}}\exp\\{-\frac{r^{2}\sin^{2}(\gamma/2)}{t}\\}\cot\frac{\pi\gamma}{\beta}\,d\gamma\,.$
(9)
The integration over $r$ can be done explicitly and the right hand side of (9)
reduces to
$\frac{1}{16\pi
i}\int_{\Gamma_{0}^{-}\cup\Gamma_{0}^{+}}\frac{\cot(\frac{\pi\gamma}{\beta})}{\sin^{2}(\gamma/2)}\,d\gamma+O(e^{-\epsilon/t}).$
(10)
(One can assume that $\Re\sin^{2}(\gamma/2)$ is positive and separated from
zero when $\gamma\in\Gamma_{0}^{-}\cup\Gamma_{0}^{+}$.) The contour of
integration in (10) can be changed for a negatively oriented circle centered
at $\gamma=0$. Since ${\rm
Res}(\frac{\cot(\frac{\pi\gamma}{\beta})}{\sin^{2}(\gamma/2)}\,,\,\gamma=0)=\frac{2}{3}(\frac{\beta}{2\pi}-\frac{2\pi}{\beta})$,
we arrive at (7).
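The residue used in the last step can be checked symbolically; a minimal sketch (not part of the proof):

```python
import sympy as sp

gamma = sp.symbols('gamma')
beta = sp.symbols('beta', positive=True)

# Res( cot(pi*gamma/beta) / sin^2(gamma/2), gamma = 0 )
expr = sp.cot(sp.pi * gamma / beta) / sp.sin(gamma / 2) ** 2
res = sp.residue(expr, gamma, 0)
target = sp.Rational(2, 3) * (beta / (2 * sp.pi) - 2 * sp.pi / beta)
print(sp.simplify(res - target))   # 0, confirming the residue used above
```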
###### Remark 1
The Laplacian $\Delta$ corresponding to the flat conical metric
$(d\rho)^{2}+r^{2}(d\theta)^{2},0\leq\theta\leq\beta$ on $C_{\beta}$ with
domain $C^{\infty}_{0}(C_{\beta}\setminus{\cal O})$ has infinitely many self-
adjoint extensions. Analyzing the asymptotics of (6) near the vertex ${\cal
O}$, one can show that for any $y\in C_{\beta},t>0$ the function
$H_{\beta}(\cdot,y;t)$ belongs to the domain of the Friedrichs extension
$\Delta_{F}$ of $\Delta$ and does not belong to the domain of any other
extension. Moreover, using a Hankel transform, it is possible to get an
explicit spectral representation of $\Delta_{F}$ (this operator has an
absolutely continuous spectrum of infinite multiplicity) and to show that the
Schwartz kernel of the operator $e^{t\Delta_{F}}$ coincides with
$H_{\beta}(\cdot,\cdot;t)$ (see, e. g., [T97] formula (8.8.30) together with
[C10], p. 370.)
### 3.2 Heat asymptotics for compact polyhedral surfaces
#### 3.2.1 Self-adjoint extensions of a conical Laplacian
Let ${\cal X}$ be a compact polyhedral surface with vertices (conical points)
$P_{1},\dots,P_{N}$. The Laplacian $\Delta$ corresponding to the natural flat
conical metric on ${\cal X}$ with domain $C^{\infty}_{0}({\cal
X}\setminus\\{P_{1},\dots,P_{N}\\})$ (we remind the reader that the Riemannian
manifold ${\cal X}$ is smooth everywhere except the vertices) is not
essentially self-adjoint and one has to fix one of its self-adjoint
extensions. We are to discuss now the choice of a self-adjoint extension.
This choice is defined by the prescription of some particular asymptotical
behavior near the conical points to functions from the domain of the
Laplacian; it is sufficient to consider a surface with only one conical point
$P$ of the conical angle $\beta$. More precisely, assume that ${\cal X}$ is
smooth everywhere except the point $P$ and that some vicinity of $P$ is
isometric to a vicinity of the vertex ${\cal O}$ of the standard cone
$C_{\beta}$ (of course, now the metric on ${\cal X}$ no more can be flat
everywhere in ${\cal X}\setminus P$ unless the genus $g$ of ${\cal X}$ is
greater than one and $\beta=2\pi(2g-1)$).
For $k\in{\mathbb{N}}_{0}$ introduce the functions $V_{\pm}^{k}$ on
$C_{\beta}$ by
$V^{k}_{\pm}(r,\theta)=r^{{\pm}\frac{2\pi k}{\beta}}\exp\\{i\frac{2\pi
k\theta}{\beta}\\};\ \ k>0\,,$ $V_{+}^{0}=1,\ \ V_{-}^{0}=\log r\,.$
Clearly, these functions are formal solutions to the homogeneous problem
$\Delta u=0$ on $C_{\beta}$. Notice that the functions $V_{-}^{k}$ grow near
the vertex but are still square integrable in its vicinity if
$k<\frac{\beta}{2\pi}$.
Let ${\cal D}_{\rm min}$ denote the graph closure of $C^{\infty}_{0}({\cal
X}\setminus P)$, i.e.,
$U\in{\cal D}_{{\rm min}}\Leftrightarrow\exists u_{m}\in C^{\infty}_{0}({\cal
X}\setminus P),W\in L_{2}({\cal X}):u_{m}\rightarrow U\ {\rm and}\ \Delta
u_{m}\rightarrow W\ \ {\rm in}\ \ L_{2}({\cal X}).$
Define the space $H_{\delta}^{2}(C_{\beta})$ as the closure of
$C^{\infty}_{0}(C_{\beta}\setminus{\cal O})$ with respect to the norm
$||u;H_{\delta}^{2}(C_{\beta})||^{2}=\sum_{|{\bf\alpha}|\leq
2}\int_{C_{\beta}}r^{2(\delta-2+|{\bf\alpha}|)}|D^{\bf\alpha}_{x}u(x)|^{2}dx.$
Then for any $\delta\in{\mathbb{R}}$ such that $\delta-1\neq\frac{2\pi
k}{\beta},k\in{\mathbb{Z}}$ one has the a priori estimate
$||u;H^{2}_{\delta}(C_{\beta})||\leq c||\Delta u;H^{0}_{\delta}(C_{\beta})||$
(11)
for any $u\in C^{\infty}_{0}(C_{\beta}\setminus{\cal O})$ and some constant
$c$ being independent of $u$ (see, e.g., [NP92], Chapter 2).
It follows from Sobolev’s imbedding theorem that for functions from $u\in
H_{\delta}^{2}(C_{\beta})$ one has the point-wise estimate
$r^{\delta-1}|u(r,\theta)|\leq c||u;H_{\delta}^{2}(C_{\beta})||.$ (12)
Applying estimates (11) and (12) with $\delta=0$, we see that functions $u$
from ${\cal D}_{\rm min}$ must obey the asymptotics $u(r,\theta)=O(r)$ as
$r\to 0$.
Now the description of the set of all self-adjoint extensions of $\Delta$
looks as follows. Let $\chi$ be a smooth function on ${\cal X}$ which is equal
to $1$ near the vertex $P$ and such that in a vicinity of the support of
$\chi$ ${\cal X}$ is isometric to $C_{\beta}$. Denote by ${\mathfrak{M}}$ the
linear subspace of $L_{2}({\cal X})$ spanned by the functions $\chi
V_{\pm}^{k}$ with $0\leq k<\frac{\beta}{2\pi}$. The dimension, $2d$, of
${\mathfrak{M}}$ is even. To get a self-adjoint extension of $\Delta$ one
chooses a subspace ${\mathfrak{N}}$ of ${\mathfrak{M}}$ of dimension $d$ such
that
$(\Delta u,v)_{L_{2}({\cal X})}-(u,\Delta v)_{L_{2}({\cal
X})}=\lim_{\epsilon\to 0+}\oint_{r=\epsilon}\left(u\frac{\partial v}{\partial
r}-v\frac{\partial u}{\partial r}\right)=0$
for any $u,v\in{\mathfrak{N}}$. To any such subspace ${\mathfrak{N}}$ there
corresponds a self-adjoint extension $\Delta_{\mathfrak{N}}$ of $\Delta$ with
domain ${\mathfrak{N}}+{\cal D}_{{\rm min}}$.
The extension corresponding to the subspace ${\mathfrak{N}}$ spanned by the
functions $\chi V_{+}^{k}$, $0\leq k<\frac{\beta}{2\pi}$ coincides with the
Friedrichs extension of $\Delta$. The functions from the domain of the
Friedrichs extension are bounded near the vertex.
From now on we denote by $\Delta$ the Friedrichs extension of the Laplacian on
the polyhedral surface ${\cal X}$; other extensions will not be considered
here.
#### 3.2.2 Heat asymptotics
###### Theorem 1
Let ${\cal X}$ be a compact polyhedral surface with vertices
$P_{1},\dots,P_{N}$ of conical angles $\beta_{1},\dots,\beta_{N}$. Let
$\Delta$ be the Friedrichs extension of the Laplacian defined on functions
from $C^{\infty}_{0}({\cal X}\setminus\\{P_{1},\dots,P_{N}\\})$. Then
1. 1.
The spectrum of the operator $\Delta$ is discrete, all the eigenvalues of
$\Delta$ have finite multiplicity.
2. 2.
Let ${\cal H}(x,y;t)$ be the heat kernel for $\Delta$. Then for some
$\epsilon>0$
${\rm Tr}\,e^{t\Delta}=\int_{\cal X}{\cal H}(x,x;t)\,dx=\frac{{\rm Area}({\cal
X})}{4\pi
t}+\frac{1}{12}\sum_{k=1}^{N}\left\\{\frac{2\pi}{\beta_{k}}-\frac{\beta_{k}}{2\pi}\right\\}+O(e^{-\epsilon/t}),$
(13)
as $t\to 0+$.
3. 3.
The counting function, $N(\lambda)$, of the spectrum of $\Delta$ obeys the
asymptotics $N(\lambda)=O(\lambda)$ as $\lambda\to+\infty$.
Proof. 1) The proof of the first statement is a standard exercise (cf. [K93]).
We indicate only the main idea leaving the details to the reader. Introduce
the closure, $H^{1}({\cal X})$, of $C^{\infty}_{0}({\cal
X}\setminus\\{P_{1},\dots,P_{N}\\})$ with respect to the norm
$|||u|||=||u;L_{2}||+||\nabla u;L_{2}||$. It is sufficient to prove that any
bounded set $S$ in $H^{1}({\cal X})$ is precompact in the $L_{2}$-topology
(this will imply the compactness of the self-adjoint operator
$(I-\Delta)^{-1}$). Moreover, one can assume that the supports of functions
from $S$ belong to a small ball $B$ centered at a conical point $P$. Now to
prove the precompactness of $S$ it is sufficient to make use of the expansion
with respect to eigenfunctions of the Dirichlet problem in $B$ and the
diagonal process.
2) Let ${\cal X}=\cup_{j=0}^{N}K_{j}$, where $K_{j}$, $j=1,\dots,N$, is a
neighborhood of the conical point $P_{j}$ which is isometric to
$C_{\beta_{j}}(R)$ with some $R>0$, and $K_{0}={\cal
X}\setminus\cup_{j=1}^{N}K_{j}$.
Let also $K^{\epsilon_{1}}_{j}\supset K_{j}$ and $K^{\epsilon_{1}}_{j}$ be
isometric to $C_{\beta_{j}}(R+\epsilon_{1})$ with some $\epsilon_{1}>0$ and
$j=1,\dots,N$.
Fixing $t>0$ and $x,y\in K_{j}$ with $j>0$, one has
$\int_{0}^{t}\,ds\int_{K_{j}^{\epsilon_{1}}}\left(\psi\\{\Delta_{z}-\partial_{s}\\}\phi-\phi\\{\Delta_{z}+\partial_{s}\\}\psi\right)\,dz=$
(14) $\int_{0}^{t}ds\int_{\partial
K_{j}^{\epsilon_{1}}}\left(\phi\frac{\partial\psi}{\partial
n}-\psi\frac{\partial\phi}{\partial
n}\right)dl(z)-\int_{K_{j}^{\epsilon_{1}}}\left(\phi(z,t)\psi(z,t)-\phi(z,0)\psi(z,0)\right)\,dz$
(15)
with $\phi(z,t)={\cal H}(z,y;t)-H_{\beta_{j}}(z,y;t)$ and
$\psi(z,t)=H_{\beta_{j}}(z,x;t-s)$. (Here it is important that we are working
with the heat kernel of the Friedrichs extension of the Laplacian, for other
extensions the heat kernel has growing terms in the asymptotics near the
vertex and the right hand side of (14) gets extra terms.) Therefore,
$H_{\beta_{j}}(x,y;t)-{\cal H}(x,y;t)=$ $\int_{0}^{t}ds\int_{\partial
K_{j}^{\epsilon_{1}}}\left({\cal H}(y,z;s)\frac{\partial
H_{\beta_{j}}(x,z;t-s)}{\partial
n(z)}-H_{\beta_{j}}(z,x;t-s)\frac{{\partial\cal H}(z,y;s)}{\partial
n(z)}\right)\,dl(z)$ $=O(e^{-\epsilon_{2}/t})$
with some $\epsilon_{2}>0$ as $t\to 0+$ uniformly with respect to $x,y\in
K_{j}$. This implies that
$\int_{K_{j}}{\cal
H}(x,x;t)dx=\int_{K_{j}}H_{\beta_{j}}(x,x;t)dx+O(e^{-\epsilon_{2}/t}).$ (16)
Since the metric on ${\cal X}$ is flat in a vicinity of $K_{0}$, one has the
asymptotics
$\int_{K_{0}}{\cal H}(x,x;t)dx=\frac{{\rm Area}(K_{0})}{4\pi
t}+O(e^{-\epsilon_{3}/t})$
with some $\epsilon_{3}>0$ (cf. [MS67]). Now (13) follows from (7).
3) The third statement of the theorem follows from the second one due to the
standard Tauberian arguments.
## 4 Determinant of the Laplacian: Analytic surgery and Polyakov-type
formulas
Theorem 1 opens a way to define the determinant, ${\rm det}\,\Delta$, of the
Laplacian on a compact polyhedral surface via the standard Ray-Singer
regularization. Namely introduce the operator $\zeta$-function
$\zeta_{\Delta}(s)=\sum_{\lambda_{k}>0}\frac{1}{\lambda_{k}^{s}},$ (17)
where the summation goes over all strictly positive eigenvalues $\lambda_{k}$
of the operator $-\Delta$ (counting multiplicities). Due to the third
statement of Theorem 1, the function $\zeta_{\Delta}$ is holomorphic in the
half-plane $\\{\Re s>1\\}$. Moreover, due to the equality
$\zeta_{\Delta}(s)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\left\\{{\rm
Tr}\,e^{t\Delta}-1\right\\}t^{s-1}\,dt$ (18)
and the asymptotics (13), one has the equality
$\zeta_{\Delta}(s)=\frac{1}{\Gamma(s)}\left\\{\frac{{\rm Area}\,({\cal
X})}{4\pi(s-1)}+\left[\frac{1}{12}\sum_{k=1}^{N}\left\\{\frac{2\pi}{\beta_{k}}-\frac{\beta_{k}}{2\pi}\right\\}-1\right]\frac{1}{s}+e(s)\right\\},$
(19)
where $e(s)$ is an entire function. Thus, $\zeta_{\Delta}$ is regular at $s=0$
and one can define the $\zeta$-regularized determinant of the Laplacian via
usual $\zeta$-regularization (cf. [R73]):
${\rm det}\Delta:=\exp\\{-\zeta^{\prime}_{\Delta}(0)\\}\,.$ (20)
Moreover, (19) and the relation $\sum_{k=1}^{N}b_{k}=2g-2$;
$b_{k}=\frac{\beta_{k}}{2\pi}-1$ yield
$\zeta_{\Delta}(0)=\frac{1}{12}\sum_{k=1}^{N}\left\\{\frac{2\pi}{\beta_{k}}-\frac{\beta_{k}}{2\pi}\right\\}-1=\left(\frac{\chi({\cal
X})}{6}-1\right)+\frac{1}{12}\sum_{k=1}^{N}\left\\{\frac{2\pi}{\beta_{k}}+\frac{\beta_{k}}{2\pi}-2\right\\},$
(21)
where $\chi({\cal X})=2-2g$ is the Euler characteristic of ${\cal X}$.
It should be noted that the term $\frac{\chi({\cal X})}{6}-1$ at the right
hand side of (21) coincides with the value at zero of the operator
$\zeta$-function of the Laplacian corresponding to an arbitrary smooth metric
on ${\cal X}$ (see, e. g., [S88], p. 155).
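As a concrete illustration (ours, not taken from the paper), consider the surface of a cube: it has genus $g=0$ and eight vertices with conical angles $\beta_{k}=3\pi/2$, so $b_{k}=-1/4$ and $\sum_{k}b_{k}=-2=2g-2$. The short check below confirms that the two expressions for $\zeta_{\Delta}(0)$ in (21) agree, both giving $-11/18$.

```python
from fractions import Fraction as F

betas_over_2pi = [F(3, 4)] * 8        # beta_k/(2*pi) for the eight cube corners
g = 0
chi = 2 - 2 * g                       # Euler characteristic

assert sum(x - 1 for x in betas_over_2pi) == 2 * g - 2   # sum of b_k = 2g - 2

zeta0_direct = F(1, 12) * sum(1 / x - x for x in betas_over_2pi) - 1
zeta0_split = (F(chi, 6) - 1) + F(1, 12) * sum(1 / x + x - 2 for x in betas_over_2pi)
print(zeta0_direct, zeta0_split)      # -11/18 -11/18
```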
Let ${\bf m}$ and $\tilde{{\bf m}}=\kappa{\bf m}$, $\kappa>0$ be two
homothetic flat metrics with the same conical points with conical angles
$\beta_{1},\dots,\beta_{N}$. Then (17), (20) and (21) imply the following
rescaling property of the conical Laplacian:
${\rm det}\Delta^{\tilde{{\bf m}}}=\kappa^{-\left(\frac{\chi({\cal
X})}{6}-1\right)-\frac{1}{12}\sum_{k=1}^{N}\left\\{\frac{2\pi}{\beta_{k}}+\frac{\beta_{k}}{2\pi}-2\right\\}}{\rm
det}\,\Delta^{{\bf m}}$ (22)
### 4.1 Analytic surgery
Let ${\bf m}$ be an arbitrary smooth metric on ${\cal X}$ and denote by
$\Delta^{\bf m}$ the corresponding Laplacian. Consider $N$ nonoverlapping
connected and simply connected domains $D_{1},\dots,D_{N}\subset{\cal X}$
bounded by closed curves $\gamma_{1},\dots,\gamma_{N}$ and introduce also the
domain $\Sigma={\cal X}\setminus\cup_{k=1}^{N}D_{k}$ and the contour
$\Gamma=\cup_{k=1}^{N}\gamma_{k}$.
Define the Neumann jump operator $R:C^{\infty}(\Gamma)\rightarrow
C^{\infty}(\Gamma)$ by
$R(f)|_{\gamma_{k}}=\partial_{\nu}(V^{-}_{k}-V^{+}_{k}),$
where $\nu$ is the outward normal to $\gamma_{k}=\partial D_{k}$, the
functions $V^{-}_{k}$ and $V^{+}$ are the solutions of the boundary value
problems $\Delta^{{\bf m}}V^{-}_{k}=0$ in $D_{k}$, $V^{-}|_{\partial D_{k}}=f$
and $\Delta^{{\bf m}}V^{+}=0$ in $\Sigma$, $V^{+}|_{\Gamma}=f$. The Neumann
jump operator is an elliptic pseudodifferential operator of order $1$, and it
is known that one can define its determinant via the standard
$\zeta$-regularization.
In what follows it is crucial that the Neumann jump operator does not change
if we vary the metric within the same conformal class.
Let $(\Delta^{{\bf m}}|D_{k})$ and $(\Delta^{\bf m}|\Sigma)$ be the operators
of the Dirichlet boundary problem for $\Delta^{{\bf m}}$ in domains $D_{k}$
and $\Sigma$ respectively, the determinants of these operators also can be
defined via $\zeta$-regularization.
Due to Theorem $B^{*}$ from [BFK92], we have
${\rm det}\Delta^{{\bf m}}=\left\\{\prod_{k=1}^{N}{\rm det}(\Delta^{\bf
m}|D_{k})\right\\}\,{\rm det}(\Delta^{\bf m}|\Sigma)\,{\rm det}R\,\\{{\rm
Area}({\cal X},{\bf m})\\}\,\\{l(\Gamma)\\}^{-1},$ (23)
where $l(\Gamma)$ is the length of the contour $\Gamma$ in the metric ${\bf m}$.
###### Remark 2
We have excluded the zero modes of an operator from the definition of its
determinant, so we are using the same notation ${\rm det}\,A$ for the
determinants of operators $A$ with and without zero modes. In [BFK92] the
determinant of an operator $A$ with zero modes is always equal to zero, and
what we call here ${\rm det}\,A$ is called the modified determinant in [BFK92]
and denoted there by ${\rm det}^{*}\,A$.
An analogous statement holds for the flat conical metric. Namely let ${\cal
X}$ be a compact polyhedral surface with vertices $P_{1},\dots,P_{N}$ and $g$
be a corresponding flat metric with conical singularities. Choose the domains
$D_{k}$, $k=1,\dots,N$ being (open) nonoverlapping disks centered at $P_{k}$
and let $(\Delta|D_{k})$ be the Friedrichs extension of the Laplacian with
domain $C^{\infty}_{0}(D_{k}\setminus P_{k})$ in $L_{2}(D_{k})$. Then formula
(23) is still valid with $\Delta^{\bf m}=\Delta$ (cf. [KK04] or the recent
paper [LMP07] for a more general result).
### 4.2 Polyakov’s formula
We state this result in the form given in ([F92], p. 62). Let ${\bf
m}_{1}=\rho_{1}^{-2}(z,\bar{z})\widehat{dz}$ and ${\bf
m}_{2}=\rho_{2}^{-2}(z,\bar{z})\widehat{dz}$ be two smooth conformal metrics
on ${\cal X}$ and let ${\rm det}\Delta^{{\bf m}_{1}}$ and ${\rm
det}\Delta^{{\bf m}_{2}}$ be the determinants of the corresponding Laplacians
(defined via the standard Ray-Singer regularization). Then
$\frac{{\rm det}\Delta^{{\bf m}_{2}}}{{\rm det}\Delta^{{\bf
m}_{1}}}=\frac{{\rm Area}({\cal X},{\bf m}_{2})}{{\rm Area}({\cal X},{\bf
m}_{1})}\exp\left\\{\frac{1}{3\pi}\int_{\cal
X}\log\frac{\rho_{2}}{\rho_{1}}\partial^{2}_{z\bar{z}}\log(\rho_{2}\rho_{1})\widehat{dz}\right\\}\,.$
(24)
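For concreteness (an illustration added here, not part of the original text), the right-hand side of (24) can be evaluated numerically; the sketch below does this on the flat square torus ${\mathbb R}^{2}/{\mathbb Z}^{2}$, using the convention ${\bf m}_{i}=\rho_{i}^{-2}\widehat{dz}$ of the text, $\partial^{2}_{z\bar z}=\frac14(\partial_{x}^{2}+\partial_{y}^{2})$ computed by FFT, and an arbitrary smooth periodic choice of $\rho_{2}$ made only for this example.

```python
# Illustrative sketch (not from the paper): numerically evaluate the
# right-hand side of Polyakov's formula (24) on the flat square torus
# R^2 / Z^2, with m_i = rho_i^{-2} dA and d^2/(dz dzbar) = (1/4) * Laplacian.
import numpy as np

n = 256
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # angular wave numbers
KX, KY = np.meshgrid(k, k, indexing="ij")

def laplacian(f):
    """Periodic Laplacian d^2/dx^2 + d^2/dy^2 computed spectrally."""
    return np.real(np.fft.ifft2(-(KX ** 2 + KY ** 2) * np.fft.fft2(f)))

rho1 = np.ones_like(X)                                               # flat reference metric
rho2 = np.exp(0.3 * np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y))   # arbitrary example

dA = 1.0 / n ** 2
area1 = np.sum(rho1 ** -2) * dA
area2 = np.sum(rho2 ** -2) * dA
anomaly = np.sum(np.log(rho2 / rho1) * 0.25 * laplacian(np.log(rho2 * rho1))) * dA
ratio = (area2 / area1) * np.exp(anomaly / (3 * np.pi))
print("det(Delta^{m2}) / det(Delta^{m1}) predicted by (24):", ratio)
```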
### 4.3 Analog of Polyakov’s formula for a pair of flat conical metrics
###### Proposition 2
Let $a_{1},\dots,a_{N}$ and $b_{1},\dots,b_{M}$ be real numbers which are
greater than $-1$ and satisfy $a_{1}+\dots+a_{N}=b_{1}+\dots+b_{M}=2g-2$. Let
also $T$ be a connected $C^{1}$-manifold and let
$T\ni t\mapsto{\bf m}_{1}(t),\ \ T\ni t\mapsto{\bf m}_{2}(t)$
be two $C^{1}$-families of flat conical metrics on ${\cal X}$ such that
1. 1.
For any $t\in T$ the metrics ${\bf m}_{1}(t)$ and ${\bf m}_{2}(t)$ define the
same conformal structure on ${\cal X}$,
2. 2.
${{\bf m}_{1}}(t)$ has conical singularities at $P_{1}(t),\dots,P_{N}(t)\in{\cal X}$ with conical angles $2\pi(a_{1}+1)$, $\dots$, $2\pi(a_{N}+1)$,
3. 3.
${{\bf m}_{2}}(t)$ has conical singularities at $Q_{1}(t),\dots,Q_{M}(t)\in{\cal X}$ with conical angles $2\pi(b_{1}+1)$, $\dots$, $2\pi(b_{M}+1)$,
4. 4.
For any $t\in T$ the sets $\\{P_{1}(t),\dots,P_{N}(t)\\}$ and
$\\{Q_{1}(t),\dots,Q_{M}(t)\\}$ do not intersect.
Let $x_{k}$ be the distinguished local parameter for ${{\bf m}_{1}}$ near $P_{k}$ and $y_{l}$ be the distinguished local parameter for ${{\bf m}_{2}}$ near $Q_{l}$ (we omit the argument $t$).
Introduce the functions $f_{k}$, $g_{l}$ and the complex numbers ${\bf
f_{k}}$, ${\bf g_{l}}$ by
${{\bf m}_{2}}=|f_{k}(x_{k})|^{2}|dx_{k}|^{2}\ \ \mbox{near}\ \ P_{k};\ \ \ \
\ {\bf f_{k}}:=f_{k}(0),$ ${{\bf m}_{1}}=|g_{l}(y_{l})|^{2}|dy_{l}|^{2}\ \
\mbox{near}\ \ Q_{l};\ \ \ \ \ {\bf g_{l}}:=g_{l}(0).$
Then the following equality holds
$\frac{{\rm det}\Delta^{{\bf m}_{1}}}{{\rm det}\Delta^{{\bf m}_{2}}}=C\
\frac{{\rm Area}\,({\cal X},{{\bf m}_{1}})}{{\rm Area}\,({\cal X},{{\bf
m}_{2}})}\ \frac{\prod_{l=1}^{M}|{\bf g_{l}}|^{b_{l}/6}}{\prod_{k=1}^{N}|{\bf
f_{k}}|^{a_{k}/6}},$ (25)
where the constant $C$ is independent of $t\in T$.
Proof. Take $\epsilon>0$ and introduce the disks $D_{k}(\epsilon)$,
$k=1,\dots,M+N$ centered at the points $P_{1},\dots,P_{N}$,
$Q_{1},\dots,Q_{M}$; $D_{k}(\epsilon)=\\{|x_{k}|\leq\epsilon\\}$ for
$k=1,\dots,N$ and $D_{N+l}(\epsilon)=\\{|y_{l}|\leq\epsilon\\}$ for $l=1,\dots,M$. Let
$h_{k}:\overline{{\mathbb{R}}}_{+}\rightarrow{\mathbb{R}}$, $k=1,\dots,N+M$ be
smooth positive functions such that
1. 1.
$\int_{0}^{1}h_{k}^{2}(r)rdr=\begin{cases}\int_{0}^{1}r^{2a_{k}+1}dr=\frac{1}{2a_{k}+2},\
\ \mbox{if}\ \ k=1,\dots,N\\\ \int_{0}^{1}r^{2b_{l}+1}dr=\frac{1}{2b_{l}+2},\
\ \mbox{if}\ \ k=N+l,\ l=1,\dots,M\end{cases}$
2. 2.
$h_{k}(r)=\begin{cases}r^{a_{k}}\ \ \mbox{for}\ \ r\geq 1\ \ \mbox{if}\ \
k=1,\dots,N\\\ r^{b_{l}}\ \ \mbox{for}\ \ r\geq 1\ \ \mbox{if}\ \ k=N+l,\
l=1,\dots,M\end{cases}$
Define two families of smooth metrics ${\bf m}_{1}^{\epsilon}$, ${\bf
m}_{2}^{\epsilon}$ on ${\cal X}$ via
${\bf
m}_{1}^{\epsilon}(z)=\begin{cases}\epsilon^{2a_{k}}h_{k}^{2}(|x_{k}|/\epsilon)|dx_{k}|^{2},\
\ \ \ \ z\in D_{k}(\epsilon),\ \ k=1,\dots,N\\\ {\bf m}(z),\ \ \ \ \ \ \ \
z\in{\cal X}\setminus\cup_{k=1}^{N}D_{k}(\epsilon)\,,\end{cases}$
${\bf
m}_{2}^{\epsilon}(z)=\begin{cases}\epsilon^{2b_{l}}h_{N+l}^{2}(|y_{l}|/\epsilon)|dy_{l}|^{2},\
\ \ \ \ z\in D_{N+l}(\epsilon),\ \ l=1,\dots,M\\\ {\bf m}(z),\ \ \ \ \ \ \ \
z\in{\cal X}\setminus\cup_{l=1}^{M}D_{N+l}(\epsilon)\,.\end{cases}$
The metrics ${\bf m}_{1,2}^{\epsilon}$ converge to ${\bf m}_{1,2}$ as
$\epsilon\to 0$ and
${\rm Area}({\cal X},{\bf m}_{1,2}^{\epsilon})={\rm Area}({\cal X},{\bf
m}_{1,2}).$
###### Lemma 1
Let $\partial_{t}$ be the differentiation with respect to one of the
coordinates on $T$ and let ${\rm det}\Delta^{{\bf m}_{1,2}^{\epsilon}}$ be the
standard $\zeta$-regularized determinant of the Laplacian corresponding to the
smooth metric ${\bf m}_{1,2}^{\epsilon}$. Then
$\partial_{t}\log{\rm det}\Delta^{{\bf m}_{1,2}}=\partial_{t}\log{\rm
det}\Delta^{{\bf m}_{1,2}^{\epsilon}}.$ (26)
To establish the lemma, consider for definiteness the pair ${\bf m}_{1}$ and ${\bf m}_{1}^{\epsilon}$. Due to the analytic surgery formulas from section 4.1
one has
${\rm det}\Delta^{{\bf m}_{1}}=\left\\{\prod_{k=1}^{N}{\rm det}(\Delta^{{\bf
m}_{1}}|D_{k}(\epsilon))\right\\}\,{\rm det}(\Delta^{{\bf
m}_{1}}|\Sigma)\,{\rm det}R\,\\{{\rm Area}({\cal X},{\bf
m}_{1})\\}\,\\{l(\Gamma)\\}^{-1},$ (27) ${\rm det}\Delta^{{\bf
m}_{1}^{\epsilon}}=\left\\{\prod_{k=1}^{N}{\rm det}(\Delta^{{\bf
m}_{1}^{\epsilon}}|D_{k}(\epsilon))\right\\}\,{\rm det}(\Delta^{{\bf
m}_{1}^{\epsilon}}|\Sigma)\,{\rm det}R\,\\{{\rm Area}({\cal X},{\bf
m}_{1}^{\epsilon})\\}\,\\{l(\Gamma)\\}^{-1},$ (28)
with $\Sigma={\cal X}\setminus\cup_{k=1}^{N}D_{k}(\epsilon)$.
Notice that the variations of the logarithms of the first factors in the right
hand sides of (27) and (28) vanish (these factors are independent of $t$)
whereas the variations of the logarithms of all the remaining factors
coincide. This leads to (26).
By virtue of Lemma 1 one has the relation
$\partial_{t}\left\\{\log\frac{{\rm det}\Delta^{{\bf m}_{1}}}{{\rm Area}({\cal
X},{\bf m}_{1})}-\log\frac{{\rm det}\Delta^{{\bf m}_{2}}}{{\rm Area}({\cal
X},{\bf m}_{2})}\right\\}=$ $\partial_{t}\left\\{\log\frac{{\rm
det}\Delta^{{\bf m}_{1}^{\epsilon}}}{{\rm Area}({\cal X},{\bf
m}_{1}^{\epsilon})}-\log\frac{{\rm det}\Delta^{{\bf m}_{2}^{\epsilon}}}{{\rm
Area}({\cal X},{\bf m}_{2}^{\epsilon})}\right\\}.$ (29)
By virtue of Polyakov’s formula the r. h. s. of (29) can be rewritten as
$\sum_{k=1}^{N}\frac{1}{3\pi}\partial_{t}\int_{D_{k}(\epsilon)}(\log
H_{k})_{x_{k}\bar{x}_{k}}\log|f_{k}|\widehat{dx_{k}}-$
$\sum_{l=1}^{M}\frac{1}{3\pi}\partial_{t}\int_{D_{N+l}(\epsilon)}(\log
H_{N+l})_{y_{l},\bar{y}_{l}}\log|g_{l}|\widehat{dy_{l}},$ (30)
where $H_{k}(x_{k})=\epsilon^{-a_{k}}h_{k}^{-1}(|x_{k}|/\epsilon)$,
$k=1,\dots,N$ and
$H_{N+l}(y_{l})=\epsilon^{-b_{l}}h_{N+l}^{-1}(|y_{l}|/\epsilon)$,
$l=1,\dots,M$. Notice that for $k=1,\dots,N$ the function $H_{k}$ coincides
with $|x_{k}|^{-a_{k}}$ in a vicinity of the circle $\\{|x_{k}|=\epsilon\\}$
and the Green formula implies that
$\int_{D_{k}(\epsilon)}(\log H_{k})_{x_{k}\bar{x}_{k}}\log|f_{k}|\widehat{dx_{k}}=\frac{i}{2}\left\\{\oint_{|x_{k}|=\epsilon}(\log|x_{k}|^{-a_{k}})_{\bar{x}_{k}}\log|f_{k}|d\bar{x}_{k}+\right.$
$\left.+\oint_{|x_{k}|=\epsilon}\log|x_{k}|^{-a_{k}}(\log|f_{k}|)_{x_{k}}dx_{k}+\int_{D_{k}(\epsilon)}(\log|f_{k}|)_{x_{k}\bar{x}_{k}}\log
H_{k}dx_{k}\wedge d\bar{x}_{k}\right\\}$
and, therefore,
$\partial_{t}\int_{D_{k}(\epsilon)}(\log
H_{k})_{x_{k}\bar{x}_{k}}\log|f_{k}|\widehat{dx_{k}}=-\frac{a_{k}\pi}{2}\partial_{t}\log|{\bf
f}_{k}|+o(1)$ (31)
as $\epsilon\to 0$. Analogously
$\partial_{t}\int_{D_{N+l}(\epsilon)}(\log
H_{N+l})_{y_{l}\bar{y}_{l}}\log|g_{l}|\widehat{dy_{l}}=-\frac{b_{l}\pi}{2}\partial_{t}\log|{\bf
g}_{l}|+o(1)$ (32)
as $\epsilon\to 0$.
Formula (25) follows from (29), (31) and (32). $\square$
### 4.4 Lemma on three polyhedra
For any metric ${\bf m}$ on ${\cal X}$ denote by $Q({\bf m})$ the ratio
${\det\Delta^{\bf m}}/{\rm Area}({\cal X},{\bf m})$.
Consider three families of flat conical metrics ${\bf l}(t)\sim{\bf
m}(t)\sim{\bf n}(t)$ on ${\cal X}$ (here $\sim$ means conformal equivalence),
where the metric ${\bf l}(t)$ has conical points $P_{1}(t),\dots,P_{L}(t)$
with conical angles $2\pi(a_{1}+1),\dots,2\pi(a_{L}+1)$, the metric ${\bf
m}(t)$ has conical points $Q_{1}(t),\dots,Q_{M}(t)$ with conical angles
$2\pi(b_{1}+1),\dots,2\pi(b_{M}+1)$ and the metric ${\bf n}(t)$ has conical
points $R_{1}(t),\dots,R_{N}(t)$ with conical angles
$2\pi(c_{1}+1),\dots,2\pi(c_{N}+1)$.
Let $x_{k}$ be the distinguished local parameter for ${\bf l}(t)$ near
$P_{k}(t)$ and let ${\bf m}(t)=|f_{k}(x_{k})|^{2}|dx_{k}|^{2}$ and ${\bf
n}(t)=|g_{k}(x_{k})|^{2}|dx_{k}|^{2}$ near $P_{k}(t)$. Let $\xi$ be an
arbitrary conformal local coordinate in a vicinity of the point $P_{k}(t)$.
Then one has ${\bf m}=|f(\xi)|^{2}|d\xi|^{2}$ and ${\bf
n}=|g(\xi)|^{2}|d\xi|^{2}$ with some holomorphic functions $f$ and $g$ and the
ratio
$\frac{{\bf m}(t)}{{\bf
n}(t)}\left(P_{k}(t)\right):=\frac{|f(0)|^{2}}{|g(0)|^{2}}$
is independent of the choice of the conformal local coordinate. In particular
it coincides with the ratio $|f_{k}(0)|^{2}/|g_{k}(0)|^{2}$.
From Proposition 2, one gets the relation
$1=\left\\{\frac{Q({\bf l}(t))}{Q({\bf m}(t))}\frac{Q({\bf m}(t))}{Q({\bf
n}(t))}\frac{Q({\bf n}(t))}{Q({\bf l}(t))}\right\\}^{-12}=$ ${\rm
C}\,\prod_{i=1}^{N}\left[\frac{{\bf l}(t)}{{\bf
m}(t)}(R_{i}(t))\right]^{c_{i}}\prod_{j=1}^{L}\left[\frac{{\bf m}(t)}{{\bf
n}(t)}(P_{j}(t))\right]^{a_{j}}\prod_{k=1}^{M}\left[\frac{{\bf n}(t)}{{\bf
l}(t)}(Q_{k}(t))\right]^{b_{k}}\,,$ (33)
where the constant $C$ is independent of $t$.
From the following statement (which we call the lemma on three polyhedra) one
can see that the constant $C$ in (33) is equal to $1$.
###### Lemma 2
Let ${\cal X}$ be a compact Riemann surface of an arbitrary genus $g$ and let
${\bf l}$, ${\bf m}$ and ${\bf n}$ be three conformal flat conical metrics on
${\cal X}$. Suppose that the metric ${\bf l}$ has conical points
$P_{1},\dots,P_{L}$ with conical angles $2\pi(a_{1}+1),\dots,2\pi(a_{L}+1)$,
the metric ${\bf m}$ has conical points $Q_{1},\dots,Q_{M}$ with conical
angles $2\pi(b_{1}+1),\dots,2\pi(b_{M}+1)$ and the metric ${\bf n}$ has
conical points $R_{1},\dots,R_{N}$ with conical angles
$2\pi(c_{1}+1),\dots,2\pi(c_{N}+1)$. (All the points $P_{l}$, $Q_{m}$, $R_{n}$
are supposed to be distinct.) Then one has the relation
$\prod_{i=1}^{N}\left[\frac{{\bf l}}{{\bf
m}}(R_{i})\right]^{c_{i}}\prod_{j=1}^{L}\left[\frac{{\bf m}}{{\bf
n}}(P_{j})\right]^{a_{j}}\prod_{k=1}^{M}\left[\frac{{\bf n}}{{\bf
l}}(Q_{k})\right]^{b_{k}}=1\,.$ (34)
Proof. When $g>0$ and all three metrics ${\bf l}$, ${\bf m}$ and ${\bf n}$
have trivial holonomy, i. e. one has ${\bf l}=|\omega_{1}|^{2}$, ${\bf
m}=|\omega_{2}|^{2}$ and ${\bf n}=|\omega_{3}|^{2}$ with some holomorphic one-
forms $\omega_{1}$, $\omega_{2}$ and $\omega_{3}$, relation (34) is an
immediate consequence of the Weil reciprocity law (see [GH78], §2.3). In the general case the statement reduces to an analog of the Weil reciprocity law for harmonic functions with isolated singularities.
## 5 Polyhedral tori
Here we establish a formula for the determinant of the Laplacian on a
polyhedral torus, i.e., a Riemann surface of genus one with flat conical
metric. We do this by comparing this determinant with the determinant of the
Laplacian corresponding to the smooth flat metric on the same torus. For the
latter Laplacian the spectrum is easy to find and the determinant is
explicitly known (it is given by the Ray-Singer formula stated below).
In this section ${\cal X}$ is an elliptic ($g=1$) curve and it is assumed that
${\cal X}$ is the quotient of the complex plane ${\mathbb{C}}$ by the lattice
generated by $1$ and $\sigma$, where $\Im\sigma>0$. The differential $dz$ on
${\mathbb{C}}$ gives rise to a holomorphic differential $v_{0}$ on ${\cal X}$
with periods $1$ and $\sigma$.
#### 5.0.1 Ray-Singer formula
Let $\Delta$ be the Laplacian on ${\cal X}$ corresponding to the flat smooth
metric $|v_{0}|^{2}$. The following formula for ${\rm det}\Delta$ was proved
in [R73]:
${\rm det}\Delta=C|\Im\sigma|^{2}|\eta(\sigma)|^{4},$ (35)
where $C$ is a $\sigma$-independent constant and $\eta$ is the Dedekind eta-
function.
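For concreteness (an illustration added here, not part of the original text), the $\sigma$-dependent part $|\Im\sigma|^{2}|\eta(\sigma)|^{4}$ of (35) can be evaluated directly from the $q$-product for the Dedekind eta-function; the $\sigma$-independent constant $C$ is not fixed here, and the sample values of $\sigma$ are arbitrary.

```python
# Illustrative sketch (not from the paper): the sigma-dependent part of the
# Ray-Singer formula (35), det Delta = C * |Im(sigma)|^2 * |eta(sigma)|^4,
# with eta computed from the q-product (the constant C is left undetermined).
import cmath, math

def dedekind_eta(sigma, terms=400):
    """eta(sigma) = q^{1/24} * prod_{n>=1} (1 - q^n),  q = exp(2*pi*i*sigma)."""
    q = cmath.exp(2j * math.pi * sigma)
    prod = 1.0 + 0.0j
    for n in range(1, terms + 1):
        prod *= 1 - q ** n
    return cmath.exp(1j * math.pi * sigma / 12.0) * prod

def modular_part(sigma):
    return (sigma.imag ** 2) * abs(dedekind_eta(sigma)) ** 4

if __name__ == "__main__":
    for s in (1j, cmath.exp(1j * math.pi / 3)):   # square and hexagonal tori
        print(s, modular_part(s))
```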
### 5.1 Determinant of the Laplacian on a polyhedral torus
Let $\sum_{k=1}^{N}b_{k}P_{k}$ be a generalized divisor on ${\cal X}$ with
$\sum_{k=1}^{N}b_{k}=0$ and assume that $b_{k}>-1$ for all $k$. Let ${\bf m}$
be a flat conical metric corresponding to this divisor via Troyanov’s theorem.
Clearly, it has a finite area and is defined uniquely when this area is fixed.
Fixing numbers $b_{1},\dots,b_{N}>-1$ such that $\sum_{k=1}^{N}b_{k}=0$, we
define the space ${\cal M}(b_{1},\dots,b_{N})$ as the moduli space of pairs
$({\cal X},{\bf m})$, where ${\cal X}$ is an elliptic curve and ${\bf m}$ is a
flat conformal metric on ${\cal X}$ having $N$ conical singularities with
conical angles $2\pi(b_{k}+1)$, $k=1,\dots,N$. The space ${\cal
M}(b_{1},\dots,b_{N})$ is a connected orbifold of real dimension $2N+3$.
We are going to give an explicit formula for the function
${\cal M}(b_{1},\dots,b_{N})\ni({\cal X},{\bf m})\mapsto{\rm
det}\Delta^{\bf m}\,.$
Write the normalized holomorphic differential $v_{0}$ on the elliptic curve
${\cal X}$ in the distinguished local parameter $x_{k}$ near the conical point
$P_{k}$ ($k=1,\dots,N$) as
$v_{0}=f_{k}(x_{k})dx_{k}$
and define
${\bf f}_{k}:=f_{k}(x_{k})|_{x_{k}=0},\ k=1,\dots,N\,.$ (36)
###### Theorem 2
The following formula holds true
${\rm det}\Delta^{\bf m}=C|\Im\sigma|\,{\rm Area}({\cal X},{\bf
m})\,|\eta(\sigma)|^{4}\prod_{k=1}^{N}|{\bf f}_{k}|^{-b_{k}/6},$ (37)
where $C$ is a constant depending only on $b_{1},\dots,b_{N}$.
Proof. The theorem immediately follows from (35) and (25).
## 6 Polyhedral surfaces of higher genus
Here we generalize the results of the previous section to the case of
polyhedral surfaces of an arbitrary genus. Among all polyhedral surfaces of
genus $g\geq 1$ we distinguish flat surfaces with trivial holonomy. In our
calculation of the determinant of the Laplacian, it is this class of surfaces
which plays the role of the smooth flat tori in genus one. For flat surfaces
with trivial holonomy we find an explicit expression for the determinant of
the Laplacian which generalizes the Ray-Singer formula (35) for smooth flat
tori. As we did in genus one, comparing two determinants of the Laplacians by
means of Proposition 2, we derive a formula for the determinant of the
Laplacian on a general polyhedral surface.
### 6.1 Flat surfaces with trivial holonomy and moduli spaces of holomorphic
differentials on Riemann surfaces
We follow [KZ03] and Zorich’s survey [Z06]. Outside the vertices a Euclidean
polyhedral surface ${\cal X}$ is locally isometric to a Euclidean plane and
one can define the parallel transport along paths on the punctured surface
${\cal X}\setminus\\{P_{1},\dots,P_{N}\\}$. The parallel transport along a
homotopically nontrivial loop in ${\cal X}\setminus\\{P_{1},\dots,P_{N}\\}$ is
generally nontrivial. If, e.g., a small loop encircles a conical point $P_{k}$
with conical angle $\beta_{k}$, then a tangent vector to ${\cal X}$ turns by
$\beta_{k}$ after the parallel transport along this loop.
A Euclidean polyhedral surface ${\cal X}$ is called a surface with trivial
holonomy if the parallel transport along any loop in ${\cal
X}\setminus\\{P_{1},\dots,P_{N}\\}$ does not change tangent vectors to ${\cal
X}$.
All conical points of a surface with trivial holonomy must have conical angles
which are integer multiples of $2\pi$.
A flat conical metric $g$ on a compact real oriented two-dimensional manifold
${\cal X}$ equips ${\cal X}$ with the structure of a compact Riemann surface; if this metric has trivial holonomy, then it necessarily has the form
$g=|w|^{2}$, where $w$ is a holomorphic differential on the Riemann surface
${\cal X}$ (see [Z06]). The holomorphic differential $w$ has zeros at the
conical points of the metric $g$. The multiplicity of the zero at the point
$P_{m}$ with the conical angle $2\pi(k_{m}+1)$ is equal to $k_{m}$. (There exist, however, polyhedral surfaces with nontrivial holonomy whose conical angles are all integer multiples of $2\pi$. To construct an example, take a compact Riemann surface ${\cal X}$ of genus $g>1$ and choose $2g-2$ points $P_{1},\dots,P_{2g-2}$ on ${\cal X}$ in such a way that the divisor $P_{1}+\dots+P_{2g-2}$ is not in the canonical class. Consider the flat conical conformal metric ${\bf m}$ corresponding to the divisor $P_{1}+\dots+P_{2g-2}$ according to the Troyanov theorem. This metric must have nontrivial holonomy and all its conical angles are equal to $4\pi$.)
The holomorphic differential $w$ is defined up to a unitary complex factor.
This ambiguity can be avoided if the surface ${\cal X}$ is provided with a
distinguished direction (see [Z06]), and it is assumed that $w$ is real along
this distinguished direction. In what follows we always assume that surfaces
with trivial holonomy are provided with such a direction.
Thus, to a Euclidean polyhedral surface of genus $g$ with trivial holonomy we associate a pair $({\cal X},w)$, where ${\cal X}$ is a compact Riemann surface and $w$ is a holomorphic differential on this surface.
This means that we get an element of the moduli space, ${\cal H}_{g}$, of
holomorphic differentials over Riemann surfaces of genus $g$ (see [KZ03]).
The space ${\cal H}_{g}$ is stratified according to the multiplicities of
zeros of $w$.
Denote by ${\cal H}_{g}(k_{1},\dots,k_{M})$ the stratum of ${\cal H}_{g}$,
consisting of differentials $w$ which have $M$ zeros on ${\cal X}$ of
multiplicities $(k_{1},\dots,k_{M})$. Denote the zeros of $w$ by
$P_{1},\dots,P_{M}$; then the divisor of the differential $w$ is given by
$(w)=\sum_{m=1}^{M}k_{m}P_{m}$. Let us choose a canonical basis of cycles
$(a_{\alpha},b_{\alpha})$ on the Riemann surface ${\cal X}$ and cut ${\cal X}$
along these cycles starting at the same point to get the fundamental polygon
$\hat{{\cal X}}$. Inside $\hat{{\cal X}}$ we choose $M-1$ (homology classes
of) paths $l_{m}$ on ${\cal X}\setminus(w)$ connecting the zero $P_{1}$ with
other zeros $P_{m}$ of $w$, $m=2,\dots,M$. Then the local coordinates on
${\cal H}_{g}(k_{1},\dots,k_{M})$ can be chosen as follows [KZ97]:
$A_{\alpha}:=\oint_{a_{\alpha}}w\;,\ \ B_{\alpha}:=\oint_{b_{\alpha}}w\;,\ \
z_{m}:=\int_{l_{m}}w\;,\ \ \alpha=1,\dots,g;\ m=2,\dots,M\;.$ (38)
The area of the surface ${\cal X}$ in the metric $|w|^{2}$ can be expressed in
terms of these coordinates as follows:
${\rm Area}({\cal
X},|w|^{2})=\Im\sum_{\alpha=1}^{g}A_{\alpha}\bar{B_{\alpha}}\;.$
If all zeros of $w$ are simple, we have $M=2g-2$; therefore, the dimension of
the highest stratum ${\cal H}_{g}(1,\dots,1)$ equals $4g-3$.
The Abelian integral $z(P)=\int_{P_{1}}^{P}w$ provides a local coordinate in a
neighborhood of any point $P\in{\cal X}$ except the zeros $P_{1},\dots,P_{M}$.
In a neighborhood of $P_{m}$ the local coordinate can be chosen to be
$(z(P)-z_{m})^{1/(k_{m}+1)}$.
###### Remark 3
The following construction helps to visualize these coordinates in the case of the highest stratum ${\cal H}_{g}(1,\dots,1)$.
Consider $g$ parallelograms $\Pi_{1},\dots,\Pi_{g}$ in the complex plane with
coordinate $z$ having the sides $(A_{1},B_{1})$, $\dots$, $(A_{g},B_{g})$.
Provide these parallelograms with a system of cuts
$[0,z_{2}],\ \ \ [z_{3},z_{4}],\ \ \ \dots,\ \ \ [z_{2g-3},z_{2g-2}]$
(each cut should be repeated on two different parallelograms). Identifying
opposite sides of the parallelograms and gluing the obtained $g$ tori along
the cuts, we get a compact Riemann surface ${\cal X}$ of genus $g$. Moreover,
the differential $dz$ on the complex plane gives rise to a holomorphic
differential $w$ on ${\cal X}$ which has $2g-2$ zeros at the ends of the cuts.
Thus, we get a point $({\cal X},w)$ from ${\cal H}_{g}(1,\dots,1)$. It can be
shown that any generic point of ${\cal H}_{g}(1,\dots,1)$ can be obtained via
this construction; more sophisticated gluing is required to represent points
of other strata, or non generic points of the stratum ${\cal
H}_{g}(1,\dots,1)$.
To shorten the notation it is convenient to consider the coordinates $A_{\alpha}$, $B_{\alpha}$, $z_{m}$ together. Namely, in the sequel we shall
denote them by $\zeta_{k}$, $k=1,\dots,2g+M-1$, where
$\zeta_{\alpha}:=A_{\alpha}\;,\ \ \zeta_{g+\alpha}:=B_{\alpha}\;,\ \
\alpha=1,\dots,g\;,\ \ \zeta_{2g+m}:=z_{m+1}\;,\ m=1,\dots,M-1$ (39)
Let us also introduce corresponding cycles $s_{k}$, $k=1,\dots,2g+M-1$, as
follows:
$s_{\alpha}=-b_{\alpha}\;,\ \ s_{g+\alpha}=a_{\alpha}\;,\ \
\alpha=1,\dots,g\;;$ (40)
the cycle $s_{2g+m}$, $m=1,\dots,M-1$ is defined to be the small circle with
positive orientation around the point $P_{m+1}$.
#### 6.1.1 Variational formulas on the spaces of holomorphic differentials
In the previous section we introduced the coordinates on the space of surfaces
with trivial holonomy and fixed type of conical singularities. Here we study
the behavior of basic objects on these surfaces under the change of the
coordinates. In particular, we derive variational formulas of Rauch type for
the matrix of ${\bf b}$-periods of the underlying Riemann surfaces. We also
give variational formulas for the Green function, individual eigenvalues, and
the determinant of the Laplacian on these surfaces.
Rauch formulas on the spaces of holomorphic differentials. For any compact
Riemann surface ${\cal X}$ we introduce the prime-form $E(P,Q)$ and the
canonical meromorphic bidifferential
${\bf w}(P,Q)=d_{P}d_{Q}\log E(P,Q)$ (41)
(see [F92]). The bidifferential ${\bf w}(P,Q)$ has the following local
behavior as $P\to Q$:
${\bf
w}(P,Q)=\left(\frac{1}{(x(P)-x(Q))^{2}}+\frac{1}{6}S_{B}(x(P))+o(1)\right)dx(P)dx(Q),$
(42)
where $x(P)$ is a local parameter. The term $S_{B}(x(P))$ is a projective
connection which is called the Bergman projective connection (see [F92]).
Denote by $v_{\alpha}(P)$ the basis of holomorphic 1-forms on ${\cal X}$
normalized by $\int_{a_{\alpha}}v_{\beta}=\delta_{\alpha\beta}$.
The matrix of b-periods of the surface ${\cal X}$ is given by ${\bf
B}_{\alpha\beta}:=\oint_{b_{\alpha}}v_{\beta}$.
###### Proposition 3
(see [KK09]) Let a pair $({\cal X},w)$ belong to the space ${\cal
H}_{g}(k_{1},\dots,k_{M})$. Under variations of the coordinates on ${\cal
H}_{g}(k_{1},\dots,k_{M})$ the normalized holomorphic differentials and the
matrix of ${\bf b}$-periods of the surface ${\cal X}$ behave as follows:
$\frac{\partial v_{\alpha}(P)}{\partial\zeta_{k}}\Big{|}_{z(P)}=\frac{1}{2\pi
i}\oint_{s_{k}}\frac{v_{\alpha}(Q){\bf w}(P,Q)}{w(Q)}\;,$ (43)
$\frac{\partial{\bf
B}_{\alpha\beta}}{\partial\zeta_{k}}=\oint_{s_{k}}\frac{v_{\alpha}v_{\beta}}{w}$
(44)
where $k=1,\dots,2g+M-1$; we assume that the local coordinate
$z(P)=\int_{P_{1}}^{P}w$ is kept constant under differentiation.
Variation of the resolvent kernel and eigenvalues. For a pair $({\cal X},w)$
from ${\cal H}_{g}(k_{1},\dots,k_{M})$ introduce the Laplacian
$\Delta:=\Delta^{|w|^{2}}$ in the flat conical metric $|w|^{2}$ on ${\cal X}$
(recall that we always deal with the Friedrichs extensions). The corresponding
resolvent kernel $G(P,Q;\lambda)$, $\lambda\in{\mathbb{C}}\setminus{\rm
sp}\,(\Delta)$
* •
satisfies
$(\Delta_{P}-\lambda)G(P,Q;\lambda)=(\Delta_{Q}-\lambda)G(P,Q;\lambda)=0$
outside the diagonal $\\{P=Q\\}$,
* •
is bounded near the conical points, i.e., for any $P\in{\cal X}\setminus\\{P_{1},\dots,P_{M}\\}$
$G(P,Q;\lambda)=O(1)$
as $Q\to P_{k}$, $k=1,\dots,M$,
* •
obeys the asymptotics
$G(P,Q;\lambda)=\frac{1}{2\pi}\log|x(P)-x(Q)|+O(1)$
as $P\to Q$, where $x(\cdot)$ is an arbitrary (holomorphic) local parameter
near $P$.
The following proposition is an analog of the classical Hadamard formula for
the variation of the Green function of the Dirichlet problem in a plane
domain.
###### Proposition 4
The following variational formulas for the resolvent kernel $G(P,Q;\lambda)$
hold:
$\frac{\partial G(P,Q;\lambda)}{\partial A_{\alpha}}=2i\int_{{\bf
b}_{\alpha}}\omega(P,Q;\lambda)\,,$ (45)
$\frac{\partial G(P,Q;\lambda)}{\partial B_{\alpha}}=-2i\int_{{\bf
a}_{\alpha}}\omega(P,Q;\lambda)\,,$ (46)
where
$\omega(P,Q;\lambda)=G(P,z;\lambda)G_{z\bar{z}}(Q,z;\lambda)\overline{dz}+G_{z}(P,z;\lambda)G_{z}(Q,z;\lambda)dz$
is a closed $1$-form and $\alpha=1,\dots,g$;
$\frac{\partial G(P,Q;\lambda)}{\partial z_{m}}=-2i\lim_{\epsilon\to
0}\oint_{|z-z_{m}|=\epsilon}G_{z}(z,P;\lambda)G_{z}(z,Q;\lambda)dz\,,$ (47)
where $m=2,\dots,M$. It is assumed that the coordinates $z(P)$ and $z(Q)$ are
kept constant under variation of the moduli $A_{\alpha},B_{\alpha},z_{m}$.
###### Remark 4
One can unite the formulas (45-47) in a single formula:
$\frac{\partial G(P,Q;\lambda)}{\partial\zeta_{k}}=$
$-2i\left\\{\int_{s_{k}}\frac{G(R,P;\lambda)\partial_{R}\overline{\partial_{R}}G(R,Q;\lambda)+\partial_{R}G(R,P;\lambda)\partial_{R}G(R,Q;\lambda)}{w(R)}\right\\}\,,$
(48)
where $k=1,\dots,2g+M-1$.
Proof. We start with the following integral representation of a solution $u$
to the homogeneous equation $\Delta u-\lambda u=0$ inside the fundamental
polygon $\hat{{\cal X}}$:
$u(\xi,\bar{\xi})=-2i\int_{\partial\hat{{\cal
X}}}G(z,\bar{z},\xi,\bar{\xi};\lambda)u_{\bar{z}}(z,\bar{z})d\bar{z}+G_{z}(z,\bar{z},\xi,\bar{\xi};\lambda)u(z,\bar{z})dz\,.$
(49)
Cutting the surface ${\cal X}$ along the basic cycles, we notice that the
function $\dot{G}(P,\ \cdot\ ;\lambda)=\frac{\partial G(P,\ \cdot\
;\lambda)}{\partial B_{\beta}}$ is a solution to the homogeneous equation
$\Delta u-\lambda u=0$ inside the fundamental polygon (the singularity of
$G(P,Q;\lambda)$ at $Q=P$ disappears after differentiation) and that the
functions $\dot{G}(P,\ \cdot\ ;\lambda)$ and $\dot{G}_{\bar{z}}(P,\ \cdot\
;\lambda)$ have the jumps $G_{z}(P,\ \cdot\ ;\lambda)$ and $G_{z\bar{z}}(P,\
\cdot\ ;\lambda)$ on the cycle ${\bf a}_{\beta}$. Applying (49) with
$u=\dot{G}(P,\ \cdot\ ;\lambda)$, we get (46). Formula (45) can be proved in
the same manner.
The relation $d\omega(P,Q;\lambda)=0$ immediately follows from the equality
$G_{z\bar{z}}(z,\bar{z},P;\lambda)=\frac{\lambda}{4}G(z,\bar{z},P;\lambda)$.
Let us prove (47). From now on we assume for simplicity that $k_{m}=1$, where
$k_{m}$ is the multiplicity of the zero $P_{m}$ of the holomorphic
differential $w$.
Applying Green’s formula (49) to the domain $\hat{{\cal
X}}\setminus\\{|z-z_{m}|<\epsilon\\}$ and $u=\dot{G}=\frac{\partial
G}{\partial z_{m}}$, one gets
$\dot{G}(P,Q;\lambda)=$ $2i\lim_{\epsilon\to
0}\oint_{|z-z_{m}|=\epsilon}\dot{G}_{\bar{z}}(z,\bar{z},Q;\lambda)G(z,\bar{z},P;\lambda)d\bar{z}+\dot{G}(z,\bar{z},Q;\lambda)G_{z}(z,\bar{z},P;\lambda)dz\,.$
(50)
Observe that the function $x_{m}\mapsto G(x_{m},\bar{x}_{m},P;\lambda)$
(defined in a small neighborhood of the point $x_{m}=0$) is a bounded solution
to the elliptic equation
$\frac{\partial^{2}G(x_{m},\bar{x}_{m},P;\lambda)}{\partial
x_{m}\partial\bar{x}_{m}}-\lambda|x_{m}|^{2}G(x_{m},\bar{x}_{m},P;\lambda)=0$
with real analytic coefficients and, therefore, is real analytic near
$x_{m}=0$.
From now on we write $x$ instead of $x_{m}=\sqrt{z-z_{m}}$. Differentiating
the expansion
$G(x,\bar{x},P;\lambda)=a_{0}(P,\lambda)+a_{1}(P,\lambda)x+a_{2}(P,\lambda)\bar{x}+a_{3}(P,\lambda)x\bar{x}+\dots$
(51)
with respect to $z_{m}$, $z$ and $\bar{z}$, one gets the asymptotics
$\dot{G}(z,\bar{z},Q;\lambda)=-\frac{a_{1}(Q,\lambda)}{2x}+O(1),$ (52)
$\dot{G}_{\bar{z}}(z,\bar{z},Q;\lambda)=\frac{\dot{a}_{2}(Q,\lambda)}{2\bar{x}}-\frac{a_{3}(Q,\lambda)}{4x\bar{x}}+O(1),$
(53) $G_{z}(z,\bar{z},P;\lambda)=\frac{a_{1}(P,\lambda)}{2x}+O(1),$ (54)
Substituting (52), (53) and (54) into (50), we get the relation
$\dot{G}(P,Q,\lambda)=2\pi a_{1}(P,\lambda)a_{1}(Q,\lambda).$
On the other hand, calculation of the right hand side of formula (47) via (54)
leads to the same result. $\square$
Now we give a variation formula for an eigenvalue of the Laplacian on a flat
surface with trivial holonomy.
###### Proposition 5
Let $\lambda$ be an eigenvalue of $\Delta$ (for simplicity we assume it to
have multiplicity one) and let $\phi$ be the corresponding normalized
eigenfunction. Then
$\frac{\partial\lambda}{\partial\zeta_{k}}=2i\int_{s_{k}}\left(\frac{(\partial\phi)^{2}}{w}+\frac{1}{4}\lambda\phi^{2}\bar{w}\right)\,,$
(55)
where $k=1,\dots,2g+M-1$.
Proof. For brevity we give the proof only for the case $k=g+1,\dots,2g$. One
has
$\iint_{\hat{{\cal X}}}\phi\dot{\phi}=\frac{1}{\lambda}\iint_{\hat{{\cal
X}}}\Delta\phi\,\dot{\phi}=\frac{1}{\lambda}\left\\{2i\int_{\partial\hat{{\cal
X}}}(\phi_{\bar{z}}\dot{\phi}d\bar{z}+\phi\dot{\phi}_{z}\,dz)+\iint_{\hat{{\cal
X}}}\phi(\lambda\phi)^{\cdot}\right\\}=$
$\frac{1}{\lambda}\left\\{2i\int_{{\bf
a}_{\beta}}(\phi_{\bar{z}}\phi_{z}\,d\bar{z}+\phi\phi_{zz}\,dz)+\dot{\lambda}+\lambda\iint_{\hat{{\cal
X}}}\phi\dot{\phi}\right\\}\,.$
This implies (55) after integration by parts (one has to make use of the
relation
$d(\phi\phi_{z})=\phi_{z}^{2}dz+\phi\phi_{zz}dz+\phi_{\bar{z}}\phi_{z}d\bar{z}+\frac{1}{4}\lambda\phi^{2}d\bar{z}$).
$\square$
Variation of the determinant of the Laplacian. For simplicity we consider only
flat surfaces with trivial holonomy having $2g-2$ conical points with conical
angles $4\pi$. The proof of the following proposition can be found in [KK09].
###### Proposition 6
Let $({\cal X},w)\in{\cal H}_{g}(1,\dots,1)$. Introduce the notation
${\mathbb{Q}}({\cal
X},|w|^{2}):=\Big{\\{}\frac{{\rm{det}}\,\Delta^{|w|^{2}}}{{\rm Area}({\cal
X},|w|^{2})\,{\rm det}\Im{\bf B}}\Big{\\}}\;$ (56)
where ${\bf B}$ is the matrix of ${\bf b}$-periods of the surface ${\cal X}$
and ${\rm Area}({\cal X},|w|^{2})$ denotes the area of ${\cal X}$ in the
metric $|w|^{2}$.
The following variational formulas hold
$\frac{\partial\log{\mathbb{Q}}({\cal
X},|w|^{2})}{\partial\zeta_{k}}=-\frac{1}{12\pi
i}\oint_{s_{k}}\frac{S_{B}-S_{w}}{w}\;,$ (57)
where $k=1,\dots,4g-3$; $S_{B}$ is the Bergman projective connection, $S_{w}$
is the projective connection given by the Schwarzian derivative
$\Big{\\{}\int^{P}w,x(P)\Big{\\}}$; $S_{B}-S_{w}$ is a meromorphic quadratic
differential with poles of the second order at the zeroes $P_{m}$ of $w$.
#### 6.1.2 An explicit formula for the determinant of the Laplacian on a flat
surface with trivial holonomy
We start by recalling the properties of the prime form $E(P,Q)$ (see [F73, F92]); some of these properties were already used in our proof of the Troyanov theorem above.
* •
The prime form $E(P,Q)$ is an antisymmetric $-1/2$-differential with respect
to both $P$ and $Q$,
* •
When $Q$ is traced along the cycle ${\bf a}_{\alpha}$ the prime form remains invariant; when it is traced along ${\bf b}_{\alpha}$ it gains the factor
$\exp\left(-\pi i{\bf B}_{\alpha\alpha}-2\pi
i\int_{P}^{Q}v_{\alpha}\right)\;.$ (58)
* •
On the diagonal $Q\to P$ the prime form has a first-order zero with the following asymptotics:
$E(x(P),x(Q))\sqrt{dx(P)}\sqrt{dx(Q)}=$
$(x(Q)-x(P))\left(1-\frac{1}{12}S_{B}(x(P))(x(Q)-x(P))^{2}+O\left((x(Q)-x(P))^{3}\right)\right),$
(59)
where $S_{B}$ is the Bergman projective connection and $x(P)$ is an arbitrary
local parameter.
The next object we shall need is the vector of Riemann constants:
$K^{P}_{\alpha}=\frac{1}{2}+\frac{1}{2}{\bf
B}_{\alpha\alpha}-\sum_{\beta=1,\beta\neq\alpha}^{g}\oint_{{\bf
a}_{\beta}}\left(v_{\beta}\int_{P}^{x}v_{\alpha}\right)$ (60)
where the interior integral is taken along a path which does not intersect
$\partial\widehat{\cal X}$.
In what follows the pivotal role is played by the following holomorphic
multivalued $g(1-g)/2$-differential on ${\cal X}$
${\cal C}(P)=\frac{1}{{\cal
W}[v_{1},\dots,v_{g}](P)}\sum_{\alpha_{1},\dots,\alpha_{g}=1}^{g}\frac{\partial^{g}\Theta(K^{P})}{\partial
z_{\alpha_{1}}\dots\partial z_{\alpha_{g}}}v_{\alpha_{1}}\dots
v_{\alpha_{g}}(P)\;,$ (61)
where $\Theta$ is the theta-function of the Riemann surface ${\cal X}$,
${\cal W}(P):={\rm\det}_{1\leq\alpha,\beta\leq
g}||v_{\beta}^{(\alpha-1)}(P)||$ (62)
is the Wronskian determinant of holomorphic differentials at the point $P$.
This differential has multipliers $1$ and $\exp\\{-\pi i(g-1)^{2}{\bf
B}_{\alpha\alpha}-2\pi i(g-1)K_{\alpha}^{P}\\}$ along basic cycles ${\bf
a}_{\alpha}$ and ${\bf b}_{\alpha}$, respectively.
In what follows we shall often treat tensor objects like $E(P,Q)$, ${\cal
C}(P)$, etc. as scalar functions of one of the arguments (or both). This makes
sense after fixing the local system of coordinates, which is usually taken to
be $z(Q)=\int^{Q}w$. In particular, the expression “the value of the tensor
$T$ at the point $Q$ in local parameter $z(Q)$” denotes the value of the
scalar $Tw^{-\alpha}$ at the point $Q$, where $\alpha$ is the tensor weight of
$T(Q)$.
The following proposition was proved in [KK09].
###### Proposition 7
Consider the highest stratum ${\cal H}_{g}(1,\dots,1)$ of the space ${\cal
H}_{g}$ containing Abelian differentials $w$ with simple zeros.
Let us choose the fundamental polygon $\hat{{\cal X}}$ such that ${\cal
A}_{P}((w))+2K^{P}=0$, where ${\cal A}_{P}$ is the Abel map with the initial
point $P$. Consider the following expression
$\tau({\cal X},w)={{\cal
F}}^{2/3}\prod_{m,l=1\;\;m<l}^{2g-2}[E(Q_{m},Q_{l})]^{{1}/{6}}\,,$ (63)
where the quantity
${\cal F}:=[w(P)]^{\frac{g-1}{2}}{\cal
C}(P)\prod_{m=1}^{2g-2}[E(P,Q_{m})]^{\frac{(1-g)}{2}}$ (64)
does not depend on $P$; all prime-forms are evaluated at the zeroes $Q_{m}$ of
the differential $w$ in the distinguished local parameters
$x_{m}(P)=\left(\int_{Q_{m}}^{P}w\right)^{1/2}$. Then
$\frac{\partial\log\tau}{\partial\zeta_{k}}=-\frac{1}{12\pi
i}\oint_{s_{k}}\frac{S_{B}-S_{w}}{w}\;,$ (65)
where $k=1,\dots,4g-3$.
The following Theorem immediately follows from Propositions 6 and 7. It can be
considered as a natural generalization of the Ray-Singer formula (35) to the
higher genus case.
###### Theorem 3
Let a pair $({\cal X},w)$ be a point of the space ${\cal H}_{g}(1,\dots,1)$.
Then the determinant of the Laplacian $\Delta^{|w|^{2}}$ is given by the
following expression
${{\rm det}}\,\Delta^{|w|^{2}}=C\;{\rm Area}({\cal X},|w|^{2})\;{{\rm
det}}\Im{\bf B}\;|\tau({\cal X},w)|^{2},$ (66)
where the constant $C$ is independent of a point of ${\cal H}_{g}(1,\dots,1)$.
Here $\tau({\cal X},w)$ is given by (63).
### 6.2 Determinant of the Laplacian on an arbitrary polyhedral surface of
genus $g>1$
Let $b_{1},\dots,b_{N}$ be real numbers such that $b_{k}>-1$ and
$b_{1}+\dots+b_{N}=2g-2$. Denote by ${\cal M}_{g}(b_{1},\dots,b_{N})$ the
moduli space of pairs $({\cal X},{\bf m})$, where ${\cal X}$ is a compact
Riemann surface of genus $g>1$ and ${\bf m}$ is a flat conformal conical
metric on ${\cal X}$ having $N$ conical points with conical angles
$2\pi(b_{1}+1),\dots,2\pi(b_{N}+1)$. The space ${\cal
M}_{g}(b_{1},\dots,b_{N})$ is a (real) orbifold of (real) dimension $6g+2N-5$.
Let $w$ be a holomorphic differential with $2g-2$ simple zeroes on ${\cal X}$.
Assume also that the set of conical points of the metric ${\bf m}$ and the set
of zeros of the differential $w$ do not intersect.
Let $P_{1},\dots,P_{N}$ be the conical points of ${\bf m}$ and let
$Q_{1},\dots,Q_{2g-2}$ be the zeroes of $w$. Let $x_{k}$ be a distinguished
local parameter for ${{\bf m}}$ near $P_{k}$ and $y_{l}$ be a distinguished
local parameter for $w$ near $Q_{l}$. Introduce the functions $f_{k}$, $g_{l}$
and the complex numbers ${\bf f_{k}}$, ${\bf g_{l}}$ by
$|w|^{2}=|f_{k}(x_{k})|^{2}|dx_{k}|^{2}\ \ \mbox{near}\ \ P_{k};\ \ \ \ \ {\bf
f_{k}}:=f_{k}(0),$ ${\bf m}=|g_{l}(y_{l})|^{2}|dy_{l}|^{2}\ \ \mbox{near}\ \
Q_{l};\ \ \ \ \ {\bf g_{l}}:=g_{l}(0).$
Then from (25), (66) and the lemma on three polyhedra from §4.4 one obtains the relation
${\rm det}\Delta^{{\bf m}}=C{\rm Area}\,({\cal X},{{\bf m}}){{\rm det}}\Im{\bf
B}\;|\tau({\cal X},w)|^{2}\frac{\prod_{l=1}^{2g-2}|{\bf
g_{l}}|^{1/6}}{\prod_{k=1}^{N}|{\bf f_{k}}|^{b_{k}/6}},$ (67)
where the constant $C$ depends only on $b_{1},\dots,b_{N}$ (and on neither the differential $w$ nor the point $({\cal X},{\bf m})\in{\cal M}_{g}(b_{1},\dots,b_{N})$), and $\tau({\cal X},w)$ is given by (63).
Acknowledgements. The author is grateful to D. Korotkin for numerous suggestions; in particular, his criticism of an earlier version of this paper [K07] led to the appearance of the lemma from §4.4 and to a considerable improvement of our main result (67). The author also thanks A. Zorich for very useful discussions. This paper was written during the author's stay at the Max-Planck-Institut für Mathematik in Bonn; the author thanks the Institute for excellent working conditions and hospitality.
## References
* [AS94] Aurell, E., Salomonson, P., Further results on Functional Determinants of Laplacians in Simplicial Complexes, hep-th/9405140
* [B07] Bobenko A., Lectures on Riemann surfaces, to appear in LNM
* [BFK92] Burghelea, D., Friedlander, L., and Kappeler, T., Meyer-Vietoris type formula for determinants of elliptic differential operators, J. of Funct. Anal., 107 34-65 (1992)
* [C10] Carslaw, H. S., The Green’s function for a wedge of any angle, and other problems in the conduction of heat, Proc. London Math. Soc., vol. 8 (1910), 365-374
* [C83] Cheeger, J., Spectral Geometry of singular Riemannian spaces, J. Diff. Geometry, 18 (1983), 575-657
* [DP89] D’Hoker E., Phong, D.H., Functional determinants on Mandelstam diagrams, Comm. Math. Phys. 124 629–645 (1989)
* [F73] Fay, John D., Theta-functions on Riemann surfaces, Lect.Notes in Math., 352 Springer (1973)
* [F92] Fay, John D., Kernel functions, analytic torsion, and moduli spaces, Memoirs of the AMS 464 (1992)
* [F94] Fursaev D. V., The heat-kernel expansion on a cone and quantum fields near cosmic strings, Class. Quantum Grav., 11 (1994) 1431-1443
* [GH78] Griffiths P., Harris J., Principles of Algebraic Geometry; John Wiley and Son, 1978
* [K93] Hala Khuri King, Determinants of Laplacians on the space of conical metrics on the sphere, Transactions of AMS, 339, 525-536 (1993)
* [KK07] Klochko Yu, Kokotov A., Genus one polyhedral surfaces, spaces of quadratic differentials on tori and determinants of Laplacians, Manuscripta Mathematica, 122, 195-216 (2007)
* [KK04] Kokotov A., Korotkin D., Tau-functions on the spaces of Abelian and quadratic differentials and determinants of Laplacians in Strebel metrics of finite volume, preprint of Max-Planck Institute for Mathematics in the Science, Leipzig, 46/2004; math.SP/0405042
* [K07] Kokotov A., Preprint of Max-Planck-Institut für Mathematik in Bonn, 2007(127)
* [KK09] A.Kokotov, D.Korotkin, “Tau-functions on spaces of Abelian differentials and higher genus generalization of Ray-Singer formula”, Journal of Differential Geometry, 82(2009), 35–100
* [K67] Kondratjev, V., Boundary value problems for elliptic equations in domains with conical and angle points, Proc. Moscow Math. Soc., 16(1967), 219-292
* [KZ03] Kontsevich, M., Zorich, A., Connected components of the moduli spaces of holomorphic differentials with prescribed singularities, Invent. Math. 153 631-678 (2003)
* [KZ97] Kontsevitch, M., Zorich A., Lyapunov exponents and Hodge theory, hep-th/9701164
* [LMP07] Loya P., McDonald P., Park J., Zeta regularized determinants for conic manifolds, Journal of Functional Analysis (2007), 242, N1, 195–229
* [MS67] McKean, H. P., Singer, I. M., Curvature and the eigenvalues of the laplacian, J. Diff. Geometry, 1(1967), 43-69
* [M99] Mooers, E., Heat kernel asymptotics on manifolds with conic singularities, Journal D’Analyse Mathématique, 78(1999), 1-36
* [NP92] Nazarov S., Plamenevskii B., Elliptic boundary value problems in domains with piece-wise smooth boundary, 1992, Moscow, ”Nauka”
* [S88] Osgood,B., Phillips,R., Sarnak,P., Extremals of determinants of laplacian, Journal of Functional Analysis, Vol. 80, N1, 148-211 (1988)
* [R73] Ray D. B., Singer I. M., Analytic torsion for complex manifolds. Ann. of Math., Vol 98 (1973), N1, 154-177
* [T97] Taylor M., Partial Differential Equations, vol 2., Springer (Appl. Math. Sc., Vol. 116)
* [T86] Troyanov M., Les surfaces euclidiennes à singularités coniques, L’Enseignement Mathématique, 32 (1986), 79-94
* [Z06] Zorich A., Flat Surfaces, in collection ”Frontiers in Number theory, Physics and Geometry. Vol. 1: On random matrices, zeta functions, and dynamical systems”, P. Cartier, B. Julia, P. Moussa, P. Vanhove (Editors), Springer-Verlag, Berlin, 2006, 439–586
* [Z71] Zverovich, E.I., Boundary value problems in the theory of analytic functions in Hölder classes on Riemann surfaces, Russ. Math. Surveys 26 117-192 (1971)
|
arxiv-papers
| 2009-06-03T14:40:18 |
2024-09-04T02:49:03.136185
|
{
"license": "Public Domain",
"authors": "Alexey Kokotov",
"submitter": "Alexey Kokotov Yu",
"url": "https://arxiv.org/abs/0906.0717"
}
|
0906.0806
|
# Theory of Optically-Driven Sideband Cooling for Atomic Collective
Excitations and Its Generalization
Yong Li Department of Physics and Center of Theoretical and Computational
Physics, The University of Hong Kong, Pokfulam Road, Hong Kong, China Z. D.
Wang Department of Physics and Center of Theoretical and Computational
Physics, The University of Hong Kong, Pokfulam Road, Hong Kong, China C. P.
Sun Institute of Theoretical Physics, The Chinese Academy of Sciences,
Beijing, 100190, China
###### Abstract
We explore how to cool atomic collective excitations in an optically-driven
three-level atomic ensemble, which may be described by a model of coupled two
harmonic oscillators (HOs) with a time-dependent coupling. Moreover, the
coupled two-HO model is further generalized to address other cooling issues,
where the lower-frequency HO can be cooled whenever the cooling process
dominates over the heating one during the sideband transitions. Unusually, due
to the absence of the heating process, the optimal cooling of our first
cooling protocol for collective excitations in an atomic ensemble could break
a usual sideband cooling limit for general coupled two-HO models.
###### pacs:
03.65.-w, 37.10.De, 43.58.Wc
_Introduction.-_ Recently, quantum information processing based on collective excitations in atomic ensembles has attracted increasing attention. Photons are good carriers of quantum information owing to their high speed and low loss, but they are difficult to store. Atomic ensembles, with their long coherence times, are therefore natural candidates for quantum memory units for photons. Interestingly, the form-stable dark-state polariton (DSP)
Fleischhauer2000 associated with the propagation of quantum optical fields
via electromagnetically induced transparency (EIT) Harris-EIT , was proposed
in a three-level $\Lambda$-type atomic ensemble. In the low excitations limit,
DSP can be described as a hybrid bosonic mode Sun2003 . By controlling the
mixing angle between the light and matter components of the DSP, the optical pulse can be decelerated and “trapped” by mapping its shape and quantum state onto a metastable collective-excitation state of matter. This means that quantum information storage Fleischhauer2000 ; Sun2003 ; Fleischhauer and Lukin can be achieved in atomic ensembles by adiabatically controlling the coupling.
As is known, collective excitations can also be used for quantum communication schemes based on atomic ensembles and linear optics. Since the work of Duan-Lukin-Cirac-Zoller DLCZ , a number of protocols repeater ; storage ; Liu ; ZhaoBo2007 have been proposed to implement robust long-distance quantum communication, quantum repeaters, and quantum information storage based on atomic ensembles over long lossy photonic channels.
In a realistic atomic ensemble, a given collective-excitation mode may have a
finite thermal population due to the interaction with the thermal bath at
finite temperature. This means that it is necessary to cool the thermal excitations before quantum information processing based on atomic collective excitations. In this Letter, we consider a driven three-level atomic ensemble that can be modeled by two coupled harmonic oscillators (HOs), and then show how to cool the low-frequency collective-excitation mode to near its ground state in this kind of system.
On the other hand, various nano- (or submicron-) mechanical resonators have
been investigated nano extensively in recent years. To reveal the quantum
effect in the nano-mechanical devices, various cooling schemes MR-cooling ;
radiation-pressure cooling experiment ; Wilson-Rae:2007 ; Marquardt:2007 ;
Genes:2008 ; Rabl2009 were proposed to drive them to reach the standard
quantum limit (SQL) SQL . A famous one among them is the optical radiation-
pressure cooling scheme radiation-pressure cooling experiment attributed to
the (resolved) sideband cooling Wilson-Rae:2007 ; Marquardt:2007 ; Genes:2008
; Rabl2009 , which was previously well-developed to cool the spatial motion of
the trapped ions trap-ion-cooling or the neutral atoms atom-cooling .
Notably, our cooling scheme for atomic ensembles is based on the sideband structure induced by the lower-frequency mode, which is coupled to the higher-frequency mode through a time-dependent interaction so that it loses its energy. Moreover, we generalize the above coupled two-HO model to two other types of cooling models beyond the optical radiation-pressure cooling of mechanical resonators. In the
generalized model, the lower-frequency HO can be cooled subject to the usual sideband-cooling limit; this cooling mechanism can also be employed to understand the cooling of collective excitations in atomic ensembles. It is remarkable that our protocol for the atomic ensemble breaks the usual sideband-cooling limit owing to the absence of counter-rotating terms in such a coupled two-HO model.
_Three-Level atomic ensemble modeled by two coupled oscillators.-_ Let us
consider an ensemble of $N$ identical three-level atoms as seen in Fig. 1(a).
A strong classical driving light field is homogenously coupled to each atomic
transition from the metastable state $\left|b_{0}\right\rangle$ to the excited
one $\left|a_{0}\right\rangle$. Then the Hamiltonian reads ($\hbar=1$
hereafter)
$H=\omega_{a}\sum_{i=1}^{N}\sigma_{a_{0}a_{0}}^{(i)}+\omega_{b}\sum_{i=1}^{N}\sigma_{b_{0}b_{0}}^{(i)}+(\Omega
e^{i\omega_{d}t}\sum_{i=1}^{N}\sigma_{b_{0}a_{0}}^{(i)}+h.c.),$ (1)
where $\omega_{g,a,b}$ are the corresponding energies of the atomic states
$\left|g_{0}\right\rangle$, $\left|a_{0}\right\rangle$ and
$\left|b_{0}\right\rangle$ respectively, and the ground state energy
$\omega_{g}=0$. $\Omega$ is the coupling strength of the driving light field
(with the carrier frequency $\omega_{d}$), which can be assumed to be real.
Figure 1: (Color online) (a) Three-level atomic ensemble with most atoms
staying in the ground states $\left|g_{0}\right\rangle$. The strong driving
light couples to the transition from the meta-stable state
$\left|b_{0}\right\rangle$ to the excited one $\left|a_{0}\right\rangle$ for
each atom. The electric-dipole transition
$\left|g_{0}\right\rangle$$\rightarrow$$\left|a_{0}\right\rangle$ is
permitted, but
$\left|g_{0}\right\rangle$$\rightarrow$$\left|b_{0}\right\rangle$ is
forbidden. The wavy lines denote the decay processes, with $\gamma_{a,b}$ the
corresponding decay rates. (b) The cooling process
($\left|n\right\rangle_{b}\rightarrow\left|n-1\right\rangle_{b}$) and (c) the
heating process
($\left|n\right\rangle_{b}\rightarrow\left|n+1\right\rangle_{b}$) for mode $b$
starting from $\left|m\right\rangle_{a}\left|n\right\rangle_{b}$ in the sideband structure formed by splitting the $a$-mode with the low-frequency $b$-mode.
$\Delta_{c}$ ($\equiv\omega_{a}-\omega_{b}-\omega_{d})$ and $\Delta_{h}$
($\equiv\omega_{a}+\omega_{b}-\omega_{d})$ are the detunings for the anti-
Stokes (cooling) and Stokes (heating) transitions, respectively.
Normally, a weak quantized probe light would couple to the transition
$\left|g_{0}\right\rangle$$\rightarrow$$\left|a_{0}\right\rangle$. Thus, a so-
called $\Lambda$-type three-level atomic ensemble configuration can be
constructed associated with the well-known EIT and group-velocity slowdown
phenomena. In such an ensemble, the DSP can also be obtained as the
superposition of the optical mode and the atomic collective-excitation mode
Fleischhauer2000 ; Sun2003 ; Fleischhauer and Lukin . Based on the notations
of EIT and DSP, the atomic ensemble can be a unit of quantum memory and be
used to store the quantum information of, e.g., the photons. Here, we focus
only on the cooling of the atomic collective excitations in the absence of the
probe light field, also noting that extensive studies have been made in the
framework of optically-pumping an individual atom into its internal lowest-
energy ground state pump .
We now introduce the bosonic operators
$\hat{a}=\sum_{i}\hat{\sigma}_{g_{0}a_{0}}^{(i)}/\sqrt{N}$ and
$\hat{b}=\sum_{i}\hat{\sigma}_{g_{0}b_{0}}^{(i)}/\sqrt{N}$ for atomic
collective excitations Sun2003 ; sun-excitation , which satisfy
$[\hat{a},\hat{a}^{\dagger}]=1$, $[\hat{b},\hat{b}^{\dagger}]=1$ and
$[\hat{a},\hat{b}^{\dagger}]=0=[\hat{a},\hat{b}]$ in the limit of
$N\rightarrow\infty$ with low excitations. Then, Hamiltonian (1) is modeled by
the coupled two-HO model, and can be further rewritten in a time-independent
form in the rotating framework as
$H_{I}=\Delta\hat{a}^{\dagger}\hat{a}+\omega_{b}\hat{b}^{\dagger}\hat{b}+\Omega(\hat{a}^{\dagger}\hat{b}+h.c.)$
(2)
with the detuning $\Delta\equiv\omega_{a}-\omega_{d}$. In the derivation of
the above Hamiltonian, we have used the rotating wave approximation (RWA) when
$\\{\left|\omega_{ab}-\omega_{d}\right|,\left|\Omega\right|\\}\ll(\omega_{ab}+\omega_{d})$
(where $\omega_{ab}\equiv\omega_{a}-\omega_{b}$), which is always fulfilled
for most realistic atoms.
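To make the dynamics generated by Hamiltonian (2) concrete, the following sketch (an illustration added here, not part of the original Letter) integrates the corresponding Lindblad master equation for two damped bosonic modes; the use of the QuTiP package, the truncation dimensions, the rates, and the artificially small thermal occupation $\bar{n}_{b}$ are all assumptions made only for this example.

```python
# Illustrative sketch (not from the paper): Lindblad dynamics of Hamiltonian (2)
# for two damped bosonic modes, using QuTiP (assumed available).  Parameters and
# truncations are illustrative; nbar_b is kept artificially small so that a
# modest Hilbert-space cutoff suffices.
import numpy as np
from qutip import destroy, qeye, tensor, thermal_dm, mesolve

Na, Nb = 5, 15                      # Fock-space cutoffs for modes a and b
a = tensor(destroy(Na), qeye(Nb))
b = tensor(qeye(Na), destroy(Nb))

omega_b, Omega = 1.0, 0.5           # in units of gamma_a
Delta = omega_b                     # resonant driving, Delta = omega_b
gamma_a, gamma_b = 1.0, 1e-3
nbar_a, nbar_b = 0.0, 2.0           # illustrative thermal occupations

H = Delta * a.dag() * a + omega_b * b.dag() * b + Omega * (a.dag() * b + b.dag() * a)
c_ops = [np.sqrt(gamma_a * (nbar_a + 1)) * a,
         np.sqrt(gamma_b * (nbar_b + 1)) * b,
         np.sqrt(gamma_b * nbar_b) * b.dag()]
# (the a.dag() heating term is dropped since nbar_a = 0 here)

rho0 = tensor(thermal_dm(Na, nbar_a), thermal_dm(Nb, nbar_b))
tlist = np.linspace(0, 50, 200)
result = mesolve(H, rho0, tlist, c_ops, e_ops=[b.dag() * b])
print("final <n_b> =", result.expect[0][-1])
```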
_Sideband cooling for atomic collective excitations.-_ Generally, the atomic
collective-excitation modes have non-vanishing mean thermal populations due to
their couplings to the bath at finite temperature. In experiments, the
frequency of the higher-frequency atomic collective-excitation, i.e., mode
$a$, is of the order of $2\pi\times 10^{14}$ Hz, which implies that its mean
thermal excitation number can be considered as zero even at room temperature.
Usually, the atomic ground state $\left|g_{0}\right\rangle$ and meta-stable
one $\left|b_{0}\right\rangle$ are selected as two atomic hyperfine levels with a frequency difference of the order of $2\pi\times 10^{9}$ Hz.
Although there is no optical dipole transition between
$\left|b_{0}\right\rangle$ and $\left|g_{0}\right\rangle$ because of the
electric dipole transition rule, the decay from $\left|b_{0}\right\rangle$ to
$\left|g_{0}\right\rangle$ still exists, due to atomic collisions or other processes, with a very low decay rate. Such a low decay rate means that the lower-energy mode $b$ possesses a long coherence time, which is a distinct advantage of using atomic collective excitations as quantum memory units. However, in view of the high initial mean thermal
population $\bar{n}_{b}=[\exp(\omega_{b}/k_{B}T)-1]^{-1}\sim 10^{4}\gg 1$ at
room temperature $T\sim 300$ K (with $k_{B}$ the Boltzmann constant), it is
necessary to cool the atomic collective-excitation modes to their ground
states before quantum information processing based on atomic collective
excitations.
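As a quick numerical check of these orders of magnitude (an illustration added here, not part of the original Letter), the Bose-Einstein occupations at $T=300$ K for the representative frequencies quoted above can be computed directly; the script below is a minimal sketch.

```python
# Illustrative sketch (not from the paper): Bose-Einstein occupations at room
# temperature for the representative frequencies quoted in the text.
from math import pi, expm1

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
T = 300.0                # K

def nbar(omega, T):
    """Mean thermal occupation 1 / (exp(hbar*omega/(kB*T)) - 1)."""
    return 1.0 / expm1(hbar * omega / (kB * T))

omega_a = 2 * pi * 1e14
omega_b = 2 * pi * 1e9
print("nbar_a ~", nbar(omega_a, T))   # essentially zero
print("nbar_b ~", nbar(omega_b, T))   # of order 10^4
```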
In the presence of noise, we obtain the following Langevin equations from
Hamiltonian (2)
$\dot{\hat{C}}=-\Gamma_{C}\hat{C}+i\Omega\hat{C}^{\prime}+\hat{F}_{C}(t),$ (3)
where $C,C^{\prime}=a,b$ ($C\neq C^{\prime}$),
$\Gamma_{a}=\gamma_{a}/{2}+i\Delta$ and
$\Gamma_{b}$$=\gamma_{b}/{2}+i\omega_{b}$. The noise operators are described
by the correlations
$\langle\hat{F}_{C}^{\dagger}(t)\hat{F}_{C}(t^{\prime})\rangle=\gamma_{C}\bar{n}_{C}\delta(t-t^{\prime})$.
Here, $\gamma_{a,b}$ are the decay rates of collective-excitation modes $a$
and $b$, respectively (for simplicity, we adopt the same symbols as those of
the atomic levels $\left|a_{0}\right\rangle$ and $\left|b_{0}\right\rangle$),
and $\bar{n}_{a,b}=[\exp(\omega_{a,b}/k_{B}T)-1]^{-1}$ are the corresponding
initial thermal populations with $T$ the initial temperature of the thermal
bath. Although the above quantum Langevin equation has vanishing steady state
solutions $\langle\hat{a}\rangle=\langle\hat{b}\rangle=0$, the corresponding
quantum rate equations for the excitation numbers
$\hat{n}_{C}=\hat{C}^{\dagger}\hat{C}$ ($C=a,b$) read
$\displaystyle\frac{d}{dt}\left\langle\hat{n}_{C}\right\rangle$
$\displaystyle=\gamma_{C}(\bar{n}_{C}-\left\langle\hat{n}_{C}\right\rangle)-\left(i\Omega\langle\hat{\Sigma}\rangle+h.c.\right),$
(4) $\displaystyle\frac{d}{dt}\langle\hat{\Sigma}\rangle$
$\displaystyle=-\zeta\langle\hat{\Sigma}\rangle+ig(\left\langle\hat{n}_{b}\right\rangle-\left\langle\hat{n}_{a}\right\rangle),$
(5)
where $\hat{\Sigma}=\hat{a}^{\dagger}\hat{b}$ and
$\zeta=(\gamma_{a}+\gamma_{b})/2+i(\omega_{b}-\Delta)$. Here we have used the
non-vanishing noise-based relations Scully-book $\langle
F_{C}^{\dagger}(t)\hat{C}(t)\rangle=\gamma_{C}\bar{n}_{C}/2$.
The steady-state solutions of the quantum rate equations give the final mean population
$\bar{n}_{b}^{\mathrm{f}}=\langle(\hat{b}^{\dagger}-\langle\hat{b}^{\dagger}\rangle)(\hat{b}-\langle\hat{b}\rangle)\rangle_{\mathrm{ss}}\equiv\bar{n}_{b}-\xi(\bar{n}_{b}-\bar{n}_{a})$
with
$\xi=\frac{\Omega^{2}\gamma_{a}(\gamma_{a}+\gamma_{b})}{(\gamma_{a}+\gamma_{b})^{2}\left(\Omega^{2}+\frac{\gamma_{a}\gamma_{b}}{4}\right)+\gamma_{a}\gamma_{b}(\Delta-\omega_{b})^{2}}.$
Then, from the Bose-Einstein distribution, the effective temperature
$T_{\mathrm{eff}}$ of mode $b$ is expressed as
$T_{\mathrm{eff}}=\frac{\omega_{b}}{k_{B}\ln(1/\bar{n}_{b}^{\mathrm{f}}+1)}.$
(6)
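Since the Langevin system (3) is linear, the quoted closed-form result can be cross-checked numerically (an illustration added here, not part of the original Letter) by solving the steady-state second-moment (Sylvester/Lyapunov) equation for the normally ordered moments; the moment equation itself is derived here from (3) and the noise correlations above, and all parameter values below are arbitrary test values.

```python
# Illustrative sketch (not from the paper): steady-state occupation of mode b
# from the linear Langevin equations (3), obtained by solving the Sylvester
# equation conj(M) N + N M^T = -D for N_ij = <x_i^dag x_j>, and compared with
# the closed-form result nbar_b^f = nbar_b - xi * (nbar_b - nbar_a).
import numpy as np
from scipy.linalg import solve_sylvester

# arbitrary test parameters (in units of gamma_a)
gamma_a, gamma_b = 1.0, 1e-3
omega_b, Delta, Omega = 5.0, 4.0, 0.3
nbar_a, nbar_b = 0.0, 1.0e4

Gamma_a = gamma_a / 2 + 1j * Delta
Gamma_b = gamma_b / 2 + 1j * omega_b
M = np.array([[-Gamma_a, 1j * Omega],
              [1j * Omega, -Gamma_b]])
D = np.diag([gamma_a * nbar_a, gamma_b * nbar_b]).astype(complex)

N = solve_sylvester(np.conj(M), M.T, -D)   # steady-state normally ordered moments
nb_numeric = N[1, 1].real

xi = (Omega**2 * gamma_a * (gamma_a + gamma_b)) / (
    (gamma_a + gamma_b)**2 * (Omega**2 + gamma_a * gamma_b / 4)
    + gamma_a * gamma_b * (Delta - omega_b)**2)
nb_formula = nbar_b - xi * (nbar_b - nbar_a)

print("steady-state <n_b>: numeric =", nb_numeric, " closed form =", nb_formula)
```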
For the simple case of $\Delta=\omega_{b}$ (namely, the driving light is exactly resonant with the atomic transition $\left|b_{0}\right\rangle\rightarrow\left|a_{0}\right\rangle$: $\omega_{d}=\omega_{ab}$), efficient cooling is reached, with
$\bar{n}_{b}^{\mathrm{f}}=\frac{\gamma_{b}\bar{n}_{b}+\gamma_{a}\bar{n}_{a}}{\gamma_{a}+\gamma_{b}}\approx\frac{\gamma_{b}}{\gamma_{a}}\bar{n}_{b}+\bar{n}_{a}$
(7)
in the strong-driving limit $\Omega\gg\gamma_{a},\gamma_{b}$. For a
realistic atomic system, one has $\gamma_{a}\gg\gamma_{b}$ and
$\bar{n}_{b}\gg\bar{n}_{a}$ ($\omega_{a}\gg\omega_{b}$). Especially, when
$\gamma_{b}$ is sufficiently small such that
$\gamma_{b}\bar{n}_{b}\ll\gamma_{a}\bar{n}_{a}$ atom parameters , the final
mean population reaches its limit:
$\bar{n}_{b}^{\text{{f}}}\rightarrow\bar{n}_{b}^{\text{{lim}}}=\bar{n}_{a}$.
As mentioned above, the mean thermal population of mode $a$ is usually tiny,
which means that the atomic collective-excitation mode $b$ can be cooled close
to its ground state with the final thermal population
$\bar{n}_{b}^{\text{{f}}}\rightarrow\bar{n}_{a}\ll 1$.
A physical explanation of the above results invokes a sideband-cooling-like mechanism (see Fig. 1(b)). The Jaynes-Cummings (JC) term
($\hat{a}^{{\dagger}}\hat{b}$) causes the anti-Stokes transition from
$\left|m\right\rangle_{a}\left|n\right\rangle_{b}$ to
$\left|m+1\right\rangle_{a}\left|n-1\right\rangle_{b}$, which will decay fast
to the state $\left|m\right\rangle_{a}\left|n-1\right\rangle_{b}$. Thus, such
a process makes the lower-frequency oscillator $b$ lose one quantum and thus results in its cooling. When the anti-Stokes transition is resonantly coupled, namely, $\Delta=\omega_{b}$, or $\Delta_{c}\equiv\omega_{ab}-\omega_{d}=0$, the best cooling occurs, with the corresponding optimal final mean population $\bar{n}_{b}^{\mathrm{f}}$ given by the initial population $\bar{n}_{a}$ of the higher-frequency mode $a$. All in
all, in order to reach the optimal cooling of lower-energy collective-
excitation mode $b$, the following conditions should be satisfied: (i) strong
enough pumping light $\Omega\gg\gamma_{a},\gamma_{b}$; (ii) the resonantly
driving condition: $\Delta_{c}\equiv\omega_{ab}-\omega_{d}=0$; (iii)
$\gamma_{b}\ll\gamma_{a}$ and $\bar{n}_{a}\ll\bar{n}_{b}$ (that is,
$\omega_{b}\ll\omega_{a}$). It is notable that the above three conditions can
be met for experimentally accessible parameters of realistic atomic systems
atom parameters .
The above analysis shows clearly that a time-dependent coupling between two
largely detuned HOs can cool the lower-frequency one. This cooling model
differs from the existing mechanical cooling schemes based on optical
radiation pressure with external laser driving radiation-pressure cooling
experiment ; Wilson-Rae:2007 ; Marquardt:2007 ; Genes:2008 ; Rabl2009 .
Nevertheless, we show below that both cooling schemes can be unified within a
more general model.
_Generalized sideband cooling model of two coupled HOs.-_ A naive cooling
process occurs when a hotter object is placed in direct contact with a colder
one. Without external driving, two objects at the same initial temperature
obviously cannot change temperature through their direct interaction alone.
The situation changes dramatically, however, for two largely detuned coupled
HOs once we add a time-dependent driving or make the coupling between them
time-dependent. This kind of setup leads to a more general sideband cooling
framework.
Figure 2: (Color online) Model of two coupled HOs ($a$ and $b$), where $b$ is
the lower-frequency HO to be cooled. (a) The coupling between the HOs is
modulated in time ($\propto g\cos(\omega_{d}t)/2$); (b) the coupling between
the HOs is time-independent, but an external time-dependent driving
($\propto f_{0}\cos(\omega_{d}t)/2$) acts on the higher-frequency mode $a$.
Let us first consider two coupled HOs with large-detuned frequencies
($\omega_{a}\gg\omega_{b}$) as seen in Fig. 2(a). The free Hamiltonian reads
$\hat{H}_{0}=\omega_{a}\hat{a}^{{\dagger}}\hat{a}+\omega_{b}\hat{b}^{{\dagger}}\hat{b}$.
A time-dependent coupling is generally expressed as
$\hat{V}_{1}(t)=g\cos(\omega_{d}t)F_{1}(\hat{a}^{{\dagger}},\hat{a})(\hat{b}^{{\dagger}}+\hat{b})/2$,
where $\hat{a}^{{\dagger}}$ ($\hat{a}$) and $\hat{b}^{{\dagger}}$ ($\hat{b}$)
are the creation (annihilation) operators of the oscillators $a$ and $b$ with
$g$ the coupling coefficient between them and $\omega_{d}$ the modulating
frequency. Here, $F_{1}(\hat{a}^{{\dagger}},\hat{a})$ is a function of the
operators $\hat{a}^{{\dagger}}$ and $\hat{a}$. For simplicity, in what follows
we consider only the simplest case
$F_{1}(\hat{a}^{{\dagger}},\hat{a})=\hat{a}^{{\dagger}}+\hat{a}$, though a
more general function (e.g.,
$F_{1}(\hat{a}^{{\dagger}},\hat{a})=\sum_{n}c_{n}\hat{a}^{{\dagger}n}(\hat{a}^{{\dagger}}+\hat{a})\hat{a}^{n}$
with dimensionless coefficients $c_{n}$) would lead to a similar result. In
the time-varying reference frame defined by
$\hat{R}(t)=\exp(-i\omega_{d}\hat{a}^{{\dagger}}\hat{a}t)$, the effective
Hamiltonian of the coupled system reads
$\hat{H}_{\mathrm{eff}}=\Delta\hat{a}^{{\dagger}}\hat{a}+\omega_{b}\hat{b}^{{\dagger}}\hat{b}+g(\hat{a}^{{\dagger}}+\hat{a})(\hat{b}^{{\dagger}}+\hat{b}),$
(8)
where the rapidly oscillating terms have been neglected and the detuning
$\Delta=\omega_{a}-\omega_{d}$ can be negative when $\omega_{a}<\omega_{d}$.
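For readers who want to inspect Eq. (8) numerically, the following minimal sketch builds $\hat{H}_{\mathrm{eff}}$ in a truncated two-mode Fock basis with plain NumPy; the truncation dimensions and parameter values are assumptions for illustration only.
```python
# Sketch: Eq. (8), H_eff = Delta a†a + omega_b b†b + g (a† + a)(b† + b),
# represented on a truncated Fock space so its spectrum can be inspected.
import numpy as np

def destroy(n):
    """Annihilation operator on an n-dimensional truncated Fock space."""
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

def h_eff(Delta, omega_b, g, dim_a=10, dim_b=10):
    a = np.kron(destroy(dim_a), np.eye(dim_b))
    b = np.kron(np.eye(dim_a), destroy(dim_b))
    x_a, x_b = a + a.conj().T, b + b.conj().T
    return (Delta * a.conj().T @ a
            + omega_b * b.conj().T @ b
            + g * x_a @ x_b)

H = h_eff(Delta=1.0, omega_b=1.0, g=0.05)   # near-resonant sideband, Delta ~ omega_b
print(np.linalg.eigvalsh(H)[:3])            # lowest eigenvalues of the coupled system
```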
Next we consider another type of two-HO system (see Fig. 2(b)): a general
time-independent interaction
$\hat{V}_{2}=g^{\prime}F_{2}^{\prime}(\hat{a}^{\prime{\dagger}},\hat{a}^{\prime})(\hat{b}^{\prime{\dagger}}+\hat{b}^{\prime})$,
with $g^{\prime}$ the coupling strength and
$F_{2}^{\prime}(\hat{a}^{\prime{\dagger}},\hat{a}^{\prime})$ Hermitian,
together with a periodic driving field on the higher-frequency HO,
$\hat{H}_{d}(t)=f_{0}\cos(\omega_{d}t)(\hat{a}^{\prime{\dagger}}+\hat{a}^{\prime})/2$.
In the time-varying reference frame defined by
$\hat{R}^{\prime}(t)=\exp(-i\omega_{d}\hat{a}^{\prime{\dagger}}\hat{a}^{\prime}t)$,
the total Hamiltonian reads
$\hat{H}=\Delta_{0}\hat{a}^{\prime{\dagger}}\hat{a}^{\prime}+\omega_{b}\hat{b}^{\prime{\dagger}}\hat{b}^{\prime}+g^{\prime}F_{2}(\hat{a}^{\prime{\dagger}},\hat{a}^{\prime})(\hat{b}^{\prime{\dagger}}+\hat{b}^{\prime})+f_{0}(\hat{a}^{\prime{\dagger}}+\hat{a}^{\prime})$,
with $\Delta_{0}=\omega_{a}-\omega_{d}$, after the rapidly oscillating terms
are neglected; here $F_{2}(\hat{a}^{\prime{\dagger}},\hat{a}^{\prime})$ keeps
the time-independent terms of
$F_{2}^{\prime}(\hat{a}^{\prime{\dagger}}e^{i\omega_{d}t},\hat{a}^{\prime}e^{-i\omega_{d}t})$.
Around a quasi-classical state $\left|Q\right\rangle$ such that
$\left\langle Q\right|\hat{a}^{\prime}\left|Q\right\rangle=\alpha$ and
$\left\langle Q\right|\hat{b}^{\prime}\left|Q\right\rangle=\beta$, the quantum
dynamics is governed by an effective Hamiltonian
$\hat{H}_{\mathrm{eff}}=\hat{H}_{\mathrm{eff}}(\hat{a}^{{\dagger}},\hat{b}^{{\dagger}},\hat{a},\hat{b})$
written in terms of the displaced operators $\hat{a}=\hat{a}^{\prime}-\alpha$
and $\hat{b}=\hat{b}^{\prime}-\beta$ describing the quantum fluctuations.
When the displacements $\beta$ and $\alpha$ take the equilibrium values
$\beta=-F_{2}(\alpha,\alpha)/\omega_{b}$ and
$\alpha=-\left[f_{0}+2\beta\partial_{\alpha}F_{2}(\alpha,y)|_{y=\alpha}\right]/\Delta_{0}$,
the effective Hamiltonian $\hat{H}_{\mathrm{eff}}$ has the same form as
Eq. (8), with parameters
$\Delta=\Delta_{0}+2\beta\left[\partial^{2}F_{2}(x,y)/\partial_{x}\partial_{y}\right]|_{x,y=\alpha}$
and $g=g^{\prime}\partial_{\alpha}F_{2}(\alpha,y)|_{y=\alpha}$. Therefore,
both types of coupled two-HO model share the same mechanism for cooling the
lower-frequency HO mode.
We point out that the optical radiation-pressure cooling of a mechanical
resonator Wilson-Rae:2007 ; Marquardt:2007 ; Genes:2008 ; Rabl2009 is just a
special case of the second type, with
$F_{2}^{\prime}(\hat{a}^{\prime{\dagger}},\hat{a}^{\prime})$
$=g^{\prime}\hat{a}^{{\prime}{\dagger}}\hat{a}^{\prime}$. A similar
linearization of the effective Hamiltonian, as given in Eq. (8), has also been
employed in the optical radiation-pressure cooling of mechanical resonators
Wilson-Rae:2007 ; Rabl2009 . Here we present only the cooling limit (the
so-called sideband cooling limit) note of the general coupled two-HO model:
$\bar{n}_{b}^{\text{f}}\rightarrow\bar{n}_{b}^{\text{lim,sid}}=\bar{n}_{a}+\frac{\gamma_{a}^{2}}{4\omega_{b}^{2}}\approx\frac{\gamma_{a}^{2}}{4\omega_{b}^{2}}$
(9)
in the resolved-sideband case $\gamma_{a}^{2}\ll\omega_{b}^{2}$ when
$\Delta=\sqrt{\omega_{b}^{2}+\gamma_{a}^{2}}\approx\omega_{b}$. Here the usual
relation $\bar{n}_{a}\ll\gamma_{a}^{2}/4\omega_{b}^{2}$ has been used.
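A quick numerical comparison, again with assumed order-of-magnitude parameters, of the resolved-sideband limit of Eq. (9) with the atomic-ensemble limit $\bar{n}_{b}^{\text{lim}}=\bar{n}_{a}$ discussed earlier:
```python
# Compare the usual resolved-sideband limit (Eq. (9)) with n_a (assumed values).
import numpy as np
from scipy.constants import hbar, k as k_B

omega_b = 2 * np.pi * 1e9        # rad/s, lower-frequency mode
gamma_a = 2 * np.pi * 1e7        # rad/s, decay of the higher-frequency mode
omega_a = 2 * np.pi * 1e14       # rad/s, higher-frequency mode
T = 300.0                        # K, assumed environment temperature

n_a = 1.0 / np.expm1(hbar * omega_a / (k_B * T))
n_sideband = gamma_a**2 / (4 * omega_b**2)          # Eq. (9), resolved sideband
print(f"sideband limit ~ {n_sideband:.1e}, ensemble limit n_a ~ {n_a:.1e}")
```
With these numbers the ensemble limit $\bar{n}_{a}$ lies orders of magnitude below $\gamma_{a}^{2}/4\omega_{b}^{2}$, consistent with the relation used above.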
Although the above Hamiltonian (8) describes only a simple coupled two-HO
system, it captures the essence of almost all sideband cooling schemes. We
emphasize the necessity of the time-dependent modulated coupling or external
driving: even though $\omega_{a}\gg\omega_{b}$, the modulation brings the
effective detuning down to $\left|\Delta\right|\sim\omega_{b}$ (or
$\omega_{a}\pm\omega_{b}\sim\omega_{d}$), so an effective near-resonant
interaction remains. It is this effective resonance
$\left|\Delta\right|\sim\omega_{b}$ that produces the sideband transitions
that cool (or heat) the oscillator $b$ (see Fig. 1(b) and (c)): the JC
term ($\hat{a}^{{\dagger}}\hat{b}$), associated with the fast decay of mode
$a$, drives the cooling process of the lower-frequency oscillator $b$
($\left|n\right\rangle_{b}\rightarrow\left|n-1\right\rangle_{b}$); on the
contrary, the anti-JC term (that is, the counter-rotating term
$\hat{a}^{{\dagger}}\hat{b}^{{\dagger}}$) drives the heating process of mode
$b$ ($\left|n\right\rangle_{b}\rightarrow\left|n+1\right\rangle_{b}$). When
the cooling process dominates (e.g., when $\Delta_{c}\sim 0$), mode $b$ is
cooled optimally, subject to the usual sideband cooling limit
($\bar{n}_{b}^{\text{{lim,sid}}}\approx\gamma_{a}^{2}/4\omega_{b}^{2}$).
Comparing the cooling models described by Hamiltonians (2) and (8), it is
clear that the anti-JC terms ($\hat{a}^{{\dagger}}\hat{b}^{{\dagger}}+h.c.$)
are absent from the former, which describes the atomic ensemble. Because the
heating process induced by the anti-JC term is absent during resolved
sideband cooling, optimal cooling of the lower-frequency collective excitation
occurs at exact resonance ($\Delta_{c}=0$) of the (first) anti-Stokes
transition, and the corresponding cooling limit
($\bar{n}_{b}^{\text{{lim}}}=\bar{n}_{a}$) lies well below the usual sideband
cooling limit
($\bar{n}_{b}^{\text{{lim,sid}}}\approx\gamma_{a}^{2}/4\omega_{b}^{2}$).
_Conclusion.-_ We have established a theory for cooling atomic collective
excitations in an optically driven three-level atomic ensemble. Such a cooling
protocol breaks the usual sideband cooling limit and is promising for quantum
information processing based on atomic collective excitations. Moreover,
motivated by the optical radiation-pressure cooling of mechanical oscillators,
we have also proposed two generalized types of the coupled two-HO cooling
model: in the first, the coupling coefficient between the HOs is modulated in
time and there is no external driving; in the second, the coupling between the
HOs is time-independent but an additional time-dependent external driving acts
on the higher-frequency HO. In fact, the second type generalizes the optical
radiation-pressure cooling of mechanical resonators. For both types, the
lower-frequency HO can be cooled in the resolved-sideband regime, subject to
the usual sideband cooling limit.
This work was supported by the RGC of Hong Kong under Grant No. HKU7051/06P,
and partially supported by the NFRP of China under Grant Nos. 10874091 and
2006CB921205 and NSFC Grants through ITP, CAS.
## References
* (1) M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. 84, 5094 (2000); Phys. Rev. A 65, 022314 (2002).
* (2) K.-J. Boller, A. Imamoglu, and S. E. Harris, Phys. Rev. Lett. 66, 2593 (1991); S. E. Harris, Phys. Today 50, No. 7, 36 (1997); M. Xiao et al., Phys. Rev. Lett. 74, 666 (1995); H. Wu et al., Phys. Rev. Lett. 100, 173602 (2008).
* (3) C. P. Sun, Y. Li, and X. F. Liu, Phys. Rev. Lett. 91, 147903 (2003); Y. Li and C. P. Sun, Phys. Rev. A 69, 051802(R) (2004).
* (4) M. Fleischhauer, A. Imamoglu, and J. P. Marangos, Rev. Mod. Phys. 77, 633 (2005).
* (5) L.-M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, Nature 414, 413 (2001).
* (6) A. Kuzmich et al., Nature 423, 731 (2003); C. W. Chou et al., Science 316, 1316 (2007).
* (7) B. Julsgaard et al., Nature 432, 482 (2004); M. D. Eisaman et al., Nature 438, 837 (2005).
* (8) C. Liu et al., Nature 409, 490 (2001).
* (9) B. Zhao et al., Phys. Rev. Lett. 98, 240502 (2007).
* (10) A. N. Cleland and M. L. Roukes, Appl. Phys. Lett. 69, 2653 (1996); X. M. H. Huang et al., Nature 421, 496 (2003).
* (11) I. Wilson-Rae et al., Phys. Rev. Lett. 92, 075507 (2004); P. Zhang et al., Phys. Rev. Lett. 95, 097204 (2005); A. Naik et al., Nature 443, 193 (2006).
* (12) C. H. Metzger and K. Karrai, Nature (London) 432, 1002 (2004); S. Gigan et al., Nature (London) 444, 67 (2006); O. Arcizet et al., Nature (London) 444, 71 (2006).
* (13) I. Wilson-Rae et al., Phys. Rev. Lett. 99, 093901 (2007); T. J. Kippenberg and K. J. Vahala, Opt. Express 15, 17172 (2007).
* (14) F. Marquardt et al., Phys. Rev. Lett. 99, 093902 (2007); F. Marquardt et al., J. Mod. Opt. 55, 3329 (2008).
* (15) C. Genes et al., Phys. Rev. A 77, 033804 (2008); Y. Li et al., 78, 134301 (2008); M. Grajcar et al., Phys. Rev. B 78, 035406 (2008).
* (16) P. Rabl, C. Genes, K. Hammerer, and M. Aspelmeyer, arXiv: 0903.1637; S. Gröblacher, K. Hammerer, M. R. Vanner, M. Aspelmeyer, arXiv: 0903.5293.
* (17) M. D. LaHaye, O. Buu, B. Camarota, and K. C. Schwab, Science 304, 74 (2004).
* (18) F. Diedrich et al., Phys. Rev. Lett. 62, 403 (1989); C. Monroe et al., Phys. Rev. Lett. 75, 4011 (1995).
* (19) S. E. Hamann et al., Phys. Rev. Lett. 80, 4149 (1998).
* (20) S. Chu, Science 253, 861 (1991); C. S. Wood et al., Science 275, 1759 (1997).
* (21) G. R. Jin et al., Phys. Rev. B 68, 134301 (2003).
* (22) M. O. Scully and M. S. Zubairy, _Quantum Optics_ , Chap. 9 (Cambridge University Press, New York, 1997).
* (23) For typical alkali-(like)-metal $\Lambda$-type atoms, $\omega_{a}/2\pi\sim 10^{14}$ Hz, $\omega_{b}/2\pi\sim 10^{9}$Hz, $\gamma_{a}/2\pi\sim 10^{7}$Hz, $\gamma_{b}/2\pi\sim 10^{0-3}$Hz (See Refs. Harris-EIT ; Liu ; Mosk and references therein). The condition $\gamma_{b}\bar{n}_{b}\ll\gamma_{a}\bar{n}_{a}$ may be fulfilled for $\gamma_{b}/2\pi\sim 10^{0-1}$ Hz.
* (24) For detailed calculations, one may refer to those in Wilson-Rae:2007 ; Marquardt:2007 ; Genes:2008 .
* (25) A. Mosk et al., Appl. Phys. B 73, 791 (2001).
|
arxiv-papers
| 2009-06-04T01:09:02 |
2024-09-04T02:49:03.148073
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yong Li, Z. D. Wang, C. P. Sun",
"submitter": "Yong Li",
"url": "https://arxiv.org/abs/0906.0806"
}
|
0906.0813
|
# Structures and magnetic properties of ZnO nanoislands
Yu Yang1,2, Ping Zhang1 (corresponding author; e-mail: zhang_ping@iapcm.ac.cn)
1LCP, Institute of Applied Physics and Computational
Mathematics, P.O. Box 8009, Beijing 100088, People’s Republic of China
2Center for Advanced Study and Department of Physics, Tsinghua University,
Beijing 100084, People’s Republic of China
###### Abstract
Using first-principles calculations, we systematically study the atomic
structures and electronic properties for two dimensional triangular ZnO
nanoislands that are graphite-like with monolayer and bilayer thickness. We
find that the monolayer ZnO nanoisland with O terminated zigzag edges is
magnetic at its ground state, with the magnetism coming from the O edge
states. The other monolayer and bilayer ZnO nanoislands with different edge
structures are all nonmagnetic at their ground states. It is further revealed
that for different ZnO nanoislands, their magnetic properties are quite
dependent on their sizes, with larger nanoislands having larger magnetic
moments.
As a semiconducting metal oxide with a direct band gap, zinc oxide (ZnO) has
broad applications in optoelectronics, transducers, and spintronics Bagnall1997
; Bai2003 ; Wang2004 . Recently, ZnO nanostructures of different morphologies
have been fabricated and studied Kong2003 ; Gao2005 . In particular, graphite-
like hexagonal ZnO nanofilms have been successfully fabricated Tusche2007 and
extensively studied for possible use in electronic devices Freeman2006 ;
Goniakowski2007 ; Pala2007 . Moreover, in analogy with graphene
nanostructures, ZnO nanotubes and nanoribbons have also been investigated for
potential applications Wang2007 ; Shen2007 ; Mendez2007 ; Mendez2007NL .
Notably, monolayer ZnO nanoribbons with zigzag edges exhibit magnetic behavior
Mendez2007NL . Besides carbon nanotubes and graphene nanoribbons,
two-dimensional (2D) graphene nanoislands have also attracted considerable
attention because of their flexibility in tuning magnetic properties
Rossier2007 ; Wang2008 . Turning to ZnO nanostructures, one may therefore
wonder whether 2D ZnO nanoislands also show tunable magnetic properties,
especially since ZnO nanoisland samples are experimentally attainable by
depositing hexagonal ZnO nanofilms on the Ag(111) surface Tusche2007 .
Motivated by this question, in this paper we present systematic
investigations of the atomic structures and electronic properties of 2D
triangular ZnO nanoislands.
Moreover, so-called $d^{0}$-magnetism, i.e., magnetic behavior in
semiconducting materials (including ZnO) in the absence of transition-metal
doping, has recently attracted extensive interest. It was first observed in
HfO2 thin films, where defects or oxygen vacancies cause the magnetic
behavior Venkatesan2004 . Later, it was also predicted theoretically that
carbon doping in ZnO Pan07 and cation vacancies in GaN and BN Dev08 lead to
magnetic behavior. Adding to these previous studies, our first-principles
calculations reveal that the monolayer ZnO nanoisland with O-terminated zigzag
edges also shows magnetism in its ground state, even without any vacancies or
defects.
In the present work, the first-principles calculations are carried out using
the DMOL3 package. Density functional semicore pseudopotentials (DSPP) DSPP
are adopted to replace the core electrons of Zn and O with a simple potential
that includes some degree of relativistic correction. The spin-polarized
PW91 PW91 functional, based on the generalized gradient approximation (GGA),
is employed to account for electronic exchange and correlation. For the
valence electrons, a double-numerical basis set with polarization functions is
used to expand the single-particle wave functions in the Kohn-Sham equations.
For better accuracy, the octupole expansion scheme is adopted for resolving
the charge density and Coulomb potential, and a fine grid is chosen for
numerical integration. The charge density is converged to $1\times 10^{-6}$
a.u. in the self-consistent calculation. In the geometry optimizations, the
energy, energy gradient, and atomic displacement are converged to
$1\times 10^{-5}$, $1\times 10^{-4}$, and $1\times 10^{-3}$ a.u.,
respectively. Atomic charges and magnetic moments are obtained from Mulliken
population analysis.
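For reference, the stated numerical settings can be collected in a small configuration dictionary; the key names and the idea of a driver script are hypothetical, and only the thresholds and the functional/basis choices come from the text.
```python
# Hypothetical settings record for a driver script; values as quoted above.
dmol3_settings = {
    "functional": "PW91 (spin-polarized GGA)",
    "pseudopotential": "DSPP (density functional semicore pseudopotential)",
    "basis": "double-numerical with polarization functions",
    "scf_density_convergence": 1e-6,        # a.u.
    "geom_energy_convergence": 1e-5,        # a.u.
    "geom_gradient_convergence": 1e-4,      # a.u.
    "geom_displacement_convergence": 1e-3,  # a.u.
    "population_analysis": "Mulliken",
}
```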
In the experiment by Tusche et al., the observed triangular 2D ZnO nanoislands
all have sizes of about 20 Å Tusche2007 . We therefore start by investigating
several kinds of ZnO nanoislands of this size, including monolayer and
bilayer ZnO nanoislands with both armchair and zigzag edges. Note that
monolayer ZnO nanoislands have two different kinds of zigzag edges, Zn- and
O-terminated. In total, five different ZnO nanoislands are studied here: the
monolayer armchair-edge ZnO nanoisland (AZnONI), the monolayer zigzag-edge
nanoislands with Zn-terminated (Zn-ZZnONI) and O-terminated (O-ZZnONI) edges,
and the bilayer armchair-edge (BAZnONI) and zigzag-edge (BZZnONI) ZnO
nanoislands. The optimized atomic structures of all these ZnO nanoislands are
shown in Fig. 1.
During geometry optimization, the different kinds of ZnO nanoislands show
different structural reconstructions. As shown in Figs. 1(a)-(c), after
geometry optimization the monolayer ZnO nanoislands remain planar. For the
AZnONI, the atoms at the edges and corners are displaced from their ideal
positions, and nine new Zn-Zn bonds form along the three edges, as shown in
Fig. 1(a). For the monolayer zigzag-edge ZnO nanoislands, structural
reconstructions occur only in the three corner regions. One can see from
Fig. 1(b) that in the Zn-ZZnONI structure the two Zn atoms at each corner
approach each other and form a new Zn-Zn bond, while the three corner O atoms
are pushed outward from their original sites. In the O-ZZnONI structure shown
in Fig. 1(c), the two O atoms at each corner move slightly apart, pulling the
three corner Zn atoms inward from their original sites. The reconstructions
of the BAZnONI and BZZnONI structures are more complex. Figures 1(e) and (g)
show side views of the atomic structures within the blue and red squares in
Figs. 1(d) and (f), respectively; the atoms at the corners and edges are no
longer coplanar with the atoms in the middle. The bond angles in these
distorted regions decrease below 120∘, indicating that these atoms tend toward
the wurtzite configuration.
The electronic properties of the 2D ZnO nanoislands are then studied and
found to depend critically on the edge structures. Figure 2 shows the
calculated energy levels for the five geometrically optimized ZnO nanoislands
with different edge structures. From Figs. 2(a), (d), and (e), we clearly see
large energy gaps for the AZnONI, BAZnONI, and BZZnONI structures. This is
consistent with their saturated stoichiometry, in which the ratio of Zn to O
atoms is 1:1. The Zn-ZZnONI and O-ZZnONI structures contain excess Zn and O
atoms, respectively, and are thus unsaturated. However, a spin-splitting
effect is observed only for the O-ZZnONI structure, as shown in Fig. 2(c),
and its calculated total spin ($S_{\rm tot}$) is 2. From the energy levels in
Fig. 2(c), the energy difference between the highest occupied molecular
orbitals (HOMO) of the spin-up and spin-down electrons ($E_{d}$ (HOMO)) is
only 0.13 eV, which is very small and indicates that the spin splitting is
weak. We define $E_{g}^{*}$ as the smaller of the spin-up and spin-down
energy gaps; it is 0.18 eV for the Zn-ZZnONI structure and zero for the
O-ZZnONI structure. The zero gap indicates that the O-ZZnONI structure is
metallic in its ground state, and the small gap of 0.18 eV indicates that the
Zn-ZZnONI structure is not chemically very stable. Nevertheless, the Zn- and
O-ZZnONI structures might still be fabricated by special methods, since
unpolarized hexagonal ZnO monolayers have already been produced successfully
Tusche2007 .
The deformation electron density and spin density are then analyzed to study
the origin of the magnetism in the O-ZZnONI structure. From the deformation
electron density shown in Fig. 3(a), we can see that all the O atoms gain
electrons from the Zn atoms in the nanoisland. Mulliken population analysis
further shows that the O atoms at the corners and edges gain about 0.1 $e$
more than the O atoms in the middle of the nanoisland. These extra electrons
may supply the excess electrons of one spin state. Detailed wavefunction
analysis shows that the electronic states around the Fermi energy are
contributed mainly by oxygen-dominated edge states. Figure 3(b) shows the
spin density distribution in the O-ZZnONI structure; it is clear that the
magnetic behavior arises from the oxygen edge states.
The fact that the O-ZZnONI structure is magnetic without any magnetic
impurities may help clarify the origin of magnetism in ZnO-based
nanostructures. The magnetism of the O-ZZnONI structure also suggests
possible direct applications in nanoscale spintronics. We therefore further
investigate size effects on the magnetism of O-ZZnONI structures, which we
label by the number of O atoms on each edge ($i$), using O$i$-ZZnONI to
denote the O-ZZnONI structure of a given size. By this definition, the
O-ZZnONI structure discussed above is the O$6$-ZZnONI structure.
Table I shows the calculated magnetic and electronic properties for several
O$i$-ZZnONI structures. We find that $E_{d}$ (HOMO) lies between 0 and 0.2 eV
for all of them, which is very small and indicates that the spin-splitting
effect is weak in all the O$i$-ZZnONI structures. The calculated $S_{\rm tot}$
values in Table I show clear quantum size effects: except for O$3$-ZZnONI,
larger O$i$-ZZnONI structures always have a larger total spin. Careful
wavefunction analysis shows that in the O$3$-ZZnONI structure the electrons
of the middle oxygen atoms also contribute to the magnetic states, whereas in
the other O$i$-ZZnONI structures the magnetic states come entirely from the O
edge states; this explains why O$3$-ZZnONI has a larger $S_{\rm tot}$ than the
O$4$- and O$5$-ZZnONI structures. This anomalously large total spin is thus a
consequence of the very small size of the O$3$-ZZnONI island. For most
O$i$-ZZnONI structures, larger islands have larger total spin.
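The trend just described can be checked directly from the Table I data; the short sketch below simply re-enters those values and tests the monotonicity of $S_{\rm tot}$ for $i\geq 4$.
```python
# Table I re-entered as data (values from the text) plus a simple check of the
# size trend: apart from the i = 3 island, larger islands have larger S_tot.
oi_zznoni = {  # i : (E_d(HOMO) [eV], E_g* [eV], S_tot)
    3: (0.01, 0.19, 2),
    4: (0.08, 0.06, 1),
    5: (0.17, 0.06, 1),
    6: (0.13, 0.00, 2),
    7: (0.04, 0.02, 3),
}
sizes = sorted(oi_zznoni)
monotone_from_4 = all(oi_zznoni[i][2] <= oi_zznoni[j][2]
                      for i, j in zip(sizes[1:], sizes[2:]))
print("S_tot non-decreasing for i >= 4:", monotone_from_4)
```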
In summary, we have systematically investigated the structures and magnetic
properties of 2D ZnO nanoislands. The structural reconstructions of
triangular ZnO nanoislands occur mainly around the three corners and three
edges. Among the five nanoislands with different edge structures, only the
O-ZZnONI structure is magnetic in its ground state, with its spin density
distributed mainly on the edge oxygen atoms. For O-ZZnONI structures of
different sizes, the magnetic and electronic properties depend strongly on
size: larger O$i$-ZZnONI structures generally have larger magnetic moments,
suggesting potential applications in spintronics.
This work was supported by the NSFC under grants No. 10604010 and No.
60776063.
## References
* (1) D. M. Bagnall, Y. F. Chen, Z. Zhu, T. Yao, S. Koyama, M. Y. Shen and T. Goto, Appl. Phys. Lett. 70, 2230 (1997).
* (2) X. D. Bai, P. X. Gao and Z. L. Wang, Appl. Phys. Lett. 82, 4806 (2003).
* (3) Z. L. Wang, X. Y. Kong, Y. Ding, P. Gao, W. L. Hughes, R. Yang and Y. Zhang, Adv. Funct. Mater. 14, 944 (2004).
* (4) X. Y. Kong and Z. L. Wang, Nano Lett. 3, 1625 (2003).
* (5) P. X. Gao, Y. Ding, W. J. Mai, W. L. Hughes, C. S. Lao and Z. L. Wang, Science 309, 1700 (2005).
* (6) C. Tusche, H. L. Meyerheim and J. Kirschner, Phys. Rev. Lett. 99, 026102 (2007).
* (7) C. L. Freeman, F. Claeyssens, N. L. Allan, and J. H. Harding, Phys. Rev. Lett. 96, 066102 (2006).
* (8) J. Goniakowski, C. Noguera, and L. Giordano, Phys. Rev. Lett. 98, 205701 (2007).
* (9) R. G. S. Pala and H. Metiu, J. Phys. Chem. C 111, 12715 (2007).
* (10) B. L. Wang, S. Nagase, J. J. Zhao, and G. H. Wang, Nanotechnology 18, 345706 (2007).
* (11) X. Shen, P. B. Allen, J. T. Muckerman, J. W. Davenport and J. C. Zheng, Nano Lett. 7, 2267 (2007).
* (12) A. R. B. Méndez, M. T. M. Martínez, F. L. Urías, M. Terrones and H. Terrones, Chem. Phys. Lett. 448, 258 (2007).
* (13) A. R. B. Méndez, F. L. Urías, M. Terrones and H. Terrones, Nano Lett. 8, 1562 (2007).
* (14) J. F. Rossier and J. J. Palacios, Phys. Rev. Lett. 99, 177204 (2007).
* (15) W. L. Wang, S. Meng and E. Kaxiras, Nano Lett. 8, 241 (2008).
* (16) M. Venkatesan, C. B. Fitzgerald, and J. M. D. Coey, Nature 430, 630 (2004).
* (17) H. Pan, J. B. Yi, L. Shen, R. Q. Wu, J. H. Yang, J. Y. Lin, Y. P. Feng, J. Ding, L. H. Van, and J. H. Yin, Phys. Rev. Lett. 99, 127201 (2007).
* (18) P. Dev, Y. Xue, and P. H. Zhang, Phys. Rev. Lett. 100, 117204 (2008).
* (19) B. Delley, Phys. Rev. B 66, 155125 (2002).
* (20) J. P. Perdew and Y. Wang, Phys. Rev. B 45, 13244 (1992).
Table 1: The energy difference of the HOMO ($E_{d}$ (HOMO)), the smaller of the spin-up and spin-down energy gaps ($E_{g}^{*}$), and the total spin ($S_{\rm tot}$) for O$i$-ZZnONI structures of different sizes.
$i$ | $E_{d}$ (HOMO) (eV) | $E_{g}^{*}$ (eV) | $S_{\rm tot}$
---|---|---|---
3 | 0.01 | 0.19 | 2
4 | 0.08 | 0.06 | 1
5 | 0.17 | 0.06 | 1
6 | 0.13 | 0.00 | 2
7 | 0.04 | 0.02 | 3
List of captions
Fig.1 (Color online). Atomic structures from top view for the AZnONI (a), Zn-
ZZnONI (b), O-ZZnONI (c), BAZnONI (d) and BZZnONI (f). (e) and (g) Atomic
structures from side view for the selected atoms in (d) and (f). In all the
structures, red and grey balls represent O and Zn atoms respectively.
Fig.2 (Color online). Energy levels of AZnONI (a), Zn-ZZnONI (b), O-ZZnONI
(c), BAZnONI (d) and BZZnONI (e). The left and right columns in each figure
correspond to spin-up and spin-down electronic states, respectively.
Fig.3 (Color online). Contour plot of the deformation electron density (a) and
isosurface plot of the spin density (b) for the O-ZZnONI. Red and grey balls
respectively represent O and Zn atoms. The isosurface at a value of 0.01
$e$/Å$^{3}$ is shown in blue.
Figure 1:
Figure 2:
Figure 3:
|
arxiv-papers
| 2009-06-04T02:08:58 |
2024-09-04T02:49:03.154462
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Yu Yang, Ping Zhang",
"submitter": "Yu Yang",
"url": "https://arxiv.org/abs/0906.0813"
}
|
0906.1028
|
# Spectral representation of infimum of bounded quantum observables††thanks:
This project is supported by the Natural Science Foundation of China (10771191 and
10471124).
Jun Shen1,2, Jun-De Wu1 E-mail: wjd@zju.edu.cn
###### Abstract
In 2006, Gudder introduced a logic order on bounded quantum observable set
$S(H)$. In 2007, Pulmannova and Vincekova proved that for each subset $\cal D$
of $S(H)$, the infimum of $\cal D$ exists with respect to this logic order. In
this paper, we present the spectral representation for the infimum of $\cal
D$.
1Department of Mathematics, Zhejiang University, Hangzhou 310027, P. R. China
2Department of Mathematics, Anhui Normal University, Wuhu 241003, P. R. China
Key Words. Quantum observable, infimum, spectral representation.
1\. Introduction
Let $H$ be a complex Hilbert space, $S(H)$ be the set of all bounded linear
self-adjoint operators on $H$, $S^{+}(H)$ be the set of all positive operators
in $S(H)$, $P(H)$ be the set of all projections on $H$. Each element in $P(H)$
is said to be a quantum event, each element in $S(H)$ is said to be a bounded
quantum observable on $H$. For $A\in S(H)$, $P^{A}$ denotes the spectral
measure of $A$. Let ${\mathbf{R}}$ be the set of real numbers,
$\mathcal{B}({\mathbf{R}})$ be the set of all Borel subsets of ${\mathbf{R}}$.
Let $A,B\in S(H)$. If for each $x\in H$, $\langle Ax,x\rangle\leq\langle
Bx,x\rangle$, then we say that $A\leq B$. Equivalently, there exists a $C\in
S^{+}(H)$ such that $A+C=B$. $\leq$ is a partial order on $S(H)$. The physical
meaning of $A\leq B$ is that the expectation of $A$ is not greater than the
expectation of $B$ for each state of the system. So the order $\leq$ is said
to be a numerical order on $S(H)$.
In 2006, Gudder introduced the order $\preceq$ on $S(H)$: if there exists a
$C\in S(H)$ such that $AC=0$ and $A+C=B$, then we say that $A\preceq B$ ([1]).
Equivalently, $A\preceq B$ if and only if $P^{A}(\Delta)\leq P^{B}(\Delta)$
for each $\Delta\in\mathcal{B}({\mathbf{R}})$ with $0\notin\Delta$. The
physical meaning of $A\preceq B$ is that for each such $\Delta$, the quantum
event $P^{A}(\Delta)$ implies the quantum event $P^{B}(\Delta)$. Thus, the
order $\preceq$ is said to be a logic order on $S(H)$ ([1]).
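In finite dimensions the logic order is easy to test numerically: since $C=B-A$ is automatically self-adjoint, $A\preceq B$ reduces to the single matrix identity $AB=A^{2}$. A minimal sketch, with illustrative matrices of our own choosing:
```python
# Finite-dimensional sketch of Gudder's logic order: A ⪯ B iff C = B - A is
# self-adjoint with AC = 0, i.e., for matrices, A @ B == A @ A.
import numpy as np

def logic_leq(A, B, tol=1e-10):
    """Return True if A ⪯ B in the logic order (finite-dimensional case)."""
    C = B - A
    return (np.allclose(C, C.conj().T, atol=tol)
            and np.allclose(A @ C, np.zeros_like(A), atol=tol))

# Example: A acts as B on a subspace and as 0 on its complement, so A ⪯ B.
B = np.diag([2.0, 1.0, -1.0])
A = np.diag([2.0, 0.0, 0.0])
print(logic_leq(A, B))   # True
print(logic_leq(B, A))   # False
```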
Let $\\{A_{\alpha}\\}_{\alpha}\subseteq S(H)$ be a family of bounded
self-adjoint operators on $H$. If there exists a $C\in S(H)$ such that
$C\preceq A_{\alpha}$ for each $\alpha$, and $D\preceq C$ for every $D\in
S(H)$ satisfying $D\preceq A_{\alpha}$ for each $\alpha$, then $C$ is said to
be the infimum of $\\{A_{\alpha}\\}_{\alpha}$ with respect to the logic order
$\preceq$, and we write $C=\bigwedge\limits_{\alpha}A_{\alpha}$.
If $P,Q\in P(H)$, then $P\leq Q$ if and only if $P\preceq Q$, and $P$ and $Q$
have the same infimum with respect to the orders $\leq$ and $\preceq$ ([1]).
For a given order on the set $S(H)$ of bounded quantum observables, the
infimum problem asks under what conditions the infimum $A\wedge B$ exists for
$A,B\in S(H)$ with respect to that order and, moreover, whether the structure
of $A\wedge B$ can be described explicitly.
For the numerical order $\leq$ on $S(H)$, this problem has been studied in
various contexts by Kadison, Gudder, Moreland, Ando, Du, and others ([2-6]).
In 2007, Pulmannova and Vincekova proved that for each subset $\cal D$ of
$S(H)$, the infimum exists with respect to the logic order $\preceq$. Their
proof, however, is abstract and gives no information about the structure of
the infimum ([7]).
In 2008, Liu and Wu found a representation of the infimum $A\wedge B$ for
$A,B\in S(H)$, but the representation is complicated and implicit; in
particular, the spectral representation of $A\wedge B$ remained unknown ([8]).
In this note, we present a spectral representation of the infimum
$\bigwedge\limits_{\alpha}A_{\alpha}$ for an arbitrary subset
$\\{A_{\alpha}\\}_{\alpha}$ of $S(H)$. Our approach and results are very
different from those of [8] and are much simpler and more explicit.
2\. The spectral representation theorem
Now, we present the spectral representation theorem for the infimum via the
following construction.
For each $\Delta\in\mathcal{B}({\mathbf{R}})$, if
$\Delta=\bigcup\limits^{n}_{i=1}\Delta_{i}$, where the $\\{\Delta_{i}\\}$ are
pairwise disjoint Borel subsets of ${\mathbf{R}}$, then we say that
$\gamma=\\{\Delta_{i}\\}$ is a partition of $\Delta$. We denote the set of
all partitions of $\Delta$ by $\Gamma(\Delta)$.
Let $\\{A_{\alpha}\\}_{\alpha}\subseteq S(H)$ be a family of bounded linear
self-adjoint operators on $H$.
Define $G(\emptyset)=0$. For each nonempty
$\Delta\in\mathcal{B}({\mathbf{R}})$,
(1) if $0\notin\Delta$, define
$G(\Delta)=\bigwedge\limits_{\gamma\in\Gamma(\Delta)}\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}$;
(2) if $0\in\Delta$, define $G(\Delta)=I-G({\mathbf{R}}\backslash\Delta)$.
It is clear that for each $\Delta\in\mathcal{B}({\mathbf{R}})$, $G(\Delta)$ is
a projection, so $G$ defines a map from $\mathcal{B}({\mathbf{R}})$ to $P(H)$.
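A finite-dimensional sketch may make this construction concrete. For matrices the spectra are finite, so for $\Delta$ not containing $0$ the infimum over partitions is attained on the partition of $\Delta$ into single eigenvalues, and the infimum of a family of projections is the projection onto the intersection of their ranges; the helper names below are our own.
```python
# Finite-dimensional sketch of G: spectral projections, projection infima, and
# G({lam}) for a family of self-adjoint matrices.
import numpy as np
from scipy.linalg import null_space

def spectral_projection(A, lam, tol=1e-8):
    """Projection onto the eigenspace of the self-adjoint matrix A at eigenvalue lam."""
    w, V = np.linalg.eigh(A)
    cols = V[:, np.abs(w - lam) < tol]
    return cols @ cols.conj().T

def projection_infimum(projections):
    """P_1 ∧ P_2 ∧ ... : projection onto the intersection of the ranges."""
    n = projections[0].shape[0]
    M = sum(np.eye(n) - P for P in projections)   # PSD; its kernel is the common range
    V = null_space(M)
    return V @ V.conj().T

def G_point(observables, lam):
    """G({lam}) = ⋀_alpha P^{A_alpha}({lam}) for lam != 0."""
    return projection_infimum([spectral_projection(A, lam) for A in observables])

# Example: A = diag(2,1,0), B = diag(2,0,1).  G_point([A,B], 2.0) projects onto
# span{e_1}, while G_point([A,B], 1.0) is the zero projection.
A, B = np.diag([2.0, 1.0, 0.0]), np.diag([2.0, 0.0, 1.0])
print(np.round(G_point([A, B], 2.0), 6))
```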
Theorem 1. $G:{\mathcal{B}}({\mathbf{R}})\rightarrow P(H)$ is a spectral
measure.
Proof. (1) $G(\emptyset)=0$,
$G({\mathbf{R}})=I-G({\mathbf{R}}\backslash{\mathbf{R}})=I-G(\emptyset)=I$.
(2) Suppose $\Delta_{1},\Delta_{2}\in\mathcal{B}({\mathbf{R}})$,
$\Delta_{1}\cap\Delta_{2}=\emptyset$, $\Delta_{3}=\Delta_{1}\cup\Delta_{2}$.
There are two possibilities.
(i) $0\notin\Delta_{3}$.
Since $G(\Delta_{1})\leq P^{A_{\alpha}}(\Delta_{1})$, $G(\Delta_{2})\leq
P^{A_{\alpha}}(\Delta_{2})$, and
$P^{A_{\alpha}}(\Delta_{1})P^{A_{\alpha}}(\Delta_{2})=0$ for every $\alpha$,
we have $G(\Delta_{1})G(\Delta_{2})=0$. Now we prove that
$G(\Delta_{3})=G(\Delta_{1})+G(\Delta_{2})$.
Let $\gamma\in\Gamma(\Delta_{3})$. It is easy to see that for each
$\Delta^{\prime}\in\gamma$,
$\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\geq\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}\cap\Delta_{1})+\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}\cap\Delta_{2})\
.$
Let $\gamma_{1}=\\{\Delta^{\prime}\cap\Delta_{1}|\
\Delta^{\prime}\in\gamma\\}$, $\gamma_{2}=\\{\Delta^{\prime}\cap\Delta_{2}|\
\Delta^{\prime}\in\gamma\\}$. Then $\gamma_{1}\in\Gamma(\Delta_{1})$,
$\gamma_{2}\in\Gamma(\Delta_{2})$, thus
$\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}\geq\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}\cap\Delta_{1})\Big{)}+\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}\cap\Delta_{2})\Big{)}\geq
G(\Delta_{1})+G(\Delta_{2})\ .$
We conclude that $G(\Delta_{3})\geq G(\Delta_{1})+G(\Delta_{2})\ .$
Now we prove the converse inequality.
Let $x\in H$ be such that $G(\Delta_{3})x=x$. Then for each
$\gamma\in\Gamma(\Delta_{3})$,
$\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}x=x$.
Let $\gamma_{1}\in\Gamma(\Delta_{1})$, $\gamma_{2}\in\Gamma(\Delta_{2})$.
Then $\gamma_{3}=\gamma_{1}\cup\gamma_{2}\in\Gamma(\Delta_{3})$. Thus
$\sum\limits_{\Delta^{\prime}\in\gamma_{3}}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}x=x$.
That is,
$x=\sum\limits_{\Delta^{\prime}_{1}\in\gamma_{1}}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}_{1})\Big{)}x+\sum\limits_{\Delta^{\prime}_{2}\in\gamma_{2}}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}_{2})\Big{)}x\
.$
Thus
$\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta_{1})\Big{)}x=\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta_{1})\Big{)}(\sum\limits_{\Delta^{\prime}_{1}\in\gamma_{1}}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}_{1})\Big{)}x+\sum\limits_{\Delta^{\prime}_{2}\in\gamma_{2}}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}_{2})\Big{)}x\
)$
$=\sum\limits_{\Delta^{\prime}_{1}\in\gamma_{1}}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}_{1})\Big{)}x,$
and
$\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta_{2})\Big{)}x=\sum\limits_{\Delta^{\prime}_{2}\in\gamma_{2}}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime}_{2})\Big{)}x.$
We conclude that
$\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta_{1})\Big{)}x=G(\Delta_{1})x,\
\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta_{2})\Big{)}x=G(\Delta_{2})x\
.$
So $x=G(\Delta_{1})x+G(\Delta_{2})x$. Thus we have $G(\Delta_{3})\leq
G(\Delta_{1})+G(\Delta_{2})\ .$ From the above we have
$G(\Delta_{3})=G(\Delta_{1})+G(\Delta_{2})\ .$
(ii) If $0\in\Delta_{3}$, we may suppose $0\in\Delta_{1}$ and $0\notin\Delta_{2}$.
By (i) we have
$G(\Delta_{2})+G({\mathbf{R}}\backslash\Delta_{3})=G({\mathbf{R}}\backslash\Delta_{1})$.
Since $G(\Delta_{1})=I-G({\mathbf{R}}\backslash\Delta_{1})$ and
$G(\Delta_{2})\leq G({\mathbf{R}}\backslash\Delta_{1})$, we have
$G(\Delta_{1})G(\Delta_{2})=0$.
$G(\Delta_{3})=I-G({\mathbf{R}}\backslash\Delta_{3})=I-\Big{(}G({\mathbf{R}}\backslash\Delta_{1})-G(\Delta_{2})\Big{)}=I-G({\mathbf{R}}\backslash\Delta_{1})+G(\Delta_{2})=G(\Delta_{1})+G(\Delta_{2})\
.$
(3) Let $\\{\Delta_{n}\\}_{n=1}^{\infty}\subseteq\mathcal{B}({\mathbf{R}})$ be
pairwise disjoint; we now prove that
$G(\bigcup\limits_{n=1}^{\infty}\Delta_{n})=\sum\limits_{n=1}^{\infty}G(\Delta_{n})$
in the strong operator topology. By (2), we only need to show that
$G(\bigcup\limits_{i=n+1}^{\infty}\Delta_{i})\longrightarrow 0$ in the strong
operator topology.
Since $\\{\Delta_{n}\\}_{n=1}^{\infty}$ are pairwise disjoint, there exists
some positive integer $N$ such that
$0\notin\bigcup\limits_{i=N+1}^{\infty}\Delta_{i}$. Thus for $n\geq N$,
$G(\bigcup\limits_{i=n+1}^{\infty}\Delta_{i})=\bigwedge\limits_{\gamma\in\Gamma(\bigcup\limits_{i=n+1}^{\infty}\Delta_{i})}\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}\leq\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\bigcup\limits_{i=n+1}^{\infty}\Delta_{i})\longrightarrow
0\ .$
By (1), (2), and (3), the proof is complete.
Theorem 2. $\bigwedge\limits_{\alpha}A_{\alpha}=\int_{\mathbf{R}}\lambda dG$.
Proof. Let $C=\int_{\mathbf{R}}\lambda dG$. Since for each $\alpha$,
$G\Big{(}(-\infty,-\|A_{\alpha}\|)\cup(\|A_{\alpha}\|,+\infty)\Big{)}\leq
P^{A_{\alpha}}\Big{(}(-\infty,-\|A_{\alpha}\|)\cup(\|A_{\alpha}\|,+\infty)\Big{)}=0\
,$
$C$ is bounded and $C\in S(H)$; in particular, $P^{C}=G$.
For each $\Delta\in\mathcal{B}({\mathbf{R}})$, if $0\notin\Delta$, we have
$P^{C}(\Delta)=G(\Delta)=\bigwedge\limits_{\gamma\in\Gamma(\Delta)}\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}\leq\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta)\
.$
Thus $C\preceq A_{\alpha}$ for each $\alpha$ ([1]).
If $D\preceq A_{\alpha}$ for each $\alpha$, then for each
$\Delta\in\mathcal{B}({\mathbf{R}})$ with $0\notin\Delta$ and each
$\gamma\in\Gamma(\Delta)$, since
$P^{D}(\Delta^{\prime})\leq\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})$
for each $\Delta^{\prime}\in\gamma$, we have
$P^{D}(\Delta)=\sum\limits_{\Delta^{\prime}\in\gamma}P^{D}(\Delta^{\prime})\leq\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}\
.$
So we conclude that
$P^{D}(\Delta)\leq\bigwedge\limits_{\gamma\in\Gamma(\Delta)}\sum\limits_{\Delta^{\prime}\in\gamma}\Big{(}\bigwedge\limits_{\alpha}P^{A_{\alpha}}(\Delta^{\prime})\Big{)}=G(\Delta)=P^{C}(\Delta).$
Thus $D\preceq C$, so we have $\bigwedge\limits_{\alpha}A_{\alpha}=C$.
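As a self-contained toy illustration of Theorem 2 for matrices: when the nonzero spectral data shared by the observables can be read off by hand, so can $\bigwedge_{\alpha}A_{\alpha}=\int\lambda dG$. The matrices below are our own assumptions, and the logic-order test repeats the one sketched in Section 1.
```python
# Toy check of Theorem 2.  For A = diag(2,1,0) and B = diag(2,0,1) the only
# shared nonzero spectral data is the eigenvalue 2 on span{e_1}, so the
# theorem predicts A ∧ B = diag(2, 0, 0).
import numpy as np

A = np.diag([2.0, 1.0, 0.0])
B = np.diag([2.0, 0.0, 1.0])
C = np.diag([2.0, 0.0, 0.0])          # candidate infimum from Theorem 2

def logic_leq(X, Y, tol=1e-12):
    """X ⪯ Y in the logic order: X(Y - X) = 0 for self-adjoint matrices."""
    return np.allclose(X @ (Y - X), 0.0, atol=tol)

print(logic_leq(C, A), logic_leq(C, B))   # True True: C ⪯ A and C ⪯ B
# Any D ⪯ A, B has its nonzero spectral projections below those of both A and
# B, hence below the ones defining C, so C is indeed the infimum.
```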
References
[1]. Gudder, S. An order for quantum observables. _Math. Slovaca_ 56: 573-589 (2006)
[2]. Kadison, R. Order properties of bounded self-adjoint operators. _Proc. Amer. Math. Soc._ 34: 505-510 (1951)
[3]. Moreland, T., Gudder, S. Infima of Hilbert space effects. _Linear Algebra and Its Applications_ 286: 1-17 (1999)
[4]. Gudder, S. Lattice properties of quantum effects. _J. Math. Phys._ 37: 2637-2642 (1996)
[5]. Ando, T. Problem of infimum in the positive cone. _Analytic and Geometric Inequalities and Applications_ 478: 1-12 (1999)
[6]. Du Hongke, Deng Chunyuan, Li Qihui. On the infimum problem of Hilbert space effects. _Science in China: Series A Mathematics_ 49: 545-556 (2006)
[7]. Pulmannova, S., Vincekova, E. Remarks on the order for quantum observables. _Math. Slovaca_ 57: 589-600 (2007)
[8]. Liu Weihua, Wu Junde. A representation theorem of infimum of bounded quantum observables. _J. Math. Phys._ 49: 073521 (2008)
|
arxiv-papers
| 2009-06-05T02:03:19 |
2024-09-04T02:49:03.167193
|
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/3.0/",
"authors": "Shen Jun and Wu Junde",
"submitter": "Junde Wu",
"url": "https://arxiv.org/abs/0906.1028"
}
|